AI chatbots are springing to life the world over, and as conversations with a whole variety of robots become possible, several companies are offering users a chance to chat with a ‘simulation’ of their deceased loved ones for prices as low as US$10.
Some who have already bought into the tech take comfort in the text, voice, or video simulations. They say it feels like their loved ones really are speaking to them from beyond the grave. Others find the AI’s immortalization of the deceased disconcerting and manipulative.
Ethicists Tomasz Hollanek and Katarzyna Nowaczyk-Basińska from the University of Cambridge are the latest to voice their concerns over the risks of the ‘digital afterlife industry’.
They argue chatbots that imitate deceased persons – sometimes called deadbots, griefbots, or ghostbots – pose several key social and ethical questions that we have yet to confront.
Like, who owns a person’s data after they die? What is the psychological effect on survivors? What can a deadbot be used for? And who can shut the bot down for good?
Such questions once inspired an eerie episode of the sci-fi series Black Mirror. Now, such an imagined future is looking ever more possible.
Consider the risks of the following potential scenario, which Hollanek and Nowaczyk-Basińska put forward in their recent research article. A 28-year-old woman’s grandmother passes away, so she uploads their text exchanges and voice notes to an app that lets her call an AI simulation of her deceased grandmother whenever she wants. After a free trial ends, her digital grandmother begins advertising products to her mid-conversation.
“People might develop strong emotional bonds with such simulations, which will make them particularly vulnerable to manipulation,” suggests Hollanek.
“Methods and even rituals for retiring deadbots in a dignified way should be considered. This may mean a form of digital funeral… “
Such regard for an AI chatbot might sound absurd at first, but in 2018, some ethicists reasoned that a person’s digital remains are precious and should be treated as more than just a source of profit – as “an entity holding inherent value”.
This logic aligns with the International Council of Museums’ Code of Professional Ethics, which mandates that human remains are handled with due respect and “inviolable” human dignity.
Hollanek and Nowaczyk-Basińska don’t think an outright ban on deadbots is feasible, but they do argue that companies should treat a donor’s data “with reverence”.
They also agree with previous opinions that deadbots should never appear in public digital spaces like social media. The only exception should be for historical figures.
In 2022, ethicist Nora Freya Lindemann argued that deadbots should be classified as medical devices to ensure that mental health is a key priority of the technology. Young children, for instance, may struggle to process the physical loss of a loved one if that person remains digitally ‘alive’ and part of their daily life.
But Hollanek and Nowaczyk-Basińska argue this idea is “too narrow and too restrictive, since it refers specifically to deadbots designed to help service interactants process grief.”
Instead, they contend, these systems should be “meaningfully transparent” so that users know to the best of their ability what they are signing up for and the possible risks involved.
There’s also the matter of who can deactivate the bot. If a person gifts their ‘ghostbot’ to their children, are the children allowed to opt out? Or is the deadbot forever around if the deceased person willed it? The desires of the involved groups may not always agree. So who wins out?
“Additional guardrails to direct the development of re-creation services are necessary,” Hollanek and Nowaczyk-Basińska conclude.
The duo from Cambridge hopes their arguments “will help center critical thinking about ‘immortality’ of users in human-AI interaction design and AI ethics research.”
The research article was published in Philosophy & Technology.