In February last year, the world balked as the media reported that a South Korean broadcaster had used virtual reality technology to “reunite” a grieving mother with the 7-year-old child she lost in 2016.

As part of a documentary entitled I Met You, Jang Ji-sung was confronted with an animated, lifelike vision of her daughter Na-yeon, who appeared playing in a neighborhood park in her favorite dress. It was an emotionally charged scene, with the avatar asking the tearful woman, “Mom, where have you been? Have you been thinking of me?”
“Always,” the mother replied.
Remarkably, the documentary’s makers saw this scene as “heartwarming,” but many felt that something was badly wrong. Ethicists such as Dr. Blay Whitby of the University of Sussex cautioned the media: “We just don’t know the psychological effects of being ‘reunited’ with someone in this way.”
Indeed, this was uncharted territory.
Under different circumstances, tech-driven reanimation can feel comparatively harmless: think of the recreation of Dr. Martin Luther King Jr. for a VR museum exhibit about the 1963 March on Washington, the revival of actor Peter Cushing to reprise his role in Star Wars, or Tupac’s surprise appearance at Coachella in 2012, 15 years after he passed.
So, when is it okay to raise the dead?
This question bubbled to the surface again this week, as an AI-generated deepfake of celebrated Spanish singer Lola Flores took center stage as the face of a new ad campaign by Sevillian beer company Cruzcampo.
Flores died way back in 1995, and her family — specifically her daughters — have been involved with the project. Nevertheless, the campaign has received fierce criticism from those who believe it cheapens her iconic image, putting words into her mouth.
Words geared towards increasing beer sales for a commercial firm she never endorsed in life.
One Twitter user remarked:
Sorry to always be a bit of a Grinch, but Cruzcampo’s Lola Flores ad seems WRONG to me. As good as the script is, leave dead people alone and do not put words in their mouths that they never said. Much less for commercial purposes.
It’s convenient to believe that the consent of living family members is enough to give this kind of gimmick validity (and most of the viral attention it has received has been positive), but it is important to ask at this nascent stage: should your descendants be able to assent to your deepfake resurrection?
If the answer is “yes”, then does that also mean that it’s permissible for them to have your reanimated image puppeteered in a way that makes you say and do things that you never actually said or did? Perhaps things you would never have said or done…?

This isn’t the first time that the morality of deepfaking the deceased has come into question. Back in 2019, the long-dead James Dean was “cast” in a new role for the film Finding Jack. It was reportedly the first time a dead actor had been set to star in an entirely new role, but some Hollywood figures, including actor Chris Evans, reacted strongly against the decision on the grounds that Dean had never consented to the role.
Indeed, deepfaking the dead seemingly becomes divisive when footage and images are manipulated to do something that the individual never did and may never have approved of. Unlike the Dr. King example, Dean, Flores, and (possibly) young Na-yeon have had their actions, voices, and mannerisms — the things that compose their full identity as perceived by the outside world — convincingly spoofed, but the actual content of their communication is scripted by clever-clever television, movie, and advertising executives.
The question is, does this constitute a harm, or a case of “dishonoring the dead”? The answer may depend on whether we agree that the dead can be subject to harm in the first place. Philosopher Steven Luper has written extensively on what is termed the posthumous harm thesis, explaining:
“Even after our lives are over, it seems that we have a stake in what happens in the world: posthumous events can advance (and others can impede) the projects we undertook whilst alive, or our directives concerning what will be done with our property after we are dead. If this view is correct…events occurring after we die can harm us.”
Steven Luper in The Philosophy of Death.
It seems fairly uncontroversial to assert that our identities and associated projects persist beyond our bodily death. This identity is our own self-determined composition, and the ultimate (and only truthful) trademark of who we are. Therefore, for others to warp, manipulate, and supplement it with inauthentic sentiment or action does seem to inflict damage. This damage — a dilution of the truth — is what critics are responding to.
Now, as we begin to figure out what is right and wrong, acceptable and unacceptable in this strange new world, we should undoubtedly be considering whether we’re content to be reimagined as a scripted bot, avatar, or deepfake after our death.
Do we relish the idea of attending family celebrations and holidays sans consciousness? What if the performance of our virtual marionette were lucrative for our descendants (as with Flores)?
If the answer is a resounding “no!” — as it will be for many — then we need to make it plain how people can protect themselves from this watering down of who or what they once were, lest relatives and friends in years to come be tempted to extract amusement or money from their pliant deepfake forms.