It’s hard to explain the mix of emotions sparked by seeing a photo of Frederick Douglass come alive with the click of a button. And yet, there he is, blinking and nodding as if he were just alive yesterday, as if he hadn’t died in 1895, years before film recording became commonplace.
His animated image and others like it – at once unsettling, emotional, and a bit fantastical – are made possible by Deep Nostalgia, an artificial intelligence program from the genealogy platform MyHeritage.
As far as AI-animated images go, the technology behind these Harry Potter-esque photos isn’t particularly complex.
Users are invited to supply old photos of their loved ones, and the program uses deep learning to apply predetermined movements to their facial features. It also fills in small details that aren’t in the original photo, like the reveal of teeth or the side of a head. Together this creates, if not an entirely natural effect, then a deeply arresting one.
Responses to the Deep Nostalgia images – tears at seeing a grandmother’s smile, an eerie feeling of connection to a long-dead historical icon – knock on a mysterious emotional wall between us and this type of rapidly evolving technology.
We rely on perception and emotion
“The draw here is that visual imagery is visceral and compelling and we respond to it,” says Hany Farid, associate dean and head of the School of Information at UC Berkeley. “We are visual beings. When you see your grandmother or Mark Twain come alive, there’s something fascinating about it.”
Fascinating – and yes, a little frightening.
Our brains, as sophisticated as they are, have a prehistoric response to things that are almost human, but not quite. This is commonly called the uncanny valley, and a lot of deepfakes and AI-driven image manipulations set off this ancient alarm bell. Even MyHeritage addresses this reaction in their explanation of the program.
“Indeed, the results can be controversial and it’s hard to stay indifferent to this technology,” their FAQ page reads.
When it’s a beloved relative occupying that almost-but-not-quite space in reality, the parts of our brain that love and fear are pitted against each other, even if we know full well that what we’re looking at isn’t real.
“The way our brain processes images of people is different than inanimate objects. It taps into neural circuitry,” Farid says. “For years we have been able to synthesize inanimate objects, and that completely fools the visual system because we don’t have preconceived notions of how they move. But when it comes