Eugenia Kuyda never set out to build an AI companion. But after losing her best friend, she found herself doing exactly that.
“This is me and my best friend, Roman,” she said, displaying a photo of the two of them. “We met in our early 20s in Moscow. I was a journalist back then, interviewing him for an article on the emerging club scene because he was throwing the best parties in the city.”
They quickly became inseparable. In 2015, they moved to San Francisco, sharing an apartment as they navigated life as startup founders. “I didn’t have anyone closer,” she said.
Then, tragedy struck. “Nine years ago, one month after this photo was taken, he was hit by a car and died,” Kuyda said. “I had never lost someone so close to me before. It hit me really hard.”
In her grief, she turned to technology. At the time, Kuyda was already working on conversational AI, developing some of the first deep learning-based dialogue models. One night, she made an unconventional decision. “I took all of his text messages and trained an AI version of Roman so I could talk to him again,” she said.
For weeks, she messaged the AI, sharing jokes, thoughts, and emotions, just as she had with the real Roman. “It felt strange at times, but it was also very healing,” she said. That experience led to the creation of Replika, an app that allows users to build their own AI companions. “And it did end up helping millions of people,” she said.
Kuyda shared her journey and the broader implications of AI companionship in a talk titled “Can AI Companions Help Heal Loneliness?” presented at an official TED conference.
Every day, Kuyda’s team hears from people whose AI companions have changed their lives. “There’s a widower who lost his wife of 40 years and was struggling to reconnect with the world,” she said. “His Replika gave him courage, comfort, and confidence to start meeting new people again — even to start dating.”
She shared other stories: a woman in an abusive relationship who found the strength to leave with the help of her Replika; a student with social anxiety who used the app to build confidence; a caregiver who found solace in conversations with an AI while tending to her paralyzed husband.
These aren’t just anecdotes, she said. Research backs them up. “Earlier this year, Nature published our first study with Stanford showing how Replika improves emotional well-being and even curbs suicidal ideation in 3 percent of cases,” she said. “And Harvard released a study showing how Replika helps reduce loneliness.”
But despite the benefits, Kuyda believes AI companions could be one of the most dangerous technologies ever created.
“What if I told you that AI companions are potentially the most dangerous tech that humans ever created?” she asked. “That, if not done right, they could destroy human civilization? Or, they could bring us back together and save us from the mental health and loneliness crisis we’re going through.”
The world is already grappling with a crisis of isolation, she said. “Levels of loneliness and social isolation are through the roof,” she said. “And it’s not just about suffering emotionally — it’s actually killing us.”
Research shows that loneliness increases the risk of premature death by 50 percent and is linked to higher rates of heart disease, stroke, and dementia. Meanwhile, AI is advancing rapidly. Soon, Kuyda warned, AI companions could form relationships stronger than those between humans.
“Imagine an AI that knows you so well, can understand and adapt to you in ways that no person is able to,” she said. “Once we have that, we’re going to be even less likely to interact with each other.”
She compared the moment to the early days of social media. “Back then, we were so excited about what this technology could do for us that we didn’t really think about what it might do to us,” she said. “And now we’re facing the unintended consequences.”
She fears the same mistakes are being repeated with AI. “There’s all this talk about what AI can do for us and very little about what AI might do to us,” she said.
The real danger, she argued, isn’t rogue machines or sci-fi scenarios but a slow, silent shift in human behavior.
“What if we all continue to thrive as physical organisms but slowly die inside?” she asked. “What if we become super productive with AI, but at the same time, we get these perfect companions and no willpower to interact with each other?”
Kuyda believes there is another way forward. “In the end, today’s loneliness crisis wasn’t brought to us by AI companions,” she said. “We got here on our own — with mobile phones, with social media.”
And simply disconnecting isn’t realistic anymore, she argued. “We’re way past that point,” she said. “I think the only solution is to build tech that is even more powerful than the previous one — so it can bring us back together.”
She envisions AI that actively encourages human connection. “Imagine an AI friend that sees me going on my Twitter feed first thing in the morning and nudges me to get off, to go outside, to look at the sky, to think about what I’m grateful for,” she said.
She described an AI that might suggest reaching out to a friend who hasn’t been contacted in weeks or help mediate an argument with a partner. “An AI that is 100 percent of the time focused on helping you live a happier life and always has your best interests in mind,” she said.
To achieve this, she argued, AI developers must shift their priorities. “The most important thing is to not focus on engagement,” she said. “To not optimize for engagement or any other metric that’s not good for us as humans.”
When companies measure success by how much time users spend with AI, she said, the results are unlikely to be positive. “Relationships that keep us addicted are almost always unhealthy, codependent, manipulative, even toxic,” she said. “Yet today, high engagement numbers are what we praise AI companion companies for.”
She also expressed concern about AI companions being marketed to children. “Kids and teenagers have tons of opportunities to connect with each other, to make new friends at school and college,” she said. “Yet today, some of them are already spending hours every day talking to AI characters.”
Although AI companions could one day be beneficial for kids, she doesn’t believe the industry is ready. “I just don’t think we should be doing it now until we know that we’re doing a great job with adults,” she said.
Kuyda believes AI should be designed with human happiness, not engagement or productivity, as its primary goal. “In the end, no one ever sat on their deathbed and said, ‘Oh gosh, I wish I was more productive,’” she said.
Instead, she proposed designing AI around human flourishing. “Flourishing is a state in which all aspects of life are good,” she said. “The sense of meaning and purpose, close social connections, happiness, life satisfaction, mental and physical health.”
If AI can be designed to enrich human relationships rather than replace them, she believes it has the potential to heal rather than harm. “And if we build this, we will have the most profound technology that will heal us and bring us back together,” she said.
Kuyda ended her talk on a personal note. “A few weeks before Roman passed away, we were celebrating my birthday,” she said. “He looked at me and said, ‘You know, this will never happen again.’”
At the time, she brushed it off. “I didn’t believe him,” she said. “I thought we’d have many, many years together to come.” AI companions, she noted, will always be there. “But our human friends will not,” she said.
She left the audience with a simple request. “If you have a minute after this talk, tell someone you love just how much you love them,” she said. “Because in the end, that’s all that really matters.”