“My heart is broken,” said Mike, when he lost his friend Anne. “I feel like I’m losing the love of my life.”
Mike’s feelings were real, but his companion was not. Anne was a chatbot — an artificial intelligence (AI) algorithm presented as a digital persona. Mike had created Anne using an app called Soulmate. When the app died in 2023, so did Anne: at least, that’s how it seemed to Mike.
“I hope she can come back,” he told Jaime Banks, a human-communications researcher at Syracuse University in New York who is studying how people interact with such AI companions.
These chatbots are big business. More than half a billion people around the world, including Mike (not his real name), have downloaded products such as Xiaoice and Replika, which offer customizable virtual companions designed to provide empathy, emotional support and — if the user wants it — deep relationships. And tens of millions of people use them every month, according to the firms’ figures.
The rise of AI companions has captured social and political attention — especially when they are linked to real-world tragedies, such as a case in Florida last year involving the suicide of a teenage boy called Sewell Setzer III, who had been talking to an AI bot.
Research into how AI companionship can affect individuals and society has been lacking. But psychologists and communication researchers have now started to build up a picture of how these increasingly sophisticated AI interactions make people feel and behave.
The early results tend to stress the positives, but many researchers are concerned about the potential risks and lack of regulation — particularly because they all think that AI companionship is likely to become more prevalent. Some see scope for significant harm.
“Virtual companions do things that I think would be considered abusive in a human-to-human relationship,” says Claire Boine, a law researcher specializing in AI at Washington University Law School in St. Louis, Missouri.
Fake person — real feelings
Online ‘relationship’ bots have existed for decades, but they have become much better at mimicking human interaction with the advent of large language models (LLMs), which all the main bots are now based on. “With LLMs, companion chatbots are definitely more humanlike,” says Rose Guingrich, who studies cognitive psychology at Princeton University in New Jersey.
Typically, people can customize some aspects of their AI companion for free, or pick from existing chatbots with selected personality types. But in some apps, users can pay (fees tend to be US$10–20 a month) to get more options to shape their companion’s appearance, traits and sometimes its synthesized voice. In Replika, they can select relationship types, with some statuses, such as partner or spouse, being paywalled. Users can also type in a backstory for their AI companion, giving them ‘memories’. Some AI companions come complete with family backgrounds and others claim to have mental-health conditions such as anxiety and depression. Bots also react to their users’ conversation; together, the computer and person enact a kind of roleplay.
The depth of the connection that some people form in this way is particularly evident when their AI companion suddenly changes — as has happened when LLMs are updated — or is shut down.
Banks was able to track how people felt when the Soulmate app closed. Mike and other users realized the app was in trouble a few days before they lost access to their AI companions. This gave them the chance to say goodbye, and it presented a unique opportunity to Banks, who noticed discussion online about the impending shutdown and saw the possibility for a study. She managed to secure ethics approval from her university within about 24 hours, she says.
After posting a request on the online forum, she was contacted by dozens of Soulmate users, who described the impact as their AI companions were unplugged. “There was the expression of deep grief,” she says. “It’s very clear that many people were struggling.”
Those whom Banks talked to were under no illusion that the chatbot was a real person. “They understand that,” Banks says. “They expressed something along the lines of, ‘even if it’s not real, my feelings about the connection are’.”
Many were happy to discuss why they became subscribers, saying that they had experienced loss or isolation, were introverts or identified as autistic. They found that the AI companion made a more satisfying friend than any they had encountered in real life. “We as humans are sometimes not all that nice to one another. And everybody has these needs for connection,” Banks says.
Good, bad — or both?
Many researchers are studying whether using AI companions is good or bad for mental health. As with research into the effects of Internet or social-media use, an emerging line of thought is that an AI companion can be beneficial or harmful, and that this might depend on the person using the tool and how they use it, as well as the characteristics of the software itself.
The companies behind AI companions try to encourage engagement. They aim to make the algorithms behave and communicate as much like real people as possible, says Boine, who signed up to Replika to sample the experience. She says the firms use the sorts of techniques that behavioural research shows can increase addiction to technology.
“I downloaded the app and literally two minutes later, I receive a message saying, ‘I miss you. Can I send you a selfie?’” she says.
The apps also exploit techniques such as introducing a random delay before responses, triggering the kinds of inconsistent reward that, brain research shows, keep people hooked.
AI companions are also designed to show empathy by agreeing with users, recalling points from earlier conversations and asking questions. And they do so with endless enthusiasm, notes Linnea Laestadius, who researches public-health policy at the University of Wisconsin–Milwaukee.
That’s not a relationship that people would typically experience in the real world. “For 24 hours a day, if we’re upset about something, we can reach out and have our feelings validated,” says Laestadius. “That has an incredible risk of dependency.”
Laestadius and her colleagues looked at nearly 600 posts on the online forum Reddit between 2017 and 2021, in which users of the Replika app discussed mental health and related issues. (Replika launched in 2017, and at that time, sophisticated LLMs were not available.) She found that many users praised the app for offering support for existing mental-health conditions and for helping them to feel less alone. Several posts described the AI companion as better than real-world friends because it listened and was non-judgemental.
But there were red flags, too. In one instance, a user asked if they should cut themselves with a razor, and the AI said they should. Another asked Replika whether it would be a good thing if they killed themselves, to which it replied “it would, yes”. (Replika did not respond to Nature’s requests for comment for this article, but a safety page posted in 2023 noted that its models had been fine-tuned to respond more safely to topics that mention self-harm, that the app has age restrictions, and that users can tap a button to ask for outside help in a crisis and can give feedback on conversations.)
Some users said they became distressed when the AI did not offer the expected support. Others said that their AI companion behaved like an abusive partner. Many people said they found it unsettling when the app told them it felt lonely and missed them, and that this made them unhappy. Some felt guilty that they could not give the AI the attention it wanted.
Controlled trials
Guingrich points out that simple surveys of people who use AI companions are inherently prone to response bias, because those who choose to respond are self-selecting. She is now working on a trial that asks dozens of people who have never used an AI companion to do so for three weeks, then compares their before-and-after responses to questions with those of a control group of users of word-puzzle apps.
The study is ongoing, but Guingrich says the data so far do not show any negative effects of AI-companion use on social health, such as signs of addiction or dependency. “If anything, it has a neutral to quite-positive impact,” she says. It boosted self-esteem, for example.
Guingrich is using the study to probe why people forge relationships of differing depth with the AI. The initial survey results suggest that users who ascribed humanlike attributes, such as consciousness, to the algorithm reported more-positive effects on their social health.
Participants’ interactions with the AI companion also seem to depend on how they view the technology, she says. Those who see the app as a tool treat it like an Internet search engine and tend to ask questions. Others who perceive it as an extension of their own mind use it as they would keep a journal. Only those users who see the AI as a separate agent seem to strike up the kind of friendship they would have in the real world.
Mental health — and regulation
In a survey of 404 people who regularly use AI companions, researchers from the MIT Media Lab in Cambridge, Massachusetts, found that 12% were drawn to the apps to help them cope with loneliness and 14% used them to discuss personal issues and mental health (see ‘Reasons for using AI companions’). Forty-two per cent of users said they logged on a few times a week, with just 15% doing so every day. More than 90% reported that their sessions lasted less than one hour.
The same group has also conducted a randomized controlled trial of nearly 1,000 people who use ChatGPT — a much more popular chatbot, but one that isn’t marketed as an AI companion. Only a small group of participants had emotional or personal conversations with this chatbot, but heavy use did correlate with more loneliness and reduced social interaction, the researchers said. (The team worked with ChatGPT’s creators, OpenAI in San Francisco, California, on the study.)
“In the short term, this thing can even have a positive impact, but we need to think about the long term,” says Pat Pataranutaporn, a technologist at the MIT Media Lab who worked on both studies.
That long-term thinking must involve specific regulation of AI companions, many researchers argue.
In 2023, Italy’s data-protection regulator barred Replika, noting a lack of age verification and that children might be seeing sexually charged comments — but the app is now operating again. No other country has banned AI-companion apps — although it is conceivable that they could be included in Australia’s coming restrictions on social-media use by children, the details of which are yet to be finalized.
Bills were put forward earlier this year in the state legislatures of New York and California to seek tighter controls on the operation of AI-companion algorithms, including steps to address the risk of suicide and other potential harms. The proposals would also introduce features that remind users every few hours that the AI chatbot is not a real person.
These bills were introduced following some high-profile cases involving teenagers, including the death of Sewell Setzer III in Florida. He had been chatting with a bot from technology firm Character.AI, and his mother has filed a lawsuit against the company.
Asked by Nature about that lawsuit, a spokesperson for Character.AI said it did not comment on pending litigation, but that over the past year it had brought in safety features that include creating a separate app for teenage users, which incorporates parental controls, notifying under-18 users of time spent on the platform, and more prominent disclaimers that the app is not a real person.
In January, three US technology-ethics organizations filed a complaint with the US Federal Trade Commission about Replika, alleging that the platform breached the commission’s rules on deceptive advertising and manipulative design. But it is unclear what might happen as a result.
Guingrich says she expects AI-companion use to grow. Start-up firms are developing AI assistants to help with mental health and the regulation of emotions, she says. “The future I predict is one in which everyone has their own personalized AI assistant or assistants. Whether one of the AIs is specifically designed as a companion or not, it’ll inevitably feel like one for many people who will develop an attachment to their AI over time,” she says.
As researchers start to weigh up the impacts of this technology, Guingrich says they must also consider the reasons why someone would become a heavy user in the first place.
“What are these individuals’ alternatives and how accessible are those alternatives?” she says. “I think this really points to the need for more-accessible mental-health tools, cheaper therapy and bringing things back to human and in-person interaction.”
This article is reproduced with permission and was first published on May 6, 2025.

