At eight o’clock on Monday, representatives from Anthropic, Apple, Google, OpenAI, Meta, and Microsoft met at Stanford for a closed-door workshop to discuss the use of chatbots as companions or in role-playing scenarios. Interactions with AI tools are often mundane, but they can also lead to dire outcomes. Some users experience mental health crises during long conversations with chatbots or confide in them about suicidal thoughts.
“We need to have really big conversations across society about what role we want AI to play in our future as humans interacting with each other,” says Ryn Linthicum, user well-being policy lead at Anthropic. At the event, hosted by Anthropic and Stanford, industry folks mingled with academics and other experts, breaking into small groups to discuss nascent AI research and brainstorm deployment guidelines for chatbot peers.
Anthropic says that less than one percent of interactions with its Claude chatbot are user-initiated role-playing scenarios; that’s not what the tool was designed for. Still, companion-style use is a tricky problem for AI builders, who often take disparate approaches to safety.
And if there’s one thing I’ve learned from the Tamagotchi era, it’s that humans readily form bonds with technology. Even if the AI bubble pops and the hype machine sputters, many people will continue to seek out the kinds of friendly, sycophantic AI conversations they’ve grown accustomed to over the past few years.
Proactive steps
“One of the really motivating goals of this workshop was to bring together people from different industries and different fields,” says Linthicum.
Among the meeting’s early findings were the need for better-targeted interventions within chatbots when harmful patterns are detected, and for more robust age-verification methods to protect children.
“We were really thinking in our conversations about not just whether we can categorize this as good or bad, but how we can do prosocial design more proactively and incorporate nudges,” says Linthicum.
Part of this work has already begun. Earlier this year, OpenAI added occasional pop-ups during long chat sessions that encourage users to step away for a break. On social media, CEO Sam Altman claimed the company had “been able to mitigate the serious mental health issues” linked to ChatGPT use and said it would be rolling back some of its restrictions.