I. The Founder
Sol Kennedy used to ask his assistant to read the messages his ex-wife sent him. After the couple separated in 2020, Kennedy says, he found their communications “difficult.” An email would arrive, or a stream of them, logistics about their two children mixed with unrelated emotional entanglements, and his day would be ruined trying to respond. Kennedy, a serial tech founder and investor in Silicon Valley, was in therapy at the time. But outside his weekly sessions, he felt the need for real-time support.
After the couple divorced, their communications moved to a platform called OurFamilyWizard, used by hundreds of thousands of parents in the United States and abroad to exchange messages, share calendars, and track expenses. (OFW keeps a record of everything, time-stamped and admissible in court.) Kennedy paid extra for an add-on called ToneMeter, which OFW touted at the time as an “emotional spellcheck.” As you composed a message, its software performed basic sentiment analysis, flagging language that might be “worrying,” “aggressive,” “annoying,” “degrading,” and so on. But there was a problem, Kennedy says: his co-parent didn’t seem to be using the ToneMeter.
Kennedy, an early adopter, had been experimenting with ChatGPT to “co-create” bedtime stories with his children. Now he turned to it for advice on communicating with his ex. He was amazed, and he was not the first. On Reddit and other internet forums, people with difficult exes, family members, and co-workers were marveling at the seemingly excellent guidance and invaluable emotional validation a chatbot could provide. Here was a machine that could tell you, with no apparent agenda, that you weren’t crazy. Here was a counselor who would patiently hold your hand, 24 hours a day, while you waded through any amount of shit. “A scalable solution” to supplement therapy, as Kennedy puts it. Finally.
But out of the box, ChatGPT was too talkative for Kennedy’s needs, he says, and it apologized too much. He would feed it harsh messages, and it would recommend responding (in many more sentences than necessary) I’m sorry, forgive me, I’ll do better. Having no self, it had no self-esteem.
Kennedy wanted a chatbot with “backbone” and figured that if he built one, plenty of other parents would want it too. As he saw it, AI could help them at every stage of their communications: it could filter emotionally triggering language out of incoming messages and summarize just the facts. It could suggest appropriate replies. It could coach users toward “a better way,” Kennedy says. So he founded a company and started developing an app. He called it BestInterest, after the standard courts often apply in custody decisions: the “best interest” of the child or children. He would take commercial OpenAI models and give them backbone with his own prompts.
Separated partners end up fighting horribly for many reasons, of course. For many, maybe even most, things cool off after enough months have passed, and a tool like BestInterest may not be useful in the long run. But when a certain personality type is in the mix (call it “high-conflict,” “narcissistic,” “controlling,” “toxic,” whatever synonym for “sheer insanity” you tend to see crossing your internet feed), the fight over the kids, at least on one side, never stops. Kennedy wanted his chatbot to stand up to those people, so he turned to the expert they hate most: Ramani Durvasula, a clinical psychologist in Los Angeles who specializes in how narcissism shapes relationships.

