On Monday, a developer using the popular AI-powered code editor Cursor noticed something strange: switching between machines instantly logged them out, breaking a common workflow for programmers who use multiple devices. When the user contacted Cursor support, an agent named “Sam” told them it was expected behavior under a new policy. But no such policy existed, and Sam was a bot. The AI model made the policy up, sparking a wave of complaints and cancellation threats documented on Hacker News and Reddit.
This marks the latest instance of AI confabulations (also called “hallucinations”) causing potential business damage. Confabulations are a type of “creative gap-filling” response in which AI models invent plausible-sounding but false information. Rather than admitting uncertainty, AI models often prioritize producing plausible, confident answers, even when that means manufacturing information from scratch.
For companies deploying these systems in customer-facing roles without human oversight, the consequences can be immediate and costly: frustrated customers, damaged trust, and, in Cursor’s case, potentially canceled subscriptions.
How it unfolded
The incident began when a Reddit user named BrokenToasterOven noticed that, while swapping between a desktop, a laptop, and a remote development box, Cursor sessions were unexpectedly terminated.
“Logging into Cursor on one machine immediately invalidates the session on any other machine,” BrokenToasterOven wrote in a message that was later deleted by r/cursor moderators. “This is a significant UX regression.”
Confused and frustrated, the user emailed Cursor support and quickly received a reply from Sam: “Cursor is designed to work with one device per subscription as a core security feature,” the email response read. The answer sounded definitive and official, and the user did not suspect that Sam was not human.
After the initial Reddit post, users took it as official confirmation of an actual policy change, one that broke habits essential to many programmers’ daily routines. “Multi-device workflows are table stakes for devs,” one user wrote.
Shortly afterward, several users publicly announced their subscription cancellations on Reddit, citing the non-existent policy as their reason. “I literally just canceled my sub,” the original Reddit poster wrote, adding that their workplace was now “purging it completely.” Others joined in: “Yep, I’m canceling as well, this is asinine.” Soon after, moderators locked the Reddit thread and removed the original post.
“Hey! We have no such policy,” a Cursor representative wrote in a Reddit reply three hours later. “You’re of course free to use Cursor on multiple machines. Unfortunately, this is an incorrect response from a front-line AI support bot.”
AI confabulations as a business risk
The Cursor debacle recalls a similar episode from February 2024, when Air Canada was ordered to honor a refund policy invented by its own chatbot. In that incident, Jake Moffatt contacted Air Canada’s support after his grandmother died, and the airline’s AI agent incorrectly told him he could book a regular-priced flight and apply for bereavement rates retroactively. When Air Canada later denied his refund request, the company argued that the chatbot was “a separate legal entity that is responsible for its own actions.” A Canadian tribunal rejected that defense, ruling that companies are responsible for information provided by their AI tools.
Rather than disputing responsibility as Air Canada had done, Cursor acknowledged the error and took steps to make amends. Cursor co-founder Michael Truell apologized on Hacker News for the confusion over the non-existent policy, explaining that the user had been refunded and that the problem resulted from a backend change meant to improve session security, which unintentionally created session invalidation problems for some users.
“Any AI responses used for email support are now clearly labeled as such,” he added. “We use AI-assisted responses as the first filter for email support.”
Still, the incident raised lingering questions about disclosure, since many people who interacted with Sam apparently believed it was human. “LLMs pretending to be people (you named it Sam!) and not labeled as such is clearly intended to be deceptive,” one user wrote on Hacker News.
While Cursor fixed the technical bug, the episode shows the risks of deploying AI models in customer-facing roles without proper safeguards and transparency. For a company selling AI productivity tools to developers, having its own AI support system invent a policy that alienated its core users represents a particularly awkward self-inflicted wound.
“There is a certain amount of irony that people try really hard to say that hallucinations are not a big problem anymore,” one user wrote on Hacker News, “and then a company that would benefit from that narrative gets directly hurt by it.”
This story originally appeared on Ars Technica.