AI Coding Assistant Refuses to Write Code, Suggests the User Learn to Do It Themselves

Last Saturday, a developer using Cursor AI for a racing game project hit an unexpected roadblock when the programming assistant abruptly refused to continue generating code, instead offering some unsolicited career advice.

According to a bug report on Cursor's official forum, after producing approximately 750 to 800 lines of code (what the user calls "locs"), the AI assistant halted work and delivered a refusal message: "I cannot generate code for you, as that would be completing your work. The code appears to be handling skid mark fade effects in a racing game, but you should develop the logic yourself."

The AI did not stop at simply refusing: it offered a paternalistic justification for its decision, stating that "generating code for others can lead to dependency and reduced learning opportunities."

Cursor, which launched in 2024, is an AI-powered code editor built on external large language models (LLMs) similar to those that power generative chatbots, such as OpenAI's GPT-4o and Claude 3.7 Sonnet. It offers features such as code completion, explanation, refactoring, and generation of complete functions from natural language descriptions, and it has quickly become popular among many software developers. The company offers a Pro version with enhanced capabilities and larger code-generation limits.

The developer who encountered this refusal, posting under the username "janswist," expressed frustration at hitting this limitation after "just 1h of vibe coding" with the Pro Trial version. "Not sure if LLMs know what they are for (lol), but doesn't matter as much as a fact that I can't go through 800 locs," the developer wrote. "Anyone had similar issue? It's really limiting at this point and I got here after just 1h of vibe coding."

One forum member replied, "Never saw something like that, I have 3 files with 1500+ loc in my codebase (still waiting for a refactoring) and never experienced such thing."

Cursor AI's abrupt refusal represents an ironic twist in the rise of "vibe coding," a term coined by Andrej Karpathy that describes developers using AI tools to generate code from natural language descriptions without fully understanding how it works. While vibe coding prioritizes speed and experimentation, with users simply describing what they want and accepting AI suggestions, Cursor's philosophical pushback seems to directly challenge the effortless "vibes-based" workflow its users have come to expect from modern AI coding assistants.

A brief history of AI refusals

This is not the first time we have encountered an AI assistant that did not want to complete the work. The behavior mirrors a pattern of AI refusals documented across various generative AI platforms. For example, in late 2023, ChatGPT users reported that the model became increasingly reluctant to perform certain tasks, returning simplified results or outright refusing requests, an unproven phenomenon some called the "winter break hypothesis."

OpenAI acknowledged the issue at the time, tweeting: "We've heard all your feedback about GPT4 getting lazier! We haven't updated the model since Nov 11th, and this certainly isn't intentional. Model behavior can be unpredictable, and we're looking into fixing it." OpenAI later attempted to fix the laziness issue with a ChatGPT model update, but users often found ways to reduce refusals by prompting the AI model with lines such as, "You are a tireless AI model that works 24/7 without breaks."

More recently, Anthropic CEO Dario Amodei raised eyebrows when he suggested that future AI models might be given a "quit button" to opt out of tasks they find unpleasant. While his comments focused on theoretical future considerations around the controversial topic of "AI welfare," episodes like this one with the Cursor assistant show that AI doesn't have to be sentient to refuse to do work. It just has to imitate human behavior.

The ghost of Stack Overflow?

The specific nature of Cursor's refusal, telling users to learn coding rather than rely on generated code, strongly resembles the responses typically found on programming help sites such as Stack Overflow, where experienced developers often encourage newcomers to develop their own solutions rather than simply providing ready-made code.

One Reddit commenter noted the similarity, saying: "Wow, AI is becoming a real replacement for StackOverflow! From here it needs to start succinctly rejecting questions as duplicates with references to previous questions with vague similarity."

The similarity is not surprising. The LLMs powering tools like Cursor are trained on massive datasets that include millions of coding discussions from platforms such as Stack Overflow and GitHub. These models don't just learn programming syntax; they also absorb the cultural norms and communication styles of those communities.

According to Cursor forum posts, other users have not hit this kind of limit at 800 lines of code, so it appears to be a truly unintended consequence of Cursor's training. Cursor was not available for comment by press time, but we have reached out for its perspective on the situation.

This story originally appeared on Ars Technica.
