Detecting when text has been generated using tools like ChatGPT is a difficult task. Popular AI detection tools like GPTZero can provide some guidance by telling users when something was written by a bot rather than a human, but even specialized software isn’t infallible and can issue false positives.
As a journalist who started covering AI detection over a year ago, I wanted to round up some of WIRED’s best articles on the topic to help readers like you better understand this complicated issue.
Have even more questions about detecting output from ChatGPT and other chatbot tools? Sign up for my AI Unlocked newsletter, and contact me directly with anything AI-related you’d like me to answer or for WIRED to explore further.
How to detect AI-generated text, according to researchers
February 2023 by Reece Rogers
In this article, written about two months after the launch of ChatGPT, I began to dig into the intricacies of AI text detection, as well as what the AI revolution could mean for writers who publish online. Edward Tian, the founder of GPTZero, talked to me about how his AI detector focuses on factors like text variance and randomness.
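Tian didn’t walk me through GPTZero’s internals, but the intuition behind signals like variance and randomness is easy to sketch. The toy score below is my own illustration, not GPTZero’s method: it simply measures how much sentence lengths swing across a passage, since human writing tends to mix short and long sentences while machine text often reads more uniformly. Real detectors lean on model-computed perplexity rather than anything this crude.

```python
import re
import statistics

def burstiness_score(text: str) -> float:
    # Split into rough sentences and compare their lengths.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    # Coefficient of variation: higher means more "bursty," human-like variety.
    return statistics.stdev(lengths) / statistics.mean(lengths)

# Uniform, repetitive sentences score low...
print(burstiness_score("The model wrote this. The model wrote that. The model wrote more."))
# ...while a mix of short and long sentences scores higher.
print(burstiness_score("Short. But then the writer wanders into a much longer, looser thought before snapping back. See?"))
```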
As you read, focus on the section on text watermarking: “A watermark may be able to designate certain word patterns as out of bounds for the AI text generator.” Although it was a promising idea, the researchers I spoke with were already skeptical about its potential effectiveness.
The AI detection arms race is on
September 2023 by Christopher Beam
A great piece from last year’s October issue of WIRED, this article gives you an inside look at Edward Tian’s mindset as he worked to expand GPTZero’s reach and detection capabilities. The focus on how AI has affected schoolwork is crucial.
AI text detection is a top priority for many classroom educators as they grade assignments, and some are considering giving up essay assignments altogether because students secretly use chatbots to complete the homework. While some students may use generative AI as a brainstorming tool, others use it to fabricate entire assignments.
AI detection startups say Amazon could flag AI books. It doesn’t
September 2023 by Kate Knibbs
Do companies have a responsibility to label products generated with AI? Kate Knibbs investigated how AI-generated books that could infringe copyright were being put up for sale on Amazon, even though some startups believed the products could be detected with specialized software and removed. One of the core debates about AI detection centers on whether the potential for false positives (text written by humans that gets accidentally flagged as the work of AI) outweighs the benefits of tagging algorithmically generated content.
AI use is seeping into academic journals, and it’s proving difficult to detect
August 2023 by Amanda Hoover
Beyond homework, AI-generated text is showing up more often in academic journals, where it is often banned unless properly disclosed. “Articles written with artificial intelligence could also divert attention from good work by diluting the body of scientific literature,” writes Amanda Hoover. One potential strategy to address this problem is for developers to create specialized detection tools that look for AI content in peer-reviewed articles.
Researchers tested AI watermarks and broke them all
October 2023 by Kate Knibbs
When I first spoke with researchers last February about watermarking for AI text detection, they were hopeful but cautious about the potential to imprint AI text with specific language patterns that human readers can’t detect but that are obvious to detection software. Looking back, their trepidation seems well placed.
Just half a year later, Kate Knibbs spoke with several sources who were cracking AI watermarks and demonstrating their underlying weakness as a detection strategy. While not guaranteed to fail, AI text watermarking remains difficult to pull off.
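None of these articles publish a vendor’s exact scheme, but many text watermarks follow a “green list” idea: the generator is nudged toward word choices keyed off the preceding word, and a detector counts how often those choices appear. Here is a hypothetical, word-level toy version (the function names and the even-hash rule are my own simplification, not any real product’s design); paraphrasing the text scrambles those word pairs, which is roughly how the researchers Knibbs spoke with broke the real schemes.

```python
import hashlib

def is_green(prev_word: str, word: str) -> bool:
    # Toy rule: hash the word pair and call the word "green" if the hash is even.
    # A watermarking generator would steer toward green words as it writes.
    digest = hashlib.sha256(f"{prev_word}|{word}".encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(text: str) -> float:
    # A detector just counts the share of green word pairs it sees.
    words = text.lower().split()
    pairs = list(zip(words, words[1:]))
    if not pairs:
        return 0.0
    return sum(is_green(prev, word) for prev, word in pairs) / len(pairs)

# Unwatermarked human text should hover around 0.5; heavily watermarked
# output would sit well above that, and rewording drags it back toward 0.5.
print(green_fraction("this sentence was written by a person with no watermark at all"))
```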
Students are likely writing millions of papers with AI
April 2024 by Amanda Hoover
One tool that teachers are trying to use to detect AI-generated class work is Turnitin, a plagiarism detection software that added AI detection capabilities. (Turnitin is owned by Advance, the parent company of Condé Nast, which publishes WIRED.) Amanda Hoover writes, “Chechitelli says most of the service’s customers have opted to buy AI detection. But the risks of false positives and bias against English learners have led some universities to abandon the tools for now.”
AI detectors are more likely to falsely label content written by someone whose first language is not English as AI-generated than content written by a native English speaker. As developers continue working to improve AI detection algorithms, the problem of erroneous results remains a fundamental hurdle to overcome.