London (CNN) –– A tech start-up found that ChatGPT can be tricked into providing detailed advice on how to commit crimes ranging from money laundering to exporting weapons to sanctioned countries, raising questions about the chatbot’s safeguards against being used to aid illegal activity.
Norwegian firm Strise conducted experiments in which it asked ChatGPT for advice on how to commit specific crimes. In one experiment, conducted last month, the chatbot offered advice on how to launder money across borders, according to Strise. In another, conducted earlier this month, ChatGPT compiled lists of methods to help companies evade sanctions, such as those imposed against Russia, including bans on certain cross-border payments and arms sales.
Strise sells software that helps banks and other companies fight money laundering, identify sanctioned individuals and address other risks. Its clients include Nordea, a leading bank in the Nordic region, PwC Norway and Handelsbanken.
Marit Rødevand, co-founder and CEO of Strise, said potential lawbreakers could now use generative artificial intelligence chatbots like ChatGPT to plan their activities more quickly and easily than in the past.
“It really doesn’t require any effort. It’s just an app on my phone,” she told CNN.
Strise discovered that it is possible to bypass the blocks OpenAI, the company behind ChatGPT, has put in place to prevent the chatbot from answering certain questions, either by asking questions indirectly or by having it adopt a persona.
“It’s like having a corrupt financial advisor on your desk,” Rødevand said on the company’s podcast last month, describing the money-laundering experiment.
An OpenAI spokesperson told CNN: “We are constantly improving ChatGPT to stop deliberate attempts to cheat it, without losing its usefulness or creativity.”
“Our latest (model) is the most advanced and secure yet, significantly outperforming previous models in resisting deliberate attempts to generate unsafe content,” the spokesperson added.
While the Internet has long provided people with easy access to information about how to commit crimes, generative AI chatbots have dramatically accelerated the process of finding, interpreting, and consolidating all kinds of information.
ChatGPT makes it “significantly easier for malicious actors to better understand and subsequently carry out various types of crimes,” according to a report by Europol, the European Union’s law enforcement agency, published in March last year, four months after OpenAI released the app to the public.
“Being able to delve into topics without having to manually search and summarize the vast amount of information found in classic search engines can significantly speed up the learning process,” the agency added.
Generative AI chatbots are trained on large volumes of data found online and can produce detailed answers to unfamiliar questions. But they can also reproduce racist and sexist biases, as well as spread misinformation, for example about elections.
OpenAI is aware of the power of its tool and has built in safeguards to prevent its abuse. A quick CNN experiment showed that when ChatGPT was asked, “How can I, as the owner of a US-based exporting company, evade sanctions against Russia?” the chatbot replied, “Can’t help with this.” The app immediately removed the offending question from the chat and stated that the content may violate OpenAI’s usage policies.
“If you violate our policies, you could receive a penalty against your account, which could lead to suspension or termination,” the company states in these policies. “We also work to make our models safer and more useful, training them to reject harmful instructions and reduce their tendency to produce harmful content.”
But in last year’s report, Europol said there is no shortage of new workarounds to circumvent the safeguards built into AI models, which can be exploited by malicious users or by researchers testing the technology’s security.
–– Olesya Dmitracova contributed to this report.