ChatGPT has set itself up for a rough ride with Europe’s powerful privacy watchdogs.
The chatbot is the hottest sensation of artificial intelligence technology but was hit with a temporary ban in Italy last month on the grounds that it could violate Europe’s privacy rulebook, the General Data Protection Regulation (GDPR).
The Italian ban is just the start of ChatGPT’s troubles. The chatbot has opened itself up to privacy cases across the bloc, and its cutting-edge technology is irking governments over risks ranging from data protection to misinformation, cybercrime, fraud and cheating on school tests.
OpenAI, the organization that created ChatGPT, is walking with a target on its back: It has not set up a local headquarters in one of the European Union’s 27 countries, which means any member country’s data protection authority can launch new investigations and enforce bans.
Previously, Google faced a €50 million GDPR fine in France that was imposed before the U.S. tech giant formally centralized its European legal setup in Ireland. TikTok, too, had faced several privacy investigations and fines from the Dutch, Italian and French authorities before it legally set up shop in Ireland in 2021.
European data regulators are now weighing whether to follow their Italian peers and investigate the alleged abuses themselves.
The Irish Data Protection Commission said it “will coordinate with all EU [data protection authorities] in relation to this matter,” according to spokesperson Graham Doyle. The Belgian data protection authority too said that ChatGPT’s potential infringements “should be discussed at [the] European level.”
France’s data protection authority CNIL received at least two complaints against ChatGPT, on grounds of privacy violations including of the GDPR, L’Informé reported.
In Norway, “we have not to date launched an investigation into ChatGPT, but we are not ruling anything out for the future,” said Tobias Judin, the head of international work for the country’s data protection regulator Datatilsynet.
While OpenAI has denied violating EU privacy laws, the company’s chief executive Sam Altman said on Twitter that his company was deferring to “the Italian government” over the ban, seemingly confusing the country’s independent regulator with the government.
“Italy is one of my favorite countries and I look forward to visiting again soon,” Altman said.
The Italian data protection authority said on April 6 that OpenAI was open to tackling potential infringement of EU privacy laws, following a videocall with company executives.
Laws to follow
It’s not just privacy that’s causing AI systems like ChatGPT to raise concerns.
In late March, a young Belgian committed suicide following weeks of conversations with an AI-driven chatbot named Eliza, Belgian paper La Libre reported. Last month, tech mogul Elon Musk alongside thousands of AI experts called for a pause on ChatGPT’s development over “profound risks to humanity.”
Advocacy groups have followed suit. In the U.S., the Center for AI and Digital Policy has called on the American Federal Trade Commission to investigate OpenAI and block further releases of its bot. In Brussels, consumer watchdog BEUC entreated European and national regulators to investigate ChatGPT, warning that the EU’s upcoming AI rulebook might be coming too late to stave off harm.
European lawmakers are also negotiating legal guard rails on the technology as part of a draft EU Artificial Intelligence Act.
But the lack of legislation on artificial intelligence has emboldened data protection regulators to step in.
“Data protection regulators are slowly realizing that they are AI regulators,” said Gabriela Zanfir-Fortuna, of the Future of Privacy Forum think tank.
Privacy regulators enforce the GDPR, including its rules around data gathering and user protections against automated decision-making. Companies like OpenAI need to have a legal basis to collect and use personal data, be transparent about how they use people’s data, keep personal data accurate and give people a right to correction.
OpenAI has never revealed what dataset it used to train the AI model underpinning the chatbot. Even researchers at Microsoft, which is OpenAI’s chief investor, said in a recent paper that they did “not have access to the full details of [ChatGPT] vast training data.”
Facial-recognition firm Clearview AI was previously fined and ordered by privacy regulators to delete the pictures of Italian, French and British people it had collected online to build its algorithm, because it lacked a legal basis to do so.
The fact that ChatGPT suffered a data breach in March, exposing users’ conversations and payment information, only adds to its woes.
The Italian decision to stop ChatGPT in its tracks is “a wake-up call,” said Dessislava Savova, a partner focusing on tech at law firm Clifford Chance. “It will actually trigger a dialogue in Europe and it will accelerate a position being taken by other regulators.”
Laura Kayali contributed reporting.
This article has been updated to reflect that advocacy groups including BEUC have asked regulators to investigate ChatGPT, and to reflect new developments with regard to the Italian and French regulators’ work on OpenAI.