
(Europe) The Italian Data Protection Authority imposed a temporary ban on the processing of Italian user data by OpenAI, the Microsoft-backed AI research and development company behind ChatGPT. 

The ban follows a March 20 data breach that exposed some ChatGPT users’ conversations and payment information.

In its order, the Italian Supervisory Authority highlighted these key reasons behind the ban:

  1. Lack of information provided to users about the data collected by OpenAI.

  2. Absence of legal basis for the collection and processing of personal data to train algorithms.

  3. Inaccurate processing of personal data, since ChatGPT’s outputs are not always factually correct.

  4. Lack of age verification mechanisms, exposing children to inappropriate responses.

The Supervisory Authority has directed OpenAI to respond within 20 days and comply with the order, or face a fine of up to €20 million or 4 percent of its total worldwide annual turnover, whichever is higher.

Regarding the March 20 data breach, OpenAI said, “...the number of users whose data was actually revealed to someone else is extremely low and we have contacted those who might be impacted.”

While Italy’s action seems to be the most concrete step against OpenAI so far, there have been calls globally to regulate AI.

Tech leaders including Tesla CEO Elon Musk and Apple co-founder Steve Wozniak signed an open letter urging an immediate pause of at least six months on the training of AI systems more powerful than GPT-4. The letter, which has garnered more than 50,000 signatures so far, claims, “Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”

Recently, UNESCO called on governments around the world to urgently implement the global ethical framework set out in its Recommendation on the Ethics of Artificial Intelligence.

“The world needs stronger ethical rules for artificial intelligence: this is the challenge of our time. UNESCO’s Recommendation on the Ethics of AI sets the appropriate normative framework. Our Member States all endorsed this Recommendation in November 2021. It is high time to implement the strategies and regulations at the national level. We have to walk the talk and ensure we deliver on the Recommendation’s objectives,” said Audrey Azoulay, UNESCO's Director-General.