OpenAI Faces Potential Inquiry Over False Information Generated by AI Model
OpenAI, the creator of the AI chatbot ChatGPT, is under scrutiny as Poland's data protection authority (UODO) considers launching an inquiry into the company. The move follows a complaint from a citizen who alleged that ChatGPT generated false information about him, raising concerns about how OpenAI handles personal data.
According to the complainant, ChatGPT relied on data from 2021 that was collected in violation of EU privacy law, producing inaccurate results. He also pointed to a lack of transparency about the source of the data, prompting a closer examination of OpenAI's data processing practices.
OpenAI's responses to the complainant's questions have been deemed unsatisfactory. In light of this, UODO's Deputy President, Jakub Groszowski, affirmed the complainant's right to seek recourse and emphasized the responsibility of data privacy watchdogs.
The development of new technologies must respect the rights of individuals as outlined in the General Data Protection Regulation (GDPR). European data protection authorities have the task of safeguarding European citizens from the potential negative impacts of information processing technology.
This complaint has raised doubts about whether OpenAI adheres to the GDPR’s core principle of privacy by design. Consequently, UODO will seek clarification on OpenAI’s data policies through a comprehensive administrative investigation.
Notably, the GDPR, which took effect in May 2018, already introduced safeguards that apply to artificial intelligence (AI). These requirements drew criticism, however, as companies found it difficult to comply fully with the regulations and to understand how their use of personal data aligned with AI practices.
Privacy laws have also been perceived as hindering innovation: in one German survey of the digital industry, 74% of respondents cited them as the biggest obstacle to developing new technologies.
Automated decision-making, an inherent aspect of AI models, has long drawn criticism. Earlier machine learning models were scrutinized for perpetuating biases and reinforcing societal inequalities; recruitment models, for example, inadvertently encoded gender bias, contributing to the underrepresentation of female candidates in certain fields.
During the COVID-19 pandemic, governments and companies hastened the adoption of AI, automation, and surveillance tools for the sake of expediency. That haste produced flawed systems that harmed individuals: a grading algorithm deployed by Ofqual, the exams regulator in England, was intended to prevent the inflation of school results but ended up disproportionately downgrading high achievers and students from disadvantaged backgrounds.
Later this year, British Prime Minister Rishi Sunak will convene a summit of AI executives and researchers to discuss the future of AI. Alongside this, the European Parliament may pass the EU's Artificial Intelligence Act before the year ends.
The potential inquiry into OpenAI’s ChatGPT serves as a reminder of the challenges posed by automated decision-making in AI models. It underscores the importance of careful data processing and aligning with privacy regulations to ensure the responsible development and deployment of AI technologies.