Austrian Advocates Challenge ChatGPT on Error-Ridden Data

A privacy advocacy group based in Vienna announced that it would bring legal action in Austria against ChatGPT's maker, OpenAI. The group claims that the "hallucinating" flagship AI tool generates incorrect information about individuals that OpenAI cannot correct.

The GenAI revolution was sparked by ChatGPT, which debuted in late 2022 and can mimic human speech while performing tasks like summarising lengthy texts, writing poetry, and even coming up with theme party ideas.


The privacy group NOYB ("None of Your Business") has raised a crucial concern: there is no guarantee that the information ChatGPT provides is accurate. The group went so far as to title its statement "ChatGPT keeps hallucinating—and not even OpenAI can stop it."

Privacy issues

NOYB raised this issue in a complaint filed with the Austrian Data Protection Authority (DPA).

The case raises important questions about how artificial intelligence (AI) handles personal data. At the heart of the complaint against OpenAI is GDPR compliance, particularly the requirements concerning data accuracy and individual rights.

According to the group, the company has publicly admitted that it cannot correct inaccurate information generated by its generative AI tool, nor has it disclosed where the data comes from or what personal data ChatGPT stores about individuals.

NOYB stated that such errors are unacceptable for information about specific individuals because EU law requires that personal data be accurate.

Data protection lawyer Maartje de Graaf of NOYB said that if a system cannot produce accurate and transparent results, it cannot be used to generate data about individuals.

“The technology has to follow the legal requirements, not the other way around.”

According to the group, ChatGPT “repeatedly provided incorrect information” regarding NOYB founder Max Schrems’ birthdate “instead of telling users that it doesn’t have the necessary data.”

OpenAI responds

According to NOYB, OpenAI denied Schrems's request to correct or erase the incorrect data, stating that doing so was impossible.

Additionally, NOYB claimed that the company "failed to adequately respond" to his request to access his own data, a further breach of EU law, and that OpenAI "seems not even to pretend that it can comply."

In response to an AFP request for comment, OpenAI stated it was “committed to protecting data privacy.”

“We want our AI models to learn about the world, not individuals; we do not actively seek personal information to train our models, and we do not use publicly available information on the Internet to profile, advertise to, or target people, or to sell their data,” said an OpenAI spokesperson.

NOYB, founded in 2018 and since then a vocal critic of tech companies, has asked Austria's data protection authority to investigate OpenAI and penalize it in order to bring it into compliance with EU law.

Specifically, NOYB is asking the authority to investigate OpenAI's data processing and the measures the company has taken to ensure the accuracy of personal data processed by its large language models (LLMs). NOYB also requested that the authority order OpenAI to comply with the complainant's access request and impose a fine to ensure future compliance.

Over similar data-processing concerns, Italy's data protection agency temporarily banned ChatGPT last year, and in January it warned the company that its practices might violate the GDPR.
