There have been many complaints filed, but not much enforcement so far.
The Taskforce was created to promote cooperation between the national DPAs investigating OpenAI's chatbot.
The main contested issue is how ChatGPT collects, retains, and uses EU citizens' data.
OpenAI scrapes vast amounts of data from the web without asking for consent.
Via chatbot prompts, users can also feed the system highly sensitive data, which requires stronger protection.
OpenAI cannot realistically ask for consent before scraping people's information online, leaving legitimate interest as its only viable legal basis. That’s why, after the Italian case, the company has largely been playing the legitimate-interest card.
Suggested safeguards include avoiding certain data categories or sources (such as public social media profiles).
However, ChatGPT’s data is anything but publicly available.
We already discussed how ChatGPT and similar AI chatbots will probably never stop making stuff up.
“AI hallucinations” can not only fuel misinformation online, they can also run afoul of EU privacy laws.
Under Article 5 of the GDPR, personal data about individuals in the EU must be accurate.
Article 16 gives individuals the right to have inaccurate or false data rectified.
Again, OpenAI doesn’t meet any of these criteria.
The Taskforce also recommends informing users that information shared via chatbot prompts may be used for training purposes.
Are they really expecting OpenAI to come up with a solution?