What if you woke up one day to find that a popular AI tool was saying false and nasty things about you? Feels like a nightmare, right? Well, a Norwegian man faced this firsthand when he found ChatGPT falsely claiming he (… wait for it …) murdered his own children and was sentenced to decades in prison.
It all started when the man, Arve Hjalmar Holmen, decided to see what ChatGPT would say if he asked the chatbot about himself. When he entered his name, he was horrified to see ChatGPT spit out a response claiming he had killed two of his children, attempted to kill a third, and been sentenced to a whopping 21 years in prison for it. All while accurately including real names and personal details.
Of course, this was not only a mentally devastating experience for Holmen; it also put his reputation at stake. ChatGPT has been a hugely popular tool worldwide for years, used by millions upon millions of people, which means anyone – especially those in Holmen’s family and social circle – could have seen the same false information about him. In his own words:
Some think that ‘there is no smoke without fire’. The fact that someone could read this output and believe it is true, is what scares me the most.
Although AI companies include a small disclaimer that their products can sometimes be wrong, tools like ChatGPT have become so ubiquitous (thanks to their generally high accuracy and the convenience they offer) that people hardly keep in mind that these tools can be wrong … let alone this wrong.
When OpenAI, the company behind ChatGPT, learned about the matter, it quickly filtered out this information, meaning ChatGPT no longer reveals it even if someone explicitly asks the AI tool about Holmen. OpenAI has also made broader changes to its system: ChatGPT now searches the Internet for publicly available information when asked about a person.
The problem that Holmen and the European digital rights group Noyb see now is this: although OpenAI has filtered out the information, it hasn’t deleted it internally. That means the same false data could still be used to train ChatGPT’s AI models – and ideally it shouldn’t be, as wrong information should never be used to train AI models.
OpenAI has always maintained that it can block information but cannot correct it. Noyb argues that this approach violates the GDPR’s “data accuracy” requirement. The GDPR, in case you aren’t aware, is an EU law that sets out a framework for how companies must handle EU residents’ personal data so that their rights are protected.
Noyb says:
While the damage done may be more limited if false personal data is not shared, the GDPR applies to internal data just as much as to shared data
Now, to put more pressure on OpenAI, Noyb has filed a complaint with the Norwegian data protection authority, Datatilsynet, seeking deletion of the incorrect data in this case and a fine to deter similar cases in the future. Only time will tell whether Holmen and Noyb will get what seems like an ideal conclusion to this ordeal.
If you are wondering how ChatGPT cooked up such a fake story, you need to know about AI hallucination: the tendency of AI models to confidently generate plausible-sounding but false information. And it’s not just laypeople; even lawyers have fallen victim to AI tools hallucinating.
Source: Ars Technica