ChatGPT Allegedly Made Up an Embezzlement Claim Against a Radio Host

And now OpenAI, the company behind the chatbot, is being sued for libel

OpenAI and ChatGPT logos displayed on device screens in a photo illustration, Warsaw, Poland, May 31, 2023. OpenAI is being sued for libel by a radio host who says ChatGPT made up a legal claim against him.
OpenAI: Very confidently incorrect at times.
Jaap Arriens/NurPhoto via Getty Images

Hey, guess what happens when your AI hallucinates? It’ll inevitably make a damaging false claim against someone. This apparently happened to Georgia radio host Mark Walters, who recently filed a libel lawsuit against OpenAI, the company behind ChatGPT, after the chatbot claimed he was part of an embezzlement case.

Per Gizmodo, a journalist recently asked ChatGPT for a summary of the case The Second Amendment Foundation v. Robert Ferguson; the AI allegedly responded that Walters had been accused of embezzling money from The Second Amendment Foundation (SAF). According to the filed complaint, ChatGPT claimed the host “misappropriated funds for personal expenses without authorization or reimbursement, manipulated financial records and bank statements to conceal his activities, and failed to provide accurate and timely financial reports.” Problem? Walters had nothing to do with the case and had no official role with the SAF.


“Every statement of fact in the summary pertaining to Walters is false,” reads the suit, which claims OpenAI’s chatbot “published libelous material regarding Walters.”

ChatGPT also got the case number wrong and, when asked follow-up questions, made up a passage from the nonexistent complaint against Walters.

As bad as this all is, the reality is that Walters probably won’t win his case. The false information never made it into the eventual article, because the human reporter did some fact-checking, and there was no malicious intent. “There may be recklessness as to the design of the software generally, but I expect what courts will require is evidence OpenAI was subjectively aware that this particular false statement was being created,” says Eugene Volokh, a UCLA law school professor who is currently working on an article about legal liability for AI models’ output. Volokh did tell Gizmodo he thinks this is likely the beginning of a wave of libel cases against AI companies, and that under different circumstances (such as someone losing a job over an incorrect AI accusation) there could be legal consequences.

Last week OpenAI published a blog post noting that “even state-of-the-art [AI] models still produce logical mistakes, often called hallucinations. Mitigating hallucinations is a critical step towards building aligned AGI.” The company also said it has been working on a training method called “process supervision,” which rewards a model for each correct step of reasoning rather than only for a correct final answer, an approach it hopes will cut down on hallucinations. For their sake and everyone else’s, we hope ChatGPT figures out the difference between facts and fiction pretty quickly.
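For a rough sense of what that distinction means, here is a toy sketch, not OpenAI’s implementation and with entirely made-up function names and numbers, of how scoring every reasoning step differs from scoring only the final answer:

```python
# Toy illustration only: invented names and numbers, not OpenAI's code.

def outcome_reward(final_answer_correct: bool) -> float:
    # Outcome supervision: score only whether the final answer is right.
    return 1.0 if final_answer_correct else 0.0

def process_reward(step_is_correct: list[bool]) -> float:
    # Process supervision: every intermediate reasoning step is labeled,
    # and the reward reflects all of those labels.
    if not step_is_correct:
        return 0.0
    return sum(step_is_correct) / len(step_is_correct)

# A chain of reasoning with one bad step that still lands on the right answer.
steps = [True, True, False, True]

print(outcome_reward(final_answer_correct=True))  # 1.0: the bad step goes unnoticed
print(process_reward(steps))                      # 0.75: the bad step drags the score down
```

The idea, under those assumptions, is that a hallucinated intermediate step gets penalized even when the model stumbles into the right conclusion anyway.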
