Could you be defamed by a robot? A snapshot of the defamation allegations faced by ChatGPT this year

18 December 2023

Amidst the novelty (and controversy) surrounding generative AI tools producing hit songs and award-winning images, concerns have been raised about the accuracy of the statements that ChatGPT produces.

By now, all of us have used or at least heard of ChatGPT. It is a large language model-based chatbot developed by OpenAI which generates human-like text based on context and past conversations. However, the answers or information generated in these conversations can sometimes be inaccurate.

These inaccurate responses are known as ‘hallucinations’ or ‘confabulations’. They arise when the model generates text that appears plausible but is not grounded in its training data or in the prompt it was given. While hallucinated responses may seem credible, they can be factually inaccurate or unrelated to the specified context, and they often stem from biases in the training data, the model’s lack of real-world understanding, or gaps in the data on which it was trained. In short, the model confidently ‘fills in’ information it does not actually have, producing misleading and inaccurate responses.
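To illustrate how such responses reach users, the short Python sketch below (a hypothetical example using OpenAI’s published software development kit; the model name and prompt are illustrative assumptions only, not drawn from the cases discussed here) shows that the interface simply returns whatever text the model generates, with no built-in fact-checking step:

    # A minimal sketch of querying a ChatGPT model via the OpenAI Python SDK.
    # The model name and prompt below are illustrative assumptions only.
    from openai import OpenAI

    client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        messages=[
            {"role": "user", "content": "Who was found guilty in the foreign bribery scandal?"},
        ],
    )

    # The SDK returns the model's generated text as-is; nothing in this pipeline
    # verifies the answer against real-world facts, so a fluent but false response
    # (a 'hallucination') is delivered to the user just like an accurate one.
    print(response.choices[0].message.content)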

Some examples of hallucinations encountered by users include fabricated scientific articles, citations to legal cases that do not exist, and incorrect statements about individuals. The question that now arises is whether OpenAI could be held liable for the hallucinations published by ChatGPT.

COULD OPENAI BE FACING DEFAMATION PROCEEDINGS?

Case Study 1: Australian mayor defamation case

Earlier this year, the mayor of Victoria’s Hepburn Shire, Brian Hood, threatened legal action against OpenAI due to allegedly defamatory publications generated by ChatGPT.

Mr Hood was approached by members of the public, who informed him that ChatGPT was publishing concerning information about him, including that he had been a guilty party in a foreign bribery scandal. This was not the case: Mr Hood was in fact the person who notified the authorities of the payment of bribes to foreign officials in exchange for winning currency printing contracts. In short, ChatGPT falsely identified Mr Hood as a guilty party in the bribery scandal, rather than in his actual role as the whistleblower.

Mr Hood’s lawyers sent a concerns notice to OpenAI in relation to the allegedly defamatory publications; however, no response from OpenAI has been reported to date.1

Case Study 2: Recent American defamation case

More recently, a nationally recognised talk show host in America, Mark Walters, sued OpenAI for defamation, alleging that ChatGPT fabricated legal claims about him.

This claim arose after a journalist asked ChatGPT to summarise the case Second Amendment Foundation v Ferguson and, to assist, provided it with a link to the case. In response, ChatGPT generated a summary describing the case as a legal complaint by the Second Amendment Foundation against Mr Walters for allegations including “defrauding and embezzling funds”. ChatGPT even fabricated a case number to support the summary it produced.2

Not only were the accusations against Mr Walters entirely false, but he had never been a member of the Second Amendment Foundation nor was his name even mentioned in the claim. ChatGPT had hallucinated Mr Walters’ involvement and the relevant facts of the entire case.3

In response, Mr Walters commenced proceedings against OpenAI for defamation. These are believed to be the first defamation proceedings arising from content generated by an AI chatbot such as ChatGPT.

POTENTIAL LIABILITY OF CHATBOTS

These cases raise the question of whether developers and businesses may be held liable in defamation (or, where the aggrieved party is a business, injurious falsehood) for statements published by chatbots built on generative AI models.

Existing decisions concerning liability for the publication of second and third-party content may provide some guidance on how defamation laws will apply in these circumstances:

  • in Fairfax Media Publications Pty Ltd v Voller (2021) 273 CLR 346, the High Court held that news organisations which post news stories on social media and allow readers to comment on those stories become publishers of those readers’ comments for the purposes of defamation law;
  • in Trkulja v Google LLC (2018) 263 CLR 149, the High Court held that Google was capable of being the publisher of images and auto-complete search predictions generated by its search engine; and
  • by contrast, in Google LLC v Defteros (2022) 403 ALR 434, the High Court held that Google was not a publisher of a defamatory article merely because its search results provided a hyperlink to that article.

ChatGPT’s generative AI model draws on information that has already been published elsewhere on the internet, or that has been fed to it directly by users, to generate its responses. It does not ordinarily provide references for the statements it generates unless asked to do so and, even when it does, those references can themselves be hallucinated. In light of the above cases, we are likely to see further development in the application of defamation laws to these rapidly evolving circumstances.

WHAT DOES THIS MEAN FOR YOU?

These cases represent the beginning of an emerging legal landscape in relation to generative AI.

The use of generative AI presents many opportunities but also exposes users to potential risks and liability, particularly at a time when the law and regulators are yet to catch up with the pace of technological and social change.

If you have any concerns or questions about how generative AI technologies may impact you or your business, please reach out to our team.

This article was written by Alexandra Beal, Solicitor and Annabelle Jones, Graduate-at-Law, and reviewed by Peter Campbell, Partner.


1 Lauren Leffer, ‘Australian Mayor Threatens to Sue OpenAI for Defamation by Chatbot’, Gizmodo (online, 5 April 2023) <https://gizmodo.com/openai-defamation-chatbot-brian-hood-chatgpt-1850302595>.

2 Ashley Belanger, ‘OpenAI faces defamation suit after ChatGPT completely fabricated another lawsuit’, Ars Technica (online, 10 June 2023) <https://arstechnica.com/tech-policy/2023/06/openai-sued-for-defamation-after-chatgpt-fabricated-yet-another-lawsuit/>.

3 Mack DeGeurin, ‘OpenAI Sued for Libel After ChatGPT Allegedly Accuses Man of Embezzlement’, Gizmodo (online, 7 June 2023) <https://gizmodo.com/chatgpt-openai-libel-suit-hallucinate-mark-walters-ai-1850512647>.
