Anthropic lawyers apologize to court over AI ‘hallucination’ in copyright battle with music publishers

Lawyers for generative AI company Anthropic have apologized to a US federal court for using an incorrect citation generated by Anthropic’s AI in a court filing.

In a submission to the court on Thursday (May 15), Anthropic’s lead counsel in the case, Ivana Dukanovic of law firm Latham & Watkins, apologized “for the inaccuracy and any confusion this error caused,” but said that Anthropic’s Claude chatbot didn’t invent the academic study cited by Anthropic’s lawyers; it merely got the study’s title and authors wrong.

“Our investigation of the matter confirms that this was an honest citation mistake and not a fabrication of authority,” Dukanovic wrote in her submission.

The court case in question was brought by music publishers including Universal Music Publishing Group, Concord, and ABKCO in 2023, accusing Anthropic of using copyrighted lyrics to train the Claude chatbot, and alleging that Claude regurgitates copyrighted lyrics when prompted by users.

Lawyers for the music publishers and Anthropic are debating how much information Anthropic needs to provide the publishers as part of the case’s discovery process.

On April 30, Olivia Chen, an Anthropic employee serving as an expert witness in the case, submitted a court filing that cited a research study on statistics published in the journal The American Statistician.

On Tuesday (May 13), lawyers for the music publishers said they had tried to track down that paper, including by contacting one of the purported authors, but were told that no such paper existed.

In her submission to the court, Dukanovic said the paper in question does exist – but Claude got the paper’s name and authors wrong.

“Our manual citation check did not catch that error. Our citation check also missed additional wording errors introduced in the citations during the formatting process using Claude.ai,” Dukanovic wrote.

She explained that it was Chen, not the Claude chatbot, who found the paper; Claude was asked only to write the footnote referencing it.

“We have implemented procedures, including multiple levels of additional review, to work to ensure that this does not occur again and have preserved, at the Court’s direction, all information related to Ms. Chen’s declaration,” Dukanovic wrote.

The incident is the latest in a growing number of legal cases where lawyers have used AI to speed up their work, only to have the AI “hallucinate” fake information.

One recent incident took place in Canada, where a lawyer appearing before the Ontario Superior Court is facing a potential contempt of court charge after submitting a legal argument, apparently drafted by ChatGPT and other AI bots, that cited numerous nonexistent cases as precedent.

In an article published in The Conversation in March, legal experts explained how this can happen.

“This is the result of the AI model attempting to ‘fill in the gaps’ when its training data is inadequate or flawed, and is commonly referred to as ‘hallucination’,” the authors explained.

“Consistent failures by lawyers to exercise due care when using these tools has the potential to mislead and congest the courts, harm clients’ interests, and generally undermine the rule of law.”

They concluded that “lawyers who use generative AI tools cannot treat it as a substitute for exercising their own judgement and diligence, and must check the accuracy and reliability of the information they receive.”


The legal dispute between the music publishers and Anthropic recently saw a setback for the publishers, when Judge Eumi K. Lee of the US District Court for the Northern District of California granted Anthropic’s motion to dismiss most of the claims against the AI company, while giving the publishers leave to refile their complaint.

The music publishers filed an amended complaint against Anthropic on April 25, and on May 9, Anthropic once again filed a motion to dismiss much of the case.

A spokesperson for the music publishers told MBW that their amended complaint “bolsters the case against Anthropic for its unauthorized use of song lyrics in both the training and the output of its Claude AI models. For its part, Anthropic’s motion to dismiss simply rehashes some of the arguments from its earlier motion – while giving up on others altogether.”

Music Business Worldwide
