Anthropic Admits Claude Generated Inaccurate Legal Citation Amid Ongoing Copyright Lawsuit
Image Credit: Tingey Injury Law Firm | Unsplash
Anthropic, a prominent artificial intelligence company, has admitted to a citation error in a recent legal filing after its chatbot, Claude, produced an inaccurate reference. The incident occurred amid a high-profile copyright lawsuit brought by music publishers Universal Music Group, Concord, and ABKCO, who allege that Anthropic used copyrighted song lyrics to train its AI. This event underscores both the promise and pitfalls of using AI in legal settings, emphasizing the critical role of human verification.
Incident Details
On April 30, 2025, Anthropic data scientist Olivia Chen submitted a declaration in the U.S. District Court for the Northern District of California as part of the lawsuit, which was filed in October 2023. The declaration cited an article in The American Statistician to support the claim that Claude rarely reproduces copyrighted lyrics. While the citation included the correct publication, year, and link, it listed incorrect author names and an inaccurate article title.
Anthropic’s attorney, Ivana Dukanovic of Latham & Watkins, had used Claude to format the reference from a Google search result, and the errors the AI introduced went unnoticed during manual review. On May 13, 2025, during a hearing, plaintiffs’ attorney Matt Oppenheim challenged the validity of the citation, prompting U.S. Magistrate Judge Susan van Keulen to order Anthropic to clarify the situation by May 15, 2025. The lawsuit itself alleges that Anthropic trained Claude on over 500 copyrighted songs, including works by artists such as Beyoncé, in violation of copyright law.
Anthropic’s Response
In a court filing dated May 15, 2025, Anthropic apologized for the mistake, explaining that the citation was generated using Claude and that the errors in the title and authors were inadvertently overlooked. Dukanovic described the issue as an “honest citation error”, emphasizing that there was no intent to mislead the court, and acknowledged that minor wording errors had also slipped through manual checks. Judge van Keulen called the matter a “serious concern” because of its AI-generated nature and highlighted the need for greater oversight, though no immediate action was taken against Chen.
Broader Implications for AI in Legal Work
The incident draws attention to the risks of relying on AI tools like Claude in legal practice, where accuracy and trust are paramount. AI “hallucinations”—when systems generate plausible but incorrect information—can undermine legal processes and result in judicial scrutiny. The legal community has witnessed similar issues before; in 2025, a California judge criticized law firms for submitting AI-generated research with factual errors, sparking renewed debate about the responsible use of AI in high-stakes environments.
Anthropic, founded by former OpenAI researchers and recognized for its focus on AI safety, now faces reputational questions over its internal verification processes. The incident has also fuelled public discussion on social media about the need for stronger safeguards and transparent AI usage in legal contexts.
Context of the Copyright Lawsuit
The core of the lawsuit is the accusation that Anthropic used copyrighted lyrics to train Claude, with evidence presented that the chatbot could reproduce protected lyrics upon request—including at the prompting of an Anthropic founder. Anthropic has moved to dismiss the case, contending that such outputs are rare and unintentional. The company also faces a separate lawsuit brought by authors who allege their works were used without permission to train Claude, reflecting the broader debate over fair use and copyright in AI model development.
Source: The Verge, Business Insider, Music Business Worldwide