If you have already dabbled with artificial intelligence services such as ChatGPT, Claude, and Grok, you are likely aware of AI’s potent capabilities.
And if you own intellectual property, such as a copyright, you may also be aware of one of AI’s potential downsides: the way in which some AI services use copyrighted materials to train their large language models, or LLMs.
LLM AI services like ChatGPT and Claude use a “generative pre-trained transformer” model.
These AI services rely on LLMs that are pre-trained on massive amounts of information, such as pre-existing text and images.
The AI services then use their neural network architecture to model the context and relationships among the pieces of information in their training libraries, producing entirely new text, images, and other outputs.
Copyright infringement case
In August 2024, a group of authors sued Anthropic, the owner of the LLM AI service Claude, alleging that Anthropic had infringed the authors’ federal copyrights by using their works, both legitimately purchased copies and unauthorized (“pirated”) copies, to train its LLMs, including Claude. Bartz et al. v. Anthropic PBC, 3:24-cv-05417-WHA (N.D. Cal. Aug. 19, 2024).
Anthropic filed a motion for summary judgment, asking the court to determine that its uses of the authors’ works did not infringe on their copyrights because those uses amounted to “fair use” under Section 107 of the Copyright Act.
Court order issued
On June 23, 2025, the court issued an order on Anthropic’s fair use defense, which can be summarized as follows:
Anthropic’s use of the authors’ books to train Anthropic’s LLMs was “exceedingly transformative” and was a fair use under Section 107 of the Copyright Act.
Anthropic’s digitization of the authors’ books that it had previously purchased in print form was a fair use under Section 107 of the Copyright Act because Anthropic simply replaced its print copies with more convenient digital copies without adding to or redistributing the copies or creating new works.
Anthropic’s use of pirated copies of the authors’ books for Anthropic’s central library of training materials was not a fair use under Section 107 of the Copyright Act.
This landmark ruling provides the first judicial guidance on an issue of key importance to the AI industry: the extent to which AI services may use copyrighted works to train LLMs without explicit agreement from the works’ owners.
But this is likely not the end of the story: there is no word yet on whether the authors will appeal the ruling. And, while AI services may now have some legal basis to train their LLMs, Anthropic still must face the authors’ claims relating to its use of pirated books.
The outcome of these claims could still have major implications for AI services and copyright holders alike.
With the continuing evolution of LLM AI models and their ease of use, many more concerns about how they are trained and used are sure to arise.•
__________
Justin Sorrell is a partner at Riley Bennett Egloff LLP. Opinions expressed are those of the author.