
Anthropic’s Settlement and the Impact on AI Copyright

December 2025
James Hutchinson and Jonathan Booton

Anthropic, the developer of the AI model ‘Claude’, was sued in a class action in the US over alleged copyright infringement. Anthropic subsequently reached a settlement estimated at $1.5 billion.

Background

It is alleged that Anthropic downloaded a large library of digital books and, without permission, used them to train its AI model ‘Claude’. A group of authors complained that their copyright had been infringed and lodged a class action lawsuit in the US courts.

Anthropic denied any wrongdoing and defended the claim on the basis of fair use. In a ruling in June 2025, the judge held that the use of the books to train Claude constituted ‘fair use’ under US copyright law. Where use of copyrighted content falls within the ‘fair use’ exception, users have a defence against copyright infringement claims. In the same decision, however, the judge found that Anthropic’s creation of a library of allegedly pirated books raised unresolved infringement issues which were to be determined at trial. Anthropic and the authors settled the case before it reached trial.

How the UK’s position differs from the US

  • Class actions – The UK currently takes a different approach to class actions. Whilst representative actions (the UK’s closest equivalent to US class actions) are increasing, the UK is not seeing the same volume of AI class actions as the US. In Getty Images v Stability AI, the judge declined permission for a representative action on behalf of the photographers allegedly affected. Getty Images also dropped part of its claim due to difficulties proving that infringing acts had taken place in the UK and therefore fell within the jurisdiction of the English courts. With many AI providers based in the US, more class action claims can be expected in the US rather than the UK.
  • Fair use and fair dealing – The UK has a concept similar to US ‘fair use’: the fair dealing exception under the Copyright, Designs and Patents Act 1988. It allows certain uses of a copyright work without the need for permission from the copyright owner, so long as the use is considered ‘fair’. The UK’s concept of fair dealing is, however, narrower than the ‘fair use’ defence in the US, so a use that is permitted in the US may be regarded as an infringement in the UK.
  • Government’s approach – The UK and US currently take similar approaches to AI, promoting innovation over restrictive AI regulation. Both countries prioritise principles such as accountability and transparency, relying on existing laws rather than AI-specific legislation. The US adopts a light-touch approach to regulation, relying on the courts’ interpretation of existing laws and fair use, which could favour AI providers over copyright holders. Pressure is mounting on the UK government to legislate on AI and copyright: in the last year, a private member’s AI bill was introduced in the House of Lords, and a government consultation hinted at possible new legislation to balance the interests of AI providers with those of the creative industries. Further developments are expected early next year following the UK government’s report on the impact of AI on copyright.

Implications for AI deployers

Although both countries favour a principles-based approach, their strategies are diverging, making a harmonised UK-US position unlikely. This settlement highlights those differences and their implications for stakeholders. Organisations must navigate conflicting laws and regulations depending on the nature and location of their AI activities. Below we outline key considerations for organisations deploying AI tools in light of this settlement.

  • Limited release – The settlement agreed between the parties covers only claims related to the identified works; it does not release future claims or claims over other works owned by class members. Additional claims can therefore arise if new infringements are discovered, and deploying models trained on new data will require thorough vetting.
  • Infringing outputs – The settlement does not cover any outputs generated by Claude that may themselves be infringing. AI deployers may therefore be drawn into future litigation if outputs are found to infringe the same works. Ensuring the correct contractual protections are in place will be vital.
  • Pending/future cases – Although not legally binding, this case is likely to serve as a benchmark for settlement figures and to shape expectations about remedies and outcomes, such as the destruction of infringing sources and derivative datasets. Such remedies could significantly affect the future viability of AI models where training data is removed or destroyed. If infringing material is discovered in an AI deployer’s datasets, the organisation may face obligations to remove or destroy it, and retaining legacy infringing material increases the risk of future claims from third parties should they become aware of it.
  • Data governance – AI deployers should prioritise robust data governance, keeping records of where data has come from and ensuring all training data is lawfully acquired. Use of training data that is lawful in one jurisdiction may be unlawful in another, so AI deployers operating globally should map risks across jurisdictions.
  • Robust review of terms and conditions – AI deployers should ensure that AI providers offer robust contractual protections. These terms are often non-negotiable, so deployers should check for sufficient protections in the form of warranties of lawful data collection or indemnities covering the origin of training data.
  • Indemnities for third party IPR claims – AI deployers should review whether the AI provider offers any indemnification in relation to third party claims. Typically, AI providers offer indemnities where the deployer is sued for copyright infringement arising from use of the outputs produced by the AI tool.
  • Managing output risk – Even if an organisation is confident about the lawfulness of its data inputs, model outputs may inadvertently infringe a third party’s rights. As outlined above, the settlement does not release liability for infringing outputs, so AI deployers should have guardrails, audits and review processes in place to monitor outputs and detect potential infringement.

How Beale & Co can help

This settlement marks a significant milestone for AI innovation and intellectual property law. We will continue to monitor the landscape and provide further updates. In the meantime, to learn more about how we can support your AI compliance and risk management, please contact James Hutchinson and Jonathan Booton.
