AI – Current Trends and Contract Considerations

November 2025
Andrew Croft and Stephen Fitzpatrick

As the adoption of AI in our industry continues to grow, Andrew Croft and Stephen Fitzpatrick consider some key trends in the current market.

Architects, contractors, engineers and others in the construction industry are gradually increasing their reliance on AI tools to streamline processes – a trend highlighted in the RIBA AI Survey, which we explored in detail here. While adoption varies widely between businesses, clear trends are emerging. Companies should be alive to the risks associated with greater AI use, as outlined in our note on the importance of AI policies. With AI adoption accelerating, we’re seeing an emphasis on AI-related requirements – now appearing in tenders, contracts, and even in questions from insurers during policy renewal.

In this article, we outline examples of what is being included in these documents and share practical steps businesses can take to mitigate AI-related risks.

Tender stage

Bidders are being asked to disclose their use of AI when responding to tender questions (particularly in public sector procurements), and to detail any intention to use AI as part of their proposed delivery of the service.

The government recently published a Procurement Policy Note (PPN 017) introducing example “AI Disclosure Questions”. These cover areas such as the extent to which bidders have relied upon AI in preparing their tender response and their plans to incorporate AI in service delivery. For more detail, see our article on the UK Government AI playbook here.

Tenderers should be ready to answer these questions. AI policies can help standardise responses across the business and ensure consistency at the ITT stage (see our recent article here).

Contract terms

We’re seeing an increasing number of restrictions on the use of AI via contract terms. These often include express restrictions on AI tools or models, requirements to disclose AI usage, obligations to comply with the client’s AI policy, and even termination rights for unauthorised use or policy breaches.

It is therefore essential to review contract terms carefully before deploying AI and seek amendments where provisions conflict with your intended approach.

Consultants and contractors should also understand how their use of AI aligns with contractual terms to avoid potential risks. For example:

  • Confidentiality: Confidentiality provisions are likely (even if not expressly stated) to prohibit client and/or project-specific confidential information from being input or uploaded into AI platforms, so a consultant or contractor faces real risk if sensitive information is fed into an AI model.
  • Data protection: The input of personal data into AI software or systems could breach data protection legislation such as the GDPR and/or any related contractual provisions.
  • Intellectual property: Intellectual property provisions could prohibit the use of the client’s IP in an AI platform, or require that any IP used in an AI platform to provide the services or works be licensed or assigned to the client. The use of AI could also increase the risk of IP infringement: if your AI software has been trained on pre-existing copyrighted third-party designs and materials, this may increase the risk profile for indemnities covering infringement of third-party IP. Consultants and contractors should also be aware of any planned use of AI by the client, and whether their deliverables may be fed into an AI model controlled by the client or a third party on the project. If the AI system is trained on those deliverables, the IP in them may be reused in future outputs.
  • Duty of care: A lack of internal checks could be deemed a failure to exercise reasonable care. Users should maintain regular audits of AI software to ensure accuracy of the output, and critical decisions should remain human led. Reliance on AI should not become a replacement for training of staff. The use of AI in preparing deliverables should be disclosed at all stages, both internally and externally, to allow for appropriate consideration and review.
  • Limit of liability: If a client expressly requires a form of software or AI to be used, consider how advanced the software is and if any caveats and/or disclaimers addressing the risks associated with the use of the software are required. It is likely that it will be difficult to pass liability for any errors arising from use of AI to the AI provider, so any potential liability gap should be considered carefully.
  • Client information: Check whether any client-provided information you plan to rely on was created using AI—for example, reports on site conditions or design documents. This is increasingly important as clients seek to transfer the risk of errors in their data to the supply chain. If your contract includes such risk transfer and issues arise with client information (AI-related or not), you may be unable to recover the cost of correcting those errors.
  • Agentic AI: If Agentic AI is being used by either party, consider the extent to which reliance can be placed on its responses and liability could arise. See our article on Agentic AI.
  • Use of robotics and automation: Robotics and automated machinery are beginning to be used on projects to gather data and to automate certain manual tasks. Consider the accuracy of any proposed robotics and machinery and how much reliance it is sensible to place on them.
  • Record keeping: Consider whether you will have future access to the outputs or workings of the AI system or process, as these may be needed in the event of a claim or to meet an express contractual requirement.

Insurers are increasingly wary of the potential risks of AI, and we are aware of questions being asked on this topic prior to policy renewal. Having a clear understanding of your use of AI and a clear internal policy as to how it can be used can help in such discussions.

As the use of AI becomes more commonplace, it is sensible to consider including additional provisions in your standard terms and other precedent contracts covering the use of AI and your liability for the same.

Key takeaways

There is a great deal to bear in mind when considering how AI is used on any given project. A clear and robust policy is key. Care should be taken when reviewing contract terms or policies provided by the client to make sure they allow AI to be used in the way your business anticipates, and to consider whether additional contractual provisions are needed to address the risks of AI. It is important to consider the contractual framework before using AI, to avoid unintentionally being in breach of contract, and to consider whether a standard AI clause or amendments to your standard terms are appropriate.

If you would like to discuss AI considerations at tender stage, or within your contract terms, please contact Andrew Croft or Stephen Fitzpatrick.