Navigating digital risks in construction: key insights from our 2026 roundtable
April 2026

The construction sector is undergoing rapid transformation as digital technologies, particularly AI, develop at extraordinary speed. Recent industry reports, such as the NBS Digital Construction Report 2025 (see this note) and the RIBA AI Report 2025 (see this note), reflect this shift: more than two in five construction professionals now use AI in daily delivery, and close to 60% of architectural practices have integrated AI into their workflows. These findings underscore a sector that is gaining confidence in its digital capabilities, even as organisations continue to grapple with emerging risks around data handling, skills and professional responsibility.
To explore how firms are responding to this shift, we hosted a roundtable with industry practitioners, legal experts and insurers. This article distils the key themes, risks and practical insights discussed, and sets out what firms can do to navigate an increasingly complex digital landscape, particularly amid emerging concerns around data management, workforce capability and professional accountability.
Key takeaways
The roundtable reinforced that although AI adoption in construction is accelerating rapidly, governance, training and contractual clarity are struggling to keep pace. Professional liability remains firmly with individuals and organisations, and insurers are observing developments closely. As digital tools continue to evolve, firms that prioritise clear policies, robust oversight and careful supplier due diligence will be best positioned to harness the benefits of innovation while managing emerging risks responsibly.
Navigating digital risk
A central focus of discussion was the evolving risk landscape surrounding AI adoption. AI is increasingly embedded across procurement, design and site operations, yet the legal and regulatory frameworks governing these uses have not kept pace. Speakers emphasised that professional responsibility is not diminished by AI; if anything, the need for supervision and verification has grown. Early warning signs are already emerging, such as the Ayinde v London Borough of Haringey case, in which lawyers were criticised for relying on AI‑generated citations that did not exist. Regulators are beginning to respond with new standards on responsible AI use.
Five principal areas of legal and operational risk were identified:
- Data and privacy – transparency over tools, cross‑border transfers and privilege concerns.
- IP ownership and output responsibility – particularly where AI‑generated material may infringe third‑party rights.
- Contracts and risk allocation – with most standard terms silent on AI use and training data.
- Health and safety – including the reliability of AI‑led decisions and maintaining the golden thread.
- Insurance – uncertainty around the extent to which AI‑related activity is covered.
The roundtable highlighted wide variation in AI adoption across organisations. Some firms have implemented AI policies and enterprise tools, while others are only beginning to develop internal guidance. Participants stressed the importance of balancing innovation with effective risk management: for example, limiting the use of open‑model systems and maintaining auditable logs of AI usage to support client transparency. Confidentiality remains a key concern, though some tools now help mitigate this by alerting users when their activity may move outside a secure environment.
Human oversight also remains essential, with most organisations requiring a senior reviewer to check AI‑assisted outputs. Participants noted that less experienced staff may be particularly prone to over‑reliance on AI, increasing the risk of compounding errors in technical or design work, and reinforcing the notion that effective AI governance must develop from the bottom up. Several speakers also observed that AI may contribute to a rise in more sophisticated claims from litigants in person, who can now generate detailed claim documents with minimal support.
The overarching message was to remain optimistic about AI’s potential while ensuring strong governance through enterprise‑level, auditable tools, organisational oversight and a clear AI policy.
Insurance landscape
The insurance perspective offered valuable insight into how the market is responding to increased AI adoption. Professional indemnity (PI) insurers are not yet routinely asking firms about their use of AI, and for now coverage remains unchanged: professionals remain fully liable for their outputs regardless of the tools used. Although explicit AI exclusions are still uncommon, participants noted isolated examples of broad exclusions emerging in the UK market. Insurers have not yet developed bespoke AI products, but this is expected to evolve as claims patterns emerge.
Looking ahead, insurers may begin narrowing cover or introducing exclusions, echoing the early development of cyber insurance. It is also likely that they will start seeking more detailed disclosures from firms as AI becomes more embedded in professional workflows. To mitigate future risk, organisations were encouraged to identify their AI‑related exposures, align them with existing quality assurance processes, and establish clear governance and reporting structures.
Practical steps for firms include being explicit in contracts about when and how AI is used, ensuring that third‑party suppliers carry appropriate insurance, and working closely with brokers to provide accurate disclosures. As reliance on digital systems grows, cyber insurance may also become increasingly relevant. Participants additionally noted challenges in evidencing decision‑making where AI automation is involved, underscoring the need for strong oversight and thorough documentation.
Navigating digital projects
On project delivery, the group returned to the core principle that professionals must continue to exercise the level of skill and care reasonably expected of them, regardless of whether AI tools are used. While AI can support efficiency and streamline certain tasks, ultimate responsibility remains with the human decision‑maker.
On the contractual side, while digital and AI questions are increasingly raised during procurement processes, it has not yet become standard practice for AI‑related provisions to be included in construction contracts. The main recent shift has been to consider the approach to AI when appointing sub‑contractors and sub‑consultants, e.g. asking whether they are using it and including provisions in sub‑consultancy agreements addressing the risks. The UK Government’s Procurement Policy Note 017 highlights a broader shift towards increased scrutiny of supplier governance and supply‑chain transparency (see this note). These developments reinforce the importance of clear disclosure and thorough due diligence at the procurement stage. It is therefore essential that the extent of digital adoption is considered carefully at the outset of any project so that the key risks and commercial issues can be addressed. For example, if the intention is to develop bespoke software or products, a Software as a Service (SaaS) type contract may be more appropriate.
Participants noted the rise of “silent AI,” with machine‑learning features increasingly embedded in everyday tools without clear disclosure. Many anticipated that clients may ultimately require the use of trusted enterprise platforms to mitigate risk. For SMEs, practical steps include relying on established providers, standardising internal processes, maintaining strong IT security, assessing vendor stability and ensuring continued access to data if a supplier were to fail.
Software and AI contracting challenges
Given the increased reliance on technology, it was outlined that the contractual arrangements between construction companies and software providers are becoming increasingly important. Historically, these have not been a top priority for many. However, given the different nature of the contracting arrangements, there is now a foreseeable risk that a construction company may be liable for issues caused by software it uses, yet unable to recover its losses from the software provider. Care is therefore required when agreeing supply contracts, together with consideration of whether additional provisions should be included in the construction contract to ensure alignment.