AI Readiness Across the Professions and What it Means for Insurers
November 2025

Artificial intelligence (AI) is transforming professional services across all sectors, creating new opportunities and new risks. We consider the differing approaches of various professional bodies, and whether those differences should affect insurers' perception of the risk presented by the use of AI by different professions. This divergence provides useful insight into which professions are developing structured risk management approaches, and which may require closer underwriting scrutiny.
As previously discussed here, the Royal Institution of Chartered Surveyors (RICS) has issued clear guidance with its new global standard on responsible AI use. Other professional bodies are at varying stages of engagement.
Regulatory context
AI now underpins many professional activities including surveying, engineering, accounting, auditing and law. The UK Government's AI Playbook 2025 (which Andrew Croft and Anna Benz discuss here) established cross-sector principles for responsible AI use: defining purpose, ensuring human oversight, monitoring outcomes, maintaining audit trails and promoting transparency. Whilst most professional guidance mirrors the AI Playbook, different regulators have taken differing approaches, largely shaped by their overarching philosophy of professional regulation.
Professional landscape
RICS (Royal Institution of Chartered Surveyors)
RICS is the first professional body to issue a binding global standard on the responsible use of AI. Taking effect on 9 March 2026, it requires members and regulated firms to implement AI governance policies, maintain risk registers, document due diligence and disclose AI use to clients. Crucially, surveyors remain personally accountable for AI outputs. The standard mirrors the AI Playbook principles, offering insurers a transparent framework for assessing compliance and standard of care. For the surveying profession, this marks a shift from ethical guidance to enforceable obligation.
RIBA (Royal Institute of British Architects)
RIBA’s Artificial Intelligence Report 2025 demonstrates genuine engagement with AI’s impact on design and practice management. Based on member survey data showing that 59% of practices now use AI, the report examines opportunities and risks and promotes the creation of internal AI policies. It addresses ethics, low-carbon design, and the integration of AI into creative workflows. Although advisory rather than mandatory, RIBA’s position signals increasing maturity and risk awareness across the architectural profession.
ICE (Institution of Civil Engineers)
The ICE has referenced AI within its digital transformation and “Digital by Default” initiatives, highlighting its potential in infrastructure design, modelling and asset management. However, no formal professional standard or compliance guidance currently exists. The institution’s publications remain exploratory, focusing on innovation rather than accountability. In this context, there may be greater variability in how AI tools are applied within civil engineering, which insurers may wish to consider when assessing how patterns of professional use are taking shape.
IStructE (Institution of Structural Engineers)
IStructE has acknowledged AI as an area of growing relevance for the profession and has begun considering its implications through professional forums and technical commentary. While it has not published AI-specific guidance or frameworks at this stage, its current approach reflects an exploratory phase consistent with how the institution typically develops new professional resources. In the absence of more detailed expectations, there may be a wider range of practice when AI-enabled tools are used in structural work, which insurers may need to consider when assessing how certainty and consistency of approach are developing within the profession.
ICAEW (Institute of Chartered Accountants in England and Wales)
ICAEW has published guidance on ethical AI use, focusing on data analytics, automation and transparency in audit processes. Its materials reflect strong awareness of algorithmic bias and audit trail requirements. While not construction-specific, the guidance demonstrates a more established regulatory culture of accountability and governance that may help mitigate AI-related risk in financial and valuation contexts relevant to insurers.
SRA (Solicitors Regulation Authority)
The SRA has issued commentary recognising AI’s growing use in legal services and emphasises transparency, fairness and professional accountability. It cautions against over-reliance on generative tools without human oversight. Although not codified in rules, consistent with the SRA’s outcomes-based approach more generally, this guidance provides an ethical foundation that aligns with insurer expectations around explainability and responsibility in legal decision-making.
Comparative analysis
The professional bodies adopt different positions along a spectrum of regulatory intervention. At one end are prescriptive approaches, such as the RICS global standard, which defines processes, governance structures and client disclosure requirements. At the other end are outcome-focused bodies such as the SRA, which places responsibility on the professional to meet standards without prescribing the method. Others, including RIBA, ICE, IStructE and ICAEW, sit at various points along this spectrum, offering guidance, thought leadership or ethical direction without imposing formal or mandatory requirements.
These approaches reflect each body’s regulatory culture, statutory remit and professional expectations. Greater prescription provides clarity for professionals and insurers, making standards of care and compliance easier to assess. Outcomes-based approaches may offer greater professional flexibility, but create variability in practice, influencing how AI use is controlled from day to day.
For insurers, this diversity means that AI-related exposures differ across professions, requiring analysis at an organisational level.
Despite these differences, a consistent theme emerges. Accountability for AI-generated material rests firmly with the professional. Every regulatory body emphasises the need for human oversight, careful handling of data, compliance with legal obligations and assurance that outputs used in professional work are fit for purpose. While AI introduces risks, current indications suggest that none of the professional bodies are inclined to impose separate standards or otherwise restrict its use, maintaining existing expectations of care and compliance.
If you have any questions regarding the information discussed in this article, please contact David McArdle and Anna Benz.
Further reading
AI Playbook for the UK Government | February 2025
RICS Responsible Use of Artificial Intelligence in Surveying Practice | September 2025
Artificial Intelligence in the natural and built environment sector | RICS
RIBA Artificial Intelligence Report 2025
What is the direction of travel for AI within architecture in 2025 and beyond? | RIBA
Report looks at pros and cons of AI in law firms | SRA