RICS sets the standard: responsible AI use becomes mandatory in surveying

October 2025
David McArdle and Anna Benz

RICS has published a global professional standard for the responsible use of artificial intelligence (AI) in surveying practice, effective 9 March 2026. The standard sets mandatory requirements and best practices to govern how AI is used across valuation, construction, infrastructure, and land surveying, with an emphasis on professional judgement, governance, transparency, data risk, and client communication. For firms in the built environment, compliance will mean updating policies, training staff, revising terms of engagement, ensuring oversight of AI outputs, and preparing for legal and reputational risks.

What has RICS launched and why does it matter?

On 10 September 2025, the Royal Institution of Chartered Surveyors (RICS) released its first global professional standard on the responsible use of AI in surveying.

The standard is born of the recognition that AI is increasingly embedded in surveying work, from automated data analysis to predictive modelling, and that the risks (bias, erroneous outputs, data privacy, lack of oversight) are substantive.

RICS seeks to strike a balance: supporting innovation while ensuring the surveyor’s expertise remains central, protecting clients, and maintaining public trust.

What standards does it set?

The RICS standard builds on the foundation laid by the UK Government’s AI Playbook (February 2025). The Playbook’s “safe and responsible” principles (defining purpose, ensuring human oversight, monitoring outcomes, and maintaining audit trails) are echoed in the RICS standard’s requirements. However, the RICS standard goes further, making many practices mandatory for surveying professionals and regulated firms and focusing more precisely on the surveying context: procurement, client communication, oversight, and explainability. See our recent article on the AI Playbook here.

Key requirements under the standard

  • Baseline knowledge:
    • Surveyors using AI must understand different types of AI, their limitations, potential failure modes, bias, and data risks.
  • Practice management:
    • Data governance: implement secure handling of private/confidential data; restricted access; anonymisation; consent for uploading private data.
    • System governance: assess whether AI is the right tool; maintain a written register of AI systems that materially affect service delivery; policy setting (roles, responsibilities, human oversight).
    • Risk management: maintain risk registers; regularly review and update; document likelihood, impact, and mitigation plans.
  • Using AI:
    • Procurement and due diligence: written requests for information from suppliers (data quality, environmental impact, stakeholder involvement, liability, etc.); record all information and assess risks.
    • Outputs, reliance and assurance: professional judgement must be applied; document assumptions and reliability concerns; if unreliable, inform clients in writing; for high volume/automated outputs, perform dip sampling.
    • Terms of engagement and client communication: clients must be told in writing when AI will be used and to what extent; contracts or engagement terms should allow for redress/opt out where possible and clarify indemnity cover.
    • Explainability: on request, provide written information about the type of AI used, its limitations, due diligence and risk management, and decisions about reliability.
  • Development of AI:
    • For firms developing their own AI systems: document application and risks; conduct sustainability impact assessments; involve diverse stakeholders; ensure legal/data compliance; obtain permission for using personal data; ensure quality of training data.

What are the implications for the built environment?

  • Operational and governance changes may be required: Firms will need to audit their existing AI tools, revise policies and procedures (data, procurement, oversight), and invest in staff training to ensure baseline competence.
  • Contractual and client communication revisions: Terms of engagement will need updating to disclose AI use, define liability/indemnity, and provide for opt‑outs and redress mechanisms. Clients will expect transparency.
  • Risk and liability exposure: Professional negligence claims could increasingly hinge on whether a firm complied with RICS’s standard; misleading or unreliable AI outputs could lead to reputational, regulatory, or legal consequences.
  • Regulatory and industry alignment: The standard complements existing and emerging regulation, including the UK Government’s AI Playbook and the EU AI Act.
  • Competitive advantage: Early compliance could become a differentiator in tenders, especially where clients demand high ethical/ESG and risk‑mitigation standards.

Conclusion

AI offers opportunities for operational and functional efficiencies in all industries. However, increased reliance on technology requires careful risk management. The standard sets out the minimum expectations for RICS members, but each member should carefully consider their AI strategy in order to minimise risks whilst harnessing the opportunities presented by AI.

For further guidance on AI regulation, policies and procedures, and compliance in the construction and insurance sectors, please contact Andrew Croft or James Hutchinson.

You can also read related articles, including ‘The EU AI Act: The Implications of the EU’s Artificial Intelligence Regulations for Construction’ here, for more insights on this evolving area.

Further Reading

RICS launches landmark global standard on responsible use of AI in surveying | RICS

Responsible use of artificial intelligence in surveying practice | RICS

Building with Intelligence (James Hutchinson’s talk, 11 Sep 2025) | Beale & Co
