The EU AI Act: Consequences for Insurers from Europe’s Lead in Regulating AI
April 2024

Introduction
As the pace of the digital transformation intensifies, the insurance industry increasingly relies on artificial intelligence (AI) to enhance operational efficiency, tailor pricing models, and combat fraud. The recent ratification of the EU AI Act by the European Parliament heralds a significant overhaul of the regulatory environment governing AI use, both within the European Union and potentially globally. This development promises profound changes in the operational frameworks of insurers, aligning them with new legal and ethical standards.
Background to the AI Act
First proposed by the European Commission in April 2021, the AI Act is the first comprehensive attempt to regulate AI technologies in legislation and is intended to establish a uniform regulatory framework across EU member states. Drafted as a regulation, the AI Act will have direct effect in every member state, removing the need for separate national legislation and ensuring a harmonised approach to AI regulation. The Act aims to address the risks AI poses to fundamental rights and to foster responsible innovation within a structured legal framework, without unduly stifling that innovation.
Use of AI in Insurance
The integration of AI within the insurance sector has dramatically refined risk assessment and fraud detection processes. AI’s ability to analyse extensive datasets has enhanced insurers’ capacity to evaluate risks and set premiums accurately, fostering more personalised policy offerings. A common example of such systems is the telematics “black box” device offered by certain motor insurers. More broadly, advanced AI tools, including generative AI models (such as ChatGPT), are increasingly being employed by insurers to detect and analyse discrepancies indicative of fraudulent activity more efficiently than traditional methods.
Scope of the AI Act
The AI Act defines AI and categorises AI applications into four distinct risk categories:
- Unacceptable risk,
- High risk,
- Limited risk, and
- Minimal risk.
AI Defined for the First Time
The Act introduces a formal definition of AI, the first such attempt at defining AI in legislation. The AI Act broadly characterises such systems as machine-based systems that adapt over time and influence both digital and physical environments through data-driven outputs. This definition is intended to cover the full spectrum of AI technologies, from basic algorithms to complex machine learning models.
Analysis of Risk Categories
The AI Act bans the use of what are deemed unacceptable risk AI systems. These include systems that deploy techniques such as social scoring, or manipulative AI that could harm individuals or exploit vulnerable groups. In insurance, this would include AI systems that might unfairly discriminate against individuals based on opaque criteria.
High-risk AI systems will not be banned outright, but must instead adhere to high levels of transparency and robust governance (see below). High-risk AI systems include those that could significantly impact fundamental rights. For insurers, these might include algorithms that determine eligibility for policies or claims. Notably, insurance and financial services are specifically flagged in the AI Act as areas of concern for high-risk AI systems.
The AI Act sets certain restrictions and safeguards around the use of limited risk AI. General purpose AI models, capable of performing a broad range of tasks, must meet specific requirements if they pose systemic risks. Transparency about data and energy usage is also required.
For minimal risk AI, such as customer service bots, insurers will need to ensure that interactions are transparent and that decisions can be reviewed by users. Customers will also need to be informed when AI has been used.
Technical and Compliance Standards
For high-risk AI applications, the AI Act sets out stringent technical and compliance standards. These include detailed record-keeping, human oversight, and specific performance metrics. These measures are designed to ensure that high-risk AI systems are deployed in a manner that is transparent and accountable.
Governance and Enforcement
Compliance with the AI Act will be subject to oversight by national authorities, supported by the AI Office within the European Commission. The AI Act also establishes the European Artificial Intelligence Board (EAIB). Aimed at harmonising the enforcement of AI regulations across the EU, the EAIB will both advise the European Commission and facilitate the exchange of information and practices amongst national authorities.
Implementation Timelines
Following the European Parliament’s approval of the AI Act, the Council of the European Union is expected to endorse the Act shortly, with it becoming law upon its publication in the Official Journal of the European Union, anticipated around May or June 2024.
The overall timeline for the rollout of the AI Act is 24 months. However, compliance deadlines for certain AI uses vary from this timeline as follows:
- Unacceptable risk AI will be phased out within six months of the commencement of the regulation (i.e. likely by the end of 2024).
- High-risk AI must be compliant with the AI Act no later than 36 months from commencement.
- General purpose AI must meet governance standards within 12 months.
Penalties for Non-Compliance
Reflecting the seriousness the EU attaches to the AI Act, non-compliance will attract steep penalties of up to €35 million or 7% of global annual turnover, whichever is higher.
Conclusion
Like the GDPR before it, the EU’s AI Act aims to set a global precedent for AI governance, balancing innovation with strict regulatory oversight. Whether it meets these ambitious goals remains to be seen. In any event, with the AI Act’s imminent rollout, insurers must rigorously audit their AI tools to eliminate elements posing unacceptable risks. The Act’s extraterritorial reach also means that non-EU insurers operating within the EU market must align their practices with the new rules.
As insurers prepare for compliance, understanding these new regulations will be crucial. Beale & Co remain available to assist insurers through this transition, ensuring that they not only comply with the new rules but also identify any potential issues.