AI in insurance: rethinking liability, coverage and risk allocation

April 2026
Sam Zaozirny and David McArdle

An introductory note to a series of articles looking at how the market will respond to the evolving risks and opportunities of AI.

Insurers, and the organisations they insure, are increasingly relying on AI‑enabled systems as part of everyday processes. As adoption grows, insurers are being forced to reconsider coverage, exclusions and how AI‑related risks should be addressed.

Where does the risk lie?

AI tools now sit behind many day‑to‑day processes. As businesses rely more heavily on these systems, new questions emerge as to who is responsible when an AI‑based decision turns out to be wrong.

At present, the focus is on the human-in-the-loop use of AI as a tool. The ‘AI’ cannot itself be held liable as it is not a legal person, and it cannot owe a duty of care. From a risk perspective, it is treated like any other tool or technology an organisation might use. If something goes wrong, responsibility sits with the people and businesses who created, selected, relied on and supervised the system and were best placed to avoid the harm being caused.

That framing works while AI is used in limited, supervised ways. It becomes less clear as AI use scales, becomes embedded in core services, and increasingly relies on external providers operating beyond the user’s control. What happens when there is a systemic failure in the technology that underpins the AI tool, and who will be responsible for the losses that inevitably follow?

AI as a product or service

The position becomes more complex when AI is treated not simply as a tool, but as a product or service supplied to third parties. There is no settled view on this classification, and different jurisdictions are adopting different approaches.

This distinction matters. The risk profile, the allocation of responsibility and the availability of insurance solutions can look very different depending on whether AI is characterised as a product, a service, or something that sits between the two. This classification is likely to shape how AI‑related losses are absorbed, and by whom.

New products responding to AI

New types of cover are beginning to appear. Parametric, telematics and behaviour‑based products now use continuous sensor data and AI‑enabled decision engines to trigger payouts automatically and personalise pricing. These models rely less on historical losses and more on live, real‑time risk signals.

At the same time, these innovations introduce new aggregation and dependency risks. A single model flaw, service outage or data issue can potentially affect thousands of insureds at once. These systemic characteristics may look familiar to insurers who lived through the early development of cyber risk.

Silent AI

For now, many AI‑related losses are still being picked up under traditional policies simply because the wording does not explicitly exclude them. That position is changing. Some insurers are introducing AI‑related exclusions, including broad “blanket exclusions” that remove cover for losses connected to AI use. Others are doing the opposite, developing AI‑specific products designed to address algorithm failures or AI‑related professional errors. Hiscox, for example, was the first UK insurer to offer affirmative AI cover through a rewrite of its Technical PI policy in 2025. These solutions are still early‑stage, but they signal a market preparing for AI‑driven claims.

What’s on the horizon?

We expect insurers to move from broad exclusions toward more structured and consistent approaches to covering AI‑related risks as those risks become better understood. Traditional policies struggle to handle issues such as hallucinated outputs, biased model behaviour and contaminated training data. These risks do not sit neatly within cyber, professional indemnity or product liability frameworks, particularly when they arise from systemic technology failures rather than individual human error.

AI risks are also becoming harder to assess because the technology changes quickly, is often opaque in how it reaches decisions, and can cause large‑scale losses when it goes wrong. These characteristics make AI‑related losses harder to predict, contain and insure within existing policy structures.

Conclusion

AI is introducing risk in ways that existing insurance structures are not yet designed to accommodate. Dependence on a small number of technology providers, the scaling of human judgement through automated systems, and uncertainty over whether AI should be treated as a product or a service all complicate traditional approaches to liability and coverage. How insurers respond, through exclusions, limits, aggregation controls or dedicated products, will determine whether AI risk is absorbed, transferred or left with insureds. The next phase of market response will turn on whether AI risk can be meaningfully accommodated within existing insurance frameworks, or whether new approaches are required.