Establishing a Unique Risk Class for AI: Insights from Lockton

The rapid adoption of artificial intelligence (AI) across various industries is reshaping the commercial risk landscape. A recent report from Lockton Re, in collaboration with Lockton International and Armilla AI, suggests that it may be time to classify AI as its own distinct risk category.
“No sector of the economy is insulated from the potential impact of AI. As an industry, we must prepare for how these evolving risks are underwritten in commercial insurance and anticipate emerging claims patterns,” stated Oliver Brew, co-author of the report and head of the Cyber Centre of Excellence at Lockton Re.
Baiju Devani, co-author of the study and CTO & Cofounder of Armilla AI, emphasized that “the underwriting of AI risk must account for the novel perils it creates,” pointing out a growing disconnect between what insurers intend to cover and what they actually cover.
The report outlines AI-related exposures across key commercial classes, highlighting areas where coverage may be inadequate, fragmented, or misaligned.
Cyber
AI is increasingly being used to enhance cyber attacks, employing sophisticated phishing tactics and deepfake technology. Some cyber insurers are beginning to explicitly cover specific AI risks linked to traditional cyber events, such as data breaches or ransomware attacks affecting AI infrastructure. This shift indicates a move toward limited named-peril protection for potential cybersecurity harms arising from AI tools.
Additionally, new endorsements are emerging to address operational AI risks, including unauthorized access to large language model (LLM) environments and reimbursement for model redevelopment costs following incidents.
Errors and Omissions (E&O)
AI models fundamentally alter the technology professional liability risk landscape. Traditional E&O policies were designed for deterministic software failures, such as bugs and outages. AI's probabilistic behavior makes losses harder to predict and introduces new claim scenarios that insurers must address.
The report notes a trend toward selective endorsements, citing algorithmic decision errors and data-training issues as examples of named causes of loss. Some policies now include clauses addressing “AI services wrongful acts” and “AI products wrongful acts,” extending coverage to products developed using AI technology.
However, many of these endorsements may be too narrow, leaving gaps when incidents fall outside defined perils. This type of insurance primarily benefits developers of AI solutions rather than the companies utilizing AI models.
Casualty
Currently, commercial general liability (CGL) insurers do not model or underwrite AI risks, creating a gap between the coverage they intend to provide and what policy language actually delivers. The report emphasizes the need for specific exclusions, similar to those used for cyber perils.
A key factor in interpreting these clauses is the definition of AI, especially as advanced generative AI can produce synthetic content rather than merely interpreting inputs.
Directors and Officers (D&O)
As organizations integrate AI into their long-term strategies, D&O exposure is increasing, particularly regarding governance oversight and misrepresentation.
- Governance issues arise from allegations that boards have failed to identify, mitigate, or disclose material AI risks, such as model bias and vendor dependence.
- Misrepresentation occurs when organizations exaggerate the pace of AI development to attract investment or boost share prices, a phenomenon sometimes referred to as “AI washing.”
D&O policies still rely on traditional definitions of wrongful acts, which do not guarantee coverage for AI-specific failures. Standard exclusions for conduct and intentional acts also apply.
Employment Practices Liability (EPL)
The growing use of AI in hiring processes raises the risk of bias and discrimination, as AI models may be trained on biased data. Many policies remain silent on AI usage and may limit coverage to “insured persons” or “natural persons,” potentially leaving claims arising from AI-generated outputs outside the scope of cover.
Affirmative Coverage
A new category of insurance is emerging to address ambiguities and gaps in traditional commercial insurance, particularly where probabilistic model behavior is assessed through legacy negligence constructs. Affirmative coverage typically operates on an “all-risk” basis and is designed to cover liabilities arising from AI model errors, including scenarios that do not involve a cyber event.
One approach involves assessing the “target model metric” during underwriting. Each model is evaluated based on industry context, output, underlying foundation model, and use case, allowing for tailored pricing and clearer coverage intent.
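The report does not publish the mechanics behind the “target model metric,” but the basic idea of grading a model against a performance expectation set by its context can be illustrated with a short sketch. The field names, the 10% loading per point of shortfall, and the example values below are purely hypothetical and are not drawn from Lockton Re's or Armilla AI's methodology.

```python
from dataclasses import dataclass

@dataclass
class ModelProfile:
    """Hypothetical underwriting profile for a single AI model."""
    industry: str             # e.g. "healthcare", "retail"
    use_case: str             # e.g. "claims triage", "customer chatbot"
    foundation_model: str     # upstream model the system is built on
    observed_accuracy: float  # measured performance on the insured's task
    target_metric: float      # performance expected for this context

def assess_model(profile: ModelProfile) -> dict:
    """Compare observed performance with the context-specific target.

    A shortfall against the target could feed into pricing or coverage
    intent; the loading factor here is an arbitrary illustration, not an
    actual rating rule.
    """
    shortfall = max(0.0, profile.target_metric - profile.observed_accuracy)
    premium_loading = 1.0 + shortfall * 10  # purely illustrative
    return {
        "foundation_model": profile.foundation_model,
        "use_case": profile.use_case,
        "shortfall": round(shortfall, 3),
        "premium_loading": round(premium_loading, 2),
    }

if __name__ == "__main__":
    example = ModelProfile(
        industry="insurance",
        use_case="claims triage",
        foundation_model="generic-llm-v1",  # hypothetical name
        observed_accuracy=0.88,
        target_metric=0.95,
    )
    print(assess_model(example))
```

In this sketch, the same model would price differently in different contexts simply because the target metric changes with the industry and use case, which is the tailoring the report describes.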
Systemic Risk
The report also examines the potential for systemic risk due to shared AI infrastructure and common foundation models. “The challenge for the insurance industry is not whether AI will create systemic risk events, but when, and if underwriting practices can keep pace,” noted Devani.
Traditional systemic controls are less effective with AI. When a widely deployed model contains flawed training data, failures can occur simultaneously across multiple organizations, regardless of their individual risk management practices.
Effective underwriting of AI risk requires a fundamentally different approach from traditional commercial lines. Underwriters must evaluate not only individual policyholders' risk management practices but also portfolio-level exposure concentration arising from shared model dependencies and architectural vulnerabilities.
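To make the portfolio-level point concrete, the following is a minimal sketch of how an underwriter might surface concentration in shared foundation models across a book of business. The policyholder names, limits, and the 25% concentration flag are invented for illustration and do not come from the report.

```python
from collections import defaultdict

# Hypothetical book of business: (policyholder, foundation model relied on, limit in $m)
policies = [
    ("Acme Logistics", "foundation-model-A", 10),
    ("Beta Health", "foundation-model-A", 25),
    ("Cardinal Retail", "foundation-model-B", 15),
    ("Delta Finance", "foundation-model-A", 30),
]

def concentration_by_model(book, flag_share=0.25):
    """Aggregate exposed limits by shared foundation model and flag any
    model whose share of total limits exceeds ``flag_share``.
    The threshold is an arbitrary illustration, not an underwriting rule.
    """
    totals = defaultdict(float)
    for _, model, limit in book:
        totals[model] += limit
    grand_total = sum(totals.values())
    return {
        model: {
            "limit": limit,
            "share": round(limit / grand_total, 2),
            "flagged": limit / grand_total > flag_share,
        }
        for model, limit in totals.items()
    }

if __name__ == "__main__":
    for model, stats in concentration_by_model(policies).items():
        print(model, stats)
```

Even this toy view shows how several otherwise unrelated insureds can fail together if a single upstream model they all depend on is flawed, which is the correlated-loss problem the report highlights.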
