AI Risk in Insurance: How Do Underwriters Account for the Unknown?
- Institute for Homeland Security, Sam Houston State University
By: Dr. Shannon Lane

Artificial intelligence (AI) is rapidly being integrated into organizational decision-making across nearly every critical infrastructure sector, from finance and healthcare to transportation, energy, and maritime operations. As organizations adopt AI-driven tools for efficiency, prediction, and automation, insurers face a fundamental challenge: how to underwrite a risk that is dynamic, opaque, and only partially observable.
This research brief begins with a deceptively simple question—AI risk in insurance: how do underwriters account for this unknown?—and demonstrates that the answer is not found in attempting to model AI itself. Instead, underwriting practice reveals a quieter but more consequential shift: insurers are converting AI uncertainty into governance, control, and organizational accountability criteria.
Rather than insuring algorithms, underwriters are increasingly insuring how organizations manage uncertainty. This brief synthesizes current industry practice, regulatory guidance, and emerging scholarship to explain how AI risk is being operationalized within insurance underwriting, and why this matters for critical infrastructure resilience, security culture, and systemic risk management.
AI as an “Uninsurable” Risk—and Why It Is Still Being Underwritten
From an actuarial standpoint, AI poses several classical problems. It lacks stable loss history, evolves continuously through retraining and updates, and introduces correlated, cross-sector exposures due to shared models, platforms, and vendors. Traditional actuarial approaches rely on historical frequency and severity; AI offers neither in a reliable form (Swiss Re, 2023).
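To see why the missing loss history matters, consider the classical frequency-severity relationship underwriters rely on: expected loss is the product of claim frequency and claim severity, marked up by a risk load. The sketch below is purely illustrative; every figure in it is hypothetical, and the point is only that without stable data the indicated premium becomes a wide interval rather than a number.

```python
# Illustrative frequency-severity pricing; all figures are hypothetical.
expected_frequency = 0.05       # claims per policy-year, from loss history
expected_severity = 250_000.0   # average cost per claim, from loss history
risk_load = 0.30                # margin for volatility and expenses

expected_loss = expected_frequency * expected_severity
premium = expected_loss * (1 + risk_load)
print(f"Expected loss ${expected_loss:,.0f}; indicated premium ${premium:,.0f}")

# Without a stable loss history, frequency and severity are ranges rather
# than points, and the "indicated" premium spans orders of magnitude.
for freq in (0.01, 0.05, 0.20):
    for sev in (50_000.0, 250_000.0, 1_000_000.0):
        print(f"freq={freq:.2f} sev=${sev:>11,.0f} -> "
              f"premium ${freq * sev * (1 + risk_load):,.0f}")
```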
Yet AI is not being excluded wholesale from insurance markets. Instead, insurers have embedded AI-related risks into existing lines of coverage, including cyber insurance, professional liability (E&O), directors and officers (D&O) liability, and product liability (Chester et al., 2022). This absorption strategy allows insurers to apply familiar underwriting logic while adapting assessment criteria to new triggers of loss.
In practice, this means AI is rarely treated as a standalone peril. Losses arising from algorithmic bias, automated decision errors, or AI-enabled cyber incidents are framed as failures of governance, disclosure, or controls—domains insurers already understand. As a result, the “unknown” of AI is translated into organizational behavior that can be evaluated ex ante.
What Underwriters Are Actually Evaluating
Governance Over Capability
Across underwriting questionnaires and risk assessments, the central concern is no longer whether an organization uses AI, but how that use is governed. Empirical reviews of cyber and professional liability underwriting show increasing emphasis on board oversight, internal approval processes, and documented accountability for AI-enabled decisions (Marsh McLennan, 2024).
Key indicators include:
Formal AI governance structures and policies
Clear lines of decision authority and escalation
Human-in-the-loop requirements for high-impact decisions
Documentation and auditability of AI outputs
From an underwriting perspective, these indicators function as proxies for organizational maturity. Firms with disciplined governance structures are viewed as better able to absorb, detect, and recover from unexpected AI failures, even if those failures cannot be predicted in advance.
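A minimal sketch of how such indicators might be reduced to a single maturity proxy during underwriting is shown below; the fields and weights are hypothetical and are not drawn from any actual carrier's questionnaire.

```python
# Hypothetical governance-maturity proxy; fields and weights are illustrative.
GOVERNANCE_WEIGHTS = {
    "formal_ai_policy": 0.30,     # formal AI governance structures and policies
    "decision_authority": 0.25,   # clear lines of authority and escalation
    "human_in_the_loop": 0.25,    # human review of high-impact decisions
    "auditable_outputs": 0.20,    # documentation and auditability of outputs
}

def governance_score(responses: dict) -> float:
    """Weighted sum of yes/no questionnaire answers, in [0, 1]."""
    return sum(w for key, w in GOVERNANCE_WEIGHTS.items() if responses.get(key))

applicant = {"formal_ai_policy": True, "decision_authority": True,
             "human_in_the_loop": False, "auditable_outputs": True}
print(f"Governance maturity proxy: {governance_score(applicant):.2f}")  # 0.75
```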
Control, Monitoring, and the Assumption of Failure
A notable shift in underwriting logic is the assumption that AI failure is inevitable. Rather than asking how risk can be eliminated, insurers ask how quickly it can be detected and contained. This mirrors earlier developments in cyber insurance, where breach prevention gave way to breach response as the central underwriting concern (Woods & Simpson, 2017).
Underwriters now evaluate:
Continuous monitoring of AI systems
Logging and explainability mechanisms
Incident response plans specific to AI-enabled failures
The existence of “kill switches” or manual overrides
These criteria align closely with resilience-oriented frameworks in critical infrastructure protection, where rapid detection and coordinated response are more realistic than absolute prevention.
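A schematic sketch of those control patterns in code: an auditable log trail, a human-in-the-loop gate for high-impact decisions, and a manual kill switch. All names and thresholds are hypothetical, standing in for whatever a given insured actually operates.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai_controls")

AI_ENABLED = True        # the manual "kill switch": set False to fall back
IMPACT_THRESHOLD = 0.8   # decisions scoring above this go to a human

def model_predict(case: dict) -> float:
    """Stand-in for a real model; returns an impact score in [0, 1]."""
    return case.get("score", 0.0)

def decide(case: dict) -> str:
    if not AI_ENABLED:
        log.info("AI disabled; case %s routed to manual process", case["id"])
        return "manual_review"
    score = model_predict(case)
    log.info("case %s scored %.2f", case["id"], score)  # auditable trail
    if score >= IMPACT_THRESHOLD:
        return "human_review"  # human-in-the-loop for high-impact decisions
    return "auto_approve"

print(decide({"id": "C-1001", "score": 0.91}))  # human_review
```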
Aggregation and Systemic Exposure
AI introduces a form of correlated risk that is particularly concerning to reinsurers. The widespread use of common foundation models, shared datasets, and centralized cloud infrastructure means a single vulnerability can generate losses across multiple insureds simultaneously (OECD, 2023).
Large insurance markets such as Lloyd’s of London have publicly acknowledged concerns about systemic AI exposure, responding through portfolio-level controls rather than firm-level exclusions. These include sub-limits, tighter definitions of covered events, and informal caps on exposure concentration by sector or technology type.
This approach reflects an emerging recognition that AI risk is not merely organizational but systemic, particularly in sectors with high interdependence and cascading failure potential.
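As a toy illustration of such portfolio-level controls, consider a concentration check that flags when the exposure tied to one shared foundation model exceeds an internal cap. The vendors, limits, and the 25 percent cap below are all hypothetical.

```python
from collections import defaultdict

CONCENTRATION_CAP = 0.25  # max share of total limits tied to one shared model

policies = [
    {"insured": "Port A",    "limit": 10_000_000, "foundation_model": "vendor_x"},
    {"insured": "Utility B", "limit": 20_000_000, "foundation_model": "vendor_x"},
    {"insured": "Rail C",    "limit": 15_000_000, "foundation_model": "vendor_y"},
    {"insured": "Bank D",    "limit": 5_000_000,  "foundation_model": "vendor_z"},
]

# Aggregate limits by shared technology to surface correlated exposure.
exposure = defaultdict(int)
for policy in policies:
    exposure[policy["foundation_model"]] += policy["limit"]

total = sum(exposure.values())
for model, amount in exposure.items():
    share = amount / total
    flag = "  <-- exceeds cap" if share > CONCENTRATION_CAP else ""
    print(f"{model}: ${amount:,} ({share:.0%} of portfolio){flag}")
```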
Policy Language as a Tool for Managing Uncertainty
When risks cannot be fully priced, they are often managed through contract language. One of the most significant developments in AI insurance is the emergence of “silent AI” risk—losses arising from AI that fall ambiguously within or outside existing policy definitions (Cambridge Centre for Risk Studies, 2022).
Insurers are increasingly relying on:
Narrow or flexible definitions of covered technology-related losses
Conditional coverage tied to disclosure and governance representations
Exclusions triggered by failure to follow stated internal controls
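A hypothetical sketch of how these contractual levers combine at claim time: coverage applies only if the loss falls within the covered definition, the insured's disclosures were accurate, and its stated controls were actually followed. Field names are illustrative.

```python
def coverage_applies(claim: dict) -> bool:
    """True only if no definition, condition, or exclusion defeats coverage."""
    if not claim["within_covered_definition"]:
        return False  # narrow definition of covered technology-related losses
    if not claim["disclosures_accurate"]:
        return False  # coverage conditioned on governance representations
    if not claim["controls_followed"]:
        return False  # exclusion for failure to follow stated internal controls
    return True

claim = {"within_covered_definition": True,
         "disclosures_accurate": True,
         "controls_followed": False}  # e.g., a required human review was skipped
print("Covered" if coverage_applies(claim) else "Denied")  # Denied
```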
Regulators, including the National Association of Insurance Commissioners, have expressed concern that silent AI risk may mask systemic exposure across insurance markets. Nonetheless, ambiguity remains a deliberate strategy, allowing insurers to retain flexibility as legal and technological standards evolve.
Insurance as a De Facto Regulator of AI Behavior
Taken together, these practices suggest that insurance underwriting is functioning as an informal regulatory mechanism for AI adoption. By tying premiums, coverage limits, and exclusions to governance quality, insurers are incentivizing organizations to adopt practices aligned with emerging norms of accountability and transparency—often ahead of formal regulation.
This dynamic is particularly relevant for critical infrastructure sectors, where regulatory regimes are fragmented and uneven. Insurance requirements can impose a baseline expectation of risk management discipline across diverse operators, including those not directly subject to AI-specific regulation.
Importantly, this mirrors earlier developments in safety and security culture research. Just as insurers historically shaped industrial safety practices through loss control standards, they are now shaping AI governance through underwriting criteria (Hale & Hovden, 1998).
Implications for Research and Practice
The central insight emerging from this review is that AI risk is being underwritten as a social and organizational phenomenon, not a purely technical one. This aligns closely with sociological perspectives on risk, which emphasize that uncertainty is managed through institutions, norms, and shared expectations rather than prediction alone.
For researchers, this opens several avenues:
Empirical study of how underwriting criteria influence organizational AI governance
Comparative analysis across critical infrastructure sectors
Longitudinal assessment of whether insurance-driven controls improve resilience and recovery outcomes
For practitioners, particularly in ports, utilities, and transportation systems, the implication is clear: AI readiness is now inseparable from insurability. Governance artifacts, decision processes, and security culture are no longer internal matters; they are increasingly externalized through insurance markets.
Conclusion
Underwriters do not claim to understand AI risk in a predictive sense. Instead, they have reframed the problem by focusing on what can be evaluated: governance, controls, accountability, and organizational capacity to respond to failure.
In doing so, insurance markets are quietly redefining how AI risk is measured, priced, and disciplined—well before comprehensive regulatory frameworks are in place. For critical infrastructure protection, this shift represents both a constraint and an opportunity: a constraint on reckless adoption, and an opportunity to embed resilience and accountability into the fabric of emerging AI-enabled systems.
References:
Cambridge Centre for Risk Studies. (2022). Silent cyber and emerging technology risks. University of Cambridge.
Chester, A., Kahn, J., & O’Brien, P. (2022). Artificial intelligence and liability insurance: Emerging underwriting considerations. Journal of Risk Management in Insurance, 89(3), 211–229.
Hale, A., & Hovden, J. (1998). Management and culture: The third age of safety. Safety Science, 29(2), 129–165.
Marsh McLennan. (2024). AI risk management: Implications for insurance and governance.
OECD. (2023). Systemic risks and artificial intelligence. Organisation for Economic Co-operation and Development.
Swiss Re. (2023). Artificial intelligence risk landscape. Swiss Re Institute.
Woods, D. D., & Simpson, S. N. (2017). Beyond prevention: The value of resilience in safety management. Reliability Engineering & System Safety, 165, 284–290.