The Insurance Industry Is Quietly Walking Away From AI Risk
- James Garner
Major US insurance companies have made a decision that should terrify anyone deploying AI in their organisation. AIG, Great American Insurance Group, and WR Berkley have formally asked state regulators for permission to exclude AI-related liabilities from many of their commercial insurance policies.
The implication is stark: the industry that prices and manages risk for a living has concluded that AI risk is too unpredictable, too opaque, and too complex to price using traditional underwriting. They're essentially saying, "This category of risk is outside our capacity to manage." For companies building AI systems, betting on AI tools, or embedding AI into operations, this represents a fundamental shift in the safety net you thought you had.

Why Insurers Are Running Away
The horror stories are piling up. Google AI Overview generated $110 million in false legal advice before anyone noticed. Air Canada's chatbot created a bereavement discount that didn't exist, and the tribunal ruled the airline couldn't pretend the chatbot was a separate legal entity responsible for its own mistakes. Arup, a global engineering firm, lost $25 million to a deepfake fraud that used AI-generated video and audio to impersonate senior executives.
These aren't edge cases. They're working examples of how AI systems fail in production, and they share a common characteristic: traditional underwriting models have no framework for predicting or pricing them.
Dr Simone Krummaker, associate professor of insurance at Bayes Business School, told the All-Party Parliamentary Group: "The widening protection gap, including here in the UK, is the test that matters. If AI cannot help more people and businesses secure affordable, appropriate cover, then the sector has missed the point. Used well, AI gives us sharper sight of risk, helps prevent losses before they happen, and directs capital towards resilience. But it only works if it is governed well, with fairness designed in and demonstrated in real outcomes."
Insurance policies typically respond to risk categories that are predictable: fire, theft, liability from negligence, fraud from known fraud patterns. AI introduces a different category entirely. The risk isn't predictable because the failure modes are novel. The risk isn't easily bounded because a single faulty model can affect thousands of customers or firms simultaneously. The risk isn't transparent because AI systems often operate as black boxes; even the engineers who built them can't explain precisely why they generated a specific output.
Underwriting requires predicting future losses based on historical patterns. AI introduces a category where historical patterns don't exist yet. Insurers literally cannot price something they can't model.
The Governance Gap: The Real Insight
Here's what's actually happening beneath the surface. The insurance industry hasn't concluded that AI is too dangerous to insure. They've concluded that AI is too opaque to insure without governance frameworks that place responsibility on the companies deploying the technology.
This is the crucial insight, and one worth recognising early: the insurance exclusions aren't a technology problem. They're a governance problem. Insurance companies can tolerate AI risk if the organisations deploying AI have proper oversight. What they can't accept is organisations deploying AI blindly, without governance, expecting insurance to cover whatever happens.
This places heavy responsibility on senior management and boards to oversee AI systems. These aren't optional best practices. They're becoming regulatory requirements in 21+ states that have adopted the NAIC Model Bulletin.
The framework is explicit: if your organisation uses AI, your board must have documented oversight. You must test for bias. You must maintain audit trails. You must understand how the model makes decisions. You must have human oversight in place. You must be able to explain adverse choices to consumers.
Insurance companies aren't excluding AI risk because they believe AI cannot be deployed responsibly. They're excluding it because they can only insure organisations that have the governance infrastructure to manage that risk themselves.
What This Means for Your Organisation
If you're deploying AI in customer-facing contexts, in decision-making systems, or in any scenario where the model's output could cause financial or reputational harm, your traditional commercial insurance may no longer cover that risk.
That's not hypothetical. WR Berkley has proposed an "absolute AI exclusion" across directors and officers (D&O) policies, errors and omissions (E&O) coverage, and fiduciary liability products. The exclusion language is expansive: it removes coverage for AI-generated content, failure to detect AI-created materials, poor oversight of AI systems, AI-driven operational errors, and regulatory investigations involving AI technologies.
For many businesses, this eliminates coverage for customer service chatbots, automated decision systems, generative content tools, and algorithmic compliance systems. The exact places where many organisations have already deployed AI.
The logical question follows: if your traditional insurance won't cover AI-related losses, what does? The answer is: specialist AI liability insurance, if you can find it. Or, in most cases, nothing.
The Actual Risk
The insurance industry's concerns aren't baseless caution. They're grounded in genuine risk asymmetry. A company deploying a recommendation algorithm affects thousands of customers. If the algorithm has a hidden bias or a failure mode, it can trigger losses across all of them simultaneously. Traditional insurance models assume losses are uncorrelated—your fire doesn't spread to my property. But AI losses are correlated. A faulty model affects everyone using it.
An underwriter at a major insurer described it this way: "Insurers can handle a $400 million loss to one company. What they can't handle is an agentic AI mishap that triggers 10,000 losses at once."
That's the actual fear: not that AI will cause individual failures, but that a single AI failure can cascade across many customers or many organisations using the same model. The systemic risk is the genuine concern.
What Governance Actually Requires
For organisations serious about deploying AI responsibly and maintaining insurance protection, the NAIC framework provides a roadmap. This isn't a checklist to tick off; it's genuine operational discipline:
- Board accountability: your board must actively oversee AI deployments, not delegate them entirely to the technology team. Clear escalation procedures for AI-related issues must be in place, and the board must be informed when issues occur.
- Cross-functional involvement: your compliance, legal, actuarial, data science, and operations teams must work together on AI governance.
- Transparency: you must document how models work, what data they use, and how they make decisions.
- Bias testing: you must regularly test for unfair outcomes and maintain audit trails showing these tests.
- Human oversight: critical decisions cannot be purely automated; human review must happen before adverse outcomes affect customers.
- Documentation: you must maintain comprehensive records of all these governance activities.
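To make the bias-testing and audit-trail requirements concrete, here is a minimal sketch of what an automated fairness check with a logged audit record might look like. All names here (`check_demographic_parity`, the group labels, the 10% threshold) are illustrative assumptions, not terms from the NAIC bulletin or any specific regulatory framework.

```python
# Illustrative sketch: a demographic-parity check that emits an audit record.
# The function names, threshold, and data are hypothetical examples.
import json
from datetime import datetime, timezone

def approval_rate(decisions):
    """Fraction of favourable ('approve') outcomes in a list of decisions."""
    return sum(1 for d in decisions if d == "approve") / len(decisions)

def check_demographic_parity(decisions_by_group, max_gap=0.10):
    """Flag when approval rates across groups differ by more than max_gap,
    and return a timestamped record suitable for an append-only audit log."""
    rates = {g: approval_rate(d) for g, d in decisions_by_group.items()}
    gap = max(rates.values()) - min(rates.values())
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "test": "demographic_parity",
        "rates": rates,
        "gap": round(gap, 4),
        "passed": gap <= max_gap,
    }

# Hypothetical model decisions for two applicant groups.
decisions = {
    "group_a": ["approve", "approve", "deny", "approve"],
    "group_b": ["approve", "deny", "deny", "deny"],
}
audit_record = check_demographic_parity(decisions)
print(json.dumps(audit_record, indent=2))  # in practice, append to an audit log
```

Running a check like this on a schedule, and retaining every record it produces, is the kind of demonstrable evidence of "regular bias testing with audit trails" that an underwriter or regulator could actually inspect.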
If this sounds like expensive, time-consuming work, it is. But it's also the price of insurance coverage in the post-AI risk era. Companies that do this work will find insurance available. Companies that don't will either find their policies exclude AI entirely, or discover too late that coverage they thought they had doesn't actually apply to their AI-related losses.
The Industry Response
Specialist insurers are beginning to emerge. Lloyd's of London syndicates now offer AI-specific liability coverage. Google recently partnered with Beazley and Chubb to provide AI-related insurance products. These policies explicitly address AI perils like model hallucinations, degrading performance, and algorithmic failures.
But these specialist products are expensive. They require detailed audits of your AI systems. They come with strict requirements about governance, testing, and human oversight. They're not a casual add-on to your existing insurance programme; they recognise that you're operating in a novel risk category that requires novel insurance structures.
The Strategic Question
For organisations deploying AI right now, the strategic question is unavoidable: Do you have the governance infrastructure in place to satisfy insurance underwriters that your AI systems won't cause unmanageable correlated losses?
If the answer is no, you have two choices. Build that governance infrastructure before deploying AI more broadly, or accept that you're deploying AI systems without insurance coverage. Neither is comfortable.
But this is precisely the scenario project delivery professionals need to understand, and we are highlighting it before it becomes obvious. Your consultants aren't telling you about it. Your software vendors aren't highlighting it. Your existing insurance broker probably doesn't have expertise in AI-specific coverage. So you discover it too late, after you've already deployed systems that turn out to be uninsured.
The insurers have spoken: AI governance isn't optional. It's the price of deploying AI at scale. The only question is whether you pay that price deliberately, through building proper governance infrastructure, or accidentally, by deploying systems that turn out to be uninsured when something goes wrong.
If your organisation wants to stay deploy-ready in an era where insurers are quietly walking away from AI risk, you need to stay ahead of the governance curve. Project Flux delivers strategic insights, regulatory intelligence, and practical frameworks that help you build AI systems that insurers will still be willing to underwrite. Don't wait for exclusions to hit your policies. Prepare deliberately, and subscribe to Project Flux.