AI’s Risks Too Great for Insurers, Say the Risk Experts

📅 Published: 11/23/2025
🔄 Updated: 11/23/2025, 7:50:24 PM
📊 12 updates
⏱️ 10 min read
📱 This article updates automatically every 10 minutes with breaking developments

As artificial intelligence (AI) technologies rapidly advance and become deeply embedded in the insurance industry, risk experts warn that the associated dangers are too great for insurers to underwrite with confidence, prompting a cautious retreat from AI-related coverage. The potential for catastrophic, multibillion-dollar claims arising from AI failures, unpredictable system behavior, and evolving regulatory demands is driving insurers to scale back or exclude AI from their policies, creating a complex challenge for the sector in 2025.

Leading voices in insurance risk management highlight that while AI promises transformative opportunities, the **systemic risks posed by generative AI and other AI-driven technologies have escalated sharply**. Scott Seaman, a partner at Hinshaw & Culbertson LLP, emphasizes this dual dynamic, noting that companies cannot ignore AI’s power, but insurers must carefully navigate how they use and cover AI due to heightened exposure to losses[1]. This is compounded by regulatory developments such as the New York State Department of Financial Services’ 2024 circular imposing substantial obligations on insurers deploying AI in underwriting and pricing, underscoring the mounting compliance complexity[1].

Financial Times reporting confirms that **major insurers are retreating from AI coverage** amid fears of large-scale claims triggered by AI-related failures or unintended consequences. The difficulty in accurately assessing and pricing these risks, combined with regulatory uncertainty, has caused insurers to hesitate, potentially limiting coverage availability for technology firms and AI end-users[2]. Such reluctance could slow AI adoption in critical sectors by increasing the cost and difficulty of obtaining adequate insurance protection.

A recent global risk forecast by Kennedys further cements AI’s dominant threat profile, ranking it as the **top risk for the insurance sector in 2025**, surpassing traditional concerns like sustainability and cyberattacks. The report highlights that AI-related losses are largely unknown and could lead to unintentional insurance coverage gaps if traditional policies fail to explicitly address AI risks. Tom Pelham, global head of cyber and data at Kennedys, stresses insurers must adapt swiftly to AI-driven change or risk obsolescence[3].

Meanwhile, AI’s integration into insurers’ own operations introduces new vulnerabilities, as revealed by a Cybernews report showing numerous AI-related security issues within S&P 500 firms, including insurers. Key concerns include data security, model reliability, intellectual property theft, and the risk of flawed AI outputs affecting underwriting and claims decisions. Experts urge strict safeguards such as access controls, output validation, encryption, and third-party risk assessments to mitigate these emerging threats[4].

Industry surveys and analysis indicate that while AI adoption is advancing—streamlining underwriting, claims processing, and fraud detection—the associated risks are transforming the insurance workforce and risk landscape. Cyber risks are both amplified and mitigated by AI, requiring insurers to balance innovation with robust governance and compliance frameworks[5][7][9].

In summary, the insurance industry faces a **seismic shift as AI emerges as its foremost risk**. The scale and unpredictability of AI-related dangers, combined with regulatory pressures and operational vulnerabilities, have led insurers to adopt a cautious stance, retreating from broad AI coverage to avoid potentially devastating financial exposures. This cautious approach, while protective for insurers, raises important questions about how the sector will support ongoing AI innovation and manage the complex risk environment of 2025 and beyond[1][2][3][4].

🔄 Updated: 11/23/2025, 6:00:47 PM
Risk experts are sounding the alarm as insurers face mounting pressure to address AI-related exposures, with Morningstar DBRS warning in July 2025 that unchecked AI adoption could significantly impact financial stability and credit ratings. Major carriers have begun excluding AI-driven risks from D&O and cyber policies, while a recent UK survey found that over 60% of claims adjusters now encounter AI-manipulated evidence in fraud cases. “The risks are no longer theoretical—they’re showing up in real claims and governance failures,” said one industry analyst, highlighting a surge in policy exclusions and tighter underwriting standards across the sector.
🔄 Updated: 11/23/2025, 6:10:30 PM
Risk experts warn that the **risks posed by AI to insurers are exceedingly complex and potentially uninsurable**, especially concerning catastrophic outcomes like AI failures in critical infrastructure, which could cause damage in the hundreds of billions of dollars[2]. Technically, AI systems' "black box" nature and unpredictable behavior introduce significant **uncertainty in liability and risk assessment**, challenging traditional underwriting models and transparency expectations[4]. The shift from human-driven to AI-driven liability—such as autonomous vehicles transferring risk from drivers to manufacturers—could shrink traditional insurance markets while expanding coverage needs in AI agent and cybersecurity risks, demanding insurers develop new quantitative models and regulatory frameworks to manage this evolving landscape[2][5].
🔄 Updated: 11/23/2025, 6:20:26 PM
Major U.S. insurers including AIG, Great American, and WR Berkley have petitioned regulators to allow them to exclude AI-related liabilities from corporate insurance policies, citing the technology as an uninsurable "black box" risk that could trigger systemic losses in the billions if widely deployed AI malfunctions occur[1]. In response, over half of U.S. states have adopted or are considering laws and regulations specifically targeting AI use in insurance, with Colorado leading by adopting formal algorithm regulation in 2021 to prevent discrimination in rate-setting[2][5]. The National Association of Insurance Commissioners (NAIC) has issued a Model AI Bulletin requiring insurers to implement transparent and fair AI governance programs, adopted by 24 states so far, with ongoing efforts
🔄 Updated: 11/23/2025, 6:30:34 PM
Major U.S. insurers, including AIG, Great American, and WR Berkley, are formally petitioning regulators to exclude AI-related liabilities from corporate policies, citing the technology’s unpredictable “black box” nature as an uninsurable risk, according to the Financial Times. The National Association of Insurance Commissioners (NAIC) responded by adopting a Model Bulletin in December 2023, setting guidelines for responsible AI use and requiring insurers to demonstrate compliance with all applicable laws during regulatory examinations. As of November 2024, over half of U.S. states have either enacted or are actively considering legislation to regulate AI in insurance, with Colorado leading the way by implementing the first formal regulation in 2021 (SB21
🔄 Updated: 11/23/2025, 6:40:28 PM
Consumer and public reaction to AI in insurance is marked by growing skepticism and declining confidence, with only **20% of Americans** supporting AI use in property and casualty (P&C) insurance, according to a 2025 Insurity survey. Millennials showed the steepest decline in positive sentiment, dropping from **41% in 2024 to 26% in 2025**, underscoring widespread concerns about AI’s transparency and reliability. Sylvester Mathis of Insurity emphasized that consumers demand "responsible and transparent" AI deployment to regain trust. This skepticism reflects broader fears about AI risks, including privacy, cybersecurity, and fairness, fueling public apprehension and challenging insurers to improve communication and accountability[1][11].
🔄 Updated: 11/23/2025, 6:50:26 PM
Major U.S. insurers, including AIG, Great American, and WR Berkley, are formally petitioning regulators to exclude AI-related liabilities from corporate policies, citing the technology’s “black box” nature as an uninsurable risk, according to Financial Times reporting. The National Association of Insurance Commissioners (NAIC) responded by adopting a Model Bulletin in December 2023, setting strict governance expectations and reminding insurers that AI-driven decisions must comply with all existing laws, while at least six states considered AI-specific insurance legislation in 2023 alone. “We can handle a $400 million loss to one company,” an Aon executive told the Financial Times, “but not an agentic AI mishap that triggers 1
🔄 Updated: 11/23/2025, 7:00:41 PM
Public concern is mounting as risk experts warn that AI’s rapid adoption in insurance poses unacceptable dangers for consumers. A recent survey found 68% of Americans are worried about AI-driven bias in claims decisions, while 54% fear their personal data could be misused—echoing statements from the Geneva Association that “AI-related risks are outpacing coverage, leaving individuals exposed.” “I don’t trust an algorithm to decide my family’s financial future,” said Maria Thompson, a policyholder in Denver, reflecting a growing sentiment that insurers must do more to protect the public.
🔄 Updated: 11/23/2025, 7:10:29 PM
Risk experts and industry leaders warn that the risks AI poses are currently too great for insurers to underwrite comprehensively, with potential multibillion-dollar claims and systemic failures cited as major concerns. Scott Seaman of Hinshaw & Culbertson LLP highlights that generative AI introduces catastrophic loss potential, prompting insurers to issue coverage endorsements that both grant and exclude AI-related risks[1]. Major carriers like AIG and WR Berkley seek regulatory approval to exclude AI liabilities due to the "black box" nature of AI models, fearing a single AI malfunction could trigger thousands of simultaneous claims beyond the industry's capacity to handle, as one Aon executive stated, "We can handle a $400 million loss to one company; what we can't handle is a
🔄 Updated: 11/23/2025, 7:20:30 PM
Risk experts and industry leaders warn that AI’s risks are too great for insurers to fully embrace coverage, citing potential multibillion-dollar claims and regulatory hurdles as key concerns. Scott Seaman, a partner at Hinshaw & Culbertson LLP, highlights catastrophic loss potential and increasing systemic risks from generative AI, noting insurers face growing obligations under regulatory frameworks like the NYSDFS circular on AI use in underwriting and pricing[1][4]. A report from Cybernews reveals 158 AI-related security issues linked specifically to financial services and insurance firms, with algorithmic bias and insecure AI outputs compounding risk management challenges[2].
🔄 Updated: 11/23/2025, 7:30:37 PM
Risk experts warn that AI poses **systemic risks too great for insurers to fully cover**, as evidenced by the global insurance sector ranking AI adoption as the top risk in 2025, surpassing cyber attacks and extreme weather events, according to a survey of 170 Kennedys partners worldwide[3]. International regulatory responses are emerging, such as the New York State Department of Financial Services' 2024 circular imposing significant obligations on insurers using AI for underwriting and pricing, highlighting growing scrutiny and need for robust governance globally[1]. Meanwhile, experts from Allianz and KPMG emphasize the dual challenge of AI-driven opportunities worth $1.1 trillion annually for insurers alongside escalating cyber threats and ethical concerns, prompting calls for balanced regulations analogous t
🔄 Updated: 11/23/2025, 7:40:27 PM
Consumer and public reaction to AI risks in insurance is marked by growing concern over coverage gaps and accountability. A recent survey revealed that 72% of S&P 500 companies discuss AI risks, yet many fear broad exclusions in policies could leave AI-related claims uninsured unless actively negotiated[1]. Public criticism intensifies around issues of safety, bias, and unclear AI decision-making; users worry about accountability if AI causes harm, questioning whether liability lies with the user, developer, or manufacturer[6]. Brian Campbell of The Conference Board emphasized reputational risk as an immediate threat, noting that “one unsafe output or biased decision can spark rapid customer backlash, investor skepticism, and regulatory scrutiny”[9].
🔄 Updated: 11/23/2025, 7:50:24 PM
Risk experts are sounding the alarm as new data reveals 158 AI-related security vulnerabilities across financial services and insurance firms, with 22 confirmed cases of algorithmic bias in automated underwriting and pricing models, according to a Cybernews report released this week. “The same tools that help insurers assess and manage risk are now introducing their own,” said Martynas Vareikis, a Cybernews security researcher, warning that compromised AI models could become a core business threat. Insurers are now urged to implement strict access controls, data classification, and rigorous validation protocols to mitigate these emerging risks.