Top Market Developments
56% of Enterprises Can't Hit the Kill Switch on AI
ISACA's 2026 AI Pulse Poll of more than 3,400 digital trust professionals, released at RSA Conference in March 2026, surfaces a governance reality that underwriters cannot ignore: 56% of respondents don't know how quickly they could halt AI operations following a security incident. Only 43% have high confidence in their ability to investigate and explain AI incidents to leadership or regulators. Most striking — only 36% report that humans approve AI-generated actions before execution.1,2 For an underwriter, this data reads like a portfolio-level exposure event. If an enterprise cannot demonstrate incident response readiness, cannot explain its AI decision-making chain, and cannot shut down autonomous systems on demand, then how does a carrier price the policy? The governance gap ISACA has quantified is not an abstract compliance concern — it is the underlying condition that makes AI risk structurally difficult to insure.
Munich Re Warns: Agentic AI Will Reshape Cyber Insurance Lines
Munich Re's March 2026 report on cyber insurance risks and trends identifies agentic AI as a near-term frequency driver — increasing the pace and scale of attacks more than their individual severity. The affected insurance covers span system failure, business interruption, incident response, data restoration, and cyber extortion, extending further into third-party losses including wrongful data collection, privacy violations, media liability, and technology errors and omissions.3 The firm's Global Cyber Risk and Insurance Survey found that 89% of C-level respondents do not feel adequately protected against attacks — a figure that stands alongside Munich Re's own observation that "the lion's share of cyber risks is still uninsured, even though they are insurable."4 This is the world's largest reinsurer effectively stating that the risk is real, the products exist, but the bridge between enterprise AI posture and insurability has not been built. That bridge is the defining infrastructure challenge of this moment.
Langflow CVE Exploited in 20 Hours — The Shrinking Response Window
CVE-2026-33017, a critical remote code execution vulnerability in the Langflow AI pipeline platform, was weaponized within 20 hours of the advisory's publication — with no public proof-of-concept required. Attackers reconstructed working exploits from the advisory alone. The attack required a single unauthenticated HTTP request and, once executed, enabled exfiltration of API keys and credentials with access to connected databases and downstream supply chain systems.5,6 The insurance dimension is precise: traditional policy response and claims cycles are measured in weeks. The exploit-to-compromise window is now measured in hours. This delta — between how quickly damage occurs and how quickly coverage mechanisms respond — is the insurability gap rendered concrete. Any AI-adjacent infrastructure with internet exposure must be evaluated against this time horizon, not the legacy assumption that detection and containment will precede material loss.
The AI Policy Language Problem: When "Error" Meets "Occurrence"
Analysis from Gallagher & Kennedy confirms that insurers will require detailed AI disclosures — covering specific tasks performed, autonomy levels, human oversight mechanisms, and third-party vendor dependencies. Critically, incomplete disclosures can lead to rescission of D&O and liability policies after a loss has already occurred.7 Legal ambiguity compounds the structural problem: courts are still determining whether AI errors constitute "errors" or "occurrences," whether AI system failures are "failures" or "attacks," and whether standard CGL forms — which often exclude technology-driven harms — apply at all. Wiley Rein notes there is no marketwide insurance response yet to AI-specific threats, with form exclusions or sublimits the most likely near-term industry adjustment.8 Some AI risks are currently uninsurable; where coverage exists, it is often sharply limited. The organizations that invest now in translating their AI governance posture into underwriting language will hold a structural advantage when the market standardizes — and it will.
Vendor Spotlight
Munich Re
Munich Re is not an AI security vendor — it is the underwriting authority that decides what AI risks are insurable and at what price. Its March 2026 report represents the most authoritative reinsurer assessment of how agentic AI reshapes cyber insurance lines. The Reflex Cyber Risk Management program offers policyholders complimentary security training, consulting, and risk monitoring — a signal that Munich Re is investing in loss prevention, not just loss transfer. For AI security vendors, Munich Re's risk taxonomy is effectively the scoring rubric that determines market access.3,4
Why It Matters
When the world's largest reinsurer publishes a dedicated section on agentic AI risk, it sets the pricing signal for the entire insurance chain. Carriers, MGAs, and brokers downstream will calibrate their AI-related underwriting to Munich Re's risk assumptions. Understanding this report is not optional for anyone building, deploying, or insuring AI systems.
The Insurability Maturity Curve
89%
of C-level respondents don't feel adequately protected against attacks4
31%
of organizations don't know whether they had an AI breach in the past 12 months9
Our intelligence team identifies three phases in the AI insurability maturity curve. Phase 1 (current state): Opacity — organizations deploying AI faster than they can inventory, monitor, or govern it. 76% cite shadow AI as a problem, 31% cannot confirm whether they have been breached, and 56% don't know their AI kill-switch response time. Phase 2 (emerging): Structured Disclosure — enterprises develop formal AI registries, standardized risk assessments, and incident response playbooks that translate technical posture into underwriting language. This is where structured frameworks become the bridge between security teams and insurance carriers. Phase 3 (future state): Continuous Insurability — real-time posture data feeds underwriting models, enabling dynamic premium pricing based on demonstrated security maturity rather than annual questionnaires. The organizations that reach Phase 3 first will command preferential rates, lower retentions, and broader coverage — a direct competitive advantage.
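As a thought experiment, Phase 3's dynamic pricing could be expressed as a posture-driven multiplier on a baseline cyber premium. The function below is purely illustrative — the surcharge and discount weights are invented for the sketch, not market figures; only the three maturity signals map back to the phases described above.

```python
def premium_multiplier(
    has_ai_inventory: bool,       # Phase 1 -> 2: shadow AI brought under control
    structured_disclosure: bool,  # Phase 2: AI registry + incident playbooks
    realtime_posture_feed: bool,  # Phase 3: continuous telemetry to the carrier
) -> float:
    """Map demonstrated security maturity to a multiplier on a baseline premium.

    Weights are hypothetical. The shape is the point: opacity carries a
    surcharge because the carrier must price in the unknown, and each
    demonstrated capability buys part of it back.
    """
    multiplier = 1.5  # opacity surcharge for an undisclosed AI estate
    if has_ai_inventory:
        multiplier -= 0.2
    if structured_disclosure:
        multiplier -= 0.2
    if realtime_posture_feed:
        multiplier -= 0.3  # continuous insurability earns the deepest discount
    return round(multiplier, 2)

# A Phase 1 (opaque) organization versus a Phase 3 one:
print(premium_multiplier(False, False, False))  # 1.5
print(premium_multiplier(True, True, True))     # 0.8
```

Under this framing, the gap between 1.5x and 0.8x is the "direct competitive advantage" the curve describes — the same coverage, priced on demonstrated posture rather than an annual questionnaire.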
Industry Responses
Enterprise Buyer Signal
53% of organizations have withheld AI breach reporting
Despite 85% supporting mandatory disclosure
HiddenLayer 2026 AI Threat Landscape Report9
76%
of organizations cite shadow AI as a problem, up from 61% in 2025 (+15 pts YoY)9
65%
of insurers plan to deploy AI agents at scale for claims processing in 202611
~7%
decline in global cyber insurance pricing in Q4 2025 — softening market meets hardening AI risk10
New Vendor Watchlist
HiddenLayer
Released the 2026 AI Threat Landscape Report (250 IT/security leaders) and launched Agentic Runtime Security on March 23 with detection capabilities for prompt injection, malicious tool calls, and data exfiltration in autonomous agent workflows. The threat data from this research is now informing underwriting risk models across the cyber insurance market.9,13
Roots Automation
AI automation purpose-built for insurance operations. Research reveals the AI "operational divide": 82% of insurers believe AI defines their future, but only 14% have it fully integrated. Roots Automation tracks the gap between AI ambition and operational readiness — a gap with direct implications for how insurers themselves will underwrite AI risk.12
ArmorCode
$81M total funding, including a $16M round in March 2026. Agentic AI platform for unified exposure management. Phil Venables — former CISO of Google Cloud — joined the board of directors. Exposure quantification is increasingly the currency underwriters require before extending coverage to AI-forward enterprises.14
Credo AI
Shadow AI Discovery offering directly addresses the 76% of organizations citing shadow AI as a definite or probable problem. A governance-first approach that bridges compliance and innovation — critical for enterprises seeking to translate their AI risk posture into the structured disclosures that insurers are beginning to require.9