Regulators Draft Rules on Machine-Learning Use in Underwriting as Insurers Race to Deploy AI Models

PARIS — International and national insurance regulators, meeting through OECD-led forums and standard-setting bodies this year, are drafting guidance and rules aimed at curbing the emerging risks of machine-learning use in underwriting as insurers rush to deploy artificial-intelligence models that change how coverage is priced, screened and sold. Regulators say the measures, which range from documentation and governance requirements to mandatory bias testing and third-party audit rights, are intended to protect consumers and market stability amid rapid, cross-border AI adoption; insurers and industry groups warn that overly prescriptive rules could slow innovation. (oecd.org)

Regulators' response has accelerated in the past 18 months as national supervisors and international bodies published guidance or draft rules and flagged enforcement expectations for underwriting uses of AI. The International Association of Insurance Supervisors in July 2025 published an application paper setting supervisory expectations across governance, robustness, transparency and fairness for AI systems used by insurers. The European Union's AI Act already classifies many underwriting, premium-setting and claims-assessment systems as "high-risk," triggering strict data-quality, human-oversight and documentation requirements that phase in through 2026. In the United States, the NAIC adopted a model bulletin in December 2023 that a growing number of states have since incorporated into supervisory practice; Colorado and several other states have gone further with testing and reporting rules focused on disparate impact. (iais.org)

What regulators are saying — and why
Regulatory texts and public reports stress two linked concerns: (1) algorithmic underwriting can embed or amplify bias in ways that are hard to detect and remediate, and (2) complex models and greater reliance on a small set of cloud and model vendors create operational and systemic vulnerabilities. The OECD’s June 2025 report on AI in regulatory design warns that “inadequate or skewed data” and “lack of transparency and explainability” can lead to inaccurate or adverse outcomes for groups of consumers and that human oversight and careful evaluation are essential. (oecd.org)

At the same time, international insurance supervisors say existing supervisory frameworks can be adapted to AI but need to be interpreted and enforced with AI‑specific expectations. The IAIS application paper, released after a public consultation, urges supervisors to take a risk‑based and proportionate approach while emphasising board accountability, model validation, vendor oversight and explainability for decisions that materially affect consumers. Petra Hielkema, chair of the European Insurance and Occupational Pensions Authority, has publicly argued that supervisors should apply existing insurance principles — such as governance, model risk management and consumer protection — to AI, while offering concrete guidance on how to do so. (iais.org)

How fast insurers are moving — and where AI is used
Insurers in North America and Europe have adopted a wide range of AI tools across underwriting, pricing, claims and customer service. EIOPA’s 2024 digitalisation reporting found roughly half of non‑life insurers and about one quarter of life insurers were already using AI for tasks including pricing, underwriting and fraud detection; industry surveys and company filings show increasing use of telematics, wearables, natural language models for document intake and image‑recognition models for claims. Insurers argue these tools reduce processing times, tighten pricing and expand access to customers with thin files — but supervisors warn that benefits are accompanied by new consumer‑protection and prudential risks. (eiopa.europa.eu)

In the United States, the NAIC's 2023 model bulletin, "Use of Artificial Intelligence Systems by Insurers," urges regulated insurers to adopt an AIS (artificial-intelligence systems) program with board oversight and documented testing, and to be able to produce requested model documentation for regulators during examinations. Since the bulletin's adoption, around two dozen U.S. jurisdictions have issued their own bulletins or guidance adopting its principles, and several states, notably Colorado, New York and Wisconsin, have issued or proposed stronger testing and reporting regimes for life underwriting and other uses. Colorado's draft testing regulation, for example, would require insurers that use external consumer data sources to run annual quantitative tests for disparate impact and to report results to the regulator. (content.naic.org)

Industry reaction and concerns
Industry associations and many insurers have backed principles‑based supervision but warned against “one‑size‑fits‑all” or static rules that could quickly become obsolete as models evolve. The Global Federation of Insurance Associations (GFIA) urged supervisors to preserve flexibility and proportionality and to avoid conflating longstanding advanced analytics with newer generative‑AI tools; GFIA argued overly prescriptive guidance risks imposing high compliance costs that could hinder beneficial uses of AI. Major reinsurers and insurers, such as Munich Re, have publicly said the EU AI Act and supervisory guidance will prompt major investments in interpretability, testing and governance but also note that firms must build new in‑house skills or deeper vendor partnerships to comply. (insurancebusinessmag.com)

Regulatory measures being drafted or implemented

  • International application papers and standards: The IAIS Application Paper (July 2025) outlines five supervisory axes — risk‑based supervision, governance and accountability, robustness and security, transparency and explainability, and fairness/ethics/redress — and signals supervisors will expect insurers to adapt existing Insurance Core Principles to AI use cases. (iais.org)

  • OECD guidance and risk assessments: The OECD's June 2025 analysis urges regulators to scrutinise data quality, model explainability and accountability when AI is used in regulatory design or oversight; that work supports national regulators' efforts to harmonise expectations. (oecd.org)

  • EU AI Act: The EU classifies many underwriting, pricing and claims‑assessment systems as high‑risk. High‑risk systems must satisfy strict data‑quality and documentation obligations, human‑oversight mechanisms, and in some cases third‑party conformity assessments; member states must designate national competent authorities by August 2, 2025, with much of the Act’s operational framework being phased in through 2026. Industry commentators say the Act’s extra‑territorial reach and fines make compliance material for global insurers. (munichre.com)

  • State and national rules in the U.S.: The NAIC model bulletin (Dec. 4, 2023) sets expectations for AIS programs and examination materials; roughly two dozen states have adopted or adapted the bulletin and a few have introduced rules requiring bias testing, reporting and governance. Colorado’s draft life‑insurance testing regulation would require specific quantitative tests for disparate impact based on name/geolocation estimations of race or ethnicity and annual reporting to the Division of Insurance. (content.naic.org)

What regulators want insurers to do now
Supervisory documents converge on a set of near‑term expectations: documented governance and board oversight of AI uses; lifecycle model inventories; pre‑deployment testing and ongoing monitoring for model drift; vendor due diligence and audit rights; transparency tailored to impacted consumers; and rapid incident reporting and remediation plans. The IAIS paper and NAIC bulletin both emphasise proportionality — smaller, lower‑risk deployments should have lighter governance — but they also make clear that decisions that materially affect consumers will attract closer scrutiny. (iais.org)
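
Of these expectations, drift monitoring is the most readily illustrated in code. Below is a minimal Python sketch, under the assumption that a model-risk team compares a baseline score distribution captured at validation against live production scores using the population stability index (PSI); the 0.25 alert threshold mentioned in the comments is a common industry rule of thumb, not a figure taken from any of the regulatory texts cited here.

```python
import numpy as np

def population_stability_index(expected, actual, n_bins=10):
    """PSI between a baseline score distribution ("expected", e.g. at
    model validation) and a live one ("actual"). PSI > 0.25 is a common
    rule of thumb for material drift; the threshold is a modelling
    choice, not a regulatory standard."""
    # Bin edges come from the baseline distribution's quantiles.
    edges = np.quantile(expected, np.linspace(0, 1, n_bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # capture out-of-range scores

    expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    actual_pct = np.histogram(actual, bins=edges)[0] / len(actual)

    # A small floor avoids log-of-zero in sparse bins.
    expected_pct = np.clip(expected_pct, 1e-6, None)
    actual_pct = np.clip(actual_pct, 1e-6, None)

    return float(np.sum((actual_pct - expected_pct)
                        * np.log(actual_pct / expected_pct)))

# Example: validation-time scores versus this month's production scores.
rng = np.random.default_rng(0)
baseline = rng.beta(2, 5, 10_000)   # scores at deployment
live = rng.beta(2.4, 5, 10_000)     # mildly shifted production scores
print(f"PSI: {population_stability_index(baseline, live):.3f}")
```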

The technical and legal challenges
Underwriting presents special challenges because models often rely on indirect signals and vast external datasets. Regulators and academics caution that even neutral models can produce disparate impacts if the input data reflect historical inequities or if proxies for protected attributes are present. The OECD report notes the difficulty of explainability in complex models and the need for human oversight; state‑level rules such as Colorado’s go further by prescribing testing methods (e.g., Bayesian Improved First Name Surname Geocoding) and statistical thresholds to flag disparate impact. Legal risks — including violations of unfair‑trade or anti‑discrimination laws — and privacy limits on the use of health or behavioural data further constrain model design choices. (oecd.org)
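
To make the mechanics concrete: once proxy race or ethnicity probabilities have been estimated (for example via BIFSG, which combines first name, surname and Census geocode data), a disparate-impact screen typically compares probability-weighted outcome rates across the estimated groups. The Python sketch below illustrates that arithmetic using the traditional four-fifths ratio as a flag; Colorado's draft regulation prescribes its own tests and thresholds, so this should be read as an illustration of the general technique, not the prescribed method.

```python
import numpy as np
import pandas as pd

def adverse_impact_ratio(df, outcome_col, prob_cols, reference_col):
    """Probability-weighted approval rates per estimated group, plus each
    group's ratio to the reference group. A ratio below 0.8 is the
    traditional "four-fifths" flag; regulators may define different
    statistical tests and thresholds."""
    rates = {}
    for col in prob_cols:
        # Each applicant counts toward a group in proportion to their
        # estimated (BIFSG-style) membership probability.
        rates[col] = np.average(df[outcome_col].to_numpy(),
                                weights=df[col].to_numpy())
    ref = rates[reference_col]
    return {g: {"approval_rate": round(r, 3),
                "ratio_to_reference": round(r / ref, 3)}
            for g, r in rates.items()}

# Illustrative data; real probabilities would come from name + geocode.
df = pd.DataFrame({
    "approved": [1, 1, 0, 1, 0, 1, 1, 0],
    "p_white":  [0.8, 0.7, 0.2, 0.9, 0.3, 0.6, 0.85, 0.25],
    "p_black":  [0.2, 0.3, 0.8, 0.1, 0.7, 0.4, 0.15, 0.75],
})
print(adverse_impact_ratio(df, "approved", ["p_white", "p_black"], "p_white"))
```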

Third‑party dependencies and concentration risk
Insurers' rapid adoption has increased reliance on third-party model vendors, cloud providers and large foundation models hosted by a handful of U.S. tech companies. Regulators have repeatedly flagged vendor concentration as an operational-resilience and systemic risk: an outage, exploit or common mis-specification could affect multiple firms at once and, in underwriting, could produce synchronised mispricing or coverage exclusions. The IAIS and OECD explicitly urge robust third-party governance, contractual audit rights and the capability for supervisors to obtain model artefacts during examinations. Industry participants say balancing vendor confidentiality against regulatory access remains contentious in contract negotiations. (iais.org)

Practical impacts on underwriting and actuarial practice
Underwriters and actuaries are adjusting their processes. Insurers report shorter application-to-decision times, more granular pricing bands and expanded use of automated "accelerated underwriting" pipelines that rely on open banking, wearables and other external data; regulators now treat these uses with special caution. Actuarial teams must combine statistical rigour with explainability tooling, feature-attribution techniques and robust back-testing. Some insurers have paused or narrowed certain use cases until governance and compliance pathways are in place, while others have invested in internal model-ops teams and external audits. Reinsurers are beginning to require documentation of model governance before assuming new business lines. (eiopa.europa.eu)
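
As one example of the attribution tooling mentioned above, permutation importance is a widely used, model-agnostic technique: it measures how much a model's held-out score degrades when a single feature's values are shuffled. The sketch below uses scikit-learn on synthetic data standing in for rating factors; note that adverse-decision explanations to individual consumers generally require local methods (such as Shapley-value attributions) rather than this global view.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for an underwriting dataset; real features would be
# rating factors such as age band, territory or claims history.
X, y = make_classification(n_samples=2000, n_features=8,
                           n_informative=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: the drop in test score when one feature is
# shuffled. Model-agnostic, so it also works for vendor models exposed
# only as a scoring function; it explains global behaviour, not any one
# applicant's decision.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature_{i}: {result.importances_mean[i]:.3f} "
          f"+/- {result.importances_std[i]:.3f}")
```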

Consumer protection and due process
Regulators emphasise that consumers affected by automated underwriting have the right to meaningful information and redress. The EU AI Act and many U.S. proposals envision disclosure requirements and the right to human review for adverse decisions — a challenging requirement where underwriting models use complex feature interactions. Supervisors also expect firms to maintain records sufficient to reconstruct individual decisions for inspection, an obligation that can clash with vendor contracts and proprietary model protections. (munichre.com)
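
In engineering terms, that record-keeping expectation means persisting, at decision time, every artefact needed to replay the decision later: model version, inputs, score, explanation and any human override. The following sketch shows what such a record might contain; the field names and structure are hypothetical rather than drawn from the AI Act or any bulletin.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import hashlib, json

@dataclass(frozen=True)
class UnderwritingDecisionRecord:
    """Illustrative record of the artefacts needed to reconstruct one
    automated underwriting decision on request; hypothetical schema."""
    application_id: str
    model_id: str                # reference into the model inventory
    model_version: str           # exact artefact used at decision time
    input_hash: str              # fingerprint of the feature vector
    features: dict               # inputs exactly as seen by the model
    score: float
    decision: str                # e.g. "accept", "refer", "decline"
    top_attributions: list       # per-decision explanation artefacts
    human_reviewer: str | None   # populated when a human overrides
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def hash_inputs(features: dict) -> str:
    return hashlib.sha256(
        json.dumps(features, sort_keys=True).encode()).hexdigest()

record = UnderwritingDecisionRecord(
    application_id="APP-0001",
    model_id="life-accel-uw",
    model_version="2.3.1",
    input_hash=hash_inputs({"age_band": "35-39", "bmi": 24.1}),
    features={"age_band": "35-39", "bmi": 24.1},
    score=0.91,
    decision="accept",
    top_attributions=[("bmi", -0.04), ("age_band", 0.02)],
    human_reviewer=None,
)
print(json.dumps(asdict(record), indent=2))
```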

The tug‑of‑war over prescriptiveness
A key flashpoint is whether supervisors should issue detailed, prescriptive rules (statistical tests, thresholds and standardised tools) or retain principle‑based, outcome‑focused expectations that leave room for innovation. GFIA and many insurers urge a lighter touch and call for alignment between jurisdictions; regulators argue that minimum test standards and transparent documentation are necessary to prevent harm and to make supervisory reviews practicable. The IAIS application paper attempts a middle way: it sets clear supervisory objectives while leaving the specific technical implementations to jurisdictions and firms, stressing proportionality for smaller entities. (insurancebusinessmag.com)

Enforcement and supervisory practice: what to expect
Supervisors are already asking for AI inventories, governance manuals and testing artefacts during routine exams. The NAIC bulletin explicitly notes states may request model documentation during investigations; the EU AI Act establishes significant fines for non‑compliance with high‑risk obligations. Regulators have also signalled they may conduct sectoral stress tests and targeted reviews of models used in critical underwriting books. Industry counsel and compliance teams say enforcement will initially focus on documentation and governance failures rather than on a single disputed model outcome, but that legal risk increases once adverse impact or consumer harm is identified. (content.naic.org)

Where oversight could lead next: standards, certification and “deployment gates”
Several policy proposals and academic papers suggest more structured regulatory instruments to bridge innovation and safety: machine‑readable deployment authorisations, standardised fairness test suites, certified third‑party auditors and insurance products that explicitly insure AI operational risk. Researchers and some regulators have discussed an “AI deployment authorisation” or certificate that would document evidence across risk, control and auditability dimensions — a move that could make supervisory review more scalable but would require international alignment. Reinsurers and capital providers could also demand evidence of AI governance as part of capital or reinsurance placements. (arxiv.org)
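
No standard schema for such an authorisation exists yet. As a thought experiment, a machine-readable version organised along the risk, control and auditability dimensions discussed in that literature might look like the sketch below; every field name and value is hypothetical.

```python
import json

# Hypothetical shape of a machine-readable "AI deployment authorisation";
# no standard schema has been adopted by any regulator cited here.
authorisation = {
    "system_id": "motor-pricing-gbm",
    "version": "4.1.0",
    "intended_use": "private motor premium setting, EU market",
    "risk": {
        "classification": "high-risk (assumed EU AI Act, Annex III)",
        "disparate_impact_tested": True,
        "last_fairness_report": "2025-06-30",
    },
    "control": {
        "human_oversight": "referral queue for declines",
        "drift_monitoring": "monthly PSI on score distribution",
        "incident_runbook": "doc://runbooks/motor-pricing-gbm",
    },
    "auditability": {
        "model_lineage": "code and data snapshot recorded per release",
        "decision_records_retained_years": 7,
        "third_party_audit": {"auditor": None, "date": None},
    },
    "approved_by": ["chief_actuary", "model_risk_committee"],
    "expires": "2026-07-01",
}
print(json.dumps(authorisation, indent=2))
```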

Industry steps to adapt
Insurers are taking a range of compliance and engineering steps: creating AIS inventories; upgrading procurement contracts to secure audit rights and model lineage; instituting board‑level AI committees; adopting explainable‑AI toolkits; conducting third‑party and fairness testing; and running pre‑deployment red‑team exercises. Smaller insurers and many brokers have sought standardised vendor attestations or hosted, validated model libraries to reduce the cost of compliance. Several large groups report multi‑year investment plans to bring AI literacy and technical controls into finance, compliance and actuarial functions. (content.naic.org)
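
An AIS inventory, in its simplest form, is a structured register tying each system to its business use, materiality, vendor terms and validation history, which also gives compliance teams something to query. A minimal sketch follows, with hypothetical fields chosen to mirror the procurement and lineage points above.

```python
from dataclasses import dataclass

@dataclass
class AISInventoryEntry:
    """One row of a hypothetical AIS inventory; the fields mirror common
    supervisory asks, not any mandated schema."""
    system_name: str
    business_use: str           # e.g. "accelerated life underwriting"
    materiality: str            # "high" if decisions materially affect consumers
    vendor: str | None          # None for in-house models
    audit_rights_in_contract: bool
    training_data_lineage: str  # where the training snapshot is archived
    last_validated: str
    owner: str                  # accountable executive or committee

inventory = [
    AISInventoryEntry(
        system_name="claims-triage-nlp",
        business_use="routing of claims documents",
        materiality="medium",
        vendor="ExampleVendor Inc.",  # hypothetical vendor
        audit_rights_in_contract=True,
        training_data_lineage="s3://snapshots/claims-triage/2025-03",
        last_validated="2025-05-12",
        owner="model_risk_committee",
    ),
]

# A simple completeness check an examiner might ask about: which vendor
# systems lack contractual audit rights?
missing = [e.system_name for e in inventory
           if e.vendor and not e.audit_rights_in_contract]
print("vendor systems without audit rights:", missing)
```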

Experts’ voices
“This initiative represents a collaborative effort to set clear expectations for state Departments of Insurance regarding the utilization of AI by insurance companies,” Maryland Insurance Commissioner Kathleen Birrane said when the NAIC bulletin was adopted in December 2023, underscoring regulators’ intent to balance innovation and consumer protection. Industry groups like the GFIA responded that regulation must be proportionate and avoid stifling useful innovation. Petra Hielkema of EIOPA has said supervisors should apply existing supervisory principles to AI while providing sector‑specific guidance on governance and model risk management. (content.naic.org)

Bottom line: a global patchwork that will test cross‑border insurers
The regulatory landscape for machine‑learning underwriting is coalescing around common themes — governance, explainability, fairness testing and vendor oversight — but remains fragmented across jurisdictions as the EU’s high‑risk regime, the IAIS application paper, the NAIC model bulletin and state‑level rules differ in scope, technical detail and enforcement approaches. Global insurers operating across these regimes will face operational complexity: multiple compliance regimes, jurisdictional reporting and a need to harmonise internal controls while preserving actuarial and commercial competitiveness. The OECD and international standard setters are pressing for coherent approaches and improved regulator capacity, but firms and supervisors alike expect the debate over prescriptive standards, test methodologies and the balance between innovation and safety to continue through 2026 and beyond. (oecd.org)

What to watch next

  • IAIS implementation material and supervisory question banks that translate the July 2025 application paper into examiner checklists. (iais.org)
  • EU member states’ designations of national AI competent authorities and the development of conformity assessment and standardisation guidance through 2025–2026. (munichre.com)
  • Additional U.S. state adoptions of the NAIC bulletin and any movement at the federal level to harmonise cross‑state rules. (spglobal.com)
  • Market responses: whether reinsurers and capital providers begin to price AI‑governance risk explicitly or require third‑party audited attestations. (munichre.com)

As regulators tighten scrutiny, insurers say they will continue to deploy AI to improve speed and accuracy in underwriting — but with a new, costly layer of governance, testing and documentation that will reshape product design, vendor relationships and actuarial workflows. The next 12–24 months will show whether international coordination and flexible supervision can protect consumers and markets without choking the technology’s potential to broaden access and reduce costs. (iais.org)

Sources: International Association of Insurance Supervisors (Application Paper on the supervision of AI, July 2025); OECD (Governing with Artificial Intelligence, June 2025); NAIC (Model Bulletin on the Use of Artificial Intelligence Systems by Insurers, Dec. 4, 2023); European Insurance and Occupational Pensions Authority (EIOPA digitalisation reporting, 2024); Munich Re analysis of EU AI Act; reporting and analysis from Mayer Brown, Willkie and S&P Global on state and national regulatory developments. (iais.org)
