DENVER — The Colorado Division of Insurance (DOI) has moved into the final implementation phase of a nation-leading regulatory framework designed to prevent "proxy discrimination" in life insurance underwriting, setting a high bar for how carriers use machine learning and big data.
The initiative, rooted in the passage of Senate Bill 21-169, requires life insurers operating in the state to demonstrate that their use of external data and complex algorithms does not result in unfairly discriminatory outcomes for protected classes. As the first state to mandate such rigorous oversight for artificial intelligence (AI) in insurance, Colorado is now providing a roadmap for state-level governance of algorithmic bias.
“Our goal is to ensure that as insurance companies move toward using more data and more sophisticated technology, they are doing so in a way that is fair to all consumers,” said Colorado Insurance Commissioner Michael Conway in a statement regarding the implementation. “We aren't telling companies they can't use these tools, but we are telling them they must be accountable for the results those tools produce.”
The Shift to Accelerated Underwriting
For decades, life insurance underwriting relied on medical exams and fluid samples. However, the industry has rapidly pivoted to "accelerated underwriting," which utilizes machine learning models to analyze thousands of data points—collectively known as External Consumer Data and Information Sources (ECDIS).
These sources include credit scores, social media activity, purchasing habits, and educational attainment. While the approach is efficient, consumer advocates warn that these data points often act as "proxies" for race, even when racial data is not explicitly used.
“Proxy discrimination occurs when an algorithm uses a neutral variable that is so highly correlated with a protected characteristic that it produces the same discriminatory result,” said Peter Kochenburger, a professor and insurance law expert. “In life insurance, if your algorithm penalizes people based on zip codes or certain shopping patterns that track closely with racial demographics, you have a discrimination problem, regardless of intent.”
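The dynamic Kochenburger describes can be demonstrated in a few lines of code. The following is a minimal sketch on synthetic data; the variable names, correlation strength, and data-generating process are hypothetical illustrations, not any insurer's actual model. A classifier that is never shown race still produces outcomes that diverge by race, because a zip-code feature stands in for it.

```python
# Minimal sketch of proxy discrimination on synthetic data.
# All names and the data-generating process are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected characteristic; never given to the model.
group = rng.integers(0, 2, n)

# "Neutral" proxy: a zip-code indicator that differs from the
# protected characteristic only 15% of the time.
zip_region = (group + (rng.random(n) < 0.15)) % 2

# A genuinely neutral underwriting feature, independent of group.
credit_tier = rng.normal(0.0, 1.0, n)

# Historical outcomes were themselves shaped by the proxy, so the
# training labels already encode the correlation.
flagged = (credit_tier + 1.5 * zip_region + rng.normal(0.0, 1.0, n)) > 0

X = np.column_stack([credit_tier, zip_region])
model = LogisticRegression().fit(X, flagged)  # race is not a feature
pred = model.predict(X)

# Outcomes diverge by protected group even though the model never saw it.
for g in (0, 1):
    print(f"group {g}: flagged-high-risk rate = {pred[group == g].mean():.2f}")
```

Because the training labels were themselves shaped by the proxy, simply dropping the protected attribute from the feature set does nothing to remove the disparity, which is why the regulation emphasizes testing results rather than merely excluding sensitive inputs.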
The Colorado Governance Framework
Under Regulation 10-1-1, which became effective in late 2023 with reporting requirements rolling out through 2024, life insurers must establish a formal governance program. This framework requires companies to document exactly how their models are built, what data is used, and how they test for disproportionately negative impacts on protected groups.
The regulation requires insurers to submit an annual "Governance Affirmation" to the DOI. This document must detail the following (sketched schematically after the list):
- The specific ECDIS used in underwriting.
- The assigned roles and responsibilities of the personnel overseeing the algorithms.
- A description of the testing processes used to detect unfair discrimination.
- The strategies employed to remediate any bias discovered during testing.
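As a rough illustration of what such a filing must cover, here is a schematic sketch in code. Every field name and sample value below is a hypothetical placeholder; the DOI prescribes the substance of the affirmation, not this structure.

```python
# Hypothetical schematic of a Governance Affirmation's required contents.
# Field names and values are illustrative, not the DOI's actual format.
from dataclasses import dataclass

@dataclass
class GovernanceAffirmation:
    ecdis_sources: list[str]         # specific ECDIS used in underwriting
    oversight_roles: dict[str, str]  # who is responsible for the algorithms
    testing_process: str             # how unfair discrimination is detected
    remediation_plan: str            # how any discovered bias is addressed

filing = GovernanceAffirmation(
    ecdis_sources=["credit score", "educational attainment"],
    oversight_roles={"model owner": "data science", "bias review": "legal/actuarial"},
    testing_process="annual disparity analysis of declines across estimated groups",
    remediation_plan="retrain or remove any variable that fails disparity thresholds",
)
```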
“This is a significant shift from 'trust us' to 'show us,'” said a DOI spokesperson. “Insurance companies are now required to have a cross-functional committee—including legal, actuarial, and data science teams—to oversee these models.”
Industry Reaction and Compliance Hurdles
While the insurance industry has expressed support for the goal of fairness, trade groups have raised concerns regarding the complexity and cost of compliance.
The American Council of Life Insurers (ACLI) has been actively involved in the rulemaking process, advocating for a balance between consumer protection and innovation. In public comments during the drafting phase, the ACLI emphasized the need for regulations that are "operationalizable" and do not stifle the benefits of accelerated underwriting, which can lower costs and increase access for many consumers.
“The industry is committed to ensuring that our use of technology is fair,” said an ACLI representative in a recent policy forum. “However, the challenge lies in defining exactly what constitutes an 'unfairly' discriminatory outcome in a statistical sense. We need clear benchmarks so companies know how to comply.”
One of the primary friction points remains the testing rule itself. While the governance regulation addresses process, a companion regulation (Regulation 10-1-2) targets the quantitative testing of outcomes, requiring insurers to analyze their back-end data to determine whether protected classes are being denied coverage or charged higher premiums at disproportionate rates.
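At its core, this kind of outcome testing means comparing approval or decline rates across estimated demographic groups. The sketch below illustrates the idea; the toy data, column names, and the "four-fifths" screening threshold (borrowed from employment-law practice) are assumptions for illustration, not the methodology Regulation 10-1-2 actually prescribes.

```python
# Minimal sketch of outcome testing on back-end underwriting data.
# Data, column names, and the 0.8 threshold are illustrative assumptions.
import pandas as pd

# Hypothetical decisions joined to estimated demographic groups.
df = pd.DataFrame({
    "est_group": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved":  [1,   1,   1,   0,   1,   0,   0,   1],
})

approval_rates = df.groupby("est_group")["approved"].mean()
print(approval_rates)

# Adverse impact ratio: lowest group approval rate vs. highest.
ratio = approval_rates.min() / approval_rates.max()

# The 0.8 cutoff is the "four-fifths rule" from employment-law practice,
# used here purely as an illustrative screen.
if ratio < 0.8:
    print(f"Potential disparate impact flagged (ratio = {ratio:.2f})")
```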
National Implications
The Colorado model is being closely watched by the National Association of Insurance Commissioners (NAIC) and other states like New York and California, which are weighing similar measures.
Currently, the NAIC has adopted a Model Bulletin on the use of AI by insurers, but Colorado remains the only state with a specific, enacted regulation targeting machine learning models in life insurance underwriting.
“Colorado is the laboratory for the rest of the country right now,” said Harold Weston, a clinical associate professor of risk management and insurance at Georgia State University. “If Colorado can successfully implement these audits without driving insurers out of the state or making premiums skyrocket, you will see a domino effect across other jurisdictions.”
The urgency for such state-level governance has increased as federal movement on AI regulation remains stalled in Congress. By focusing on life insurance first—an industry where the long-term impact of a policy can span decades—Colorado officials say they are protecting the financial security of marginalized communities.
Looking Ahead
As the 2024 reporting deadlines approach, the Colorado DOI will begin reviewing the first wave of governance reports. Commissioner Conway has indicated that the department will work collaboratively with insurers initially but has the authority to issue fines or require changes to underwriting models if companies fail to meet the new standards.
For consumers, the regulation promises a future where a "black box" algorithm cannot be the sole, unquestioned arbiter of their insurability.
“The message we are sending is that transparency is no longer optional,” Conway said. “If a company is going to use AI to make decisions about a person’s financial future, they must be able to explain those decisions and prove they are fair.”