Policy discussions on the use of artificial intelligence in insurance are “unfounded” and “detrimental to policyholders,” according to an analysis from the National Association of Mutual Insurance Companies.
The use of AI in insurance underwriting and rate making has raised concern among some regulators, advocates, and policymakers over whether AI would lead to proxy discrimination, algorithmic bias, and eventual changes to the affordability and availability of insurance products in certain areas or for certain classes.
NAMIC said 18 states are currently debating “flawed” AI-related legislation. Guidance from the National Association of Insurance Commissioners (NAIC) has added to the “nebulous concept of algorithmic bias,” NAMIC said.
“Contrary to what may be perceived as well-intentioned social efforts by regulators, policyholders will be harmed by growing efforts to elevate concepts of ‘fairness’ divorced from actuarial science,” wrote Lindsey Klarkowski, NAMIC’s policy vice president in data science, AI/machine learning, and cybersecurity. This will result in “an inevitable break of the insurance product at its core,” she added.
Klarkowski authored the report, which is meant to dispel five myths about the use of AI and Big Data in the insurance industry.
“In setting rules of the road, policymakers must recognize that insurance is distinct in function and pricing from many other consumer products,” Klarkowski added in a statement. “[…] classifications to be actuarially sound and not unfairly discriminatory.”
Any regulation aimed at the industry’s use of AI in pricing has to be unique to the industry, and any restriction on an insurer’s ability to price a policyholder’s risk will lead to more availability and affordability issues, NAMIC concluded, adding that the notion that AI will lead to bias or disparate impact is in conflict with the risk-based foundation of insurance.
“The data insurers use for risk-based pricing is data that is actuarially sound and correlated with risk and does not include nor use certain protected class attributes,” Klarkowski wrote. “To argue that insurer use of data, algorithms, or AI in risk-based pricing is biased or skewed would be to say that the actuarially sound data is not representative of the risk the policyholder represents, which insurance laws already prohibit.”
Separately, NAMIC argued that if a disparate impact standard were applied to insurance, the industry’s pricing approach would no longer be based on underlying insurance costs and would result in rates that are unfairly discriminatory. The industry, NAMIC said, already adheres to the legal standard of unfair discrimination.