AI-Enabled Underwriting Brings New Challenges for Life Insurance
Automation in insurance underwriting could embed discrimination, but there is a safe way forward.
March 20, 2021
Artificial intelligence (AI) is reshaping how insurance companies make decisions about risk. Can regulators keep up?
Insurers increasingly use AI tools to make underwriting decisions, and regulators are struggling to keep up with the dangers this poses, especially the risk of embedding discrimination. Is there a framework that could help the industry move forward safely?
The Problem
Third-party AI systems are making many insurance decisions these days. Using both medical and non-medical information—such as credit profiles and social media activity—these systems categorize consumers and assign them risk profiles. Insurers hope these systems will yield better underwriting and boost profitability. But industry players also worry that these “black box” systems, many of which use proprietary data and algorithms, could fall afoul of the rules against discrimination in underwriting.
Many states prohibit both discrimination based on protected characteristics like race and proxy discrimination, which occurs when a neutral factor disproportionately affects a protected class. Unfortunately, ensuring that AI models do not breach these rules is difficult.
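To see why screening is hard, consider one common way to test for proxy effects: comparing outcome rates across protected groups. A minimal sketch in Python follows; the column names, the toy data, and the 0.8 flag threshold (borrowed from the four-fifths rule used in employment law) are illustrative assumptions, not an insurance-regulatory standard.

```python
# Illustrative sketch: disparate-impact screen for underwriting decisions.
# Column names and the 0.8 threshold are assumptions for illustration;
# insurance regulation does not prescribe this specific test.
import pandas as pd

def disparate_impact_ratio(df: pd.DataFrame,
                           group_col: str,
                           approved_col: str) -> pd.Series:
    """Approval rate of each group divided by the highest group's rate.

    A ratio well below 1.0 for a protected group suggests the model
    (or a proxy variable inside it) may be disadvantaging that group.
    """
    rates = df.groupby(group_col)[approved_col].mean()
    return rates / rates.max()

# Hypothetical decisions logged from an AI underwriting system.
decisions = pd.DataFrame({
    "race": ["A", "A", "B", "B", "B", "A", "B", "A"],
    "approved": [1, 1, 0, 1, 0, 1, 0, 1],
})

ratios = disparate_impact_ratio(decisions, "race", "approved")
flagged = ratios[ratios < 0.8]  # four-fifths rule, used here only as an example
print(ratios)
print("Groups needing review:", list(flagged.index))
```

Note that even this simple screen assumes the insurer can observe the protected characteristic, which is often not the case in practice.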
A study by Azish Filabi, JD, MA, and Sophia Duffy, JD, CPA, at The American College of Financial Services notes that:
- AI systems can unintentionally result in unfair discrimination in insurance underwriting by using data sources that have a historical bias or act as proxies for protected characteristics, leading to discriminatory outcomes.
- It can be difficult to assign responsibility for decisions by AI systems. Insurers may be ultimately responsible for their products, but they are not always the parties most knowledgeable about the technical details of the underwriting system or most able to shape system design.
Creating a measurable definition of proxy discrimination in AI-enabled underwriting is also challenging: insurers may use an underwriting factor if it is related to actual or reasonably anticipated experience, yet existing standards do not define how effective a factor must be to qualify. Each insurer's justification for using a given factor will therefore be unique. Given the risks posed by AI-enabled underwriting tools and the limitations of current regulatory structures, the insurance industry could face additional regulation and reputational damage if it does not ensure these tools are used responsibly and appropriately.
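To make that measurement gap concrete, here is a minimal sketch of how an insurer might quantify a factor's relationship to claims experience. The column names are hypothetical; the point is that the strength of the relationship is easy to measure, while the threshold at which a factor becomes permissible is exactly what existing standards leave undefined.

```python
# Illustrative sketch: measuring how strongly a candidate underwriting
# factor relates to claims experience. Column names are hypothetical;
# no existing standard defines how strong the relationship must be.
import pandas as pd

def factor_lift(df: pd.DataFrame, factor: str,
                outcome: str, bins: int = 4) -> pd.Series:
    """Claim rate within each factor quartile, relative to the overall rate.

    Values far from 1.0 show the factor separates risk. How far from 1.0
    counts as "related to actual or reasonably anticipated experience" is
    the threshold that current rules do not define.
    """
    overall = df[outcome].mean()
    quartiles = pd.qcut(df[factor], q=bins, labels=False, duplicates="drop")
    return df.groupby(quartiles)[outcome].mean() / overall
```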
The Solution
To address the challenges posed by AI-enabled underwriting, researchers at The College recommend a three-part framework:
- The establishment of national standards to serve as guardrails for acceptable design and behavior of AI-enabled systems.
- A certification system that attests that an AI-enabled system was developed in accordance with those standards.
- Periodic audits of each system's output to ensure it operates consistently with those standards.

Establishing nationally accepted standards would involve developing guidelines to ensure that AI systems are designed using best practices in system design and actuarial principles. The standards should emphasize:
- Accuracy: Data used for decision-making should be evaluated for potential bias and errors.
- Significance to Risk Classification: Inputs should be assessed to determine their relevance to the risk being evaluated. If an input has a causal link to the risk, it is permissible. Otherwise, it should be excluded unless it meets an agreed threshold of actuarial significance and accuracy.
- Target Outcomes: Target outcomes should be established for algorithm calibration, such as offer rates and acceptance rates among different demographic groups. These targets could be based on a firm's target clientele, on insurance rates prior to AI use, or on a consensus-driven, more inclusive view of insurance availability and payout rates (a sketch of such a calibration check follows this list).
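As a concrete illustration of the calibration check mentioned above, the sketch below compares a system's observed offer rates per demographic group against agreed targets. The target values, the tolerance, and the column names are hypothetical assumptions; the framework itself does not prescribe specific numbers.

```python
# Illustrative sketch: checking algorithm output against agreed target
# outcomes. Target values, tolerance, and column names are hypothetical.
import pandas as pd

# Consensus-driven target offer rates per demographic group (assumed values).
TARGET_OFFER_RATES = {"group_a": 0.70, "group_b": 0.68}
TOLERANCE = 0.05  # maximum allowed absolute deviation from target

def check_target_outcomes(decisions: pd.DataFrame) -> dict:
    """Report each group's observed offer rate against its target."""
    observed = decisions.groupby("group")["offer_made"].mean()
    return {
        group: {
            "observed": round(float(observed.get(group, 0.0)), 3),
            "target": target,
            "within_tolerance": abs(observed.get(group, 0.0) - target) <= TOLERANCE,
        }
        for group, target in TARGET_OFFER_RATES.items()
    }
```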
Once the standards are established, both front-end and back-end audits should be used to monitor compliance. On the front end, certification would attest that algorithm developers followed the standardized practices when creating an algorithm. On the back end, audits would review the system's outputs for adherence to the standards once it is operational.
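One way a back-end audit might work in practice is to track how each group's outcomes shift between audit periods, since retraining or new data sources can move a system away from the behavior it was certified for. A minimal sketch, with hypothetical column names and an assumed drift threshold:

```python
# Illustrative sketch: back-end monitoring for drift in system output
# between audit periods. Column names and the threshold are assumptions.
import pandas as pd

def output_drift(previous: pd.DataFrame, current: pd.DataFrame,
                 threshold: float = 0.05) -> pd.DataFrame:
    """Per-group change in offer rate between two audit periods.

    Shifts beyond the threshold flag groups for which the operating
    system has diverged from certified behavior and needs human review.
    """
    prev_rates = previous.groupby("group")["offer_made"].mean()
    curr_rates = current.groupby("group")["offer_made"].mean()
    drift = (curr_rates - prev_rates).rename("rate_change")
    return drift[drift.abs() > threshold].to_frame()
```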
Under the proposed framework, the National Association of Insurance Commissioners (NAIC) would develop the standards in partnership with industry. Uptake would be supported by legal mandates requiring industry players to adopt the standards, and an independent self-regulatory body would oversee the certification and audit processes. The proposed framework would fill the gaps in current legislation and empower the insurance industry to self-regulate as it continues to embrace AI-enabled underwriting.
To learn more about how AI is changing insurance and how the industry should respond, download the research now.