‘Safe Harbor’ concept explored for early insurance company AI violations
April 2, 2021 — Insurance regulators considering how to wrestle with biased, discriminatory outcomes from artificial intelligence (AI) in both the offering and coverage of insurance discussed the potential of a safe harbor from enforcement for companies when they err, giving them a chance to correct course.
A panel of regulators, consumer advocates, data specialists and the filmmaker behind the documentary Coded Bias met virtually March 31 to discuss how to address AI algorithms when they cause disparate impact among people of color and vulnerable populations, or before they can do so.
The National Association of Insurance Commissioners (NAIC) sponsored both the Big Data/Artificial Intelligence forum and the screening of Coded Bias by Shalini Kantayya. The groundbreaking movie premiered at the Sundance Film Festival in 2020.
AI is a gatekeeper of many things, from how long a prison sentence someone gets to who gets a job or other opportunity and who does not, Kantayya said on the panel.
These same systems, including facial recognition systems that we are trusting so implicitly, haven’t been vetted for gender bias or racial bias, or even for some shared standard of accuracy, she said. Kantayya warned that we as a society, in using these algorithms blindly, could “roll back 50 years of civil rights advances with AI’s black boxes.”
Jon Godfread, North Dakota insurance commissioner and chair of the NAIC’s Innovation and Technology Task Force, raised the concept of a safe harbor, noting that it is entirely possible for an AI algorithm to go through an auditing process to check for bias and still produce a bad outcome. He said there could be a process so the full weight of regulatory authority won’t come down on the insurer’s head. The outcome could instead lead to a discussion without the threat of a fine that sends a company into bankruptcy or other major trouble, he said.
Companies are going to get it wrong with the best intentions, Godfread warned. He suggested the focus be on going back and making the customer who suffered a discriminatory result whole. It is beneficial to correct the problem early on rather than have it go on for years, he said.
Godfread acknowledged there would be liability issues with class action lawyers over bias, but said it is better to create a system where failure comes early, lessons are learned and problems are corrected.
During the group panel discussion, longtime NAIC consumer advocate Birny Birnbaum also appeared to endorse the idea of a safe harbor for unintentional bias that is immediately rectified.
Birnbaum emphasized the urgent need to create a data collection system that looks at outcomes from the algorithms insurers use, drawing on the smorgasbord of available data points, so regulators can assess their impact and how they might discriminate against vulnerable populations and groups. Insurance coverage and selection matter enormously in everything from community development to catastrophe recovery and preparation, according to Birnbaum.
“There is a special need for insurance regulators to address this quickly and modernize the system for accountability and oversight,” Birnbaum said. “We are seeing exponential growth with algorithms. We need concrete steps, not just dialogue.”
State insurance commissioners discussed their ongoing work in the executive-level special Race and Insurance Committee, as well as the executive sessions and explorations they have held on diversity within their own organization, the NAIC.
However, AI bias issues won’t be limited to just insurance regulatory oversight and accountability in the future.
Data scientist Cathy O’Neil, author of Weapons of Math Destruction and a panelist on the forum, noted that discussions on algorithms’ potentially harmful outcomes have started up again with the Consumer Financial Protection Bureau (CFPB). Those talks were dormant during the Trump Administration.
“How are we to translate existing laws into rules that data scientists can make sure their algorithms are passing?” O’Neil asked. “The answer is not obvious,” she acknowledged.
It’s not just the CFPB. The use of data in creating disparate impact harm among people of color is getting attention from the very top. Filmmaker Kantayya said that President Joe Biden and Vice President Kamala Harris, as well as House Speaker Nancy Pelosi, have asked for a screening.