Artificial intelligence risk management, testing for insurers: Faegre Drinker launches new service as national organization gives guidance


For attorneys like Scott Kosnoff, artificial intelligence is a driving force revolutionizing every industry, one poised to bring about bigger change than the internet.

Kosnoff, an Indianapolis-based partner with Faegre Drinker Biddle & Reath LLP, is also co-leader of the firm’s AI-X — or artificial intelligence and algorithmic decision-making — team, a cross-firm, cross-practice group of 25 to 30 practitioners who spend significant time on AI. He is considered a leading authority on the legal and regulatory challenges of AI and its impact on large insurers.

Organizations legitimately fear that if they don’t figure out how best to use AI, they will go the way of the dinosaurs, Kosnoff said.

Scott Kosnoff

“AI right now is like the shiny thing,” he said.

But for all the rewards AI can bring in improved efficiency and large-scale data analysis, it also carries risks.

Faegre Drinker announced in early December the launch of its algorithmic testing and AI governance and risk management service for insurers. The announcement came as the National Association of Insurance Commissioners unanimously adopted a model AI governance bulletin that encourages the use of testing to identify potential “unfair discrimination in the decisions and outcomes resulting from the use of Predictive Models and AI Systems.”

Kosnoff said AI use varies from insurer to insurer. As an example, he said, auto insurers have generally used AI more heavily than life insurers.

Kosnoff said the kind of work Faegre Drinker does with clients in the AI space is broad, but it’s geared toward helping them stay on top of changes in regulations.

He added that none of the concerns raised about AI usage are unique to the insurance industry.

“In order to make AI, you need access to a lot of data. And a lot of times, that data is sensitive,” he said.

Insurers’ AI model bulletin

The NAIC membership voted to adopt the model AI bulletin on the Use of Artificial Intelligence Systems by Insurers at its 2023 fall meeting.

According to an NAIC news release, the bulletin reflects the work of the NAIC Innovation, Cybersecurity, and Technology (H) Committee, which is chaired by Maryland Insurance Commissioner Kathleen Birrane.

Kathleen Birrane

“This initiative represents a collaborative effort to set clear expectations for state Departments of Insurance regarding the utilization of AI by insurance companies, balancing the potential for innovation with the imperative to address unique risks,” Birrane said in a statement. “As the insurance sector navigates the complexities of AI, the NAIC’s Model Bulletin on the Use of Artificial Intelligence Systems by Insurers provides a robust foundation to safeguard consumers, promote fairness, and uphold the highest standards of integrity within the industry.”

The committee, composed of representatives from 15 states, began drafting the bulletin in 2023 with the goal of establishing comprehensive regulatory standards to ensure the responsible deployment of AI in the insurance industry.

Kosnoff said the bulletin addresses issues related to the usage of AI, such as potential inaccuracies, unfair biases leading to discrimination and data vulnerabilities.

The bulletin would also require insurers to adopt an AI governance and risk management framework.

NAIC bulletins were established to create conformity in insurance regulations, Kosnoff said. But the bulletins are meant to be guidance for states and are nonbinding.

“I think it’s anybody’s guess how widely adopted this bulletin will be,” he said.

There may be states that choose not to adopt the bulletin, Kosnoff continued. He noted that Indiana helped draft the bulletin — although an Indiana representative is not listed among the committee’s 2023 membership — but the Hoosier State is not likely to adopt it.

In general, Kosnoff said, blue states are more likely to adopt the bulletin. He estimated that it will take about three months to know which states are moving forward with adoption.

Algorithmic testing

Faegre Drinker’s algorithmic testing is done through Tritura’s proprietary data analytics and AI platform, QuarterJack. Tritura is an affiliate and wholly owned subsidiary of the firm.

Jay Brudz

Jay Brudz, a Washington, D.C., Faegre Drinker partner and co-leader of the AI-X team, wrote in an email to Indiana Lawyer that data scientists work closely with the firm’s insurance and technology lawyers who understand the evolving insurance regulatory requirements and AI technologies.

“The interdisciplinary team develops and implements testing strategies that are tailored to the needs of each client and its AI use cases,” Brudz wrote. “We’re able to test the client’s AI models for potential unfair discrimination under the guidance of legal counsel and while leveraging the power of our AI platform, QuarterJack.”

As an example, Brudz said the firm conducted testing using the methodologies specified in Colorado’s draft insurance regulation, which will require life insurers that use external consumer data and information sources, or ECDIS, in their underwriting processes to test for unfair discrimination as uniquely defined by Colorado law.

Colorado adopted a related AI governance regulation in September 2023. Affected companies must comply with its requirements by Dec. 1, 2024, and submit an interim progress report by June 1, 2024.

Risk management

Brudz said the testing Faegre Drinker does is one part of a larger risk management framework the firm uses to help insurers mitigate the regulatory, litigation and reputational risks associated with using AI.

He said the recently issued NAIC model bulletin, like the Colorado governance regulation, introduces significant compliance obligations.

As individual states consider whether to adopt the model bulletin, either in full or with modifications, or take their own approach to AI oversight and governance, insurers may face a patchwork of evolving regulatory requirements that vary state by state and present additional complexities from a compliance standpoint, Brudz said.

“Staying on top of the developing regulatory landscape and having a flexible and pragmatic strategy to respond will be critical,” he wrote to IL.

Faegre Drinker has been developing QuarterJack, its AI platform, since 2016. Brudz described it as a cloud-hosted, secure, ISO-compliant AI and data science platform that includes both off-the-shelf and custom-designed modules.

He described the vision behind QuarterJack as providing a technological platform where data scientists, lawyers, clients and AI can all interact with the data to produce comprehensive solutions. A team of attorneys and data scientists design testing plans and interact with the data to conduct the testing, all with an eye toward analyzing the results and advising Faegre Drinker’s clients on the applicable regulatory requirements.

According to Kosnoff, Faegre Drinker wants clients that want to take advantage of the benefits of AI while mitigating the risks through a thoughtful risk management framework.

Kosnoff, who began as a regulatory practice attorney for insurers, said that seven years ago he foresaw AI would be big and started reading and learning as much about it as he could.

“Now,” he said, “it’s almost all that I do.”•

