Algorithmic accountability: AI-X team at Faegre Drinker providing legal guidance in new area of law


Increasingly, daily life is governed by algorithms.

Who gets a mortgage, who gets into college, how much each individual will pay for insurance and who gets the job are a sample of the kinds of decisions turned over to algorithms. These calculations analyze reams and reams of data that everyone generates as they carry their cellphones, pay with their credit cards, log in to email accounts and swipe their keycards. Whether driving down the street and passing under cameras or staying at home and streaming a movie, they are leaving a trail of information.

Companies can craft algorithms and draw insight from all this data about their customers and the market. However, rather than enabling businesses to perform better, these equations can create trouble by producing results that seem unfair or that perpetuate historical biases.


“The data that goes into the algorithm reflects societal histories and biases,” said Scott Kosnoff, partner at Faegre Drinker Biddle & Reath LLP in Indianapolis. “The algorithm itself is developed by humans and the outcome is often no better or worse than the input. To the extent the inputs are biased in some way, you can actually wind up with an algorithm that produces greater discrimination, not lesser discrimination.”

Kosnoff and his Faegre Drinker colleague Bennett Borden in Washington, D.C., are co-leading a new initiative at the firm to guide and counsel businesses that use algorithms to enhance their operations or market their products. Dubbed the Artificial Intelligence and Algorithmic Decision-Making Team, or AI-X for short, the new group is bringing data scientists from Faegre Drinker’s wholly owned consulting subsidiary, Tritura, together with the firm’s attorneys from different practice areas.

Algorithmic decision-making is a complex and still budding field, but utilizing the information produced by a bad algorithm can have consequences that many can easily understand. A business could suffer serious damage to its reputation and be subject to a lawsuit. Also, as regulations and laws governing algorithms are crafted in statehouses and on Capitol Hill, companies could get slapped with penalties and fines.


Borden described algorithms as the “biggest legal issue of the next decade.” Their use is growing, he said, and will become heavily regulated because the outputs can affect people’s lives.

“It’s a wonderful time in the law,” Borden said. “You usually don’t find these occasions where the law develops in an entirely new area or spreads into a new area based on old law. And questions (arise) about, ‘Do we want to bring old laws into new (areas)?’ So it’s just a fantastic time to be dealing with this issue.”

Testing and measuring

It is also a time when the issue of algorithmic decision-making is moving very quickly.

In July 2021, Colorado enacted state law SB 21-169, which prohibits insurance companies from using consumer data and an algorithm that “unfairly discriminates” against individuals in a protected class. Rhode Island and Oklahoma, according to Kosnoff and Borden, have introduced similar legislation, while Connecticut recently issued additional guidance on its requirement that insurers annually test their algorithms for bias.

Indiana Rep. Matt Lehman, R-Berne, introduced a bill during the 2022 Indiana General Assembly session that included regulations on algorithms used by insurance companies.

House Enrolled Act 1238 initially contained language that required insurers to provide upon request an explanation of how external consumer data was used to calculate policyholders’ premiums. That provision was stripped from the bill before it was signed into law by Gov. Eric Holcomb.

Insurance, financial services, labor and employment, housing and health care are the five most algorithm-centric industries. Kosnoff and Borden explained those industries rely on algorithms to be the “entire guts and lifeblood” of their operations and, in turn, have attracted the most regulation.

The AI-X team is focused on helping clients in these and other industries stay on top of emerging laws and regulations, as well as identifying and mitigating risks related to artificial intelligence and algorithms. The Faegre group does not help develop the algorithms but will advise clients on what changes are needed to comply with new regulations.

Currently, Kosnoff and Borden said, the use of algorithms is inspiring a lot of handwringing. Consumer rights advocates are concerned about what they see as problems with the data and the calculations, while businesses are responding with assurances that the equations do not contain any bias.

The attorneys said the answer is to test the algorithms and measure their outputs to determine if something is off-kilter. Businesses that know how their algorithms are performing can offer a best-practice model as regulators craft rules and compliance standards.
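
What does that kind of testing look like in practice? As a rough illustration only, the sketch below shows one simple way to measure an algorithm’s outputs for disparate impact: compare approval rates across groups and flag any group whose rate falls below roughly four-fifths of the best-treated group’s rate. The group labels, sample data and 80% threshold are illustrative assumptions, not requirements drawn from any statute mentioned here or from Faegre Drinker’s methodology.

```python
# Minimal, hypothetical sketch of a disparate-impact check on algorithm outputs.
# All names and thresholds are illustrative assumptions.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs, e.g. [("group_a", True), ...]."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    return {g: approvals[g] / totals[g] for g in totals}

def adverse_impact_ratios(rates):
    """Each group's approval rate divided by the highest group's rate."""
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

if __name__ == "__main__":
    # Hypothetical decision records: (protected-class group, approved?)
    sample = ([("group_a", True)] * 80 + [("group_a", False)] * 20
              + [("group_b", True)] * 50 + [("group_b", False)] * 50)
    rates = approval_rates(sample)
    for group, ratio in adverse_impact_ratios(rates).items():
        flag = "review" if ratio < 0.8 else "ok"  # common four-fifths rule of thumb
        print(f"{group}: approval {rates[group]:.0%}, impact ratio {ratio:.2f} ({flag})")
```

In a real engagement, this sort of measurement would run on actual decision records and be paired with legal analysis of whichever fairness standard the applicable regulation imposes.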

Kosnoff and Borden said they expect regulation will increasingly involve a balancing test weighing the good against the harm a company introduces to the market by using an algorithm. Companies have to be a part of that conversation.

“There’s a lot of people talking to these regulators about their concerns, especially in the consumer rights area, which are valid concerns in many cases,” Borden said. “But unless we have a countervailing voice that is based on data and testing that is not just handwringing kind of rhetoric, we’re not going to get good regulation out of it.”

‘Algorithmic fairness’

A class action filed against Wells Fargo Bank this March by Black homeowners in California underscores the risks that come with algorithms. The plaintiffs claim the financial institution used a “race-infected lending algorithm” that disproportionately denied refinancing applications from Black homeowners compared with white homeowners.

“The numbers associated with Defendants’ misconduct tell a shameful story, without any legitimate explanation,” the amended complaint in Aaron Braxton, et al. v. Wells Fargo Bank, N.A., et al., 3:22-cv-01748, states. “Data from eight million refinancing applications from 2020 reveal that ‘the highest-income Black applicants [had] an approval rate about the same as White borrowers in the lowest-income bracket.’”

Faegre Drinker is not representing any of the parties in the California litigation. Wells Fargo had not filed a response at IL deadline.

Central to the Wells Fargo lawsuit is fairness. As algorithms increasingly determine how each consumer will be treated, many are questioning the objectivity of the calculations. The Faegre team sees attorneys as being able to help provide the answers.

“The concepts of algorithmic fairness are something that lawyers can become familiar with,” Borden said. “It’s most important that lawyers understand how algorithms are built and how they work and how their output is used. It sounds like magic but it really isn’t.

“The tricky bit is lawyers basically look up the answer in the law and they go and look at the client and compare the two,” Borden continued. “The problem (with algorithms) is we don’t have any law to compare it to.”•

