Ellen Morrison Townsend: AI isn’t new legal risk — it’s old liability at greater scale


Artificial intelligence is not creating new legal risk. It is exposing — and accelerating — legal risk that has been hiding in plain sight for decades.

In 1996, I proposed legislative protections addressing a problem in insurance underwriting: the use of variables that appeared neutral but operated as stand-ins for protected characteristics — particularly for domestic violence survivors.

Nearly three decades later, that same problem is being rediscovered under a different name.

Much of today’s discussion around artificial intelligence begins with the assumption that AI has introduced entirely new legal challenges. It hasn’t. What AI has done is scale an existing one — quietly, efficiently, and with a level of confidence that can make flawed decisions look inevitable.

The issue isn’t new. The speed and scale are.

We have seen this before

Indiana law has already confronted the core problem now being repackaged as “algorithmic bias.”

In the 1990s, legal analysis of insurance underwriting revealed that ostensibly neutral inputs — claims history, geographic data and other risk indicators — were functioning as substitutes for protected traits. Insurers did not need to ask impermissible questions directly. The data answered for them.

That doctrine is not unsettled. It is not obscure. And it did not disappear.

The same structure, now at scale

Artificial intelligence systems replicate that same structure.

The difference is not conceptual. It is operational.

Where decisions once occurred one file at a time, they are now made continuously and at scale. That shift does not change the legal analysis. If anything, it raises the stakes. A system that produces biased outcomes thousands of times over does not dilute liability — it concentrates it.

The “black box” is the wrong question

Much of the current discourse focuses on AI’s so-called “black box” nature. That framing misses the point.

Courts do not require plaintiffs to reverse-engineer a decision-making system to prove discrimination. They look at patterns. They look at outcomes. They examine whether the asserted business justification holds.

Those principles apply whether the decision-maker is human or algorithmic.

And the inability to explain how a system reaches its conclusions is not a shield. It is a problem. Deploying a system you cannot explain is not a defense — it is a measurable legal risk.

This is not a new theory

Contemporary scholarship on AI is increasingly describing, in technical terms, what the law has long recognized.

In the essay “Big Data’s Disparate Impact,” published in the California Law Review, professors Solon Barocas and Andrew D. Selbst explain how data-driven systems reproduce discrimination through facially neutral variables that correlate with protected traits. What is now described as bias embedded in data is, in legal terms, the same proxy problem identified decades ago.

I worked through that problem as an Indiana University Maurer School of Law student in Bloomington, sitting in the law library stacks, trying to make sense of how neutral risk factors could produce unlawful results. At the time, it felt like a narrow issue tied to insurance.

It wasn’t.

It was the structure.

And that structure is now embedded in AI systems.

This is already an Indiana issue

For Indiana businesses, this is not theoretical.

State law has already addressed the misuse of facially neutral criteria in contexts like insurance underwriting. Statutes such as Indiana Code § 27-8-24.3 make clear that formal neutrality does not excuse substantive discrimination. Federal law imposes parallel constraints in employment and lending.

AI is now being integrated into each of those domains:

  • hiring and screening
  • lending and credit decisions
  • insurance underwriting and risk assessment

Regulators are not waiting for AI-specific legislation. They are applying existing law — because they can.

The bottom line

There is a tendency to treat AI governance as something that will be worked out later.

It won’t be.

The legal framework is already in place. It has been for decades. AI does not sit outside that framework — it falls directly within it.

The real risk is not that these systems are too complex to regulate.

It is that they are doing, at scale and with consistency, what businesses have long been told they cannot do at all.

The law is not catching up to AI. AI is walking straight into a legal doctrine that has been waiting for it since 1996.

__________

Townsend is a partner at Due Doyle Fanning Alderfer LLP.
