Rodney R. Nordstrom: The future of automated justice: Could AI replace juries?

Opinion / Viewpoint
In an experiment conducted earlier this year at the University of North Carolina School of Law, AI-simulated jurors were asked to deliberate a real criminal case to a verdict. The result was then compared to the actual verdict.

Three artificial intelligence platforms (OpenAI’s ChatGPT, Anthropic’s Claude and xAI’s Grok) acted as jurors and were asked to interact with one another and reach a verdict. The three AI platforms engaged in multiple rounds of analysis that revealed strikingly human-like reasoning. They discussed the case with each other and even changed their opinions about the evidence based upon each other’s reasoning.

After 13 minutes, all three AI platforms converged on a not guilty verdict, citing insufficient evidence of criminal intent. The AI jury verdict stood in sharp contrast to what happened when the case was tried with a human decision maker, but these conflicting verdicts are not all that surprising, considering that human juries often reach different verdicts on the same case facts.

The experiment also suggests that while AI can assist the legal system, replacing human jurors entirely would raise profound ethical and legal challenges, not to mention constitutional issues. The recommendation for the future of AI in courtrooms involves a hybrid approach in which AI supports decision making but human judgment remains paramount.

Simulated jury deliberations are not new. As agentic jurors and the newest generation of synthetic jurors become more sophisticated and human-like, their value and popularity as a trial research tool increase. Proponents argue that AI could reduce wrongful convictions by relying on data rather than emotion — greatly minimizing prejudice related to race, gender and socioeconomic background.

On the other hand, critics warn that AI lacks moral reasoning, empathy and the overall ability to assess and accurately predict human behavior. Synthetic jurors cannot weigh mitigating circumstances as their human counterparts can. Also, AI decision-making is notorious for relying on flawed data and algorithms, possibly leading to systemic injustice. AI might inadvertently propagate biases rather than eliminate them. Another big issue is the notorious “black box” problem: even AI developers cannot fully explain how their applications reach their conclusions. Finally, who is the accountable party in the event of an inaccurate verdict?

Ongoing experimentation with newer agentic and synthetic jurors needs to be further studied. As AI decision-makers become more sophisticated, confidence in artificial intelligence will become more commonplace. In one of my recent consulting cases, I asked an AI focus group to independently conduct its own deliberation on the same case facts given to a human focus group, for comparison. After a short time, to my surprise, the AI jury sent me an email requesting a missing exhibit that was necessary for its deliberations but had been shown only to the human focus group in the same case.

AI jurors can even assess the strengths and weaknesses of a case and interpret jury instructions as part of the deliberation process, and they may soon become indispensable tools for trial preparation and research. This experiment is another example of how AI juries blur the line between human judgment and machine-generated reasoning. Their role in actual decision-making demands caution, transparency and rigorous oversight, however. The question is no longer whether AI can mimic the mechanics of deliberation — it clearly can — but whether society is ready and willing to entrust machines with decisions that cut to the core of human liberty and moral responsibility. For now, the promise of AI-assisted justice is intriguing, even transformative, but the ultimate safeguard of fairness must remain rooted in human conscience.•

__________

Nordstrom, Ph.D., J.D., works as a trial psychologist primarily in Illinois and Indiana. He can be reached at [email protected]. Opinions expressed are those of the author.
