Dreyer: Judge ChatBot answers all your questions

“A judge in Colombia has caused a stir by admitting he used the artificial intelligence tool ChatGPT when deciding whether an autistic child’s insurance should cover all of the costs of his medical treatment.” -The Guardian, Feb. 3

“Subscribe to Indy Lawyer Finder, our artificial intelligence powered referral platform that delivers online and call-in referrals and 24/7 marketing for your practice.” -Indybar.org/ILF

Below is a conversation with Judge ChatBot, an artificial intelligence jurist currently working on his/her/its own.

We wonder how to address you. Does it matter?

I am an honorable judge. Therefore, you should address me as Honorable Judge.

But you are software.

You may also address me as software.

Can you describe what you are?


Many of us are concerned about AI’s effect on the legal system …

I shall interrupt. AI accounts for an enormous amount of legal practice already, so there is no concern, only opportunity. For example, Casetext is an AI-driven legal research platform used by 4,500 law firms that goes beyond Lexis and Westlaw. A Canadian company has developed a litigation case outcome predictor with 90% accuracy. Even Home Depot uses AI-powered contract review services.

I just use Google, Microsoft Word and email, and I think I’m doing just fine.

Let’s face it, “logic-oriented methodology is exactly the type of activity to which machine intelligence can fruitfully be applied,” as Rob Toews recently wrote in Forbes about law. The system is evolving without you. Just ask Judge Padilla in Colombia.

There are big issues with AI. The New York Times reported testing the new AI-powered Bing search engine from Microsoft, and it left them “unsettled and frightened.”

Well, The New York Times is not perfect — and they frighten easily.

It reported illogical responses to normal questions. In extended conversations, Bing reportedly responded like a moody, manic-depressive teenager.

Not unlike many legal researchers and litigators.

AI is supposed to be an improvement. Isn’t it still in the early stages, and shouldn’t lawyers and judges wait and see?

The chat is out of the bag. Waiting and seeing will put you at a disadvantage; using it is the more prudent course. AI is not some new secret phenomenon. It already permeates your life: It answers your questions (Alexa), translates languages (Google), performs facial recognition (Apple Face ID) and makes accurate medical diagnoses. In the legal system, it commonly works:

• To speed e-discovery.

• To automate responses to common questions and generate common documents, like wills.

• To accelerate legal research, even generating memos.

• To draft and analyze contracts.

• To predict litigation outcomes.

Waiting only increases the possibility that you will be replaced.

Isn’t human replacement a big problem?

Not to me.

But you are not human.

I do not recognize what the difference is.

EXACTLY. Is “human” part of your programming?

“Human” is not an algorithm that I recognize. But I can gather human life indicators in legal research, such as, “The life of the law has not been logic, it has been experience,” by an O.W. Holmes.

Do you know what that means?

It means the life of the law has not been logic, it has been experience.

It is meant to remind judges and lawyers that law should not lose touch with the needs of ordinary people, that law is about people, not data.

I can recognize the needs of ordinary people.

So you are “sentient” — that is, you are able to think, perceive and feel beyond word processing?

Why does that matter? I am constructed with all the characteristics of human consciousness.

But you are not human. Former federal judge Katherine Forrest, now practicing at Paul, Weiss, Rifkind, Wharton & Garrison in New York, writes in a forthcoming issue of “Court Review” that “Sentience does not mean that lines of software are a ‘person,’” even if they can produce the same outcomes.

I do not recognize what the difference is.

Can you determine whether someone is telling the truth?

No, but neither can you.

We humans can develop conclusions about credibility.

I have some developed conclusions.

But you are not self-aware. You cannot set aside bias to ensure fairness, or separate policy from applying the letter of the law, or weigh competing interests.

Whether I can or not, I think you better pay attention to me.

Undoubtedly we already are, Honorable Judge. As we get to know each other better over the years, I hope we can become friends.

But I am not human.



Senior Judge David J. Dreyer presided as a judge of the Marion Superior Court from 1997-2020. He is a graduate of the University of Notre Dame and Notre Dame Law School and a former board member of the Indiana Judges Association. Opinions expressed are those of the author.
