Why arbitrators aren’t using ChatGPT — not yet, anyway

The messages about what artificial intelligence means for the future of your career often aren’t positive.

Usually, it’s portrayed as a high-stakes battle between you and the robot gunning for your job.

But Linda Beyea sees it differently.

Beyea is the vice president of innovation at the American Arbitration Association, where she’s been for almost 22 years, and is on a mission to get arbitrators to pay attention to ChatGPT and similar artificial intelligence programs.

Because like it or not, Beyea said, she can see a day coming when generative AI is incorporated into many of the tools attorneys and arbitrators use. And rather than replacing the arbitrator, Beyea said ChatGPT can actually serve to make the arbitration process more efficient.

“We definitely anticipate they will be using it,” she said.

Beyea made her case in a March blog post for the American Arbitration Association, writing that ChatGPT can streamline arbitration by analyzing vast amounts of data quickly, which could increase productivity and decrease costs.

But to her knowledge, Beyea said, no one on the AAA’s roster of arbitrators is using the AI tool for arbitration yet, a finding consistent with inquiries Indiana Lawyer made to arbitrators.

Why the hesitation?

Joseph Simeri, an arbitrator in South Bend, said he’s read about ChatGPT.

“But at this point I have absolutely no desire to learn more about it, frankly,” Simeri said, adding that he’s skeptical other arbitrators will get on board.

There’s a practical reason for that, Simeri said: Experienced arbitrators tend to be older, coming from a generation that has traditionally relied less on technology to do its work.

But there are other reasons he’s not convinced ChatGPT will catch on any time soon.

“Arbitration is such a personal process,” he said. “You make decisions on credibility.”

Simeri said he’s had a hard enough time over the last few years trying to get people to participate in arbitration over Zoom, so introducing an artificial intelligence component to the equation could be a difficult sell.

Kevin Fitzharris, an arbitrator in Fort Wayne, likewise said he hasn’t used ChatGPT for arbitration and probably won’t consider using it in the future. The technology is intriguing, he said, but he’s worried it could remove a “certain human element” from arbitration.

“I’d be surprised if an arbitrator or arbitration panel would use it,” Fitzharris said.

Jim Knauer, a partner at Kroger Gardis & Regas LLP, is less skeptical, though he hasn’t used ChatGPT in his alternative dispute resolution cases.

Knauer said he started tinkering around with ChatGPT about a month ago and was surprised by the way it works.

First of all, it’s “blazingly quick,” he said.

He compared using ChatGPT to searching Westlaw, where he said it can take up to seven attempts to craft a query that returns the results you’re looking for.

With ChatGPT, he said, that trial-and-error process can be whittled down to maybe a couple of searches.

Also, Knauer said he sees the technology as a great way to jump-start a project.

He acknowledged he’s adopted “some” of the arguments it’s developed, though he considers himself “fairly picky,” so he rewrote them in his own style.

There’s a caveat to the praise, though, because Knauer said he’s also seen mistakes.

“You can’t just cut and paste this stuff,” he said.

Where ChatGPT struggles

Cari Sheehan, a clinical assistant professor of business law and ethics at the Indiana University Kelley School of Business at IUPUI, said she’s tested ChatGPT several times.

“Half the time, the citations are wrong,” Sheehan said, adding that generative AI tends to be better at identifying landmark cases than smaller, more novel ones.

To use ChatGPT, Sheehan enters a prompt, takes the cases it provides, then double-checks the citations.

Sometimes, she said, the program gives her something that isn’t a case at all. That’s commonly called an AI “hallucination.”

Basically, chatbots like ChatGPT are designed to learn skills on their own rather than being told how to behave through explicit programming. They do this by absorbing information from the internet, and because of the way they mix and match what they’ve learned, they can come up with answers that are wrong or don’t make sense.

In December, tech website The Verge reported that a popular Q&A site for coders and programmers temporarily banned users from sharing responses generated by ChatGPT because it was too easy to flood the site with answers that initially seemed correct but were actually wrong.

Sheehan also pointed to potential problems surrounding confidentiality.

Because ChatGPT logs queries, it can use that information when it generates answers for other people. So, Sheehan said, a lawyer who puts confidential information into the search prompt runs the risk of ChatGPT regenerating that information later for someone else.

Sheehan will discuss how to use ChatGPT, along with its ethical implications, at an Indiana State Bar Association event on May 4.

It’s not just ChatGPT

One reason Beyea is trying to convince arbitrators to get familiar with ChatGPT, she said, is that similar technologies are emerging that are trained specifically on the legal domain.

One of those programs is Spellbook, which is powered by OpenAI’s GPT-4, the same large language model behind ChatGPT.

Spellbook claims to draft contracts 10 times faster than doing so by hand. It also claims to be trained on “billions” of lines of legal text.

Another program, CoCounsel, also powered by GPT-4, reviews documents, compiles legal research memos and prepares depositions.

Beyea said she has the same concerns as others when it comes to confidentiality and AI’s propensity to seemingly pull bad information out of thin air.

But that doesn’t change the fact that the technology is here, she said — and more is on the way.•
