AI crash course helps launch ISBA’s yearlong series


No presentation about the role of artificial intelligence in the legal community would be complete without at least mentioning the New York attorneys who got in trouble for submitting a court brief that cited nonexistent cases generated by ChatGPT.

Dan Linna, speaking about AI at the Indiana State Bar Association’s Annual Summit on Friday, didn’t pass up the opportunity. Having read the pleadings and the transcript of the argument in that case, he said, the lawyers came out of it looking bad.


But Linna, director of law and technology initiatives at the Northwestern Pritzker School of Law, didn’t use the example to scare lawyers away from using AI.

Instead, he treated it as a teaching moment.

Mistakes will happen, Linna said, but there are ways to “mitigate these possibilities” — such as using Westlaw or LexisNexis to confirm a case does, in fact, exist.

“There are ways computationally to solve some of these problems,” he said.
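The kind of computational check Linna alludes to can be sketched in a few lines. This is an illustrative assumption, not his actual method: the `VERIFIED_CASES` set stands in for a real research database such as Westlaw or LexisNexis, which would be queried in practice.

```python
# Hypothetical sketch: flag cited cases that cannot be verified before filing.
# VERIFIED_CASES is a stand-in for a real legal research database.
VERIFIED_CASES = {
    "Brown v. Board of Education, 347 U.S. 483 (1954)",
    "Marbury v. Madison, 5 U.S. 137 (1803)",
}

def unverified_citations(citations):
    """Return the citations not found in the verified database."""
    return [c for c in citations if c not in VERIFIED_CASES]

draft = [
    "Brown v. Board of Education, 347 U.S. 483 (1954)",
    "Fake v. Nonexistent, 999 F.3d 1 (2021)",  # would be flagged for review
]
flagged = unverified_citations(draft)
```

Anything the check flags would go to a human for confirmation rather than being filed as-is.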

Linna’s keynote speech at the Annual Summit kicked off a yearlong series for the bar association called “AI in the Legal Industry: An ISBA Series Presented by LexisNexis.” The series will include continuing legal education events as well as practice-specific resources from various ISBA sections and committees.

Similar to the bar association’s goal over the next year, Linna focused on basic education surrounding AI.

When it comes to technology that’s shaping the legal field, Linna told attendees he likes to think of three buckets: rules-driven AI, data-driven AI and data science.

A lot of the conversations about AI right now fall into the data-driven AI bucket, he said, which includes the large language models behind generative AI — think ChatGPT. Those are “probabilistic tools,” Linna said, which means they use their training to make predictions about which words should appear or whether a document is relevant.
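To make “probabilistic” concrete, here is a toy sketch of next-word prediction. A real large language model is vastly more complex; this bigram model just counts which word follows which in a scrap of training text and reports the most likely successor with its probability.

```python
from collections import Counter, defaultdict

# Count next-word frequencies in a tiny training corpus.
training_text = (
    "the court granted the motion the court denied the appeal "
    "the motion was granted"
).split()

follows = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most likely next word and its estimated probability."""
    counts = follows[word]
    total = sum(counts.values())
    best, n = counts.most_common(1)[0]
    return best, n / total
```

Given the word “the,” the model predicts whichever successor it saw most often — a prediction, not a lookup of verified fact, which is why fluency and accuracy can come apart.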

In that sense, Linna said AI hallucinations — where the program answers a prompt with something that’s clearly wrong — are more of a feature than a bug, because generative AI tools are intended to be creative.

Linna asked how many people in the room have used ChatGPT, and about half raised their hands.

“That’s pretty good,” he said. “I’d really encourage all of you to test it out.”

However, that isn’t a full-scale endorsement of turning law offices into AI hubs.

“I don’t recommend that you do legal research using ChatGPT,” Linna said, adding that one of the tool’s most practical applications is taking paragraphs or even whole documents and helping to make them more concise or strongly worded — whatever the user needs.

Linna also offered other potential use cases for AI, including summarizing a transcript and extracting important information from documents.

No matter the use, Linna encouraged everyone to learn more about prompt engineering — or the practice of designing the right question or command for generative AI tools to produce what you’re looking for.

“These systems work much better when you give them the right prompt,” he said.


Brian Beck, Midwest legal technology executive for LexisNexis, spoke before Linna and gave the example of writing a letter of recommendation.

A poor prompt would say something like, “Write a letter of recommendation.” A better prompt includes the person’s name, how you know them and what personal qualities the letter should highlight.
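The contrast Beck describes can be shown by simply assembling the richer prompt as a string. The field names here are illustrative assumptions; any generative AI chat tool would accept either version as input.

```python
# Sketch of a poor prompt versus a detailed one, per Beck's example.
def build_recommendation_prompt(name, relationship, qualities):
    """Assemble a detailed recommendation-letter prompt."""
    return (
        f"Write a letter of recommendation for {name}. "
        f"I am their {relationship}. "
        f"Highlight these qualities: {', '.join(qualities)}."
    )

poor_prompt = "Write a letter of recommendation."
better_prompt = build_recommendation_prompt(
    "Jane Doe", "former supervisor", ["diligence", "clear writing", "teamwork"]
)
```

The detailed prompt gives the model the specifics it would otherwise have to invent — which is the point of prompt engineering.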

“It’s not going to give you the final work product,” Beck said, “but man, if you can get a really good head start, it really is amazing to use.”

‘I’m super excited about it’

Bryan Lubic graduated from law school in 2006 and said he’s returning to the legal field after a 15-year gap spent in higher education. Lubic is now working as a law clerk in Rockville and will take the bar exam next February.


Lubic took notes during Linna’s presentation and said afterward that it will be important to “steer” AI in order to “proactively create the results we want.”

“I love it,” he said of AI, “because I have a natural inclination and affinity for technology.”

When it comes to using something like ChatGPT to get writing feedback, Lubic said that’s been happening all along, but it was a more experienced attorney — rather than a generative AI tool — giving that kind of advice and analysis.

“I’m super excited about it,” he said.
