Can ChatGPT practice law? OpenAI faces first-of-its-kind lawsuit in Illinois.


A first-of-its-kind lawsuit is making its way through a federal court in Illinois, challenging the relationship between clients and attorneys, attorneys and artificial intelligence, and artificial intelligence and the rule of law.

In early March, Nippon Life Insurance Company of America filed a lawsuit in the United States District Court for the Northern District of Illinois against OpenAI, the AI research company behind the development of popular AI chatbots such as ChatGPT.

Nippon’s lawsuit rests on three claims against the AI company:

  • The chatbot wrongly interfered with a contract agreement between Nippon and a policyholder.
  • The chatbot aided the policyholder’s abuse of the legal process.
  • The chatbot provided legal assistance to the policyholder without being licensed to practice in the state.

It’s that final claim regarding the unauthorized practice of law that attorneys who spoke with The Indiana Lawyer say is a first in the legal profession.

The allegation raises the question: How do you sue artificial intelligence for practicing law without a license?


While attorneys who spoke with The Lawyer aren’t sure the insurance company will succeed with its claims, the contents of the company’s argument and the circumstances surrounding the case continue to shed light on how artificial intelligence is being woven into the fabric of the legal practice and how licensed attorneys should respond to the technology.

“Bringing cases like this helps us zero in on where we as a profession and a society want to put that needle of, how much responsibility is on the makers of these tools and the makers of the systems versus not?” said Brian McGinnis, partner at Barnes & Thornburg LLP and co-chair of the firm’s artificial intelligence group.

The case

Nippon’s lawsuit stems from an earlier settlement agreement made between the insurance company and an Illinois woman who was a participant in her employer’s long-term disability policy.

In July 2019, the woman submitted a long-term disability claim to Nippon, which was approved that August but later terminated in 2021 when it was determined she was no longer disabled according to the policy’s definition of disability.

The woman responded to the termination in December 2022 with a lawsuit in the Northern District of Illinois accusing Nippon of violating her disability policy by terminating her benefits.

The parties reached a settlement agreement in January 2024. As part of the settlement, the woman agreed to “forever and irrevocably” release the insurance company from any liabilities, claims, damages or actions related to her policy or claim, according to Nippon’s lawsuit. She also agreed to dismiss the case against Nippon with prejudice.

Nippon issued a settlement payment to the woman.

A year after the settlement was finalized, however, the woman wrote to one of her attorneys saying she believed the settlement was reached without the inclusion of important facts and documentation and expressed her desire to reopen the settlement.

The attorney refuted her claims that the case omitted key evidence and said that because it was dismissed with prejudice, the case couldn’t be reopened.

According to Nippon’s lawsuit, after speaking with the attorney, the woman uploaded his response to ChatGPT and asked the chatbot if she was being “gaslighted” by the attorney.

ChatGPT responded that she was, and the woman reportedly fired the attorneys and began using ChatGPT to research how to vacate the settlement agreement and reopen the lawsuit in early 2025. The northern district court denied her motion to reopen the case.

Using ChatGPT, the woman crafted and filed another lawsuit in February 2025, asserting similar claims against disability service support organizations and later adding Nippon as a defendant.

As of March 2026, the woman has filed nearly 50 motions she created with ChatGPT, according to Nippon’s lawsuit.

Nippon is now suing ChatGPT’s parent company for tortious interference with a contract, abuse of process and the unlicensed practice of law.

Questions remain

At the top of the list of questions surrounding the lawsuit is how artificial intelligence can be sued for practicing law illegally if the entity providing the legal advice is not human.

“I think it’s a creative approach to what has been otherwise a sort of straightforward, common-sense gap in the way we talk about unauthorized practice of law,” said Kaitlyn Stone, partner at Barnes & Thornburg and McGinnis’ co-chair in the AI group. “We always sort of talk about it in that passive voice, because the assumption has always been that the entity engaging in the unauthorized practice of law would be a person, because only people engage in the practice of law in the first place.”

There are significant questions about the viability of each claim Nippon presents, Stone said. The tortious interference claim alone requires intent on the part of a party, something AI, as an inanimate tool, lacks.

Complicating matters further is how the chatbot operates: it formulates responses to a user’s prompts rather than reaching conclusions independently.


“The underlying piece for all three of these claims is somewhat suggesting that ChatGPT had some sort of agency in providing whatever outputs it did, but none of those outputs are generated without an input by the user,” Stone said.

In October 2025, several months after the woman began using ChatGPT in her case, OpenAI amended its terms of service to say that the chatbot can’t be used for the provision “of tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional.”

The lawsuit says that “prior to the October 29, 2025, emendation, ChatGPT’s terms of use did not prohibit users from using ChatGPT to draft legal papers, conduct legal research, provide legal analysis or give legal advice.”

McGinnis said the change in the terms of service could work in Nippon’s favor. The plaintiff is trying to show that OpenAI knew there was an issue and therefore should have made the change sooner, he said.

But when determining where the line of responsibility should be drawn, McGinnis drew comparison to Section 230 of the Communications Decency Act of 1996, which protects online service providers such as YouTube from liability for content that is posted by users.

How much responsibility should be placed on the AI system versus a user is a question still up in the air, he said.

How will the case play out?

With those questions unresolved, Stone and McGinnis aren’t sure whether the insurance company can win its case.

“I just think there’s a lot of hurdles to get over, but it’s certainly creative,” Stone said.


Bill Henderson, an expert in ethics and professor of law at the Indiana University Maurer School of Law in Bloomington, said the case could set a precedent for how artificial intelligence is used in the legal profession moving forward.

It’s obvious that across the United States, people need legal help, he said. In many instances, people are using AI for that help.

But legal advice should still be pursued in the shadow of the law, he said. Lawyers know how to do that; AI does not.

In betting so much on the future of artificial intelligence, “we’re having to kind of deal with these unintended consequences,” Henderson said. This case is an example of that.

But instead of relying solely on artificial intelligence or ignoring it completely, Henderson sees the need for balance between the two. “Although human beings can place their trust in OpenAI or ChatGPT, I think it does make more sense to place their trust, whenever possible, in a human being that has the benefit of artificial intelligence,” he said.

Licensed attorneys have the domain and practical knowledge to navigate a case within the bounds of the law, but they can use AI to reach a better outcome more quickly, Henderson said. Through the Nippon case, he said, AI companies could see the need to engineer their products more carefully with respect to the advice they dispense.

The case shines a light on where the practice of law is heading, McGinnis said, acknowledging that AI will continue to be a topic of discussion and debate within the profession.

Attorneys are “not the sole keepers of legal advice anymore,” he said.

Nonetheless, he encourages attorneys to stay abreast of both the possibilities and restrictions AI poses for their own practices.

“These technologies are good at a great many things,” Stone said. “They’re not going to be great at the practice of law. They’re not going to be great at counseling clients the way that we do.”•
