AI use by pro se litigants presents challenges for courts


While the rapid development of artificial intelligence has proved beneficial to some, significant drawbacks linger, especially in the legal field.

In January, the United States Court of Appeals for the 7th Circuit issued an opinion addressing a pro se litigant’s suspected use of AI to prepare his brief.

The man denied using the technology in his case, but his brief attributed quotes that do not exist to two cases. The court nevertheless chose not to impose sanctions for the potential errors.

The litigant’s alleged use of AI, and the court’s decision not to pursue punishment, raise questions not of whether AI will be used in courtrooms but of when, and of how legal professionals should respond when it is.

Frank Emmert

“We have plenty of people who are using AI, and, let’s say it bluntly, they don’t really know how to use it responsibly. Nobody’s ever taught them. So that’s where all this copy-pasting comes from,” said Frank Emmert, a professor of law at the Indiana University Robert H. McKinney School of Law.

AI’s presence in court filings seems to only be gaining traction. According to data from legal analyst Damien Charlotin, parties are increasingly using the technology to bolster their cases.

Charlotin has been tracking the presence of AI hallucinations in legal decisions worldwide since 2023. In 2025 alone, his database found 294 instances of hallucinations by pro se litigants in cases across the United States.

So how should the legal profession address the issue? For now, it appears the answer varies case by case.

AI misuse in court

Ramifications for pro se litigants using artificial intelligence will look different from those for attorneys using the technology, simply because attorneys have the formal training and courtroom experience to understand what legal filings should look like, experts say.

That doesn’t mean attorneys don’t make such errors. Last February, U.S. Magistrate Judge Mark Dinsmore recommended that a Texas-based attorney who is licensed to practice in Indiana be sanctioned $15,000 for submitting briefs containing citations to nonexistent cases on three separate occasions while representing a client (In Re: Rafael Ramirez, 1:25-mc-00013-TWP-MJD).

The attorney admitted to relying on programs that utilize generative artificial intelligence to draft the briefs but said he didn’t know AI could generate fictitious cases and citations.

According to federal court case database PACER, no sanctions have been pursued in his case. The attorney’s status is listed as “active and in good standing” on the Indiana Roll of Attorneys.

And on Jan. 30, the Indiana Court of Appeals filed an opinion addressing two pro se litigants’ presumed use of AI in their appeal of breach of contract and fraud claims from a former contractor.

In Steve Wilcox and Melissa Wilcox v. Matthew A. Gingrinch and Grateful Home Exteriors, LLC, 25A-PL-1157, the appellate court pointed to the litigants’ 14 fabricated cases, mischaracterizations of real cases and citations to out-of-state cases as evidence that AI was likely used without independent verification.

The court determined that the litigants’ errors went “far beyond minor infractions,” as the litigants built their primary arguments on “nonexistent authorities.”

Because the litigants failed to form a cogent argument, the judges stated they could not provide proper appellate review of any of the litigants’ issues and deemed them waived, affirming the trial court’s judgment.

Despite the number of errors made by the pro se litigants, the appellate court did not impose sanctions on the pair but cautioned both them and future litigants that it might impose sanctions going forward.

“We acknowledge that pro se litigants face distinct challenges when using generative artificial intelligence tools for legal research,” the court said, highlighting that litigants don’t typically have access to the same tools that attorneys have to verify cases and citations, such as Westlaw and LexisNexis.

“These practical realities, however, do not excuse the filing of briefs that rely on nonexistent legal authority. Pro se litigants are held to the same standards as licensed attorneys, and courts do not accommodate litigants — whether represented or not — who support their arguments with fabricated cases,” Judge Melissa May wrote.

Emmert said he believes that strong warnings such as May’s are crucial to keep similar issues from recurring, especially when a litigant is intentionally trying to manipulate the court.

“The only way you’re gonna discourage people from submitting deep fakes in their legal briefs is by making it very clear: If you get caught with that kind of stuff, that’s a crime, basically,” Emmert said. “If people get away with it, that’s an open invitation to do that and basically say, ‘Hey, if they don’t figure it out, then I win my case, and if they figure it out, OK, I get told off, but that’s it.’”

Addressing the problem

Maya Markovich is executive director of the Justice Tech Association, a nonprofit trade organization that supports companies building technology for those who do not have a lawyer helping them with their legal problem. Several platforms the organization supports use AI.

Courtroom5 uses AI to offer pro se litigants a comprehensive case management system that helps them maintain a record summary, analyze claims and defenses, research and apply case law and generate court-ready legal documents. Other platforms include Contend, which deploys AI as a legal guidance system for individuals who cannot afford traditional legal representation. The platform helps users understand their rights and assess their situation. Thurgood uses AI to help workers identify employment discrimination claims. Clearbox provides support for immigration applications. Herbie helps people create estate plans, and HelloDivorce helps with marriage dissolution.

“We believe ethical technology can and should be part of the solution to the access to justice crisis,” Markovich said in an email to The Indiana Lawyer.

But there are real risks in unsupervised or unprepared AI use, she said.

The nonprofit is seeing an overreliance on generic AI tools that might hallucinate or provide jurisdictionally incorrect advice; a lack of understanding of AI’s limitations, leading users to treat outputs as definitive legal advice; and poor prompt quality, which can produce misleading results.

Justice tech companies can help mitigate those risks.

“Self-represented litigant[s] relying on generalist AI technology that is not built [to] address specific legal issues or mitigate consumer harm can end up with potentially life-changing negative impact,” she said.

Andrew Bloch

Hamilton Circuit Court Judge Andrew Bloch sees litigants using AI in family law proceedings on a daily basis. By now, he’s able to identify the technology based on a few simple formatting decisions from platforms like Claude and ChatGPT. But litigants’ use of AI in his court doesn’t typically require strong consequences, he said.

“I have not gotten to the level yet where I’ve had that come up substantively in a case where it’s decided the outcome. But I see things all the time,” Bloch said.

When it comes to regulating the technology on the legislative level, Emmert points to several roadblocks, namely how difficult it is to keep up with AI’s evolution. By the time legislation is drafted and adopted, AI has surpassed the law, he said.

“We’re regulating AI that doesn’t exist anymore because it’s surpassed already by the latest iterations,” Emmert said.

Still, efforts are being made on both the state and federal levels to keep up with developments.

California’s Transparency in Frontier Artificial Intelligence Act, for example, went into effect on Jan. 1 and requires transparency from AI developers about their development practices. The act also allows companies and the public to report safety incidents to the state’s Office of Emergency Services.

The United States in general has been slower to adopt decisive measures on AI regulation, Emmert said. In December, President Donald Trump signed an executive order to establish an AI Litigation Task Force that challenges individual state laws inconsistent with the administration’s goal of developing a “minimally burdensome national policy framework.”

Right now, the most immediate solution at the court level might not be a one-size-fits-all rule. Rather, it might be best to leave any discipline to judges’ discretion, weighing litigants’ intentions.

“I think this is the line that you can actually draw: We must accept that people will use AI to enhance whatever work product they are doing,” Emmert said.

However, education on ethical AI use should still be pushed, Bloch and Emmert agree.

“AI also gives us the possibility for unethical use,” Emmert said. “And unethical use comes from people who have no idea what they’re doing and just copy-pasting without checking.”
