When a California court discovered more than 90% of the quotes in an attorney’s brief were generated by ChatGPT last September, the long arm of the law came down hard. The lawyer was ordered to pay a $10,000 fine for “filing a frivolous appeal, violating court rules, citing fake cases, and wasting the court’s time and the taxpayers’ money.”
The Indiana Supreme Court is trying to help judges navigate the use of artificial intelligence (AI) and generative artificial intelligence (GenAI) in the judiciary. Last fall, the Court released resources through its AI Governance Committee for local courts to develop AI policies.
Janelle O’Malley, committee chair and director of e-filing and innovation at the Indiana Office of Court Technology, said the rise of AI in the legal system nationwide prompted the need for clear guidelines.
“When judges are thinking about using AI, they need to focus on following the ethical rules that they have to follow in everything they do,” said O’Malley. “And that’s particularly true when they’re working with sensitive or confidential court data.”
The bench card, checklist, model policies and buyer’s guide provided to trial courts correspond to beginner, intermediate and advanced levels of AI adoption. The Court’s Innovation Committee plans to offer AI tool recommendations and further explain how judges can incorporate software into their workflows.
Hamilton County Circuit Court Judge Andrew R. Bloch, one of three trial court judges on the AI Governance Committee, believes AI has amazing potential, but judges must have a knowledge base first to understand the technology’s capabilities.
“It doesn’t replace doing the work. It doesn’t replace honing your craft. It’s not the end result. It’s a tool to get you there. Ultimately, I’m still responsible for the output at the end of the day,” he said.
Creating a baseline
The Court’s AI Governance Committee was formed in the fall of 2024 on the recommendation of former chief administrative officer Justin Forkner, who served as co-chair of the National Center for State Courts’ AI Rapid Response Team. O’Malley said the group understood that local judges had a greater responsibility regarding AI usage beyond their own familiarity with the tools.
“[Judges are] responsible for managing their staff’s use of technology and ensuring their staff is following the guidelines and looking at the accuracy of anything the tool produces and how they’re going to incorporate that into their practice,” O’Malley said. “They’re also dealing with AI use by litigants and attorneys.”
In the first phase of the committee’s work, Court staff drafted an internal AI use policy for the Court and its employees.
“We had data technology experts and a delegate from our disciplinary commission to examine the ethical rules. We had some other staff who work with judges on how they research and how they use [legal research platform] Westlaw,” said O’Malley.
In the second phase, three trial court judges — Bloch, Morgan County Superior Court Judge Dakota R. Van Leeuwen and St. Joseph County Magistrate Judge William L. Wilson — joined the committee. Their goal was to draft model policies that local courts, particularly smaller courts with fewer resources, could use to create AI protocols.
“Everyone’s coming into this with a different level of knowledge. Some people don’t think they’ve used AI before, and some judges are really in tune with AI,” Bloch said. “We needed to create a baseline so the judges can evaluate whether these programs are right for their courts. And the committee’s not saying, ‘You have to use AI.’ We’re not saying that at all. You just need to be competent in the overall technology.”
Court guidelines
The AI Governance Committee’s bench card includes the key principles for court use of AI: transparency, accountability, awareness and confidentiality. The quick-reference guide also provides questions to ask before adopting AI tools and red flags to watch for, such as tools that don’t reveal how conclusions are reached or how questions are answered.
The implementation checklist walks trial court judges through four phases of developing AI policies: planning, drafting, training and monitoring.
“That checklist talks about who you should include on your local committee, what you need to look at, things like, how is the staff currently using AI? What kind of concerns do the judges have?” said O’Malley.
The model policies cover many topics, including the use of open, closed (non-sequestered), and closed and sequestered AI models. For example, ChatGPT is an open model anyone can use. A closed (non-sequestered) model is a private-cloud version of a commercial large language model (LLM), while a closed and sequestered model is an LLM installed on a server used only by judges and staff.
The model policies also address how to handle different types of data, such as personally identifiable information (PII), non-public court data and public court data. For example, the Court advises that non-public court data and PII should not be entered into an open AI model or a closed model that is non-sequestered.
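To make that rule concrete, here is a minimal, hypothetical sketch of how a court IT office might encode the data-handling guidance in software. The model categories and data classes mirror the terms above, but every name, type and function in the code is invented for illustration; this is not taken from the Court’s materials.

```python
# Hypothetical sketch of the policy described above: PII and non-public
# court data stay off open and non-sequestered models; public court data
# may go to any category. All names here are invented for illustration.

from enum import Enum

class ModelType(Enum):
    OPEN = "open"                            # e.g., public ChatGPT
    CLOSED = "closed, non-sequestered"       # private-cloud commercial LLM
    SEQUESTERED = "closed and sequestered"   # LLM on a court-only server

class DataClass(Enum):
    PUBLIC = "public court data"
    NON_PUBLIC = "non-public court data"
    PII = "personally identifiable information"

# Which model categories each data class may be sent to, per the
# guidance quoted in the article.
ALLOWED = {
    DataClass.PUBLIC: {ModelType.OPEN, ModelType.CLOSED, ModelType.SEQUESTERED},
    DataClass.NON_PUBLIC: {ModelType.SEQUESTERED},
    DataClass.PII: {ModelType.SEQUESTERED},
}

def may_submit(data: DataClass, model: ModelType) -> bool:
    """Return True if policy permits sending this data class to this model."""
    return model in ALLOWED[data]

assert not may_submit(DataClass.PII, ModelType.OPEN)             # blocked
assert may_submit(DataClass.PUBLIC, ModelType.OPEN)              # allowed
assert may_submit(DataClass.NON_PUBLIC, ModelType.SEQUESTERED)   # allowed
```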
Human oversight, notably policing inputs and monitoring outputs, is also stressed.
“One thing we advocate for in our model policies is developing your process so that there is a responsible human review of whatever AI outputs you’re using before you incorporate that into your work product,” O’Malley said.
The buyer’s guide is an aid for judges to develop procurement requirements before implementing or contracting to purchase AI software.
“What they need to be speaking with vendors about, and how to make sure the safeguards that need to be in place are there when the software is dealing with their data,” O’Malley said. “The things they need to watch out for before they get into a contract with an AI vendor.”
The ‘human element’
Luke Britt, legal counsel and public information officer at Marion County Superior Court, said the trial court’s AI policy has evolved over the past few years and aligns with the Court’s guidelines. The staff uses the technology for tasks such as summarizing notes during meetings and transcript preparation.
“There’s a balance between wanting to use [AI] for judicial efficiency but also making sure that the subjective judgments aren’t being taken over by technology. There’s always going to be a human element to judging credibility or making judgment-based intellectual decisions,” he said.
Bloch, who has an IT background, said he and his staff follow the AI policy established by Hamilton County. One of his goals with the Court’s guidelines is to put trial courts unfamiliar with AI at ease.
“We’re just trying to teach judges how to look at [AI] and how to use the tools, and how to be on the lookout for other people that are using it and the possible advantages and disadvantages of that,” said Bloch.
While the AI Governance Committee also plans to develop guidance for attorney and litigant AI use in court, Bloch is already seeing lengthy pleadings he believes are AI-generated from self-represented litigants several times a week. In those cases, he asks the plaintiff to refile the motion in a plain and simple format.
“The people that I’m getting so far, I don’t believe they’re trying to mislead the court. They’re just trying to save money. I do lots of domestic relations work, and that’s one of the areas where people often don’t have attorneys, or maybe one side has an attorney. And expenses can be very high in those cases,” Bloch said.
Future AI concerns
While AI-generated text containing fabrications can be an issue for Indiana trial court judges, AI-generated photos submitted as evidence may also be problematic. In November, Bloch made a presentation to the Defense Trial Counsel of Indiana on identifying AI use in pictures.
“I walk people through and say, ‘Alright, if you were presented this picture in court, what would you do with it?’” Bloch said. “What would tip you off to think this is wrong?”
Bloch is also aware of AI-generated videos, referring to the Mendones v. Cushman & Wakefield, Inc. case in California. However, he hasn’t dealt with that predicament in his courtroom yet.
“It was a landlord-tenant dispute, and they had taken the video depositions and cobbled them all together and used AI to alter some of the video feeds from Ring cameras,” said Bloch. “That’s not been my experience. What do I see? Someone who’s cleaned up one of their photos in a family law case where they took a beer bottle off the table.”
With video and photos, Bloch said, there are at least signs or circumstances that indicate whether the evidence is trustworthy. The AI-generated elements that worry him the most are audio deepfakes.
“Your voice is your voice. And that’s the one that actually scares me quite a bit as someone who deals with a lot of domestic violence and a lot of phone recordings and things like that. It takes shockingly little to have AI make someone’s voice,” said Bloch.