Case spotlights challenges in detecting AI in student work


A recent lawsuit filed against Purdue Global Law School is drawing more attention to artificial intelligence’s place in the classroom and how it’s monitored.

Last month, a law student filed a complaint in the United States District Court for the Northern District of Indiana claiming the online law school unfairly dismissed her from the school over allegations of academic dishonesty.

The plaintiff argues that the inferences the school made about her actions can be attributed to a school-approved speech-to-text assistive technology she uses for her disability, which allows her to write more quickly. Any mistakes made in the assignments reflect inadequate verification of sources on her part, she said.

But the school says it is not the assistive technology that is in question, but rather the speed and accuracy with which the student was able to produce near-perfect essays in a matter of minutes.

Martin Pritikin

“It is doubtful that any student using permitted talk-to-text technology could do that without the assistance of prohibited AI,” Dean Martin Pritikin wrote in an email to the student, according to public court documents.

Representatives for Purdue Global Law School told The Indiana Lawyer that the school does not comment on pending litigation. And Purdue Global had not filed a public response to the accusations as of The Indiana Lawyer’s deadline.

On April 27, the district court granted the university’s motion to seal its opposition to the student’s motion for a temporary restraining order, which she filed to prevent the university from dismissing her from classes.

But court documents filed by the plaintiff point to Purdue Global’s AI policies, which state that Purdue Global faculty cannot rely solely on AI detection tools to determine whether academic integrity has been violated. The school acknowledges that the tools are inaccurate and unreliable.

The case

In her civil complaint filed in early April, Nicole Lawtone-Bowles, who lives in New York, said she was scheduled to graduate in August.

But Purdue Global Law School, part of Purdue’s online university for working adults, determined she had breached academic integrity three times, and as such, she was dismissed from the program.

Lawtone-Bowles said in her complaint that she became disabled several years ago when she was crushed between her work van and another vehicle. The disability affects her ability to type for long periods of time. Because of that, she uses speech-to-text technology to complete schoolwork. Purdue Global had approved her use of assistive technology.

She said her assignments are created by speaking her own words, which are converted into text. That process makes her writing appear faster than, and different from, traditional typing, she wrote.

The law school accused her of academic dishonesty in multiple courses. Those accusations were based in part on the speed and appearance of her writing, her complaint states. The law school also raised concerns about citation issues in her assignments, she wrote.

She said that she explained her process for completing assignments and that she did not use AI or unauthorized assistance. Still, the law school found she committed academic dishonesty.

She has asked the court to stop her dismissal from the program while the case is pending, to order the law school to allow her to complete her coursework and graduate, and to remove the academic dishonesty findings from her record.

“My goal is to complete my Juris Doctor degree and become a lawyer so that I can advocate for disabled students facing barriers similar to those I have encountered,” Lawtone-Bowles said in a written statement to The Indiana Lawyer. “My lawsuit against Purdue Global Law School challenges what I believe to be a fundamentally flawed and discriminatory approach to allegations of artificial intelligence use in academic settings, particularly as it affects students with disabilities.”

The university, she said, relied on characteristics of her work that directly result from the accommodation — such as speed and formatting — to accuse her of academic misconduct.

“Institutions must adopt fair, consistent, and evidence-based processes that distinguish between legitimate assistive tools and prohibited conduct, while fully considering the impact of documented disabilities,” she said to The Lawyer. “The use of emerging technology in education must be handled with care to avoid reinforcing bias or penalizing those entitled to accommodations under
the law.”

Analyzing AI in the classroom

The lawsuit is illustrative of the challenges AI creates in classrooms for both faculty and students navigating its nuances.

Professor Frank Emmert teaches a course on AI and the law at the Indiana University Robert H. McKinney School of Law, where he instructs students on how and when to use artificial intelligence appropriately.

Given his experience with the technology, he doesn’t expect his students to avoid AI and doesn’t believe others should, either.

“It’s like you ask someone to come to a conference in San Francisco, but then you say, ‘but you can’t use any motorized transport,’” Emmert said.

He noted that he’s not familiar with the specific details of the Purdue Global case and doesn’t know the instructors’ individual AI policies. But he believes students’ responsible AI use relies in part on instructors who make clear what their expectations are surrounding the technology. If they accuse a student of misusing AI, they bear an elevated burden of proof to show why they believe AI was used, he said.

Massachusetts Institute of Technology Sloan Teaching & Learning Technologies, a team of technology and education experts serving the MIT Sloan School of Management, integrates AI education in courses and provides helpful tools instructors can use to lead classes.

As AI advances, so too do the tools available to detect AI use. But the progress becomes a game of cat and mouse as AI continues to develop beyond detection tools’ ability to keep up.

Before turning to specific tools to monitor AI use in the classroom, the learning technologies team first recommends setting clear standards, both verbally and in writing in the class syllabus. The team also recommends providing definitions of plagiarism in the context of AI tools.

Transparency in the classroom is another point of emphasis, one Emmert echoes in his own teaching.

“My students, with every exam, they get instructions that say, feel free to use AI,” he said. “However, you have to deal with it like you would use any other source. If you find some literature in a library, you have to put a footnote and give credit to the original author. And if you’re copying anything verbatim, you have to put it in inverted commas and identify the source.”

In the Purdue case, the full scope of practices and standards presented by the university to the student regarding her use of AI for classwork is unclear.

Emmert said he believes education might quickly reach a point in which some instructors choose to revert to more “primitive” ways of giving assignments and exams. Some faculty members are already headed that way, he said, noting some schools are going back to oral exams or placing students in an examination room without internet.

Although an advocate for AI now, Emmert said he too had to figure out how to work with the technology in the classroom because he knows how crucial AI will be in the years to come, particularly in the legal profession. In fact, some professionals he speaks to are beginning to require an ethical understanding of AI as a job qualification.

“I talk to a lot of attorneys out in practice and they say, ‘If you have good graduates, we are looking for people,’” he said. “But then they add, ‘By the way, if they don’t know how to use AI responsibly, they need not apply.’”

IU grants wide access to AI course

Indiana University offers students access to several AI tools that Emmert encourages students to test out.

Last month, the university opened its free generative AI course to the larger public, having slowly introduced the course to faculty, students and alumni over the last few months. GenAI 101 was launched in August to the school’s more than 114,000 students, staff and faculty and rolled out to alumni in October, according to a news release from IU.

The course is necessary to keep Hoosiers competitive in an evolving job market, Pat Hopkins, dean of the Indiana University Kelley School of Business, said in a news release.

Around 3,000 people enrolled in the course within the first week of its worldwide launch, said Brian Williams, lead professor for GenAI 101 and chair of the Virtual Advanced Business Technologies Department at the Kelley School of Business.

Williams helped develop the course, which is broken into eight modules and 16 lessons that users can work through at their own pace.

“The idea is that a learner can go into a lesson, learn something that’s helpful for them and apply it right away,” he said. “That was kind of the guiding ethos, was make it short, make it entertaining and make it very, very practical.”

A caveat course developers must contend with is the knowledge that AI will continue to progress, and the course must keep pace with it. Williams said developers have already begun anticipating what updates GenAI 101 may need moving forward.

The course is just one branch of IU’s efforts to expose students to the advancing technology: The Kelley School of Business now has a 200-level AI course for business students to take.

And last year, the business school launched its Kelley AI Playbook, a working guide for Kelley faculty but made public to help instructors integrate artificial intelligence into their teaching and grading.

“We’re trying to operate with a principle of just being transparent about AI use, being clear about when AI is right to use, when it’s not right to use, but never losing the human being in the oversight of the AI,” Williams said.•
