Trump’s executive order on AI creating uncertainty for businesses


In announcing a new executive order directed toward artificial intelligence, President Donald Trump decried what he described as “excessive state regulation” and raised concerns about the challenges of having 50 states, each with their own laws, regulating the lucrative and fast-changing technology.

The order, released Dec. 11, also calls for U.S. supremacy over China in developing the technology and for a unified federal framework for AI oversight.

For some attorneys, the order has added another wrinkle as they advise clients on AI usage and risk mitigation at a time when more and more companies are incorporating the technology into their business practices.

Scott Kosnoff

Scott Kosnoff, an Indianapolis-based partner with Faegre Drinker Biddle & Reath LLP, is also chair of the firm’s AI-X—or artificial intelligence and algorithmic decision-making—team.

A provision in the original version of Trump’s One Big Beautiful Bill would have prevented states and local governments from regulating AI systems and models for 10 years, but it ultimately got stripped from the final budget reconciliation bill signed into law by Trump.

The provision passed the House but was soundly defeated in the Senate, Kosnoff said. Trump’s executive order represents “the latest chapter” in efforts to preempt state lawmaking on AI.

“I don’t think anybody expected that the idea was dead,” he stressed, adding that he expects legal challenges to the Trump order.

Kosnoff said the companies he works with are concerned with what he calls AI’s downsides.

Brian McGinnis

Most companies that use the technology face some regulatory risk, some litigation risk and some reputational harm risk, he added.

Brian McGinnis, a partner with Barnes & Thornburg and a founding member and co-chair of the firm’s Data Security and Privacy Law practice group, said he felt the executive order had been issued somewhat quickly and noted that it didn’t go through Congress.

He said the move created a level of uncertainty for clients, something he termed “bad for business.” But he acknowledged that the law in general is not good at keeping up with technological advances.

Brian Jones

Brian Jones, a partner at Bose McKinney & Evans LLP, said Indiana, which has no state AI regulatory framework, wasn’t in the crosshairs of the executive order, unlike some other states.

He noted the order discourages states from enacting anything beyond minimal AI regulations.

“We’re not going to have a unified federal approach through an executive order,” Jones said.

Some states already have laws in place regarding AI, privacy

Kosnoff said the order comes after several states, including California and Colorado, have passed their own forms of AI regulation.

The latest Trump AI executive order specifically mentioned the new Colorado law banning “algorithmic discrimination,” which it claims “may even force AI models to produce false results in order to avoid a ‘differential treatment or impact’ on protected groups.”

Four states—Colorado, California, Utah and Texas—have passed laws that set some rules for AI across the private sector, according to the International Association of Privacy Professionals.

Those laws include provisions limiting the collection of certain personal information and requiring more transparency from companies.

McGinnis said other states, including New York and Illinois, are much farther along than Indiana in terms of proposing or passing AI-related laws and regulations.

He said his best guess is that some states will pause passing new AI laws until any legal challenges to Trump’s order play out in federal court.

McGinnis said he would be shocked if Indiana felt the need to come up with another AI regulation in the meantime.

Kosnoff pointed out that Florida Gov. Ron DeSantis has moved forward with a proposed “Artificial Intelligence Bill of Rights” and has questioned whether Trump’s executive order will have any impact on his state.

DeSantis announced the proposal a week before Trump signed his order.

The proposal is intended to “protect Floridians from footing the bill for Hyperscale AI Data Centers and to empower local governments to reject their development in their communities.”

“Today, I proposed new legislation on artificial intelligence and AI data centers to protect Floridians’ privacy, security, and quality of life,” DeSantis said during his Dec. 4 announcement. “Our AI proposal will establish an Artificial Intelligence Bill of Rights to define and safeguard Floridians’ rights—including data privacy, parental controls, consumer protections, and restrictions on AI use of an individual’s name, image or likeness without consent.”

Kosnoff stressed that not every state is going to take Florida’s comprehensive approach.

He said he was sympathetic to calls for a single federal standard for AI regulation.

“In the meantime, states feel compelled to fill the void Congress is leaving,” Kosnoff said.

What could it mean for Indiana?

Indiana does not have a comprehensive AI law.

Former Gov. Eric Holcomb signed Senate Bill 150 into law in 2024. The law established an AI Task Force to study and assess the use of AI technology by state agencies.

Jones said he didn’t think the order would hinder states that already require mandatory disclosures of AI models, but he also doesn’t expect Indiana to pass laws similar to those in Colorado, New York and California.

Indiana Attorney General Todd Rokita has been outspoken this year about some concerns he has with AI as it pertains to shielding children from inappropriate content and protecting consumers from unfair and deceptive conduct.

A spokesperson from Rokita’s office emailed The Indiana Lawyer two letters concerning AI that the Indiana attorney general has signed.

The first, signed by Rokita and 43 other attorneys general, was directed to industry officials, including leaders of Apple, Meta, Microsoft and Google.

That letter stated the attorneys general would take all legal actions to protect children from harm caused by AI.

The second letter, which Rokita and 35 other attorneys general sent to Congress in November, expressed their opposition to congressional preemption of state laws.

“It would be nice if Congress passed a uniform AI policy that appropriately protected Hoosiers from the potential harms and negative effects from AI,” the office spokesperson said.

The spokesperson said it’s too early to know what the federal government may actually do as a result of Trump’s executive order, especially as it relates to the creation of an AI Litigation Task Force to challenge state AI laws; the required evaluation of state AI laws considered “onerous” or in conflict with U.S. policy; and the Federal Trade Commission chairman’s expected policy statement on applying the prohibition on unfair and deceptive acts or practices under 15 U.S.C. 45 to AI models.

Any attempt to predict the implications of the executive order would be pure speculation, the spokesperson added.•
