Gumm: Alexa, what are the legal implications of generative AI?

At the crossroads of innovation and ownership, one finds intellectual property. The desire for creative advancement and the need for protection aren't necessarily in opposition, but the two must be balanced. This is particularly true of generative artificial intelligence tools, like ChatGPT, which are developing at a rapid pace and quickly becoming integrated into our workplaces.

By 2025, generative AI is expected to generate 10% of all data and 20% of all test data for consumer-facing applications. Generative AI is a subfield of machine learning that uses algorithms to generate new data, such as images or text, after "learning" to do so by reviewing third-party data. It is being used in industries such as manufacturing, legal, travel, health care and retail to help with research and testing, as well as to create everything from legal documents to building designs. As we open our workplaces and homes to these tools, they raise legal implications across a spectrum of substantive areas.

Copyright implications of AI

At its core, copyright protects original works of authorship, like poetry, music and computer software. There are several material copyright considerations associated with the use of generative AI tools, particularly in the workplace. Courts are just now tackling these issues, so it is unclear what the potential exposure may be as precedent develops.

First, to the extent that employees use an AI tool to generate content, that content may not be protectable under copyright law: in many jurisdictions (including the U.S.), a work must be authored by a human being to receive protection. The extent to which an AI tool is used to generate work product may therefore affect whether the resulting content can be afforded any copyright protection. Companies allowing employees to use AI tools to generate content should keep detailed records about the extent and manner of use and consider the contexts in which use of such tools should be prohibited.

Second, while "to err is human," generative AI tools are not foolproof. Because they are still being refined, there is a risk that content may inadvertently be copied from third-party sources that hold copyright protections for original works. Moreover, some AI tools have been known to produce inappropriate or misleading language, blatant errors and incorrect citations. The tools' creators continually attempt to improve functionality, but it remains a work in progress, so ultimate liability for any missteps by an AI tool may fall on the end user or their employer. As a result, content generated by an AI tool should receive internal review and refinement prior to use, particularly before external distribution.

Third, another risk involves the inadvertent creation of a derivative work. Simply put, a derivative work is a new work based upon one or more preexisting copyrighted works. As noted, AI tools review existing copyrighted works (often without the permission or knowledge of the copyright owner) to "learn" from them. Some legal scholars and artists take the position that content produced by a generative AI tool is therefore a derivative work of the copyrighted materials used to train the tool. Depending upon the circumstances, the generated content could be considered an infringement that exposes users to potential liability. Others counter that learning from existing materials to generate new content is nothing new: people have drawn inspiration from third-party materials for centuries, and if people can use such materials without permission, why should machines be treated differently? Litigation addressing this particular concern has already commenced.

Trademark implications of AI

While the copyright implications of using generative AI tools currently appear front and center, it would be foolish to think that trademark law will not also be affected. A trademark is a word, symbol or design that serves as a source identifier for products and services. U.S. trademark law grew from a desire to protect consumers, who have imperfect recollections. As a result, trademark owners were given the right to exclude others from using a confusingly similar mark in connection with similar goods and services. For example, Apple Inc. is the only party in the U.S. that can use the trademark APPLE in connection with smartphones. Such a notion appears to be in direct conflict with the U.S. economy's free-market system, but it was the compromise needed to protect consumers from nefarious activity. Hence, at the intersection of innovation, protection and ownership, we find trademark law.

Over the years, the way consumers interact with brands has consistently changed. In the 19th century, products were largely unbranded, and consumers relied upon shopkeepers to guide purchasing decisions. With the emergence of large chain stores, consumers were no longer limited to products in their own backyards; products began to have regional, national and global reach. Branding became vital because consumers were interacting directly with products: radio advertisements relied upon the aural aspects of trademarks, while the rise of television ads led brands to focus on the appearance of their branding elements to distinguish their products from those of others. Digital media didn't change the emphasis of brand creation but instead increased access to products and services from around the world. Consumers today have endless options.

What will change for trademarks with the prevalence of AI tools? Quite a lot (potentially). AI tools like Amazon's Alexa will serve as a buffer between consumers and brands, changing the way consumers interact with them. For example, certain AI applications make purchasing recommendations to facilitate repeat purchases, or build unique recommendations based upon a consumer's prior purchasing habits. In these instances, retail purchasing decisions are no longer purely responsive but predictive.

While this raises several complex legal issues regarding comparative advertising and rules around influencers, it also implicates traditional trademark concerns. First, if consumers are telling their AI tools what to add to a grocery list and are using brands to do so ("Alexa, please add Jif Peanut Butter to my shopping list"), the spoken brand, not the visual one, will be at the center of purchasing decisions, much as in radio advertising. As a result, the clearance of new brands may place a heavier focus on pronunciation than on visual impression, because consumers will be interacting with brands more aurally.

Second, a new set of issues arises when a consumer delegates purchasing to an AI tool. Suppose a consumer says "Jif Peanut Butter," but despite the command, Alexa purchases a knock-off brand called "Rif." Because the consumer was not involved in the actual purchase, who is the relevant consumer for purposes of an infringement claim? And is an AI tool capable of being "confused"? Clearly, there could be a misunderstanding or miscommunication between the consumer and the AI tool, but does that amount to consumer confusion for purposes of a trademark infringement claim?

As generative AI technology becomes more commonplace and integrated into our lives, it will undoubtedly affect the way our laws protect and enforce intellectual property rights. Only time will tell how those laws will be refined to navigate the proliferation of AI tools. However, one thing is clear: Generative AI tools are helping businesses grow in a multitude of industries. As with any tool, their effectiveness depends upon a company's ability to mitigate potential risk. Therefore, companies should create policies governing the use of generative AI tools by their employees, including employee training, an internal approval process and record-keeping, and they should keep AI functionality and its use by consumers in mind when considering and clearing new brands.•

__________

Stephanie Gumm is a partner at Faegre Drinker Biddle & Reath LLP. Opinions expressed are those of the author.
