By Jeff Kosc
Artificial intelligence offers great potential to positively affect virtually all areas of our lives. There is, however, significant potential for abuse and harm resulting from irresponsible use of AI. Perhaps you are a fan of “Black Mirror” or the “Terminator” movies, each of which portends a world where machine intelligence is a threat to humanity, particularly once AI becomes “smarter” than humankind. The concept of the “singularity” (the point where AI surpasses human intelligence) has inspired great science fiction, but it has also prompted warnings about the responsible use of AI. Studies have shown that AI systems can be adversely influenced by biased input data or by the express or inherent biases of their programmers. These warnings have led to a growing body of regulation around AI, which seems likely to increase as the technology develops.
The EU’s proposed Artificial Intelligence Act
As much of the concern over the adverse impact of AI involves negative effects on privacy rights and individual freedoms, it is unsurprising that the European Union has been at the forefront of regulation. The EU led the way in privacy regulation with its sweeping General Data Protection Regulation (GDPR), which has had extraterritorial impact as most global companies have adapted to comply with it in order to do business in the EU. In April, the European Union published its proposal for an Artificial Intelligence Act (the AI Act). While only a proposal at this point, the AI Act is likely to take effect in substantially similar form. Given its sweeping scope, it is also likely to have a broad impact on global commerce. Most notably, like the GDPR before it, the AI Act would impose very significant fines for violations.
The AI Act expressly prohibits the use of AI that deploys subliminal techniques or that exploits any vulnerabilities of a specific group of persons due to their age or physical or mental disability, in each case in order to materially distort a person’s behavior in a manner that causes or is likely to cause that person or another person physical or psychological harm. The AI Act also expressly prohibits the use of AI for “social scoring” (as is currently used in China) based on social behavior or known or predicted personal or personality characteristics. The final express prohibition in the AI Act is against use of “real-time” remote biometric identification systems in public spaces for law enforcement purposes, except for a targeted search for specific potential crime victims, prevention of a specific imminent threat to life or physical safety, and detection of perpetrators of certain major offenses such as terrorism, murder and drug trafficking.
Beyond its express prohibitions, the critical portions of the AI Act propose to regulate “high-risk” AI systems. These include AI systems intended to be used as a safety component of a product (or that are themselves a product) where that product is required to undergo a third-party conformity assessment under certain laws. High-risk systems also include certain systems specifically identified in the AI Act, including AI systems relating to remote biometric identification of individuals, critical infrastructure, educational and vocational training, recruiting and selecting employees, promoting or terminating “work-related contractual relationships,” access to essential services, law enforcement, border control management and administration of justice. The EU may also update the list to include other AI systems that “pose a risk of harm to the health and safety, or a risk of adverse impact on fundamental rights.”
For high-risk systems, the AI Act will require an iterative risk management system that is continuously updated to identify, manage, reduce and eliminate risks related to the AI system; these risks must be communicated to users of the system. The AI Act also requires that high-risk systems involving the training of models with data have appropriate data governance practices covering relevant design choices; data collection; relevant data preparation processing operations; the formulation of relevant assumptions; a prior assessment of the availability, quantity and suitability of the needed datasets; examination for possible biases; and identification of any possible data gaps or shortcomings and how they can be addressed. Developers and users will need to ensure that training, validation and testing data is relevant, representative, free of errors and complete, and that such data takes into account, in light of the intended purpose, the characteristics or elements particular to the geographical, behavioral or functional setting in which the high-risk AI system is intended to be used. In effect, responsible development and use of AI systems will be legally required to do business in the EU.
AI regulation in the U.S.
A few notable laws regarding the use of AI systems have already passed in the U.S. Illinois’ Artificial Intelligence Video Interview Act (820 ILCS 42/1, et seq.) regulates employers’ use of video interviews where AI analysis is applied to those videos. Under the Illinois act, before the interview, employers must notify applicants that AI may be used to analyze the video and to consider the applicant’s fitness for the position, provide the applicant with information explaining how the AI works and what general types of characteristics it uses to evaluate applicants, and obtain the applicant’s consent to be evaluated by the AI program as described in that information. Further, the prospective employer may not share the videos except as necessary to evaluate applicants’ fitness and must delete (and instruct others with copies to delete) copies within 30 days of any request. Also, if relying solely on AI analysis of video interviews, the employer must report demographic information to the Department of Commerce and Economic Opportunity annually.
Colorado has also passed a law that prohibits insurers from using any external consumer data and information sources, as well as any algorithms or predictive models that use external consumer data and information sources in a way that unfairly discriminates based on race, color, national or ethnic origin, religion, sex, sexual orientation, disability, gender identity or gender expression.
There are also pending laws in California, Massachusetts and Michigan regarding use of AI systems. While the Massachusetts and Michigan laws have limited applicability to governmental use, the proposed California Automated Decision Systems Accountability Act would require continual testing for biases during development and use of AI systems. In addition to these pending laws, Alabama, California, Hawaii, New Jersey, New York, Utah, Vermont and Washington have all authorized task forces or commissions to study the impact of AI systems on their respective citizens.
At the federal level, the Trump administration issued an executive order calling for federal agencies to adhere to certain principles in the use of AI within U.S. government agencies. This order, however, does not extend into the private sector.
Companies looking to commercialize AI will want to be mindful of the evolving regulatory landscape. Responsible development and deployment practices for AI systems are now a must at all stages.•
• Jeff Kosc is a partner in Taft’s intellectual property group. Reach him at [email protected]. Opinions expressed are those of the author.