Mills: AI’s impact, from driverless cars to lawyerless law firms


By Courtney David Mills

Artificial intelligence is not necessarily the same as automation. AI has no precise definition, but most commonly accepted definitions center on a complex function requiring human intelligence being performed by an automated computer algorithm. In other words, an AI algorithm is a very sophisticated computer program that is continually being refined, updated and improved (i.e., the program itself is capable of learning). Automation, by contrast, is generally a much more basic application of the same idea: a machine carrying out a fixed task the same way every time, without learning or adapting. While machines have been replacing human labor for hundreds of years, AI has accelerated the trend and threatened jobs once considered safe from technological advancement. Polls have consistently shown that most Americans believe technology (whether through automation or AI) is likely to replace most jobs currently done by humans. Yet the same polls have found that most Americans believe their own jobs are safe. Not surprisingly, responses to such polls generally correlate with income level: lower-paid workers are in occupations more susceptible to automation, while higher-income individuals are more likely to believe their jobs are safe. While such predictions are generally supported by human history, AI is changing historical norms and threatening occupations once considered safe.
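For readers who want to see that distinction in concrete terms, the following is a minimal, purely illustrative Python sketch contrasting a fixed automated rule with a program that adjusts itself from data. The billing-rate example and every number in it are invented; no real system is this simple.

```python
# Toy contrast between "automation" (a fixed rule) and "AI" (a program
# that learns). All names and numbers are invented for illustration.

def automated_rule(hours_worked: float) -> float:
    """Automation: the same hard-coded calculation every time."""
    return hours_worked * 25.0  # fixed rate; never changes


class LearningEstimator:
    """AI in the loosest sense: a program that refines itself from data."""

    def __init__(self) -> None:
        self.rate = 25.0  # starting guess

    def update(self, hours: float, actual_fee: float) -> None:
        # Nudge the learned rate toward what the observed data implies.
        observed_rate = actual_fee / hours
        self.rate += 0.5 * (observed_rate - self.rate)

    def predict(self, hours: float) -> float:
        return hours * self.rate


model = LearningEstimator()
model.update(hours=10, actual_fee=400)  # the data implies a rate near 40
print(automated_rule(10))   # 250.0 -- the fixed rule never improves
print(model.predict(10))    # 325.0 -- the learning program has adjusted
```

The fixed rule produces the same answer forever; the learning program gets closer to the truth as it sees more data. That, in miniature, is the difference.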

Driverless cars are a good example of recent technological advancement. While driving is generally considered a low-skill occupation, driving safely is difficult. In the United States, there are approximately 6 million traffic accidents each year, which equates to roughly 11 accidents every minute of every day. While vehicle accidents occur for all sorts of reasons other than unsafe driving, it seems reasonable to conclude that safe driving is a difficult skill to master, making it a relatively complex activity. Nevertheless, driverless cars are here. Approximately 21 states have passed legislation regulating driverless cars. Indiana was slated to become the 22nd (fitting former Chief Justice Randall Shepard’s observation that Indiana’s legal reform is “rarely first, occasionally last, and frequently early”), but the legislation could not make it over the finish line during the 2018 legislative session. While the technology enabling driverless vehicles has been around for years, the AI aspect of driverless cars is vastly more complicated. For instance, when a driverless car detects an accident in its immediate vicinity, it must quickly calculate whether to swerve to avoid the accident (possibly injuring bystanders) or to engage the braking system in the hope that the airbags will protect the passengers (i.e., the passenger vs. pedestrian dilemma). Likewise, a driverless vehicle must be able to make difficult moral choices, such as swerving into a pedestrian who is jaywalking versus hitting a child walking on the sidewalk (i.e., the child vs. criminal dilemma). These are complicated moral judgments that must be encoded in an AI algorithm. Anyone who understands the current state of this technology will likely admit that jobs involving similarly complex decision-making, once considered safe from automation, are no longer safe. In other words, if we can program a car to make split-second moral decisions, we can certainly create a computer program to draft or analyze a legal document.
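To show, in a deliberately oversimplified way, how such a trade-off might be encoded, here is a hypothetical Python sketch that frames the swerve-or-brake choice as minimizing the expected number of injuries. Real autonomous-vehicle systems are vastly more sophisticated, and every probability below is invented.

```python
# Hypothetical sketch of the passenger vs. pedestrian dilemma as an
# expected-harm comparison. Purely illustrative; all numbers are invented.

from dataclasses import dataclass


@dataclass
class Option:
    action: str          # "swerve" or "brake"
    p_injury: float      # estimated probability someone is hurt
    people_at_risk: int  # number of people the action endangers


def choose(options: list[Option]) -> Option:
    """Pick the action with the lowest expected number of injuries."""
    return min(options, key=lambda o: o.p_injury * o.people_at_risk)


# Swerving risks two bystanders; hard braking risks one passenger.
options = [
    Option("swerve", p_injury=0.30, people_at_risk=2),  # expected 0.60
    Option("brake", p_injury=0.50, people_at_risk=1),   # expected 0.50
]
print(choose(options).action)  # -> "brake"
```

Even this cartoon version makes the point: someone has to decide what counts as harm, and how to weigh one person’s risk against another’s, before the algorithm can decide anything.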

For decades, lawyers had a false sense of security that they were safe from technological automation because their work involved complex decision-making and creative problem-solving that a computer algorithm could never match. Lawyers were also successful in protecting their profession from outsiders (both AI and highly skilled nonlawyers) through unlicensed practice of law statutes and other monopolistic protections. Finally, law firms structured around the billable hour are historically slow to adopt (and have competing incentives not to adopt) time-saving technologies, including AI. Despite these factors, legal AI has developed in areas including automated contract review (computer programs that check contracts for predefined, previously approved clauses and analyze and flag potentially problematic language). One recent study of such platforms tested the ability to flag potential legal risks in five nondisclosure agreements. An AI system achieved an accuracy rate of 94 percent (not perfect, but pretty good). Twenty experienced transactional lawyers from large national law firms completed the same task with an average accuracy rate of 85 percent (not terrible, but less precise than the AI). Moreover, the AI platform finished its analysis in 26 seconds, while the average lawyer review took 92 minutes. Lexis and Westlaw are both developing AI platforms for automated legal research (computer programs that analyze the syntax of legal arguments and search for cases to support alternative arguments). Legal AI’s most publicized application is document automation (computer programs that create and electronically file legal documents as simple as an appearance form or as complex as the forms necessary for a divorce or probate matter). AI platforms can also predict legal outcomes based on historical jury verdict and settlement data, or even assist lawyers in analyzing potential jurors for a civil or criminal trial. While a highly knowledgeable and experienced lawyer may be able to provide clients with verdict estimates based on dozens or hundreds of trials, an AI platform can analyze millions of verdicts in seconds and provide more precise and reliable estimates. Most, if not all, of the work currently done by lawyers could eventually be automated through legal AI.
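As a rough illustration of what automated contract review involves, here is a toy Python sketch that flags sentences matching hand-written risk patterns. Commercial platforms rely on trained machine-learning models rather than simple pattern lists; the patterns and the sample NDA language below are invented.

```python
import re

# Toy version of automated contract review: flag sentences that match
# hand-written risk patterns. Real platforms use trained models; every
# pattern and the sample NDA text here are invented for illustration.

RISK_PATTERNS = {
    "perpetual obligation": re.compile(r"in perpetuity|perpetual", re.I),
    "unilateral amendment": re.compile(r"may amend .* at any time", re.I),
    "unlimited liability": re.compile(r"unlimited liability", re.I),
}


def flag_risks(contract_text: str) -> list[tuple[str, str]]:
    """Return (risk_label, offending_sentence) pairs found in the text."""
    findings = []
    # Naive sentence split; real systems segment clauses far more carefully.
    for sentence in re.split(r"(?<=[.;])\s+", contract_text):
        for label, pattern in RISK_PATTERNS.items():
            if pattern.search(sentence):
                findings.append((label, sentence.strip()))
    return findings


nda = ("The Receiving Party shall maintain confidentiality in perpetuity. "
       "The Disclosing Party may amend this Agreement at any time.")
for label, clause in flag_risks(nda):
    print(f"[{label}] {clause}")
```

The distance between this toy and a 94 percent accurate commercial system is enormous, but the basic shape of the task (scan the text, match it against what has been approved or learned, and flag clauses for human review) is recognizable even here.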

Every “Star Trek” fan knows the line “resistance is futile.” The same is true of resisting legal AI. That is not to say, however, that the legal profession is doomed to fall to an army of legal robots. Like professionals in every field confronted with automation and AI, successful lawyers will adapt, survive and thrive. While lawyers will not be completely replaced by automation, most of the lower-skill work lawyers perform (i.e., busy work) will be automated in the very near future. Current AI programs are still fairly poor at writing legal briefs, determining case strategy, presenting evidence and arguments to jurors, and negotiating deals for business clients. But technology runs in only one direction: toward improvement. AI platforms are already being used at some of the world’s largest law firms to perform very specific tasks. While the technology is imperfect, it will continue to improve and become more widespread and affordable.

Everyone knows driverless cars are inevitable and coming soon (Uber drivers beware). The same is true for legal AI. Lawyers must develop plans for AI in their practice areas: identify the parts of a practice that can be automated, and focus on the parts that will be slower to reach the point of automation. It is important to remember that it is no longer a question of if the legal field will become automated, but when. While lawyers are highly skilled, they are not exempt from the inevitability of technology. And never forget Alan Lakein’s advice: “Failing to plan is planning to fail.” Lawyers who prepare for the coming technological changes will be in the best position to thrive in the legal AI environment, which is arriving sooner than you think.•

Courtney David Mills is an attorney at Riley Bennett Egloff LLP in Indianapolis. Opinions expressed are those of the author.
