This is the second of a three-part series on the key themes we took away from the CLOC 2018 US Institute. Every year, legal operations professionals from around the world gather at the CLOC US Institute to learn, network, and share experiences. In this series the Brightflag team will delve into the key messages from the conference, and the actionable insights shared by some of the greatest minds in legal operations.
A.I. – Machine Learning or Hype Machine?
Themes emerging from the CLOC 2018 US Institute showed that legal departments around the world are trying to separate the legal A.I. hype from the reality.
10% of all sessions at CLOC 2018 centered on A.I. That is a significant share, given that CLOC sessions seek to cover the entire ecosystem of legal operations issues.
The emphasis on legal A.I. shows that it’s not going away any time soon, and that the A.I. toolkit is being applied to an ever-growing set of legal operational problems. However, the question resounding through all sessions was whether the practical use cases for A.I. outweigh the hype. The answer to this question seemed to be a measured yes.
According to the speakers at CLOC, A.I. is an appropriate solution to a problem if that problem is well-defined, repetitive, and relatively simple, and if the current solution is manually intensive. An excellent example of this kind of problem is bill review, a well-defined process that every in-house department must undertake.
Bill review is a repetitive process, and spotting billing inefficiencies is relatively simple compared to core in-house counsel work like developing legal strategies and solutions to difficult legal problems. Yet despite the problem's relative simplicity, the process most departments currently employ is manual: an in-house lawyer or legal operations professional reads each bill line by line. To solve this problem, artificial intelligence — in this case machine learning and natural language processing — can be trained to mimic a human reviewer, 'reading' each line item in order to spot potential billing inefficiencies.
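To make the idea concrete, here is a minimal, illustrative sketch of how a text classifier could learn to flag invoice line items. The training data, labels, and keywords below are invented for this example — a production bill review system like Brightflag's would be trained on a far larger corpus of reviewed invoices — but the sketch shows the basic machine learning pattern: learn word frequencies from labeled examples, then score new lines.

```python
from collections import Counter
import math

# Hypothetical training data: invoice line descriptions labeled "ok"
# (compliant) or "flag" (potential billing inefficiency). These six
# examples are purely illustrative.
TRAINING = [
    ("draft motion to dismiss", "ok"),
    ("review and revise settlement agreement", "ok"),
    ("telephone conference with client re strategy", "ok"),
    ("block billed 8.0 hours misc tasks", "flag"),
    ("administrative work filing and photocopying", "flag"),
    ("duplicate entry review documents", "flag"),
]

def tokenize(text):
    return text.lower().split()

# Count word frequencies per label -- a toy naive Bayes model.
word_counts = {"ok": Counter(), "flag": Counter()}
label_counts = Counter()
for text, label in TRAINING:
    label_counts[label] += 1
    word_counts[label].update(tokenize(text))

vocab = set()
for counts in word_counts.values():
    vocab.update(counts)

def score(text, label):
    """Log-probability of a label given a line, with Laplace smoothing."""
    log_prob = math.log(label_counts[label] / sum(label_counts.values()))
    total = sum(word_counts[label].values())
    for word in tokenize(text):
        count = word_counts[label][word] + 1  # smoothing for unseen words
        log_prob += math.log(count / (total + len(vocab)))
    return log_prob

def classify(text):
    return max(("ok", "flag"), key=lambda label: score(text, label))

print(classify("block billed hours administrative tasks"))  # flag
print(classify("draft motion to dismiss"))                  # ok
```

The point is not the specific algorithm — real systems use far more sophisticated natural language processing — but that the task maps cleanly onto supervised learning: well-defined inputs (line descriptions), a simple output (flag or not), and a large pool of historical human reviews to learn from.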
In a number of talks, including a session entitled ‘Brutally Honest A.I.: Dispatches from Real-World Implementation Projects’, speakers admitted to a healthy skepticism about the ability of A.I. to solve all legal problems, or to replace the work of humans. Instead, the speakers thought of A.I. as a toolkit made up of a number of distinct tools, like chatbots, natural language processing, machine learning and predictive analytics.
When we think of A.I. as a set of distinct tools, we can begin to apply the tools to pressing problems that the legal department faces. This view of A.I. as a new route to solving old problems, rather than a flashy piece of tech, is the road to distinguishing the A.I. hype from the practical reality.
To see the next blog post in the series, click here.