
Demystifying Generative A.I.’s Impact on Legal Departments

From industry events to sales calls to customer meetups, Generative A.I. has been the rare topic that’s made its way into just about every conversation our team has participated in this year.

There’s a quiet panic underlying many of these discussions, as in-house attorneys wonder just how much risk one employee’s errant ChatGPT experiment could bring to their business.

But more than a few also carry a note of cautious optimism. Maybe this is the technology that will finally help corporate legal departments rebalance their unsustainable workloads.

No one can say for sure what the full extent of Generative A.I.’s impact on legal departments will be, but I think we’d all benefit from another look at the facts we already know.

A Few A.I. Fundamentals

Generative A.I. is easier to grasp once you understand a few underlying concepts and contrasting examples. Here are some brief overviews of the key elements:

Artificial Intelligence (A.I.)

A.I. broadly describes the ability of machines to mimic human behaviors, like speech recognition or logical reasoning. A.I. solutions can acquire this ability in several ways.

Rules-based Programs

Rules-based programs act on a fixed set of “if, then” principles. Human programmers tell the machine exactly which conditions to expect and exactly which actions to take in response. With this approach, humans are solely responsible for defining inputs, outputs, and correctness.
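
To make this concrete, here is a minimal sketch of a rules-based program. The contract-routing function, thresholds, and review tiers are invented for illustration; the point is that every condition and response is spelled out in advance by a human.

```python
# A minimal rules-based program: humans define every condition ("if")
# and every response ("then") ahead of time. The machine never learns;
# it only follows the fixed rules it was given.
def route_contract(contract_value):
    """Route a contract for review based on fixed 'if, then' rules."""
    if contract_value >= 100_000:
        return "general counsel review"
    elif contract_value >= 10_000:
        return "senior attorney review"
    else:
        return "self-service approval"
```

Any contract value the programmers did not anticipate can only fall through to whichever rule happens to catch it; the program has no way to infer a better answer on its own.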

Machine Learning (ML)

Machine learning, in contrast to rules-based programs, is a branch of A.I. focused on enabling machines to learn from data rather than following a pre-defined set of rules. With machine learning, the dataset becomes the focal point, not a limited set of rules and instructions formulated by human programmers.

Supervised ML

In supervised ML, the dataset is prepared and labeled (mostly) by humans. These labeled examples are then presented iteratively to the model, which uses them to learn how to process the data.

First, labeled input-output pairs are presented to the model, training it to learn patterns via a process similar to students memorizing flashcards.

Next, inputs are separated from outputs, and one half of a pair is presented, with the machine asked to supply the other. Supervisors then grade the answer accordingly, providing corrective feedback that the machine uses to update its understanding of the pattern.

These learning loops are then repeated (possibly millions of times) until there is confidence in the machine’s ability to predict correct outputs when given new (but similar) inputs.
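
The loop described above can be sketched in a toy example. Here a one-parameter "model" learns the pattern output = 2 × input from labeled pairs, with the prediction error serving as the supervisor's corrective feedback. The data and model are invented purely for illustration.

```python
# A toy supervised learning loop: labeled input-output pairs are the
# "flashcards", and the error on each prediction is the corrective
# feedback the model uses to update its understanding of the pattern.
pairs = [(1, 2), (2, 4), (3, 6), (4, 8)]  # labeled data: output = 2 * input

weight = 0.0          # the model's single learnable parameter
learning_rate = 0.01

for _ in range(1000):                        # repeat the learning loop
    for x, y_true in pairs:
        y_pred = weight * x                  # the model's answer
        error = y_pred - y_true              # the supervisor's "grade"
        weight -= learning_rate * error * x  # update from the feedback

# After training, the model can predict outputs for new, similar inputs.
prediction = weight * 5  # an input the model never saw during training
```

After enough repetitions the learned weight settles near 2, so the prediction for the unseen input 5 lands near 10, which is exactly the "confidence in predicting correct outputs for new inputs" described above.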

Unsupervised ML

In unsupervised ML models, humans do not begin with a definition of correctness. Instead, they supply the machine with unlabeled inputs. The machine then develops its own rules for grouping data, interpreting relationships, and defining patterns.
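
As a toy illustration of this idea, the following sketch groups unlabeled numbers into two clusters using a simple one-dimensional k-means loop. The machine is never told which group any value belongs to; it discovers the grouping itself. The data is invented for illustration.

```python
# A toy unsupervised example: no labels are provided, yet the program
# develops its own grouping of the data (1-D k-means with 2 clusters).
data = [1.0, 1.2, 0.8, 9.0, 9.5, 8.7]  # unlabeled inputs

centers = [data[0], data[3]]  # start from two arbitrary points

for _ in range(10):  # alternate between assigning points and re-centering
    groups = {0: [], 1: []}
    for x in data:
        nearest = min((0, 1), key=lambda c: abs(x - centers[c]))
        groups[nearest].append(x)
    centers = [sum(g) / len(g) for g in groups.values()]
```

The loop converges on two group centers (one near 1, one near 9) without ever being given a definition of correctness, which is the defining trait of unsupervised learning.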

Generative A.I. goes one step further.

Generative A.I. Defined

Generative A.I. solutions take what they learn from their ML training and use it to create original content. They not only identify patterns within their inputs, they extend those patterns into new outputs as well.

These outputs may be text, audio, or visual media — and the format can differ between input and output.

To solidify our understanding of these traits, let’s examine the world’s most popular Generative A.I. project: The Generative Pre-trained Transformer (GPT) model developed by OpenAI.

The Foundations of GPT

OpenAI’s GPT model began with extensive ML training on public and proprietary data sources, aimed at learning linguistic patterns and structures.

It accomplished this through semi-supervised training of its core base model on how human language is structured. By applying this training to a very large collection of data, it gained a solid understanding of the word groupings, relationships, and patterns that enable it to mimic human language in chatbot responses. ChatGPT is a Generative A.I. solution that extends those core learnings.

Additional tasks or applications can be built on top of this base language model using its understanding of language. With ChatGPT, for example, the developers fine-tuned the model with reinforcement learning, teaching it to answer questions and carry on conversations in a manner that human reviewers rated highly. Fine-tuning like this gives developers more control over the outputs, and more certainty about what the model will predict or generate.

Finally, stepping beyond text-based outputs, there is DALL-E: another Generative A.I. solution based on the GPT model. It uses its acquired intelligence to create digital images from text-based prompts.

Generative A.I. Advantages for Legal Departments

Generative A.I. is more than just a pop culture curiosity. In-house attorneys are already seeing practical business benefits from employing the technology in four main roles.

An Automated Administrator

Making time for meaningful work is a constant struggle within corporate legal departments. Generative A.I. can immediately help the cause by handling a variety of administrative tasks that require little if any legal expertise.

Consider your next routine catchup call with outside counsel. Instead of appointing a teammate to take notes and send recaps, you could have A.I. transcribe, summarize, and circulate what was said.

Or, if that meeting were an annual performance review, you could task A.I. with compiling the data to fill your law firm scorecard.

In each scenario, the desired outputs automatically arrive while attorneys are focused on more strategic priorities.

A Rapid Researcher

Generative A.I. can be more than a strictly administrative resource. Even when the work does require legal expertise, smart technology can quickly find and organize the knowledge you need to do it right.

Accelerating legal research was the first big opportunity most attorneys envisioned after discovering ChatGPT. Considering the scope and frequency of the task, there’s great value in any tool that can efficiently pull legal precedents, identify relevant statutes, and compose preliminary summaries.

There’s also a similar opportunity that fewer teams are talking about. Generative A.I. can help surface the knowledge that’s already inside your organization.

Instead of sending a department-wide email asking for a point of reference on an impending case, a Generative A.I. tool could scan through your matter management system and instantly retrieve all relevant examples from the past five years.

The end result in each example is less time spent searching for context and more time available for interpreting substance.

A Creative Partner

Many legal tech tools have helped users incrementally accelerate research tasks over the years. Generative A.I. can go beyond merely finding and organizing raw materials, however. It can actually help create the work product itself.

The key word to remember here is partner. A.I. is unlikely to become a truly independent publisher of legal documentation anytime soon. But the division of responsibilities between human and machine may vary.

Lawyers experimenting with ChatGPT, for example, have been quick to note its utility as a drafter. Even if its first pass on a new document is far from perfect, A.I. at least advances its partner beyond the blank page. And human writers of every kind can see the benefit in starting their work at the revision, rather than ideation, stage.

For more standardized work types, such as contracts, Generative A.I. can take an even more proactive role. In addition to quickly composing basic agreements, some tools can even supply suggested redlines that align with a department’s contract playbook.

For a profession defined by prolific writing, this kind of creative assistance may ultimately be Generative A.I.’s most impactful role.

A Strategic Consultant

If Generative A.I. were just a faster way for lawyers to put their existing ideas in writing, it would still be an impressive innovation. But what if it could recommend strategies and strengthen arguments as well?

The first type of A.I. consultation stems from its talent for quantitative analytics. The technology can rapidly scan massive data sets and present telling insights to inform legal strategy.

Generative A.I. could, for example, review a decade’s worth of related judgments and predict the likely result of an active litigation matter. Alternatively, Generative A.I. could scan patent databases and identify opportunities to expand your intellectual property (IP) portfolio.

The more surprising type of A.I. consultation is qualitative.

Consider what you could gain, for example, from the following prompts:

  • What are three potential weaknesses in this argument?
  • How would you explain this clause to a software engineer?
  • What do the closing arguments from these five cases have in common?

From consultant to critic, Generative A.I. could already be serving as your trusted advisor in strategic scenarios.

Generative A.I. Risks & Limitations for Legal Departments

Gauging the implications of emerging technologies is a continuous challenge for corporate lawyers in every industry. But a few recurring risks around Generative A.I. are already coming into view.

Model Transparency

Absolute clarity is an unattainable goal for those curious about how most commercial A.I. models are trained.

The first reason for this limitation is obvious. Proprietary techniques are competitive advantages, so developers have little incentive to disclose their work in detail.

But even assuming they wanted to, how could they communicate the potentially billions of variables an A.I. model weighs when making predictions?

Given this lingering lack of visibility into the inputs, it’s understandable why lawyers might be wary of the outputs.

Output Accuracy

The most basic risk of employing Generative A.I. solutions is that their outputs may be incorrect. And this is one big reason why the technology is unlikely to be a wholesale replacement for human experts in the field of corporate legal services.

One source of error is outdated information. Most models’ training data, especially for the base version, is somewhat stale. We see this in the free version of ChatGPT, for instance, which was trained on a data set that does not include any events after September 2021. This gap can be addressed by connecting the model to a live source of up-to-date data, but without that context, users are vulnerable to relying on answers that were once true and no longer are.

Specialization also poses a problem. A general purpose application like ChatGPT is likely to produce only vague or incomplete answers when prompted with a question that requires deep domain expertise. Wikipedia articles, rather than Supreme Court archives, could be the extent of its relevant training.

The risks of outdated information and insufficient expertise are then compounded by a third problem: hallucination. This is the term developers use to describe the phenomenon of A.I. generating plausible – but unequivocally false – outputs.

Two Manhattan lawyers provided an infamous example of these risks in early 2023 when it was discovered that their A.I.-assisted court brief cited numerous nonexistent cases.

Hallucinations may be mitigated in the future by optimizing models for accuracy, though it remains to be seen how such safeguards will develop in the legal space.

Data Protection

Introducing Generative A.I. into your business operations can also raise several data privacy, security, and confidentiality risks.

Privacy regulations related to personally identifiable information follow that data across platforms. So before entering sensitive records into any software product, users would be wise to review all terms and conditions.

Samsung executives, for example, were recently surprised to learn that trade secrets had been inadvertently exposed outside the company through one employee’s ChatGPT prompting.

The reverse is also true. You could unwittingly infringe another entity’s IP that was ingested into an A.I. training model without authorization. (Fair use remains an open question in this rapidly evolving arena.)

Ethical Responsibility

Do you think the software developer or software user should be considered at fault in the following scenarios?

  • A.I. tool plagiarizes copyrighted content
  • A.I. tool amplifies gender and/or racial biases
  • A.I. tool creates advertisement with misleading claims
  • A.I. tool inflates financial metrics included in public filings

These are just a few of the ethical dilemmas developers, users, and regulators may confront in the coming years. But until then, the safest assumption for in-house legal teams is that accountability resides with the user and their employer.

Generative A.I. Myths Legal Departments Must Know

When high risks meet high rewards, intense reactions are sure to follow. There are three impulsive assumptions, however, that I’d advise any corporate legal department to resist.

Lawyers Can Ignore A.I.

It might be tempting for corporate legal departments to classify Generative A.I. as a category of technology that’s too risky to engage. The trouble is, business colleagues and industry competitors probably don’t agree.

The percentage of companies where no employees are formally procuring or independently using Generative A.I. tools is steadily falling toward zero. As a result, in-house lawyers have an obligation to understand the uses and implications of these technologies on behalf of the business.

And any organizational mandate against the exploration of emerging technology is only likely to create a competitive disadvantage over the long term.

A.I. Will Replace Lawyers

Type “A.I. will” into most search engines and you’ll see some variation of “replace workers” as one of the first few auto-complete suggestions. Ironically enough, that’s an example of Generative A.I. learning from human sentiments.

But if you’re looking for a more measured (and optimistic) take, you only need to scroll up a few paragraphs.

The extreme domain expertise and nuanced social skills required to be an effective lawyer make the profession relatively immune to mass A.I. replacement. What will be replaced, however, are certain modes of legal work.

Rote, recurring, low-complexity tasks like monthly reporting, for example, may be outsourced to A.I. rather than squeezed in between meetings. Similarly, lawyers may soon spend only a fraction of their week summarizing legal research and drafting contracts.

Much of that time can be allocated toward the many strategic priorities vying for their attention. But it’s worth noting that acquiring and refining the skills to maximize Generative A.I.’s potential will take time as well.

A.I. Needs No Supervision

Whether the training is led by in-house developers or third-party software suppliers, every Generative A.I. advantage is contingent on the delivery of continuous, high-quality feedback from human experts.

Make no mistake, A.I. is not a set-it-and-forget-it solution.

But for those who consciously invest in a practice of ongoing supervision, the returns can be transformative.

See How A.I. Works

Brightflag has been delivering practical A.I. solutions for corporate legal departments since 2013. For a level-headed perspective on what the latest innovations could mean for your everyday operations, book a demo with Brightflag today. Or, if you want to see how it works right now, take the interactive tour below.


Michael Dineen

Director of Data Science

Michael Dineen first joined Brightflag in 2016 as a Data Scientist before working his way up to the role of Director of Data Science. Prior to joining Brightflag, Michael served as a Senior Analytics Consultant with Presidion. He holds a Master of Science (MSc) degree in Business Intelligence and Data Mining from Technological University Dublin, as well as a post-graduate diploma in Software Development.