I’m a contract-drafting guy, but I have to acknowledge that drafting contracts might not be the most annoying part of the day-to-day contracts process.
Assume that Acme does ten deals with ten different companies in which it drafts the contracts using its templates. Then assume that it does those deals using the other guy’s drafts. Odds are that in the second scenario Acme would end up doing a lot more work than it would in the first scenario—producing a draft using a template you’re familiar with is likely to take less time than reviewing a draft produced by the other side.
So it’s not surprising that software aimed at reviewing the other side’s drafts should now be attracting attention. The two names I’m familiar with are LegalSifter and LawGeex. They’re welcome innovations.
LawGeex recently released this study, which makes the following claim:
In a landmark study, 20 experienced US-trained lawyers were pitted against the LawGeex Artificial Intelligence algorithm. The 40-page study details how AI has overtaken top lawyers for the first time in accurately spotting risks in everyday business contracts.
That prompted a bunch of hyperventilating articles, including this one.
I’m sure the reported results reflect what happened. I know something about confidentiality agreements, having spent around a year building an automated one. (It’s described in this LinkedIn article.) So I had a look at the LawGeex study, and it prompted the following thoughts. My intention isn’t to criticize, but to offer some context.
Humans Are Fallible
First, yes, humans are fallible. In terms of contract drafting, my presumption is that everything is bad, and I’ve offered on this blog many examples of that. When that’s not the case, I’m pleasantly surprised. So I would expect a comparable dynamic to apply when it comes to review. But the circumstances of LawGeex’s study are a worst-case scenario. If a company is paying attention, it would give those reviewing confidentiality agreements some sort of checklist against which to measure what they’re reviewing. By contrast, those taking part in LawGeex’s test had to rely only on their experience and native wit.
Granularity
Second, the issues flagged by LawGeex are very broad. For example, one issue was the presence of a no-soliciting provision. In my automated confidentiality agreement, the no-soliciting provision is customizable up the wazoo, starting with whether it covers just hiring or both hiring and soliciting. Simply flagging no-soliciting provisions doesn’t get one very far.
Necessarily Limited Scope
Third, inevitably, LawGeex’s list of issues wasn’t comprehensive. To select an example at random, it doesn’t include flagging instances of the word proprietary, something I wrote about in this 2010 post. And their list doesn’t cover general drafting issues, such as whether something should be expressed as a condition and not as an obligation, and I doubt it ever will. That’s why review by software should support review by a person, not replace it.
What Comes Next
Fourth, spotting issues is the first part of what the technology does. Then LawGeex suggests edits based on a company’s pre-defined legal policies. I’d be interested to know the level of detail it offers, in terms of both what it reads and the suggestions it makes. But that would require a demo. For purposes of this post, I’m just looking at their study.
Whom Do You Trust?
And fifth, my biggest question about the new crop of “AI” technologies isn’t the technology per se; it’s the human expertise it incorporates. That concern applies to all services that address contract content. I’m toying with the slogan “Editorial expertise is the new black box.” In the case of services that offer contract templates, if I don’t know who prepared a template, I’m not going to trust it. Even if I do know, I’ll be skeptical unless given good reason not to be. Relying on someone’s contract language requires a leap of faith, so I know that I have to not only be an expert but also appear to be an expert. The same goes for services that assist with review.
To its credit, LawGeex identifies the “team of prestigious law professors and veteran lawyers” that prepared the list of issues that forms the basis for the test. But I happened to spot that the heading for one of the identified issues was “Exclusion—Public Domain.” That’s a little worrisome: as I note in this 2010 post, the phrase in the public domain “has no bearing on how widely available any given information is. Instead, it means that the information isn’t protected by intellectual-property rights and so can be used by anyone free of charge. That would represent an irrationally narrow exclusion from the definition of ‘Confidential Information’ ….” So LawGeex’s team flubbed by using that phrase, albeit just in a heading. Might they have missed other stuff? Just as those performing an old-fashioned review are likely to be fallible, the experts giving instructions to AI might be fallible too.
That sort of problem shouldn’t be too disconcerting: most of us would be grateful to have the benefit of the collective expertise of LawGeex’s team, fallible though they may be.
Other Kinds of Contracts
It’s not surprising that LawGeex’s report features confidentiality agreements. They’re the cockroach of the contract world—ubiquitous, annoying, and apparently indestructible. And you see the same issues in contract after contract. It will be interesting to see how LawGeex and its competitors do when it comes to reviewing more fluid kinds of contracts.
This category of product has the potential to make contract review quicker and more effective. Let’s see whether the technology and the underlying expertise are up to the job. And, to quote this post, let’s see whether the intended users give a ****.