Something Else to Not Use AI For: Summarizing Contracts

24 February 2024

[Updated 25 February 2024: I've substantially rewritten this post. The conclusions are essentially the same, but I've gone into greater detail in explaining my conclusions.]

When it comes to AI and contracts, I've already observed (in this blog post, this blog post, and this blog post) that because mainstream contract language is dysfunctional, generative artificial intelligence trained on that dysfunction will just give you an unpredictable version of that dysfunction. So, no thanks.

But AI is being applied to contracts in another manner. Some legaltech vendors serving the contracts market offer products that allow you to summarize contract language. Again, no thanks. Allow me to explain.

What AI Summarizing Looks Like

Let's start by looking at an example of how AI summarizing works in practice. Here's a sentence that appeared in this 2019 blog post (it was the first thing that came to mind):

Consultant represents and warrants that (i) it has full power, capacity and authority to enter into and perform this Agreement and to make the grants of rights contained in Section 3(c), and Consultant’s performance of this Agreement does not violate or conflict with any agreement to which Consultant is a party; (ii) the Services will be performed in a professional manner consistent with the level of care, skill, practice and judgment exercised by other professionals in performing Services of a similar nature under similar circumstances by personnel with requisite skills, qualifications and licenses needed to carry out the work; (iii) all Services and Deliverables will materially conform to the Specifications for a period of one (1) year from Acceptance; (iv) to the best of Consultant’s knowledge, AssetMark’s permitted use of the Indemnified Items will not infringe the Intellectual Property Rights of any third party; (v) there is no pending or threatened litigation that would have a material adverse impact on its performance of this Agreement; and (vi) it will use industry best practices to prevent the introduction of any viruses, worms, or other harmful or destructive code into the Services, Deliverables, and AssetMark’s systems.

I asked ChatGPT 3.5 to summarize it "so it's easier for business people to understand." Here's what I got:

The consultant assures that they have the authority to enter and fulfill the agreement without conflicting with other commitments. They commit to delivering services at a professional standard similar to industry norms, ensuring compliance with specifications for a year. The consultant also guarantees that AssetMark's use of provided items won't infringe on third-party rights, there are no significant legal issues affecting their performance, and they will employ best practices to prevent any harm to services or systems.

Then I coughed up US$21.73 so I could ask ChatGPT 4 to do the same thing. It gave me this:

This section of the contract means:

  1. The consultant has the legal right and ability to sign and carry out the terms of this contract without breaking any other agreements they are part of.
  2. The work will be done professionally, meeting the high standards expected in their industry, by people who are properly skilled and qualified.
  3. Any work or products delivered will meet the agreed-upon requirements for one year after they are accepted.
  4. The consultant believes that using the work as intended won't violate anyone else's intellectual property rights.
  5. There are no legal issues currently or potentially that would affect the consultant's ability to fulfill this contract.
  6. The consultant will follow the best practices in their field to avoid introducing any harmful software or code into the work or the client's systems.

The ChatGPT 4 version is markedly better than what ChatGPT 3.5 offered, so I'll limit my comments to the ChatGPT 4 version.

It deviates from the original in ways that might cause confusion:

  • It omits the reference to section 3(c) in clause (i).
  • It doesn't use defined terms, so "Services", "Indemnified Items", and "Deliverables" all become "work".
  • Item 3 doesn't reflect the materiality standard in clause (iii) of the original.
  • Instead of "pending or threatened litigation", we get "legal issues".
  • It doesn't mention AssetMark by name, so "AssetMark's systems" becomes just "systems".

Otherwise, the ChatGPT version is just a boiled-down, less specific version of the original, with a lot of words that don't appear in the original.

(By the way, I fed ChatGPT just that one sentence, as opposed to the entire contract. When I subsequently fed it the entire contract, it offered me a summary that was way skimpier than what it offered for the one sentence, to the point of being useless. But I assume you could ask it to provide more detail.)
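(If you'd like to repeat this experiment programmatically rather than through the web interface, here's a minimal sketch using OpenAI's Python library. To be clear, I used the ChatGPT web interface, so the library calls, the model name, and the prompt wording below are illustrative assumptions, not a record of what I did.)

    # A minimal sketch, assuming OpenAI's Python library (openai >= 1.0)
    # and an OPENAI_API_KEY set in the environment.
    from openai import OpenAI

    client = OpenAI()

    # Paste the full sentence from the contract here; it's elided for brevity.
    clause = """Consultant represents and warrants that (i) it has full power,
    capacity and authority to enter into and perform this Agreement ..."""

    response = client.chat.completions.create(
        model="gpt-4",  # or "gpt-3.5-turbo" for the cheaper comparison
        messages=[{
            "role": "user",
            "content": "Summarize the following contract language so it's "
                       "easier for business people to understand:\n\n" + clause,
        }],
    )

    print(response.choices[0].message.content)

As with the web interface, the output varies from one run to the next, which is itself part of the problem.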

Problems With Summarizing

Now let’s look at the theory. Assessing the notion of using AI to summarize contracts requires addressing two questions: First, what are we trying to accomplish? And second, is AI capable of helping us accomplish it?

I made this post about summarizing contracts simply because that’s how I’ve seen the task described. But summarizing contracts isn’t helpful.

When you summarize something, you state the main points and leave out what’s less important or irrelevant. This brings to mind my O Level English exam in the 1970s, at Latymer Upper School, in London. Part of the exam involved doing a “précis”—now called “summary writing.” A proctor would read out a passage of nonfiction writing, and we students had to summarize it in no more than a specified number of words, including only the more important points.

But what might make sense for nonfiction writing doesn’t make sense for contracts, for two reasons. First, in contracts, everything matters! It’s like software code—leave something out and bad things can happen.

And second, when summarizing, usually you can’t just prune some words and repeat the rest verbatim. Instead, you likely have to change some words. In the limited and stylized world of contract language, using different words can have significant implications.

Judging by my one extract, these general problems with summarizing contract language are aggravated by using ChatGPT, with its freewheeling omissions and word-changing.

Outlining Instead of Summarizing

So we shouldn’t summarize contracts. If you want to understand the implications of what’s in a contract, it’s reckless not to read the contract itself. That’s not a ChatGPT issue—it’s a contracts issue.

I suggest that instead, there’s value in outlining. In an outline, you describe the overall structure and focus on key elements or key issues, leaving the reader to consult the contract to follow up on the implications of what’s in the outline. You can also offer guidance, much as a GPS system offers different kinds of alerts. An outline gives the reader a sense of what the contract covers, and it allows the reader to decide whether they wish to explore issues by examining the contract.

The nature of an outline depends on what’s at stake. For example, is it something an organization prepares for every contract it signs? Or is it required as part of assessing the implications of some problem that has arisen? The information could be of the most basic sort. For example, “Governing Law: Pennsylvania.” Or it could be more nuanced.

To give you a sense of the difference between a summary and an outline, here’s my outline of that sentence I had the two versions of ChatGPT summarize (plus my observations, which are probably overkill):

Section 6. Representations and Warranties [Generally, this phrase is used with statements of fact, but in this case, it introduces different kinds of provisions.]

  • Clause (i): The consultant has authority to enter into the contract, and performing it won’t violate any other agreement. [No remedy is available if this statement of fact is inaccurate; see this blog post.]
  • Clause (ii): The consultant must meet the stated standard of care. [This is an obligation, not a statement of fact.]
  • Clause (iii): Services and Deliverables will materially conform to the Specifications for one year from Acceptance. [This is a “future fact”. A court might interpret it as an obligation or a risk-allocation mechanism; see MSCD ¶ 3.474.]
  • Clause (iv): To the Consultant’s knowledge, AssetMark’s use of the Indemnified Items will not infringe anyone’s IP rights. [This is a “future fact”. A court might interpret it as an obligation or a risk-allocation mechanism; see MSCD ¶ 3.474.]
  • Clause (v): There’s no pending or threatened litigation that would have a material adverse impact. [This statement of fact doesn’t include a knowledge qualification, and it isn’t limited to litigation that involves the consultant.]
  • Clause (vi): The consultant must protect against harmful code. [This is an obligation, not a statement of fact.]

Because every outline would be prepared to address a given need, it’s hard to imagine that ChatGPT is equipped to compile the information required to address that need. I don’t have that much faith in the power of prompting.

Conclusion

To summarize (hah!), summarizing contracts is a bad idea, and ChatGPT makes a bad idea worse. Outlining all or part of a contract might be helpful, but the nature of the outline would depend on the need. It's hard to imagine AI being sufficiently nuanced to pull together the necessary threads.

I'm aware it's audacious for me—someone who hasn't done deals for 20 years—to wade into the stuff of day-to-day contracts work. I'm hoping my understanding of the nature of contract language will make up for that!