The current print issue of the ABA Journal contains a short article on generative artificial intelligence and contracts. It's behind a paywall, but I'm making a scanned copy of the print version available here. (If anyone at the ABA Journal asks me to take it down, I will.)
This article offers an example of the narratives we're able to entertain because contracts are hard and because the consequences of bad contract drafting aren’t certain or aren’t immediate. (I discuss that in today's other blog post, here.)
The article focuses on the efficiency gains offered by AI, the benefits of AI-powered contract redlines, and the use of chatbots to learn what's in your contracts. The skeptics chime in at the end. Noah Waisberg points out mildly that for contract drafting, a template might be a better starting point than relying on whatever AI happens to throw together. And the final sentence has me warning that "training AI on poorly written contracts or ones with bad data could lead to less-than-ideal outcomes."
It's easy to see how what I have to say about AI and contracts could be ignored. In this blog post, I say the notion of "gold-standard precedents" is sufficiently dubious as to cast doubt on the idea of relying on such templates to make Allen & Overy's AI contract-drafting tool, ContractMatrix, more reliable. But unless you're willing to take my word for it, assessing the general quality of templates would be unrealistic for any but the most committed. It would be easy to tune me out.
And in this blog post, I point out shortcomings in an AI redline. Only those with a serious interest in nuances of contract language would be inclined to look closely at my analysis.
So because people might not be inclined to get into the complexity, and because any adverse consequences of relying on AI for contract language aren't certain or aren't immediate, people might be inclined to pay more attention to AI pitches, even though those pitches don't address the uncertainties.