Blog

Zuva releases enhanced document classifier, open-sources its multi-level document classification taxonomy via the SALI Alliance

We released our first document type classifiers back in the early days of Kira. The classifiers could identify whether a document was a contract or not, and sort documents into one of roughly 35 buckets. Ever since then, we’ve been working hard to build a taxonomy and expand the scope of documents that our AI software can automatically classify. That work has been a heavy lift (it’s taken years), but now the wait is over, big time! With its new Multi-Level Document Classifier, Zuva now automatically classifies 225 document types.

NO (Mostly)! What Terms of Use For Major Websites Say About Whether Generative AI Training Is Allowed On Their Content

TL;DR: Generative AI builders are getting sued for (among other things) breaching website terms of use. We examined 43 relevant websites to see whether their terms of use might have been breached by generative AI training. A heavy majority explicitly or implicitly prohibit use for generative AI training. Generative AIs face significant legal risks. They were trained on vast amounts of data (text, images, code), often without permission from rights holders. Maybe courts will find this okay, maybe they won’t. (For >3k more words on this, check out our earlier (well-reviewed!) piece about legal risks faced by generative AI.) If courts find generative AIs offside, it will likely be because of:

Who Pays If Generative AI Runs Into Legal Troubles, You or Your Provider?

TL;DR: Customers often care that their SaaS user agreements protect them from claims. We looked at a bunch of generative AI user agreements to see how well they do at this. Most weren’t very protective. LOTS of details below. There’s a real risk that courts will find that generative AIs infringe others’ copyrights and/or website terms of use. (To read >3,000 more words on this, check out our recent (well-reviewed!) piece about it. New data points since that piece make it even more likely that generative AIs run into legal infringement difficulties.) If generative AIs are found to infringe, it’s possible that liability will flow downstream to end users.

Did My Prompt Break The Law? Potential Copyright and Breach of Contract Issues with Generative AI

TL;DR: Don’t underestimate the risks of copyright and breach of contract issues with generative AIs. Claims on either ground could be successful, and there will be lots of problems for vendors (and, maybe, end users) if they are. Imagine it’s November 2023. The district judge in Getty Images v. Stability AI has just issued an injunction against Stability AI, prohibiting them from selling any generative AI trained with Getty Images’ copyrighted content. Potential damages look large (Getty Images has asked for over $150,000 for each of the over 12 million images that Stability AI copied), and Stability AI may have to destroy any AI trained with Getty Images content.¹ While other generative AI copyright lawsuits were already underway, this decision drives lots more lawsuits around generative AIs trained on content without clear opt-in permission. AI image generators, code generators, and Large Language Models have gone from wowing us to big-time liability, all in the course of about a year.

Problems With Prompts? Measurability & Predictability of LLM Accuracy

TL;DR: We are somewhat uncomfortable with prompt engineering, because it’s hard to know how accurate a given prompt really is, and responses can be inconsistent and unreliable. There are ways to mitigate this. There are a lot of factors you could consider in evaluating Large Language Models (LLMs) in contract analysis. Accuracy is likely near the top of any list. Recently, we wrote a piece evaluating how good GPT-4 is at finding information in contracts. When thinking about LLM accuracy, there are two other things you should consider: measurability and predictability of accuracy. This piece will go into detail about these.

The Top 10 Reasons Why Microsoft SharePoint Can Be A Solid CLM

In recent years, Contract Lifecycle Management (CLM) has emerged as a software category that aims to provide organizations with a dedicated, comprehensive piece of software to manage all of their contract management processes. The space is booming. According to data from PitchBook, in 2021, $436.8M of venture capital was invested in the CLM space across 22 deals—up 63% vs. the year prior. Though CLMs are growing in adoption, they serve a relatively small number of businesses compared to Microsoft’s SharePoint. Based on Gartner figures, CLMs serve approximately 10,000 customers today. There are over 200 million active users of SharePoint, across 200,000 organizations worldwide. SharePoint is very versatile software, and obviously much of this use is on non-CLM tasks. In recent months, we have devoted a lot of effort to finding out how organizations manage their contracts. Our most striking finding has been how many organizations use SharePoint as their contract management system. Our sense is that SharePoint is the most heavily used system for managing contracts, by a LOT.

How is GPT-4 at Contract Analysis?

Where Generative AI Is Good in Contract Data Extraction … And Where It Isn’t

TL;DR: GPT-4 is impressive overall, but, on contract review tasks, it’s inconsistent and makes mistakes; it’s probably not yet ready as a standalone approach if predictable accuracy matters. Back in 2011, when we started Kira Systems, other contract review software companies were using rules- or comparison-based approaches to finding data in contracts. We used supervised machine learning to find information in contracts, and did well against our competitors. Eventually, most of the world’s leading law and audit/consulting firms (and a bunch of corporates) became Kira customers. In recent months, Generative AI solutions have become all the rage. Are they going to supplant other machine learning approaches in contract analysis, just as machine learning approaches beat out rules?

Introducing a new FREE version of Zuva contracts AI

At Zuva, we think contract data is too important to be bottled up in one system, or forced into one particular set of workflows. Almost any business team can benefit from being able to access and review contract data. We’ve been building contracts AI since 2011. Our tech has been used by the world’s most demanding contract reviewers and at many of the world’s biggest companies. At Zuva, our focus has been not just to free this technology from any one system, but also to make it dead simple for business people to use AI to extract and make use of contract data.

Severance Agreements

The National Labor Relations Board (NLRB) enforces labor laws in the United States, specifically those related to collective bargaining and unfair labor practices. On February 21, 2023, the NLRB issued a ruling in McLaren Macomb, 372 NLRB No. 58 (2023) about, among other things, a Severance Agreement that contained confidentiality and non-disparagement provisions. The provisions at issue prohibited employees from disclosing the terms of the Severance Agreement to most third parties, and broadly restricted employees from making statements about the employer. After examining the language of the agreement, the NLRB determined that these provisions interfered with the rights of employees and were in violation of the National Labor Relations Act.