AI will probably constitute a significant innovation for legal practice. Beyond the direct use of ChatGPT itself, we're starting to see a set of AI offerings specific to the legal profession. Here are a few companies that have caught my attention:
Casetext has been around for a while, and I've followed this company for nearly a decade (well before I even considered leaving legal practice for tech).
Casetext began life as a legal research tool and Westlaw alternative. Since then, they've pivoted several times, introducing AI assistance tools along the way. Their most recent offering is CoCounsel, a GPT-4 powered legal assistant.
Casetext is an interesting player in this space - at this point, they are one of the older legal tech firms, and with their track record, they can bring some experience and organizational maturity to this endeavor. They've recently been acquired by legal publishing company Thomson Reuters for a substantial sum, so it will be interesting to see what they do with those resources and that reach.
By contrast, Harvey is extremely new, and at this point, we know little about their offerings. Their story tracks the traditional startup narrative, being founded by a pair of roommates - one of them an AI researcher, the other a litigator at O’Melveny & Myers (a large international law firm).
Judging from their careers page, Harvey appears to be recruiting top-flight technical talent, and they've attracted funding from both OpenAI and Sequoia. They state that over 15,000 law firms are on their waiting list, and it appears that Allen & Overy (another large, international law firm) and PwC (a Big 4, professional services firm) have both agreed to partnerships incorporating Harvey into their workflows. This suggests a possibly bright future, and their website promises "unprecedented" AI.
Given the funding and the partnership announcements, it appears several organizations like what they've seen. That said, there's still reason for caution regarding Harvey. Funding itself is no guarantee of success - we would do well to remember that Sequoia, one of Harvey's backers, was a major investor in the failed crypto exchange FTX, even going so far as to publish a cringeworthy (and since scrubbed) profile of its now disgraced and indicted founder, Sam Bankman-Fried. I don't mean to imply any impropriety in this situation, but it's worth realizing that even very large, very sophisticated organizations can make serious mistakes.
Moreover, Harvey's legal founder practiced law for only a single year before starting the company, and beyond him, it's not clear whether Harvey has anyone else with a legal skill set, though they may still be building out their team. The depth of legal knowledge Harvey can bring is therefore an open question, although this may prove addressable in the long run.
Harvey may simply be too new. At this point, they don't have a product, nor do they have an established customer base. As a company, they have not yet built a reputation, and prospective users/investors may have concerns regarding the long-term viability of this company.
In the legal profession, trust is important. As a consequence, reputation and professional history matter. At this point, Harvey is an unknown and unproven quantity, and this may be a hindrance to its growth and adoption by customers.
Superlegal appears to be a contract generation and review platform - the goal is to negotiate contracts more quickly and at lower transaction cost. From what I can determine, they market directly to corporate clients rather than law firms, especially smaller tech companies. It's a tightly focused use case, which is interesting to observe.
Chat-tm is a customer-facing AI tool built by the law firm of Erik M. Pelton & Associates, an Intellectual Property boutique. Unlike the other tools I've discussed, they built it themselves and trained this tool on their own data, chiefly podcasts and blog articles.
It's an interesting contrast - rather than a product meant to be marketed to other law firms or companies, this was developed internally. Moreover, it looks like this firm deliberately chose to train the AI solely on information that was already publicly available, rather than exposing internal material like memoranda, case documents, or emails. This is an extremely prudent decision - while it limits the amount of information available to the AI, it preserves the confidentiality of client communications and attorney work-product.
This represents an interesting, alternative paradigm. Beyond being an internally developed tool, in handling customer communication, it tackles the business of law as opposed to the practice of law; one can imagine a GPT-powered chatbot handling the customer intake process, all the way to signing up customers.
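To make the public-data-only approach concrete, here is a minimal sketch of the underlying idea: ground a chatbot's answers in a small corpus of a firm's publicly available content (blog posts, podcast transcripts) rather than confidential material. Everything here is invented for illustration - the snippets, the bag-of-words scoring, and the function names are my own placeholders, not any vendor's actual implementation; a production system would use embedding-based retrieval and pass the retrieved text to an LLM.

```python
# Illustrative sketch only: retrieval over public firm content,
# deliberately excluding confidential material (memos, emails, case files).
from collections import Counter
import re

# Hypothetical public-facing snippets a firm might index.
PUBLIC_CORPUS = [
    "A trademark protects brand names and logos used on goods and services.",
    "To file a trademark application, identify your mark and the goods it covers.",
    "Copyright protects original works of authorship, such as writing and music.",
]

def tokenize(text: str) -> Counter:
    """Lowercase bag-of-words; a crude stand-in for embedding-based retrieval."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def retrieve(question: str, corpus: list[str]) -> str:
    """Return the public snippet sharing the most words with the question."""
    q = tokenize(question)
    return max(corpus, key=lambda doc: sum((q & tokenize(doc)).values()))

# An intake bot would hand the retrieved snippet to an LLM as context;
# here we simply surface the snippet itself.
print(retrieve("How do I file a trademark application?", PUBLIC_CORPUS))
```

The design choice worth noting is that confidentiality is enforced at the corpus level: if privileged material is never indexed, the chatbot cannot leak it, no matter what it is asked.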
This is by no means a comprehensive list - there are other companies out there; these are simply the ones that caught my eye in a quick Google search. Moreover, this article represents my initial impressions based on that cursory research - I could easily be wrong, and I encourage you to do your own digging.
One thing is apparent: the importance of data. It appears to be the quality and specificity of the underlying datasets that make these products fit for purpose. Large Language Models have a well-known hallucination problem - in one widely reported instance, an AI-generated brief cited fictitious legal cases.
I think what we may be witnessing is the fragmentation of AI - rather than ChatGPT, or one LLM to serve (or rule) us all, we could see the adoption of many different AI tools, with unique datasets and settings.