AI Accuracy in Legal Research Remains in ‘Check Your Work’ Phase


Source: Bloomberg Law

A method called retrieval-augmented generation, or RAG, is the leading contender to prevent AI hallucinations and has been widely embraced by the legal technology industry over the last year. But a recent study and real-world use cases are raising questions about how thoroughly RAG eliminates errors in legal research AI systems, according to several people in the field.

“Ultimately, these legal research-specific RAG tools can be very helpful,” said Adam Rouse, senior counsel and director of e-discovery at Walgreens. “I think they can increase efficiency quite a bit. But I still think we’re at the ‘check your work’ phase. We are not at the ‘rely on this as truth’ phase.”

In theory, RAG combines the intelligence of generative AI with the reliability of search technology. It’s seemingly a good fit for the needs of the legal tech market, which seeks tools that can write lawyer-quality answers in seconds, instead of hours, but without the dangerous hallucinations.

Building RAG into a legal research tool seems to help avoid one of the more embarrassing hallucinations that generative AI is prone to in legal tasks: fabricating cases that don’t exist. But RAG-based systems still make errors, like misunderstanding what cases are about and confusing precedent, according to a recent Stanford University study. They can point lawyers to real cases, but they won’t necessarily be the right cases to cite.
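The pattern the article describes can be sketched in a few lines. This is a minimal, illustrative stand-in, not any vendor's implementation: the corpus, the keyword-overlap scoring, and the `generate` function are all hypothetical placeholders for what would, in a real system, be a vector index and an LLM. It also illustrates the study's caveat: retrieval returns real documents, but nothing guarantees they are the *right* ones.

```python
# Minimal sketch of the retrieval-augmented generation (RAG) pattern.
# All names and data here are illustrative assumptions: a production
# system would use embeddings and an LLM, not keyword overlap.

def retrieve(query: str, corpus: dict[str, str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    q_terms = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc_id: len(q_terms & set(corpus[doc_id].lower().split())),
        reverse=True,
    )
    # Returns real documents -- but not necessarily the most relevant ones,
    # which is the failure mode the Stanford study highlights.
    return scored[:k]

def generate(query: str, corpus: dict[str, str], doc_ids: list[str]) -> str:
    """Stand-in for the LLM step: answer only from retrieved passages."""
    context = " ".join(corpus[d] for d in doc_ids)
    return f"Q: {query}\nGrounded on {doc_ids}: {context}"

corpus = {
    "case_a": "The court held that the statute of limitations was tolled.",
    "case_b": "Summary judgment requires no genuine dispute of material fact.",
}
query = "when is a statute of limitations tolled"
hits = retrieve(query, corpus)
print(generate(query, corpus, hits))
```

Because the generator is constrained to retrieved passages, it cannot cite a case that is not in the corpus, which is why RAG curbs fabricated citations even when relevance ranking remains imperfect.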
