Zachariah Crabill was two years out of law school, burned out and nervous, when his bosses added another case to his workload this May. He toiled for hours writing a motion until he had an idea: Maybe ChatGPT could help?
Within seconds, the artificial intelligence chatbot had completed the document. Crabill sent it to his boss for review and filed it with the Colorado court.
“I was over the moon excited for just the headache that it saved me,” he told The Washington Post. But his relief was short-lived. While reviewing the brief, he realized to his horror that the AI chatbot had fabricated several of its case citations.
Crabill, 29, apologized to the judge, explaining that he’d used an AI chatbot. The judge reported him to a statewide office that handles attorney complaints, Crabill said. In July, he was fired from his Colorado Springs law firm. Looking back, Crabill says he wouldn’t use ChatGPT again, but that it can be hard to resist for an overwhelmed rookie attorney.
“This is all so new to me,” he said. “I just had no idea what to do and no idea who to turn to.”
Business analysts and entrepreneurs have long predicted that the legal profession would be disrupted by automation. As a new generation of AI language tools sweeps the industry, that moment appears to have arrived.
Stressed-out lawyers are turning to chatbots to write tedious briefs. Law firms are using AI language tools to sift through thousands of case documents, replacing the work of associates and paralegals. AI legal assistants are helping lawyers analyze documents, memos and contracts in minutes.
The AI legal software market could grow from $1.3 billion in 2022 to upward of $8.7 billion by 2030, according to an industry analysis by the market research firm Global Industry Analysts. A report by Goldman Sachs in April estimated that 44 percent of legal jobs could be automated away, more than any other sector except for administrative work.
But these money-saving tools can come at a cost. Some AI chatbots are prone to fabricating facts, and lawyers who have relied on them have been fired, fined or had cases thrown out. Legal professionals are racing to create guidelines for the technology’s use, to prevent inaccuracies from bungling major cases. In August, the American Bar Association launched a year-long task force to study the impacts of AI on law practice.
“It’s revolutionary,” said John Villasenor, a senior fellow at the Brookings Institution’s center for technological innovation. “But it’s not magic.”
AI tools that quickly read and analyze documents allow law firms to offer cheaper services and lighten the workload of attorneys, Villasenor said. But this boon can also be an ethical minefield when it results in high-profile errors.
In the spring, Lydia Nicholson, a Los Angeles housing attorney, received a legal brief relating to their client’s eviction case. But something seemed off. The document cited lawsuits that didn’t ring a bell. Nicholson, who uses they/them pronouns, did some digging and realized many were fake.
They discussed it with colleagues and “people suggested: ‘Oh, that seems like something that AI could have done,’” Nicholson said in an interview.
Nicholson filed a motion against the Dennis Block law firm, a prominent eviction firm in California, pointing out the errors. A judge agreed after an independent inquiry and issued the group a $999 penalty. The firm blamed a young, newly hired lawyer at its office for using “online research” to write the motion and said she had resigned shortly after the complaint was made. Several AI experts analyzed the brief and concluded it was “likely” generated by AI, according to the media site LAist.
The Dennis Block firm did not return a request for comment.
It’s not surprising that AI chatbots invent legal citations when asked to write a brief, said Suresh Venkatasubramanian, a computer scientist and director of the Center for Technology Responsibility at Brown University.
“What’s surprising is that they ever produce anything remotely accurate,” he said. “That’s not what they’re built to do.”
Rather, chatbots like ChatGPT are designed to make conversation, having been trained on vast amounts of published text to compose plausible-sounding responses to just about any prompt. So when you ask ChatGPT for a legal brief, it knows that legal briefs include citations — but it hasn’t actually read the relevant case law, so it makes up names and dates that seem realistic.
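To see why, picture a deliberately simplified sketch. The script below is plain string templating, not a real language model, but it illustrates the failure mode: a system that has learned only the shape of a citation can emit output that looks right while referring to nothing real.

```python
import random

# Toy illustration only: this is template-filling, not how an actual
# language model works internally. It shows how text can match the
# *shape* of a legal citation while naming a case that does not exist.
PARTIES = ["Smith", "Rivera", "Acme Corp.", "Northfield LLC"]
REPORTERS = ["F.3d", "F. Supp. 2d", "P.3d"]

def plausible_citation() -> str:
    """Fill a citation-shaped template with realistic-looking values."""
    return (f"{random.choice(PARTIES)} v. {random.choice(PARTIES)}, "
            f"{random.randint(100, 999)} {random.choice(REPORTERS)} "
            f"{random.randint(1, 1500)} ({random.randint(1995, 2022)})")

print(plausible_citation())  # e.g. "Rivera v. Acme Corp., 612 F.3d 844 (2011)"
```

A real model is vastly more capable, but the lesson carries over: fluency in the format is not evidence that the underlying case exists.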
Judges are struggling with how to deal with these errors. Some are banning the use of AI in their courtrooms. Others are asking lawyers to sign pledges disclosing whether they have used AI in their work. The Florida Bar is weighing a proposal to require attorneys to get a client’s permission to use AI.
One point of discussion among judges is whether honor codes requiring attorneys to swear to the accuracy of their work apply to generative AI, said John G. Browning, a former Texas district court judge.
Browning, who chairs the State Bar of Texas’ task force on AI, said his group is weighing a handful of approaches to regulating its use, such as requiring attorneys to take professional education courses in technology or adopting specific rules for when AI-generated evidence can be admitted.
Lucy Thomson, a D.C.-area attorney and cybersecurity engineer who is chairing the American Bar Association’s AI task force, said the goal is to educate lawyers about both the risks and potential benefits of AI. The bar association has not yet taken a formal position on whether AI should be banned from courtrooms, she added, but its members are actively discussing the question.
“Many of them think it’s not necessary or appropriate for judges to ban the use of AI,” Thomson said, “because it’s just a tool, just like other legal research tools.”
In the meantime, AI is increasingly being used for “e-discovery”: the search for evidence in digital communications, such as emails, chats or online workplace tools.
While previous generations of technology allowed people to search for specific keywords and synonyms across documents, today’s AI models have the potential to make more sophisticated inferences, said Irina Matveeva, chief of data science and AI at Reveal, a Chicago-based legal technology company. For instance, generative AI tools might have allowed a lawyer on the Enron case to ask, “Did anyone have concerns about valuation at Enron?” and get a response based on the model’s analysis of the documents.
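Reveal’s internals aren’t public, so the snippet below is only a loose stand-in for the retrieval step behind that kind of question: TF-IDF cosine similarity substitutes for the company’s proprietary models, and the documents and query are invented for illustration.

```python
# A minimal sketch of the retrieval step in AI-assisted e-discovery.
# TF-IDF cosine similarity stands in for proprietary models; the
# documents and query are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "The valuation assumptions in the Q3 model look aggressive to me.",
    "Lunch order for Friday's team meeting.",
    "I'm worried about how we're marking these assets to market.",
]
query = "Did anyone have concerns about valuation?"

vectorizer = TfidfVectorizer()
vectors = vectorizer.fit_transform(documents + [query])
scores = cosine_similarity(vectors[-1], vectors[:-1]).ravel()

# Rank documents by similarity to the question; a generative model
# would then draft an answer grounded in the top-scoring passages.
for score, doc in sorted(zip(scores, documents), reverse=True):
    print(f"{score:.2f}  {doc}")
```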
Wendell Jisa, Reveal’s CEO, added that he believes AI tools in the coming years will “bring true automation to the practice of law — eliminating the need for that human interaction of the day-to-day attorneys clicking through emails.”
Jason Rooks, chief information officer for a Missouri school district, said he became overwhelmed during the coronavirus pandemic by requests for electronic records from parents litigating custody battles or organizations suing schools over their covid-19 policies. At one point, he estimates, he was spending close to 40 hours a week just sifting through emails.
Instead, he hit on an e-discovery tool called Logikcull, which says it uses AI to help sift through documents and predict which ones are most likely to be relevant to a given case. Rooks could then manually review that smaller subset of documents, which cut the time he spent on each case by more than half. (Reveal acquired Logikcull in August, creating a legal tech company valued at more than $1 billion.)
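Logikcull’s actual models aren’t described here, but the general “predictive coding” workflow behind such tools can be sketched: a reviewer hand-labels a small sample, a classifier learns from it, and the remaining documents are ranked so humans see the likely-relevant ones first. Everything below, from the data to the choice of logistic regression, is illustrative.

```python
# A rough sketch of predictive coding for document review. The data
# and model choice are invented for illustration; real e-discovery
# tools use their own proprietary pipelines.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

labeled_docs = [
    ("Notes from the custody hearing preparation call.", 1),
    ("Reminder: parking lot repaving next week.", 0),
    ("Parent records request regarding the remote learning policy.", 1),
    ("Cafeteria menu for October.", 0),
]
unreviewed_docs = [
    "Follow-up on the records request from opposing counsel.",
    "Holiday party signup sheet.",
]

texts, labels = zip(*labeled_docs)
vectorizer = TfidfVectorizer()
model = LogisticRegression().fit(vectorizer.fit_transform(texts), labels)

# Score unreviewed documents by predicted probability of relevance,
# so a human reviews the most likely matches first.
probs = model.predict_proba(vectorizer.transform(unreviewed_docs))[:, 1]
for p, doc in sorted(zip(probs, unreviewed_docs), reverse=True):
    print(f"{p:.2f}  {doc}")
```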
But even using AI for legal grunt work such as e-discovery comes with risks, said Venkatasubramanian, the Brown professor: “If they’ve been subpoenaed and they produce some documents and not others because of a ChatGPT error — I’m not a lawyer, but that could be a problem.”
Those warnings won’t stop people like Crabill, whose misadventures with ChatGPT were first reported by the Colorado radio station KRDO. After he submitted the error-laden motion, the case was thrown out for unrelated reasons.
He says he still believes AI is the future of law. Now, he has his own company and says he’s likely to use AI tools designed specifically for lawyers to aid in his writing and research, instead of ChatGPT. He said he doesn’t want to be left behind.
“There’s no point in being a naysayer,” Crabill said, “or being against something that is invariably going to become the way of the future.”