As the use of generative artificial intelligence continues to grow in many industries, a new report shows that the legal community remains cautious about integrating the technology into internal operations of corporate law departments. Conducted by Roseland-based law firm Lowenstein Sandler and the Association of Corporate Counsel – New Jersey, the recently released survey of in-house counsel at companies across sectors uncovered a range of attitudes toward the technology’s use in legal and business operations.
Generative AI – a type of artificial intelligence that creates, or generates, content such as text, graphics and documents – is on the verge of changing how corporate legal departments handle routine functions. By using the technology on tasks like reviewing simple contracts, in-house counsel can reduce the time and money spent to perform that work to a fraction of what would be needed by humans.
However, since generative AI is still quite new, the legal world – like many other industries – is trying to answer important questions surrounding the technology, particularly when it comes to confidentiality, accuracy and liability.
Earlier this year, ChatGPT, an artificial intelligence chatbot, took the bar exam and scored in the 90th percentile, outperforming roughly nine in 10 human test takers.
On the flip side, after two attorneys filed legal briefs written by ChatGPT that turned out to be full of fake cases and citations, a federal judge in June ordered them to pay $5,000 in fines.
With little objective data to provide a quantitative picture of how leaders in law perceive the technology and its impact on the profession, Lowenstein Sandler and the Association of Corporate Counsel decided to survey the legal community to gauge attitudes toward AI and to assess how it is being used, whether in legal departments or business operations.
In an introduction accompanying the findings, the report’s authors – Mary Hildebrand, a partner at Lowenstein and founder/chair of the privacy and cybersecurity practice; Kevin Iredell, the firm’s chief marketing officer; and Bert Kaminski, director of legal for Google Cloud – wrote, “In 2023, it seemed that all anyone was talking about in the legal and wider business communities was ChatGPT and other generative artificial intelligence (AI) tools. And there appeared to be little consensus. Even within our own organizations, which have always embraced technological innovation, we observed a wide range of disparate reactions to the use of AI, from concern to excitement.”
Conducted from July 11 through Aug. 31, the online survey received responses from 165 legal professionals nationwide, with 26% identifying as general counsel, 13% as VP-legal, 12% as senior corporate counsel and 15% as associate counsel.
Participants reported working at companies spanning a range of industries, including finance and insurance (18%); professional, scientific and technical services (15%); health care and social assistance (13%); and manufacturing (11%).
The largest share (42%) hail from companies with more than 1,000 employees, while 20% are from organizations with 201-500 employees and 14% work at businesses with between 501 and 1,000 employees.
Key findings
According to the survey, only 64% of in-house counsel have used AI for legal tasks. Of those who have incorporated AI into their department, 33% are using it for general research, 32% for document generation and 11% for summarization tasks.
Among in-house counsel who have yet to integrate the technology into their day-to-day work, 62% said they believe its use offers several benefits. Forty-three percent cited increased productivity as the biggest plus of leveraging AI in legal workflows. Other benefits mentioned: work speed (23%), content and idea generation (23%), and streamlining of repetitive tasks (12%).
Interest in AI use is growing, with respondents saying they are fielding more and more questions about it; IT/security (16%), research and development (16%) and executives/board members (13%) are the most frequent sources of queries.
The most frequently asked questions concern legal risks (67%), ethical concerns (57%), restrictions on use (52%) and company policies (52%), according to the report.
The survey also noted that half of respondents are unaware of any current use of AI in other departments within their companies, while the other half said the technology is being put to work in at least some instances in marketing, R&D, IT, sales and operations.
Additionally, the analysis found a “noteworthy awareness” of the technology’s capabilities, with 78% of respondents reporting a “moderate to high awareness” of generative AI tools. Overall, 57% feel confident about their understanding of AI, while 43% expressed either little or slight confidence.
‘AI is here to stay’
Though the number of respondents who report having established policies, practices and training about AI – either in their legal practices or business operations – is low, many say they plan to develop such processes in the near future.
Currently, only 8% of in-house counsel surveyed said their companies have AI-related policies, while 21% said no guidelines are in place and 15% said their employers are in the process of establishing rules.
Ultimately, the report’s authors concluded that perceptions of the potential benefits of generative AI tools are positive, but respondents stressed the need for human oversight and formal policies to address concerns about accuracy, security and confidentiality. They also wrote that legal professionals will require more education and guidelines to fully harness generative AI’s potential.
Kaminski stated, “AI is here to stay, but it is important for attorneys to understand its current limitations and be aware of potential risks, so they don’t outweigh rewards. Professionals who understand its benefits and can manage its applications efficiently will be the ones to reap its benefits and create a road map for navigating pitfalls.”
Hildebrand remarked, “As improved data around the use of generative AI is released, firms and businesses will be more inclined to leverage these programs for their operations in the coming years.”
“With prudent policies and thorough training on the use of the evolving tools, it could prove to be the deciding factor for entities able to successfully engage with AI and revolutionize decision-making processes,” she went on to say.
The findings are similar to those of previous studies, including one released in March by the Thomson Reuters Institute, which found that many law firms see the technology’s potential and have begun experimenting with it to improve workflows, refine sales and marketing, and create operational efficiencies.
However, several unknowns remain, leaving the law sector unsure of how AI should be used and how firms can best mitigate the tool’s accuracy and privacy risks. “Indeed, the concern over risks around the technology’s accuracy, privacy, confidentiality, and security are paramount in law firm professionals’ minds. By any stretch, however, we are still early in the game for generative AI and ChatGPT, and any and all future use will have to address the growing awareness of the risks of use and the potential loss of business for non-use. As time and experimentation make users more comfortable with these tools, a day will come when generative AI and ChatGPT is as common in law as online legal research and electronic contract signing have become now,” the study said.
What will the courts say?
The release of the report from Lowenstein Sandler and the Association of Corporate Counsel – New Jersey came a few days after the New Jersey Supreme Court announced the formation of a special committee to explore the legal and ethical implications that AI poses for court operations and the practice of law.
Chaired by Administrative Director of the Courts Glenn Grant, the 31-member group will evaluate potential policies and practices in numerous areas, including the appropriate use – and possible limitations – of AI in legal contexts, disclosure of the technology’s usage in court submissions and testimony, as well as guidance for self-represented litigants and other court users.
The membership includes judges, attorneys, educators and government leaders, as well as cybersecurity and technology experts.
Following the panel’s first meeting Sept. 22, Chief Justice Stuart Rabner issued a statement, saying, “Artificial intelligence is a tool that we are still learning about, and while it holds the potential for great opportunities, it can also create significant challenges within the legal community. This committee brings together leaders with different backgrounds and perspectives who can engage in a comprehensive review of the myriad issues this new technology presents for the courts.”