
Generative AI in the Legal Field: Generative AI and Law School

ABA - AI and Legal Education Survey Results

From the ABA's Task Force on Law and Artificial Intelligence, the AI and Legal Education Survey Report provides insights from law school administrators and faculty regarding the integration of artificial intelligence (AI) into legal education. The survey was completed by 29 law school deans or faculty members between late December 2023 and mid-February 2024.

Scholarly Articles

Highlighted Articles (most recent articles listed first):

Humans at the Center of Legal Writing with Generative AI as an Evolving Component of the Legal Writing Process
by Jessica Lynn Wherry and Frances C. DeLaurentis, May 15, 2025 (SSRN)
In this Article, the authors explore the unique aspects of legal writing and legal document production, discuss the benefits of generative AI-produced text, examine the risks to novice legal writers of treating generative AI-produced text as writing, and advocate for continuing to teach foundational legal writing skills while incorporating generative AI into that skill set.

Measuring the Rapidly Increasing Use of Artificial Intelligence in Legal Scholarship
by Michael Conklin and Christopher Houston, April 28, 2025 (SSRN)
This first-of-its-kind study uses the existence of an AI idiosyncrasy to measure the use of AI in legal scholarship. This provides the first-ever empirical evidence of a sharp increase in the use of AI in legal scholarship, thus raising pressing questions about the proper role of AI in shaping legal scholarship and the practice of law.

Disclosing the Machine: Trends, Policies, and Considerations of Artificial Intelligence Use in Law Review Authorship
by Nachman N. Gutowski, April 15, 2025 (SSRN)
This article examines the current landscape of law reviews, most of which lack clear AI policies. It proposes a framework for developing thoughtful guidelines that address the evolving role of AI in legal scholarship. Rather than drawing rigid lines between human and machine contributions, legal academia should embrace AI as an integral tool that enhances, rather than diminishes, intellectual work. The key to maintaining scholarly integrity is not restricting AI's use but promoting a better understanding of how AI can complement human authorship.

Legal Scholarship Through the Lens of Generative AI, Darkly
by Andrew Martineau and Loren Turner, March 25, 2025 (SSRN)
This article examines how GPT-4 interacts with law review articles, finding that it is unreliable when asked to summarize them on its own but notably accurate when provided with the full text as input. Retrieval augmented generation (RAG) offers a potential solution for improving AI accuracy in a more automated way, yet concerns persist about algorithmic bias, authors' rights, and the impact on legal scholarship. Law librarians must carefully consider these factors when determining how their institutions' scholarly work is accessed and used by AI systems.

Artificial Intelligence & the Future of Law Libraries
by Patrick Parsons, Kristina L. Niedringhaus, and Alex Zhang, December 1, 2024 (SSRN)
The Southeast Roundtable Report summarizes the discussions from a day-long conference held on March 1, 2024, at Georgia State University College of Law, focusing on the impact of artificial intelligence (AI) on law libraries. Key takeaways highlight the necessity of proactive AI adoption, emphasizing the need for law libraries to integrate AI into operations while maintaining a human-centered approach. Discussions underscored AI’s potential to enhance access to justice, streamline research services, and optimize physical and digital library spaces. However, challenges such as staff resistance, over-reliance on AI, budget constraints, and data privacy concerns were also identified.

AI Now
by Rachelle Holmes Perkins, May 24, 2024 (SSRN)
This Article contends that all law professors have an inescapable duty to understand generative artificial intelligence. This obligation stems from the pivotal role faculty play across three distinct but interconnected dimensions: pedagogy, scholarship, and governance. No law faculty are exempt from this mandate. All are entrusted with responsibilities that intersect with at least one, if not all three, of these dimensions, whether they are teaching, research, clinical, or administrative faculty. Nor does the duty depend on whether professors are inclined, or disinclined, to integrate artificial intelligence into their own courses or scholarship. The urgency of the mandate derives from the critical and complex role law professors play in the development of lawyers and the architecture of the legal field.

Language Models, Plagiarism, and Legal Writing
by Michael L. Smith, August 16, 2023 (SSRN)
The author argues that "those urging the incorporation of language models into legal writing education leave out a key technique employed by lawyers across the country: plagiarism. Attorneys have copied from each other, secondary sources, and themselves for decades. While a few brave souls have begun to urge that law schools inform students of this reality and teach them to plagiarize effectively, most schools continue to unequivocally condemn the practice...(but) continued condemnation of plagiarism is inconsistent with calls to adopt language models, as the same justifications for incorporating language models into legal writing pedagogy apply with equal or greater force to incorporating plagiarism into legal writing education as well."

How to Use Large Language Models for Empirical Legal Research
by Jonathan H. Choi, August 13, 2023 (SSRN)
This Article demonstrates how to use LLMs to analyze legal documents. It evaluates best practices and suggests both the uses and potential limitations of LLMs in empirical legal research. In a simple classification task involving Supreme Court opinions, it finds that GPT-4 performs approximately as well as human coders and significantly better than a variety of prior-generation NLP classifiers, with no improvement from supervised training, fine-tuning, or specialized prompting.

Re-Evaluating GPT-4's Bar Exam Performance
by Eric Martínez, May 8, 2023 (SSRN)
This paper investigates the methodological challenges in documenting and verifying GPT-4's "90th percentile passing score" bar exam performance claim, presenting four sets of findings that suggest that OpenAI’s estimates of GPT-4’s UBE percentile, though clearly an impressive leap over those of GPT-3.5, appear to be overinflated.

GPT Passes the Bar Exam
by Daniel Martin Katz, Michael James Bommarito, Shang Gao, and Pablo Arredondo, March 15, 2023 (SSRN)
The authors evaluated GPT-4's performance on the entire bar exam: the multiple-choice questions, essays, and performance test. They found that GPT-4 achieved a passing score on the exam.

Who’s Afraid of ChatGPT? An Examination of ChatGPT’s Implications for Legal Writing
by Ashley B. Armstrong, January 23, 2023 (SSRN)
The author asked ChatGPT to perform a series of common legal research and writing tasks. The article describes how ChatGPT struggles to conduct legal research and can even produce responses that cite incorrect or made-up case law and statutes. The article also discusses ethical concerns in using ChatGPT for legal writing.

Other articles of interest:

News Articles and Blog Posts

Highlighted Articles (most recent articles listed first):

Generative AI in the Law School Classroom
by Jeremy Sheff, June 2, 2025
A memo from a Property Law professor to his students after he heard allegations that some students may have used generative AI on the course final exam. The key takeaway for first-year students: "To be in a position to know whether the output of a generative AI tool is right or wrong, helpful or harmful, you first need to have the knowledge and skills that would have enabled you to generate such an output yourself. You cannot verify the quality, utility, or truthfulness of work that you never learned how to perform in the first place. And if you aren’t learning how to do that work now, when only your grades are at stake, you are not going to be able to do that work later in your career, when real people will be depending on you to defend their rights."

Generative AI, Having Already Passed the Bar Exam, Now Passes the Legal Ethics Exam
by Bob Ambrogi, LawSites Blog, November 16, 2023
A report on how two of the leading large language models (GPT-4 and Claude 2) have passed a simulation of the Multistate Professional Responsibility Examination (MPRE).

Learning the Law with AI: Why Law School Students are Tentative about Using ChatGPT
by Serena Wellen, LawNext Blog, June 2, 2023
A summary of results from a Lexis survey which found that only "9% of law students surveyed said they are currently using generative AI in their studies and only 25% say they have plans to eventually incorporate it into their work." The post also lists student respondents' concerns about generative AI in the context of law school.

WisBlawg, January 23, 2023
This post discusses an in-class exercise using ChatGPT for legal research. While the students acknowledged that ChatGPT could be a helpful starting point, they found several reasons to be skeptical of its output, including inaccurate results, lack of data transparency, and confidentiality concerns. The main takeaway: it's important to understand the benefits and limitations of any research tool (including ChatGPT) so that you can use it wisely and appropriately.

Other articles of interest:

Prompt Writing

Before you start any legal research project, it's important to plan: identify the issue, the jurisdiction, and the key facts. The same holds true when using LLMs such as ChatGPT for legal research: the more clearly your prompt describes the issue and the output you want, the better your results will be. The resources listed below provide guidance on planning ahead and writing more effective prompts when using LLMs for legal research.
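
To illustrate (this example is hypothetical and not drawn from any of the listed resources), compare a vague prompt with one that supplies those factors up front:

Vague prompt: "Tell me about negligence."

More effective prompt: "Act as a legal research assistant. My client is a grocery store customer in Ohio (a made-up jurisdiction and fact pattern for this example) who slipped on a spill that employees had been told about 45 minutes earlier. Identify the elements of a premises liability negligence claim in that jurisdiction, list the primary authorities I should verify in a citator, and note any additional facts I would need to confirm. Present the answer as a short outline."

The second prompt gives the LLM the issue, jurisdiction, key facts, and the form of output you expect, which makes its response easier to evaluate and verify against primary sources.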