Highlighted Articles (most recent articles listed first):
AI Now
by Rachelle Holmes Perkins, May 24, 2024 (SSRN)
This Article contends that all law professors have an inescapable duty to understand generative artificial intelligence. This obligation stems from the pivotal role faculty play on three distinct but interconnected dimensions: pedagogy, scholarship, and governance. No law faculty are exempt from this mandate. All are entrusted with responsibilities that intersect with at least one, if not all three, of these dimensions, whether they are teaching, research, clinical, or administrative faculty. Nor does the mandate depend on whether professors are inclined, or disinclined, to integrate artificial intelligence into their own courses or scholarship. Its urgency derives from the critical and complex role law professors play in the development of lawyers and the architecture of the legal field.
Language Models, Plagiarism, and Legal Writing
by Michael L. Smith, August 16, 2023 (SSRN)
The author argues that "those urging the incorporation of language models into legal writing education leave out a key technique employed by lawyers across the country: plagiarism. Attorneys have copied from each other, secondary sources, and themselves for decades. While a few brave souls have begun to urge that law schools inform students of this reality and teach them to plagiarize effectively, most schools continue to unequivocally condemn the practice...(but) continued condemnation of plagiarism is inconsistent with calls to adopt language models, as the same justifications for incorporating language models into legal writing pedagogy apply with equal or greater force to incorporating plagiarism into legal writing education as well."
How to Use Large Language Models for Empirical Legal Research
by Jonathan H. Choi, August 13, 2023 (SSRN)
This Article demonstrates how to use LLMs to analyze legal documents. It evaluates best practices and suggests both the uses and potential limitations of LLMs in empirical legal research. In a simple classification task involving Supreme Court opinions, it finds that GPT-4 performs approximately as well as human coders and significantly better than a variety of prior-generation natural language processing (NLP) classifiers, with no improvement from supervised training, fine-tuning, or specialized prompting. (An illustrative sketch of this kind of classification workflow appears after this list of highlighted articles.)
Re-Evaluating GPT-4's Bar Exam Performance
by Eric Martínez, May 8, 2023 (SSRN)
This paper investigates the methodological challenges in documenting and verifying the claim that GPT-4 achieved a "90th percentile" passing score on the bar exam, presenting four sets of findings that suggest that OpenAI's estimates of GPT-4's Uniform Bar Exam (UBE) percentile, though clearly an impressive leap over those of GPT-3.5, appear to be overinflated.
GPT-4 Passes the Bar Exam
by Daniel Martin Katz, Michael James Bommarito, Shang Gao, and Pablo Arredondo, March 15, 2023 (SSRN)
The authors evaluated GPT-4's performance on the entire Uniform Bar Exam: the multiple-choice, essay, and performance test components. They found that GPT-4 achieved a passing score on the exam.
Who’s Afraid of ChatGPT? An Examination of ChatGPT’s Implications for Legal Writing
by Ashley B. Armstrong, January 23, 2023 (SSRN)
The author asked ChatGPT to perform a series of common legal research and writing tasks. The article describes how ChatGPT struggles to conduct legal research and can even produce responses that cite incorrect or fabricated case law and statutes. The article also discusses ethical concerns in using ChatGPT for legal writing.
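To make the classification workflow described in How to Use Large Language Models for Empirical Legal Research more concrete, the minimal sketch below shows what zero-shot labeling of court opinions with an LLM can look like. It is illustrative only and assumes the OpenAI Python client with an API key in the environment; the excerpts, label scheme, and model name are invented and do not reproduce the Article's prompts, data, or evaluation.

```python
# Hypothetical sketch: zero-shot classification of court opinions with an LLM.
# Illustrative only -- it does not reproduce the Article's prompts, data, or labels.
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable

# Invented excerpts standing in for the opinions a human coder would label.
opinions = [
    "The judgment of the Court of Appeals is reversed, and the case is remanded.",
    "We affirm the judgment of the court below in all respects.",
]

def classify_opinion(text: str) -> str:
    """Ask the model for a single label, mirroring a human coder's task."""
    prompt = (
        "You are coding court opinions for an empirical legal study.\n"
        "Label the excerpt below as 'affirmed' or 'reversed'. Answer with the label only.\n\n"
        f"Excerpt: {text}"
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # any capable chat model; the Article evaluated GPT-4
        messages=[{"role": "user", "content": prompt}],
        temperature=0,   # deterministic output aids reproducibility
    )
    return response.choices[0].message.content.strip().lower()

print([classify_opinion(o) for o in opinions])
```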
Other articles of interest:
This is a curated collection of recent articles and blog posts about ChatGPT and legal education. Many of them summarize the findings of the academic papers listed above.
Highlighted Articles:
Experiments with ChatGPT: Don’t Panic, the Robots Are Not Writing Your Students’ Legal Memos, by Jennifer Wondracek and Rebecca Rich, 3 Geeks and a Law Blog, January 30, 2023
Generative AI, Having Already Passed the Bar Exam, Now Passes the Legal Ethics Exam, LawSites Blog, November 16, 2023. A report on how two of the leading large language models (GPT-4 and Claude 2) have passed a simulation of the Multistate Professional Responsibility Examination (MPRE).
Law Students Assess Pros and Cons of ChatGPT as a Research Tool, WisBlawg, January 23, 2023.
Learning the Law with AI: Why Law School Students are Tentative about Using ChatGPT, LawNext Blog, June 2, 2023. A summary of results from a Lexis survey which found that only "9% of law students surveyed said they are currently using generative AI in their studies and only 25% say they have plans to eventually incorporate it into their work." The post also lists student respondents' concerns about generative AI in the context of law school.
Other articles of interest:
Before you start any legal research project, it's important to plan and think through factors such as issue identification, jurisdiction, and key facts. The same holds true when using LLMs such as ChatGPT for legal research: the better your prompt and the more clearly you describe what you want the LLM to produce, the better your results will be. The resources listed below provide guidance on how to plan ahead and write more effective prompts when using LLMs for legal research.
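As a purely hypothetical illustration of that kind of planning, the short sketch below assembles a structured research prompt from the elements mentioned above (issue, jurisdiction, key facts, desired output). The scenario, facts, and helper function are invented for illustration and are not drawn from any of the resources listed below.

```python
# Hypothetical sketch: assembling a structured legal-research prompt from the
# planning elements discussed above (issue, jurisdiction, key facts).
# The scenario and facts are invented for illustration only.

def build_research_prompt(issue: str, jurisdiction: str, facts: list[str], output: str) -> str:
    """Combine the planning elements into a single, explicit prompt for an LLM."""
    fact_lines = "\n".join(f"- {f}" for f in facts)
    return (
        f"Legal issue: {issue}\n"
        f"Jurisdiction: {jurisdiction}\n"
        f"Key facts:\n{fact_lines}\n"
        f"Desired output: {output}\n"
        "Cite only real, verifiable authorities and note any uncertainty."
    )

prompt = build_research_prompt(
    issue="Whether a landlord may withhold a security deposit for ordinary wear and tear",
    jurisdiction="Wisconsin",
    facts=[
        "Tenant occupied the apartment for three years",
        "Landlord cites faded paint and worn carpet as damage",
    ],
    output="A short summary of the governing statute and leading cases, with citations to verify",
)
print(prompt)
```

However a prompt is assembled, the LLM's output still needs to be checked against primary sources, since, as noted above, these tools can cite incorrect or fabricated authorities.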