The idea for my experiment with ChatGPT, the next-generation AI text generation bot, came from a chemistry colleague who told me that it could outperform some of his undergraduate students in their final exams. I had a mini panic attack when I heard this. But surely, I thought, this could not apply to the subject I was teaching this term – a specialist, master’s-level elective on international arbitration mostly undertaken by students with years of legal training and even some practical experience.
I tried out ChatGPT on the take-home exam. For the essay component, which asks students to evaluate, for example, whether the law should be reformed, the bot showed it could produce fluent and confident answers that examined multiple viewpoints and even cited relevant UK and international cases. As for the problem question (involving a hypothetical scenario), ChatGPT, with some prompting, came up with reasonable answers that identified the main issues and applied the relevant legal rules. Its capabilities extend well beyond those of conventional legal research software.
None of ChatGPT’s answers in my experiment would be deemed first-class standard, mainly because they lacked substantial critical analysis. However, many answers would pass by showing reasonable knowledge of the relevant rules and applying them to the question. I was quite surprised by how few answers contained substantial errors. I even found some answers that showed slightly more detailed analysis and could be graded at a high 2:2 (not a bad grade for students who haven’t done much work for the course!).
There are, of course, serious academic integrity questions around students’ use of AI-enabled content generation tools like ChatGPT to “help” with their assignments; given that its use is not technically plagiarism, universities may need to review and update their academic integrity policies accordingly. But, as a law professor, the question I am most concerned by is what ChatGPT means for legal education today.
To provide some context for non-legal readers: some UK law schools have traditionally prioritised legal education as an intellectual field of study over the goal of training future legal professionals, but this is not the case with many others. Still, almost all schools teach undergraduates legal writing and research skills in their first year, as part of about 10 core or mandatory subjects taken over their first two years (the final year consists of a few electives). Law schools also offer one-year LLM programmes, which usually comprise more advanced or specialist subjects. The way students are assessed in many of these subjects has not changed for years, except perhaps to move online – especially since Covid.
But legal practice itself is changing at a more rapid pace than ever before, with new technologies a driving factor. AI is being applied to contract drafting, document reviewing, e-discovery, due diligence, legal research, automation of legal practice and even litigation prediction. New technologies are “democratising” legal knowledge and services that were once in the exclusive domain of trained legal professionals. It is unsurprising that technology is becoming the centrepiece of the “access to justice” movement.
Given these changes, there has been much talk of legal education needing to develop “future-ready” or “future-proof” lawyers. We are seeing a handful of law schools beginning to introduce new electives on law and AI, big data, blockchain, legal design, legal engineering and law and computer science. Their main goal is to help law students understand and become prepared to use new technologies in their future work. Moreover, these courses can help lawyers understand the various legal risks associated with such technologies. There is no doubt that data protection and privacy has become a fast-developing area of legal practice, for instance.
However, the core curricular and pedagogical practices of many law schools have remained the same, despite the real likelihood of technology rendering them obsolete. As ChatGPT’s capabilities grow, it will not only be able to produce a decent legal opinion but also write code faster and potentially better than legal engineers can.
Law schools and other legal education and training providers are waking up to these challenges, but action has been taken at a glacial pace. Equipping students with legal concepts, knowledge or skills – or even preparing them to “think like a lawyer” – will not make them future-proof. Nor will learning about the applications of AI and other new technologies in the legal domain (“legaltech”) be enough.
Legal education across the board will need to focus more on developing students’ social intelligence, creativity, empathy, adaptive capability and collaboration skills, alongside critical thinking – all framed around a learning culture that cultivates a growth mindset. These should no longer be described as “soft skills” but as “must-haves” for survival in these times.
As a friend (a machine learning scientist) recently said to me, “I don’t need another legal opinion. I need someone to represent me and take the best course of action given my circumstances.” In that regard, it may not be a bad thing for AI to replace today’s lawyers with a new generation of legal-social-technology workers.
Mimi Zou is professor of commercial law at the University of Exeter.