The AI revolution demands radical change to law curricula

The emergence of ChatGPT underlines that keeping the status quo may be the beginning of the end for legal education, warns Mimi Zou

December 23, 2022
A robotic hand points to where someone should sign a document, illustrating the role of AI in legal practice
Source: iStock

The idea for my experiment with ChatGPT, the next-generation AI text generation bot, came from a chemistry colleague who told me that it could outperform some of his undergraduate students in their final exams. I had a mini panic attack when I heard this. But surely, I thought, this could not apply to the subject I was teaching this term – a specialist, master’s-level elective on international arbitration mostly undertaken by students with years of legal training and even some practical experience.

I tried out ChatGPT with the take-home exam. For the essay component, which asks students to evaluate, for example, whether the law should be reformed, the bot showed its capability to produce fluent and confident answers that examined multiple viewpoints and even cited related UK and international cases. As for the problem question (involving a hypothetical scenario), ChatGPT came up, with some prompting, with reasonable answers that identified the main issues and applied the relevant legal rules. Its capabilities extend well beyond those of conventional legal research software.

None of ChatGPT’s answers in my experiment would be deemed first-class standard, mainly due to the lack of substantial critical analysis. However, many answers could pass by showing reasonable knowledge of the relevant rules and applying them to the question. I was quite surprised that I did not encounter many answers with substantial errors. I even found some answers that showed a bit more detailed analysis and could be graded at a high 2:2 (not a bad grade for students who haven’t done much work for the course!).

There are, of course, serious academic integrity questions around students’ use of AI-enabled content generation tools like ChatGPT to “help” with their assignments; given that such use is not technically plagiarism, universities may need to review and update their academic integrity policies. But, as a law professor, the question that concerns me most is what ChatGPT means for legal education today.

To provide some context for non-legal readers: some UK law schools have traditionally prioritised legal education as an intellectual field of study over the goal of training future legal professionals, but this is not the case with many others. Still, almost all schools teach undergraduates legal writing and research skills in their first year, as part of about 10 core/mandatory subjects taken in their first two years (their final year consists of a few electives). Law schools also offer one-year LLM programmes, which usually cover more advanced or specialist subjects. The way students are assessed in many of these subjects has not changed for years, except perhaps to move online – especially since Covid.

But legal practice itself is changing at a more rapid pace than ever before, with new technologies a driving factor. AI is being applied to contract drafting, document review, e-discovery, due diligence, legal research, the automation of legal practice and even litigation prediction. New technologies are “democratising” legal knowledge and services that were once the exclusive domain of trained legal professionals. It is unsurprising that technology is becoming the centrepiece of the “access to justice” movement.

Given these changes, there has been much talk of legal education needing to develop “future-ready” or “future-proof” lawyers. We are seeing a handful of law schools beginning to introduce new electives on law and AI, big data, blockchain, legal design, legal engineering and law and computer science. Their main goal is to help law students understand and become prepared to use new technologies in their future work. Moreover, these courses can help lawyers understand the various legal risks associated with such technologies. There is no doubt that data protection and privacy has become a fast-developing area of legal practice, for instance.

However, the core curricular and pedagogical practices of many law schools have remained the same, despite the real likelihood of technology rendering them obsolete. As ChatGPT’s capabilities grow, it will not only be able to produce a decent legal opinion but also write code faster and potentially better than legal engineers can.

Law schools and other legal education and training providers are waking up to these challenges, but action has been taken at a glacial pace. Equipping students with legal concepts, knowledge or skills – or even preparing them to “think like a lawyer” – will not make them future-proof. Nor will learning about the applications of AI and other new technologies in the legal domain (“legaltech”) be enough.

Legal education across the board will need to focus more on developing students’ social intelligence, creativity, empathy, adaptive capability and collaboration skills, alongside critical thinking – all framed around a learning culture that cultivates a growth mindset. These should no longer be described as “soft skills” but as “must-haves” for survival in these times.

As a friend (a machine learning scientist) recently said to me, “I don’t need another legal opinion. I need someone to represent me and take the best course of action given my circumstances.” In that regard, it may not be a bad thing for AI to replace today’s lawyers with a new generation of legal-social-technology workers.

Mimi Zou is professor of commercial law at the University of Exeter.

Reader's comments (2)

I call on ALL professors to respond to ChatGPT like scholars. 1. It is not so novel: the major change is ease of access; all other features have been present for some time. 2. We must understand it calmly and in context. 3. The great challenge is EDUCATING both students and our publics to use it properly and doing our best to limit abuses. Can we do that without predicting every other day the end of a world that never existed?
graff.40: agree. Legal professionals are bound by codes of conduct and overseen by professional bodies. Who oversees ChatGPT's performance and takes responsibility if it makes mistakes or displays apparent bias? The user, or the ChatGPT developers? Under what jurisdiction do they operate? Unless/until those questions can be answered, it seems to be caveat emptor. If you choose to proceed, do so with caution.