The two key steps to promoting responsible use of LLMs
Large language models offer opportunities for higher education, but also present challenges. Here is how to balance both
The integration of large language models (LLMs) such as GPT-4 into higher education is transforming the academic landscape, providing innovative tools for learning and research. However, this transformation also presents challenges, including potential misuse and a decline in critical thinking skills. Here, we will examine the dual impact of LLMs in higher education, highlighting their benefits and the risks associated with improper use, and the two-step process educators can implement to keep their students thinking critically.
Enhancing educational support with LLMs
LLMs are increasingly essential in higher education, providing personalised, on-demand academic assistance. Acting as virtual tutors and available 24/7, they can swiftly and accurately address questions across diverse disciplines. This accessibility helps students grasp difficult concepts, understand complex theories and receive illustrative examples when needed.
Additionally, LLMs support writing tasks by generating essay topics, refining drafts and organising arguments coherently. In research, they efficiently process extensive academic literature, summarising key insights and facilitating comprehensive literature reviews.
Challenges and risks of LLM integration
Despite their benefits, integrating LLMs into education comes with significant risks, particularly when they are misused. One of the most common forms of misuse is treating LLMs as advanced search engines. This behaviour, which we can classify as an abuse of LLMs, occurs when students seek instant answers to assignments or exam questions without actively engaging with the content. While LLMs are capable of providing quick responses, overreliance can undermine the learning process, bypassing critical thinking, independent analysis and creative problem-solving.
This abuse prevents students from developing essential cognitive skills, leading to superficial understanding and intellectual complacency. The goal of higher education is to nurture enquiry and deep thinking; using LLMs merely to retrieve quick answers short-circuits this process. As a result, students miss opportunities to develop their own insights and analytical abilities, which are crucial for long-term intellectual growth.
Academic dishonesty is another serious issue, with some students using LLMs to generate essays, complete assignments or answer exam questions. This practice not only undermines academic integrity but also devalues the educational experience. Furthermore, since LLMs can sometimes generate inaccurate or biased information, uncritical dependence on these models can lead to the spread of misinformation, compromising the quality of learning.
To address these challenges, educational institutions must establish clear guidelines for appropriate LLM usage, ensuring that these technologies are used to complement learning rather than replace it. Encourage students to engage critically with LLM-generated content, fostering a culture that prioritises deep understanding and intellectual growth over shortcuts to information.
Promoting responsible use: answering questions and questioning answers
To mitigate LLM misuse and encourage deeper learning, educators should implement strategies that emphasise both answering questions and questioning answers. This dual approach, outlined in the list below and illustrated in the sketch that follows it, ensures LLMs enhance critical thinking rather than serve as shortcuts to information.
Answering questions:
- Responding to LLM-generated questions: Encourage students to answer questions generated by LLMs related to their coursework. This practice helps students engage actively with the learning material, articulate their understanding and explore intricate topics.
- Skill development: By answering these questions, students practise constructing well-reasoned arguments and organising their thoughts coherently, thus strengthening their grasp of the subject matter.
Questioning answers:
- Critical evaluation: Teach students to assess LLM-generated responses by verifying facts, identifying biases and evaluating the logic and accuracy of the information.
- Deep understanding: By questioning answers, students engage in deeper analysis, enhancing their ability to discern reliable information and fostering intellectual resilience.
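For educators or learning technologists who want to build this cycle into a course tool, the sketch below shows one possible shape. It is a minimal sketch, assuming the OpenAI Python SDK and a GPT-4-class chat model; the model name, prompts and function names are illustrative assumptions rather than a prescribed implementation. The first call asks the model to pose questions that students must answer themselves; the second produces an answer that students are then asked to fact-check and critique.

```python
# Minimal sketch of the "answering questions / questioning answers" cycle.
# Assumes the OpenAI Python SDK (pip install openai) and an API key in the
# environment; model name, prompts and function names are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def generate_questions(topic: str, n: int = 3) -> str:
    """Step 1 (answering questions): ask the model for open-ended questions
    that students must answer themselves, not for finished answers."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: any GPT-4-class chat model
        messages=[
            {"role": "system",
             "content": "You are a tutor. Pose open-ended questions only; "
                        "never supply the answers."},
            {"role": "user",
             "content": f"Write {n} discussion questions on: {topic}"},
        ],
    )
    return response.choices[0].message.content


def answer_to_critique(question: str) -> str:
    """Step 2 (questioning answers): obtain a model-generated answer that
    students then check for factual accuracy, bias and gaps in logic."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print("Questions for students to answer:\n",
          generate_questions("the ethics of AI in higher education"))
    print("\nAnswer for students to critique:\n",
          answer_to_critique("What are the main risks of LLM misuse in coursework?"))
```

The essential design choice is the division of labour: in the first step the model withholds answers, and in the second the burden of verification sits with the student rather than the machine.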
By structuring LLM usage around both answering and questioning, educators help to ensure that AI serves as a learning aid rather than as a shortcut. This approach promotes active engagement, critical enquiry and meaningful academic growth, fostering an environment where technology complements rather than detracts from learning.
LLM-enabled learning environments with Socratic methods
LLM-enabled learning environments that incorporate Socratic methods build on the traditional model of teaching through dialogue, questioning and deep reflection. They use the capabilities of LLMs to stimulate critical thinking by engaging students in cycles of answering questions and questioning answers. Used this way, AI becomes an interactive tool for deeper intellectual engagement: students do not simply learn from AI-generated content but are actively involved in questioning and refining their understanding, which builds a culture of continuous learning and critical enquiry.
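As a rough illustration of such an environment, the loop below alternates model-posed questions with student replies, so that students first answer questions and then have those answers probed. It assumes the same OpenAI Python SDK as the earlier sketch; the system prompt, model name and loop structure are illustrative assumptions, not a definitive implementation.

```python
# Illustrative Socratic dialogue loop; assumes the OpenAI Python SDK and an
# API key in the environment. Prompt wording and model name are assumptions.
from openai import OpenAI

client = OpenAI()

SOCRATIC_PROMPT = (
    "You are a Socratic tutor. Reply to every student message with one short, "
    "probing question that tests their reasoning. Never give a direct answer."
)


def socratic_session(topic: str, turns: int = 3) -> None:
    """Alternate model-posed questions with student replies: the student
    answers questions, then has those answers questioned in turn."""
    messages = [
        {"role": "system", "content": SOCRATIC_PROMPT},
        {"role": "user", "content": f"We are discussing: {topic}"},
    ]
    for _ in range(turns):
        reply = client.chat.completions.create(model="gpt-4o", messages=messages)
        question = reply.choices[0].message.content
        print("Tutor:", question)
        student_answer = input("Student: ")  # in practice, collected via the VLE
        messages.append({"role": "assistant", "content": question})
        messages.append({"role": "user", "content": student_answer})


if __name__ == "__main__":
    socratic_session("whether LLMs weaken or strengthen critical thinking")
```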
LLMs hold significant promise for enhancing higher education by providing tools that support learning and research. However, their integration into the learning process must be managed responsibly, in order to prevent misuse that can compromise educational integrity and critical thinking. By adopting strategies like the answering questions and questioning answers approach within LLM-enabled Socratic learning environments, educators can transform these technologies into powerful tools for fostering independent and profound learning. The responsible integration of LLMs into the tertiary curriculum can lead to enriched educational experiences and promote deeper understanding and intellectual resilience among students.
Xiangen Hu is director of the Institute for Higher Education Research and Development at the Hong Kong Polytechnic University.