Prompt engineering as academic skill: a model for effective ChatGPT interactions
Gathering information from AI requires a new layer of search skills that includes constructing effective prompts and critically navigating and evaluating outputs
While debates about AI in relation to higher education pedagogies and assessment rumble on, the consensus seems to be that it is here to stay. Even if it is ultimately banned or reliably identified by academic integrity detectors, we need to take a step back and consider how using AI constitutes a new and evolving academic skill. This is primarily because panics around academic integrity have obscured the fact that chatbots such as ChatGPT were originally envisaged as “helpers”: sophisticated search tools rather than a mechanism for outsourcing essay writing.
Many of us are familiar with the practice of teaching students effective search and research skills. These have traditionally comprised library catalogue searches, database searches, keyword searches and Boolean operators and, once literature is found, skimming and scanning techniques, such as SQ3R, to locate useful information.
Gathering information from AI, however, requires a new layer of search skills often termed (rather technically) prompt engineering. As our students increasingly turn to AI bots such as ChatGPT to scaffold and complement their studies, training in the art of (a) constructing effective prompts and (b) critically navigating and evaluating outputs is becoming ever more important. The following two-step acronym/process, I propose, is an effective and easily memorised model for distilling the core tenets of prompt engineering into an academic skill.
Step one: TAP
The first part of the process is to conceive of what AI bots produce as akin to the flow from a tap. As with a tap, incorrect operation or adjustment will lead to an insufficient flow, an uncontrollable deluge or perhaps even getting one’s fingers burned. It needs careful operation, guidance and prompting using the acronym TAP:
T: Topic
Careful identification of the topic should be the student’s first consideration. Crucially, for AI bots to produce the most accurate and relevant outputs, the topic needs to be:
- Precise and concise
- Detailed and specific
- Set within its context
Refining keywords to achieve this is a skill that takes practice. The prompt “can war ever be justified?”, for instance, is far too vague because it doesn’t state which war, when or in what context. On the other hand, the prompt “were the arguments made by the Bush administration for going to war with Iraq in 2002 justified?” is far more detailed and specific, and it provides context. It’s worth bearing in mind that students with neurodiverse conditions may find identifying and refining keywords particularly challenging (for example, students with dyslexia are likely to prioritise context over topic, whereas students with autism are likely to do the opposite) and, as such, extra support may be required.
A: Action
The student’s second consideration ought to centre on telling the AI precisely what type of output is required and what its purpose is. Neither of the prompts above contains action or activity keywords. If we add “write a summary of the main arguments” or “act as a debater/philosopher”, this supplies the necessary activity keywords: the bot will know that it is required to produce a summary, choose or prioritise the main arguments, or present them in a certain style. Again, students need to be precise, concise, detailed and specific, but they should also carefully select action words such as summarise, clarify, suggest, specify or propose.
P: Parameters
Finally, the student needs to set constraints that fine-tune the response, such as length, the provision of academic references, explanations (or workings) for the conclusions presented, or the format of the output.
Without these three instructions, outputs are often too vague or generalised to be of a sufficient academic standard for higher education, or they can create cognitive overload through providing too much extraneous, irrelevant or unreliable information. An additional tip for writing effective prompts is to consider whether answers to all of the “Wh questions” are present in the prompt – namely, the what, when, who, why and how of the topic and activity under consideration.
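To make these three elements concrete, the sketch below shows one way the Iraq example above might be assembled into a single TAP prompt. It is purely illustrative: the function name and the exact parameter wording (the 300-word limit, for instance) are assumptions for demonstration rather than part of the TAP model, and students would typically type the combined sentence straight into the chatbot.

```python
# Illustrative sketch only: one way to combine Topic, Action and Parameters
# (TAP) into a single prompt. The function name and the parameter wording are
# assumptions for demonstration, not part of the TAP model itself.

def build_tap_prompt(topic: str, action: str, parameters: str) -> str:
    """Join the three TAP elements into one prompt string."""
    return f"{action} {topic}. {parameters}"

prompt = build_tap_prompt(
    action="Write a summary of the main arguments concerning",
    topic=("whether the arguments made by the Bush administration for going "
           "to war with Iraq in 2002 were justified"),
    parameters=("Keep the summary under 300 words, provide academic references "
                "and explain the reasoning behind each conclusion."),
)
print(prompt)
```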
Step two: TASTE
Having got our TAP running and producing outputs, we need to encourage students to TASTE what is being produced before actually consuming it. We should avoid a situation whereby students become uncritical consumers and reorganisers of information; instead, we should make them proficient, critical navigators of AI choice architectures. Without such criticality, the plethora of options pumped out by AI generators risks turning students into self-perpetuating consumers of information who never actively research or understand anything. This is where our second acronym comes in:
T: Test
Request and sample different outputs, trying variations or adding further prompts to refine the response (a short sketch of this test-and-adjust loop appears after the final TASTE element).
A: Adjust
Rephrase topic, action and parameter keywords, fine-tuning for greater precision, detail and context.
S: Simplify
A common pitfall when entering prompts is vague, wordy or woolly instructions, which confuse the AI. Students should constantly revise the topic, action and parameter (TAP) keywords and phrases with a view to simplifying them.
T: Trust
A great deal of ink has already been spilled on the issue of AI and ChatGPT producing fake information. Students need to check the accuracy and truth of the outputs before they even begin the work of critical analysis.
E: Examine
Here students can begin the conventional work of critical evaluation and analysis, but with a particular focus on identifying flawed or invented arguments, bias and misrepresentations. Ideally, students need to enter into a mental, if not actual, “chat” with the AI to separate the wheat from the chaff and scrutinise the outputs. Of all the elements in this model, this is the most vital, and it needs to be accompanied by reading trusted academic sources.
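For readers who like to picture the Test and Adjust steps as a procedure, here is a minimal sketch. The ask_chatbot() helper is hypothetical, a stand-in for whatever chatbot interface the student is using, and the loop structure is an illustrative assumption rather than a prescribed implementation.

```python
# Minimal sketch of the Test and Adjust steps of TASTE. ask_chatbot() is a
# hypothetical stand-in for whatever chatbot interface the student uses.

def ask_chatbot(prompt: str) -> str:
    """Hypothetical helper: send a prompt to an AI chatbot and return its reply."""
    raise NotImplementedError("Replace with a real chatbot call.")

def test_and_adjust(base_prompt: str, refinements: list[str]) -> list[str]:
    """Test: sample an initial output; Adjust: re-prompt with refined TAP keywords."""
    prompt = base_prompt
    outputs = [ask_chatbot(prompt)]    # Test: request a first output
    for refinement in refinements:     # Adjust: add precision, detail and context
        prompt = f"{prompt} {refinement}"
        outputs.append(ask_chatbot(prompt))
    return outputs                     # every output still needs the Trust and Examine checks
```

The Simplify, Trust and Examine steps, of course, remain matters of human judgement rather than anything that can be automated.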
With effective prompts, AI, specifically ChatGPT and its like, can be harnessed as a “guide on the side”, provided students know how to use prompting techniques to mine information. While much of the focus hitherto has rightly been on issues pertaining to academic integrity, educators need to be mindful that AI bots have suddenly added a layer to the traditional menu of academic literacy skills, one that needs integrating into the curriculum sooner rather than later if we are to support the ethical and responsible embedding of AI in our pedagogical practices.
Adrian J. Wallbank is a lecturer in educational development at Oxford Brookes University, UK.