It is understandable that social scientists sometimes succumb to science envy. It is a wonderful thing to be able to independently verify formulas under fixed conditions and use them to accurately predict an output based on the inputs.
This causal paradigm is a compelling approach to understanding “what works”. Wouldn’t we all love to be able to say that if you do X, then Y is Z per cent likely to happen? Hence the “gold standard” status of randomised controlled trials, which aim to establish cause and effect by controlling all factors except the specific intervention (often a medical treatment) under scrutiny to establish whether that intervention does indeed cause the outcomes we hoped for.
In medicine, randomised and blinded trials are used to rule out, statistically, the effects of chance and of confounding factors. In social or educational settings, the random allocation of people to different groups (drug versus placebo, for instance) is often not possible, but we can use quasi-experimental designs (QEDs), which create a counterfactual comparison group through statistical matching, so that the intervention group is compared with people who are as similar to them as possible.
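To make the idea of statistical matching concrete, here is a minimal sketch of one common QED technique, propensity score matching. The data and variable names are entirely hypothetical and the code is illustrative only, not a description of any evaluation discussed in this article; a real study would need careful covariate selection, balance checks and sensitivity analysis.

```python
# Illustrative sketch of propensity score matching on made-up data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Hypothetical student records: two covariates (say, prior attainment and
# a deprivation indicator), participation in an access intervention, and
# a later outcome measure.
n = 500
covariates = rng.normal(size=(n, 2))
participated = rng.integers(0, 2, size=n)
outcome = covariates[:, 0] + 0.3 * participated + rng.normal(size=n)

# Step 1: model each student's probability of participating, given the covariates.
propensity = LogisticRegression().fit(covariates, participated)
scores = propensity.predict_proba(covariates)[:, 1]

# Step 2: for each participant, find the non-participant with the closest
# propensity score to serve as their matched comparator.
treated = np.where(participated == 1)[0]
control = np.where(participated == 0)[0]
nn = NearestNeighbors(n_neighbors=1).fit(scores[control].reshape(-1, 1))
_, idx = nn.kneighbors(scores[treated].reshape(-1, 1))
matched_control = control[idx.ravel()]

# Step 3: compare outcomes across the matched pairs.
effect = outcome[treated].mean() - outcome[matched_control].mean()
print(f"Estimated effect on the matched sample: {effect:.2f}")
```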
This is the intellectual context in which England is attempting to determine what works best for widening access to university. Propelled by the launch of the Centre for Transforming Access and Student Outcomes (TASO) in 2019 – part of the government-funded network of What Works Centres – and the renewed emphasis on evaluation from the Office for Students, universities are seeking to generate causal evidence (also referred to as Type 3 evidence) of effectiveness in their access and participation interventions.
In his recent speech on What’s next in equality of opportunity regulation, John Blake, director for fair access and participation at the OfS, confirmed new funding of £1.5 million for TASO to establish an evidence repository for evaluation research and reports from across the English sector. Details of how this will operate are still under discussion, but we need to reflect carefully on the weight given to different types of evidence, especially in light of the preferences of a funder that is also the sector regulator.
While there are clear merits to a causal approach, we must be mindful of the limits of causal claims in human, social and behavioural settings. Human behaviours and attitudes are less predictable than physical forces, especially in education, despite advances in behavioural psychology and modelling.
Educational interventions are less like baking a cake – where competently following the recipe virtually guarantees success – and more like making a soufflé. Even for experienced bakers who meticulously follow the recipe, the soufflé, notoriously, does not always rise – because there are both tangible and intangible factors that cannot be easily controlled. The skill of the baker (practitioners and staff), the type of ingredients (student backgrounds and resources) and the oven’s temperature (learning environment) can all influence the outcomes for different students.
This raises the question of whether causal evaluation is the most useful method in education spaces. If we show that an intervention worked under the specific context and conditions of our evaluation, can we suggest that it will work in the same way the following year, or in a different institutional context? To return to the baking analogy, the challenge in educational contexts is not only to follow the recipe but also to understand, recognise and adapt to the various factors and processes that can influence the soufflé’s rise.
There are other established paradigms in evaluation, each of which can offer different and deeper insights into complex settings such as student experiences and institutional cultures. Qualitative methods, including interviews, focus groups and observations, continue to play a key role in understanding why an intervention may work. And different methodologies can be blended to provide a more holistic view of educational outcomes and processes than causal methods alone can offer.
There are unquestionably situations in which a causal methodology will yield the best evidence, but we can’t assume this will always be the case. A basket of methods, combining both qualitative insights and quantitative data, can offer the most balanced understanding, not just of what works, but of why it works.
So before we all get carried away with testing soufflés that may or may not rise, it is important to reflect on what each method can contribute to building confident explanations of the factors that shape the success of educational interventions and evaluations.
Billy Wong is director of research and evaluation (access and participation) and professor of education at the University of Reading, where Lydia Fletcher is research and evaluation manager (access and participation).