“How to measure learning gain in higher education” was the subject of a roundtable discussion organised by Times Higher Education and VitalSource®, a global platform provider of digital content.
For Alec Cameron, vice-chancellor of Aston University, it was important first to distinguish whether learning gain (broadly defined as an attempt to measure the improvement in knowledge, skills, work-readiness and personal development made by students during their time spent in higher education) mattered more as an end in itself or as a means of enabling students to achieve what they came to university for.
Most students would say it was more important for universities to provide them with the knowledge and attributes to gain entry into a particular career, he said. To that end, the proxy measures used in the teaching excellence framework, such as course retention, satisfaction and employability, were the right ones by which to evaluate learning gain.
Institutions run the risk of focusing too much on the individual and losing sight of the bigger picture when it comes to measuring learning gain, argued Norbert Pachler, pro-director of teaching, quality and learning innovation at the UCL Institute of Education. “We have to remind ourselves of the purpose of higher education, which is about the public good,” he said.
Christina Hughes, pro vice-chancellor of student experience at Sheffield Hallam University, who leads the Legacy Project, funded by the Higher Education Funding Council for England (Hefce) to assess the feasibility of measuring learning gain, said her institution was “very focused on TEF metrics and being metric-minded”, but was not going to be “metric-driven”.
While she believed that proxy measures were “doing something good” in helping to focus the minds of institutional leaders, particularly in such areas as the attainment gap among black and minority ethnic students, the fundamental question to address when thinking about measurement was “what do we value as a society?”.
“We’ve got a programme of work [at Sheffield Hallam] looking at friendship and belonging among students because we have to value other areas of human experience and not drive ourselves into a reductionist, transactional environment,” she said.
There is little doubt that learning gain is conceptually “a good thing”, said Ian Campbell, deputy vice-chancellor at the University of Hertfordshire, but it is difficult to measure in an appropriate way. Professor Pachler agreed, saying that in the past there had been “an understandable tendency to measure things that are easy to measure and ignore the things we can’t”.
He cautioned against falling into the same trap this time. “We must give due consideration to qualitative methods, then think how we can scale up and aggregate those insights into something meaningful,” he said.
The current proxies being used to measure learning gain can be improved upon, said Camille Kandiko Howson, a researcher working on the Hefce Legacy Project. “The TEF criteria are quite good but the proxy measures are not very well matched,” she said. “We can be much better about measuring learning than that.”
Both research-intensive and non-research-intensive universities are involved in the current phase of the Legacy Project, said Professor Hughes. All are exploring “what learning gain means to them”. “The work is less concerned with a national, meta-test for every student and is far more interested in how [learning gain] can be used to motivate and support student retention and progression,” she said.
For Elizabeth Treasure, vice-chancellor of Aberystwyth University, there were parallels with measuring the quality of healthcare. “You can’t use just one measure but have to look at the same thing from lots of angles. It’s the same with learning gain,” she said. “You can measure knowledge, personal resilience, employability but you have to do it in a composite way to understand the whole picture.”
When it came to looking at metrics related to resilience and critical thinking, there was an ethical dimension to consider, argued Helen King, a former senior higher education policy adviser at Hefce. “What we’re measuring here is a human being, so we need to address that appropriately,” she said. “If we’re looking to measure something that will improve over time then there will be a starting-off point that will not be quite so good. For students taking the test and discovering they’re not performing very well, it might be motivational for some but worrying for others and have the reverse effect.”
However, there will never be a “silver bullet” when it comes to measuring learning gain, said Dr Howson. “Nor will we get it completely right but it will be better than where we are right now.” Around six different ways of using the data were emerging from pilot projects, which would help to “separate out the different discourses around learning gain”, she said. These ranged from individual-level metrics for students, through teacher-, seminar- and module-level metrics, to institutional ones that allow national benchmarking to take place, and governmental metrics concerning accountability and regulation.
Bart Rienties, professor of learning analytics at the Open University, said the big problem with measuring learning gain was the burden it placed on students. To avoid this, he advocated that institutions “become smarter” about mining their existing data sets and using learning analytics to help under-performing students as well as high achievers.
“Every institution should be passionate about enhancing the quality of its teaching, irrespective of external measures. That’s what should be driving learning gain,” he said.