Which universities are the most innovative?
University rankings are often criticised, but seldom ignored. They are composite measures of several variables, with research quality – as indicated by citations or the number of highly cited researchers – a significant driver for many of them.
While one may question the methodological soundness of composite indicators, there is no doubt that rankings influence behaviour and have prompted universities around the world to take quality seriously.
They also have some unintended consequences. A case in point is the Academic Ranking of World Universities (ARWU), originally developed to monitor the global standing of Chinese universities as they invested in research capacity. However, the ARWU soon became a league table for the world’s most research-intensive universities.
Certainly, the ARWU and other rankings have had a positive influence in nurturing a culture of quality, and they provide a reasonable mechanism for monitoring the progress of universities and national systems starting from a low base. But they may be creating distortions in countries that already have a developed higher education research sector – especially where those countries also run research assessment exercises that reinforce similar behaviour.
Universities in the UK and Australia have performed well in these rankings. It is no coincidence that the UK has had more than 25 years of experience in formal research assessment and that Australia is in the midst of the third round of its Excellence in Research Australia exercise.
However, one concerning distortion relates to Australia’s long-standing poor showing in business-university collaboration. This is despite significant improvement in business investment in research and development in recent years and ongoing expansion of university research.
Two of the Australian Research Council’s grant schemes are the “Discovery” scheme, which funds investigator-driven projects, and the “Linkage” scheme, which requires co-investment from end-user partner organisations. Over the past three rounds, applications for Discovery have increased by 8 per cent while applications for Linkage have declined by 24 per cent. This is despite the fact that the success rate for Discovery is a low 18 per cent while that for Linkage is a healthy 36 per cent.
There could be other contributing factors, but the distorting role of rankings and assessment exercises based on the academy’s view of excellence is certainly at play.
It is in this context that the inclusion of impact in the UK’s recent Research Excellence Framework (REF) is a welcome development. Despite initial misgivings, it appears to have been well received by a wide range of disciplines, including the humanities and social sciences.
One criticism that continues to be made concerns the cost of the impact assessment exercise. A recent report puts that cost at £55 million, presenting it as 3.5 per cent of the research funds to be allocated on the basis of the impact assessment over the specified five-year time frame. But this misses the point: the impact assessment covers the broad research output of UK universities – not just that funded by the funding councils.
It can be argued that a more appropriate denominator would be the total university research expenditure for the five-year period. That would change the cost of impact assessment to something like 0.15 per cent.
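To make the arithmetic explicit, here is a minimal sketch in which the two denominators are simply back-calculated from the figures quoted above, rather than taken from the report itself:

```python
# Back-calculating the implied denominators from the figures quoted
# above: the £55m cost and the 3.5 / 0.15 per cent shares come from
# the article; the denominators are derived, not taken from the report.

cost = 55e6  # reported cost of the REF impact assessment, in pounds

allocated_funds = cost / 0.035      # ~£1.6bn allocated via impact assessment
total_expenditure = cost / 0.0015   # ~£37bn total five-year research spend

print(f"Implied funds allocated via impact: £{allocated_funds / 1e9:.1f}bn")
print(f"Implied total research expenditure: £{total_expenditure / 1e9:.1f}bn")
```

The same £55 million thus looks large or small purely according to which base it is measured against.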
A conservative estimate of the global annual spend on university research is at least a quarter of a trillion US dollars, much of it from public funds. Is it too much to expect an attempt to be made to assess the broader societal impact of this massive investment? A full economic costing of impact assessment may come to more than the reported figure, but it would still be tiny compared with the sheer scale of university research. In contrast, the full economic cost of traditional assessment via academic peer review is, rightly, rarely questioned.
There is a more important benefit of the impact assessment. The UK now has 6,679 published case studies, and it should not take much effort to perform a postcode analysis showing where the research described took place; indeed, the case studies could even be mapped to individual parliamentary constituencies. With such an analysis in hand, politicians will either be talking up the great things happening in their local universities or making the case for greater investment. Either way, the result will be a stronger case for investment in university research.
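As a minimal sketch of what that postcode analysis could look like, assuming the case studies and a postcode-to-constituency lookup had first been assembled as CSV files (all file and column names here are hypothetical):

```python
# Hypothetical sketch of the postcode analysis suggested above.
# Assumes two CSV files have been assembled beforehand (names and
# columns are illustrative, not real datasets):
#   case_studies.csv              - one row per REF case study, with a
#                                   'postcode' column for where the
#                                   research took place
#   postcode_to_constituency.csv  - a postcode -> Westminster
#                                   constituency lookup (the ONS
#                                   publishes such lookups)
import pandas as pd

cases = pd.read_csv("case_studies.csv")
lookup = pd.read_csv("postcode_to_constituency.csv")

# Attach a constituency to each case study via its postcode.
merged = cases.merge(lookup, on="postcode", how="left")

# Count case studies per constituency, busiest first.
per_constituency = (
    merged.groupby("constituency")
          .size()
          .sort_values(ascending=False)
)
print(per_constituency.head(10))
```

Any richer version of the exercise – weighting case studies by assessed quality, say – would follow the same join-and-aggregate pattern.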
Impact assessment will evolve, and better methodologies will appear over time. But the university sector – and nations that are investing in their universities – will benefit from a diversity of rankings that take broader societal impact into account. Early iterations of these rankings will be questioned, but they will not be ignored.
Arun Sharma is deputy vice-chancellor, research and commercialisation, at Queensland University of Technology.