There is a great scene in Hugh Whitemore's play Breaking the Code (1986) where a young academic is being interviewed by a civil servant. He is making an enthusiastic but rather confused attempt to explain his work: "Hilbert thought there should be a single clearly defined method for deciding whether or not mathematical assertions were provable ... I wanted to show that there can be no one method that will work for all questions ... Eventually I conceived the idea of a machine ... ".
The civil servant is clearly baffled, but manages to ask politely: "You actually built this machine?"
"No, no," comes the reply, "it was a machine of the imagination."
This is a job interview, and the civil servant finds that he cannot resist "a crass and naive question". "What is the point of devising a machine that cannot be built in order to prove that there are certain mathematical statements that cannot be proved?" he asks.
This is, of course, the kind of story that gives academics a bad name. The poor taxpayer sweats it out all through the year to make ends meet, only for the Government to take a slice of his or her income to pay people to spend their time swanning around Cambridge inventing imaginary machines! Who wouldn't throw up their hands like that civil servant and ask: "Is there any practical value in all this?"
Since this is in essence a true story, it is not hard to answer that question. The young academic was Alan Turing, and he was being recruited to become the leading cryptanalyst in the team at Bletchley Park that went on to break the Germans' Enigma code. In doing so, they decisively influenced the outcome of the Battle of the Atlantic and hence the date of the Normandy landings, shortening the Second World War, saving countless lives and perhaps even making Allied victory possible - while also, almost accidentally, building the world's first electronic computer. This is surely worth remembering when it is assumed that the "practical value" of research can be predicted in advance.
Some people argue that the very idea of setting out to produce "useful" research with clearly defined "economic benefits" is based on a misconception. In his book The Black Swan, Nassim Nicholas Taleb, dean's professor in the sciences of uncertainty at the University of Massachusetts at Amherst, builds a powerful polemic on a very simple point: we can't predict the future. In particular, we can never know in advance about the impact of major technological change or "highly improbable" events such as 9/11.
Yet we continue to act as if we can. In 2004, a US government department forecast the price of oil - 25 years ahead - at $27 (£12.70) a barrel. A mere six months later, the figure was revised upwards to $54. It didn't seem to occur to the department that the new forecast was bound to be equally unreliable.
In such a radically uncertain world, argues Taleb, we can seldom tell where and whether a particular piece of research is going to prove useful. He gives some striking examples to back this claim. The computer, the internet and the laser - often considered the "three recently implemented technologies that most impact our world today" - were all "unplanned, unpredicted and unappreciated on their discovery, and remained unappreciated well after initial use".
The laser turned out to be vitally useful in repairing detached retinas, but its inventor was merely "satisfying his desire to split light beams ... colleagues teased him quite a bit about the irrelevance of his discovery". The search for hypertension drugs, meanwhile, led to diverse but unexpected benefits in the form of a hair-growth medication and Viagra.
Taleb's conclusion is stark: " ... contrary to social-science wisdom, almost no discovery, no technologies of note, came from design and planning". In many areas, hopes for "augmentation of our knowledge" depend on "the engineer's gusto and love for the building of toys and machines".
One can even find cases where research carried out for no practical purpose, and designed to prove positively idiotic claims, has unexpectedly proved very useful.
The eccentric American artist and amateur naturalist Abbott H. Thayer became obsessed with the notion that animal colouration was always devised for concealment - even arguing that flamingos were pink so as to be camouflaged against the sunset. This daft theory has been widely mocked, but in developing his argument Thayer provided a very detailed analysis of the best methods of camouflage. These were eagerly taken up by the British Navy during the First World War and by the Americans during the Second.
So does the best science emerge from careful design and planning or from more roundabout and sometimes accidental processes? Harry Collins, an expert in the sociology of science at the Cardiff School of Social Sciences at Cardiff University, mocks some of the attempts to justify research by its unexpected benefits - such as when people point out that the hugely expensive space race (pursued primarily for political reasons) led to the invention of the non-stick frying pan. Yet he agrees that it is "fatal if you try to plan everything in terms of the workforce and direct economic payoffs". His own research into the nature of expertise can be carried out largely through voluntary help, borrowing offices and computers, but he worries that such informal arrangements work only for inexpensive projects in supportive departments - and are, in any case, getting more and more difficult.
David Knight, emeritus professor of the history and philosophy of science at Durham University, also believes that scientists haphazardly following their own interests have often achieved significant results. He cites, for example, "Mendel messing about with peas" and "Darwin's huge digression on barnacles, when he abandoned the great issues of evolutionary theory to produce the definitive book on a much smaller subject". Yet this helped him "refine and redefine the big picture, providing evidence that evolution is about finding niches and not about progress 'upwards'".
On the wider issue of "practical value", Knight quotes a story about Michael Faraday: "When Prime Minister Robert Peel asked him what the use of his early dynamo or motor might be, he replied that he didn't know but was sure that one day the Government would tax it."
This was still a live issue for another Prime Minister more than a century later, when Margaret Thatcher got into an argument with Sir George Porter, the director of the Royal Institution.
"She (Thatcher) saw Faraday's work as excellent utility-oriented research that gave birth to the electrical industry, he (Porter) as blue-skies research into matter and force that, after the great man's death, issued in power stations, cathode-ray tubes and so on," Knight says. This disagreement about history was, of course, rooted in a political dispute about how the Government should fund science.
It is Utopian to imagine that today's vastly expensive laboratory research could be publicly funded on the principle of "If we let the engineers go away and play with their toys, something interesting and useful will eventually turn up." But the debate about "directed science" remains fierce.
The 2006 Warry report by the Research Council Economic Impact Group was titled Increasing the Economic Impact of Research Councils. Ian Pearson, the Science Minister, told Times Higher Education last August: "I want to see more economic benefit from the research base."
Many leading scientists responded by arguing for "basic, curiosity-driven research" or pointing out that it can take 30 years for some discoveries to realise their full potential. Sir Philip Cohen, Royal Society research professor in the protein phosphorylation unit at the University of Dundee, claimed that his own research - which, after 25 years, is now proving vital in drug development and in understanding diseases such as cancer and diabetes - "would not have been funded" today. And Peter Cotgreave, former director of the Campaign for Science and Engineering in the UK, echoes Taleb's argument: "In 3,000 years of human endeavour, nobody has come up with a system that can reliably predict the impact of discovery."
Similar debate greeted the publication of the Department for Innovation, Universities and Skills' Innovation Nation White Paper in March. Some scientists worried that the department's plans would tilt the balance too far towards projects with direct commercial prospects and away from curiosity-driven blue-skies research. The Universities Secretary, John Denham, has disputed this suggestion. "Fundamental science is respected both in its own right," he told The Daily Telegraph, and because "In the long term it is our investment in the ideas that would prove exploitable, and the nature of fundamental science is that you can't be sure which ones will be."
Are these issues of "practical value" (and, often, the wealth creation that comes out of it) relevant to research in the humanities as well as the sciences? There have been recent attempts to quantify the impact of arts research on Britain's vast "creative economy". A report last year by Research Councils UK entitled Excellence with Impact explored specific cases. Four books on 20th-century conflict, for which the Arts and Humanities Research Council provided funding of £50,000, each generated book sales worth £500,000 to £1 million.
The report also says that one of the books, Richard English's history of the IRA, Armed Struggle, influenced UK policy in Northern Ireland and helped drive forward the peace process. This clearly had economic as well as human benefits, in terms of increased inward investment and savings on the security budget, although it would be hard to isolate the impact of the book from many other factors. There are also cases of research by art historians, for example, feeding into hugely successful exhibitions.
But just as Turing's imaginary machines turned out to be eminently useful, so can some very obscure research in the humanities.
David Katz, professor of history at Tel Aviv University, studied apocalyptic and millenarian movements and eventually published a book called Messianic Revolution with Richard Popkin of the University of California, Los Angeles. Few people read the vast arid tracts that Isaac Newton devoted to biblical chronology, but Katz and Popkin analysed them in detail and devoted other chapters to themes such as "The Messiah during the Thirty Years' War". Such topics don't tend to thrill people at parties and come, one might assume, into the area of purely useless knowledge, quite unlike the kind of medical research that can actually save lives.
But that assumption would be wrong. When the FBI laid siege to David Koresh's Branch Davidian community at Waco, Texas, for 51 days in 1993, they were confronted by a group very like those studied by Katz and Popkin. The agents focused on the safety of the people in the compound; Koresh was only interested in discussing the Bible.
Hence there was no possibility of dialogue, and the FBI had little option but to fall back on a policy of "stress escalation", which led to the conflagration in Koresh's compound and the loss of 74 lives. If somebody such as Katz had talked to Koresh on his own terms, might he have been able to calm the situation and then negotiate the release of the "hostages" without bloodshed? We will never know. Yet his arcane area of expertise soon proved relevant to another urgently practical matter of life and death.
As the year 2000 approached, the Israeli Government, fearing violent acts within its borders by millenarian Christian extremists, realised that its considerable knowledge of Islamic terrorism was far from matched by its understanding of the Christian equivalent. The only hope was to call in an academic. It was thus that Katz's research suddenly turned him into a media star.
"I found myself in the position of a man who has studied dinosaur mating habits in academic obscurity", he later recalled, "until Godzilla comes to Tel Aviv. Before I knew it, I was driving around in a police car with flashing lights and appearing on TV shows from Japan to North America by way of Europe."