Research intelligence - Forget HAL, say hello to Alice

Investing in multi-million-pound supercomputers should revolutionise research in many fields, writes Paul Jump

August 26, 2010

In an age of austerity, and with the prospect of further cuts in public funding, an outlay of several million pounds on a supercomputer may seem like something of a luxury.

But the number of state-of-the-art facilities for supercomputing - also known as high-performance computing - unveiled by UK universities in recent months suggests that this view is not shared by Britain's vice-chancellors.

The £44 million High Performance Computing Wales project, which will be based primarily at the universities of Cardiff and Swansea and shared between the country's universities and businesses, made newspaper headlines in July when it was formally announced.

And new supercomputers have also been unveiled in recent months at the universities of Edinburgh, King's College London, Lancaster, Leicester and Southampton.

Steve Bowden, supercomputing business manager at IBM, said modern supercomputers, unlike their predecessors, tended to be built from the ground up from basic building blocks. He said universities had to bear in mind that as supercomputers grow in size, the cost of powering, cooling and administering the technology could come to dwarf the purchase price.

However, Mr Bowden said that incorporating water-cooling systems into designs could dramatically reduce those costs, and pointed out that a powerful supercomputer gave institutions significant pulling power in the bid to court top researchers.

The University of Leicester certainly seems to accept that argument. Announcing the acquisition of its new supercomputer, Alice, the university trumpeted the fact that it is 10 times more powerful than its predecessor, adding that it will "help attract high-quality researchers and millions of pounds in research grants to Leicester". But for such ambitions to be realised, universities are increasingly keen to make their high-performance computing facilities available to as wide a spread of researchers as possible.

For that reason, Leicester decided that Alice's £2.2 million cost - which included £1 million for refurbishment and water-cooling installation - should be borne centrally by the university.

According to Mary Visser, director of IT services at Leicester, access will be "free at the point of use" and overseen by a governing committee that includes representatives from across the university.

"If departments feel they are being short-changed, we need a forum and policies to sort that out," she said.

She added that supercomputers' ability to trawl through large datasets in search of patterns was applicable to a wide range of subjects outside mathematics and physics, the departments at Leicester where Alice's "sub-optimal" predecessors had been housed.

She admitted that only "real techies" had previously had the expertise needed to use supercomputers, "but we have social scientists and economists with big problems to solve who didn't sign up to be computer programmers. We need to get to the point where my staff have enough empathy with research to provide the bridge between it and IT," she said.

Easy PC

The £4 million cost of Lancaster University's new facility (including about £3 million for new machine rooms and water cooling) was also borne by the institution as a whole.

Roger Jones, professor of particle physics at Lancaster, said the supercomputer was also used by researchers in management, environmental science, engineering and social science, and that he was keen to widen its use even further.

To this end, he set up online instruction pages, which emphasise that researchers should not have any trouble replicating anything they could do on a standard desktop PC.

A scheme is also in place to give them access to mentors in the IT or physics department.

Professor Jones admitted that the new hardware would last only three to four years before technology moved on to such an extent that it became more cost-effective to replace than to upgrade. But he insisted the investment was imperative for research in areas such as his.

He said alternative arrangements such as cloud computing, which involves using huge remote computing centres provided by commercial organisations such as Google, could be effective for solving some computational problems.

Grid computing, meanwhile, which pools together lots of remotely located standard computers, could make better use of local resources, he added.

However, Professor Jones said that only large local computing facilities had sufficiently rapid connections between their different "nodes" to carry out large numbers of calculations on huge datasets.

"Besides," he added, "you always need something fairly local, where you do the last step of the analysis and where the real inspiration happens."

paul.jump@tsleducation.com
