The billion byte brain

November 18, 1994

Since the Renaissance, scientific progress has been built on two planks: theory and experiment. Although these continue to be crucial to progress, what happens when the equations are not amenable to analytic solution and the system is too large, too small, too dangerous, too complex, or too expensive to experiment on? The answer is to model the system numerically. While this approach has been used for a number of years, the lack of powerful computers has kept such studies small and perhaps imprecise. Today, the advent of very high performance computers (HPC) removes these shackles and enables the new scientific technique of computer simulation to stand alongside more traditional methods.

For over a decade the University of Edinburgh has had a pre-eminent position in applying HPC techniques to scientific research and, in 1990, it established the Edinburgh Parallel Computing Centre (EPCC) as the focus for this work. EPCC is an interdisciplinary research centre with activities in a wide range of applications, as well as in new HPC techniques. EPCC is also supported as a centre of excellence in HPC education and training by the Joint Information Systems Committee of the Higher Education Funding Councils.

Applications of computer simulation are numerous: modelling the weather and climate change, investigating the processes of galaxy formation, simulating fluid flow past bodies, or investigating the fundamental forces of nature. Indeed, over the past few years the concept of computational "grand challenges" -- problems which are almost too difficult to consider tackling -- has been widely accepted and this has heightened interest in the possibilities of simulation on the most powerful computers available.

The traditional approach to improving computer performance has been to refine silicon microchip design to produce smaller components and faster clock speeds. However, the opportunities for further substantial advances by this route are limited by the laws of physics, which restrict the overall size of the computer while also placing lower limits on the degree of miniaturisation possible. Even more significant constraints may be imposed by the laws of economics, which make it unprofitable for companies to continue to press the technology further through ever more esoteric designs.

Against this background, the concept of the massively parallel computer, in which many commodity microprocessors are connected by a high-speed communications network, has now gained general acceptance. This lets the computer manufacturer use the latest microprocessors and memories (with a consequent decrease in costs) and concentrate on the design of the interconnection network.

Today, many of the largest computer companies offer such products. The art behind parallel computing is for the programmer to make the individual processors work efficiently together on a single problem. Although this requires a new approach to programming, it frequently matches the underlying physics of the problem -- nature itself acts in parallel and the vast majority of scientific applications have a structure which matches the architecture of parallel computers.

Until recently the most powerful supercomputers have had vector, or pipelined, architectures. Here, successive calculations are overlapped just as different manufacturing steps are performed simultaneously on an assembly line. However, just as there is a maximum useful length to an assembly line, in any program there is a limit to the number of operations which can be overlapped.

Even if it is not possible to raise throughput by increasing the number of workers on an assembly line, it is always possible to run many assembly lines side by side. Similarly, a computer may combine tens, hundreds, or thousands of processors to form a "parallel computer".

Since the only limit to the number of processors is the budget, unlimited speeds and memory sizes are theoretically possible.
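
To make the idea concrete, the sketch below divides a single long calculation -- an invented series sum -- among the processors of such a machine. It is written in C with the MPI message-passing library, chosen here only as a representative example of the style of programming involved, not as the system used on any particular machine mentioned in this article; each processor computes its own share of the terms and the partial results are combined at the end.

    /* Minimal sketch of data-parallel programming with message passing:
       each processor sums its own slice of the series 1/1 + 1/9 + 1/25 + ...
       (which converges to pi*pi/8) and the partial sums are then combined. */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        const long N = 100000000L;            /* number of terms (arbitrary)  */
        int rank, size;
        long i;
        double d, partial = 0.0, total = 0.0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* which processor am I?        */
        MPI_Comm_size(MPI_COMM_WORLD, &size); /* how many processors in all?  */

        /* Each processor takes every size-th term, starting at its rank. */
        for (i = rank; i < N; i += size) {
            d = 2.0 * (double)i + 1.0;
            partial += 1.0 / (d * d);
        }

        /* Combine the partial sums onto processor 0 and print the result. */
        MPI_Reduce(&partial, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0)
            printf("sum = %.12f (pi*pi/8 = 1.2337...)\n", total);

        MPI_Finalize();
        return 0;
    }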

Two basic designs of parallel computer are in common use. The Thinking Machines CM-200 uses thousands of simple processors under a single controller.

The Cray T3D has tens or hundreds of more powerful processors working independently. In this computer the processors are the same as those found in workstations; indeed, the cluster of workstations in most departments is identical in principle to these parallel computers.

Following a competitive bidding process, the University of Edinburgh was selected by the Science and Engineering Research Council (now EPSRC) at the start of 1994 to host a massively parallel Cray T3D system. This machine gives the British academic community an enormous increase in computational resources for grand challenge problems.

It consists of 256 DEC Alpha processors, each with 64 Mbytes of memory, connected in a three-dimensional torus and front-ended by a two-processor Cray Y-MP vector supercomputer. The peak performance is 38.4 Gflops (38.4 billion floating-point operations per second), which makes the computer one of the most powerful in the world.
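
These headline figures fit together as simple arithmetic, each processor contributing a peak of roughly 150 Mflops:

    256 processors x 150 Mflops each = 38.4 Gflops of peak speed
    256 processors x 64 Mbytes each  = 16 Gbytes (some 16 billion bytes) of memory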

Through a collaboration between EPCC, Cray Research and Scottish Enterprise National, the Cray T3D will soon be upgraded with a further 64 DEC Alpha processors. These extra resources are primarily to increase the opportunities for wealth-creating collaborations with British industry and commerce, in line with the Government's policy in last year's White Paper, Realising Our Potential: A Strategy for Science, Engineering and Technology.

Without widespread state-of-the-art networks, HPC systems could only be available to a minority of users. The Cray T3D and other HPC facilities at Edinburgh are connected to the national JANET and SuperJANET networks which enable users at remote sites to produce data on the Edinburgh HPC platforms and to visualise and post-process this at their own institution.

Use of the Cray T3D is primarily reserved for consortia of collaborating institutions focussed on specific "grand challenges" and supported by the research councils. Software support for these groups comes from EPSRC's HPC Initiative and EPCC, which also provides training.

Projects at the Edinburgh Parallel Computing Centre include:

* Fundamental physics. The search, using high-energy particle accelerators, for new physics beyond the standard model of the strong, electromagnetic and weak interactions requires reliable predictions of strong interaction matrix elements to expose the underlying processes.

Quantum chromodynamics, or QCD, is the theory of the strong nuclear force. It describes how quarks and gluons interact and should predict how they bind together to form the observed hadrons, such as the proton, neutron and pion, and ultimately how hadrons form nuclear matter. Because of its highly non-linear nature, predictions can only be extracted from QCD by numerically simulating the theory on a space-time lattice, an enormously demanding computation. In fact, it is estimated that a solution of an approximation to QCD, which neglects the creation and annihilation of virtual quark-antiquark pairs, would take a few thousand hours on a one Tflops (1,000 billion floating-point operations per second) machine, with a solution of the full theory requiring about a thousand times more.
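
To get a feel for that estimate, it can be turned into round numbers (taking "a few thousand hours" as roughly 3,000):

    3,000 hours x 3,600 seconds/hour x 10^12 flop/s  =  about 10^19 floating-point operations
    the full theory, about 1,000 times more          =  about 10^22 floating-point operations

Even the approximate calculation would thus need of the order of a decade of continuous running at the Cray T3D's 38.4 Gflops peak.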

UKQCD, a consortium of seven UK universities (Edinburgh, Cambridge, Glasgow, Liverpool, Oxford, Southampton, Swansea), has been simulating the theory since 1990 with the principal objective of computing hadronic matrix elements. Recent work on the spectrum and decays of hadrons containing charm and bottom quarks, carried out on the Meiko and Thinking Machines systems at EPCC, has already achieved international standing. This is currently the largest project on the Cray T3D.

At the other end of the length scale, the present structure of the universe is dictated by the conditions which prevailed in the first fleeting fractions of a second. This was before atomic nuclei had formed, when particle energies were far greater than those achievable in even the most powerful accelerators. Thus, the dominant physical processes operating at this time may have observable consequences on cosmological scales. Simulations of the large-scale structure of the universe in conjunction with observations of galaxy clustering and the Cosmic Background Explorer (COBE) data on the microwave background are placing constraints on the basic physics at the earliest times.

* Environmental issues have taken on a much higher profile with both the public and governments. Whether the perceived environmental changes are caused by man's impact or are the result of natural cycles, it is believed that we must understand the Earth better if we are to predict the effects of, for example, global warming and ozone depletion, and to estimate the probability of floods or droughts. Since we cannot conduct large experiments on the atmosphere and oceans, the only method of investigation is computer simulation.

UK researchers have been active in this field since the early 1970s, and the UK Universities Global Atmospheric Modelling Programme (UGAMP) and Fine Resolution Antarctic Model (FRAM) projects have already produced world-ranking science. FRAM produced the largest and most detailed models of the ocean ever constructed and is now being succeeded by the even more ambitious Ocean Circulation and Climate Advanced Modelling (OCCAM) project, which will model all the world's oceans and will complement the UGAMP research. Both UGAMP and OCCAM will make significant use of the Cray T3D at EPCC.

* Weather forecasting is a subtly different but related problem. Here, the timescale is much shorter, more detail is required, and the program must run faster than real time.

* The effects of traffic congestion have been felt by almost everyone at some time or other, and the problem is increasing. Any reduction in congestion would thus have major social and economic benefits. For example, a 20 per cent drop in delays would give a saving of £250 million per annum in Scotland -- more than the total Scottish road-building budget. Although computer simulation of new road layouts has been mandatory for some time, lack of computer power has required these models to be simplified. To address the issue of congestion requires a much more detailed model in which the vehicles are treated as separate entities with prescribed journeys and driver/vehicle types: motorist, bus, lorry and so on. This approach is known as "microscopic" modelling and is extremely demanding computationally.
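
As a toy illustration of what "microscopic" means in practice, the C sketch below moves a few hundred vehicles, each with its own position and speed, around a single circular road; the update rules and every number in it are invented purely for illustration. A production model deals with a full road network, junctions and distinct driver and vehicle types, none of which this fragment attempts.

    /* Toy microscopic traffic model: each vehicle is a separate entity
       with its own position (road cell) and speed.  One circular road,
       invented rules and parameters -- for illustration only. */
    #include <stdio.h>
    #include <stdlib.h>

    #define ROAD_LEN 1000   /* number of road cells             */
    #define N_CARS    300   /* number of vehicles               */
    #define V_MAX       5   /* maximum speed, cells per step    */
    #define STEPS     100   /* number of simulated time steps   */

    int main(void)
    {
        int pos[N_CARS], vel[N_CARS], i, t;

        /* Space the vehicles evenly around the ring, initially at rest. */
        for (i = 0; i < N_CARS; i++) {
            pos[i] = (i * ROAD_LEN) / N_CARS;
            vel[i] = 0;
        }

        for (t = 0; t < STEPS; t++) {
            for (i = 0; i < N_CARS; i++) {
                /* Distance to the vehicle in front (cars keep their order). */
                int ahead = (i + 1) % N_CARS;
                int gap = (pos[ahead] - pos[i] + ROAD_LEN) % ROAD_LEN;

                if (vel[i] < V_MAX) vel[i]++;          /* accelerate          */
                if (vel[i] >= gap)  vel[i] = gap - 1;  /* brake behind leader */
                if (vel[i] > 0 && rand() % 4 == 0)     /* random hesitation   */
                    vel[i]--;
                if (vel[i] < 0) vel[i] = 0;
            }
            for (i = 0; i < N_CARS; i++)               /* advance every car   */
                pos[i] = (pos[i] + vel[i]) % ROAD_LEN;
        }

        for (i = 0; i < 5; i++)
            printf("car %d: cell %d, speed %d\n", i, pos[i], vel[i]);
        return 0;
    }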

Together with an Edinburgh-based transportation consultancy, SIAS, EPCC has produced a traffic model, "PARAMICS", which runs on a number of HPC platforms, including the Cray T3D. This can simulate the individual movements of 200,000 vehicles over the Scottish trunk road network in faster than real time. This is a major breakthrough, as previous microscopic models were restricted to a small area and a few hundred vehicles; the extension of microscopic modelling to macroscopic scales has long been considered the "Holy Grail" of traffic simulation.

Without such detailed models running on state-of-the-art HPC systems, the consequences are clear: increased traffic delays and less effective use of the road network. In fact, these microscopic techniques are already widely used for the simulation of many physical, chemical and biological systems.

Molecular structures of biological systems are being accumulated at an increasing rate; the processing and interpretation of the data is impossible without powerful computational techniques. An understanding of these structures is vital for medical research, such as the prediction and interpretation of the effects of pharmaceuticals, and it plays a growing role in the interpretation of the ways in which genetic information determines the structure and physiological properties of organisms.

It is possible to investigate the properties of proteins and other biological macromolecules by dynamical simulations at the atomic scale, but efforts are restricted by the limits of computer memory and performance. The systems need to account for hundreds of thousands of atomic interactions, and in some cases require millions of timesteps to give accurate results. The calculation of the forces at each timestep is inherently parallel, so a move to the new generation of massively parallel HPC systems is providing the necessary performance to develop a clearer picture of the rules these complex systems obey.
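
The core of such a calculation can be sketched in a few lines of C: at every timestep the force on each atom is accumulated from its interactions with all the others, and it is this pairwise loop that is shared out among the processors of a parallel machine. The Lennard-Jones interaction and all numbers below are generic textbook choices rather than anything taken from the consortium's codes.

    /* Sketch of the force calculation at the heart of a molecular dynamics
       timestep: every pair of atoms contributes a force, here a generic
       Lennard-Jones interaction with unit parameters.  On a parallel machine
       the loop over pairs is divided among the processors. */
    #include <stdio.h>
    #include <stdlib.h>

    #define N_ATOMS 1000

    void compute_forces(double x[][3], double f[][3])
    {
        int i, j, k;

        for (i = 0; i < N_ATOMS; i++)
            for (k = 0; k < 3; k++)
                f[i][k] = 0.0;

        /* All distinct pairs: order N*N work at every timestep. */
        for (i = 0; i < N_ATOMS; i++) {
            for (j = i + 1; j < N_ATOMS; j++) {
                double dr[3], r2 = 0.0;
                for (k = 0; k < 3; k++) {
                    dr[k] = x[i][k] - x[j][k];
                    r2 += dr[k] * dr[k];
                }
                {   /* Lennard-Jones: V(r) = 4(r^-12 - r^-6); force = -dV/dr. */
                    double inv2 = 1.0 / r2;
                    double inv6 = inv2 * inv2 * inv2;
                    double fmag = 24.0 * inv2 * inv6 * (2.0 * inv6 - 1.0);
                    for (k = 0; k < 3; k++) {
                        f[i][k] += fmag * dr[k];   /* Newton's third law:   */
                        f[j][k] -= fmag * dr[k];   /* equal and opposite.   */
                    }
                }
            }
        }
    }

    int main(void)
    {
        static double x[N_ATOMS][3], f[N_ATOMS][3];
        int i, k;

        /* Scatter the atoms at random in a box (illustrative starting point). */
        for (i = 0; i < N_ATOMS; i++)
            for (k = 0; k < 3; k++)
                x[i][k] = 10.0 * rand() / (double)RAND_MAX;

        compute_forces(x, f);
        printf("force on atom 0: %g %g %g\n", f[0][0], f[0][1], f[0][2]);
        return 0;
    }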

The field of molecular biology was founded by UK researchers with the solution of the structures of haemoglobin and DNA. Now a consortium from Oxford, Manchester, and London is investigating the possibility of modelling proteins and biological membranes on the Cray T3D, and aims to study larger systems than previously possible.

With the Edinburgh Parallel Computing Centre, the UK is putting in place an organised structure for HPC focussing on grand challenge consortia and state-of-the-art hardware to meet the rapid growth of interest in computational science. The demand for the Cray T3D shows that many areas of UK science are poised to reap the benefits from the new scientific technique of parallel computing.

Professor Richard Kenway is director of the Edinburgh Parallel Computing Centre and Tait professor of mathematical physics at the University of Edinburgh. Dr Arthur Trew is head of scientific computing at the Edinburgh Parallel Computing Centre, University of Edinburgh.
