The creation of a universe, so we are told, involves a good deal of making order out of chaos. The recent history of mathematical physics is rather the reverse. Traditionally, the stuff of science was the formulation of simple mathematical models that capture the essential features of phenomena. Idealisation was paramount; symmetry helpful; solubility essential. For centuries, the mathematical sciences have been steered by the concept of solubility. An equation that could be solved was always more interesting than one that could not be.
The most powerful influence of soluble problems on our apprehension of the nature of the world is seen in education. The teaching of mathematical sciences was always framed around simple symmetrical problems bristling with idealisations. Frictionless surfaces, massless pulleys, perfect spheres, immaculate vacua: these are the stuff of which physics courses are made.
The first assault on the fortress of the intractable became possible in the 1950s when computers entered the scientists' armoury. Since then the scope and influence of computational physics have grown stupendously. Supercomputers simulate atmospheres, galaxies, and even whole universes. But, curiously, it is not supercomputers that have been most influential: that distinction falls to the humble personal computer, whose advent in the 1980s ensured that nonlinear problems were no longer solely the province of the few with access to vast computational resources. The realm of intractable problems, neglected by physicists for centuries, began to be explored.
One of the first things to be discovered about some complicated phenomena was that they are delicate. Their output is very sensitive to their input. A good name, "chaos", was needed to capture the imagination and create widespread interdisciplinary interest. At first, striking discoveries abounded and the array of chaotic possibilities started to display some unexpected order. Remarkably, whole classes of laws governing chaotic changes were found to possess common consequences irrespective of their detailed specification. This was an exciting discovery because it raised hopes of understanding chaotic systems without precise knowledge of their structure. As these studies advanced, chaos became one of the buzzwords of the age. Popular expositions abounded. With the aid of high-resolution computer graphics, more complicated examples were displayed in glorious technicolour under the title of "fractals". These entered the "wallpaper" business and were soon adorning book covers, postcards and calendars. But the most interesting examples of these fractal systems, which build up complexity by copying the same pattern over and again on different scales, were evident in the natural world. The head of a cauliflower, the branching of trees or the crenellations of a coastline: all these patterns became intelligible as examples of the erratic fractal signature so cost-effective when a large boundary is required without an accompanying increase in volume.
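To see the copy-and-shrink mechanism in numbers, here is a minimal sketch in Python (my own illustration, not an example taken from either book under review). It uses the Koch curve, a textbook fractal in which every segment is replaced by four copies at one-third scale, so each step multiplies the boundary length by 4/3 while the figure remains confined to a bounded region.

```python
# Minimal sketch: how a self-similar boundary grows without bound.
# The Koch curve replaces each segment with four copies at one-third
# scale, so every iteration multiplies the total length by 4/3 while
# the region it encloses stays bounded.

def koch_length(iterations, base_length=1.0):
    """Length of the Koch curve after the given number of copy-and-shrink steps."""
    return base_length * (4.0 / 3.0) ** iterations

for n in range(8):
    print(f"iteration {n}: length = {koch_length(n):.3f}")
```

Seven iterations already make the boundary more than seven times as long as the original segment; a real coastline plays the same trick with its crenellations.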
As these ideas penetrated deeper into popular culture, the "chaologist" started to become an essential accoutrement of the modern media. The Michael Crichton book, and subsequent Steven Spielberg film, Jurassic Park, derived its plot from the chaotic unpredictability of a small population of regenerated dinosaurs parked on an island somewhere off the coast of Central America. One of the characters was a mathematician, on board to study the unpredictability of the situation.
But just when the journalists had caught up with play, the professionals changed ends. Interest moved from disorganised complexity to organised complexity: from unpredictable chaos to intricate stability. You and I are impressive examples of systems in which a vast amount of complexity leads, not to chaos, but to order. Animals, computers, brains, telephone exchanges: structures of this sort are what they are not because of the identity of their constituents (after all, they are all made of atoms) but because of the ways in which their components are wired together. The networks that result are more than the sum of their parts and are capable of complexities beyond our present understanding. They can learn, and they can spontaneously give rise to new forms of behaviour when thresholds of complexity are attained. Typically, these systems exist, like candle flames, in a long-lived stable state far from equilibrium.
Complex organisation poses new problems for scientists. Are there new types of "law" that govern the growth of complex organisation? Does knowing about it help us harness it for beneficial purposes? Does it help us avoid making catastrophic perturbations to delicate ecological balances in the natural world? So far the subject is still in its early exploratory stages. Investigators try to invent artificial systems on computer screens, each defined by its own set of rules that dictate how it can change; often, the natural world provides instructive examples too. Gradually, it is hoped, the plethora of interesting examples will start to fall into well-defined classes.
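A one-dimensional cellular automaton is about the simplest artificial system of this kind, and the sketch below (my own, in Python, not drawn from any of the investigations mentioned here) gives the flavour of the exercise: each cell consults itself and its two neighbours, and a fixed rule table decides its state in the next generation.

```python
# Minimal sketch of a rule-defined artificial system: a one-dimensional
# cellular automaton. The rule number, row width and number of
# generations are illustrative choices.

RULE = 30          # Wolfram's rule number; any value from 0 to 255 defines a different system
WIDTH = 63
GENERATIONS = 30

def step(cells):
    """Apply the rule to every cell; the row wraps around at its edges."""
    size = len(cells)
    new = []
    for i in range(size):
        left, centre, right = cells[i - 1], cells[i], cells[(i + 1) % size]
        index = (left << 2) | (centre << 1) | right
        new.append((RULE >> index) & 1)
    return new

row = [0] * WIDTH
row[WIDTH // 2] = 1            # start from a single live cell
for _ in range(GENERATIONS):
    print("".join("#" if cell else " " for cell in row))
    row = step(row)
```

Changing the single rule number changes the character of the behaviour completely, from bland repetition to apparently random triangles, which is exactly why one would like such systems to fall into well-defined classes.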
Most recently, interest has focused on the interface between chaos and organised complexity because there, on the "edge of chaos", the ordering principles that can create long-lived structures find themselves confronted with a manifold of changing states thrown up by the chaotic sensitivity of the system. A form of selection can then operate to fashion an overall pattern of behaviour that is stable despite being maintained by small-scale instability. A classic example is the behaviour of a pile of sand as grains are slowly added to it. At first, the pile steepens as the sand accumulates, but eventually a critical slope will be reached and maintained by continual avalanches of sand. These avalanches are of different sizes. Big ones are rarer than little ones. Their effect is to maintain the inclination and shape of the surface of the sand pile. Processes like this are examples of what has become known as "self-organised criticality". They are surprisingly widespread: replace avalanches of sand by bankruptcies or stock market crashes and it applies to economic systems; replace them by extinctions of species in a natural habitat and there is a model of ecological balance; replace sand falls by earthquakes and we could be modelling a concatenation of unpredictable local geological events which attempt to maintain pressure equilibrium at the earth's surface. All may be systems on the crumbling edge of chaos. As one looks further afield in the natural world there are other complex branching processes - the structuring of trees and river tributaries - that suggest we might view them from a similar standpoint.
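The sandpile is simple enough to simulate in a few lines. The sketch below is a minimal Python rendering in the spirit of the Bak-Tang-Wiesenfeld model (the grid size, toppling threshold and number of grains are my own illustrative choices, not figures from the review): grains are dropped one at a time, any site holding four grains topples and passes one to each neighbour, and the size of each resulting avalanche is recorded.

```python
import random

SIZE = 20          # side of the square grid (illustrative choice)
THRESHOLD = 4      # grains a site can hold before it topples

grid = [[0] * SIZE for _ in range(SIZE)]

def drop_grain():
    """Add one grain at a random site and return the size of the avalanche it triggers."""
    x, y = random.randrange(SIZE), random.randrange(SIZE)
    grid[x][y] += 1
    avalanche = 0
    unstable = [(x, y)]
    while unstable:
        i, j = unstable.pop()
        if grid[i][j] < THRESHOLD:
            continue
        grid[i][j] -= THRESHOLD          # the site topples...
        avalanche += 1
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < SIZE and 0 <= nj < SIZE:   # grains leaving the grid are lost
                grid[ni][nj] += 1
                if grid[ni][nj] >= THRESHOLD:
                    unstable.append((ni, nj))
        if grid[i][j] >= THRESHOLD:      # ...and may need to topple again
            unstable.append((i, j))
    return avalanche

sizes = [drop_grain() for _ in range(20000)]
small = sum(1 for s in sizes if 1 <= s <= 10)
big = sum(1 for s in sizes if s > 10)
print(f"avalanches of 1-10 topplings: {small}, avalanches of more than 10: {big}")
```

Once the pile has built up to its critical state, small avalanches vastly outnumber big ones, reproducing the pattern described above.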
The study of organised complexity is far from completion. It is also of potential interest to the layperson in ways only amateur astronomy or natural history have so far managed to be. Amateurs often make important contributions to observational astronomy. Some amateur naturalists have unique talents that enable them to make discoveries professionals might overlook. Likewise, the growth of the personal computer business makes it possible for amateurs to carry out simulations of complex systems in their own homes.
In the light of these fast-moving and exciting interdisciplinary developments, the two outstanding books under review are of great value. Robert C. Hilborn has attempted to produce a complete textbook for students. It is eminently readable and is designed to be supplemented by input from the instructor. However, readers will find a steady supply of exercises and computational projects appearing on every other page to test their understanding of the simple examples of the text. Hilborn does not develop any particular topic a long way but the bibliographies following each chapter are extensive and carefully subdivided into subject areas. The plan of approach is attractive, beginning with specific examples to motivate the search for common factors. The second chapter introduces the concept of universality and gives a brief account of Feigenbaum's period-doubling constants with practical exploratory exercises.
After this first look at discrete systems, continuous systems are introduced in two and three dimensions. This discussion is then linked back to the earlier treatment of discrete systems through the introduction of Poincaré sections, paving the way for a fuller discussion of Lyapunov exponents and a detailed analytical treatment of the Feigenbaum discoveries.
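The logistic map is the standard vehicle for these period-doubling results, and a few lines of Python are enough to watch the doubling happen (the sketch below is my own, not one of Hilborn's exercises; the parameter values and tolerances are illustrative assumptions).

```python
# Period doubling in the logistic map x -> r*x*(1-x): iterate past the
# transient, then count how many distinct values the orbit settles onto.

def attractor_period(r, transient=5000, sample=64, tol=1e-5):
    """Rough period of the logistic-map attractor at parameter r."""
    x = 0.5
    for _ in range(transient):      # discard the transient behaviour
        x = r * x * (1.0 - x)
    orbit = []
    for _ in range(sample):         # collect points lying on the attractor
        x = r * x * (1.0 - x)
        orbit.append(x)
    distinct = []
    for value in orbit:
        if all(abs(value - seen) > tol for seen in distinct):
            distinct.append(value)
    return len(distinct)

for r in (2.9, 3.2, 3.5, 3.55):
    print(f"r = {r}: period {attractor_period(r)}")
```

The intervals of r over which each period survives shrink by a ratio that approaches Feigenbaum's universal constant, roughly 4.669, and the same constant appears for a whole class of maps, which is the universality Hilborn introduces.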
This pattern of introduction, preliminary experimental investigation and in-depth examination is followed for many other subjects and makes the book a unique combination of the readable and the mathematical. There is a vast amount of information, covering topics that do not usually find their way into textbooks. There are chapters on spatial development of patterns, diffusion-limited aggregation processes, quantum chaos, and the applications of information theory to complex systems. This book has something to say about almost every aspect of nonlinearity, but its main appeal is the freshness and enthusiasm that it brings to the task. All aspiring graduate students in physics should dip into Hilborn's book; those intending to work on nonlinear systems should read it from cover to cover.
David Peak and Michael Frame have managed to bring off a difficult trick. They have written two books in one. Their wide-ranging discussion of complexity is a mixture of fascinating description, unexpected examples, fancy graphics and equations. They cover old favourites like the Mandelbrot set and Conway's game of life with enough quantitative information to enable home-computer users to do some experimenting. But they are at their most novel when they describe the creation of artificial landscapes or the structure of fractal music.
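To give a flavour of the kind of home-computer experiment the book makes possible, here is a minimal Python sketch of the Mandelbrot set (my own, not one of Peak and Frame's exercises; the resolution and iteration limit are arbitrary choices): a point c belongs to the set if the iteration z -> z*z + c never escapes to infinity.

```python
def in_mandelbrot(c, max_iter=50):
    """True if the orbit of 0 under z -> z*z + c has not escaped after max_iter steps."""
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:          # once |z| exceeds 2 the orbit must escape
            return False
    return True

# Crude character-graphics picture of the region -2 <= Re(c) <= 1, -1 <= Im(c) <= 1.
for row in range(21):
    y = 1.0 - row * 0.1
    print("".join("#" if in_mandelbrot(complex(-2.0 + col * 0.05, y)) else "."
                  for col in range(61)))
```

Zooming in is simply a matter of narrowing the ranges of real and imaginary parts, which is exactly the sort of experimenting the book encourages.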
This is a book written at the level of the Scientific American mathematical games column and will appeal most to the active reader who likes to play mathematical games or solve puzzles (accompanying software and other course-planning materials are available on request). Its chief attractions are the fluency of the writing and its interface with recent developments in the subject and the creative world of the arts. Reading it reinforced my feeling that there are many lessons that the sciences and the arts have to teach each other about complex organised structures. Scientists have begun to study the patterns of structure that lie behind organised complexities. Those complexities may be temporal, as in sequences of musical sound that combine to produce a symphony, or they may be spatial, as in the artistic patterns that resonate with the complexities of the human nervous system. The world of the arts is full of fascinating examples of organised complexity that may help stimulate the scientific contemplation of new types of complexity; while the artist may find in the systematic study of complex nonlinear systems a newly opened gallery of aesthetic delights.
John D. Barrow is professor of astronomy, University of Sussex.
Chaos under Control: The Art and Science of Complexity
Author - David Peak and Michael Frame
ISBN - 07167 2429 4
Publisher - W. H. Freeman
Price - £19.95
Pages - 408