The brain maker

October 30, 1998

Geoffrey Hinton believes that machines have the ability to think. And, he argues, it is this artificial intelligence that will help us to understand the workings of the human brain. Helen Hague reports

"Most artificial neural networks have less intelligence than the bit of a slug that smells things," says Geoffrey Hinton, striving to explain just how sophisticated the brain's computing powers are and how difficult it is to replicate them. After 17 years working in North America on brain research, he is back in Britain, heading the Gatsby Foundation Computational Neuroscience Unit at University College London. And he is intent on pushing back the frontiers of understanding in a most complex and compelling area of research - how the brain works.

The neuroscience unit, launched this week, is at the cutting edge. It aims to exploit the convergence of ideas in three areas: artificial intelligence, neural networks and statistics. A team of eminent scientists from around the world will model the brain's functions with computer networks that can "learn" in a quest to shed fresh light on how the brain operates.

To see the brain as a giant computer is, says Hinton, in one sense just right and in another very wrong. Computers may clean up when it comes to playing chess or doing arithmetic. But compared with us, artificial intelligence is "hopelessly primitive" at general purpose applications, such as making sense of a new situation. And in vision and motor control, two key research areas occupying Hinton's team, we are far better.

Think of the ease with which we control our arms, he says, or how effortlessly the brain recognises shapes and segments complicated scenes, or the brain's amazing capacity to learn. Understanding how to solve these basic problems in robotics and computer vision is, Hinton argues, far more important than coming up with short-term research applications. And it will yield far greater benefits in the long run.

Now 50, an inspirational talker, peppering theories of how the brain computes with examples and analogies, Hinton gained his first degree, in experimental psychology, at Cambridge, followed by a PhD in artificial intelligence from Edinburgh. His last job in the UK was at the Medical Research Council's applied psychology unit at Cambridge, which he left in 1982. It was not simply a case of scant funding that prompted him to join the transatlantic brain drain. He left because the possibility of just doing research on how the brain computed, an area in which he has since built a formidable reputation, seemed practically non-existent in the UK. "You had to pretend to be doing something else in order to do research on that," he says. He claims that since he left Britain, the situation in British universities has worsened. "They are in such bad shape compared with universities in North America," he says. "I don't understand why British academics stay here."

It took a lot to coax him back. First, money. David Sainsbury's Gatsby Foundation has pledged £10 million to fund cutting-edge research at the unit over the next decade. Next, freedom. Crucially, the team will be free of the kind of pressures to come up with short-term applied pay-offs that bedevil many scientific research projects. Hinton is a passionate advocate of the need for basic research. "Practical applications are always based on what was original research in the past," he says. "The ultimate creator of wealth is better basic research and that message is being lost." For him, private sector money, which often brings more pressure to come up with short-term applications, can be worse than standard university funding, although he does think today's government might have more insight into these problems.

Now that he is back, the big challenge he worked on at the University of Toronto as professor in the computer science and psychology departments remains - to understand how to make artificial neural networks more like the real thing. His theoretical research on vision, carried out at Toronto and at Carnegie-Mellon University in Pittsburgh, is already challenging traditional assumptions. Traditional models concentrate on the way networks of nerve cells, called neurons, translate images formed on the retina at the back of the eye into representations in the brain, via forward connections. But Hinton says there is more of a two-way dialogue. The brain also has a generative model, which shows how the objects seen give rise to images. There are therefore both forward and backward connections along the neural networks, each training the other. He suggests that backward connections even help generate dreams, training the brain to recognise images.
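The two-way dialogue can be illustrated with a toy program. The sketch below is a deliberately minimal, linear caricature, not Hinton's actual models, which are far more elaborate: forward "recognition" weights map an image to a hidden representation, backward "generative" weights map that representation back to an image, and each set of connections is updated using the other's output.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "images": 8-pixel vectors generated from 2 hidden causes.
true_mix = rng.normal(size=(2, 8))
causes = rng.normal(size=(500, 2))
images = causes @ true_mix + 0.05 * rng.normal(size=(500, 8))

# Forward (recognition) weights: image -> hidden representation.
R = rng.normal(scale=0.1, size=(8, 2))
# Backward (generative) weights: hidden representation -> image.
G = rng.normal(scale=0.1, size=(2, 8))

lr = 0.01
for _ in range(2000):
    h = images @ R      # forward pass: recognise hidden causes
    recon = h @ G       # backward pass: generate the image from the causes
    err = recon - images
    # Each pathway is trained using the other's output:
    G -= lr * (h.T @ err) / len(images)
    R -= lr * (images.T @ (err @ G.T)) / len(images)

final_error = np.mean((images @ R @ G - images) ** 2)
print(final_error)  # reconstruction error shrinks as the pathways co-train
```

After training, the generative pathway can reproduce an image from the representation the recognition pathway inferred, which is the essence of the forward-and-backward arrangement described above.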

For Hinton, the new Holy Grail in this field is creating "an unsupervised learning algorithm that scales". This means taking a neural network, showing it input data, and making it construct from this data models of how the world works. Learning algorithms already exist, but at the moment work too slowly to learn new things. It would take them millions of years to learn what a child learns in its first year of life. But, as Hinton says, the brain is proof that it is possible to work out from data what is happening. His team is working towards finding a learning algorithm that is able to scale up and work with big networks as well as small ones, taking visual images and sorting out what is happening in them on a larger scale. While brain scans can yield astonishing pictures of the brain as it functions, theories and computational models are needed to understand how it works and attempt to replicate it. Most neuroscientists do not have advanced mathematical skills, which is why the nexus of excellence clustered at the unit in Queen Square will come in useful.
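For flavour, one of the simplest existing unsupervised learning algorithms is k-means clustering: shown unlabelled data, it constructs a crude model of the world, a set of cluster centres, with no teacher. This is only an everyday illustration of unsupervised learning in general, not the scalable algorithm Hinton is seeking.

```python
import numpy as np

rng = np.random.default_rng(1)

# Unlabelled data drawn from three hidden clusters the algorithm never sees.
centres_true = np.array([[0.0, 0.0], [5.0, 5.0], [0.0, 6.0]])
data = np.vstack([c + rng.normal(size=(100, 2)) for c in centres_true])

# k-means: repeatedly assign points to the nearest centre, then move each
# centre to the mean of its points. Structure emerges from the data alone.
centres = data[rng.choice(len(data), size=3, replace=False)]
for _ in range(20):
    dists = np.linalg.norm(data[:, None, :] - centres[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    centres = np.array([
        data[labels == k].mean(axis=0) if np.any(labels == k) else centres[k]
        for k in range(3)
    ])

print(centres)  # the learned "model of the world": three cluster centres
```

The gap Hinton identifies is scale: toy algorithms like this work on a few hundred points in two dimensions, but nothing yet learns from raw visual data with anything like the speed of a child.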

One question being considered is whether computers are capable of thinking autonomously. There are, says Hinton, a few cases of mathematical theorems that mathematicians could not prove, but computers could. "If you do not call that thought, what do you mean by thought?" he asks. For anybody who has ever dipped into science fiction, this is beginning to sound scary.

But Hinton does not appear scared of what could be ahead. "At present, I think that what we imagine a very intelligent artificial being would do is clearly much more a projection of our own feelings and worries about human society than a reflection of reality. And I think it is highly likely there will be such beings, but not for a long time - hundreds of years if society keeps going." He says there is unlikely to be a clear transition between very intelligent artificial beings and computers programmed to do as humans command.

"Imagine a learning device that basically decided which email you should get and which email is junk," he says. "It looked at what you were happy with and decided what you should or should not get. You did not know how it worked but it was very good. Would you immediately get rid of it?" In some ways, he says, it could be a bit frightening because it could conspire with other devices not to tell you about things. But he does not see that it would be any more scary than the influence of a powerful media baron. He thinks this kind of device will be around in the next few decades. "Somehow we have this view that nothing will ever be comparable with human intelligence," he says, "that there is a special essence to it - maybe it is consciousness, maybe it is having a soul - but that computers will never be like that." But one of the boundaries between humans and other intelligent devices used to be that people could learn things, while intelligent devices just did what they were programmed to do. This, he says, is no longer true. The unit's computers, for example, can already learn. "You tell them how to learn," Hinton says. "But the idea that they just do what you programme them to do is nonsense."
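A device of the kind Hinton imagines can be roughly approximated with a naive Bayes classifier, a standard statistical technique that learns from the emails a user kept and those marked as junk. The messages below are invented toy data, and the scoring is generic naive Bayes rather than any particular product, but it shows how a filter can "look at what you were happy with" and generalise.

```python
import math
from collections import Counter

# Hypothetical training data: emails the user kept vs. marked as junk.
kept = ["meeting agenda attached", "lunch on friday?", "draft of the paper"]
junk = ["win a free prize now", "free money claim now", "prize winner claim"]

def word_counts(msgs):
    c = Counter()
    for m in msgs:
        c.update(m.split())
    return c

kept_counts, junk_counts = word_counts(kept), word_counts(junk)
kept_total = sum(kept_counts.values())
junk_total = sum(junk_counts.values())
vocab = set(kept_counts) | set(junk_counts)

def junk_score(msg):
    """Naive Bayes log-odds that msg is junk (Laplace-smoothed)."""
    score = math.log(len(junk) / len(kept))  # prior odds
    for w in msg.split():
        p_junk = (junk_counts[w] + 1) / (junk_total + len(vocab))
        p_kept = (kept_counts[w] + 1) / (kept_total + len(vocab))
        score += math.log(p_junk / p_kept)
    return score

print(junk_score("claim your free prize") > 0)   # -> True: flagged as junk
print(junk_score("agenda for the meeting") > 0)  # -> False: let through
```

The user never tells the filter which words matter; it works that out from examples, which is exactly the blurring of "learning" and "doing what you are programmed to do" that Hinton describes.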

Hinton says the frontiers of research into how the brain works are being pushed back all the time and the long-term pay-offs are awesome. One day, if scientists ever manage to create superior general purpose intelligences, we may even have to consider giving them rights. "In North America in the last century it was very bad form to give guns to Indians," he says. This seems inconceivable now. In future, giving rights to other intelligences may become just as natural.
