John Taylor argues that the human mind can be modelled by machines.

We live in extraordinarily volatile times, with constant challenges to any established authority, and clashes between people with opposing ideologies. Nowhere is this tumult in greater evidence than over the nature of our very selves, and especially our own consciousness. This war is presently taking place only among academics, but any resolution of it would have worldwide implications for all religions and ways of life.
Some of us, indeed, do believe that we are on the verge of a breakthrough in understanding the physical basis of consciousness. We come from a broad range of disciplines - philosophy, psychology, neurosciences, physics, mathematics, computer science, electrical engineering, and so on. There is disagreement, however, as to the details of this breakthrough, and that gives ammunition to those fighting a vigorous rearguard action to stem what they see as the desecration of the hallowed ground of the human psyche by reducing us to "mere machines".
However, some of the problems raised by consciousness are proving extremely difficult to solve. This, coupled with the extreme interdisciplinarity that a proper approach to the human psyche requires, appears to be leading to a stalemate between the two sides in the debate.
I believe that there is an approach to the problem of consciousness which may avoid most of the crucial difficulties raised by those who insist on the mystery of the human psyche. First of all, let me dispose of the argument that the human mind can never be modelled by a machine, first expressed by the Austrian logician Kurt Gödel in the 1930s, developed into a more precise tool for attacking machine consciousness by the Oxford philosopher John Lucas in 1960, and more or less flogged to death by the Oxford mathematician Roger Penrose in the past six years.
It is based on the proposal: "This sentence is not provable", which, it is claimed, humans "understand" to be true but which cannot be shown to be so by any machine using a consistent logical approach. The crucial word seems to be "understand", but Penrose, in particular, who considers the nature of this understanding vital to the human condition, is not prepared to give a definition of it. The word is used in the sense of the human creative act, which appears to be particularly elusive. It is this act, it is claimed, which can never be modelled by a machine.
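For readers who want the formal kernel of the argument, the Gödel sentence can be stated in one line (standard textbook notation; this rendering is mine, not Taylor's):

```latex
% The Godel sentence G for a consistent formal system F:
% G asserts its own unprovability in F (via the diagonal lemma).
G \;\leftrightarrow\; \neg\,\mathrm{Prov}_F(\ulcorner G \urcorner)
% If F is consistent, F proves neither G nor its negation,
% yet reasoning about F from outside concludes that G is true.
```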
That claim - that the creative act can never be modelled by a machine - seems difficult to substantiate. The human creative act is known to be a mixture of conscious mental processes (streams of logically connected thoughts) and unconscious ones; the crucial part of the creative process appears to be unconscious, and usually takes place only after a great deal of conscious hard work has been put in to prepare the brain by expanding its knowledge base.
The unconscious creative process seems to involve much testing of alternatives against a general criterion of what a solution should be like. This criterion is derived from a general store of imagery and experience, from one's gradually accreted perception of "what the problem is like". There does not appear to be any reason, in principle, why creativity cannot be implemented by a machine which has both conscious and unconscious levels of processing. Since the unconscious activities need not involve consistent beliefs, human (and machine) understanding would thus slip past Gödelian rigour.
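To make the generate-and-test picture concrete, here is a minimal sketch in Python (my own illustration of the idea, not a model Taylor proposes; all names are invented). Candidates are generated blindly and scored against a criterion standing in for the accreted sense of "what the problem is like"; only a good-enough candidate surfaces:

```python
import random

def creative_search(generate, criterion, threshold, max_tries=10_000):
    """Unconscious generate-and-test: propose candidate solutions and
    keep only those matching a learned sense of 'what a solution
    should be like' (the criterion)."""
    best, best_score = None, float("-inf")
    for _ in range(max_tries):
        candidate = generate()            # blind variation
        score = criterion(candidate)      # test against accreted experience
        if score > best_score:
            best, best_score = candidate, score
        if best_score >= threshold:       # good enough to surface to consciousness
            break
    return best, best_score

# Toy usage: "solutions" are numbers; the criterion rewards closeness to a target.
target = 42
solution, score = creative_search(
    generate=lambda: random.randint(0, 100),
    criterion=lambda x: -abs(x - target),
    threshold=0,
)
print(solution, score)
```

The point is purely structural: nothing in the loop requires the generator's proposals to form a consistent set of beliefs, which is how such a process could slip past Gödelian constraints.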
This brings us back to the most difficult of the problems set by the guardians of the shrine of the psyche, that of the subjective character of consciousness. The American philosopher Thomas Nagel stated it very clearly in 1974: "Every subjective phenomenon is essentially connected with a single point of view, and it seems inevitable that an objective, physical theory will abandon that point of view". How can the objective, third-person scientific point of view ever explain the capacity for subjectivity and self-awareness at the heart of the experience of each of us?
To begin to answer that problem I wish to outline a plan of campaign that the powerful tools of science will be able to mount over the next decade or so. It is based on the new non-invasive instruments, going under the acronyms of PET, MEG, EEG and fMRI, which can probe the activity of different parts of the brain of conscious subjects as they perform different mental tasks. They measure the electrical, magnetic or blood-flow signals associated with activity in particular areas of the brain. The finer details of the way the brain processes information - how different modules hand activity on to one another, and what sort of mental activity that represents - will be elucidated by these new windows on the mind. They will provide the flow charts delineating the way in which psychological variables are implemented in brain dynamics.
In spite of the great advances this will lead to, it still does not tackle the difficulty of the subjective character of consciousness. For that we must explore consciousness itself in more detail.
We are very likely not alone in the animal kingdom in having consciousness. Lower down the evolutionary scale, awareness of external objects may be only dim, perhaps something like the "raw feels" we ourselves have. Our unique extra feature is the awareness of our own selves, of our past encoded in autobiographical memories, which also contain records of our past beliefs, fears and goals.
It would seem that there are two separate consciousnesses here. One is that which involves the "raw feels", our awareness of external objects, and which is the first to be activated when inputs arouse consciousness of any sort. I suggest that the sources of the inner content of such experiences may be modelled by what I have termed the "relational mind". This model of the mind, based on the ordering of neural activity in differing parts of the brain, supposes that input activity excites those past memories whose relationship with the present input is particularly strong. The past memories may be of a general form - so-called semantic memories - or of a more specific, episodic form. It is these extra activations which give the inputs their content and meaning for us.
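Taylor gives no algorithm here, but a toy version of relational retrieval can be sketched as follows (everything in it, including the memory labels, is an invented illustration): an input vector activates stored memories in proportion to their similarity, and the strongest matches supply the input's "content":

```python
# Illustrative sketch of relational-mind retrieval: an input pattern
# activates stored memories in proportion to their similarity, and
# the most strongly related memories supply the input's "content".
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def relational_content(input_vec, memories, top_k=2):
    """Return the stored memories (semantic or episodic) whose
    relationship to the present input is strongest."""
    scored = [(cosine(input_vec, vec), label) for label, vec in memories.items()]
    scored.sort(reverse=True)
    return scored[:top_k]

memories = {
    "grandmother's kitchen (episodic)": [0.9, 0.1, 0.7],
    "bread (semantic)":                 [0.8, 0.2, 0.6],
    "traffic noise (episodic)":         [0.1, 0.9, 0.2],
}
print(relational_content([0.85, 0.15, 0.65], memories))
```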
The form of consciousness described above could be active when one is completely immersed in a task or pursuit, the state of which one says "I lost myself in my work" or something similar. There is little sense of self, or of enjoyment of the experience. The other extreme occurs when the sense of self is at its height, as in moments of great tranquillity when one relaxes after a hard day or job of work. It would seem that, at any one time, the greater the awareness of self, the less the awareness of the external world.
To explain the self in neural activity terms requires an extension of the relational mind approach, in which input excites related stored memories, to include a neural module which monitors and compares "self" activities with those arriving from the outside. The "self" activities are excited from autobiographical memories, goals and beliefs, together with predictions of the new inputs that ongoing actions will give rise to; the activities from outside consist of awareness of external objects, coded as the "raw feels" of the relational mind consciousness system at the non-self level. It is this "I" versus "them" monitoring process, I would claim, which leads to the more subjective features of consciousness.
It is even possible to comprehend in this manner the ultimate elusiveness of the self, a feature noted by the philosopher Hume: "I never can catch myself at any time without a perception, and never can observe anything but the perception"; this was later expanded upon by William James. The disappearance of the observed self can be understood as follows: the activity of the neural modules comparing "I" with "them" dies away when the external activities themselves are inhibited and the comparison net tries only to catch its own comparator activity. There is then no "external" activity admitted from outside the monitoring net to sustain the monitoring process itself. It is as if the referee in a football match tried only to keep account of his own actions, and as a result let the match disintegrate completely.
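Both the comparison process and Hume's vanishing self can be caricatured in a few lines (again my own sketch with invented names, not the actual model): the monitor yields a "self" signal only while external activity is admitted for its predictions to be compared against:

```python
def self_monitor(predicted, external):
    """Compare 'self'-generated predictions (from autobiographical
    memory, goals and beliefs) with activity arriving from outside.
    The mismatch signal is the raw material of self-awareness."""
    if external is None:
        # The referee watching only himself: with no external activity
        # admitted, there is nothing to compare against, and the
        # monitoring process has no content - the observed self goes missing.
        return None
    return [p - e for p, e in zip(predicted, external)]

print(self_monitor([0.5, 0.2], [0.4, 0.3]))  # mismatch -> subjective "I" vs "them"
print(self_monitor([0.5, 0.2], None))        # introspection alone -> no self caught
```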
The extended "self-monitoring relational mind" approach can be developed to have a neural basis, with modules functioning in specific ways and localised in different regions (both cortical and sub-cortical). The model has support from various areas of experiment, and from the deficits of consciousness in people suffering brain injury from various causes. It also gives a general overarching framework within which to begin to put together the parts of the brain whose actions have up to now been understood in a rather fragmented manner. Indeed brain science has been rather like trying to put together a giant jigsaw puzzle without knowing the full picture to be assembled. Only when consciousness is added can the full picture be seen.
But does the approach I have advocated so far properly face up to the difficulty raised at the beginning? What about the claimed impossibility of ever explaining the subjective character of consciousness? I suggest that the approach I have outlined, and related ones being developed by other scientists, can give an answer to Nagel's question "What is it like to be?". The manner in which our awareness of external objects arises (what I call "raw feels"), and the additional subjectivity in our experience of them as the "I" is allowed to dwell thereon, is described by the dynamics of the neural activities as they subtly pass their activities this way and that, teasing an experience here, a feeling there, as they wax and wane across the vast areas of the cortex. The fires of conscious sensation are fanned or damped down in a manner explicable in the neural terms of the self-monitoring relational mind model.
The inner experience itself, as an experience, is still the "property" of the experiencer. I claim, however, that all of the possible descriptions and modes of action that experience consists of can ultimately be captured by a physical model such as the self-monitoring relational mind. Any content whatever of that inner experience is, in principle, describable by that model.
There is no denigration here of the subtlety of the human mind, and it is undoubtedly an awesome task ahead to develop and modify the model with the help of experiment, simulation and theory. That is being attempted, for example, by the project PSYCHE, still under development among various teams across Europe (the UK, Germany, France, Holland and Switzerland). In this project, the model of global brain activity outlined above will be simulated on a large parallel computer and related to, among other things, the brain activities measured in aware human subjects using the new windows on the mind mentioned earlier.
I wish to emphasise that the inner experience itself is not being modelled explicitly; but then neither does a model of the electron attempt to be the electron, only to describe it. That is all that science tries to achieve: to model the way the world works, not to be it. However, the development of the model resulting from PSYCHE and exploration of its responses (especially if it were constructed in hardware) may give us a better understanding of, and stronger criteria for, human and animal consciousness. It is that which makes such simulations different in principle from the model of the electron. PSYCHE may even end up having self-consciousness. Would it then have legal rights? That may well be a serious question for the next century to answer.
John Taylor is professor of mathematics and director of the centre for neural networks at King's College, London.