Can machines ever be conscious?

April 5, 1996

Igor Aleksander believes they can; Jaron Lanier disagrees

It is collective self-flattery for the computer science community to argue that computers can be conscious. I will argue that they cannot. Arguments about machine intelligence hinge on questions of epistemology, our ways of knowing what we know. The most basic argument of this kind is the Turing test. Alan Turing proposed that if a computer were programmed in such a way that it could fool a human observer into believing it was conscious, then it would be sentimental foolishness to suggest that it was not: like claiming the earth was at the centre of the universe, a desperate attempt to hold onto our uniqueness.

I claim that there are different ways of knowing things. Consciousness is the one thing we all share that we cannot share objectively. We experience it subjectively, but that does not mean it does not exist.

How could we decide whether machines might also experience consciousness? In Turing's set-up, it is impossible to tell whether the computer has become more human-like or the human has become more computer-like; all we can measure is their similarity. This ambiguity makes artificial intelligence an idea that is not only groundless but damaging. If you observe humans using computer programs designated as "smart", you will see them make themselves stupid in order to make the programs work.

What starts as an epistemological argument quickly turns into a practical design argument. In the Turing test, we cannot tell whether people are making themselves stupid in order to make computers seem smart. Therefore the idea of machine intelligence makes it harder to design good machines. When users treat a computer program as a dumb tool, they are more likely to criticise a program that is not easy to use. When users grant autonomy to a program, they are more likely to defer to it, and blame themselves. This interrupts the feedback that leads to improvements in design. The only measurable difference between a smart program and a dumb tool is in the psychology of the human user.

This argument suggests that it is better for us to believe that computers cannot be conscious. But what if they actually are? This is a different kind of question, a question of ontology. I argue that computers are not conscious because they cannot recognise each other. If we sent a computer in a spaceship to an alien planet and asked for a definitive analysis of whether there were computers present, the computer would not be able to answer. Theoretical limits on one program's ability to fully analyse another, of the kind established by Turing's halting problem, make this so. People can recognise and use computers, so people are not in the same ontological category as computers.
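
The "theoretical limits" invoked here are the classic undecidability results of computability theory. As an illustrative aside not in the original text, the following minimal Python sketch restates Turing's diagonal argument; the function halts is a hypothetical decider that cannot actually be implemented, and the names are chosen purely for illustration.

# A minimal sketch of Turing's diagonal argument. The decider
# `halts` is hypothetical: no total, always-correct version of it
# can be written, which is the point of the argument.

def halts(program, argument):
    # Hypothetical: would return True iff program(argument) halts.
    # Stubbed, because no general implementation is possible.
    raise NotImplementedError("uncomputable in general")

def diagonal(program):
    # Do the opposite of whatever `halts` predicts about a program
    # run on its own source.
    if halts(program, program):
        while True:  # predicted to halt, so loop forever
            pass
    # predicted to loop, so halt immediately

# Feeding diagonal to itself yields the contradiction: if
# halts(diagonal, diagonal) returned True, diagonal(diagonal)
# would loop forever; if it returned False, it would halt.
# Either answer is wrong, so no such `halts` can exist.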

This is just another way of saying that without consciousness, the world as we know it through our science need not be made of gross objects at all, only fundamental particles. For instance, one has to be able to distinguish cars from air in order to measure "traffic". Our most accurately confirmed scientific hypotheses, those of fundamental physics, do not, however, acknowledge cars or other gross objects.

It is easy to claim that the state of a person's brain is what notices cars or computers, but that avoids the question of how the brain comes to matter as a unit in the first place. If consciousness is associated with a brain, why is it not also associated with a momentary correlation between a brain and the arrangement of noodles on a plate of pasta being eaten by the owner of the brain? Even brains exist only by virtue of conscious acknowledgment. The alternative idea would be that the right kind of complex process gives rise to consciousness. In that case there would be huge swarms of slightly different consciousnesses around each person, corresponding to every combination of their brain, or sections of it, with other objects in the universe.

A world without consciousness would be a world of elementary particles at play. They would be the same particles in the same positions and relationships as in our world, but no one would notice them as members of objects like brains. I am not claiming there is something outside my brain that contains the content of my experience. I can accept that the content of my subjective experience might be held in my neurons, and still claim that it is experience itself that makes those neurons exist as effective units.

The first argument presented above, about the Turing test, turns out to have practical relevance because it influences our ability to design better user interfaces. And I think the second, ontological argument does too: computers have come to play such a central role in our culture that our way of thinking about them affects our ways of thinking about each other. The tendency to think of computation as the most fundamental metaphor for experience and action leads inevitably to sloppy computer metaphors in politics, economics, psychology, and many other areas. I hope that if we acknowledge just how strange and wonderful it is that we are conscious, that wonder will translate into less bland and nerdy metaphors to guide us in those areas.

Jaron Lanier is visiting scholar in the computer science department, Columbia University, and chief scientist at Talisman Dynamics.
