Clever measures

January 27, 1995

Hans Eysenck takes a look at new attempts to provide a more solid scientific basis for intelligence testing. IQ testing has been extremely successful on the practical level -- predicting academic success from early childhood to university degree, selecting officers in the armed forces, and correlating well with almost any activity that involves cognition of one kind or another. But as I pointed out more than 40 years ago, it is a technology without a sufficient scientific basis.

Indeed, the technology was so successful that most psychologists failed to notice the absence of a firm basis for their endeavours. The main reason for this state of affairs was the victory of Alfred Binet over Sir Francis Galton in a battle of two different paradigms, a victory that is now being disputed on the basis of a decade of experimental investigations seeking a biological cause of the observed behavioural differences in IQ.

Galton may be considered the father of modern studies of individual differences. He firmly believed in the concept of a general factor of intelligence; he firmly believed in a large genetic contribution to the production of individual differences in intelligence -- indeed he was the first to suggest studies of twins to prove his point; and he believed in biological studies to investigate speed as the underlying factor in intelligence.

Binet, on the other hand, was interested in the possibilities of improving intelligence; he was not interested in genetic factors; and he thought that tests of actual cognitive behaviour would serve best for the measurement of intelligence. In fact, he did not really believe in the concept of intelligence; he thought of intelligence as the more or less arbitrary averaging of many different abilities and characteristics which should be separately measured.

How does the tally stand at the moment? There is wide agreement among behavioural geneticists that for adults in developed countries the heritability of the IQ is around 80 per cent, and only about 50 per cent for children. So Galton was right, but Binet was not wrong -- there is room for environment to produce improvement. Galton was right in his emphasis on general intelligence, but Binet was right in suggesting the importance of special abilities -- verbal, numerical, visuo-spatial, memory and so on -- in addition to general intelligence. These issues may be regarded as settled, but interest has now shifted to investigations of Galton's suggestions for a causal analysis of intellectual functioning in terms of speed of cortical processing of information, and to attempts to measure the underlying physiological processes. Success in this endeavour would give us a proper scientific underpinning for the concept of intelligence, and perhaps a measurement scale with a true zero and equal intervals, both lacking in the IQ scale.

In Galton's time there was of course no possibility of measuring events inside the head, such as is afforded us by the electro-encephalograph (EEG), the PET scanner (positron emission tomography), magnetic resonance imaging, and many other modern wonders. Galton suggested, inter alia, using measures of reaction time (RT) to investigate the speed of mental processing, a suggestion so alien to the Zeitgeist that it was practically disregarded until recently. If one thinks of intelligence in Binet's terms, as consisting in complex cognitive problem-solving activities, then the notion of responding to a given signal within less than 200 milliseconds by moving one's finger appears ludicrous. Few were willing to invest any time and energy in such investigations. A few very badly done experiments seemed to suggest a zero correlation between reaction time and intelligence. This was accepted as evidence that Galton was wrong.

Much-improved modern tests have shown that reaction time does correlate substantially with IQ, particularly when slightly more complex designs are used (around .60). Thus you might light up three lights out of a semicircle of eight, two close together and one rather further apart, and ask the subject to lift his finger and push a button next to the odd man out. Another variant of the "speed of transmission" hypothesis measures inspection time (IT). Here you are shown, say, two lines clearly different in length, and you are asked to indicate which is longer. Even a mentally defective child can of course do this, but the lines are presented only for a very short period of time -- down to 20 milliseconds. The score is the shortest presentation resulting in a correct answer. Inspection time also correlates quite well with IQ (around .50); combining RT and IT gives correlations with IQ of around .70, which is not far from the sort of correlation you get between one IQ test and another. So there is something to be said for Galton's theory.
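To make the inspection-time score concrete, here is a minimal Python sketch of the scoring rule just described -- the shortest exposure at which the subject still answers correctly. The exposure durations, the 90 per cent accuracy criterion and the simulated subject are hypothetical illustrations, not details taken from the studies themselves.

    import random

    def simulate_trial(duration_ms, threshold_ms):
        """Toy subject: reliably correct above a personal threshold,
        at chance (50 per cent) below it."""
        if duration_ms >= threshold_ms:
            return True
        return random.random() < 0.5

    def inspection_time(threshold_ms, durations=(200, 150, 100, 80, 60, 40, 30, 20),
                        trials_per_level=20):
        """Step from long to short exposures; return the shortest duration
        (ms) at which the subject is still almost always correct."""
        shortest_correct = durations[0]
        for d in durations:
            correct = sum(simulate_trial(d, threshold_ms) for _ in range(trials_per_level))
            if correct / trials_per_level >= 0.9:   # assumed 90 per cent criterion
                shortest_correct = d
            else:
                break
        return shortest_correct

    print(inspection_time(threshold_ms=55))   # prints 60 for this toy subject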

A much more direct proof, however, comes from the EEG, particularly the so-called Averaged Evoked Potential. Here you have your subject strapped into the ordinary resting EEG mode, when you suddenly present a visual or an auditory stimulus. This results in a series of waves, dying down after about 500 milliseconds. The graph overleaf illustrates such a series, recorded for three children of high, average and low intelligence. The differences between them are very clear. The first difference, of course, is latency; the waves come much more quickly for the bright than for the dull child. Latency usually shows correlations of around .40 with IQ, and of course that is what Galton would have expected on his speed hypothesis. Note that the children in the figure have not been specially selected to prove a point; they are typical of children of different IQ levels.
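Before turning to complexity, it may help to see what "latency" means operationally. The following Python sketch extracts one simple latency measure -- the time of the first major peak -- from a recorded wave. The synthetic waveforms and the amplitude threshold are my assumptions for demonstration; real evoked-potential scoring is considerably more elaborate.

    import numpy as np

    def first_peak_latency_ms(wave, sample_rate_hz=1000.0, min_height=0.5):
        """Time (ms) of the first local maximum exceeding min_height."""
        for i in range(1, len(wave) - 1):
            if wave[i] > min_height and wave[i - 1] < wave[i] >= wave[i + 1]:
                return 1000.0 * i / sample_rate_hz
        return float("nan")

    t = np.arange(0.0, 0.5, 0.001)                            # 500 ms epoch at 1 kHz
    bright = np.exp(-t / 0.15) * np.sin(2 * np.pi * 8 * t)    # faster wave
    dull = np.exp(-t / 0.15) * np.sin(2 * np.pi * 5 * t)      # slower wave
    print(first_peak_latency_ms(bright))   # approx. 29 ms
    print(first_peak_latency_ms(dull))     # approx. 43 ms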

Even more obvious is the greater complexity of the waves produced by the bright child. This can be measured in different ways, but always shows considerable correlation of complexity with IQ. Why should this be so? Evoked potentials have a poor signal-to-noise ratio, so the published curves are the average of something like 100 evocations. Now if each single wave represents cortical events, then complexity can only be preserved if the 100 waves that are being averaged are closely similar; if the peak on one falls on the trough of another, they will cancel out and you will be left only with the major sinusoidal curves. And why should one curve differ from another? Theory suggests that this may be due to errors occurring during transmission of information, probably at the synapse where the axon of one neuron hands over to the dendrite of another. So it has been suggested that low IQ is produced by lack of integrity in the central nervous system, allowing many errors of transmission to occur.
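The cancellation argument is easy to demonstrate numerically. This Python sketch (my illustration, not the article's analysis) averages 100 synthetic waves: when trial-to-trial jitter is small the fast detail survives averaging; when it is large, peaks fall on troughs and only the slow component remains. The waveform, the jitter values and the path-length ("string length") measure of complexity are all illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 0.5, 500)                 # 500 ms epoch

    def single_wave(jitter_s):
        """One trial: a slow component plus fast detail, shifted in time
        by a random jitter standing in for transmission errors."""
        shift = rng.normal(0.0, jitter_s)
        return (np.sin(2 * np.pi * 4 * (t - shift))
                + 0.5 * np.sin(2 * np.pi * 40 * (t - shift)))

    def averaged_complexity(jitter_s, n_trials=100):
        """Average n_trials waves; total path length ("string length")
        serves as a crude complexity score."""
        avg = np.mean([single_wave(jitter_s) for _ in range(n_trials)], axis=0)
        return float(np.sum(np.abs(np.diff(avg))))

    print(averaged_complexity(0.001))   # small jitter: fast detail survives
    print(averaged_complexity(0.020))   # large jitter: only the slow wave is left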

We seem to have two theories here -- one making speed the major factor, the other error of transmission. But the one can be reduced to the other. Many neurons are involved in the transmission of even the simplest information, and if error occurs in some, there will be disagreement and hence a slowing-down of transmission until agreement is obtained. Hence speed is secondary to functional integrity; if your cortical system makes errors, you are slowed down in your reactions. There is much evidence for such a theory, although of course it needs much refining. But quite high correlations have been found between complexity and IQ, some upwards of .80. Recent research even identifies those individuals in whom other biological factors intervene to disturb this close agreement. Gradually we are getting closer to an understanding of what is going on.
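The reduction of speed to error can likewise be sketched in a few lines. In this hypothetical Python model, several redundant channels carry the same message and the system retransmits until a clear majority agree; raising the per-transmission error probability slows the mean response and, notably, also makes it more variable. The channel count, the two-thirds criterion and the timings are all invented for illustration.

    import random

    def transmission_time_ms(p_error, n_channels=7, step_ms=10.0):
        """Retransmit until at least two-thirds of the channels agree;
        each round costs step_ms milliseconds."""
        rounds = 0
        while True:
            rounds += 1
            correct = sum(random.random() > p_error for _ in range(n_channels))
            if correct >= 2 * n_channels / 3:
                return rounds * step_ms

    def mean_and_sd(p_error, trials=2000):
        times = [transmission_time_ms(p_error) for _ in range(trials)]
        m = sum(times) / len(times)
        sd = (sum((x - m) ** 2 for x in times) / len(times)) ** 0.5
        return m, sd

    print(mean_and_sd(0.05))   # low error rate: fast and consistent
    print(mean_and_sd(0.40))   # high error rate: slower and more variable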

There have been other approaches. One example comes from work on the "neural adaptability" model, which is based on the fact that repeated presentation of the same stimulus results in evoked potentials of smaller and smaller amplitude. It was suggested that this slope should be steeper for bright children, and again quite high correlations in that direction have been found. It can be argued that recognition of "identity" depends on the integrity of the nervous system, errors interfering with such recognition. Hence this paradigm may be reducible to the "error" paradigm. Indeed, the "error" paradigm may explain a very odd finding in RT research, namely that the highest correlation with IQ is not for speed of reaction but for lack of variability; if your reaction times are very variable, you are likely to have a low IQ. This is exactly what the "error" theory would have predicted.
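One plausible way to quantify the "neural adaptability" slope is a straight-line fit of evoked-potential amplitude against presentation number, as in this brief Python sketch; the amplitude series are hypothetical, chosen only to show a steep and a shallow decline.

    import numpy as np

    def habituation_slope(amplitudes):
        """Least-squares slope of amplitude against presentation number."""
        trials = np.arange(1, len(amplitudes) + 1)
        slope, _intercept = np.polyfit(trials, amplitudes, 1)
        return float(slope)

    # Hypothetical amplitude series (microvolts) over repeated presentations
    steep = np.array([10.0, 8.2, 6.9, 5.8, 5.0, 4.4])    # rapid habituation
    shallow = np.array([10.0, 9.6, 9.3, 9.1, 8.9, 8.8])  # slow habituation
    print(habituation_slope(steep))     # approx. -1.1
    print(habituation_slope(shallow))   # approx. -0.24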

Quite different from these EEG studies are those using positron emission tomography, a technique that enables us to measure the amount of glucose taken up by the brain while the subject is doing a certain task, such as solving IQ problems. (We can also see which parts of the brain take up the glucose, which of course is the essential food required by the brain to function.) We would expect brain integrity to correlate with a need for less energy, just as a well-adjusted car needs less oil and petrol than a poorly adjusted one, and indeed high-IQ subjects require less glucose than low-IQ subjects. Unfortunately this technique is very expensive, so rather few subjects have been tested, but the results are quite significant statistically.

Galton expected the mere size of the brain to correlate with intelligence, a view that again has been much laughed at. But external measures of the head have always given rather modest but positive correlations with intelligence (around .10 to .20), and fortunately we are now able to measure the size of the intact brain through magnetic resonance imaging techniques. Much higher correlations, in the neighbourhood of .40 to .50, have been found, and that is beginning to be quite respectable. Obviously size is not everything, and the search is concentrating on microscopic features of the brain that might correlate even more highly with IQ. But again Galton has been shown to be right.

Binet thought that intelligence could be improved by educational and social means, but such efforts have not been very successful. Large-scale interventions such as Head Start in the United States have not produced any lasting improvements in IQ, although they have led to many other desirable consequences. The only method that has given large and replicable results has been vitamin supplementation, which produces increases in IQ of ten points or more in children whose blood analysis indicates vitamin deficiency. Such deficiency has been observed in about 30 per cent of ordinary, apparently well-fed American and British children; presumably it would be much more common in deprived inner-city children, or in third-world countries. Here is a rich source of scientific research and social intervention, and interestingly enough it is again the biological aspects of intelligence that are being addressed.

What is new in intelligence research is essentially a return to Galton, and an attempt to produce a scientific theory to explain the successes of Binet-type intelligence testing. Of course only a beginning has been made in this endeavour, and the next dozen years will no doubt produce many surprises, and many changes in our understanding. Man is a biological animal, and we must look at both sides of his nature. During the past 70 years, psychologists have spurned biological ways of thinking and embraced a thorough-going environmentalism. At long last we are beginning to return to a more factual approach, free of behaviouristic dogmas that completely discount the obvious biological foundations of our behaviour.

Hans Eysenck is professor emeritus of psychology, University of London.
