IQ: are genetic tests for intelligence scientific?

Correlations between differences in individuals’ DNA and IQ scores could lead to genetic testing of intelligence and even suitability for university. But IQ tests are conceptually muddled measures of learned knowledge, self-confidence and social background, says Kenneth Richardson

November 29, 2018

The task of admissions tutors is an unenviable one. On the one hand, they are derided for relying on applicants’ performance in school examinations, lending a huge advantage to those whose parents can afford to send them to selective schools and to top up that advantage with endless supplies of private tuition. On the other hand, admissions tutors are castigated as social engineers when they attempt to level the playing field a little by lowering entry standards for those from less privileged backgrounds.

The lure of a test that could evade this socio-political minefield and identify raw intelligence is clear. IQ testing is the obvious solution. But while the concept has been around for a century, it has never convincingly delivered.

Recently, some psychologists have begun claiming to have found differences in DNA that correlate with IQ test scores and educational attainments. Therefore, they say, it will soon be possible to predict an individual’s adult intelligence from a mouth swab or drop of blood taken at birth. The chief advocate of the idea is Robert Plomin, the well-known professor of psychology at King’s College London. In a paper published in Nature Reviews Genetics earlier this year, called “The new genetics of intelligence”, he argues that parents will eventually use DNA tests to predict their children’s mental abilities and plan their education accordingly.

But the idea that universities could augment admissions tutors with jobbing pharmacists is based on many unlikely, dangerously misleading assumptions.

One problem is that psychologists have no agreed theory of intelligence. That’s why IQ tests have always been only pretend tests of intelligence. There is wide disagreement about what they really measure. They are not like blood tests, liver function tests or breathalyser tests, however much IQ testers want people to believe that.

The idea that IQ tests could replace school attainment in university admissions stems from testers’ argument that IQ scores must be valid tests of intelligence precisely because they correlate with school attainment. But what the advocates don’t mention is that the tests are rigged to do just that. The little puzzles and tasks that make up the scales (called items) are devised and trialled in advance, and only those whose results do correlate with school attainment tend to be selected for inclusion in the test.

Testers see, in score differences, the expression of a mysterious mental strength or energy they call “g” (British psychologist Charles Spearman’s abbreviation of what he called the “general factor” of intelligence). But the test items themselves are parodies of the complexity of nearly everyone’s cognitive functions in the real world.

The logic assumes, in any case, that school performance is itself a test of intelligence. But there are many doubts about that. Although school performance is used to select for occupational level, neither IQ scores nor educational attainments have much association with mental performance in the real world. Moreover, appealing to IQ test results’ correlation with school attainment does nothing to suggest what value might be added by using them to select students.

Ironically, it is because of the poor predictability of university performance from school attainment that universities and colleges have looked to IQ-type tests for help. In the US, for example, SAT tests – which test literacy, numeracy and writing skills – are widely used for university admissions. But a study by the UK’s National Foundation for Educational Research reported in 2010 that the adoption of an SAT-style reasoning test did “not add any additional information, over and above that of GCSEs and A levels”. Likewise, a 2015 review of research in the US and the UK published in the journal Psychological Bulletin found that school test scores were associated with less than 10 per cent of the variation in university performance. “Self-efficacy” beliefs, or confidence in personal ability, were found to be much more important.

This is hardly surprising, as exam results seem to rely far more on swotting and regurgitation than true ability. As Barnaby Lenon, chair of the UK’s influential Independent Schools Council, said recently: “The best GCSE and A-level results don’t go to the cleverest students – they go to those who revised in the Easter holidays.”

With up to 18 applications per place at some UK medical schools, those aspiring to become doctors are among the most heavily selected of all students. Here, too, there has been concern about the predictability problem. For example, a London-based team carried out a very careful analysis of correlations between A-level scores and performance at UK medical schools across a number of cohorts, from the 1980s to about 2012. Even with tricky statistical corrections, the correlations, published in BMC Medicine in 2013, turned out to be only low to moderate (around 0.35, with associations with practical exams tending to be rather lower). Surprisingly, GCSE results also turned out to be only moderate predictors of A-level performance two years later.

More importantly, IQ tests were tried with some intakes. But scores added no predictive value at all. This general picture has been confirmed in a number of other studies since, most recently in a UK study, “Does the UKCAT predict performance on exit from medical school? A national cohort study”, published in BMJ Open in 2016.

And a study published just last month in Scientific Reports, “The genetics of university success”, reports that A-level scores are associated with only 4.4 per cent of individual differences in final degree grades (a correlation of 0.22). Moreover, polygenic scores – composite measures built up from many sequenced DNA variations – were associated with only 0.7 per cent of such differences. The much-sought-after g-factor seems to evaporate outside the self-fulfilling correlation between IQ and school performance.
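
It is worth pausing on how these two ways of reporting the same result relate: the “per cent of individual differences” figure is simply the square of the correlation coefficient, which is why modest-sounding correlations translate into strikingly small shares of explained variance. As a rough illustration (the arithmetic here is indicative only; the studies’ own estimates involve rounding and further statistical adjustments):

\[ \text{variance explained} = r^2, \qquad 0.22^2 \approx 0.05, \qquad 0.35^2 \approx 0.12 \]

Even the medical-school correlations of about 0.35, in other words, account for little more than a tenth of the variation in outcomes.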

Of course, all of this says nothing about IQ tests’ ability to predict how people will fare in life beyond university. But research over many decades has also found little relation between IQ and job performance. Here, there have also been intensive efforts to boost the very low correlations by “correcting” them statistically. But these efforts have been highly controversial, involving the pooling of results from hundreds of disparate studies (some from the 1920s), using estimates of missing data and adopting a host of dubious assumptions.

Likewise, there is little evidence that members of high-IQ societies like Mensa are disproportionately successful in their careers. In any case, such correlations are easily explained by factors such as cultural background, and the above-mentioned confidence and self-efficacy beliefs.

Similarly, surveys going back to the 1960s have shown that neither school nor university grades are good predictors of job performance. A review by J. Scott Armstrong in the Encyclopedia of the Sciences of Learning in 2012 put the correlations at nearly zero six or more years after graduation. Higher-performing pupils do not tend to become “high-performing” adults. Conversely, the vast majority of high achievers in the real world, as adults, did not stand out at school.

Employers are increasingly catching on. In a New York Times interview in June 2013, Laszlo Bock, who was then a vice-president at Google, said that “we’ve seen from all our data crunching that [educational attainments] are worthless as criteria for hiring, and test scores are worthless… They don’t predict anything”. Google is by no means the only company that has recently said that it will disregard educational attainment – including at university – in its hiring.


If IQ tests are really just tests of certain kinds of learned knowledge, along with self-confidence, then they can equally be depicted as measures of social class background.

Not surprisingly, members of different classes, with different social and occupational roles, develop different kinds of knowledge and cognitive processes. These become part of their socio-cultural “ecology”; the resulting differences in neural networks in the brain even show up on MRI scans. At one end of the class structure, family wealth promotes healthier lifestyles, physical growth and cognitive vitality in children. Economic security, stable circumstances and predictable futures foster strong beliefs in personal abilities, and booming self-confidence. At the other end, working-class parents suffer the grinding stress of money shortages and insecurities of employment, income and housing. A report in Science in August 2013, “The poor’s poor mental power”, notes how poverty alone depresses cognitive functioning and drains mental reserves in parents and children.

Physiological processes have been discovered through which early life stress – or even that experienced by the mother before birth – can stifle long-term coping in challenging situations in later life.

Working-class parents are also likely to have reached negative conclusions about their own abilities through their own school experience (conclusions that may be reinforced by articles about genes and intelligence). It is difficult to hold aspirations for yourself or your children within a society that has certified you as deficient in brainpower. Yet a report in the Review of Economics and Statistics in 2010, “Must try harder: evaluating the role of effort in educational attainment”, cited just such aspirations as the “key to a child’s educational performance”.

There are even deeper consequences. Mere perception of inferior place in a social order reduces self-confidence and increases anxiety in test situations. Test anxiety clouds the mind, disturbing attention and focus. Just being told that it is a “test”, instead of, for example, a survey for research, seriously depresses the cognitive performance of minority and working-class groups.

As Cardiff University emeritus professor Antony Manstead explains in an article, “The psychology of social class: how socioeconomic status impacts thought, feelings, and behaviour”, in a recent edition of the British Journal of Social Psychology, “social class differences in identity, cognition, feelings, and behaviour make it less likely that working-class individuals can benefit from educational and occupational opportunities to improve their material circumstances”. Yet test scores are still treated as if they are readouts from blood tests.

What IQ tests really test is most clearly given away by the so-called Flynn effect: the steep rise in average IQ scores over generations in all developed countries. For example, average performances on a popular test in the UK improved by 27.5 points between 1947 and 2002 (the maximum score is 60).

Such changes hardly reflect the kind of fixed property of individual intelligence that testers ask us to believe in. Psychologists have been utterly, and perhaps comically, mystified by it, involving themselves in arcane debates about “biological” or “environmental” causes. They have been equally mystified by more recent reports of a levelling-off, or even decline, in average IQs over the past 20 years.

The Flynn effect corresponds almost exactly with the expansion of middle-class jobs from the 1940s to the 1990s, resulting in the effects on learning and test-taking confidence mentioned above. Correspondingly, as social mobility has stalled over the past 20 years, so the effect has tailed off.

The blindingly obvious relationship between IQ scores and class background is also revealed by the fact that when children from deprived backgrounds are fostered by middle-class families, their IQs increase by up to 15 points.

Unfortunately, in the absence of a genuine theory of intelligence, mysticism prevails. It is probably inevitable that institutions charged with grading, sorting and placing people in a class-structured job market must resort to simple metaphors such as “bright” or “dull”, “strong” or “weak”. But these are woefully self-fulfilling ideas.

Politicians constantly promise an education system that “allows everyone to fulfil their true potential”. But this implies the existence of a fixed, biological ladder of aptitude. This is what is called genetic determinism, and it prevails even though geneticists warn that we must no longer think of the genome as a “blueprint”. The same ghost-in-the-machine notion underlies the concept of g. It does its job in reproducing, and seeming to legitimise, an illusory meritocracy.

Banishing such myths will, of course, have selection implications for universities and colleges. They are making commendable efforts to widen admissions criteria and improve prediction, but a wholly different appreciation of the nature and depth of the problem is required. The problems are not those of a particular social class but of the class system as a whole.

Above all, we need to drop this tacit genetic determinism. Stanford University psychologist Carol Dweck, in her best-selling 2016 book Mindset: The New Psychology of Success, has shown what happens when we do. In her experiments, students and educators were encouraged to replace a “fixed mindset” – the belief that people are either born smart or not – with a “growth mindset” – the belief that intelligence and potential are created through participation, not merely brought out or fulfilled. Apart from transforming the educational progress of students of all ages, there were leaps in self-confidence, an increased desire for challenge and greater resilience in the face of failure.

At the Open University, which has no formal entry requirements, the remarkable transformation of intellectual self-confidence among its students is well documented. More widely, a similar institutional attitude change has helped to transform the gender imbalance in science, technology, engineering and medical subjects.

All this should be enough to question the value of IQ testing, and to help explain low associations with later educational and real-world performance. But critics also remind us of the dark, ideological side of IQ testing: its roots in the eugenics movement, its part in thousands of sterilisations in the US in the 1930s, and the inspiration this provided to the Nazis.

Educators should not be blinded by the application of amazing technology to crude questions. As in the past, it is all based on mountains of statistical correlations: in this case between millions of tiny variations in DNA and IQ scores. It is known that most such genetic variations have no consequences for development and function. But waves of migrants from different genetic backgrounds have entered the social class structure at different levels, so the scope for spurious associations – what geneticists call population stratification – is enormous.
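
It helps to see what a polygenic score actually is: at bottom, nothing more exotic than a weighted sum across measured DNA variants, with the weights themselves estimated from just these mountains of correlations. A minimal sketch of the standard construction (the symbols are illustrative, not drawn from any particular study):

\[ S_i = \sum_{j=1}^{m} \hat{\beta}_j \, x_{ij} \]

where \(x_{ij}\) counts the copies (0, 1 or 2) of variant \(j\) carried by person \(i\), and \(\hat{\beta}_j\) is that variant’s estimated association with IQ or attainment. If the \(\hat{\beta}_j\) are contaminated by class-linked population structure, the score simply inherits the contamination.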

Today’s would-be DNA-IQ testers declare more benevolent aims than the IQ testers of the past did. But, throughout history, scientists have often become unwitting bearers of ideology. And a powerful ploy has been to use correlations to turn effects into seeming causes, to turn victims into culprits. It is seen today in the deluge of correlations being described as “genetic effects”, “genes for”, “explaining”, “accounting for” and so on – when nothing of the kind is shown.

So the dire effects of social inequality – on health, development and ageing, as well as on psychological and educational testing – have themselves become portrayed as the effects of differences in IQ caused by differences in genes. In the process, IQ has become elevated to transcendental status. To University of Edinburgh professor Ian Deary, it may measure all-round “biological fitness”; to Plomin, it is “the omnipotent variable” of human existence.

In reality, such ideas are only rhetorical redescriptions of the class structure of society, its privileges and deprivations. We must protest about efforts to reduce these social causes to the effects of inert sequences of DNA. 

Kenneth Richardson is a former senior lecturer in human development at the Open University. He is author of Genes, Brains and Human Potential: the science and ideology of intelligence (Columbia University Press, 2017).

POSTSCRIPT:

Print headline: An acid test for IQ
