Title: In the Know: 35 Myths About Human Intelligence
Author: Russell T. Warne
Scope: 3 stars
Readability: 4 stars
My personal rating: 5 stars
See more on my book rating system.
If you enjoy this summary, please support the author by buying the book.
Topic of Book
A leading authority on human intelligence shows how popular beliefs about intelligence depart from actual research findings.
This topic may seem a bit far removed from the topics typically covered in this blog, but there is a clear connection between intelligence and technological innovation. For this reason, it seems appropriate to learn the basics of human intelligence.
- The scientific study of intelligence is probably the greatest success story in psychology – possibly in all the social sciences. It has made huge strides since the 1990s, but few laymen are aware of the progress.
- “g” is what scientists label general intelligence. It is commonly measured by IQ.
- Intelligence is connected to an amazing diversity of positive life outcomes:
- Success in education (in terms of both length and grades)
- Success in work (in terms of income, prestige of job, avoidance of unemployment, career length)
- Other factors clearly play a role in those outcomes, but those other factors are far less understood.
- Intelligence is to a large extent, though not completely, heritable.
- Amazingly, it does not really matter what type of test is used to assess “g”. A test will measure “g” as long as:
- The questions are cognitive in nature
- It includes different types of questions
- It is difficult enough, and has enough questions, to create variation in scores between people.
- This is because “g” is the overlap in the variance of scores from different tasks. What is unique to each task is statistically eliminated.
- There is no evidence for multiple intelligences (or more accurately, they all map back to “g”).
- IQ scores increased over the course of the 20th century in wealthy nations and are now doing the same in developing countries. Wealthy nations appear to have plateaued recently.
- Psychologists have found no reliable way to further raise individual intelligence in wealthy nations; the interventions known to raise intelligence (education, public health improvements, etc.) have already been widely implemented in industrialized societies.
- The more positive the environment, the more pronounced the heritable differences.
- Intelligence tests are not biased against any racial or cultural group. The results accurately report average differences in intelligence.
Important Quotes from Book
As I learned more about intelligence, I discovered that the scholarly knowledge about the topic was out of sync with popular opinion – sometimes alarmingly so. I wrote this book to try to reduce some of the distance between the beliefs of laymen and experts.
This book is aimed at anyone who is not a psychologist specializing in human intelligence… possible. My goal is not to make readers into experts, but rather to give them the tools to recognize common incorrect arguments and beliefs about intelligence.
Throughout the book I have tried to voice opinions that are widely held among intelligence researchers. Unanimity is rare, though, and some experts may disagree with some chapters. I know it is impossible to please everyone all the time, but my goal is to have any mainstream expert in intelligence agree with the vast majority of what I say in the book, with the disagreements being on the level of typical differences of professional opinion.
But people at the extremes in political belief will undoubtedly find the chapters in the book that discuss political and social issues to be distasteful, perhaps even incendiary. That says more about their beliefs than about intelligence research or my book. Facts are value-neutral, and only reality deniers will find anything in this book that is so threatening that they must fight against it.
The scientific study of intelligence is probably the greatest success story in psychology – possibly in all the social sciences. For over 100 years scientists – first psychologists, but later education researchers, sociologists, geneticists, and more – have studied human intelligence. Now, two decades into the twenty-first century, the results are impressive. The evidence of the importance of intelligence has accumulated to such an extent that informed scientists now cannot deny that intelligence is one of the most important psychological traits in humans.
But many people – even psychologists – are not aware of this fact.
While there is not unanimous agreement about a definition of intelligence (there never is for any concept in the social sciences), the definition that seems to have a great deal of consensus states:
Intelligence is a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience. It is not merely book learning, a narrow academic skill, or test-taking smarts. Rather, it reflects a broader and deeper capability for comprehending our surroundings – “catching on,” “making sense” of things, or “figuring out” what to do. (Gottfredson, 1997a, p. 13)
Despite the diversity in test administration, format, and content, all these tests measure intelligence because it is not the surface content of a test that determines whether it measures intelligence. Rather, it is what the test items require examinees to do that determines whether a test measures intelligence. As long as a test requires some sort of mental effort, judgment, reasoning, or decision making, it measures intelligence.
The most common model that psychologists use to understand the relationships among mental abilities is the Cattell–Horn–Carroll (CHC) model. The layers of abilities are labeled, from most specific to most general, as Stratum I (the bottom row), Stratum II (the middle row), and Stratum III (the top row). The only ability in Stratum III is general intelligence (labeled g), and it is the only ability that is theorized to be useful in performing all cognitive tasks. Beneath g is Stratum II, which consists of broad abilities that are not applicable in every situation.
The CHC model has a few important implications. First, it shows why so many tasks measure intelligence: only intelligence is applicable across every cognitive task, and every narrow task (shown in Stratum I) is subsumed beneath general intelligence. Second, it also shows how intelligence exerts its influence when people perform specific mental tasks: general intelligence is filtered through Stratum II abilities to be used to perform narrow, specific tasks.
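The three-stratum structure can be pictured as a small nested mapping. This is only a sketch: the Stratum II abilities and Stratum I tasks shown are common illustrative CHC examples, not the model's full inventory.

```python
# A minimal sketch of the CHC hierarchy. The broad abilities and narrow
# tasks listed are illustrative examples, not the complete model.
chc_model = {
    "g": {  # Stratum III: general intelligence, used in all cognitive tasks
        "fluid reasoning": ["matrix puzzles", "number series"],        # Stratum II -> Stratum I
        "crystallized knowledge": ["vocabulary", "general knowledge"],
        "processing speed": ["symbol search", "coding"],
    }
}

# Per the model, g influences a specific task only "through" a Stratum II ability:
for broad_ability, narrow_tasks in chc_model["g"].items():
    print(f"g -> {broad_ability} -> {narrow_tasks}")
```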
Myth: Intelligence Is Whatever Collection of Tasks a Psychologist Puts on a Test
The first reason why this reasoning is wrong is that g itself is not a simple sum of a set of mental abilities (Jensen, 1998). Rather, factor analysis (a statistical procedure explained in the Introduction) finds the overlap of the variances of scores from different tasks and eliminates the unique component of each of these scores. This overlapping portion across all scores is the general ability factor, or g. Because g is made up of the ability that is measured across all tasks on an intelligence test, the measure of g (in other words, an IQ score) has little to do with specific tasks. Anything unique to any specific task is pulled out of g during the course of factor analysis.
The collection of tasks on a test really does not matter much, as long as there are several types of tasks on a test and they are all cognitive in nature. All cognitive tasks measure g to some degree.
However, this does not mean that every cognitive task on an intelligence test is an equally good measure of g (Jensen, 1980b). Some tasks are better than others at measuring intelligence.
Generally, more complex tasks have higher g loadings, while simpler tasks have lower g loadings.
The idea that intelligence is just a set of arbitrarily chosen tasks that are thrown together on an intelligence test is simply not true. Regardless of the content that psychologists choose to put on a test, any cognitive task measures intelligence to some extent. When the scores from these tasks are combined via factor analysis, the unique aspects of each test are stripped away, and only a score based on the common variance among the tasks – the g factor – remains. Scores from these g factors correlate so highly that they can be considered equal. As a result, the idea that intelligence is an arbitrary collection of test items is completely false. Instead, intelligence, as measured by the g factor, is a unitary ability, regardless of what tasks are used to measure it.
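The factor-analytic logic described above can be illustrated with a toy simulation (synthetic data, not real test scores): generate several task scores that all share one common factor plus task-specific noise, then recover the shared variance as the first principal component of the correlation matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000  # simulated examinees

# Hypothetical one-factor model: six cognitive tasks, each loading on a
# single general factor g plus task-unique noise (illustrative values).
g = rng.standard_normal(n)
loadings = np.array([0.8, 0.7, 0.6, 0.5, 0.7, 0.6])
scores = np.outer(g, loadings) + rng.standard_normal((n, 6)) * np.sqrt(1 - loadings**2)

# The first principal component of the correlation matrix plays the role
# of the extracted g factor: the variance common to all tasks.
corr = np.corrcoef(scores, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)   # eigenvalues in ascending order
first_factor = eigvecs[:, -1]
first_factor = first_factor * np.sign(first_factor.sum())  # orient positively

print(np.round(first_factor, 2))   # every task loads positively on the factor
print(round(eigvals[-1] / 6, 2))   # share of total variance captured by "g"
```

The task-unique noise contributes nothing to the shared component, which is the statistical sense in which the specific task content is "stripped away."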
Myth: Intelligence Is Too Complex to Summarize with One Number
This work paved the way to the general consensus that dominates psychology today: that intelligence is a general ability (like Spearman’s g) that is related to other mental abilities.
Despite searching for over 100 years, no one has ever found a cognitive variable that was uncorrelated with other cognitive variables or a test that consistently produces multiple factors. This is extremely strong evidence that intelligence is one entity.
No expert in the past 60 years has argued that g is the only important cognitive ability. Anyone who attacks intelligence research by arguing that “IQ isn’t everything” is attacking a straw man.
As a result, the best-designed intelligence tests produce more than just a global IQ score. For example, the WISC-V produces a full-scale IQ score but also scores for verbal comprehension, visual-spatial ability, fluid reasoning, working memory, and processing speed. Even if two people have the same full-scale IQ score, their scores on the Stratum II abilities may be very different.
These strengths and weaknesses matter, especially for making choices about careers or college majors. Research has shown that – in countries where students have a great deal of freedom to choose their occupations or college majors – most people gravitate towards fields that allow them to use their strengths.
Myth: IQ Does Not Correspond to Brain Anatomy or Functioning
All this changed with the invention of technologies that could examine the structure or functioning of brains of living individuals. The first of these technologies was electroencephalography (EEG), which measures brain waves via electrodes placed on the scalp. In the 1970s the inventions of the computed tomography (CT) scan and magnetic resonance imaging (MRI) allowed scientists to view living brains without subjecting people to neurosurgery. Later, the invention of positron emission tomography (PET) and functional MRI (fMRI) allowed scientists to determine the location of brain functioning with a much higher degree of precision than EEG technology (though not as quickly or directly). Today, neuroscientists have a wealth of technologies available to them to understand many aspects of brain functioning – including intelligence.
One of the best known is the correlation between brain size and intelligence, which when measured via brain-imaging techniques in living individuals is between r = .20 and .40.
Volume of white matter is correlated with speed of problem solving (Penke et al., 2012) and with IQ, thus showing that connectivity of brain regions is likely to be an important determinant of intelligence… Additionally, smarter people have more neurons in their brains, and those neurons are more densely packed together
Myth: Intelligence Is a Western Concept that Does Not Apply to Non-Western Cultures
We found 97 analyzable datasets from 31 countries in every non-Western region of the world… The results were striking. Of the 97 samples, 71 (73.2%) produced g unambiguously. The remaining 26 datasets produced more than one factor, but when these factors were factor analyzed, 23 of the datasets (88.5%) produced g.
Therefore, 94 of the 97 (96.9%) samples produced g either immediately or after a second factor analysis. Moreover, the g factor is about as strong in the non-Western samples as it is in typical Western samples. All of these findings show that g is not a culturally specific phenomenon confined to Western populations.
Myth: There Are Multiple Intelligences in the Human Mind
Howard Gardner’s Frames of Mind: The Theory of Multiple Intelligences, originally published in 1983, is one of those works, like Sigmund Freud’s The Interpretation of Dreams or B. F. Skinner’s Walden Two, that has seeped into the wider culture and pop psychology. Even people who have never read Frames of Mind know of the theory of multiple intelligences.
Despite the popularity of Gardner’s theory, it is not a viable theory of human cognitive abilities because of two major types of problems. The first problem is empirical, where Gardner’s theory does not find support in the data from psychological research on cognitive abilities. The second is that the theory has fundamental flaws in its logic and construction that prevent it from being a useful scientific theory.
One of the essential characteristics of a scientific theory is that it has to be specific enough to test. Unfortunately, Gardner’s theory of multiple intelligences is too vague for any scientific purpose.
The theory of multiple intelligences lacks empirical support and a coherent theoretical foundation. Therefore, in situations where it could impact people’s lives – like in education and in scientific research – it should be completely abandoned.
Myth: Practical Intelligence Is a Real Ability, Separate from General Intelligence
If practical intelligence really is a separate ability, then it is necessary to describe why g cannot or does not also solve real-life problems or help people function in their environment outside school. So far, the results have been unconvincing.
If practical intelligence and g are truly separate, then Sternberg must also solve a basic evolutionary problem: it is not clear how a separate academic intelligence that is only useful in school environments would evolve. Traits can only evolve in an environment in which they are useful for surviving. However, academic environments did not exist for the vast majority of humans’ evolutionary history.
Ironically, every attribute Sternberg has claimed for practical intelligence actually is an attribute of g.
Where does this leave people with high intelligence but poor skills on the job or in everyday life? Most psychologists just chalk this up to the fact that IQ does not correlate perfectly with other traits (e.g., r ≠ 1.0) and that other, non-cognitive traits are important for success in everyday life (such as motivation or personality).
Myth: Measuring Intelligence Is Difficult
Intelligence is extremely easy to measure because – as stated in the Introduction – any task that requires some degree of cognitive work or judgment measures intelligence (Jensen, 1998). All of these tasks correlate positively with one another and measure g (see Chapter 1). As a result, all it takes to measure intelligence is to administer at least one task (preferably more) that requires cognitive work; the resulting score is an estimate of the examinee’s intelligence level.
The fact that the CAS, CAM, DIT, NALS, TOFHLA, and many more tests all measure intelligence is more evidence for the indifference of the indicator.
Another consequence of a poor understanding of the indifference of the indicator is that it leads to misinterpretations of test scores… Thus, interpretations of test scores that are widespread in the accountability movement or in college rankings probably do not reflect educational quality to the extent that policy makers, legislators, or educators believe because these groups do not realize that the tests are measuring g.
There is value in using a lengthy test to measure intelligence, but often it is not needed. This is because one of the reasons intelligence is easy to measure is that it produces highly stable scores very quickly.
By itself, a single intelligence test item produces a score that is not reliable: only about .25 (Lubinski, 2004; Lubinski & Humphreys, 1997). This means that a score on a 1-item test is too unstable to be useful. However, when items are combined, the total reliability based on those items increases. With 7 items, score reliability increases to .70 – good enough for research purposes. An intelligence test with 12 items has an estimated reliability of .80. And it only takes 27 items (about the length of a single-subject academic test for children) to reach reliability of .90.
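The growth of reliability with test length follows the standard Spearman–Brown prophecy formula from psychometrics; using it to reproduce the chapter's figures is my reconstruction, but the numbers match exactly.

```python
def spearman_brown(r_item, k):
    """Predicted reliability of a k-item test from single-item reliability."""
    return k * r_item / (1 + (k - 1) * r_item)

# Starting from a single-item reliability of .25, as cited above:
for k in (1, 7, 12, 27):
    print(f"{k:2d} items: reliability = {spearman_brown(0.25, k):.2f}")
# 1 items: 0.25, 7 items: 0.70, 12 items: 0.80, 27 items: 0.90
```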
Nonetheless, because of the indifference of the indicator and the fact that high reliability does not take many test items, it is not true that intelligence is difficult to measure. In fact, intelligence is incredibly easy to measure. K-12 school accountability tests, licensing tests for jobs, college admissions tests, spelling bees, driver’s license tests, and many other tests are all measures of g – though many measure other abilities as well (e.g., job knowledge, memorization), and they are not all equally good measures of g.
It is likely that most readers have taken a test that measures intelligence without realizing it.
Myth: Intelligence Tests Are Imperfect and Cannot Be Used or Trusted
Most intelligence tests tend to produce scores with high reliability. As an example, the ACT produces scores that have a reliability of .94 (ACT, Inc., 2017, Table 10.1). The overall SAT score has a similar reliability of .96.
Whether intelligence tests can be used to make decisions does not depend on whether the tests are perfect. Rather, whether to use a test for decision making depends on whether the test is better than alternative methods of decision making. The need to select individuals (for jobs, college admission, promotions, or gifted programs) does not magically disappear if tests are banned. Any time that the number of applicants exceeds the number of positions available, selection has to occur. If intelligence tests can make more accurate judgments than other tools – as is often the case – then the tests should be used whenever possible (especially in combination with other variables). Doing so will result in fewer errors, more fair selection, and more successful experiences in educational programs and jobs.
Myth: Intelligence Tests Are Biased against Diverse Populations
Of the 35 misconceptions in this book, one of the most common is the belief that intelligence tests are biased against African Americans, Hispanics, and Native Americans. In one study of introductory psychology textbooks, this was the most common inaccuracy that authors perpetuated.
Generally, within the United States, European Americans have an average IQ of approximately 100, followed by Hispanic Americans and Native Americans (average IQ ≈ 90), and African Americans scoring lowest (average IQ ≈ 85). Meanwhile, Asian Americans tend to score higher than all other large racial groups (average IQ ≈ 105).
It is important to note, though, that these are merely averages, and these numbers do not apply to every member of these groups. As Figure 10.1 shows, there is tremendous overlap among these groups, and it is possible to find people from every group at every intelligence level. In other words, there are some people with low IQ scores who belong to groups with a higher average, and there are some people with high IQ scores who belong to a group with a lower average score. These group averages, therefore, often do not apply to particular individuals.
In contrast to the widespread belief that intelligence tests are biased, the mainstream viewpoint among psychometricians and psychologists who use tests is that “the issue of test bias is scientifically dead”.
Because procedures to identify and eliminate test bias are so routine – and mandated as part of the profession’s ethical code – it is nearly impossible to sell a test that hasn’t been subjected to careful scrutiny for bias. If anyone tried, there are two likely consequences. First, the test would not be commercially successful.
Second, any customers who use the test for decision making – especially in education or employment – would be vulnerable to a lawsuit because using a biased test to make decisions about people in the group whose scores are underestimated would be discriminatory.
It is important to note that this discussion about test bias – and its absence from professionally designed tests – only applies to groups that speak the test language as a native and who were born in the country the test was designed for. In the United States, this means that tests of g are unbiased for native English speakers born in that country. Everyone in the debate about test bias agrees that it is inappropriate to administer a test to a person who does not speak the language of the test and then interpret the low score as evidence of low intelligence.
Myth: IQ Only Reflects a Person’s Socioeconomic Status
The correlations are so weak that the idea that intelligence tests measure socioeconomic status “is a singularly foolish assertion.” I agree.
Statistically controlling for socioeconomic status has almost no impact on the ability of test scores to predict grades (Sackett et al., 2008), and even after controlling for childhood socioeconomic status, IQ has a moderately strong positive correlation with later income and educational success (Kuncel & Hezlett, 2010; Murray, 1998, 2002). This indicates that the correlation between IQ and academic performance is mostly independent of socioeconomic status – even though all three variables are positively correlated with one another.
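What "statistically controlling for socioeconomic status" means can be shown with the standard partial-correlation formula. The correlation values below are hypothetical round numbers chosen for illustration, not figures from the book; the point is that when the test-SES and grades-SES correlations are modest, partialling out SES barely moves the IQ-grades correlation.

```python
from math import sqrt

def partial_corr(r_xy, r_xz, r_yz):
    """Correlation between x and y after statistically controlling for z."""
    return (r_xy - r_xz * r_yz) / sqrt((1 - r_xz**2) * (1 - r_yz**2))

# Hypothetical values: IQ-grades r = .50; each correlates .30 with SES.
print(round(partial_corr(0.50, 0.30, 0.30), 2))  # 0.45 - barely reduced
```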
Correlations in IQ scores among relatives show evidence of a genetic influence on intelligence. For identical twins (who share 100% of their genes), the correlation between their IQ scores is r = .86; the fact that this value is not r = 1.00 indicates that genes are not the only factor determining IQ scores. Likewise, adoptees and their non-biological relatives (who share 0% of their DNA) have IQ scores that are correlated r = .19 to .24.
Heritability for intelligence tends to be around .50 (i.e., about 50% of IQ score differences are due to genetic differences).
Generally, studies of children tend to produce lower heritability (and therefore higher environmental/non-genetic influence), often as low as .20. Studies of adults produce higher heritability – sometimes above .80 (Bouchard, 2004, 2014; Deary, 2012; Hunt, 2011). This indicates that the importance of genes increases as people age (Plomin & Deary, 2015). In other words, intelligence differences among adults are more genetic in origin, whereas in young children, environmental variables matter more.
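A classic way to estimate heritability from twin data is Falconer's formula, h² = 2(r_MZ − r_DZ). The identical-twin correlation of .86 is from the summary above; the fraternal-twin correlation of about .60 is a commonly cited figure I am supplying for illustration, not one given in the book.

```python
def falconer_h2(r_mz, r_dz):
    """Falconer's rough estimate of heritability from twin correlations,
    assuming a simple additive genetic model."""
    return 2 * (r_mz - r_dz)

# r_MZ = .86 from the summary; r_DZ = .60 is an assumed, commonly cited value.
print(round(falconer_h2(0.86, 0.60), 2))  # 0.52 - close to the ~.50 cited above
```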
Often, the results of behavioral genetics studies will indicate that genes are important – if a person already lives in an industrialized nation in a home where basic needs are met. It is not clear how well these results apply to individuals in severe poverty or in highly unfavorable environments.
Myth: High Heritability for Intelligence Means that Raising IQ Is Impossible
There are two classic examples of traits in humans that have high heritability, but which also have effective environmental interventions that can improve the lives of people. The first is myopia (i.e., nearsightedness), which has very high heritability – .75 to .88 in one typical study (Dirani et al., 2006). But there are simple interventions that correct this highly heritable trait: eyeglasses and contact lenses. Thus, it is possible to change the environment to improve people’s functioning, even if a trait is highly heritable.
The second example is a disorder called phenylketonuria (PKU).
Nevertheless, there has been progress with finding ways to increase intelligence in people. One of the most successful started in the 1970s when scientists noticed that children with high levels of lead had lower IQ scores (about 4–5 points) than children with low lead levels.
Another successful intervention to raise intelligence is to cure a child’s iodine deficiency. People with low iodine suffer from thyroid and neurological problems. Giving iodine supplements to people with an iodine deficiency cures this health problem, and – in children – raises IQ by about 8 points (Protzko, 2017a). Two billion people worldwide suffer from iodine deficiency, mostly in southern Asia and Sub-Saharan Africa, and these people are at risk for lower IQ and intellectual disabilities. In fact, iodine deficiency is the most common cause of preventable intellectual disability in the world.
Myth: Environmentally Driven Changes in IQ Mean that Intelligence Is Malleable
Worldwide, IQ scores drifted higher over the course of the twentieth century.
An IQ gain of 3 points per decade is substantial. It would indicate that a person with an average IQ of 100 from 1970 who had traveled through time to 2020 would score only 85 compared to a twenty-first-century population. A person of average intelligence from 1920, if transported through time to today, would score 70, which is about the average IQ for people with Down syndrome and the approximate cutoff for being diagnosed with an intellectual disability.
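The time-travel arithmetic works out as a simple linear re-norming, assuming (as a deliberate simplification) a constant gain of 3 points per decade:

```python
def renormed_iq(old_score, test_year, norm_year, gain_per_decade=3):
    """Re-express an old IQ score against later norms, assuming a constant
    Flynn-effect gain per decade (a simplification for illustration)."""
    return old_score - gain_per_decade * (norm_year - test_year) / 10

print(renormed_iq(100, 1970, 2020))  # 85.0
print(renormed_iq(100, 1920, 2020))  # 70.0
```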
Flynn brought so much attention to the increasing IQ scores that Herrnstein and Murray (1994) called it the Flynn effect, a name that has since stuck.
After more than three decades of research, it is clear that there is no single cause of the Flynn effect. One highly likely cause is increased education…Other suggested causes of the Flynn effect include improved physical health.
Based on the Flynn effect, it is not clear what more anyone can do to raise IQ in a country like the United States or in other wealthy nations. On the other hand, developing nations that improve public health and education and modernize their economies are seeing larger IQ gains than anything seen in industrialized nations in the twentieth century.
Since the early 2000s, there has been a new development in research on the Flynn effect. The increase in IQ has stopped in some countries… These countries are all industrialized and wealthy, with access to all the technological, societal, and cultural changes that modern society brings to a nation. The countries also have widespread access to a quality education, and some provide universal health care to their citizens. These countries may have reached (or may soon reach) a saturation point where environmental improvements provide no additional boost in IQ.
At the individual level, IQ scores seem to stabilize between the ages of (approximately) 7 and 10. Thereafter, “Small changes are common, large changes are rare”.
For people already in positive environments – as many people in industrialized nations are – current knowledge about the environmental causes of high IQ provides few clues about how to raise IQ.
Myth: Social Interventions Can Drastically Raise IQ
The evidence is unequivocal that children who spend a long period of time in a neglectful, deprived environment experience a lowered IQ and long-term negative effects. Removing children from an environment like this – whether through adoption or improving their living conditions – is a boon for their intelligence (and their quality of life, in general).
For children who live in poverty, preschool is, by far, the most studied social intervention to raise intelligence, and early studies were promising. Initial results of preschool are always strong, but as soon as the intervention ends, fadeout starts, and any gains are usually gone within a few years. Although benefits of preschool may accrue in adolescence or adulthood, it is unclear how this happens and whether these benefits occur in typical preschool programs.
Raising intelligence permanently is hard, and it seems that nothing short of an intensive, years-long intervention that includes academic, social, and health improvements throughout childhood and adolescence will permanently raise IQ.
Myth: Improvability of IQ Means Intelligence Can Be Equalized
An extreme example of an attempt to equalize environments occurred after World War II when the Soviet-supported regime in Poland rebuilt the city of Warsaw, almost three-quarters of which had been destroyed. To implement communist ideals, the government built neighborhoods that were as uniform as possible, with homes, apartment buildings, and commercial buildings being similar throughout the city. Social and cultural services were distributed approximately evenly throughout Warsaw, and families were assigned homes so that every neighborhood contained a mixture of people who worked in low-, mid-, and high-prestige occupations. After three decades of this intensive, egalitarian urban planning, a team of scientists administered a non-verbal intelligence test to a large, representative sample of children in the city. The results indicated that the process to equalize the neighborhood environment did nothing to neutralize the positive correlation that IQ had with parental occupational prestige and parental education.
In fact, the correlation between these two values was similar to what is found in capitalist countries (see Chapter 11), and the authors recognized this… The authorities in post-World War II Poland had far more power to change the environment than any democratic government does, which should make anyone skeptical about the ability of social programs in democratic nations to equalize intelligence.
Inequality of intelligence stubbornly persists because of one simple fact: IQ scores are partially influenced by genes, as indicated by h2 values greater than zero (see Chapter 11). As a result, environmental interventions do not equalize intelligence in people because the genetic influences still remain.
A similar phenomenon seems to happen when environments are improved. Preschoolers in Tucker-Drob’s (2012) study had higher heritability for reading and math scores than similar children who did not attend preschool. Apparently, sending children to the more intellectually stimulating preschool environment allowed their genes to express themselves in ways that increased the genetic influences on math and reading scores. Positive environments seem to allow genetic influences to be most pronounced. Therefore, any efforts to improve the environment for the entire population will probably increase the influence of genetics, because heritability will increase. Ironically, an improved environment may increase the importance of genetic differences among people, which is the exact opposite of the goal of some people who are trying to improve environments.
Three facts suggest that real-world attempts to equalize intelligence in people will not be successful: (a) the meager results of the Polish attempt to equalize environments, (b) the important influence of genes in creating IQ differences, and (c) the higher heritability in more positive environments.
Myth: Effective Schools Can Make Every Child Academically Proficient
Educational psychologists and intelligence researchers had said for decades that it was impossible for every student to master a curriculum.
No country or state has ever created a school system that was successful in educating every student to a high level. Yet policy makers believe that this is possible anyway.
Today, this is one of the strongest bodies of research in all of psychology, and it all points to one conclusion: “individual differences in general cognitive ability is the single most important variable for understanding how well students . . . learn academic material” (Frisby, 2013, p. 201). No other variable is a better predictor of academic outcomes.
Depending on study characteristics, intelligence correlates with academic achievement at a level of r = .40 to .70. That correlation is so strong that – in most studies – intelligence is a better predictor of success in school than any other variable.
“Slow learners will always lag behind their brighter peers in academic work, and they will never catch up”
This denial of g has serious negative consequences in the education system. Because people refuse to admit that some children are always going to struggle in school, a blame game often ensues when children's educational performance fails to meet adults' expectations.
Another negative consequence of denying intelligence is that it leads teachers to assume that all of their students are approximately the same in their readiness to learn new material. This incorrect belief causes a teacher to assume that one lesson serves every student well. In reality, a typical group of students displays a wide span of cognitive abilities.
Some policies that deny intelligence actually harm students. One of these policies is the idea that every child should attend college.
In a perfect world, this would be a well-known fact, and school personnel would consider IQ and intelligence differences when making educational decisions about individual children. Unfortunately, the educational establishment in the United States (and many other countries) has ignored g – much to the field’s detriment.
Myth: Intelligence Research Undermines the Fight against Inequality
Blaming intelligence research for the connections among IQ, genes, and social outcomes is shortsighted. Intelligence researchers do not create the correlations between g and life outcomes, nor do they force intelligence (or any other trait) to be heritable. These facts exist, regardless of whether psychologists and other scientists discover them or not. Ignoring intelligence research will not change that.
Ironically, some commentators (e.g., Jensen, 1998; Mackintosh, 2011; Plomin, 2018) have pointed out that high heritability for a trait is the sign of a more equitable society, because it indicates that environments are not constraining the development of most people's genetic potentials. Thus, the high heritability of intelligence (often over .50 for adults in wealthy countries) and, to a lesser extent, of income (about .40 in developed nations, according to Plomin, 2018, p. 100) is an index of fairness: it shows that differences are not being caused by society and external forces… If anything, people concerned with environmental disadvantages should welcome high heritability of life outcomes.
The moral of the story is simple: judge people as individuals and don’t consider an irrelevant factor like their race.
After more than a century of research, scientists know more about intelligence than almost any other psychological trait. Unfortunately, much of this information has not trickled down to the general public, the media, students, or even psychologists with specializations in other areas… As a result, erroneous beliefs about intelligence are widespread.
On all these points, what non-experts believe is not just wrong – it is spectacularly wrong.
Third, these incorrect beliefs almost all lean in one direction: toward an overly optimistic view of human intelligence. There seems to be an egalitarian bias in non-experts' beliefs about intelligence, one that favors wishful thinking.
As Jensen put it: “The human condition in all of its aspects cannot be adequately described or understood in a scientific sense without taking into account the powerful explanatory role of the g factor” (1998, p. xii). Anyone who genuinely wants to understand humans or improve society needs to understand intelligence. The decision to ignore or deny g is a decision to live in a fantasy world.
One of the great challenges of intelligence research in the twenty-first century will be to identify specific environmental influences that have a noteworthy, permanent impact on intelligence. This has proven difficult so far because the environment siblings share seems to have no permanent impact on intelligence. In other words, whatever some parents do that makes a child smarter is not something they do uniformly for every child in the family.
Intelligence has its tentacles reaching into nearly every aspect of people’s lives. That is why it matters so much. Intelligence is part of nearly everything important that people do, and denying its existence – or the existence of intelligence differences – will inevitably lead to incomplete answers to important questions in psychology, sociology, health, politics, and more. The public ignores intelligence at its own peril.