Racial bias can seem like an intractable problem. Psychologists and other social scientists have had difficulty finding effective ways to counter it – even among people who say they support a fairer, more egalitarian society. One likely reason for the difficulty is that most efforts have been directed toward adults, whose biases and prejudices are often firmly entrenched.
My colleagues and I are starting to take a new look at the problem of racial bias by investigating its origins in early childhood. As we learn more about how biases take hold, will we eventually be able to intervene before any biases become permanent?
Measuring racial bias
When psychology researchers first began studying racial biases, they simply asked individuals to describe their thoughts and feelings about particular groups of people. A well-known problem with these measures of explicit bias is that people often try to respond to researchers in ways they think are socially appropriate.
Starting in the 1990s, researchers began to develop methods to assess implicit bias, which is less conscious and less controllable than explicit bias. The most widely used test is the Implicit Association Test, which lets researchers measure whether individuals have more positive associations with some racial groups than others. However, an important limitation of this test is that it only works well with individuals who are at least six years old – the instructions are too complex for younger children to remember.
Recently, my colleagues and I developed a new way to measure bias, which we call the Implicit Racial Bias Test. It can be used with children as young as three, as well as with older children and adults. The test assesses bias in a manner similar to the IAT but with different, simpler instructions.
Here’s how a version of the test designed to detect an implicit bias favoring white people over black people works: We show participants a series of black and white faces on a touchscreen device. Each photo is accompanied by a cartoon smile on one side of the screen and a cartoon frown on the other.
In one part of the test, we ask participants to touch the cartoon smile as quickly as possible whenever a black face appears, and the cartoon frown as quickly as possible whenever a white face appears. In another part of the test, the instructions are reversed.
The difference in the amount of time it takes to follow one set of instructions versus the other is used to compute the individual’s level of implicit bias. The reasoning is that it takes more time and effort to respond in ways that go against our intuitions.
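The scoring logic described above can be sketched in a few lines of code. This is an illustrative simplification, not the published scoring procedure (real implicit-bias measures such as the IAT use a more elaborate algorithm); the function and variable names are hypothetical.

```python
def bias_score(congruent_rts, incongruent_rts):
    """Simplified implicit bias score: the mean reaction-time difference
    (in seconds) between the instruction condition that goes against a
    participant's intuitions and the one that matches them.
    A larger positive score suggests a stronger implicit bias."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(incongruent_rts) - mean(congruent_rts)

# Hypothetical reaction times (seconds) for one participant
congruent = [0.61, 0.58, 0.64, 0.60]    # instructions match intuitions
incongruent = [0.72, 0.75, 0.70, 0.74]  # instructions reversed

score = bias_score(congruent, incongruent)
print(f"Implicit bias score: {score:.3f} s")
```

Here the participant is about 120 milliseconds slower under the reversed instructions, which under this simplified scoring would indicate an implicit bias.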
Some studies suggest that precursors of racial bias can be detected in infancy. In one such study, researchers measured how long infants looked at faces of their own race or another race that were paired with happy or sad music. They found that 9-month-olds looked longer when the faces of their own race were paired with the happy music, which was different from the pattern of looking times for the other-race faces. This result suggests that the tendency to prefer faces that match one’s own race begins in infancy.
These early patterns of response arise from a basic psychological tendency to like and approach things that seem familiar, and dislike and avoid things that seem unfamiliar. Some researchers think that these tendencies have roots in our evolutionary history because they help people to build alliances within their social groups.
However, these biases can change over time. For example, young black children in Cameroon show an implicit bias in favor of black people over white people, part of a general tendency to prefer in-group members – people who share characteristics with oneself. But this pattern reverses in adulthood, as individuals are repeatedly exposed to cultural messages indicating that white people have higher social status than black people.
A new approach to tackling bias
Researchers have long recognized that racial bias is associated with dehumanization. When people are biased against individuals of other races, they tend to view them as part of an undifferentiated group rather than as specific individuals. Giving adults practice at distinguishing among individuals of other races leads to a reduction in implicit bias, but these effects tend to be quite short-lived.
In our new research, we adapted this individuation approach for use with young children. Using a custom-built training app, young children learn to identify five individuals of another race during a 20-minute session. We found that 5-year-olds who participated showed no implicit racial bias immediately after the training.
Although the effects of a single session were short-lived, an additional 20-minute booster session one week later allowed children to maintain about half of their initial bias reduction for two months. We are currently working on a game-like version of the app for further testing.
Only a starting point
Although our approach suggests a promising new direction for reducing racial bias, it is important to note that this is not a magic bullet. Other aspects of the tendency to dehumanize individuals of different races also need to be investigated, such as people’s diminished level of interest in the mental life of individuals who are outside of their social group. Because well-intended efforts to reduce racial bias can sometimes be ineffective or produce unintended consequences, any new approaches that are developed will need to be rigorously evaluated.
And of course the problem of racial bias is not one that can be solved by addressing the beliefs of individuals alone. Tackling the problem also requires addressing the broader social and economic factors that promote and maintain biased beliefs and behaviors.
John, who is 12 years old, is three times as old as his brother. How old will John be when he is twice as old as his brother?
Two families go bowling. While they are bowling, they order a pizza for £12, six sodas for £1.25 each, and two large buckets of popcorn for £10.86 each. If they are going to split the bill between the two families, how much does each family owe?
These are questions from online Intelligence Quotient (IQ) tests. Tests that purport to measure your intelligence can be verbal – that is, written – or non-verbal, focusing on abstract reasoning independent of reading and writing skills. First created more than a century ago, the tests are still widely used today to measure an individual’s mental agility and ability.
Education systems use IQ tests to help identify children for special education and gifted education programmes and to offer extra support. Researchers across the social and hard sciences study IQ test results, looking at everything from their relation to genetics and socio-economic status to academic achievement and race.
Online IQ “quizzes” purport to be able to tell you whether or not “you have what it takes to be a member of the world’s most prestigious high IQ society”.
If you want to boast about your high IQ, you should have been able to work out the answers to the questions. John is 12, so his brother is 4 and the age gap is eight years; when John is 16 his brother will be 8, making John twice as old. The bowling bill comes to £41.22, so each family owes £20.61.
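The answers to the two arithmetic puzzles can be checked in a few lines of Python (taking the popcorn as £10.86 per bucket, which is what the stated answer implies):

```python
# John is 12 and three times his brother's age, so the brother is 4 and
# the age gap is 8 years. John is twice as old when his age J satisfies
# J = 2 * (J - 8), i.e. J = 16.
john = 12
brother = john // 3
gap = john - brother
john_when_twice = 2 * gap  # J = 2*(J - gap)  =>  J = 2*gap

# Bowling bill: one pizza, six sodas, two buckets of popcorn, split
# evenly between the two families.
bill = 12 + 6 * 1.25 + 2 * 10.86
per_family = bill / 2

print(john_when_twice)          # 16
print(round(per_family, 2))     # 20.61
```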
Despite the hype, the relevance, usefulness, and legitimacy of the IQ test is still hotly debated among educators, social scientists, and hard scientists. To understand why, it’s important to understand the history underpinning the birth, development, and expansion of the IQ test – a history that includes the use of IQ tests to further marginalise ethnic minorities and poor communities.
In the early 1900s, dozens of intelligence tests were developed in Europe and America claiming to offer unbiased ways to measure a person’s cognitive ability. The first of these tests was developed by French psychologist Alfred Binet, who was commissioned by the French government to identify students who would face the most difficulty in school. The resulting 1905 Binet-Simon Scale became the basis for modern IQ testing. Ironically, Binet himself thought that IQ tests were inadequate measures of intelligence, pointing to their inability to properly measure creativity or emotional intelligence.
At its conception, the IQ test provided a relatively quick and simple way to identify and sort individuals based on intelligence – which was and still is highly valued by society. In the US and elsewhere, institutions such as the military and police used IQ tests to screen potential applicants. They also implemented admission requirements based on the results.
The US Army Alpha and Beta Tests screened approximately 1.75m draftees in World War I in an attempt to evaluate the intellectual and emotional temperament of soldiers. Results were used to determine how capable a soldier was of serving in the armed forces and to identify which job classification or leadership position a recruit was most suitable for. Starting in the early 1900s, the US education system also began using IQ tests to identify “gifted and talented” students, as well as those with special needs who required additional educational interventions and different academic environments.
Ironically, some districts in the US have recently employed a maximum IQ score for admission into the police force. The fear was that those who scored too highly would eventually find the work boring and leave – after significant time and resources had been put towards their training.
Alongside the widespread use of IQ tests in the 20th century was the argument that the level of a person’s intelligence was influenced by their biology. Ethnocentrics and eugenicists, who viewed intelligence and other social behaviours as being determined by biology and race, latched onto IQ tests. They held up the apparent gaps these tests illuminated between ethnic minorities and whites or between low- and high-income groups.
Some maintained that these test results provided further evidence that socioeconomic and racial groups were genetically different from each other and that systemic inequalities were partly a byproduct of evolutionary processes.
Going to extremes
The US Army Alpha and Beta test results garnered widespread publicity and were analysed by Carl Brigham, a Princeton University psychologist and early founder of psychometrics, in his 1922 book A Study of American Intelligence. Brigham applied meticulous statistical analyses to argue that American intelligence was declining, claiming that increased immigration and racial integration were to blame. To address the issue, he called for social policies to restrict immigration and prohibit racial mixing.
A few years before, American psychologist and education researcher Lewis Terman had drawn connections between intellectual ability and race. In 1916, he wrote:
High-grade or border-line deficiency … is very, very common among Spanish-Indian and Mexican families of the Southwest and also among Negroes. Their dullness seems to be racial, or at least inherent in the family stocks from which they come … Children of this group should be segregated into separate classes … They cannot master abstractions but they can often be made into efficient workers … from a eugenic point of view they constitute a grave problem because of their unusually prolific breeding.
There has been considerable work from both hard and social scientists refuting arguments such as Brigham’s and Terman’s that racial differences in IQ scores are influenced by biology.
Critiques of such “hereditarian” hypotheses – arguments that genetics can powerfully explain human character traits and even human social and political problems – cite a lack of evidence and weak statistical analyses. This critique continues today, with many researchers resistant to and alarmed by research that is still being conducted on race and IQ.
But in their darkest moments, IQ tests became a powerful way to exclude and control marginalised communities using empirical and scientific language. Supporters of eugenic ideologies in the 1900s used IQ tests to identify “idiots”, “imbeciles”, and the “feebleminded”. These were people, eugenicists argued, who threatened to dilute the White Anglo-Saxon genetic stock of America.
As a result of such eugenic arguments, many American citizens were later sterilised. In 1927, an infamous ruling by the US Supreme Court legalised forced sterilisation of citizens with developmental disabilities and the “feebleminded,” who were frequently identified by their low IQ scores. The ruling, known as Buck v Bell, resulted in over 65,000 coerced sterilisations of individuals thought to have low IQs. Those in the US who were forcibly sterilised in the aftermath of Buck v Bell were disproportionately poor or of colour.
Compulsory sterilisation in the US on the basis of IQ, criminality, or sexual deviance continued formally until the mid-1970s, when organisations like the Southern Poverty Law Center began filing lawsuits on behalf of people who had been sterilised. In 2015, the US Senate voted to compensate living victims of government-sponsored sterilisation programmes.
IQ tests today
Debate over what it means to be “intelligent” and whether or not the IQ test is a robust tool of measurement continues to elicit strong and often opposing reactions today. Some researchers say that intelligence is a concept specific to a particular culture. They maintain that it appears differently depending on the context – in the same way that many cultural behaviours would. For example, burping may be seen as an indicator of enjoyment of a meal or a sign of praise for the host in some cultures and impolite in others.
What may be considered intelligent in one environment, therefore, might not be in others. For example, knowledge about medicinal herbs is seen as a form of intelligence in certain communities within Africa, but does not correlate with high performance on traditional Western academic intelligence tests.
According to some researchers, the “cultural specificity” of intelligence makes IQ tests biased towards the environments in which they were developed – namely white, Western society. This makes them potentially problematic in culturally diverse settings. The application of the same test among different communities would fail to recognise the different cultural values that shape what each community values as intelligent behaviour.
Going even further, given the IQ test’s history of being used to further questionable and sometimes racially-motivated beliefs about what different groups of people are capable of, some researchers say such tests cannot objectively and equally measure an individual’s intelligence at all.
Used for good
At the same time, there are ongoing efforts to demonstrate how the IQ test can be used to help the very communities that have been most harmed by it in the past. In 2002, the US Supreme Court ruled it unconstitutional to execute criminally convicted individuals with intellectual disabilities, who are often assessed using IQ tests. This has meant IQ tests have actually protected individuals from “cruel and unusual punishment” in US courts of law.
In education, IQ tests may be a more objective way to identify children who could benefit from special education services. This includes programmes known as “gifted education” for students who have been identified as exceptionally or highly cognitively able. Ethnic minority children and those whose parents have a low income are under-represented in gifted education.
The way children are chosen for these programmes means that Black and Hispanic students are often overlooked. Some US school districts employ admissions procedures for gifted education programmes that rely on teacher observations and referrals or require a family to sign their child up for an IQ test. But research suggests that teacher perceptions and expectations of a student, which can be preconceived, have an impact on a child’s IQ scores, academic achievement, and attitudes and behaviour. This means that teachers’ perceptions can also affect the likelihood of a child being referred for gifted or special education.
The universal screening of students for gifted education using IQ tests could help to identify children who otherwise would have gone unnoticed by parents and teachers. Research has found that those school districts which have implemented screening measures for all children using IQ tests have been able to identify more children from historically underrepresented groups to go into gifted education.
Identifying such disparities could then help those in charge of education and social policy to seek solutions. Specific interventions could be designed to help children who have been affected by structural inequalities or exposed to harmful substances. In the long run, the effectiveness of these interventions could be monitored by comparing IQ tests administered to the same children before and after an intervention.
Some researchers have tried doing this. One US study in 1995 used IQ tests to look at the effectiveness of a particular type of training for managing Attention Deficit/Hyperactivity Disorder (ADHD), called neurofeedback training. This is a therapeutic process aimed at trying to help a person to self-regulate their brain function. Most commonly used with those who have some sort of identified brain imbalance, it has also been used to treat drug addiction, depression and ADHD. The researchers used IQ tests to find out whether the training was effective in improving the concentration and executive functioning of children with ADHD – and found that it was.
Since its invention, the IQ test has generated strong arguments in support of and against its use. Both sides are focused on the communities that have been negatively impacted in the past by the use of intelligence tests for eugenic purposes.
The use of IQ tests in a range of settings, and the continued disagreement over their validity and even morality, highlights not only the immense value society places on intelligence – but also our desire to understand and measure it.