Racial bias can seem like an intractable problem. Psychologists and other social scientists have had difficulty finding effective ways to counter it – even among people who say they support a fairer, more egalitarian society. One likely reason for the difficulty is that most efforts have been directed toward adults, whose biases and prejudices are often firmly entrenched.
My colleagues and I are starting to take a new look at the problem of racial bias by investigating its origins in early childhood. As we learn more about how biases take hold, will we eventually be able to intervene before any biases become permanent?
Measuring racial bias
When psychology researchers first began studying racial biases, they simply asked individuals to describe their thoughts and feelings about particular groups of people. A well-known problem with these measures of explicit bias is that people often try to respond to researchers in ways they think are socially appropriate.
Starting in the 1990s, researchers began to develop methods to assess implicit bias, which is less conscious and less controllable than explicit bias. The most widely used test is the Implicit Association Test, which lets researchers measure whether individuals have more positive associations with some racial groups than others. However, an important limitation of this test is that it only works well with individuals who are at least six years old – the instructions are too complex for younger children to remember.
Recently, my colleagues and I developed a new way to measure bias, which we call the Implicit Racial Bias Test. It can be used with children as young as age three, as well as with older children and adults, and it assesses bias in a manner similar to the IAT but with different instructions.
Here’s how a version of the test to detect an implicit bias that favors white people over black people would work: We show participants a series of black and white faces on a touchscreen device. Each photo is accompanied by a cartoon smile on one side of the screen and a cartoon frown on the other.
In one part of the test, we ask participants to touch the cartoon smile as quickly as possible whenever a black face appears, and the cartoon frown as quickly as possible whenever a white face appears. In another part of the test, the instructions are reversed.
The difference in the amount of time it takes to follow one set of instructions versus the other is used to compute the individual’s level of implicit bias. The reasoning is that it takes more time and effort to respond in ways that go against our intuitions.
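In computational terms, the scoring is simple: compare the average response time in the two instruction conditions. Here is a minimal sketch in Python — the data and function names are illustrative, not the actual scoring code used in the research:

```python
# Illustrative sketch of response-time bias scoring: the bias score is the
# difference between mean response times in the two instruction conditions.

def mean(times):
    return sum(times) / len(times)

def bias_score(congruent_rts, incongruent_rts):
    """Positive score = slower when the pairing goes against intuition.

    congruent_rts:   response times (seconds) when the instructions match
                     the participant's associations
    incongruent_rts: response times when the instructions are reversed
    """
    return mean(incongruent_rts) - mean(congruent_rts)

# Hypothetical participant who is slower in the reversed condition
congruent = [0.61, 0.58, 0.64, 0.60]
incongruent = [0.75, 0.71, 0.78, 0.74]
print(bias_score(congruent, incongruent))
```

A larger positive difference is read as a stronger implicit bias; in practice, researchers also standardise such scores against each participant's own response-time variability.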
Some studies suggest that precursors of racial bias can be detected in infancy. In one such study, researchers measured how long infants looked at faces of their own race or another race that were paired with happy or sad music. They found that 9-month-olds looked longer when the faces of their own race were paired with the happy music, which was different from the pattern of looking times for the other-race faces. This result suggests that the tendency to prefer faces that match one’s own race begins in infancy.
These early patterns of response arise from a basic psychological tendency to like and approach things that seem familiar, and dislike and avoid things that seem unfamiliar. Some researchers think that these tendencies have roots in our evolutionary history because they help people to build alliances within their social groups.
However, these biases can change over time. For example, young black children in Cameroon show an implicit bias in favor of black people versus white people, as part of a general tendency to prefer in-group members – people who share characteristics with oneself. But this pattern reverses in adulthood, as individuals are repeatedly exposed to cultural messages indicating that white people have higher social status than black people.
A new approach to tackling bias
Researchers have long recognized that racial bias is associated with dehumanization. When people are biased against individuals of other races, they tend to view them as part of an undifferentiated group rather than as specific individuals. Giving adults practice at distinguishing among individuals of other races leads to a reduction in implicit bias, but these effects tend to be quite short-lived.
In our new research, we adapted this individuation approach for use with young children. Using a custom-built training app, young children learn to identify five individuals of another race during a 20-minute session. We found that 5-year-olds who participated showed no implicit racial bias immediately after the training.
Although the effects of a single session were short-lived, an additional 20-minute booster session one week later allowed children to maintain about half of their initial bias reduction for two months. We are currently working on a game-like version of the app for further testing.
Only a starting point
Although our approach suggests a promising new direction for reducing racial bias, it is important to note that this is not a magic bullet. Other aspects of the tendency to dehumanize individuals of different races also need to be investigated, such as people’s diminished level of interest in the mental life of individuals who are outside of their social group. Because well-intended efforts to reduce racial bias can sometimes be ineffective or produce unintended consequences, any new approaches that are developed will need to be rigorously evaluated.
And of course the problem of racial bias is not one that can be solved by addressing the beliefs of individuals alone. Tackling the problem also requires addressing the broader social and economic factors that promote and maintain biased beliefs and behaviors.
<p>Two families go bowling. While they are bowling, they order a pizza for £12, six sodas for £1.25 each, and two large buckets of popcorn for £10.86 each. If they are going to split the bill between the families, <a href="http://www.tests.com/practice/WISC-Practice-Test">how much</a> does each family owe?</p>
<p>Online IQ “quizzes” <a href="https://geniustests.com/">purport</a> to be able to tell you whether “you have what it takes to be a member of the world’s most prestigious high IQ society”.</p>
<p>If you want to boast about your high IQ, you should have been able to work out the answers to the questions. When John is 16 he’ll be twice as old as his brother. The two families who went bowling each owe £20.61. And 49 is the missing number in the sequence. </p>
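For the record, the bowling bill checks out – the quoted £20.61 implies the £10.86 popcorn price is per bucket. A quick sanity check in Python:

```python
# Checking the bowling bill: the quoted answer of £20.61 implies the
# £10.86 for popcorn is per bucket.
pizza = 12.00
sodas = 6 * 1.25          # six sodas at £1.25 each
popcorn = 2 * 10.86       # two buckets at £10.86 each
total = pizza + sodas + popcorn   # £41.22 in all
per_family = total / 2
print(f"£{per_family:.2f}")       # splits to £20.61 per family
```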
<p>Despite the hype, the relevance, usefulness, and legitimacy of the IQ test is still <a href="http://www.jstor.org/stable/1466807">hotly debated</a> among educators, social scientists, and hard scientists. To understand why, it’s important to understand the history underpinning the birth, development, and expansion of the IQ test – a <a href="http://www.jstor.org/stable/799646">history</a> that includes the use of IQ tests to further marginalise ethnic minorities and poor communities. </p>
<p>In the early 1900s, dozens of intelligence tests were developed in Europe and America claiming to offer unbiased ways to measure a person’s cognitive ability. The first of these tests was developed by French psychologist Alfred Binet, who was commissioned by the French government to identify students who would face the most difficulty in school. The resulting 1905 Binet-Simon Scale became the basis for modern IQ testing. Ironically, Binet himself thought that IQ tests were inadequate measures of intelligence, pointing to the test’s inability to properly capture creativity or emotional intelligence.</p>
<p>At its conception, the IQ test provided a relatively quick and simple way to identify and sort individuals based on intelligence – which was and still is highly valued by society. In the US and elsewhere, institutions such as the military and police used IQ tests to screen potential applicants. They also implemented admission requirements based on the results.
<p>The <a href="http://www.jstor.org/stable/367145?seq=1#page_scan_tab_contents">US Army Alpha and Beta Tests</a> screened approximately 1.75m draftees in World War I in an attempt to evaluate the intellectual and emotional temperament of soldiers. Results were used to determine how capable a soldier was of serving in the armed forces and to identify which job classification or leadership position he was best suited to. Starting in the early 1900s, the US education system also began using IQ tests to identify “gifted and talented” students, as well as those with special needs who required additional educational interventions and different academic environments.</p>
<p>Alongside the widespread use of IQ tests in the 20th century was the argument that the level of a person’s intelligence was influenced by their biology. Ethnocentrists and eugenicists, who viewed intelligence and other social behaviours as determined by biology and race, latched onto IQ tests. They held up the apparent gaps these tests revealed between ethnic minorities and whites, or between low- and high-income groups.</p>
<p>Some maintained that these test results provided further evidence that socioeconomic and racial groups were <a href="http://www.jstor.org/stable/20373194">genetically different</a> from each other and that systemic inequalities were partly a byproduct of evolutionary processes. </p>
<h2>Going to extremes</h2>
<p>The US Army Alpha and Beta test results garnered widespread publicity and were analysed by Carl Brigham, a Princeton University psychologist and pioneer of psychometrics, in his 1923 book A Study of American Intelligence. Brigham applied meticulous statistical analyses to argue that American intelligence was declining, claiming that increased immigration and racial integration were to blame. To address the issue, he called for social policies to restrict immigration and prohibit racial mixing.</p>
<p>Lewis Terman, the Stanford psychologist who adapted Binet’s test for American use, made similar claims in his 1916 book The Measurement of Intelligence:</p>
<blockquote><p>High-grade or border-line deficiency … is very, very common among Spanish-Indian and Mexican families of the Southwest and also among Negroes. Their dullness seems to be racial, or at least inherent in the family stocks from which they come … Children of this group should be segregated into separate classes … They cannot master abstractions but they can often be made into efficient workers … from a eugenic point of view they constitute a grave problem because of their unusually prolific breeding.</p></blockquote>
<p>There has been considerable work from both hard and social scientists refuting arguments such as Brigham’s and Terman’s that racial differences in IQ scores are influenced by biology.
<p>But in their <a href="http://www.jstor.org/stable/10.1525/j.ctt1pn5jp">darkest moments</a>, IQ tests became a powerful way to exclude and control marginalised communities using empirical and scientific language. Supporters of eugenic ideologies in the 1900s used IQ tests to identify “idiots”, “imbeciles”, and the “feebleminded”. These were people, eugenicists argued, who threatened to dilute the White Anglo-Saxon genetic stock of America.</p>
<p>As a result of such eugenic arguments, many American citizens were later <a href="https://www.ncbi.nlm.nih.gov/pubmed/3299450">sterilised</a>. In 1927, an infamous ruling by the US Supreme Court legalised forced sterilisation of citizens with developmental disabilities and the “feebleminded”, who were frequently identified by their low IQ scores. The ruling, known as Buck v Bell, resulted in over 65,000 coerced sterilisations of individuals thought to have low IQs. Those who were forcibly sterilised in its aftermath were disproportionately poor or of colour.</p>
<p>Debate over what it means to be “intelligent” and whether or not the IQ test is a robust tool of measurement continues to elicit strong and often opposing reactions today. Some researchers say that intelligence is a concept specific to a particular culture. They maintain that it appears differently depending on the context – in the same way that many cultural behaviours would. For example, burping may be seen as an indicator of enjoyment of a meal or a sign of praise for the host in some cultures and impolite in others.
<p>What may be considered intelligent in one environment, therefore, might not be in others. For example, knowledge about medicinal herbs is seen as a form of intelligence in certain communities within Africa, but does not correlate with high performance on traditional Western academic intelligence tests.</p>
<p>According to some researchers, the “cultural specificity” of intelligence makes IQ tests biased towards the environments in which they were developed – namely white, Western society. This makes them <a href="http://nrcgt.uconn.edu/newsletters/winter052/">potentially problematic</a> in culturally diverse settings. The application of the same test among different communities would fail to recognise the different cultural values that shape what each community values as intelligent behaviour. </p>
<p>Going even further, given the <a href="http://tap.sagepub.com/content/12/3/283">IQ test’s history</a> of being used to further questionable and sometimes racially motivated beliefs about what different groups of people are capable of, some researchers say such tests cannot objectively and equally measure an individual’s intelligence at all. </p>
<h2>Used for good</h2>
<p>At the same time, there are ongoing efforts to demonstrate how the IQ test can be used to help the very communities that have been most harmed by it in the past. In 2002, the US Supreme Court ruled it unconstitutional to execute criminally convicted individuals with intellectual disabilities, who are often assessed using IQ tests. This has meant IQ tests have actually prevented individuals from facing “cruel and unusual punishment” in US courts.</p>
<p>In education, IQ tests may be a more objective way to identify children who could benefit from special education services. This includes programmes known as “gifted education” for students who have been identified as exceptionally or highly cognitively able. Ethnic minority children and those whose parents have a low income are under-represented in gifted education.</p>
<p>The way children are chosen for these programmes means that Black and Hispanic students are often overlooked. Some US school districts employ admissions procedures for gifted education programmes that rely on teacher observations and referrals or require a family to sign their child up for an IQ test. But research suggests that teacher perceptions and expectations of a student, which can be preconceived, have an impact upon a child’s IQ scores, academic achievement, and attitudes and behaviour. This means that teachers’ perceptions can also have an impact on the likelihood of a child being referred for gifted or special education.</p>
<p>The <a href="http://journals.sagepub.com/doi/abs/10.1177/2372732215621310">universal screening</a> of students for gifted education using IQ tests could help to identify children who otherwise would have gone unnoticed by parents and teachers. Research has found that school districts that have implemented screening measures for all children using IQ tests have been able to identify more children from historically underrepresented groups to go into gifted education.</p>
<p>Identifying these issues could then <a href="http://www.sciencedirect.com/science/article/pii/0277953696000287">help</a> those in charge of education and social policy to seek solutions. Specific interventions could be designed to help children who have been affected by these structural inequalities or exposed to harmful substances. In the long run, the effectiveness of these interventions could be monitored by comparing IQ tests administered to the same children before and after an intervention. </p>
<p>Some researchers have tried doing this. One US <a href="https://link.springer.com/article/10.1007%2FBF01712768?LI=true">study in 1995 used IQ tests</a> to look at the effectiveness of a particular type of training for managing Attention Deficit/Hyperactivity Disorder (ADHD), called neurofeedback training. This is a therapeutic process aimed at trying to help a person to self-regulate their brain function. Most commonly used with those who have some sort of identified brain imbalance, it has also been used to treat drug addiction, depression and ADHD. The researchers used IQ tests to find out whether the training was effective in improving the concentration and executive functioning of children with ADHD – and found that it was.</p>
<p>Since its invention, the IQ test has generated strong arguments in support of and against its use. Both sides are focused on the communities that have been negatively impacted in the past by the use of intelligence tests for eugenic purposes.
Indonesian moviegoers have had something to talk about these past two weeks. A top box-office movie by director Joko Anwar, Satan’s Slave, has a hair-raising ghost, called “Ibu” or “Mother”, haunting almost 2 million viewers. The millions were scared of “Ibu”, but I have scary data on Indonesian women – and those ghosts are real.
In the annals of Indonesian folklore, female ghosts take centre stage. The country has kuntilanak, sundel bolong and Si Manis Jembatan Ancol. Most female ghosts in Indonesia were loving mothers or ordinary women before they started haunting the world with dark agendas.
Among the most popular ghosts are kuntilanak and sundel bolong; their narratives are reproduced in pop culture products, most notably movies.
Kuntilanak was a woman who died in childbirth (or died delivering a stillborn, according to another version). Sundel bolong was a woman who was raped and became pregnant, then died in childbirth.
The third one is Si Manis Jembatan Ancol, loosely translated into The Pretty One Haunting Ancol Bridge, referring to Ancol, an area in North Jakarta. Men were said to have raped and killed Si Manis in North Jakarta when she escaped her husband.
A different kind of female ghost, an outlier, is Nyai Roro Kidul, believed to be the ruler of the southern sea of Java, who becomes the mystical wife of each Mataram king.
To know more about the issue of women in Indonesia’s ghost folklore, read the Indonesian fiction collections Sihir Perempuan (Black Magic Woman) and Kumpulan Budak Setan (Devil’s Slaves Club) by author, scholar and feminist Intan Paramaditha.
‘Kuntilanak’: victim of poor access to healthcare
There’s a thread connecting the female ghosts beyond their gender: most of them are victims.
Of course, no scientific evidence supports the existence of these ghosts. But the background story of each ghost shares similar themes. These women were victims of gender inequality and sexual violence. They also had poor access to healthcare.
Meanwhile, Indonesia’s data on sexual violence, experienced by both Si Manis and sundel bolong, are also harrowing. The Central Statistics Bureau surveyed 9,000 women respondents in 2016 and reported that one in three Indonesian women aged 15-64 has experienced physical and/or sexual violence in their lifetime.
But “Ibu”, the woman in the movie set in 1981, would probably have a better fate today compared with her fate then. She was lying helpless for three years without a proper diagnosis. In 2017, at least, she would probably have received a better diagnosis thanks to Indonesia’s universal healthcare, BPJS Kesehatan, implemented since 2014.
Don’t let there be another ‘sundel bolong’
Of course juxtaposing the stories of Indonesian female ghosts with real data is only a way for me to highlight an important issue using folklore and a popular culture product.
But the popular ghosts’ stories reveal the close connections between violence against women and access to healthcare for women in the distant past. As the maternal death rate shows, the state of healthcare for Indonesian women today remains grim.
Perhaps we would not have the story of kuntilanak haunting young mothers and their newborns had more real-life Indonesian women survived childbirth and delivered healthy babies.
Sundel bolong’s unwanted pregnancy, a result of rape, could have been avoided if she received adequate reproductive healthcare. Indonesia has issued a regulation legalising abortion for rape victims, but its implementation remains elusive.
High maternal mortality rate, sexual violence: the real ghosts
The plights of these female ghosts, as told by older generations, serve as a warning about the state of Indonesian women today. The numbers and data should be scary stories for Indonesian women.
Policymakers should pursue systematic changes, or we will forever see more women sharing the plights of sundel bolong, kuntilanak and Si Manis Jembatan Ancol.
If we don’t improve reproductive health services for women and let impunity reign among sexual violence perpetrators, we will continue the legacy of the female ghosts to our next generation. Not only in movies, but in real life as well.
Stereotypes of autistic people perpetuate a myth that they are socially inept. Yet non-autistic people, also known as neurotypicals, display ineptitudes of their own through their susceptibility to manipulation by body language, communication and perceptual tricks. How we learn these signals opens the debate over nature versus nurture in the acquisition of social skills. Who is more socially equipped: the one who is capable of surrounding himself with pretentious body language, or the one who is mindful of her full spectrum of awareness? A neurotypical who communicates with learned body gestures is currently considered evolved, yet the acquisition of those skills is a direct result of an inability to survive otherwise. The autistic person who remains authentic in order to adapt to the current environment is potentially the best equipped to function in society.
The cycle of life requires attracting a mate, reproduction, and adaptations for exploitation to those who threaten…
According to official estimates, the country will need more than 30 billion pesos (around US$2 billion) to rebuild. The resources required for Mexico’s recovery are almost double the country’s annual natural disaster fund, according to World Bank figures.
Manpower, at least, has not been an issue. Search-and-rescue teams from several countries – including Chile, Colombia, Israel, Japan, Panama, the United States and Spain – arrived in the days after the earthquakes to dig survivors out of the rubble. Dozens of foreigners who reside in Mexico also joined the Mexican volunteers in their rescue efforts.
Among these international brigades was a group of undocumented Central American migrants who, interrupting their travel northward to the U.S., stayed in Mexico to help clean up debris and assist the victims.
Their efforts have been largely focused in two of the cities most impacted by the historic Sept. 7 quake, Juchitán and Asunción Ixtaltepec, in Oaxaca. But after the Sept. 19 Mexico City earthquake, some members also volunteered to help dig out survivors from the rubble of the nation’s capital.
The nearly 50 Central American migrants assisting in Oaxaca’s earthquake recovery effort are staying at Hermanos en el Camino (Brothers of the Road), a Catholic-run shelter in the hard-hit Isthmus of Tehuantepec.
Felipe González, a volunteer at the shelter, told me via telephone that after the urgent rescue efforts ended, they have continued their work, distributing aid among those who lost their homes.
The migrants who organized this aid brigade are from Honduras, El Salvador, Nicaragua and Guatemala, and they have diverse backgrounds, but what they have in common – both with each other and with Mexican earthquake victims – is a history of hardship.
According to a May report from Doctors Without Borders, almost 40 percent of the roughly 500,000 Central American immigrants the organization surveyed in Mexico fled their countries after experiencing physical attacks, threats against themselves or their families, extortion or forced gang recruitment.
The Brothers of the Road shelter is located in Ciudad Ixtepec, one of the stops on the main route that Central American immigrants heading north used to follow through Mexico. Normally, the facility serves to provide relief to immigrants who ride atop “La Bestia” – that is, the Beast, the Mexican network of freight trains – to travel to the U.S.
Mexico has also stepped up deportations. In 2014, for example, Mexico “returned” 107,814 migrants, the majority of them from El Salvador, Guatemala and Honduras. In 2015, deportations rose to 181,163. In 2016, it was 159,872.
The Trump administration has kept up the pressure. In a letter sent to Congress and Senate leaders on Oct. 8, the U.S. president requested that the Department of Homeland Security be granted broad powers to assist “partner nations” in “removing aliens from third countries whose ultimate intent is entering the United States.”
Tough border enforcement isn’t the only reason that Central American migrants normally aim to hurry through Mexico under the radar. Nearly one-third of women surveyed by Doctors Without Borders in 2014 had been sexually abused during their journey, and 68 percent of all migrants were victims of violence.
Migrants are among the many victims of Mexico’s drug war. In 2010 and 2011, 265 migrants from Central and South America were murdered by the Zetas cartel in the northern Mexican town of San Fernando, Tamaulipas, just 55 miles from Texas.
The North American dream
Even knowing the dangers presented by both the state and the drug lords, the guests at the Brothers of the Road shelter risked everything to pitch into the rescue effort after the quake that hit Oaxaca and Chiapas, two of the poorest states in Mexico, in September.
“We’re immigrants in search of the American dream,” Denio Okele, an Honduran migrant, explained to NBC News. But, he continued, “we arrived in Oaxaca, and an earthquake occurred. We are thus helping the people who need assistance.”
Their reasons for helping range from solidarity and compassion to gratitude. “We have received a lot support from people, so we want to help them,” Wilson Alonso, also from Honduras, told the Spanish newspaper El País.
The sacrifice of this migrant humanitarian aid team has earned them hero status in Mexico. Like other volunteers who dug their neighbors free from the rubble with their bare hands, they have been lauded on social media and interviewed by reporters. And for once, the legal status of a group of Central Americans was not the story.
As José Filiberto Velásquez, a Catholic priest at the Brothers of the Road shelter, told one Mexican reporter, these migrants have shown Mexicans through their actions that, quite simply, “immigrants are good people.”
Pact of the defeated
The Central American migrants’ story is just one example of the spirit of national solidarity that carried Mexico through the days after the two killer September quakes.
The solidarity on display recalls what Argentinian writer Ernesto Sábato calls “the pact of the defeated.” In a world full of “horror, treason and envy,” Sábato writes in his memoir, “Antes del Fin,” it’s often “the most unprivileged part of humanity” that shows everyone else the path to salvation.
Right now in Mexico, earthquake-impacted locals and undocumented migrants alike are working together to rebuild their futures. In facing the years of hard recovery and U.S. antagonism ahead of it, a “pact of the defeated” may be as good a starting point as any.
In UK universities there are far fewer women in senior posts than men – particularly at professor level. Putting aside teaching, to reach this status, an academic typically needs to have completed a considerable amount of research. Research takes time, and if people want to succeed in academia, they have to apply for funding. This is where one key difference lies.
Women receive less funding than men, and they also apply for smaller grants than their male counterparts. Our study investigated the amount of research funding awarded to male and female study leads across over 6,000 studies related to infectious disease research in UK institutions. Around 75-80% of the funding was awarded to male principal investigators – a huge difference. In addition to the differences in total sums of money, there are also clear differences in the size of the grants secured.
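The underlying tally is straightforward: group grant amounts by the gender of the principal investigator and compare totals. A minimal sketch in Python, using invented records rather than our actual dataset:

```python
# Illustrative sketch of a funding-share tally; the records below are
# invented for demonstration, not the study's data.
from collections import defaultdict

grants = [
    {"pi_gender": "male", "amount": 450_000},
    {"pi_gender": "female", "amount": 120_000},
    {"pi_gender": "male", "amount": 300_000},
    {"pi_gender": "female", "amount": 95_000},
]

# Sum awarded amounts per PI gender
totals = defaultdict(float)
for g in grants:
    totals[g["pi_gender"]] += g["amount"]

overall = sum(totals.values())
for gender, amount in sorted(totals.items()):
    print(f"{gender}: {amount / overall:.0%} of funding")
```

Comparing median or mean grant size per group, rather than just totals, is what reveals the second pattern mentioned above: differences in the size of individual awards.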
It’s a Catch-22
So what’s the barrier to women getting funding? It’s unlikely to be widespread gender bias from the funders themselves. One of the most famous papers that did highlight clear biases in this area was a 1997 article published in Nature which pulled no punches in highlighting the problem in the peer review process of the Swedish Medical Research Council.
But this analysis is now 20 years old and does seem to be an outlier in an increasing pool of evidence. Most other analyses suggest there is no observable gender bias on the part of the research funders. For example, evidence reported by the Foundation for Science and Technology suggested there is no significant difference in the proportions of successful grant applications led by male and female researchers from the major UK funders, such as the Wellcome Trust and the Medical Research Council.
So why are women getting less by way of grant amounts? With seniority comes big bucks. The more senior the person applying, the bigger the grant they are likely to be requesting. But with fewer senior women out there to apply for something big, it’s a Catch-22.
There are initiatives within, and involving, universities that may help. The Athena Swan programme encourages institutions to consider inequalities and disadvantaged groups, and often focuses on the issues surrounding women in science. There is some evidence to suggest it is having a positive effect. The National Institute for Health Research (NIHR), one of the major UK funders, now insists that university departments and faculties must have at least a silver award from Athena Swan to be eligible to apply for their funding streams. Recipients of an Athena award have demonstrated through work practices and workplace philosophy their commitment to gender equality and supporting women in STEM careers.
There is also an interesting clause in the guidance of the NIHR autumn 2017 call for research professorships (a prestigious and significant award in the career of any aspiring health researcher). Institutions can put forward a maximum of two candidates, and at least one of the two candidates must be female.
I am not aware of other major research funders yet taking a similar approach (though they may do). It would be interesting to hear their views. As universities are increasingly strapped for cash, research income is important, so no doubt many faculties would be happy to jump through hoops to be eligible for all funding streams from the big players.
Still a man’s world?
Funding applications aside, there are good reasons for female academics to be disheartened about their chances of competing on a level playing field. A 2012 US-based study revealed how identical CVs with a male name at the top were favoured over those with a female name. Then there is the evidence that female lecturers are rated lower than their male counterparts by students, without there being any obvious difference in the standard of their teaching. It takes an extra level of tenacity and determination for a woman to make it to the top in a world that is naturally skewed towards men.
There are many additional factors that come into play as to why there are clear differences between the careers of men and women in an academic environment. Digital Science’s new report, Championing The Success of Women in Science, Technology, Engineering, Maths, and Medicine (STEM), explores many of these issues from a range of perspectives, as well as considering other areas where inequality is a problem. It also examines potential ways forward, including the use of mentors, feedback from the academic community and cultural changes that help more women into senior roles.
But what is very evident is that higher education institutions can prioritise the promotion of equality and still be successful in keeping their heads above water during the ongoing storm of funding cuts, Brexit and general political disdain towards experts.
This laughable 2012 video by the European Commission to encourage teenage girls to take an interest in science underscores the kind of problems that exist in the way women are perceived in terms of science. There was some furious backpedalling by its creators soon after its release, but it is shocking to think it got approved in the first place. But at least its desperately hackneyed approach lays bare some of the sexist, outdated and demeaning attitudes that women have to endure in male-dominated environments.
British weather isn’t much to write home about. The temperate maritime climate makes for summers which are relatively warm and winters which are relatively cold. But despite rarely experiencing extremely cold weather, the UK has a problem with significantly more people dying during the winter compared to the rest of the year. In fact, 2.6m excess winter deaths have occurred since records began in 1950 – that’s equivalent to the entire population of Manchester.
Although the government has been collecting data on excess winter deaths – that is, the difference between the number of deaths that occur from December to March compared to the rest of the year – for almost 70 years, the annual statistics are still shocking. In the winter of 2014/15, there were a staggering 43,900 excess deaths, the highest recorded figure since 1999/2000. In the last 10 years, there has only been one winter where fewer than 20,000 excess deaths occurred: 2013/14. Although excess winter deaths have been steadily declining since records began, in the winter of 2015/16 there were still 24,300.
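The measure described above can be sketched as a simple calculation. This is a minimal illustration of the definition given in the text (winter deaths from December to March, compared with the average of the two flanking four-month periods); the exact official ONS methodology may differ in detail, and the totals used here are invented, not real mortality data:

```python
def excess_winter_deaths(winter: float, pre_winter: float, post_winter: float) -> float:
    """Excess winter deaths, per the definition in the text:
    deaths in the four winter months (Dec-Mar) minus the average of
    the two flanking four-month periods (Aug-Nov and Apr-Jul)."""
    return winter - (pre_winter + post_winter) / 2

# Illustrative (invented) four-month death totals:
print(excess_winter_deaths(winter=190_000, pre_winter=165_000, post_winter=167_000))
# → 24000.0
```

On these invented figures the excess is 24,000 – the same order of magnitude as the 24,300 recorded for winter 2015/16.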
According to official statistics, respiratory disease is the underlying cause for over a third of excess winter deaths, predominantly due to pneumonia and influenza. About three-quarters of these excess respiratory deaths occur in people aged 75 or over. Unsurprisingly, cold homes (particularly those below 16°C) cause a substantially increased risk of respiratory disease and older people are significantly more likely to have difficulty heating their homes.
Health and homes
The UK is currently in the midst of a housing crisis – and not just due to a lack of homes. According to a 2017 government report, a fifth of all homes in England fail to meet the Decent Homes Standard – which is aimed at bringing all council and housing association homes up to a minimum level. Despite the explicit guidelines, an astonishing 16% of private rented homes and 12% of housing association homes still have no form of central heating.
Even when people have adequate housing, the cost of energy and fuel can be a major issue. Government schemes, such as the affordable warmth grant, have been implemented to help low income households increase indoor warmth and energy efficiency. However, approximately 2.5m households in England (about one in nine) are still in fuel poverty – struggling to keep their homes adequately warm due to the cost of energy and fuel – and this figure is rising.
Poor housing costs the NHS a whopping £1.4 billion every year. Reports indicate that the health impact of poor housing is almost on a par with that of smoking and alcohol. Clearly, significant public health gains could be made through high quality, cost-effective home improvements, particularly for social housing. Take insulation, for example: evidence shows that properly fitted and safe insulation can increase indoor warmth, reduce damp, and improve respiratory health, which in turn reduces work and school absenteeism, and use of health services.
Warmth on prescription
In our recent research, we examined whether warmer social housing could improve population health and reduce use of NHS services in the northeast of England. To do this, we analysed the costs and outcomes associated with retrofitting social housing with new combi-boilers and double glazed windows.
After the housing improvements had been installed, NHS service use costs reduced by 16% per household – equating to an estimated NHS cost reduction of over £20,000 in just six months for the full cohort of 228 households. This reduction was offset by the initial expense of the housing improvements (around £3,725 per household), but if these results could be replicated and sustained, the NHS could eventually save millions of pounds over the lifetime of the new boilers and windows.
The benefits were not confined to NHS savings. We also found that the overall health status and financial satisfaction of main tenants significantly improved. Furthermore, over a third of households were no longer exhibiting signs of fuel poverty – households were subsequently able to heat all rooms in the home, where previously most had left one room unheated due to energy costs.
Perhaps it is time to think beyond medicines and surgery when we consider the remit of the NHS for improving health, and start looking into more projects like this. NHS-provided “boilers on prescription” have already been trialled in Sunderland with positive results. This sort of cross-government thinking promotes a nuanced approach to health and social care.
We don’t need to assume that the NHS should foot the bill entirely for ill health related to housing. The Treasury, for instance, could establish a cross-government approach by investing in housing to simultaneously save the NHS money. A £10 billion investment into better housing could pay for itself in just seven years through NHS cost savings. With a growing need to prevent ill health and avoidable death, maybe it’s time for the government to think creatively right across the public sector, and adopt a new slogan: improving health by any means necessary.
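The seven-year payback claim is consistent with the figures cited earlier. A rough back-of-envelope check, assuming (optimistically) that the investment recovers the full £1.4 billion that poor housing is said to cost the NHS each year:

```python
# Back-of-envelope check of the payback claim, using figures from the text.
# Assumes the full annual NHS cost of poor housing is recovered every year,
# which is an optimistic simplification.
investment = 10e9          # £10 billion housing investment
annual_nhs_saving = 1.4e9  # £1.4 billion NHS cost of poor housing, per year

payback_years = investment / annual_nhs_saving
print(round(payback_years, 1))  # → 7.1
```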
When humanitarian emergencies flare up, what should prompt the U.S. government to “send in the Marines”?
Disasters like Hurricane Harvey’s floods in Houston and Hurricane Maria’s devastation of Puerto Rico’s roads and power grid can quickly overwhelm civilian authorities and emergency responders. Military support can make a life-or-death difference in those emergencies.
As scholars at the U.S. Naval War College and Harvard Humanitarian Initiative, we have seen that the military can have a profound and positive impact on the immediate response to large-scale disasters such as Hurricanes Harvey, Irma and Maria or the Haiti earthquake in 2010.
But soldiers, sailors, marines and aviators are primarily trained to fight, not feed disaster victims. Deploying them for humanitarian duties typically costs far more than relying on civilian responders. Does their muscle actually go to good use?
Why deploy the military
Nonprofits like the Red Cross and government agencies like FEMA simply don’t have the equipment required following disasters like the one unfolding in Puerto Rico – where millions of people may lack power and clean drinking water for months.
At the same time, many military personnel also report that aid missions are good for morale, as countless service members take pride in doing disaster relief.
Having soldiers or sailors airlift people from their flooded homes or distribute hot meals is also great public relations at a time when the U.S. military is engaged in several unpopular and protracted conflicts abroad.
While military missions can fill critical gaps in response to large-scale natural disasters like Hurricanes Harvey, Irma and Maria, there are also significant limits to the military’s ability to jump in.
Under a law known as the Stafford Act of 1988, the Department of Homeland Security may request military assistance as a last resort in major disasters and emergencies.
These restrictions have loosened up a little since the 9/11 terrorist attacks, granting the military and National Guard more leeway to support domestic counterterrorism operations. These changes made it easier for the military and National Guard to respond to the recent hurricanes.
But there are no such legal restrictions on how the U.S. military may respond to foreign disasters, as long as host governments request help or consent to it.
A common call
According to the Center for Naval Analyses, a federally funded defense research center, the U.S. military diverted units from “routine” operations to conduct humanitarian assistance operations 366 times from 1970 to 2000, compared with 22 times for combat missions.
Since 2000, the U.S. armed forces have conducted many massive humanitarian operations around the globe, such as responding to the 2004 Indian Ocean earthquake and tsunami and the 2015 Nepal earthquake, as well as Superstorm Sandy and Hurricane Katrina at home.
Given how frequently the military undertakes these missions, preparing for them should be a high priority. But that is not the case. With few notable exceptions, soldiers, sailors, marines and aviators spend little if any time training for disaster-response strategies, tactics, policies and procedures.
According to estimates by Aruna Apte at the Naval Postgraduate School and Keenan Yoho at Rollins College, the U.S. spent more than $17 million just to operate a single aircraft carrier nearby for 17 days – not counting personnel costs.
Aircraft carriers are essentially floating airfields that make it easier to access otherwise impossible-to-reach areas, facilitating evacuations. Although they can dispatch critical food, water and medicine, there are usually better ways to deliver aid after disasters.
Despite the big price tag, military involvement in disaster relief is bound to grow. That’s because global humanitarian organizations are already stretched thin by competing needs. Conflict-driven migration is growing, and severe storms are becoming more common as a result of climate change – along with the higher sea levels scientists say it is causing.
Women’s role in science has been hotly debated and discussed in recent decades. Policy-oriented and scholarly studies have explored a range of topics on the issue. From girls’ participation in science, technology, engineering and mathematics (STEM); to how women are represented and perform in STEM occupations and women’s access to technologies – it’s all been studied.
But only one study has examined women’s representation and participation in national science academies. This silence is ironic. These academies honour scientific excellence and synthesise scientific findings to support evidence-based policymaking. This means they are well placed to contribute towards strengthening their countries’ national innovation systems. They can advocate to get more girls and women participating in STEM, and advise on system-wide application of the gender lens in research and innovation.
So one of the first steps, surely, would be for academies to address their own gender gaps. But there’s a data problem. Academies simply don’t know how they’re doing when it comes to the representation of women compared to their counterparts within the science-policy environment. So they’re unable to monitor their progress.
A common message emerged from our research: with one or two notable exceptions, women are massively underrepresented in national science academies compared to their male peers.
Women in the minority
The information was gathered through two separate but related online surveys during 2014 and 2015.
The InterAmerican Network of Academies of Sciences surveyed the partnership’s 19 national science academies in North America, Latin America and the Caribbean. The South African academy surveyed 84 academies in the other world regions: Africa, the Middle East and Central Asia, South Asia, South East Asia and the Pacific, Western and Northern Europe, South Eastern Europe and Central and Eastern Europe.
There was a response rate of 63%: 65 of the InterAcademy Partnership’s 103 national academies provided us with data. A full table of the data is available in this article published in the South African Journal of Science.
The Cuban Academy of Sciences (27%) and the Caribbean Academy of Sciences (26%) had the highest representation of women in their membership. A “member” was taken to mean any person elected into the academy. The national science academies of Mexico, Nicaragua, Peru, Uruguay, Honduras and Canada also featured on the list of the top 10 academies with the largest shares of women members – between 16% and 23%.
In Africa, meanwhile, women comprise on average 10% of academy members. The Academy of Science of South Africa is the only academy on the continent that ranks among the top five organisations for women membership (24%). The Uganda National Academy of Sciences was second in Africa (13%), followed by the academies of Ghana and Cameroon (both 11%).
The average share of women members, across all 63 national science academies that responded, is 12%.
More women in governance
Interestingly, women fared better when it came to national science academies’ governing bodies. Here the average was 20%. In Africa, the Academy of Science of South Africa recorded the largest share of women in academy governance (31%).
It’s not clear why and at this stage we can only speculate about possible reasons. For instance, there could be a general recognition among academies that women need greater representation. A logical first step would be to include those already elected into the academy in the governing body. An equally plausible hypothesis is that women volunteer their time more readily.
The Academy of Science of South Africa arm of the survey also asked whether academies had either a committee to address gender or diversity issues, or at least someone to advise on them. The answer was “no” from 61% of academies. A third – typically academies with a larger share of women in their membership, specifically in North and Latin America – had a dedicated committee. The remaining 6% of academies relied on individuals’ input and guidance.
We would have liked to obtain more data. But we believe the number and spread of participating academies provide a good base for future surveys. Based on the data, we propose several recommendations for the InterAcademy Partnership and its affiliated academies.
Recommendations and unanswered questions
First, member academies should annually collect, analyse and report gender-disaggregated data. This should then be published in the partnership’s annual report. The document can then be used to discuss the gender dimensions of its membership activities. It’s also important for member academies to establish permanent organisational structures related to gender. These can provide strategic direction and implement the academy’s gender mainstreaming activities.
Several aspects of women’s representation in science weren’t explored in this study. How much of a role does unconscious bias play in the election or selection of academy members? Are the criteria for membership limiting women’s chances? What about socio-cultural aspects? Many cultures have male and female work spheres, confine girls to less valued “women’s work” and underestimate women’s intellectual and technological capacities.
This bias can be replicated in the processes of nomination, evaluation and selection of women and men – for research grants, fellowships and prizes, for example – all key aspects that contribute to building the scientific excellence associated with honorific recognition of an individual by an academy of science.
These are important questions and issues. Further qualitative research will help to engage with the unsettling narrative which emerged from the data in our study.
Once known as multiple personality disorder, dissociative identity disorder remains one of the most intriguing but poorly understood mental illnesses. Research and clinical experience indicate people diagnosed with the condition have been victims of sexual abuse or other forms of criminal mistreatment.
Media references to dissociative identity disorder are also often highly stigmatising. The recent movie Split depicted a person with the condition as a psychopathic murderer. Even supposedly factual reporting can present people with dissociative identity disorder as untrustworthy and prone to wild fantasies and false memories.
But research hasn’t found people with the disorder are more prone to “false memories” than others. And brain imaging studies show significant differences in brain activity between people with dissociative identity disorder and other groups, including those who have been trained to mimic the disorder.
Dissociative identity disorder comes about when a child’s psychological development is disrupted by early repetitive trauma that prevents the normal processes of consolidating a core sense of identity. Reports of childhood trauma in people with dissociative identity disorder (that have been substantiated) include burning, mutilation and exploitation. Sexual abuse is also routinely reported, alongside emotional abuse and neglect.
In response to overwhelming trauma, the child develops multiple, often conflicting, states or identities. These mirror the radical contradictions in their early attachments and social and family environments – for instance, a parent who swings unpredictably between aggression and care.
These states display marked differences in a person’s behaviour, recollections and opinions, and ways of engaging with the world and other people. The person frequently experiences gaps in memory or difficulties recalling events that occurred while they were in other personality states.
The manifestations of these symptoms are subtle and well concealed for most patients. However, overt symptoms tend to surface during times of stress, re-traumatisation or loss.
People with the condition typically have a number of other problems. These include depression, self-harm, anxiety, suicidal thoughts, and increased susceptibility to physical illness. They frequently have difficulties engaging in daily life, including employment and interactions with family.
This is, perhaps, unsurprising, given people with dissociative identity disorder have experienced more trauma than any other group of patients with psychiatric difficulties.
Dissociative identity disorder is a relatively common psychiatric disorder. Research in multiple countries has found it occurs in around 1% of the general population, and in up to one fifth of patients in inpatient and outpatient treatment programs.
Trauma and dissociation
The link between severe early trauma and dissociative identity has been controversial. Some clinicians have proposed dissociative identity disorder is the result of fantasy and suggestibility rather than abuse and trauma. But the causal relationship between trauma and dissociation (alterations of identity and memory) has been repeatedly shown in a range of studies using different methodologies across cultures.
People with dissociative identity disorder are generally unresponsive to (and may deteriorate under) standard treatment. This may include cognitive behavioural treatment, or exposure therapy for post-traumatic stress disorder.
Phase-orientated treatment has been shown to improve dissociative identity disorder. This involves stages (or phases) of treatment, from an initial focus on safety and stabilisation, through to containment and processing of trauma memories and feelings, to the final phase of integration and rehabilitation. The goal of treatment is for the person to move towards better engaging in life without debilitating symptoms.
Critics have pointed to poor therapeutic practice causing dissociative symptoms as well as false memories and false allegations of abuse. Some are particularly concerned therapists are focused on recovering memories, or encouraging patients to speculate that they have been abused.
A recent literature analysis concluded that criticisms of dissociative identity disorder treatment are based on inaccurate assumptions about clinical practice, misunderstandings of symptoms, and an over-reliance on anecdotes and unfounded claims.
Dissociative identity disorder treatment is frequently unavailable in the public health system. This means people with the condition remain at high risk of ongoing illness, disability and re-victimisation.
The underlying cause of the disorder, which is severe trauma, has been largely overlooked, with little discussion of the prevention or early identification of extreme abuse. Future research should not only address treatment outcomes, but also focus on public policy around prevention and detection of extreme trauma.
If this article has raised concerns for you or anyone you know, call Lifeline 13 11 14, Suicide Call Back Service 1300 659 467 or Kids Helpline 1800 55 1800.
Okay, you may also need to find an aunt who can lend you money to start a business, if you want to follow in his footsteps. But it’s often best to follow your own path and, these days, a credit card can also sometimes be used to start up a business. Much riskier than a loan from an aunt, of course.
Welfare reform and austerity in the UK has led to reductions in public spending on services that support older people. Age UK has highlighted how nearly one million older people have unmet social care needs. This is of particular concern as the winter months approach.
In ongoing research on food insecurity in older age, my colleagues and I have analysed survey data and interviewed older people who use foodbanks. We’re finding that many older people are at risk of under-nutrition because of poverty, or because they don’t get the support they need to shop, cook and eat.
While many older people have been less affected by the recent recession than other age groups, in part because of the triple lock protection for pensions, poverty can persist in old age. Data from 2015 shows that 1.6m pensioners live below the relative poverty line, and 8% of pensioners are in persistent poverty – defined as having spent three years out of any four-year period in a household with below 60% of median income.
Poverty and social isolation
Around 20% of older people have little or no private pension, housing or material wealth and retiring with debt is also a growing problem. There are 3.8m people aged 65 and older living alone in the UK and evidence from Age UK suggests that nearly one million people in this age group always or often feel lonely.
Older people living alone tend to eat less. This can lead to under-nutrition – a major cause of functional decline among older people. It can lead to poorer health outcomes, falls, delays in recovery from illness and longer periods in hospital, including delayed operations.
Evidence from the National Nutrition Screening Survey suggests that an estimated 1.3m people aged over 65 in the UK are not getting adequate protein or energy in their diet. On admission to hospital, 33% of people in this age group are identified as being at risk of under-nutrition.
Data we are analysing from the 2014 English Longitudinal Study of Ageing suggests that for around 10% of people aged 50 and over, “too little money stops them buying their first choice of food items”, and this has increased consistently since 2004. Evidence from the Poverty and Social Exclusion Survey in 2012 found that 12% of people aged over 65 had often or sometimes “skimped on food so others in the household would have enough to eat”.
Embarrassment and stigma
The Health Survey for England consistently highlights the issue of unmet need among some older people. For example, 6% of people aged over 65 reported that they had not received help from anyone with shopping for food in the last month. In addition, 19% of this age group reported needing help to leave their home.
Evidence suggests that as food insecurity has increased in the UK, many older people have become reliant on food banks. In 2016, the food redistribution charity FareShare said that 13% of its clients were aged over 65.
Our interviews with older people using food banks have highlighted the challenges many older people can face. Some were having food parcels delivered by the food banks as they were unable to go themselves or did not want to be seen going.
Embarrassment and stigma were also a concern for one 69-year-old man who told us how he preferred coming to the food bank to asking family or friends for help. “I don’t believe in asking others, I don’t want to upset people,” he said. Another 65-year-old man told us: “My family would help but I don’t like to ask them, they have their own families to look after.” Others, however, are either unable or too embarrassed to visit a food bank.
Food or warmth
One 54-year-old man said: “I can go for a couple of days without food… the gas is cut off and I get hot water from the kettle to wash.” There was also evidence that some older people were not fully recognising their nutritional needs. As one 60-year-old woman said: “When you are on your own… sometimes I don’t cook, depends how I feel.” Another 65-year-old man revealed his poor diet, stating that when he had no food he would “just eat cornflakes”.
Other people chose to cut back on food during the winter due to the costs of heating their home – suffering the cold as a result. As one 72-year-old woman stated: “Sometimes I just go without putting the heating on.”
An increasing number of older people are constrained in their spending on food, many are skipping meals and are not getting the social care support they need. Emergency food parcels are an inadequate and unsustainable way of addressing the issue of food insecurity.
There are currently 10m people in the UK aged over 65, but this is expected to increase to 19m by 2050 – that’s one in every four people.
As the size of the older population continues to grow, the reductions in local authority spending on social care raise concerns about their long-term welfare. Given the follow-on costs to the public purse, including in terms of healthcare, the government must do more to combat food insecurity amongst older people.
Then I decided that it was better to be for things than against things. More positive.
But you can’t be for the safety and well-being of children if you don’t also fight child abuse, which means being against its acceptance in some circles and cultures. (As expressed, for instance, by a recent decision in Britain that child abuse victims by definition “consented” to their abuse if they were living in the same house as the abuser, and other nasty nonsense like that.)
Similarly, you can’t be for the creation of a better future if you’re not also against its destruction.
You can’t be for human rights for every human being if you’re not also against the taking away or diminishment of human rights of some people by some people (such as in the case of that abused apprentice who had the misfortune of working at a business with an approved abuse culture).
I see that now.
I am redefining myself as fiercely anti-abuse (etc) first and fiercely pro-flourishing (etc) second.
That is probably what I already was when I started out. I don’t like feeling angry, however. So I tend to avoid anger and tend to see it as something negative. But you can’t accomplish a thing in the world without anger. Ultimately, anger is what makes the world go round. Anger for instance makes people fight against (the effects of) abusive people in power, like Donald Trump, and fight for a better world.
Anger pushes people out of complacency and opens their eyes. And then it makes them decide to do something about what caused the anger and fight for what becomes possible without it. Anger makes people start food banks and raise funds for medical treatments in the presence of failing governments and corrupt politicians.
Anger is a tool that you can learn to use. The first step in that learning process is to stop avoiding and suppressing it so that you see how you can actually use it constructively. Anger makes people stop waffling and whining and begin to act instead. Anger is empowering. It is powerful.
Anger can therefore be very destructive (particularly if you suppress it and allow it to fester). That is the risk inherent to anger, and part of the reason why most people try to avoid it (and also why it’s generally seen as acceptable for men but not for women).
That’s why you have to tie it to something else. Compassion, for example. Anchor it.
See, when you get angry, you have a choice. That choice is whether to let the anger make you act for good or act for bad. Whether to make a cake to throw into a politician’s face or to make soup to hand out to strangers on a cold street. Whether to start a mud-slinging campaign on Twitter against some public figure or start a fund-raising campaign for someone’s medical treatment, or heck, sponsor the pill for an American woman.
An example of fighting for justice and against child abuse:
wow the gaurdian,the times news papers,not calling me a fantasist anymore after conifer report
However, “Even when people are unhappy with a state of affairs, they are usually disinclined to change it. In my area of research, the cognitive and behavioral sciences, this is known as the ‘default effect’,” wrote Musa al-Gharbi in May in US News on the likely reelection of Donald Trump. Today, the same prediction was made by a different medium.
People generally dislike taking responsibility. They don’t like stepping up. This is often connected to risk aversion. So they are angry, but don’t do anything with their anger. That causes stress.
Stepping up does not have to mean getting your face into the newspapers because of something you did or proclaiming that you want to rule the world. It does not have to involve huge risks. Stepping up can be as simple as driving your neighbor to the supermarket and back.
So to use anger, you have to look at your possibilities. If you don’t have a car, you can’t drive someone else to the supermarket. And I, for example, don’t have the power to vote against Trump or against Theresa May. So what can I do? And what can you do? Looking into that can force you to take other steps. Empowering steps. Steps that enable you to do something instead of nothing.
This article was co-written by Adeline Lacroix, who works with Fabienne Cazalis and was recently diagnosed with Asperger syndrome. A second year master’s student in psychology, she is working on a scientific literature review about the characteristics of high-functioning autistic women.
Let’s call her Sophie. The description we’ll give could be that of any woman who is on the autistic spectrum without knowing it. Because they’re intelligent and used to compensating for communication impediments they may not be consciously aware of, these women slip through the cracks of our still-too-inefficient diagnostic procedures.
Studies reveal one woman for every nine men is diagnosed with so-called “high-functioning” autism, that is, autism without intellectual disability. If we compare this to the one woman for every four men diagnosed with the more readily identified “low-functioning” autism, we can easily imagine many autistic women are left undiagnosed.
Today, Sophie, who lives in France, has a job interview. If you could see her nervously twisting her hair, you might think she’s anxious, like anyone would be in the circumstances. You would be wrong. Sophie is actually on the verge of a panic attack. At 27, she just lost her job as a salesperson due to repeated cash-register mistakes – and it’s the eighth time in the last three years. She loved maths at university and is deeply ashamed. She hopes the person hiring will not bring up the subject – she has no justification for her professional failures and knows that she is incapable of making one up.
Learning accounting by herself at home
Sophie’s wish is granted: the interviewer asks her instead about her time at university. Relieved, she happily launches into an explanation of her master’s thesis on meteorological modelling, but he cuts her off abruptly, clearly irritated. He wants to know why she is applying for a temporary job as an accounting assistant when she has no experience or training. Although her heart is racing wildly, Sophie manages to keep her composure, explaining that she taught herself accounting at home in the evenings. She describes the excellent MOOC (online course) she found on the website of the French Conservatoire National des Arts et Métiers, and tells him how one of the questions she asked the teacher on the forum led to a fascinating debate on the concept of depreciation expenses.
Sophie is not good at guessing what people are thinking, but she understands from the way the man is staring at her that he believes she is lying. Overwhelmed, she feels weaker by the minute. She watches his lips move but does not understand what he’s saying. Ten minutes later she’s in the street, with no memory of how the interview ended. She is shaking and holding back tears. She curses herself, wondering how anyone could be so stupid and pathetic.
She climbs into a crowded bus, swaying under the heavy odours of perfumes worn by those pressed up around her. When the bus brakes suddenly, she loses her balance and bumps into a fellow passenger. She apologises profusely and hurriedly gets off. In her rush, she trips again and falls to the pavement. “I must get up, everyone is looking,” she thinks, but her body refuses to obey. She can no longer see properly and doesn’t even realise her own tears are blinding her. Someone calls an ambulance. Sophie wakes up in a psychiatric facility. She will be misdiagnosed with a psychological disorder and given medication that will solve none of her problems.
A unique way of thinking, a taste for solitude, intense passions
Sophie’s story is typical of the chaotic lives led by women whose autism remains undiagnosed because they are on that part of the spectrum where the signs are less obvious. In spite of her impressive cognitive capacities – like the ability to teach herself a totally new field of knowledge – Sophie has no idea of her own talents, and neither do those around her, or only rarely. Trapped in a social environment highly critical of what makes her unique, such as her unusual way of thinking, taste for solitude, and the intensity of her passions, Sophie is acutely aware that these are seen as shortcomings.
If Sophie could be given the correct diagnosis of high-functioning autism, she would at last understand the way her mind works. She could meet other autistic adults and learn from their experience to help her overcome her own difficulties.
Autism is characterised by social and communicative difficulties, specific interests that people with autism are capable of speaking about for hours (like meteorological modelling, in Sophie’s case), and stereotyped behaviours. There are also differences in perception, such as hypersensitivity to smells or sounds, or, conversely, reduced sensitivity to pain. Autism is thought to affect around one in one hundred people.
Around 70% of people with autism have normal or superior intelligence. This form of autism is generally referred to as high-functioning autism. In the latest version of the “bible” of psychiatric disorders, the DSM-5 (Diagnostic and Statistical Manual of Mental Disorders), all reference to older categories has been removed, including Asperger syndrome. The term Asperger’s is still used today in some countries, however, even though all types of autism are now grouped under a single spectrum and classified according to the severity of symptoms.
Appropriate support throughout schooling
Ideally, Sophie would have been diagnosed as a child. She could have benefited from specialised support throughout her schooling, as is legally required in France and other countries. This support would have made her less vulnerable, giving her the tools to defend herself from bullying in the schoolyard and helping her learn with teaching methods adapted to her way of thinking. Upon leaving school, her diagnosis would have opened up access to labour rights, such as disabled worker status, which would have helped her find suitable employment. Sophie’s life would have been simpler and she would be more at peace with herself.
But Sophie’s problems are twofold. Not only is she autistic, but she’s also a woman. If getting a diagnosis is already tricky for men, it’s even more difficult for women. Originally, autism was thought to only rarely affect women. This erroneous idea, which emerged from a 1943 study conducted by Léo Kanner (the first psychiatrist to describe the syndrome), has been reinforced by the long-dominant psychoanalytical approach. The criteria defining autistic symptoms were based on observations in boys.
Later, when science replaced psychoanalysis as the dominant model, studies were largely conducted on male children, thus reducing the chances of recognising autism as it’s manifested in females. This phenomenon, also present in other areas of science and medicine, has far-reaching implications today.
Similar test results for boys and girls
To diagnose autism spectrum disorder (ASD), doctors and psychologists evaluate quantitative criteria using tests and questionnaires, but also qualitative criteria, such as interests, stereotyped movements, difficulties with eye contact and language, and isolation. But while autistic girls show similar test results to autistic boys, the clinical manifestation of their condition differs, at least in cases where language has been acquired.
With social-imitation strategies, for example, autistic girls have less trouble making friends than autistic boys; they have seemingly more ordinary interests than boys (for example horses, rather than maps of the subway); while less restless than boys, they are more vulnerable to less-visible anxiety disorders, and more adept at camouflaging their stereotyped and soothing ritual behaviours. In other words, their autism is less obtrusive, which means their symptoms are less obvious to their families, teachers and doctors.
Biology and environment explain these differences, and in this case it’s impossible to separate nature from nurture. On the nature side of the argument, some hypothesise that girls are better equipped for social cognition and more suited to caring roles. This would explain why they appear to be more interested in the animate (cats, celebrities, flowers) than the inanimate (cars, robots, rail networks).
When it comes to nurture, girls and boys are not brought up in the same way. Socially acceptable behaviours differ according to sex. Although autistic children are more resistant to this phenomenon, the pressure to conform is so strong it still ends up influencing their behaviour, as illustrated by the case of Gunilla Gerland. As a girl, this Swedish woman didn’t want to wear rings or bracelets because she hated the way metal felt on her skin. Observing that adults could not fathom that a little girl might not like these things, she resigned herself to getting gifts of jewellery, and even learned to thank the giver, before stashing the object away in a box at the earliest opportunity.
Skilled in the art of camouflage
As autistic girls grow up, the gap widens between how their condition manifests and how it manifests in boys. As adults, some autistic women can become highly skilled in the art of camouflage, which explains the use of the term “invisible disability” to describe certain types of high-functioning autism. Incidentally, this is the meaning of the title of Julie Dachez’s 2016 graphic novel, The Invisible Difference (Delcourt).
More and more women are discovering their condition later in life and sharing their experience. Since September 2016, the Francophone Association of Autistic Women (Association francophone des femmes autistes, or AFFA) has been fighting for recognition of the specific ways autism manifests in women. A learned society on autism in women is also being created in France, bringing together the general and scientific communities, with the goal of promoting dialogue between researchers and autistic women.
A specific questionnaire for girls
Historically, major figures in autism research suspected a significant prevalence in women. The Austrian Hans Asperger (for whom the syndrome is named) put forward the idea as early as 1944, as did British psychiatrist Lorna Wing in 1981. But it’s only in recent years that the scientific community has really started examining the evidence.
Some researchers aim to better understand the specific characteristics of autism in women. Since the beginning of this year, volunteers have been invited to participate in a study on “autism in women” conducted by Laurent Mottron, a professor in the department of psychiatry at the University of Montreal (Canada), and Pauline Duret, a doctoral student in neuroscience, in collaboration with myself and Adeline Lacroix, working at the École des Hautes Études en Sciences Sociales (EHESS) in Paris (France). Adeline Lacroix is a master’s student in psychology and has herself been diagnosed with autism.
Other studies are attempting to adapt diagnostic tools for use with female subjects. A team made up of Australian scientists Sarah Ormond, Charlotte Brownlow, Michelle Garnett, and Tony Attwood, and Polish scientist Agnieszka Rynkiewicz, is currently perfecting a specific questionnaire for young girls, the Q-ASC (“Questionnaire for autism spectrum conditions”). They presented their work in May 2017 at a conference in San Francisco.
While there has been an initial trove of interesting results, current research into the specific characteristics of autism in women is raising more questions than it answers. However, the confusion could be considered a necessary step toward the acquisition of knowledge, provided the women affected can contribute to the research and share their point of view on the direction the work should take.
Ordinary citizens can also work towards ensuring autistic girls have the same rights as their male counterparts. By gaining a better understanding of the different forms of autism, everyone can contribute to a world in which children and adults with autism can find their place, and help fight exclusion by creating an inclusive society.
“A study from Brigham Young and Princeton Universities found that men tend to dominate 75% of the conversation during conferences. A writer for Slate Magazine found that men in the tech industry interrupt women at twice the rate women interrupted men. An article in the Harvard Crimson student newspaper found that women’s voices were significantly underrepresented in law classes.”
This shows why matching dogs to people is far more complicated than we might predict.
Humans and dogs: a long history
Humans have been co-evolving with dogs for thousands of years. We owe them a lot, including (perhaps surprisingly) the ways in which we experience and express gender via animals.
This often happens in negative ways, such as when women are referred to as bitches, cows, pigs, birds, chicks and men as wolves, pigs, rats. None of these animal metaphors have much to do with the animals themselves but more to do with how we use categories of animals to categorise humans.
So unpacking and challenging gender stereotypes might just also improve the lives of animals too.
A 2006 landmark analysis of gender and dog ownership revealed that owners use their dogs as props to display their own gender identities.
Participants in this study considered female dogs to be less aggressive but more moody than apparently more playful male dogs. They used gender stereotypes not only to select dogs, but also to describe and predict their dog’s behaviour and personality.
The potential ramifications of this are important because such flawed predictions about dog behaviour can lead to a person giving up on their dog, which is then surrendered to a shelter.
Once surrendered, an aggressive bitch or uncooperative dog faces a grim future, with most dogs who fail a behavioural assessment being killed, adding to the troubling euthanasia rates in Australia.
That said, the predictive power of behaviour assessment in shelters is being questioned. Some say the ability of such assessments to reliably predict problematic behaviours in future adoptive homes is “vanishingly unlikely”. Moreover, the assessments are likely to be informed by the gendered expectations and behaviours of the humans who assess, surrender or adopt.
A small study in the UK in 1999 observed 30 dogs in shelters when approached by unfamiliar men and women. It found that the female dogs spent less time looking towards all the humans than the male dogs did.
All the dogs barked at and looked towards the women less than the men, which the researchers suggest shows that gender of the potential adopter plays a role in determining what a good match might look like, as well as the likelihood of adoption.
Even the bond that dogs share with their primary care-giver may have gender differences. For example, in a 2008 Australian study (led by one of us, Paul), dog owners reported that male dogs showed elevated levels of separation-related distress compared to female dogs. They also reported that separation-related distress and food-related aggression increased with the number of human adult females in the household.
Desexing, which is more than justified by the animal welfare benefits of population control, also complicates cultural beliefs about appropriate dog gender and may even influence a dog’s problem-solving behaviour. A recent study published this year suggests that desexing may have a more negative effect on female than male dogs when it comes to aspects of cognition.
These studies underline just how much the lives of dogs depend upon how they conform to gender expectations. In other words, it’s not just how we humans interact with dogs that matters, it’s how our genders interact as well.
While we know how damaging stereotypes can be for humans, dog owners may not consider just how their conceptual baggage of gender stereotypes affects the animals they live with.
More research can help to shed light on the role that gender plays when it comes to making a good match between humans and their dogs; and by good match, we mean one that will result in a decrease in the likelihood of the dog being surrendered to a shelter or treated badly.
The take-home message from these studies is that, to be truly successful mutual companions, dogs don’t need just any human, they need a complementary human who is open to reflecting critically on gender stereotypes.
Thanks partly to an uncritical adoption of gender stereotypes, the matching of dog and human is currently rudimentary at best. So we should not be surprised if dogs often fail to meet our expectations.
When relationships go wrong, it’s catastrophic for dogs, because it contributes to euthanasia rates in shelters. These deaths need to be better understood as a broader failure of human understanding about how their own beliefs and behaviour affect the dogs in their lives.
When I first qualified as a doctor more than ten years ago, it was simple – my duty was to provide the best possible care to the patient in front of me. Evidence and clinical experience were my guides. Unlike in a commercialised health system, such as the US or India, I was not torn between doing the right thing and demands from a profit-making paymaster, or concerns over whether my patient could afford the care.
Identity checks at the front door and upfront charging have changed all that. They compromise my duty to “show respect for human life” by prioritising British lives over all others, regardless of the wider implications.
According to the NHS constitution, healthcare should be “available to all irrespective of gender, race, disability, age, sexual orientation, religion, belief, gender reassignment, pregnancy and maternity or marital or civil partnership status”. It is a service that provides care “based on need, not an individual’s ability to pay”. It is these first two fundamental principles that I, along with many other NHS staff, am so proud of.
For the first time since the NHS’s inception, health secretary Jeremy Hunt has said that “we should all expect to be asked questions that confirm our eligibility for free healthcare”. This statement came as part of the Migrant and Visitor Cost Recovery Programme, first rolled out in 2014. It sets in place a series of policies that restrict access to healthcare for those not born in the UK. The NHS cannot be available to all, as the constitution states. A line must be drawn somewhere, and that line is the UK border.
An immigration health surcharge has been one of the least controversial measures introduced, attached to the visa application process for long-term non-European Economic Area migrants and students.
However, the second part of the cost recovery programme has been to incentivise NHS trusts to identify ineligible patients and charge them 50% more than the actual cost of their care. Critics argue that the cost of managing this scheme does not justify the 0.3% dent in the annual NHS budget attributable to health tourism. Furthermore, there have been reports of patients wrongfully billed. This would be stressful in itself, but more concerning are reports of racial profiling being used to identify chargeable patients. With the introduction of upfront charges in an NHS that is already running on empty, snap decisions about who will actually be asked to provide identification are likely to be based on identifiers of difference, such as skin colour or accent.
To add to this hostile environment for migrants, in February this year the assumption of confidentiality – a sacred cornerstone of medical practice and a foundation of the trust that is so vital to the doctor-patient relationship – was placed on shaky ground with an agreement that patient details could be passed on to the Home Office. This memorandum of understanding, along with a hotline which charged the NHS 80 pence per minute (just to add insult to injury), is aimed at identifying people for deportation.
A public health risk
Despite the Department of Health’s claim that evidence is lacking, there is a significant body of knowledge that demonstrates that charging and data-sharing deter people from seeking help when they are unwell. These barriers to obtaining health – which, by the way, the UK government has signed up to protect as part of the European Convention on Human Rights – extend way beyond those who, in the eyes of the law, are ineligible for care. From a public health perspective, delaying diagnosis and treatment of infectious diseases increases the risk of spread to the wider community. Bacteria, I assure you, pay no heed to arbitrary notions of birth rights and citizenship.
From an economic perspective, prevention is better than cure. Those deterred from accessing healthcare by these policies are the least able to pay. By the time their condition is life-threatening, you can be sure that, had we treated them sooner, the outcome would have been better and cheaper for all. It will be those who look different, sound different, or dress differently from an “average British citizen” (in the head of the person in front of them) who will be caught in the wider net of eligibility checks.
These policies do not protect human rights. They are not in line with my General Medical Council duties as a doctor or with NHS principles. They are not economically sound. They will not protect the health of the public. These policies feed a narrative that the NHS crisis has been caused by migrants – not the rich people who broke the banks and heralded in a period of austerity. We must look up and hold those people in power to account and look around at our fellow human beings with compassion and kindness.
When you start reading up on narcissistic personality disorder, you may find yourself wondering if you have it yourself, which can be unsettling at first. But after those first moments of concern, you will probably very quickly be able to decide that no, it isn’t the case.
You can also start to feel that it is wrong of you – that you are making a mistake, or are weak or gullible – if you are actually trying to find out how to get along with a narcissist, for which it is necessary, or at least very helpful, to understand the disorder. (After all, the golden rule for people trying to break out of a close relationship with a narcissist is “no contact”.)
However, working with a narcissist can also be extremely rewarding and inspiring because of their nearly superhuman skills for getting things done — when they want it and how they want it.
Narcissists are part of society so you will run into them.
One could become that neighbor from hell who seemed so nice when he or she moved in; if you don’t recognize the manipulative disorder, you won’t know how to deal with it.
One can turn up as your new boss or a new colleague at the desk next to yours.
If you’re self-employed, that strange client with inflated ideas about the importance of his work who suggests that if it becomes known you are working for him, burglary is likely and who suddenly starts calling you names for no reason at all may be one too. A little bit of extra knowledge may enable you to avoid the energy-draining conflict situations narcissists are famous for. That benefits everyone in the situation.
When you look into narcissistic personality disorder (or similar disorders), you may end up developing much greater insight into yourself. What your weaknesses are, which are usually strengths at the same time. You may discover a few highly surprising ones. That can cause you to stand much firmer.
You also have to decide for yourself what you need in life, what you want or like, what you are willing to accept (put up with) and where you absolutely put your foot down and draw the line if you want to be able to get along well with narcissists.
I talked about this disorder in relation to Donald Trump, before. Please, do remember that persons with narcissistic personality disorder DID NOT CHOOSE to have this disorder. In most cases, something happened in early childhood while the person’s personality was being formed. (There is a video below about that.)
It’s my interest in bioethics in combination with a zen tinge of acceptance, among other things (including two personal situations), that is causing me to look deeper into particularly these personality disorders.
Bioethicist Julian Savulescu, for instance, advocates removing essentially all disorders and diseases from the human gene pool, even when we can do a lot to prevent certain conditions or keep them under control (think asthma and air quality). A lot of what he wants is like demolishing homes to prevent them from ever burning down. He is also highly critical of various personality disorders.
If you are able to be compassionate and keep in mind that the line between compassion and stupidity is very thin, you may find that dealing with a narcissist becomes much easier. Also, not everyone with narcissistic personality disorder has the affliction to the same degree or in the same way.
It is, for instance, possible to be friends with someone with narcissistic personality disorder. You have to be very steady on your feet and recognize every instance you’re being played so that you can stop each manipulative game before it starts (such as being told that you’re wrong, that red is black and then when you agree it’s black, being told it’s red).
Recognize the toddler part in narcissists when they behave like toddlers. Respond the way you would respond to a toddler. (Calmly.)
You also have to be aware of what may be happening behind your back (lies that are being told about you) and realize that if you try to talk to third parties about the disorder or about what is going on, YOU will sound like the “crazy” and “jealous” one. Can you handle that?
I am not recommending that we all become friends with narcissists, but they are a part of human diversity so we run into them whether we like it or not. Being able to deal with them well is better for everyone.
You can often choose how you respond emotionally to all sorts of occurrences and being able to choose how you respond can make a great difference. Often, you can either choose to get upset and feel victimized or shrug, smile and calmly carry on with whatever you were doing (or walk away). Understanding more about narcissistic personality disorder can facilitate this ability to choose your own responses.
The upside? Narcissists may all have a great sense of humor and no one can ever accuse them of being boring. Sometimes, you can actually learn from them, or from having encountered them.
The downside? They may have ruined you (your life) completely before you even know what hit you. Taking the zen approach of mentally letting go of what you lost and acceptance can help you deal with it and enable you to stay “whole” (but that is hard to explain without sounding shallow or even flippant or, worse, as an encouragement for accepting abuse).
Video 1: How to understand people who irritate or upset you
Video 2: Understanding the mind of a narcissist
Video 3: The emotion at the heart of narcissism
Video 4: The childhood origins of narcissism
Video 5: 5 key strategies for dealing with narcissists
Video 6: How the narcissist destroys your physical health
Video 7: 5 destructive fantasies empaths have after the narcissist has left.
(This is a video about lingering beliefs or ideas some people have after the breakup of a relationship with a narcissist.)
Video 8: The hidden emotion that makes empaths vulnerable to narcissists
Video 9: 7 traits of Narcissistic Abuse Victim Syndrome
Also, this happens when you ignore a narcissist, apparently:
Knowing how manipulation works is helpful too.
Below is an example of a behavior that narcissistic personality disorder can also result in, apparently. (Notice that no one seems to have realized yet that hackers can also have narcissistic personality disorder.) I am not sure yet how that comes about. Perhaps from the realization that in real life, relationships are too hard for someone with such a personality disorder?
I post the following from the work of Dr Lorraine Sheridan.
Typology 4: Sadistic stalking (12.9%)
· victim is an obsessive target of the offender, and her life is seen as quarry and prey (incremental orientation)
· victim selection criteria are primarily rooted in the victim being:
(i) someone worthy of spoiling, i.e. someone who is perceived by the stalker at the commencement as being happy, ‘good’, stable, content, and
(ii) lacking in the victim’s perception any just rationale as to why she was targeted
· initial low level acquaintance
· apparently benign initially but unlike infatuation harassment the means of intervention tend to have negative orientation designed to disconcert, unnerve, and ergo take power away from the victim
– notes left in victim’s locked car in order to unsettle target (cf. billet-doux of infatuated harassment)
– subtle evidence being left of having been in contact with the victim’s personal items e.g. rifled underwear drawer, re-ordering/removal of private papers, cigarette ends left in ash trays, toilet having been used etc.
– ‘helping’ mend the victim’s car that the stalker had previously disabled
· thereafter, progressive escalation of control over all aspects (i.e. social, historical, professional, financial, physical) of the victim’s life
· offender gratification is rooted in the desire to extract evidence of the victim’s powerlessness with inverse implications for his power => sadism
· additional implication => self-perpetuating in desire to hone down relentlessly on individual victim(s)
· emotional coldness, deliberateness and psychopathy (cf. the heated nature of ex-partner harassment)
· tended to have a history of stalking behaviour and the controlling of others
· stalker tended to broaden out targets to family and friends in a bid to isolate the victim and further enhance his control
· communications tended to be a blend of loving and threatening (not hate) designed to de-stabilise and confuse the victim
· threats were either overt (“We’re going to die together”) or subtle (delivery of dead roses)
· stalker could be highly dangerous
– in particular with psychological violence geared to the controlling of the victim with fear, loss of privacy and the curtailment of her social world
· physical violence was also entirely possible
– especially by means which undermine the victim’s confidence in matters normally taken for granted e.g. disabling brake cables, disarming safety equipment, cutting power off
· sexual content of communications was aimed primarily to intimidate through the victim’s humiliation, disgust and general undermining of self-esteem
· the older the offender, the more likely he would have enacted sadistic stalking before and would not be likely to offend after 40 years of age if not engaged in such stalking before
· victim was likely to be re-visited after a seeming hiatus
Case management implications
· should be taken very seriously
· acknowledge from outset that the stalker activity will be very difficult to eradicate
· acknowledge that there is no point whatsoever in appealing to the offender – indeed will exacerbate the problem
· never believe any assurances, alternative versions of events etc. which are given by the offender
· however, record them for use in legal action later
· the victim should be given as much understanding and support as can be made available
· the victim should not be given false or unrealistic assurance or guarantees that s/he will be protected
· the victim should carefully consider relocation. Geographical emphasis being less on distance per se, and more on where the offender is least able to find the victim
· the police should have in mind that the sadistic stalker will be likely to:
(i) carefully construct and calculate their activity to simultaneously minimise the risk of intervention by authorities while retaining maximum impact on victim,
(ii) be almost impervious to intervention, since the overcoming of obstacles provides new and potent means of demonstrating the victim’s powerlessness (ergo self-perpetuating) and,
(iii) if jailed will continue both personally and vicariously with the use of a network.
Without us knowing, our brains are busy making associations. While on the surface we may sincerely believe that men and women are equal, or that people on benefits are just regular folks who happen to need help, our unconscious minds might not be so progressive. In psychology, ideas that we hold unconsciously are called “implicit attitudes”.
Implicit attitudes develop under the influence of the world around us. Immerse your brain in a culture that routinely represents women as emotional and irrational, or in which black men are habitually portrayed as aggressive and criminal, and it will develop those associations whether you want it to or not.
A great deal of valuable research has been done into people’s implicit attitudes towards women and people of colour. However, there are many other groups that society also tends to represent in negative, stereotyped ways. One particular target in the UK is unemployed people who receive government benefits.
Described in newspaper headlines as “dossers” and “layabouts” (The Sun), “scroungers” (The Daily Mail), and “skivers” (The Express), benefit claimants are treated with unremitting hostility by large sections of British society. It is easy to see how exposure to and immersion in this culture could lead to the development of negative unconscious feelings towards this group. This is the idea I set out to test with my new research.
Testing our associations
How do you find out if someone harbours negative implicit attitudes towards benefit claimants? The very fact that these attitudes are not conscious means you can’t just ask them directly. To get around this problem, psychologists have developed a set of tools called implicit association tests.
In my research, I used a specific test called the Go/No-Go Association Task, or GNAT. The easiest way to describe how this works is by way of an example. Imagine you are sat in front of a black screen. At the top of the screen some white text reads “spiders and negative words”. Words will now appear and disappear rapidly in the centre of the screen.
As each word appears, your job is to decide if it fits into the category of “Spiders and negative words”. If it does, you press the space bar (“Go”). If it doesn’t, you don’t press anything (“No-Go”). So, for example, if you saw the words “tarantula” or “disgusting” you would press the space bar. If you saw the words “wonderful” or “glasses”, you wouldn’t.
Once you’ve finished going through 60 words or so, the text at the top of the screen changes. It now says “spiders and positive words”. Now if you saw the word “tarantula” or the word “wonderful”, you should press the space bar. If you saw the word “disgusting”, you shouldn’t.
Because most people feel negatively about spiders, they will find it more difficult to group them together with positive words than with negative words. Because the words appear and disappear so quickly, people don’t have time to deliberate. Their responses are dominated by their unconscious feelings. You can get a feel for this by trying some implicit attitude tests on a website run by Harvard University.
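To make the scoring logic concrete, here is a minimal sketch in Python. Everything in it is invented for illustration – the reaction times and function names are my own, not taken from the actual study – but it shows the basic idea: the bias score is simply the difference in how quickly people respond correctly under the two pairings.

```python
# Illustrative sketch of GNAT-style scoring. The numbers and names below
# are made up for demonstration; they are not data from the actual study.

# Hypothetical mean reaction times (ms) on correct "Go" trials.
# Block 1: target paired with negative words ("spiders and negative words")
# Block 2: target paired with positive words ("spiders and positive words")
rt_with_negative = [412, 398, 430, 405, 419]   # responses come easily
rt_with_positive = [498, 512, 487, 530, 505]   # responses are slower

def mean(xs):
    return sum(xs) / len(xs)

def gnat_bias_score(rt_neg, rt_pos):
    """A positive score means the participant was slower when the target
    category was paired with positive words, i.e. an implicit negative
    association with the target."""
    return mean(rt_pos) - mean(rt_neg)

score = gnat_bias_score(rt_with_negative, rt_with_positive)
print(f"bias score: {score:.1f} ms")
```

Real analyses typically use accuracy-based measures such as d-prime rather than raw reaction times, but the underlying comparison between the two pairings is the same.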
The principle is exactly the same when we are talking about social groups. For example, study after study has found that people find it much easier to pair photographs of black people with negative words than with positive ones.
Bias against benefit claimants
And when I used this technique to examine unconscious attitudes towards benefit claimants in the UK, I found exactly the same results. Participants found it much easier to group words relating to benefit claimants together with negative words like “bad”, “useless”, and “dirty” than they did to group them together with positive words like “friendly”, “clean”, or “wonderful”. This was true even for people who, when asked directly, did not report having any negative opinions about people on benefits. These results strongly suggest the existence of a negative, unconscious prejudice against this group.
There are of course caveats to this research. My sample was small – only around 100 people. This is a similar sample size to that of most implicit attitude studies. However, 100 people is clearly too few to start drawing conclusions about the British population as a whole. This is particularly true given that all of the participants came from a single town (Oxford), and that many (though not most) were university students.
So this research does not yet demonstrate that negative unconscious attitudes towards benefit claimants are a general feature of the British population. However, if this result proves to be robust, it has significant implications for debates about welfare both in the UK and elsewhere.
If antipathy towards benefit claimants is strongly rooted in people’s unconscious feelings and stereotypes, this profoundly limits the power of facts and figures to change people’s minds about the benefits system. Correcting mistaken beliefs about the benefits system is easy. Severing unconscious negative associations that have developed over decades is likely to be much, much harder.
A Muslim woman dressed in black, with a veil and a wonderful broad smile, is travelling with a young child, her daughter. The girl is precocious and smart, chats non-stop, and entertains everyone around her with her gentle voice. As a distraction, she has an American video or game featuring fragments from Bizet’s Carmen and Grieg’s Peer Gynt suite, as well as the topic of Christmas. As free as a bird in the sky. The little girl reminds me of me when I was little. Even her hair is about the same.
A young blond guy in a turquoise-blue jacket accidentally drops something that looks like a business card or ticket at Victoria Station’s Prêt. I happen to see it from a nearby waiting area. When he ends up in my vicinity, I tell him about it. “I am not sure,” he replies, and goes back into Prêt, where he loiters and keeps an eye on me. Meanwhile, a Prêt employee notices the card or ticket, picks it up, studies it and puts it on the counter. Did the young blond guy not even listen to what I said and assume I was asking for money (which is not unusual in Britain)?
Dutch daily Trouw prominently featured anger about a human rights violation on its 6 September front page (online version).
Not only had Dutch police tasered a patient in so-called drive-stun mode (“pain compliance”), but the patient in question was already in solitary confinement.
I was shocked when I read this. It seems to signal a return to practices I thought we had left behind a long time ago, and it particularly worries me that this happened in the Netherlands of all places.
“This is torture,” say Amnesty International and organizations of patients and their relatives, and I understand that Amnesty has called for an immediate suspension of the use of this type of weapon by Dutch police. According to Trouw, the taser’s manufacturer advises against use on psychiatric patients and Amnesty believes taser use may actually be life-threatening in such cases.
This is likely the first time a taser was used to subdue a hospitalized psychiatric patient in the Netherlands, where three hundred police officers are currently testing tasers.
The following appears to have transpired.
On 17 July, police officers were called to a hospital in Capelle aan den IJssel, where a male patient in his twenties was having a psychotic episode. (When Dutch police are called to a hospital for a problem with a patient, police take over responsibility.)
The patient was having a bad day, apparently, and had refused to take his antipsychotic medicines. Rotterdam police were first called to force the patient into solitary confinement (to reduce sensory input and calm the patient down).
In the evening, police were called again, for unknown reasons. That’s when the tasering occurred.
The patient’s mother, Marijke Bos, found out about the incident a few days later during a visit on her son’s birthday. Her son had dark bruises under his eyes, several bruises on one of his hips and roughly thirty small taser-related lesions on his back. The patient had also been tasered on one of his feet.
The patient’s mother has filed several formal complaints.
The hospital staff are reportedly also extremely dismayed about the taser use.
Solitary confinement in itself can be damaging and can be seen as a human rights violation. Tasering a patient who already is in solitary confinement and clearly no danger to anyone else raises eyebrows, to put it mildly.
It seems to me that tasering in drive-stun mode is even worse than using a baseball bat to knock someone out, as it deliberately causes pain; it is more comparable to stabbing someone with a knife or throwing scalding water or oil.
The incident made me wonder about taser use on patients in other countries and I did a quick web search. It is not clear whether other reports of taser use on patients concern drive-stun mode or probe mode, but probe mode is the usual taser mode.
New Zealand police also used a taser on a mentally ill man earlier this year, the country’s second case this year in which taser use against a mentally ill person was ruled (excessive and) unjustified:
“Police told the 21-year-old he would need to be strip searched, the man repeatedly refused to remove his clothes telling the officers he had a history of sexual abuse and didn’t feel comfortable being touched by males.”
In Britain, even taser use in general has turned out to concern mainly mentally ill persons, according to Home Office figures.
I agree with Matilda MacAttram (director of Black Mental Health UK and writer of the above article in the Guardian) that there is no role for police in mental healthcare, just as police have no business in heart surgery or appendectomies.
I ran into a discussion on Kialo, to which I quickly contributed what I have added below, all within about five minutes. I later edited it a bit to make it easier to read.
I am so pleased someone started this discussion. I promote non-discrimination of embryos and fetuses. A child is not a consumer product but a human being who must be loved and encouraged to flourish. How can you love one child but not another if the latter is non-mainstream? I’ve been thinking about that, and it makes me wonder whether it actually means that the parents aren’t fit to be parents. I haven’t dared say that out loud yet, but this discussion clears the way for me.
So yes, maybe parents-to-be should require vetting.
Within a few decades, we will no longer require sex to create babies, but will make our offspring in the lab, possibly on the basis of skin cells from each of the parents. We’ll probably look after our little gestating (incubating) children as if they are rare orchids that we want to bring to bloom.
(So by that time, women will no longer have a need for abortions and they won’t have to menstruate and experience PMS any longer either.)
I can imagine very well that you will require a license in the future in order to have a child. Somehow, that feels like an automatic consequence of the possibilities we will have then.
And also, indeed, why should adoptive parents be scrutinized while natural parents are free to do whatever they want?
And after all, in that distant future, anyone who wants can probably have a child (technically speaking). Even adoption may slowly become a thing of the past, that is, if we get to the point that we no longer succumb to illnesses and accidents and maybe even can choose when our lives end.
I hasten to add that at the moment, natural parents are not always free to do as they please either, of course. For example, in countries with a great deal of inequality, the state may step in on the basis of what is no more than prejudice in practice.
Nowadays, some children suffer horribly, either because of their parents or because of someone else. Sometimes before children are removed from their parents and sometimes afterward.
In practice, perhaps it won’t be an actual license but a training program that must be completed with good results. If that training is tough and long enough, that alone will already sort committed parents from parents who aren’t ready for a child.
Would they have to get a license or go through some kind of training program every time they want to have a child? Yes, I think so. Insights change.
It’s even possible that parenting will eventually become a profession.
Unfortunately, Kialo may not work very well with Linux. I was able to post my contribution, but seem unable to comment on other people’s contributions. Maybe it’s part of the learning curve, but I did see the intro video and the comment option mentioned in it simply does not seem to exist for me.
Earlier this year, I translated and adapted Richard Bintanja’s second novel. (The original is in Dutch.) The title of the translation is The Ultimate Brainchild. In his daily life, Richard is currently a professor of climate change.
I know Richard in real life because his wife and I became members of a professional network for women in science and technology a long time ago. I even spent some time at their home after my unplanned re-emigration from the United States to the Netherlands, my native country.
I have also edited some of Richard’s scientific publications because I’ve always had excellent writing and language skills, and as you probably guessed by now, I too have an extensive science background. (And, as you likely know if you’ve been to my web site before, I’ve been self-employed for a long time.) One of his papers is still the greatest I have been lucky enough to work on. It literally made me sit up. It was for Nature, and I knew right away that it would be accepted.
Translating one of Richard’s non-scientific works was a very interesting experience for me and I want to share with you why.
Apart from the fact that I rarely read books in Dutch, this is not the kind of book that I would have selected for myself to read. I normally look at a few pages, and I may read a few pages, but if it does not pull me in straight away or the pace is too slow, I tend to give up. I often go for fast-paced American thrillers and crime novels.
But this, and I only discovered this because I translated the book, is a book that has to be read in full before you can judge it. That is very interesting. Does this mean that I too have become one of those people who want “instant gratification”? I am still chewing on this…
The book is very well crafted (featuring brain scientist George Walder) and, of course, you don’t fully appreciate that until you arrive at the end, when everything has come together. I can imagine it as a film. There is enough action in it to make it work, enough suspense and enough that sparks curiosity. A good script writer could easily add a few scenes here and there or adapt a few scenes here and there to give it what it would need to be turned into a film. (No, I am not kidding. I believe that this book has many elements – including some nudity, by the way – and a storyline that would work very well in a film.)
The book divides people into those who have a certain ability and those who don’t. There is even a parallel universe in it, which really threw me at first because I didn’t know what it was about until much later. (The book increasingly produces a sense of delight as you progress through it.) So it is also a book that could potentially yield new insights about equality and inequality for some people.
Now, ask me anything you want! About this book or the context of the above.
Techniques like gene editing may bring more equality, but can just as easily bring less. Now is the time to make up our minds about the future because we can’t leave this up to a handful of experts who advise the government.
Most of you will have heard about the recent CRISPR-Cas9 breakthrough. On 26 July, scientists in Oregon successfully used the technique to replace a defective piece of DNA with a regular piece of DNA. It concerned a gene with faulty DNA that normally results in heart disease.
This was only one small step, involving fertilized egg cells that were allowed to develop for no more than a few days and weren’t implanted in a womb.
But when techniques like CRISPR become routine, which “defects” are we going to edit out and which ones will we leave intact? And will this be the domain of commercial enterprises that sell their services, or should this be part of everyone’s health care?
At the moment, fetuses are already tested for certain conditions. To give you an idea of what this means in practice: in 2009, the abortion rate for fetuses diagnosed with Down’s syndrome in the UK was 92%.
Also, embryos created with IVF are subjected to PGD (pre-implantation genetic diagnosis), which has been around for decades. The use of IVF is rising.
Right now, the UK’s Human Fertilisation and Embryology Act 2008 states that “Persons or embryos that are known to have a gene, chromosome or mitochondrion abnormality involving a significant risk that a person with the abnormality will have or develop: (a) a serious physical or mental disability, (b) a serious illness, or (c) any other serious medical condition, must not be preferred to those that are not known to have such an abnormality.”
Who decides what a serious physical or mental disability, a serious illness, or any other serious medical condition is? In practice, that’s the UK Human Fertilisation & Embryology Authority. Currently, embryos with one of 400 conditions, such as a genetic form of dwarfism called achondroplasia or Down’s syndrome, are never used in IVF in Britain. Deselection of embryos with other conditions is awaiting approval.
Will the same eventually happen when techniques like CRISPR go mainstream?
This may all sound far off right now, but within the next few decades, human procreation is bound to change drastically. It is likely that we will eventually stop using sex to create babies. Instead, we will make all our babies “in the lab” and let the embryo develop in an artificial womb.
There is nothing wrong with that. We’ll get used to it, just like we got used to cars, trains and planes or the fact that women can attend universities now whereas they weren’t allowed to in the not too distant past.
But it will also mean that we’ll be able to do a lot more genetic tweaking. We need to start thinking about these upcoming changes now. What kind of society do we want to see develop? Do we want people to resemble each other more and behave in similar ways?
There are several reasons for not eagerly fixing too many “defects” that lead to viable human beings who live worthwhile lives, beyond the fact that groups of people are starting to demand the right not to be “edited out”.
The first is that there are currently several ongoing trends of emancipation. This includes the emancipation of persons with a wide range of conditions who we used to lock up and keep out of society: deaf people and blind people, people with learning disabilities and so on. This emancipation is leading to accomplishments that we didn’t consider possible only a very short while ago. People with Down’s syndrome in particular keep astonishing us all. Many have jobs now, and some serve as city councillors, become artists or earn academic degrees.
A second reason is that we don’t know whether we may have a particular need for persons with various conditions such as autism spectrum disorder in the future. It is unlikely that we will continue to communicate the way we do now. Maybe we’ll end up communicating solely via images. We may well discover that those of us who now seem best suited for keeping up in society will fall behind then because they lack certain talents. We need the greatest neurodiversity possible.
A third reason is that lots of so-called impairments are nothing more than hindrances created by society. Better education and continued emancipation will see many of these hurdles disappear.
A fourth reason is that technological progress itself will come up with a wide range of solutions to accommodate everyone. Once more humans start integrating technology into their bodies – we are already seeing some of that – there soon won’t be any remaining limitations for those currently still considered impaired.
Dear Dr Seidel, thank you for making these very important points.
I am taking the opportunity to offer a few suggestions for discussion and invite more views on these issues. Some of what I write below only emerged during the writing of this response and may not be watertight. Can you withhold initial judgement, think along with me and see it as an exercise in exploring the various angles?
But first of all, please forgive me my shortcomings; I phrase various concepts differently than you do as my background is not in medicine and I tend to shy away from jargon. Also, what I say is not limited to newborns, but that will be obvious to this audience. The principles largely remain the same, whether we are talking about a pre-embryo, a fetus or a newborn, and whether I call them person, individual or child. (Legally, this is currently much more complex, as you know.) My focus in this discussion does not extend to persons beyond the age of majority (likely not even beyond 8 or 10, in practice) and I am also keeping the concept of euthanasia out of the discussion even though it is related. Worst of all, I throw all techniques related to genetic material into one big pot because it enables me to see the bigger picture better.
I write from my own perspective of an opinionated white woman in the west, but when I say “we”, my intention is to refer to the human species. People from other cultures will undoubtedly spot biases in my western views; I would like those people to point out those biases.
You ask whether genome screening for newborns will pave the way to genetic discrimination. You also raise the question of the interpretation (and reliability) of such data and you have privacy concerns.
With regard to the latter, I think that we will slowly have to accept that the digital age comes with the loss of privacy in many ways. That does not have to be as dramatic as it sounds. Privacy is a changing concept anyway, which also has a cultural angle to it. The realization that people from different generations and from different cultures have slightly different views on what privacy is may add some perspective that can make us breathe easier. So we should probably become more relaxed about the loss of privacy as we knew it and focus more on preventing and ameliorating potential negative consequences of that loss, if any. The real issue is not the loss of privacy, but abuse of personal information.
In my opinion, what we need to do is ensure non-discrimination and make certain that genomic information will only be used to improve any individual’s (medical) care. (The data can become part of studies, anonymized or not; we also need to redefine consent, but I am going to leave that out of this discussion too.) In other words, genomic information must only be used to enable and allow human beings to flourish.
Even a word like “flourish” or “thrive” is highly ambiguous, though. I mean it in a non-materialistic manner, whereas some others do not at all. Perhaps I can break it down into stages to show what I mean within this specific context.
You mention the Hippocratic Oath, which some define as “Do no harm”. Harm is another concept that we don’t agree on yet and that we – therefore? – haven’t been able to define well.
I think that we need to start applying the principle of non-discrimination to all new human life. I believe that we should consider every human individual just as valuable – in a non-materialistic manner – as every other human individual.
When I toss this around, I run into a peculiar dilemma. While I must see a deaf or a blind person (as an example) as equally valuable as a hearing or sighted person, I cannot accept it when a hearing or sighted person is deliberately made (permanently) deaf or blind, for instance during a mugging or a work-related accident. This also applies with regard to so-called augmentations. I cannot take a human being against his or her wishes and carry out a nose reconstruction or even inject botox. That makes me realize that harm done to a human appears to be any interference or change that occurs against that human being’s wishes and is implemented by someone else.
For now, I have to limit this to physical changes because the area of psychological changes is too complicated. (Just think of schools; we do not take bad teachers to court for being bad teachers, but we do take bad surgeons and physicians to court for being bad doctors, also because the evidence related to the latter is often much clearer.) Physical interference that occurs against a person’s wishes can of course also result in psychological changes, but that does not actually matter for the concept of harm within this context.
The next problem I then run into is the fact that an embryo, fetus or newborn in particular has a very limited ability to express wishes, and that also holds for young children. If I try to put myself in the shoes of a child, however, it becomes possible to define harm in spite of that limitation.
This – putting themselves in the shoes of the child, as adults – is what parents, guardians and other carers do all the time, of course. They sometimes have to make the decisions for the child and express the child’s wishes for the child, as if they were the child, using the knowledge they have as adults, knowledge that the child will have in the future but does not possess yet.
So, let’s step into a child’s shoes, then. It is hard to imagine a sick or injured child that would want to get sicker and sicker, or want to have a permanently festering wound resulting from an injury caused by a fall. So it is fair to say that anything we do toward remedying such a situation is in accordance with the child’s wishes, in essence, even in cases in which the child cannot even say “please make the pain go away”. It is what the child would want if it possessed the knowledge and abilities of an adult.
So, the first step in enabling a human – a child – to flourish is to attempt to prevent any deterioration of the child’s health.
We may have to start agreeing that this cannot be considered harm within this context, even if the chance of success is small, certainly in cases for which there are no alternative remedies. We may even have to decide that doing nothing constitutes harm when there is still an option of doing something.
If a child has appendicitis, a surgeon will have to cut into the child’s abdomen in order to remove the appendix to prevent deterioration of the child’s health or even death. Strictly speaking, cutting into a child’s abdomen constitutes inflicting an injury, but in this case, as it is done with the intention of preventing greater harm, namely the deterioration of the child’s health, we do not see it as harm. (This may be an example of where I display a western bias?)
(Of course, we can still take the surgeon to court if his or her work fails to meet professional standards, but that is a different type of harm. We certainly need professional standards.)
We can also take a child to the dentist and the dentist may have to inflict some discomfort in order to prevent deterioration of the child’s health.
By contrast, we should not drag a child along kicking and screaming to have its ears pierced, as this is not done with the aim of preventing a deterioration of health. (If a child asks to have its ears pierced, there is a clear wish on the side of the child.)
Note that the intention matters. When a procedure is carried out with the intention of wanting to prevent deterioration of health, we never have 100% certainty that the intended result will be achieved. (This may have implications for how we think about practices carried out in other cultures. Keep this at the back of your mind. Our own western views are not the only views that hold value.)
The second step in enabling a human being to flourish is to do everything we can within a daily-life context to allow that person to thrive on the basis of the person’s given physical (and mental) situation.
We send children to playgrounds to let them play with other children and test their physical limits, we feed them, clothe them and provide shelter as well as love and all those other concepts that are hard to measure but easy to grasp. In essence, this is no different for children who are, say, blind or deaf or who have Down syndrome.
The BBC news site just highlighted a very nice albeit exceptional example of what I mean by flourishing within this context: http://www.bbc.co.uk/news/m…
To do everything we can to allow that child to thrive is also required for children who are born with a medical condition that requires some form of medication or extra nutritional care to prevent deterioration of health. This, I think, is where standard genomic testing of newborns can play a pivotal role. These days, parents still too often have to conclude that something is seriously genetically wrong with their child on the basis of the deterioration of the child’s health, which in some cases means that irreversible damage has already occurred to the child’s health.
So, failure to provide such testing (screening) from the point in the future at which we know how to do and use this properly and reliably could perhaps also be seen as harm as it could lead to the preventable deterioration of a child’s health and would not encourage the child to thrive.
The third step within this context of enabling someone to flourish – and this is where it gets even trickier – is interfering with the child’s genetic make-up.
We may feel that the child is flawed, whereas the child is actually viable and will not suffer a deterioration of health or be at great risk of certain complications if we allow it to live. At the moment, we often prevent such a child from coming into the world. This is where, I think, we need to draw the line and take a step back. It is a discriminatory practice because it appears to express a value judgement.
I also think that because of limited resources, we may need to approach this in a stepped manner.
What I mean is that if we initially limit techniques like CRISPR and gene therapy to all situations in which a resulting child would have “a life not worth living”, then we might have a fairly just and affordable way to start implementing CRISPR, gene therapy and anything else that may come along. Once we’ve done that, we can slowly start to take it forward and extend it to other conditions. The costs of such techniques will come down, and if we start with rare diseases that are currently incurable, we also limit the initial costs of implementation.
The loss of privacy may actually become an advantage because openness makes it also much easier to detect abuse of information and to safeguard against discrimination.
One of the reasons why I strongly believe that we need to start implementing non-discrimination for all new human life is the following. Once humans start interfacing with technology, other so-called impairments – which are currently often either biased opinions or restrictions imposed by society – cease to be impairments, taking away much of the motivation for “correcting” these individuals.
Moreover, not only do we – the human race as well as society – need diversity, we may have future needs for abilities of which we currently don’t realize that some people possess them. Those may well be people who are currently considered “impaired” or “flawed”. Junk DNA was once considered just that, too.
As I already indicated, we need a workable definition of what constitutes a life not worth living and once we have one (I may have found one, by the way, based on the principle of humanity), we may end up concluding that these are the primary cases in which we actually have a duty to interfere with the child’s genetic make-up.
So I agree with you that we have to exercise restraint, in spite of all the enormously exciting developments we currently see around us. Discrimination is not the only concern and neither are interpretation and costs. We don’t know all the possible consequences yet of the application of any of those new developments, even if we think we do.
We have made many decisions in the past without asking questions that now are so blatantly obvious in hindsight. Did nobody foresee that insecticides might also affect bees and birds and amphibians, to name just one example of a past mistake, albeit a highly significant one that now also affects human fertility?
We have another reason to take it slow, namely the fact that laws and regulations lag behind, evolve in response to situations arising in real life, and rarely anticipate what may happen in the future. Legal professionals, too, tend to think conservatively and in a geographically limited manner. It’s probably the UN and WHO who should start taking the lead in this area and guide us into the future. Do they need a push? Should we apply pressure?
Because perhaps more than anything else, we need to work toward reaching a global consensus (including legislation) on such important matters, irrespective of how challenging and impossible that may seem. It was also once completely unimaginable that we’d have humans land on the moon, so if we did that, then we can accomplish so much more than we think we can.
Mitalipov’s team is not the first to genetically modify human embryos. This was first accomplished in 2015 by a group of Chinese scientists led by Junjiu Huang. Mitalipov’s team, however, may be the first to demonstrate basic safety and efficacy using the CRISPR technique.
This has serious implications for the ethics debate on human germline modification which involves inserting, deleting or replacing the DNA of human sperm, eggs or embryos to change the genes of future children.
Those who support human embryo research will argue that Mitalipov’s research to alter human embryos is ethically acceptable because the embryos were not allowed to develop beyond 14 days (the widely accepted international limit on human embryo research) and because the modified embryos were not used to initiate a pregnancy. They will also point to the future potential benefit of correcting defective genes that cause inherited disease.
This research is ethically controversial, however, because it is a clear step on the path to making heritable modifications – genetic changes that can be passed down through subsequent generations.
Beyond safety and efficacy
Internationally, UNESCO has called for a ban on human germline gene editing. And the “Convention for the Protection of Human Rights and Dignity of the Human Being with regard to the Application of Biology and Medicine” – the Oviedo Convention – specifies that “an intervention seeking to modify the human genome may only be undertaken for preventive, diagnostic or therapeutic purposes and only if its aim is not to introduce any modification in the genome of any descendants.”
In a move away from the positions taken by UNESCO and included in the Oviedo Convention, in 2015 the 12-person Organizing Committee of the first International Summit on Human Gene Editing (of which I was a member) issued a statement endorsing basic and preclinical gene editing research involving human embryos.
The statement further stipulated, however, that: “It would be irresponsible to proceed with any clinical use of germline editing unless and until (i) the relevant safety and efficacy issues have been resolved, based on appropriate understanding and balancing of risks, potential benefits, and alternatives, and (ii) there is broad societal consensus about the appropriateness of the proposed application.”
Mitalipov’s research aims to address the first condition about safety and efficacy. But what of the second condition, which effectively recognizes that the human genome belongs to all of us, and that it is not for scientists or other elites to decree what should or should not happen to it?
Since the 2015 statement was issued, many individuals and groups have tried to set aside the recommendation calling for a broad societal consensus.
For example, in February 2017, the U.S. National Academy of Sciences and National Academy of Medicine published a report endorsing germline modification. It states unequivocally that “clinical trials using heritable germline genome editing should be permitted” provided the research is only for compelling reasons and under strict oversight limiting uses of the technology to specified criteria.
Seeds of change in Canada
In Canada, it is illegal to modify human germ cells. Altering “the genome of a cell of a human being or in vitro embryo such that the alteration is capable of being transmitted to descendants” is among the activities prohibited in the 2004 Assisted Human Reproduction Act.
Worried that “Canadian researchers may fall behind on the international scene” and that “restrictive research policies may lead to medical tourism,” the Canadian Institutes of Health Research (with input from the Canadian Stem Cell Network) has begun to plant the seeds of change.
In its Human Germline Gene Editing report, CIHR hints at the benefits of changing the legislation. It also suggests professional self-regulation and research funding guidelines could replace the current federal statutory prohibition.
Future of the species
With Mitalipov’s technological advances and increasing suggestions from researchers that heritable modifications to human embryos be permitted, it is essential that citizens be given opportunities to think through the ethical issues and to work towards broad societal consensus.
We are talking about nothing less than the future of the human species. No decisions about the modification of the germline should be made without broad societal consultation.
The rapid development of so-called NBIC technologies – nanotechnology, biotechnology, information technology and cognitive science – is giving rise to possibilities that have long been the domain of science fiction. Disease, ageing and even death are all human realities that these technologies seek to end.
They may enable us to enjoy greater “morphological freedom” – we could take on new forms through prosthetics or genetic engineering. Or advance our cognitive capacities. We could use brain-computer interfaces to link us to advanced artificial intelligence (AI).
Nanobots could roam our bloodstream to monitor our health and enhance our emotional propensities for joy, love or other emotions. Advances in one area often raise new possibilities in others, and this “convergence” may bring about radical changes to our world in the near-future.
“Transhumanism” is the idea that humans should transcend their current natural state and limitations through the use of technology – that we should embrace self-directed human evolution. If the history of technological progress can be seen as humankind’s attempt to tame nature to better serve its needs, transhumanism is the logical continuation: the revision of humankind’s nature to better serve its fantasies.
As transhumanist philosopher David Pearce puts it: “If we want to live in paradise, we will have to engineer it ourselves. If we want eternal life, then we’ll need to rewrite our bug-ridden genetic code and become god-like … only hi-tech solutions can ever eradicate suffering from the world. Compassion alone is not enough.”
But there is a darker side to the naive faith that Pearce and other proponents have in transhumanism – one that is decidedly dystopian.
There is unlikely to be a clear moment when we emerge as transhuman. Rather technologies will become more intrusive and integrate seamlessly with the human body. Technology has long been thought of as an extension of the self. Many aspects of our social world, not least our financial systems, are already largely machine-based. There is much to learn from these evolving human/machine hybrid systems.
Yet the often Utopian language and expectations that surround and shape our understanding of these developments have been under-interrogated. The profound changes that lie ahead are often talked about in abstract ways, because evolutionary “advancements” are deemed so radical that they ignore the reality of current social conditions.
In this way, transhumanism becomes a kind of “techno-anthropocentrism”, in which transhumanists often underestimate the complexity of our relationship with technology. They see it as a controllable, malleable tool that, with the correct logic and scientific rigour, can be turned to any end. In fact, just as technological developments are dependent on and reflective of the environment in which they arise, they in turn feed back into the culture and create new dynamics – often imperceptibly.
Situating transhumanism, then, within the broader social, cultural, political, and economic contexts within which it emerges is vital to understanding how ethical it is.
Max More and Natasha Vita-More, in their edited volume The Transhumanist Reader, claim that transhumanism needs “inclusivity, plurality and continuous questioning of our knowledge”.
Yet these three principles are incompatible with developing transformative technologies within the prevailing system from which they are currently emerging: advanced capitalism.
One problem is that a highly competitive social environment doesn’t lend itself to diverse ways of being. Instead it demands increasingly efficient behaviour. Take students, for example. If some have access to pills that allow them to achieve better results, can other students afford not to follow? This is already a quandary. Increasing numbers of students reportedly pop performance-enhancing pills. And if pills become more powerful, or if the enhancements involve genetic engineering or intrusive nanotechnology that offer even stronger competitive advantages, what then? Rejecting an advanced technological orthodoxy could potentially render someone socially and economically moribund (perhaps evolutionarily so), while everyone with access is effectively forced to participate to keep up.
Going beyond everyday limits is suggestive of some kind of liberation. However, here it is an imprisoning compulsion to act a certain way. We literally have to transcend in order to conform (and survive). The more extreme the transcendence, the more profound the decision to conform and the imperative to do so.
The systemic forces cajoling the individual into being “upgraded” to remain competitive also play out on a geo-political level. One area where technology R&D has the greatest transhumanist potential is defence. DARPA (the US Department of Defense agency responsible for developing emerging military technologies), which is attempting to create “metabolically dominant soldiers”, is a clear example of how the vested interests of a particular social system could determine the development of radically powerful transformative technologies that have destructive rather than Utopian applications.
The rush to develop super-intelligent AI by globally competitive and mutually distrustful nation states could also become an arms race. In Joel Garreau’s Radical Evolution, the computer scientist and novelist Vernor Vinge describes a scenario in which superhuman intelligence is the “ultimate weapon”. Ideally, mankind would proceed with the utmost care in developing such a powerful and transformative innovation.
There is quite rightly a huge amount of trepidation around the creation of super-intelligence and the emergence of “the singularity” – the idea that once AI reaches a certain level it will rapidly redesign itself, leading to an explosion of intelligence that will quickly surpass that of humans (something that will happen by 2029 according to futurist Ray Kurzweil). If the world takes the shape of whatever the most powerful AI is programmed (or reprograms itself) to desire, it even opens the possibility of evolution taking a turn for the entirely banal – could an AI destroy humankind from a desire to produce the most paperclips for example?
It’s also difficult to conceive of any aspect of humanity that could not be “improved” by being made more efficient at satisfying the demands of a competitive system. It is the system, then, that determines humanity’s evolution – without taking any view on what humans are or what they should be. One of the ways in which advanced capitalism proves extremely dynamic is in its ideology of moral and metaphysical neutrality. As philosopher Michael Sandel says: markets don’t wag fingers. In advanced capitalism, maximising one’s spending power maximises one’s ability to flourish – hence shopping could be said to be a primary moral imperative of the individual.
If biotech has rendered human nature entirely revisable, then it has no grain to direct or constrain our designs on it. And so whose designs will our successor post-human artefacts likely bear? I have little doubt that in our vastly consumerist, media-saturated capitalist economy, market forces will have their way. So – the commercial imperative would be the true architect of the future human.
Whether the evolutionary process is determined by a super-intelligent AI or advanced capitalism, we may be compelled to conform to a perpetual transcendence that only makes us more efficient at activities demanded by the most powerful system. The end point is predictably an entirely nonhuman – though very efficient – technological entity derived from humanity that doesn’t necessarily serve a purpose that a modern-day human would value in any way. The ability to serve the system effectively will be the driving force. This is also true of natural evolution – technology is not a simple tool that allows us to engineer ourselves out of this conundrum. But transhumanism could amplify the speed and least desirable aspects of the process.
For bioethicist Julian Savulescu, the main reason humans must be enhanced is for our species to survive. He says we face a Bermuda Triangle of extinction: radical technological power, liberal democracy and our moral nature. As a transhumanist, Savulescu extols technological progress, also deeming it inevitable and unstoppable. It is liberal democracy – and particularly our moral nature – that should alter.
The failings of humankind to deal with global problems are increasingly obvious. But Savulescu neglects to situate our moral failings within their wider cultural, political and economic context, instead believing that solutions lie within our biological make up.
Yet how would Savulescu’s morality-enhancing technologies be disseminated, prescribed and potentially enforced to address the moral failings they seek to “cure”? This would likely reside in the power structures that may well bear much of the responsibility for these failings in the first place. He’s also quickly drawn into revealing how relative and contestable the concept of “morality” is:
We will need to relax our commitment to maximum protection of privacy. We’re seeing an increase in the surveillance of individuals and that will be necessary if we are to avert the threats that those with antisocial personality disorder, fanaticism, represent through their access to radically enhanced technology.
Such surveillance allows corporations and governments to access and make use of extremely valuable information. In Who Owns the Future?, internet pioneer Jaron Lanier explains:
Troves of dossiers on the private lives and inner beings of ordinary people, collected over digital networks, are packaged into a new private form of elite money … It is a new kind of security the rich trade in, and the value is naturally driven up. It becomes a giant-scale levee inaccessible to ordinary people.
Crucially, this levee is also invisible to most people. Its impacts extend beyond skewing the economic system towards elites to significantly altering the very conception of liberty, because the authority of power is both radically more effective and dispersed.
Foucault’s notion that we live in a panoptic society – one in which the sense of being perpetually watched instils discipline – is now stretched to the point where today’s incessant machinery has been called a “superpanopticon”. The knowledge and information that transhumanist technologies will tend to create could strengthen existing power structures that cement the inherent logic of the system in which the knowledge arises.
This is in part evident in the tendency of algorithms toward race and gender bias, which reflects our already existing social failings. Information technology tends to interpret the world in defined ways: it privileges information that is easily measurable, such as GDP, at the expense of unquantifiable information such as human happiness or well-being. As invasive technologies provide ever more granular data about us, this data may in a very real sense come to define the world – and intangible information may not maintain its rightful place in human affairs.
Existing inequities will surely be magnified with the introduction of highly effective psycho-pharmaceuticals, genetic modification, super intelligence, brain-computer interfaces, nanotechnology, robotic prosthetics, and the possible development of life extension. They are all fundamentally inegalitarian, based on a notion of limitlessness rather than the standard level of physical and mental well-being we’ve come to assume in healthcare. It’s not easy to conceive of a way in which these potentialities can be enjoyed by all.
Sociologist Saskia Sassen writes that an unprecedented, acute concentration of wealth happens alongside these “expulsions” of people from the productive social order. The same advanced economic and technical achievements enable both the wealth and the expulsion of surplus groups. At the same time, she writes, they create a kind of nebulous centrelessness as the locus of power:
The oppressed have often risen against their masters. But today the oppressed have mostly been expelled and survive a great distance from their oppressors … The “oppressor” is increasingly a complex system that combines persons, networks, and machines with no obvious centre.
Surplus populations removed from the productive aspects of the social world may rapidly increase in the near future as improvements in AI and robotics potentially result in significant automation unemployment. Large swaths of society may become productively and economically redundant. For historian Yuval Noah Harari “the most important question in 21st-century economics may well be: what should we do with all the superfluous people?”
We would be left with the scenario of a small elite that has an almost total concentration of wealth with access to the most powerfully transformative technologies in world history and a redundant mass of people, no longer suited to the evolutionary environment in which they find themselves and entirely dependent on the benevolence of that elite. The dehumanising treatment of today’s expelled groups shows that prevailing liberal values in developed countries don’t always extend to those who don’t share the same privilege, race, culture or religion.
In an era of radical technological power, the masses may even represent a significant security threat to the elite, which could be used to justify aggressive and authoritarian actions (perhaps enabled further by a culture of surveillance).
In their transhumanist tract, The Proactionary Imperative, Steve Fuller and Veronika Lipinska argue that we are obliged to pursue techno-scientific progress relentlessly, until we achieve our god-like destiny or infinite power – effectively to serve God by becoming God. They unabashedly reveal the incipient violence and destruction such Promethean aims would require: “replacing the natural with the artificial is so key to proactionary strategy … at least as a serious possibility if not a likelihood [it will lead to] the long-term environmental degradation of the Earth.”
The extent of suffering they would be willing to gamble in their cosmic casino is only fully evident when analysing what their project would mean for individual human beings:
A proactionary world would not merely tolerate risk-taking but outright encourage it, as people are provided with legal incentives to speculate with their bio-economic assets. Living riskily would amount to an entrepreneurship of the self … [proactionaries] seek large long-term benefits for survivors of a revolutionary regime that would permit many harms along the way.
Progress on overdrive will require sacrifices.
The economic fragility that humans may soon be faced with as a result of automation unemployment would likely prove extremely useful to proactionary goals. In a society where vast swaths of people are reliant on handouts for survival, market forces would determine that less social security means people will risk more for a lower reward, so “proactionaries would reinvent the welfare state as a vehicle for fostering securitised risk taking” while “the proactionary state would operate like a venture capitalist writ large”.
At the heart of this is the removal of basic rights for “Humanity 1.0”, Fuller’s term for modern, non-augmented human beings, replaced with duties towards the future augmented Humanity 2.0. Hence the very code of our being can and perhaps must be monetised: “personal autonomy should be seen as a politically licensed franchise whereby individuals understand their bodies as akin to plots of land in what might be called the ‘genetic commons’”.
The neoliberal preoccupation with privatisation would thus extend to human beings. Indeed, the lifetime of debt that is already the reality for most citizens of developed advanced capitalist nations would go a step further when you are born into debt: simply by being alive, “you are invested with capital on which a return is expected”.
Socially moribund masses may thus be forced to serve the technoscientific super-project of Humanity 2.0, which uses the ideology of market fundamentalism in its quest for perpetual progress and maximum productivity. The only significant difference is that the stated aim of godlike capabilities in Humanity 2.0 is overt, as opposed to the undefined end determined by the infinite “progress” of an ever more efficient market logic that we have now.
A new politics
Some transhumanists are beginning to understand that the most serious limitations to what humans can achieve are social and cultural – not technical. However, all too often their reframing of politics falls into the same trap as their techno-centric worldview. They commonly argue that the new political poles are not left and right but techno-conservative and techno-progressive (or even techno-libertarian and techno-sceptic). Meanwhile, Fuller and Lipinska argue that the new political poles will be up and down instead of left and right: those who want to dominate the skies and become all-powerful, and those who want to preserve the Earth and its species-rich diversity. It is a false dichotomy. Preservation of the latter is likely to be necessary for any hope of achieving the former.
Transhumanism and advanced capitalism are two processes that value “progress” and “efficiency” above everything else: the former as a means to power, the latter as a means to profit. Humans become vessels to serve these values. Transhuman possibilities urgently call for a politics with more clearly delineated and explicit humane values, to provide a safer environment in which to foster these profound changes. Where we stand on questions of social justice and environmental sustainability has never been more important. Technology doesn’t allow us to escape these questions; it doesn’t permit political neutrality. On the contrary, it makes our politics matter more than ever. Savulescu is right when he says radical technologies are coming. He is wrong in thinking they will fix our morality. They will reflect it.