How can people and the environment be better protected from the dangers of mercury? That was the question occupying the signatories of the Minamata Convention from September 24 to 29.
The aim of this convention, which entered into force on August 16, is to protect human health and ecosystems from anthropogenic emissions and releases of this highly toxic chemical. The United Nations Environment Programme (UNEP) estimates that 8,900 tonnes of mercury are released into the environment every year (from both natural and human sources).
Multiple harmful effects
Mercury is a highly toxic metal that can impair brain function. According to the World Health Organization (WHO), it can cause tremors and other neuropsychiatric symptoms such as fatigue, insomnia, anorexia, depression, nervousness, irritability and memory problems. It is particularly dangerous for people who eat a lot of fish, since it works its way through the entire food chain.
According to UNEP, the largest source of mercury emissions is artisanal and small-scale gold mining, followed by coal combustion. Miners use mercury to extract gold, and since roughly 15% of gold is produced in artisanal, small-scale operations, the use of mercury is widespread.
Phasing out mercury
The Minamata Convention takes its name from a Japanese fishing town whose inhabitants were severely contaminated over several decades. From 1932 to 1966, a petrochemical plant discharged heavy metals, mercury in particular, affecting more than 10,000 people. This tragedy raised awareness of the dangers of mercury.
Although it will not take effect overnight, the Convention provides a valuable benchmark for future measures limiting the use of mercury. The treaty bans the opening of new mercury mines and provides for the gradual phase-out of mercury in existing gold mines.
It also aims to phase mercury out of a number of everyday products, such as thermometers, and to control releases of mercury and its compounds. Mercury is present in a great many everyday items, from batteries to cosmetics.
Beyond its promises, can the Minamata Convention effectively reduce mercury use worldwide? Several points in the text reveal weaknesses, as some environmental advocacy groups have pointed out.
First, although the treaty requires the countries concerned to draw up action plans to reduce mercury use in artisanal mining, it specifies no precise targets or deadlines.
It is also regrettable that the Minamata Convention is less binding on coal-fired power plants already in operation than on new ones. Only the latter will be required to use the best available technology.
Clearly, the Minamata treaty remains very flexible, with implementation depending on when each country chooses to act.
The question of funding
Funding is another major weakness of the treaty: implementing measures against the effects of mercury depends on each country's capacities. The lack of commitment from industrialized countries does not help, as they prefer to rely on the Global Environment Facility (GEF) to reduce mercury use.
Finally, some countries where mercury is widely used in artisanal, small-scale gold mining, such as India, Suriname and the Philippines, have yet to ratify the treaty.
Despite its weaknesses, the treaty reflects the international community's will to act against the harmful effects of mercury. It is a valuable initiative, but only a first step.
The cost of energy in the UK is once again a hot topic. During the party conference season, Nicola Sturgeon, the first minister of Scotland, announced that the Scottish government will set up a publicly owned, not-for-profit energy company. Labour’s Jeremy Corbyn restated his wish to nationalise utility companies to “stop the public being ripped off”. And the Conservative prime minister Theresa May promised to fix the “broken” energy market, in part by imposing a cap on some domestic energy prices.
The UK government swiftly followed this season of rhetoric with two supporting policy announcements. First, it has drawn up draft legislation to set an energy price cap, although this may take until the winter of 2018/19 to be enacted. Second, it has published a clean growth strategy, which promises “cleaner air, lower energy bills, greater economic security and a natural environment protected and enhanced for the future”.
It’s not easy to address the social, environmental and economic dimensions of domestic energy in one go, as these different goals interact with each other. For example, a price cap clearly makes energy more affordable, but it doesn’t reduce the amount of energy needed or used. While the sheer price of energy is problematic for many people, so too is inefficient housing which increases bills and associated greenhouse gas emissions.
The clean growth strategy addresses this by reconfirming a commitment to require large energy companies to install efficiency measures such as insulation and heating systems. This scheme, the energy company obligation (ECO), now has £3.6 billion in funding through to 2028. It aims to help 2.5m fuel-poor households. Alongside stricter regulations within the private rented sector, the ECO is intended to upgrade all fuel-poor homes to a decent standard by 2030.
But it’s worth putting the rhetoric and promises of these policy announcements into context. Help for people in fuel poverty has decreased since 2010, largely due to the coalition government abandoning publicly funded schemes in England in favour of privately funded energy supplier obligations like ECO. Though social and environmental policies do add to fuel bills, policymakers assume that this increase is more than offset by people using less energy thanks to efficiency savings.
In our research we are currently looking at whether ECO is an effective way to address affordability and energy efficiency in vulnerable people’s homes. England is the only one of the four UK nations that relies solely on this market-driven scheme, so it’s important to evaluate its impact. We recently highlighted a number of potential problems, and solutions. To begin with, only certain people are eligible. Proxies such as welfare benefits, demographics and postcodes are used, but they can arbitrarily exclude households on the margins of these measures who may indeed be vulnerable.
People also struggle to upgrade their homes if the work does not enable a certain amount of carbon savings at a certain price. In other words, private companies are likely to prioritise meeting their statutory obligations rather than finding and helping the most vulnerable households. Even for those that do secure funding, it’s at best a long and complicated process. Some upgrades are never completed because installers are not equipped to manage the needs of people with, for example, disabilities or mental health conditions.
What is clear from our comparative research of the UK nations is that state-funded schemes, such as Nest in Wales and the Home Energy Efficiency Programmes in Scotland, are better able to target, and respond to the needs of, vulnerable households. Market-driven schemes are different as they will, by definition, seek out the most cost-effective work. But this ceases to be an asset once the low-hanging fruit has all been picked, and those with the greatest need (and potentially higher costs) are left subsidising other people’s housing upgrades.
An energy price cap will certainly provide some initial relief. But unless it is continually ratcheted down or extended to more customers it will not provide long-term savings or wider benefits. Increasing investment in energy efficiency ticks more social and environmental boxes, but the regressive approach to funding such a scheme in England means it will continue prioritising cost-effective carbon savings over helping those most in need.
The British government began a 12-week consultation period on Oct. 6 to sort out the details for a near-total ban on its domestic ivory trade.
Conservation groups have long worried that even a legal trade can mask the illicit movement of ivory and stimulate further demand for ivory from poached elephants.
The conservation groups WCS and Stop Ivory applauded the announcement and pledged to work with the government to put the ban in place.
The UK government has announced that it will shut down the legal ivory trade within its borders.
“The decline in the elephant population fueled by poaching for ivory shames our generation,” UK Environment Secretary Michael Gove said in a statement. “The need for radical and robust action to protect one of the world’s most iconic and treasured species is beyond dispute.”
Growing evidence has shown that even a legal trade in ivory can mask illegal sales and feed the demand that drives poaching. Laws in the UK currently allow pieces of worked ivory, such as carvings, made before March 3, 1947, to be sold. Pieces made after that date require a certificate to be traded legally.
On Oct. 6, the government began a 12-week consultation period to define a new set of restrictions, during which time conservation groups and art and antique curators will work together “to ensure there is no room for loopholes which continue to fuel the poaching of elephants,” according to the statement.
Under the plan, the government will make exceptions to allow the trade and sale of musical instruments made with ivory and pieces with small amounts of ivory. Trade in objects of “significant historic, artistic or cultural value” will also be allowed, as well as the movement of pieces to and between museums.
Gove’s office said that the goal will be to identify “narrowly defined and carefully targeted exemptions for items which do not contribute to the poaching of elephants and where a ban would be unwarranted.”
“Illegal ivory hides behind ‘legal’ ivory, and the UK still allows a significant domestic ‘legal’ ivory market,” said Wildlife Conservation Society CEO Cristián Samper in a statement from the organization. “The implementation of a strict ban without loopholes that traders can exploit is essential in the fight against the poaching of elephants and the trafficking in their ivory.”
In just the past 15 years, forest elephant (Loxodonta cyclotis) numbers in Central Africa are down by 66 percent, Samper said. On the continent as a whole, 20,000 elephants die at the hands of poachers each year. Many scientists fear that savanna (Loxodonta africana) and forest elephants could face extinction in the next few decades if those trends continue.
“The only way to save elephants, in addition to strong field and enforcement work, is to ban ivory sales to prevent any opportunities for such laundering,” Samper said.
“The unprecedented crisis we face — with Africa’s natural heritage being destroyed and communities put at risk due to poaching by illegal armed gangs — will only stop when people stop buying ivory,” said John Stephenson, CEO of the NGO Stop Ivory, in the government’s statement. “Along with our partners, we congratulate the government on this important step and look forward to working with it and our colleagues to ensure the ban is implemented robustly and without delay.”
In October 2018, the UK will host an international conference on the illegal wildlife trade, which Gove’s office figures is worth 17 billion pounds ($22.4 billion).
Samper acknowledged the need for “strong field and enforcement work” to protect elephants where they live, but he said that leaders in those countries need help.
“We cannot save elephants if the trade in their ivory continues,” Samper said. “The majority of African elephant range countries … have called on the global community to put an end to the ivory trade.”
“Elephants need dramatic help now,” he added.
Banner image of an elephant in Tanzania by John C. Cannon.
In 2016, as part of a ceremony in Cameroon’s capital Yaoundé, 2,000 elephant tusks were burned to demonstrate the country’s commitment to fight poaching and illegal trade in wildlife. US Ambassador to the United Nations Samantha Power gave a speech at the event linking poaching to terrorism.
The idea that terror groups like Boko Haram fund their activities through ivory poaching in Africa is a simple and compelling narrative. It has been adopted by governments, NGOs and media alike. But it is undermining wildlife conservation and human rights.
The problem is that such claims hinge on a single document which uses only one, unnamed source to estimate terrorist profits from ivory. The study hasn’t been backed up elsewhere.
Similarly, there is little evidence that terrorist activities are funded by wildlife poaching in Cameroon. We have studied wildlife conservation and pastoralism in the Far North Region of Cameroon over the last two decades, and we have found it highly unlikely that Boko Haram is surviving financially on ivory. The elephant populations in the areas where Boko Haram operates are so low that this would be a faulty business plan, to say the least. Only 246 elephants were counted in Waza Park in 2007.
Talking about ivory-funded terrorism overlooks the real sources of income for these groups. In Cameroon and Nigeria evidence shows that Boko Haram is using profits from cattle raids to support its activities. Boko Haram’s plunder of the countryside leaves cattle herders destitute.
The dangers of militarisation
The wrong focus has implications for conservation and human rights. Linking poachers and terrorists has led to a further militarisation of conservation areas in Africa. More guns and guards have been sent into parks to stop poachers.
The military approach has also led to serious human rights violations. These take the form of shoot-on-sight policies and other violent tactics carried out against local populations. Law enforcement in protected areas is important for controlling poaching and terrorism alike but it is not a perfect solution.
And wildlife conservation can suffer if well armed but underpaid park guards turn to poaching themselves.
It would be more helpful if properly paid and trained people provided security across the region rather than just in protected areas.
Consequences of the wrong connection
Ignoring the fact that cattle, not ivory, may be fuelling terrorism in places like Cameroon does a disservice to pastoralists. While livestock may compete with wildlife when pastoralists take refuge inside better-protected areas like parks, they do so only because their livelihoods are at risk.
Mistaking the true source of income for terrorist groups also means that their violent activities continue.
Finally, it diverts attention from corrupt conservation and government officials who may be complicit in poaching.
Of course, this is not to say that poaching is not happening. The dramatic declines in elephant populations in Cameroon and elsewhere in Africa indicate otherwise. The question is who is doing the poaching and why.
We challenge governments and organisations interested in wildlife, security and human rights to take a closer look at the evidence. Instead of sharing simple claims about terrorism and poaching, they should consider all the forms of economic support to terrorist organisations.
In Cameroon, this would mean offering better security for pastoralists and their cattle. Protecting cattle does not have the same appeal for Western audiences as protecting elephants. But it could be a way to conserve wildlife, protect human rights and stop funding for terrorism.
Young people in Indonesia, China and India are entering a shifting jobs landscape propelled by innovation in digital technology. China and India are moving to prepare their populations to take advantage of the digital era.
But Indonesia has a lot of catching up to do to equip its people with skills, including digital literacy, so they can find employment in a world where the ability to use the internet on devices such as personal computers, smartphones and tablets will be essential.
Changing jobs landscape
Before there was the internet, around 30 years ago, more than half of Indonesia’s population (54.7% in 1985) worked the land as farmers.
Data from Indonesia’s Statistics Agency show more than half of Indonesian workers (51.5%) are underqualified or lack the right skills to do the job. This occupational mismatch is often associated with low levels of education. Some 40% of workers’ skills and employment are well matched. And 8.5% are overqualified for their occupations.
The data show Indonesia is facing a skills shortage. One of the skills Indonesians lack is digital literacy.
What China and India are doing
China provides us with a good example of how to take advantage of an internet-enabled digital economy. It accounted for 30.6% of China’s GDP in 2016.
Even though China has restricted its citizens’ internet access since 1997 by blocking certain websites and applications (the “Great Firewall of China”), this has in turn driven the development of native platforms such as WeChat, Weibo, QQ, Renren, Alibaba, JD.com and many others.
With its restrictions, China has reoriented internet adoption and online behaviours by maximising its market potential within the country.
For example, WeChat has grown rapidly since 2011 to rival Facebook and become the nation’s most-used social media app. It has radically changed how Chinese people live and do business, and could eventually overtake Facebook.
It offers features such as instant messaging, commerce and mobile payment services. It makes a virtual workplace possible by offering components that enable and improve important business functions such as task co-ordination. It provides a convenient virtual wallet that can be used for almost every transaction, from paying utility bills to a coffee.
The Chinese diaspora has spread the use of WeChat worldwide. This is an example of China using its demographic bonus to create opportunities and a competitive environment that allow its citizens to redefine the global economic balance of power.
Meanwhile, India made a serious move to combat digital illiteracy by establishing the National Digital Literacy Mission (NDLM) in August 2014.
With the objective of “making one person in every family digitally literate by 2020”, India has pledged to provide 147 million people in rural India with the skills needed to use digital technology.
This can be seen as a positive move towards a more digital-savvy India that recognises the need of digital literacy for development.
What about Indonesia?
Indonesia currently focuses on traditional infrastructure development, such as roads, ports and a subway system, to improve physical connectivity and mobility. But the government should not lose sight of the importance of providing the population with the infrastructure to access information and technology.
According to Akamai, as of March 2017 the internet penetration rate in Indonesia was 50.4%. This is lower than in neighbouring countries such as Australia (85.9%), Singapore (81.2%), Malaysia (67.7%), Thailand (60%), Vietnam (52.1%) and the Philippines (52%).
The average speed of internet connection in Indonesia (7.2 Mbps) is also slower compared to Singapore (20.3 Mbps), Thailand (16.0 Mbps), Vietnam (9.5 Mbps) and Malaysia (8.9 Mbps).
Despite high smartphone sales (55.4 million users in 2015, with 4.5 million smartphones sold annually), Indonesia remains “a marketplace” rather than a rising power in the global competition. Indonesia is the third-largest smartphone market in the Asia-Pacific region, after India and China.
Indonesians can use social media such as Facebook and WhatsApp, but they do not have fast and reliable internet access to browse and research online, let alone create business opportunities.
Research by Edwin Jurriens and Ross Tapsell recommends that the Indonesian government start paying attention to the digital divide if Indonesia is serious about its objective to combat inequality.
Start with simple but necessary steps
Indonesia needs to develop policies with clear objectives to spur internet adoption and digital literacy. These could include:
- working with the private sector to provide internet access and telecommunication services for rural areas
- training citizens to use digital technology via formal and informal education programs nationwide
- promoting and providing incentives to develop native online platforms.
This could involve, for example, holding hackathons to solve the real issues that Indonesians face daily, such as traffic jams, floods, finding markets for local products, access to health services and referral, options for different service providers, a channel to provide feedback to improve services, etc.
Instead of leaping towards the objective of creating “technopreneurs”, Indonesia could begin with a simple objective to start a nationwide movement to combat digital illiteracy, a hidden inequality that persists in Indonesia.
Indonesia should also provide an environment where tech startups can thrive, through tax rebates and investments, to really benefit the Indonesian economy.
For example, Gojek, one of the most successful local startups, was founded and is led by Nadiem Makarim, an Indonesian. However, it could only succeed after receiving backing and investment from Warburg Pincus, KKR and Farallon Capital, all US-based investment firms.
We may celebrate Gojek as a successful Indonesian example of a startup that has helped to solve local issues by allowing access to convenient services. But if we fail to understand who “the real owners” of the business are, Indonesia will only be “a marketplace”, not an emerging economy.
The investigative series Indonesia for Sale, launching this week, shines new light on the corruption behind Indonesia’s deforestation and land rights crisis.
In-depth stories, to be released over the coming months, will expose the role of collusion between palm oil firms and politicians in subverting Indonesia’s democracy. They will be published in English and Indonesian.
The series is the product of nine months’ reporting across the country, interviewing fixers, middlemen, lawyers and companies involved in land deals, and those most affected by them.
Indonesia for Sale is a collaboration between Mongabay and The Gecko Project, an investigative reporting initiative established by UK-based nonprofit Earthsight.
Indonesia, a nation of thousands of islands draped across the equator, is in the grips of a social and environmental crisis.
Its rainforests are being destroyed at a catastrophic rate. Nearly every year it is cloaked in a choking haze from burning peatlands. Thousands of conflicts over land persist across the archipelago. It is one of the most unequal societies on earth, with half of its wealth controlled by 1 percent of the population. Local elections, where power over millions of people is decided, descend into a brazen display of vote-buying and bribery.
Many of the causes of these problems can be traced back to one source: the corrupt actions of a small number of politicians who have taken control of Indonesia’s districts.
In the turbulent years after the fall of the dictator Suharto in 1998, huge powers were transferred from the central government to Indonesia’s districts. Specifically, to the bupatis, the elected officials who presided over these jurisdictions, and who assumed new control over how land and forests within them could be used.
Within a few short years, the bupatis had built fiefdoms across Indonesia. They used their newfound powers to cash in on natural resources, bankroll elections and build dynasties by installing relatives as their successors and in other influential positions.
Under their watch, oil palm plantation companies were granted millions of hectares of land and forests. Much of it was used and owned by indigenous and other rural communities, whose rights were cast aside in favor of the private sector.
Plantation companies have played a central role in the destruction of Indonesia’s rainforests. They have drained its peat swamps, rendering vast landscapes prone to outbreaks of fire. They have taken community lands and offered little in return, sparking intractable conflicts.
The land deals overseen by the bupatis concentrated immense territories in the hands of conglomerates owned by super-rich oligarchs from around Southeast Asia. At the same time, they deprived many of the poorest rural families of access to the fields and forests on which they depend for their livelihoods and food security. While successive national governments paid lip service to the need for land reform, precisely as a means of reducing inequality, the bupatis were busy giving more land to the rich.
Over the past nine months, Mongabay and The Gecko Project have investigated the corrupt ways in which government officials handed out vast tracts of Indonesia to private firms. We traveled to the heart of Borneo, to the swamplands of southern Kalimantan, to a paradise island of mangrove forests, and to a remote corner of eastern Indonesia. We met with indigenous activists who carried out their own investigations into the officials pillaging their land, and with fixers who facilitated deals between politicians and companies in Jakarta hotels.
Over the coming weeks we will release our findings in a series of articles and short films collectively titled Indonesia for Sale. The series is centered around three case studies, each shedding light on a central component of the way in which large swaths of the country have been transferred by corrupt politicians into private hands.
The first installment, “The palm oil fiefdom,” shines a spotlight on a bupati in Borneo who tried to turn almost the entire southern half of his district into one giant oil palm plantation, for the benefit of his relatives and cronies. It delves into one of the most egregious examples of a system in which district chiefs collude with private companies to exploit their office, with devastating consequences for people and the environment.
The next installment follows the money trail that ended in the bribery of Akil Mochtar, chief justice of the nation’s highest court, to secure an election win in Borneo. It lays bare the connection between natural resources, land deals and money politics, and the middlemen who serve as the connective tissue in that relationship.
The final installment exposes a shadowy cabal that constitutes the largest single threat to Indonesia’s forests today, with links from Papua to Malaysia to Yemen. It reveals the methods these individuals are using to hide their identities and the illegality of their projects as they forge east into the archipelago’s last frontier.
These will be supported by articles that explore broader issues raised by our investigations. For example, the role of brokers in facilitating oil palm deals, the tricks employed by companies to acquire land from indigenous groups, and the widespread failure of plantation firms in Indonesia to provide smallholdings for nearby communities, as required by law.
For more than a decade, the fate of Indonesia’s forests has been recognized as a global problem. The expansion of agriculture into these carbon-rich ecosystems has made the nation a leading greenhouse gas emitter.
But for all of the responses that have been devised by policymakers and the private sector, plantation companies continue to destroy forests and violate human rights. Many policies have failed because corrupt politicians have been allowed to collude with the private sector in a vacuum of accountability and scrutiny.
Indonesia for Sale puts these politicians firmly in the spotlight.
Follow Mongabay and The Gecko Project on Facebook for updates on the series.
Racial bias can seem like an intractable problem. Psychologists and other social scientists have had difficulty finding effective ways to counter it – even among people who say they support a fairer, more egalitarian society. One likely reason for the difficulty is that most efforts have been directed toward adults, whose biases and prejudices are often firmly entrenched.
My colleagues and I are starting to take a new look at the problem of racial bias by investigating its origins in early childhood. As we learn more about how biases take hold, will we eventually be able to intervene before any biases become permanent?
Measuring racial bias
When psychology researchers first began studying racial biases, they simply asked individuals to describe their thoughts and feelings about particular groups of people. A well-known problem with these measures of explicit bias is that people often try to respond to researchers in ways they think are socially appropriate.
Starting in the 1990s, researchers began to develop methods to assess implicit bias, which is less conscious and less controllable than explicit bias. The most widely used test is the Implicit Association Test (IAT), which lets researchers measure whether individuals have more positive associations with some racial groups than others. However, an important limitation of this test is that it only works well with individuals who are at least six years old – the instructions are too complex for younger children to remember.
Recently, my colleagues and I developed a new way to measure bias, which we call the Implicit Racial Bias Test. This test can be used with children as young as age three, as well as with older children and adults. This test assesses bias in a manner similar to the IAT but with different instructions.
Here’s how a version of the test to detect an implicit bias that favors white people over black people would work: We show participants a series of black and white faces on a touchscreen device. Each photo is accompanied by a cartoon smile on one side of the screen and a cartoon frown on the other.
In one part of the test, we ask participants to touch the cartoon smile as quickly as possible whenever a black face appears, and the cartoon frown as quickly as possible whenever a white face appears. In another part of the test, the instructions are reversed.
The difference in the amount of time it takes to follow one set of instructions versus the other is used to compute the individual’s level of implicit bias. The reasoning is that it takes more time and effort to respond in ways that go against our intuitions.
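The scoring idea described above can be sketched in a few lines. This is an illustrative reconstruction, not the researchers’ actual code: the function name, the sample data and the scaling by the pooled standard deviation (in the spirit of the IAT’s D score) are all assumptions.

```python
from statistics import mean, stdev

def implicit_bias_score(congruent_rts, incongruent_rts):
    """Bias score: difference in mean response time between the two
    instruction conditions, scaled by the spread of all responses.
    A positive score means slower responses under the reversed
    (counter-intuitive) instructions, i.e. more effort was needed."""
    all_rts = list(congruent_rts) + list(incongruent_rts)
    return (mean(incongruent_rts) - mean(congruent_rts)) / stdev(all_rts)

# Hypothetical response times in milliseconds for one participant
congruent = [620, 580, 640, 600, 610]    # instructions match intuitions
incongruent = [750, 800, 720, 770, 760]  # instructions reversed
print(round(implicit_bias_score(congruent, incongruent), 2))
```

With these made-up numbers the score is positive, reflecting the slower responses in the reversed condition; a participant with no bias would score near zero.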
Some studies suggest that precursors of racial bias can be detected in infancy. In one such study, researchers measured how long infants looked at faces of their own race or another race that were paired with happy or sad music. They found that 9-month-olds looked longer when the faces of their own race were paired with the happy music, which was different from the pattern of looking times for the other-race faces. This result suggests that the tendency to prefer faces that match one’s own race begins in infancy.
These early patterns of response arise from a basic psychological tendency to like and approach things that seem familiar, and dislike and avoid things that seem unfamiliar. Some researchers think that these tendencies have roots in our evolutionary history because they help people to build alliances within their social groups.
However, these biases can change over time. For example, young black children in Cameroon show an implicit bias in favor of black people versus white people as part of a general tendency to prefer in-group members, who are people who share characteristics with you. But this pattern reverses in adulthood, as individuals are repeatedly exposed to cultural messages indicating that white people have higher social status than black people.
A new approach to tackling bias
Researchers have long recognized that racial bias is associated with dehumanization. When people are biased against individuals of other races, they tend to view them as part of an undifferentiated group rather than as specific individuals. Giving adults practice at distinguishing among individuals of other races leads to a reduction in implicit bias, but these effects tend to be quite short-lived.
In our new research, we adapted this individuation approach for use with young children. Using a custom-built training app, young children learn to identify five individuals of another race during a 20-minute session. We found that 5-year-olds who participated showed no implicit racial bias immediately after the training.
Although the effects of a single session were short-lived, an additional 20-minute booster session one week later allowed children to maintain about half of their initial bias reduction for two months. We are currently working on a game-like version of the app for further testing.
Only a starting point
Although our approach suggests a promising new direction for reducing racial bias, it is important to note that this is not a magic bullet. Other aspects of the tendency to dehumanize individuals of different races also need to be investigated, such as people’s diminished level of interest in the mental life of individuals who are outside of their social group. Because well-intended efforts to reduce racial bias can sometimes be ineffective or produce unintended consequences, any new approaches that are developed will need to be rigorously evaluated.
And of course the problem of racial bias is not one that can be solved by addressing the beliefs of individuals alone. Tackling the problem also requires addressing the broader social and economic factors that promote and maintain biased beliefs and behaviors.
<p>Two families go bowling. While they are bowling, they order a pizza for £12, six sodas for £1.25 each, and two large buckets of popcorn for £10.86 each. If they are going to split the bill between the families, <a href="http://www.tests.com/practice/WISC-Practice-Test">how much</a> does each family owe?</p>
<p>Online IQ “quizzes” <a href="https://geniustests.com/">purport</a> to be able to tell you whether or not “you have what it takes to be a member of the world’s most prestigious high IQ society”.</p>
<p>If you want to boast about your high IQ, you should have been able to work out the answers to the questions. When John is 16 he’ll be twice as old as his brother. The two families who went bowling each owe £20.61. And 49 is the missing number in the sequence. </p>
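The bowling answer can be verified with a few lines of arithmetic (assuming, as the stated answer implies, that the £10.86 is the price per bucket of popcorn):

```python
# Recomputing the bowling-bill practice question from above
pizza = 12.00
sodas = 6 * 1.25          # six sodas at £1.25 each
popcorn = 2 * 10.86       # two buckets at £10.86 each (assumed per-bucket price)
total = pizza + sodas + popcorn   # £41.22
per_family = total / 2            # the bill is split between two families
print(f"£{per_family:.2f}")       # £20.61
```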
<p>Despite the hype, the relevance, usefulness, and legitimacy of the IQ test is still <a href="http://www.jstor.org/stable/1466807">hotly debated</a> among educators, social scientists, and hard scientists. To understand why, it’s important to understand the history underpinning the birth, development, and expansion of the IQ test – a <a href="http://www.jstor.org/stable/799646">history</a> that includes the use of IQ tests to further marginalise ethnic minorities and poor communities. </p>
<p>In the early 1900s, dozens of intelligence tests were developed in Europe and America claiming to offer unbiased ways to measure a person’s cognitive ability. The first of these tests was developed by French psychologist Alfred Binet, who was commissioned by the French government to identify students who would face the most difficulty in school. The resulting 1905 Binet-Simon Scale became the basis for modern IQ testing. Ironically, Binet himself thought that IQ tests were inadequate measures of intelligence, pointing to their inability to properly capture creativity or emotional intelligence.
<p>At its conception, the IQ test provided a relatively quick and simple way to identify and sort individuals based on intelligence – which was and still is highly valued by society. In the US and elsewhere, institutions such as the military and police used IQ tests to screen potential applicants. They also implemented admission requirements based on the results.
<p>The <a href="http://www.jstor.org/stable/367145?seq=1#page_scan_tab_contents">US Army Alpha and Beta Tests</a> screened approximately 1.75m draftees in World War I in an attempt to evaluate the intellectual and emotional temperament of soldiers. Results were used to determine how capable a soldier was of serving in the armed forces and to identify which job classification or leadership position one was most suitable for. Starting in the early 1900s, the US education system also began using IQ tests to identify “gifted and talented” students, as well as those with special needs who required additional educational interventions and different academic environments.
<p>Alongside the widespread use of IQ tests in the 20th century was the argument that the level of a person’s intelligence was influenced by their biology. Ethnocentrics and eugenicists, who viewed intelligence and other social behaviours as being determined by biology and race, latched onto IQ tests. They held up the apparent gaps these tests illuminated between ethnic minorities and whites or between low- and high-income groups.
<p>Some maintained that these test results provided further evidence that socioeconomic and racial groups were <a href="http://www.jstor.org/stable/20373194">genetically different</a> from each other and that systemic inequalities were partly a byproduct of evolutionary processes. </p>
<h2>Going to extremes</h2>
<p>The US Army Alpha and Beta test results garnered widespread publicity and were analysed by Carl Brigham, a Princeton University psychologist and pioneer of psychometrics, in his 1923 book A Study of American Intelligence. Brigham applied meticulous statistical analyses to argue that American intelligence was declining, claiming that increased immigration and racial integration were to blame. To address the issue, he called for social policies to restrict immigration and prohibit racial mixing.
<p>Lewis Terman, the Stanford psychologist who adapted Binet’s test into the widely used Stanford-Binet, made similar claims in his 1916 book The Measurement of Intelligence:</p>
<p>High-grade or border-line deficiency … is very, very common among Spanish-Indian and Mexican families of the Southwest and also among Negroes. Their dullness seems to be racial, or at least inherent in the family stocks from which they come … Children of this group should be segregated into separate classes … They cannot master abstractions but they can often be made into efficient workers … from a eugenic point of view they constitute a grave problem because of their unusually prolific breeding.</p>
<p>There has been considerable work from both hard and social scientists refuting arguments such as Brigham’s and Terman’s that racial differences in IQ scores are influenced by biology.
<p>But in their <a href="http://www.jstor.org/stable/10.1525/j.ctt1pn5jp">darkest moments</a>, IQ tests became a powerful way to exclude and control marginalised communities using empirical and scientific language. Supporters of eugenic ideologies in the 1900s used IQ tests to identify “idiots”, “imbeciles”, and the “feebleminded”. These were people, eugenicists argued, who threatened to dilute the White Anglo-Saxon genetic stock of America.
<p>As a result of such eugenic arguments, many American citizens were later <a href="https://www.ncbi.nlm.nih.gov/pubmed/3299450">sterilised</a>. In 1927, an infamous ruling by the US Supreme Court legalised forced sterilisation of citizens with developmental disabilities and the “feebleminded,” who were frequently identified by their low IQ scores. The ruling, known as Buck v Bell, resulted in over 65,000 coerced sterilisations of individuals thought to have low IQs. Those in the US who were forcibly sterilised in the aftermath of Buck v Bell were disproportionately poor or of colour.
<p>Debate over what it means to be “intelligent” and whether or not the IQ test is a robust tool of measurement continues to elicit strong and often opposing reactions today. Some researchers say that intelligence is a concept specific to a particular culture. They maintain that it appears differently depending on the context – in the same way that many cultural behaviours would. For example, burping may be seen as an indicator of enjoyment of a meal or a sign of praise for the host in some cultures and impolite in others.
<p>What may be considered intelligent in one environment, therefore, might not in others. For example, knowledge about medicinal herbs is seen as a form of intelligence in certain communities within Africa, but does not correlate with high performance on traditional Western academic intelligence tests.
<p>According to some researchers, the “cultural specificity” of intelligence makes IQ tests biased towards the environments in which they were developed – namely white, Western society. This makes them <a href="http://nrcgt.uconn.edu/newsletters/winter052/">potentially problematic</a> in culturally diverse settings. The application of the same test among different communities would fail to recognise the different cultural values that shape what each community values as intelligent behaviour. </p>
<p>Going even further, given the <a href="http://tap.sagepub.com/content/12/3/283">IQ test’s history</a> of being used to further questionable and sometimes racially-motivated beliefs about what different groups of people are capable of, some researchers say such tests cannot objectively and equally measure an individual’s intelligence at all. </p>
<h2>Used for good</h2>
<p>At the same time, there are ongoing efforts to demonstrate how the IQ test can be used to help those very communities who have been most harmed by it in the past. In 2002, the US Supreme Court ruled it unconstitutional to execute criminally convicted individuals with intellectual disabilities, who are often identified using IQ tests. This has meant IQ tests have actually prevented individuals from facing “cruel and unusual punishment” in the US court of law.
<p>In education, IQ tests may be a more objective way to identify children who could benefit from special education services. This includes programmes known as “gifted education” for students who have been identified as exceptionally or highly cognitively able. Ethnic minority children and those whose parents have a low income are under-represented in gifted education.
<p>The way children are chosen for these programmes means that Black and Hispanic students are often overlooked. Some US school districts employ admissions procedures for gifted education programmes that rely on teacher observations and referrals or require a family to sign their child up for an IQ test. But research suggests that teachers’ perceptions and expectations of a student, which can be preconceived, have an impact on a child’s IQ scores, academic achievement, and attitudes and behaviour. This means that teachers’ perceptions can also affect the likelihood of a child being referred for gifted or special education.
<p>The <a href="http://journals.sagepub.com/doi/abs/10.1177/2372732215621310">universal screening</a> of students for gifted education using IQ tests could help to identify children who otherwise would have gone unnoticed by parents and teachers. Research has found that those school districts which have implemented screening measures for all children using IQ tests have been able to identify more children from historically underrepresented groups to go into gifted education.
<p>Identifying these issues could then <a href="http://www.sciencedirect.com/science/article/pii/0277953696000287">help</a> those in charge of education and social policy to seek solutions. Specific interventions could be designed to help children who have been affected by these structural inequalities or exposed to harmful substances. In the long run, the effectiveness of these interventions could be monitored by comparing IQ tests administered to the same children before and after an intervention. </p>
<p>Some researchers have tried doing this. One US <a href="https://link.springer.com/article/10.1007%2FBF01712768?LI=true">study in 1995 used IQ tests</a> to look at the effectiveness of a particular type of training for managing Attention Deficit/Hyperactivity Disorder (ADHD), called neurofeedback training. This is a therapeutic process aimed at trying to help a person to self-regulate their brain function. Most commonly used with those who have some sort of identified brain imbalance, it has also been used to treat drug addiction, depression and ADHD. The researchers used IQ tests to find out whether the training was effective in improving the concentration and executive functioning of children with ADHD – and found that it was.
<p>Since its invention, the IQ test has generated strong arguments in support of and against its use. Both sides are focused on the communities that have been negatively impacted in the past by the use of intelligence tests for eugenic purposes.
When we hear about the horrors of industrial livestock farming – the pollution, the waste, the miserable lives of billions of animals – it is hard not to feel a twinge of guilt and conclude that we should eat less meat.
Yet most of us probably won’t. Instead, we will mumble something about meat being tasty, that “everyone” eats it, and that we only buy “grass fed” beef.
Over the next year, more than 50 billion land animals will be raised and slaughtered for food around the world. Most of them will be reared in conditions that cause them to suffer unnecessarily while also harming people and the environment in significant ways.
This raises serious ethical problems. We’ve compiled a list of arguments against eating meat to help you decide for yourself what to put on your plate.
1. The environmental impact is huge
Livestock farming has a vast environmental footprint. It contributes to land and water degradation, biodiversity loss, acid rain, coral reef degeneration and deforestation, and is a major source of greenhouse gas emissions.
Climate change alone poses multiple risks to health and well-being through increased risk of extreme weather events – such as floods, droughts and heatwaves – and has been described as the greatest threat to human health in the 21st century.
2. It requires masses of grain, water and land
Meat production is highly inefficient – this is particularly true of red meat. Producing one kilogram of beef requires 25 kilograms of grain – to feed the animal – and roughly 15,000 litres of water. Pork is a little less intensive and chicken less still.
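A quick back-of-envelope script makes these figures concrete; the numbers come straight from the text, while the function itself is just our illustrative helper:

```python
# Resource cost of beef, using the figures quoted above
GRAIN_PER_KG_BEEF = 25        # kg of grain per kg of beef
WATER_PER_KG_BEEF = 15_000    # litres of water per kg of beef

def beef_footprint(kg_beef):
    """Return (grain in kg, water in litres) needed for `kg_beef` of beef."""
    return kg_beef * GRAIN_PER_KG_BEEF, kg_beef * WATER_PER_KG_BEEF

grain, water = beef_footprint(1)
print(grain, water)  # 25 15000 — the cost of a single kilogram of beef
```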
The scale of the problem can also be seen in land use: around 30% of the earth’s land surface is currently used for livestock farming. Since food, water and land are scarce in many parts of the world, this represents an inefficient use of resources.
3. It hurts the global poor
Feeding grain to livestock increases global demand and drives up grain prices, making it harder for the world’s poor to feed themselves. Grain could instead be used to feed people, and water used to irrigate crops.
If all grain were fed to humans instead of animals, we could feed an extra 3.5 billion people. In short, industrial livestock farming is not only inefficient but also not equitable.
4. It harms our health
The high meat consumption typical of most rich industrialised countries – especially of red and processed meat – is linked with poor health outcomes, including heart disease, stroke, diabetes and various cancers.
These diseases represent a major portion of the global disease burden so reducing consumption could offer substantial public health benefits.
Currently, the average meat intake for someone living in a high-income country is 200-250g a day, far higher than the 80-90g recommended by the United Nations. Switching to a more plant-based diet could save up to 8m lives a year worldwide by 2050 and lead to healthcare related savings and avoided climate change damages of up to $1.5 trillion.
5. Ultimately, it’s unethical
Most people agree that as a basic rule an action that promotes the overall happiness of others is morally good, while an action that causes harm or suffering without good justification is morally wrong.
Meat eating is wrong not because there is something special about pigs or chickens or dogs or cats, but because of the harm it causes, whether that harm is caused to animals, humans, or the wider environment.
Most people living in industrialised countries have historically unprecedented dietary choice. And if our nutritional needs can now be met by consuming foods that are less harmful, then we ought to choose these over foods that are known to cause more harm.
Eating less meat and animal products is one of the easiest things we can do to live more ethically.
A Japanese atmospheric transport model estimated that more than 45% of the ambient PM2.5 (fine particulate matter) concentration in Nonodake (350km north of Tokyo) comes from China. Although reducing this pollution in a coordinated way will be a difficult task, real-time data exchange (as proposed by NEASPEC) might be relatively easier to achieve.
If the Northeast Asian countries share real-time emissions data as well as the currently available meteorological data, they could generate more reliable pollution forecasts and help people prepare for high-pollution events. The harder task of particle pollution mitigation will be better addressed when the level of negotiating partners is upgraded from the current ministerial level to head of state level.
Developing neighbour-friendly energies
If Northeast Asia is to have a sustainable energy future, more regional cooperation will be required.
The proposed trilateral Russia-China-Korea natural gas pipeline would bring Russian natural gas to South Korea. Natural gas is not a sustainable energy source, but it can be a “bridging fuel” that helps countries reduce their greenhouse gas emissions by replacing coal until their renewable energy technology and systems mature. A pipeline is therefore an attractive option for South Korea, the world’s second-biggest LNG importer after Japan.
The other energy option, the Asia international grid connection, is a project promoted by South Korea, Japan, and Mongolia. The basic idea is that the vast solar and wind energy potential of Mongolia’s Gobi Desert could be harnessed by South Korea and Japan, with a super grid connecting the countries of Northeast Asia.
If further research can find evidence that the project will significantly improve China’s air quality by reducing coal consumption, national governments of the region might help make it happen.
Of course, true green détente in Northeast Asia cannot happen without North Korea’s support and participation. However, if any of the four options reviewed becomes reality, it will give North Korea a strong incentive to cooperate.
Tuna is one of the most ubiquitous seafoods. It can be eaten from a can or as high-end sashimi and in many forms in between. But some species are over-fished and some fishing methods are unsustainable. How do you know which type of tuna you’re eating?
Some tuna is certified as sustainably caught by groups such as the Marine Stewardship Council (MSC) that set standards for sustainable fishing. But these certifications are only good if they are credible.
In late August, several media outlets published stories about On the Hook, a new campaign by a consortium of retailers and academics who have taken issue with some fishing practices allowed by the MSC. As a university professor whose research focuses on private seafood governance, including certifications and traceability, and fisheries policy, I am deeply familiar with the issues at hand. I support the campaign, but don’t stand to gain from the outcome.
The Western and Central Pacific skipjack tuna fishery is one of the world’s biggest. Some of the tuna caught here carries the MSC’s blue label, identifying it as the best environmental choice for consumers. But the same boats making that sustainable catch may also use unsustainable methods to catch unsustainable fish on the same day.
The On the Hook coalition sees this as at odds with the MSC certification, as do I. Yes, sustainable and unsustainable fish can be separated; there are people on board whose sole job is to do this. But rewarding fishermen for their sustainable catch, while allowing them to fish unsustainably, dupes consumers into supporting companies that take part in bad behaviour.
Does sorting work?
The On the Hook campaign singles out one fishery in particular: the “purse seine” fishery in the tropical western Pacific Ocean. This fishery covers the waters of eight island nations, including Micronesia, the Marshall Islands, Papua New Guinea and the Solomon Islands. Under the Nauru Agreement, these nations, usually referred to as the Parties to the Nauru Agreement (PNA), collectively control access to about one quarter of the world’s tuna supply.
Fishermen can use nets to catch free-swimming adult tuna and earn MSC certification for their catch. But these same fishermen can also use fish aggregating devices (FADs) — instruments that attract all kinds of marine life, including adult tuna, juvenile tuna and hundreds of species of sharks, turtles and other fish — to net their catch. Fishing on FADs is faster and less costly, but these devices are associated with high levels of bycatch, one of the main sustainability concerns in many fisheries. Fishing on FADs does not earn MSC certification.
Under normal operations, the fishermen use both methods. “Compartmentalization” is a technique that allows the unsustainable portion of the fish to be separated on board the vessel from the sustainable portion. This is supposed to provide assurance to consumers that they are making a sustainable choice. Yet the negative environmental impacts connected to FAD fishing operations should surely also be considered in an MSC assessment. Currently, this does not happen.
Compartmentalization remains necessary because there isn’t enough of an economic advantage for companies to make only sustainable catches. It costs fishermen more to fish sustainably because they have to find the tuna, instead of waiting for it to come to the FAD.
A fleet using both methods can be part of a higher value premium market and earn financial security from the high volume, yet unsustainable, fishery. If purse-seining tuna vessels need to subsidize their sustainable fishing with unsustainable practices, then MSC certification has not provided the incentive it set out to.
A holistic fishery
Millions of tonnes of tuna have been fished from the waters of the Western and Central Pacific fishery. But the countries controlling these waters have not benefited to a large extent, mostly due to a lack of cooperation in bargaining for benefits, which allowed distant nations to exploit the fishery.
In the past decade, these Pacific Island states have increased their bargaining power in regional negotiations by implementing a scheme that controls the number of boats that can enter their waters. Under the program, called the vessel day scheme (VDS), these countries can now charge higher fees to boats that want access.
For example, PNA countries used to extract between three per cent and six per cent of the value of the tuna fishery in their waters. Since their bargaining power has increased, they can now extract more than 14 per cent of that value, and this number is likely to continue to rise.
This is no small accomplishment for these Pacific Island nations, and other coastal state collectives are now trying to emulate their success. But this does not mean that all of the practices they allow are commendable, including those that are not representative of the “best environmental choice” in seafood.
On my Facebook feed, a colleague recently commented: “A Pacific Islander owned sustainably certified fishery is the wrong target.”
Let me clear up this misconception. The On the Hook campaign is not targeting the PNA, but the MSC. It would like the MSC to delay the recertification — authorized by the accreditation body in September — of the PNA fishery until the compartmentalization practice has been addressed. The fishery needs to be considered holistically.
For example, the MSC could specify that to earn a certification, a boat cannot fish sustainably and unsustainably on the same fishing trip. Consumer dollars should not be supporting the very practices the MSC condemns.
Another colleague remarked that because the PNA is challenging big industry, the On the Hook campaign might benefit big industry and hurt the PNA. In fact, it is the same boats, the same fleet, the same companies that are fishing MSC-certified tuna and on FADs.
My colleagues also worry that the campaign calls into question the credibility of the MSC label. But this has actually become commonplace, with many groups pointing at examples of certified fisheries that are not sustainable. For example, the WWF has recommended that seafood buyers should stay away from MSC-certified Mexican tuna.
I would argue that the MSC is tarnishing itself by normalizing the practice of compartmentalization. It is no longer clear that fish carrying the MSC label offer the best environmental choice. Many Canadian fisheries, like lobster, herring, and Atlantic redfish, are MSC-certified. The faltering credibility of the MSC is a major risk for Canadian fish harvesters who rely on the MSC label to communicate their good fishing practices.
Additionally, Canadian consumers who are used to searching for the blue MSC check mark when they shop for seafood can no longer do so thinking that the logo conveys accurate information. Consumers need to know that the waters are muddy, that seafood sustainability is a moving target, and that it is not easy to make the right choice when standing in the aisle at the supermarket.
Governments and businesses need to make that choice easier for consumers. And they could start by dealing with compartmentalization in the PNA fishery — and elsewhere.
The PNA countries could also make demands. They could allow access rights only to vessels that agree to drop the practice of compartmentalization and that are transparent about their fishing practices.
More than anything, the MSC needs to take a good look at itself and remember what it is supposed to represent — the best environmental choice — not consumer confusion.
Many of these dentists chose to work in this town because of the tourist traffic, given its proximity to the Mexico-United States border. Thousands of Canadian and American tourists park their cars and walk across the border into Los Algodones to spend the day souvenir shopping, eating and drinking in the local restaurants, and purchasing alcohol, prescription drugs and dental care at lower costs than available back home.
In 2015 and 2016, I spent four months living in Los Algodones conducting interviews and participating in local events for a doctoral research project in health sciences at Simon Fraser University. My work investigates dental travel as part of the wider phenomenon of “medical tourism” — an industry that is growing rapidly as more and more patients seek access to new or more affordable medical treatments outside of their countries of residence.
Most of the residents and employees I met during my research in Los Algodones were grateful for the much-needed economic benefits of the dental tourism industry. But I also heard concerns and frustrations from members of the local population. They felt that many of the industry’s practices were unfair and difficult to change.
One interviewee explained how dental tourists often come with prejudiced assumptions about Mexico, stating: “We see a lot of racism […] people trying to come here and saying, okay it is Mexico, I can ask for anything and pay you less.”
Local residents and industry employees felt that dental tourists’ perceptions of Mexico as unsafe and underdeveloped are driving poor working conditions and discriminatory practices.
For example, employees work long hours promoting, on behalf of their employers, Los Algodones’ reputation as an ideal place to purchase dental care. Some also said they had experienced harassment from dental tourists negotiating lower prices and faster care.
Harassment of Indigenous vendors
Clinic employees and local residents also endure stressful interactions as they work to meet the expectations of clinic owners. Some owners encourage employees to minimize their Mexican accents. This is done to distance Los Algodones from prejudiced perceptions of Mexico as an underdeveloped place with inferior medical care.
One participant described how a dental clinic owner offered to pay him to dump buckets of water on the heads of Indigenous souvenir vendors working near his clinic. Along with harassment, clinic owners also encourage Indigenous vendors to “stay cool, sell stuff cheap, and smile to people.” Many owners worry that the presence of Indigenous vendors might deter tourists by representing the underdeveloped Mexico of tourists’ imaginations.
Local residents struggle to access dental care
My research also revealed that dental clinic owners’ concerns about reputation can decrease access to dental care for local residents. Clinic owners suggested they’re too busy marketing their services and treating foreign patients to treat many locals. Some owners are using free X-rays to entice tourists, who shop around for their ideal care.
Most of the dental services in Los Algodones are also focused on the provision of major restorative treatment rather than preventative care, given the needs of dental tourists. Most local residents cannot afford this type of care. This is concerning as there are limited publicly funded dental care options available in this region of Mexico.
Overall, the “dental Shangri-la of the Mexican desert” is only an oasis for those able and willing to travel and pay for dental care. For many, the industry provides much-needed employment. But this might be stressful, unfair work for individuals unable to use the dental oasis for their own health needs.
The need for global regulation
Participation in the global medical tourism industry is increasing and research shows that this growth raises serious ethical challenges, at least in the industry’s current form. Researchers have raised concerns about the negative impacts on the health of local people who live and work in these medical tourism destinations.
My in-depth investigation of industry practices in the town of Los Algodones provides more evidence to support these concerns. It suggests the need for better global regulation of dental tourism and medical tourism more widely.
This regulation is needed to avoid competition between industry sites driving down labour standards in the global industry and diverting health resources away from populations in need. This regulation could enforce acceptable work conditions to avoid a race-to-the-bottom effect as industry sites try to attract customers to lower-cost, desirable medical care.
More information about these concerns could also help individuals participating in the industry to avoid harmful practices. It could remind medical tourists that cost savings for care might come at a cost to fair labour standards — and that they should allow sufficient time for treatment and be prepared to pay fair prices.
Crops for livestock feed damage ecosystems and threaten wildlife, says WWF UK.
The conservation NGO estimates that just the UK’s livestock industry has caused the extinction of 33 species worldwide.
However, if people lower their protein intake to recommended amounts, farmers would need 13 percent less land to produce feed for livestock and farmed fish, saving an area 1.5 times the size of the EU.
The conservation NGO WWF UK is calling on consumers to cut back the amount of meat they eat. Land used to produce feed crops for livestock and farmed fish threatens biodiversity and delicate ecosystems, the organization argues in a new report released Oct. 3.
“The world is consuming more animal protein than it needs and this is having a devastating effect on wildlife,” Duncan Williamson, a food policy manager at WWF, told the Guardian. “We know a lot of people are aware that a meat-based diet has an impact on water and land, as well as causing greenhouse gas emissions, but few know the biggest issue of all comes from the crop-based feed the animals eat.”
Although the publication's conclusions centre primarily on the UK and Europe, its authors report that the UK's livestock industry is responsible for driving perhaps 33 species to extinction, as the need for land to grow food to feed chickens, pigs and fish puts pressure on the animals that once called those areas home.
“Feed crops threaten the biodiversity of many of Earth’s most valuable and vulnerable areas,” WWF UK writes, including the Amazon, the Congo Basin, South America’s Cerrado and Asia’s Mekong.
Farmers have ratcheted up soybean production in recent years, and if the trending increase in demand for meat, and by extension livestock feed, continues, they’ll have to produce almost 80 percent more than 2010 levels by 2050.
Poultry production has jumped from 15 percent of global meat production in the 1960s to 32 percent in 2012. As a result, much of that feed — 41.5 percent across Asia, Europe and the United States — is headed for the mouths of hungry birds. Pigs also take a sizeable share at 30 percent of the world’s feed. And fish farming, on the rise as wild stocks struggle to cope with consumer demand, needs another 4 percent.
WWF UK proposes several alternatives to land-hogging feed cropping to provide sustenance to growing flocks, herds and schools of animals destined for the dinner table. Insects could supplement animal feed, as they pack about the same amount of growth-inducing protein as soybeans. Raising them doesn’t emit as much carbon as farming, and they don’t require as much land to produce. Algae, too, could be an alternative livestock food source on a large scale in the future, with the benefits that it can make its own food with few inputs other than nutrients and carbon dioxide.
These potential solutions, however, don’t address the uptick in demand for animal-based protein. Each of us eats on average 25 kilograms (55 pounds) of chicken every year, and our annual fish take is 19.7 kilograms (43 pounds) per person — nearly twice what it was in the 1960s. But WWF UK says we don’t need that much meat.
British health guidelines put the average person's protein requirement at a maximum of 55 grams per day, and yet Britons consume 64 to 88 grams, one-third of which comes from meat.
By notching that intake down to the recommended levels, farmers would need 13 percent less land to produce livestock feed. That translates to an area 50 percent larger than the EU — 650 million hectares (2.5 million square miles) — that could remain undisturbed by agriculture.
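These figures are easy to sanity-check. A minimal sketch in Python, assuming an EU land area of roughly 4.4 million square kilometres (a commonly cited figure, not stated in the report itself):

```python
# Back-of-envelope check of the reported land savings.
HECTARES_SAVED = 650_000_000   # 650 million hectares, per WWF UK
KM2_PER_HECTARE = 0.01         # 1 hectare = 0.01 square kilometres
KM2_PER_SQ_MILE = 2.589988     # square kilometres in one square mile
EU_LAND_KM2 = 4_400_000        # assumed EU land area, approx.

saved_km2 = HECTARES_SAVED * KM2_PER_HECTARE
saved_sq_miles = saved_km2 / KM2_PER_SQ_MILE

print(round(saved_sq_miles / 1e6, 1))   # ~2.5 million square miles
print(round(saved_km2 / EU_LAND_KM2, 1))  # ~1.5 times the EU's area
```

Both reported figures check out: 650 million hectares is about 2.5 million square miles, roughly one and a half times the area of the EU.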
Indonesian moviegoers have had something to talk about these past two weeks. A top box-office movie by director Joko Anwar, Satan’s Slave, has a hair-raising ghost, called “Ibu” or “Mother”, haunting almost 2 million viewers. The millions were scared of “Ibu”, but I have scary data haunting Indonesian women and these ghosts are real.
In the annals of Indonesian folklore, female ghosts take centre stage. The country has kuntilanak, sundel bolong and Si Manis Jembatan Ancol. Most female ghosts in Indonesia were loving mothers or ordinary women before they started haunting the world with dark agendas.
Among the most popular ghosts are kuntilanak and sundel bolong; their narratives are reproduced in pop culture products, most notably movies.
Kuntilanak was a woman who died in childbirth (or died delivering a stillborn, according to another version). Sundel bolong was a woman who was raped, became pregnant, and then died in childbirth.
The third is Si Manis Jembatan Ancol, loosely translated as The Pretty One Haunting Ancol Bridge, a reference to Ancol, an area in North Jakarta. Si Manis was said to have been raped and killed by men in North Jakarta after she fled from her husband.
A different kind of female ghost, an outlier, is Nyai Roro Kidul, believed to be the ruler of the southern sea of Java, who becomes the mystical wife of each Mataram king.
To learn more about women in Indonesia's ghost folklore, read the Indonesian fiction collections Sihir Perempuan (Black Magic Woman) and Kumpulan Budak Setan (Devil's Slaves Club), by author, scholar and feminist Intan Paramaditha.
‘Kuntilanak’: victim of poor access to healthcare
There’s a thread connecting the female ghosts beyond their gender: most of them are victims.
Of course, no scientific evidence supports the existence of these ghosts. But the background story of each ghost shares similar themes. These women were victims of gender inequality and sexual violence. They also had poor access to healthcare.
Meanwhile, Indonesia’s data on sexual violence, experienced by both Si Manis and sundel bolong, are also harrowing. The Central Statistics Bureau surveyed 9,000 women respondents in 2016 and reported that one in three Indonesian women aged 15-64 has experienced physical and/or sexual violence in their lifetime.
But “Ibu”, the woman in the movie set in 1981, would probably have a better fate today compared with her fate then. She was lying helpless for three years without a proper diagnosis. In 2017, at least, she would probably have received a better diagnosis thanks to Indonesia’s universal healthcare, BPJS Kesehatan, implemented since 2014.
Don’t let there be another ‘sundel bolong’
Of course juxtaposing the stories of Indonesian female ghosts with real data is only a way for me to highlight an important issue using folklore and a popular culture product.
But the popular ghosts’ stories reveal the close connections between violence against women and access to healthcare for women in the distant past. As the maternal death rate shows, the state of healthcare for Indonesian women today remains grim.
Perhaps we would not have the story of kuntilanak haunting young mothers and their newborns had more real-life Indonesian women survived childbirth and delivered healthy babies.
Sundel bolong’s unwanted pregnancy, a result of rape, could have been avoided if she received adequate reproductive healthcare. Indonesia has issued a regulation legalising abortion for rape victims, but its implementation remains elusive.
High maternal mortality rate, sexual violence: the real ghosts
The plights of these women ghosts, as told by the older generations, serve as a warning about the state of Indonesian women today. The numbers and data should be scary stories for Indonesian women.
Policymakers should pursue systematic changes, or we will forever see more women sharing the plights of sundel bolong, kuntilanak and Si Manis Jembatan Ancol.
If we don’t improve reproductive health services for women and let impunity reign among sexual violence perpetrators, we will continue the legacy of the female ghosts to our next generation. Not only in movies, but in real life as well.
The stereotypes of autistic people perpetuate a myth that they are socially inept. Yet non-autistics, also known as neurotypicals, display ineptitudes of their own in their susceptibility to manipulation through body language, communication and perception. How we learn these signals opens the debate over nature versus nurture and the acquisition of social skills. Who is more socially equipped: the one who surrounds himself with pretentious body language, or the one who is mindful of her full spectrum of awareness? A neurotypical who communicates with learned body gestures is currently considered evolved, even though the acquisition of those skills is a direct result of the inability to survive otherwise. The autistic person who remains authentic in order to adapt to the current environment is potentially the most equipped to function in society.
The cycle of life requires attracting a mate, reproduction, and adaptations for exploitation to those who threaten…
According to official estimates, the country will need more than 30 billion pesos (around US$2 billion) to rebuild. World Bank figures suggest this is almost double the annual budget of Mexico's natural disaster fund.
Manpower, at least, has not been an issue. Search-and-rescue teams from several countries – including Chile, Colombia, Israel, Japan, Panama, the United States and Spain – arrived in the days after the earthquakes to dig survivors out of the rubble. Dozens of foreigners who reside in Mexico also joined the Mexican volunteers in their rescue efforts.
Among these international brigades was a group of undocumented Central American migrants who, interrupting their travel northward to the U.S., stayed in Mexico to help clean up debris and assist the victims.
Their efforts have been largely focused in two of the cities most impacted by the historic Sept. 7 quake, Juchitán and Asunción Ixtaltepec, in Oaxaca. But after the Sept. 19 Mexico City earthquake, some members also volunteered to help dig out survivors from the rubble of the nation’s capital.
The nearly 50 Central American migrants assisting in Oaxaca’s earthquake recovery effort are staying at Hermanos en el Camino (Brothers of the Road), a Catholic-run shelter in the hard-hit Isthmus of Tehuantepec.
Felipe González, a volunteer at the shelter, told me via telephone that after the urgent rescue efforts ended, they have continued their work, distributing aid among those who lost their homes.
The migrants who organized this aid brigade are from Honduras, El Salvador, Nicaragua and Guatemala, and they have diverse backgrounds, but what they have in common – both with each other and with Mexican earthquake victims – is a history of hardship.
According to a May report from Doctors Without Borders, almost 40 percent of the roughly 500,000 Central American immigrants the organization surveyed in Mexico fled their countries after experiencing physical attacks, threats against themselves or their families, extortion or forced gang recruitment.
The Brothers of the Road shelter is located in Ciudad Ixtepec, one of the stops on the main route that Central American immigrants heading north used to follow through Mexico. Normally, the facility serves to provide relief to immigrants who ride atop “La Bestia” – that is, the Beast, the Mexican network of freight trains – to travel to the U.S.
Mexico has also stepped up deportations. In 2014, for example, Mexico “returned” 107,814 migrants, the majority of them from El Salvador, Guatemala and Honduras. In 2015, deportations rose to 181,163. In 2016, it was 159,872.
The Trump administration has kept up the pressure. In a letter sent to House and Senate leaders on Oct. 8, the U.S. president requested that the Department of Homeland Security be granted broad powers to assist “partner nations” in “removing aliens from third countries whose ultimate intent is entering the United States.”
Tough border enforcement isn’t the only reason that Central American migrants normally aim to hurry through Mexico under the radar. Nearly one-third of women surveyed by Doctors Without Borders in 2014 had been sexually abused during their journey, and 68 percent of all migrants were victims of violence.
Migrants are among the many victims of Mexico’s drug war. In 2010 and 2011, 265 migrants from Central and South America were murdered by the Zetas cartel in the northern Mexican town of San Fernando, Tamaulipas, just 55 miles from Texas.
The North American dream
Even knowing the dangers presented by both the state and the drug lords, the guests at the Brothers of the Road shelter risked everything to pitch into the rescue effort after the quake that hit Oaxaca and Chiapas, two of the poorest states in Mexico, in September.
“We’re immigrants in search of the American dream,” Denio Okele, an Honduran migrant, explained to NBC News. But, he continued, “we arrived in Oaxaca, and an earthquake occurred. We are thus helping the people who need assistance.”
Their reasons for helping range from solidarity and compassion to gratitude. “We have received a lot of support from people, so we want to help them,” Wilson Alonso, also from Honduras, told the Spanish newspaper El País.
The sacrifice of this migrant humanitarian aid team has earned them hero status in Mexico. Like other volunteers who dug their neighbors free from the rubble with their bare hands, they have been lauded on social media and interviewed by reporters. And for once, the legal status of a group of Central Americans was not the story.
As José Filiberto Velásquez, a Catholic priest at the Brothers of the Road shelter, told one Mexican reporter, these migrants have shown Mexicans through their actions that, quite simply, “immigrants are good people.”
Pact of the defeated
The Central American migrants’ story is just one example of the spirit of national solidarity that carried Mexico through the days after the two killer September quakes.
The solidarity on display recalls what Argentinian writer Ernesto Sábato calls “the pact of the defeated.” In a world full of “horror, treason and envy,” Sábato writes in his memoir, “Antes del Fin,” it’s often “the most unprivileged part of humanity” that shows everyone else the path to salvation.
Right now in Mexico, earthquake-impacted locals and undocumented migrants alike are working together to rebuild their futures. In facing the years of hard recovery and U.S. antagonism ahead of it, a “pact of the defeated” may be as good a starting point as any.
Three-quarters of all abortions in Latin America are performed illegally, putting the woman’s life at risk. Together with Africa and Asia, the region accounts for many of the 17.1 million unsafe abortions performed globally each year, according to a new report in The Lancet, published jointly with the Guttmacher Institute, a research and policy group.
Though worrying, this fact is unsurprising in a region where six countries ban abortion under all circumstances: the Dominican Republic, El Salvador, Haiti, Honduras, Nicaragua and Suriname. Such complete criminalization, even when fetal termination is necessary to save a woman’s life, exists in only two other places in the world: Malta and the Vatican.
Central America is home to three of the eight countries in the world with total abortion bans. As a Costa Rican lawyer and feminist, it’s no small matter to me that women in many neighboring countries lack access to this basic health service.
Why does this region so studiously avoid recognizing women as full individuals entitled to their own human rights? In my view, there’s a clear link in Latin America between the state of a country’s democracy and the reproductive rights of its female citizens.
In some ways, this is not surprising. In post-coup Honduras, human rights violations – ranging from violence and poverty to impunity – are routine fare for the entire population. Rampant gender inequality is just another symptom of this dismal situation.
In 2010, for example, a pregnant woman who went by the pseudonym of “Amelia” was refused treatment for metastatic cancer because the state ruled that the regime of chemotherapy and radiotherapy – which her doctor had urgently recommended – might trigger a miscarriage.
The Inter-American Commission on Human Rights ultimately issued injunctions for Amelia, but the damage was already done. She died in 2011.
In El Salvador, the constitutional recognition of life from the moment of conception is now used to uphold a full criminalization of abortion, even under the most extenuating circumstances, such as when a woman’s life is at risk, the pregnancy is the result of rape or the fetus is severely malformed.
In 2016, Sweden offered political asylum to a Salvadoran woman who had been sentenced to 40 years in prison for the aggravated homicide of a fetus she miscarried before she even knew she was pregnant.
The United Nations Committee on the Elimination of all Forms of Discrimination Against Women has requested that El Salvador decriminalize abortion, saying that the fact that most women prosecuted and sentenced for this crime are among the country’s most vulnerable citizens – young, uneducated, jobless and single – represents a powerful social injustice.
Though there are great economic, cultural and political differences between these three countries, across Central America the connection between the lack of rule of law and women’s restricted reproductive rights is noteworthy.
That’s because denying women the ability to make decisions about their own bodies means that a woman’s life matters only to the extent that she is the custodian of a potential future life, rather than as a life worthy of protection.
The Constitutional Court of Colombia agrees. In 2006, it stated in its legal justification for decriminalizing abortion, “The dignity of women does not permit that they be considered mere receptacles.”
In Chile, which recently legalized abortion after nearly a half-century of its total prohibition, history shows a similar relationship between democracy and women’s rights. In 1931, the Chilean Congress approved the voluntary interruption of pregnancy for medical purposes if the woman’s life was endangered or the fetus was not viable.
What’s at risk in the Latin American regimes where abortion is still forbidden, then, are not only women’s lives but also the political systems of Central American society itself. Can democracy exist in places that don’t recognize women as people?
In UK universities there are far fewer women in senior posts than men – particularly at professor level. Putting aside teaching, to reach this status, an academic typically needs to have completed a considerable amount of research. Research takes time, and if people want to succeed in academia, they have to apply for funding. This is where one key difference lies.
Women receive less funding than men, and they also apply for smaller grants than their male counterparts. Our study investigated the amount of research funding awarded to male and female study leads across over 6,000 studies related to infectious disease research in UK institutions. Around 75-80% of the funding was awarded to male principal investigators – a huge difference. In addition to the differences in total sums of money, there are also clear differences in the size of the grants secured.
It’s a Catch-22
So what’s the barrier to women getting funding? It’s unlikely to be widespread gender bias from the funders themselves. One of the most famous papers that did highlight clear biases in this area was a 1997 article published in Nature which pulled no punches in highlighting the problem in the peer review process of the Swedish Medical Research Council.
But this analysis is now 20 years old and does seem to be an outlier in an increasing pool of evidence. Most other analyses suggest there is no observable gender bias on the part of the research funders. For example, evidence reported by the Foundation for Science and Technology suggested there is no significant difference in the proportions of successful grant applications led by male and female researchers from the major UK funders, such as the Wellcome Trust and the Medical Research Council.
So why are women getting less by way of grant amounts? With seniority comes big bucks. The more senior the person applying, the bigger the grant they are likely to be requesting. But with fewer senior women out there to apply for something big, it’s a Catch-22.
There are initiatives within, and involving, universities that may help. The Athena Swan programme encourages institutions to consider inequalities and disadvantaged groups, and often focuses on the issues surrounding women in science. There is some evidence to suggest it is having a positive effect. The National Institute for Health Research (NIHR), one of the major UK funders, now insists that university departments and faculties must have at least a silver award from Athena Swan to be eligible to apply for their funding streams. Recipients of an Athena award have demonstrated through work practices and workplace philosophy their commitment to gender equality and supporting women in STEM careers.
There is also an interesting clause in the guidance of the NIHR autumn 2017 call for research professorships (a prestigious and significant award in the career of any aspiring health researcher). Institutions can put forward a maximum of two candidates, and at least one of the two candidates must be female.
I am not aware of other major research funders yet taking a similar approach (though they may do). It would be interesting to hear their views. As universities are increasingly strapped for cash, research income is important, so no doubt many faculties would be happy to jump through hoops to be eligible for all funding streams from the big players.
Still a man’s world?
Funding applications aside, there are good reasons for female academics to be disheartened about their chances of competing on a level playing field. A 2012 US-based study revealed how identical CVs with a male name at the top were favoured over those with a female name. Then there is the evidence that female lecturers are rated lower than their male counterparts by students, without there being any obvious difference in the standard of their teaching. It takes an extra level of tenacity and determination for a woman to make it to the top in a world that is naturally skewed towards men.
There are many additional factors that come into play as to why there are clear differences between the careers of men and women in an academic environment. Digital Science’s new report, Championing The Success of Women in Science, Technology, Engineering, Maths, and Medicine (STEM), explores many of these issues from a range of perspectives, as well as considering other areas where inequality is a problem. It also examines potential ways forward, including the use of mentors, feedback from the academic community and cultural changes that help more women move into senior roles.
But what is very evident is that higher education institutions can prioritise the promotion of equality and still be successful in keeping their heads above water during the ongoing storm of funding cuts, Brexit and general political disdain towards experts.
This laughable 2012 video by the European Commission to encourage teenage girls to take an interest in science underscores the kind of problems that exist in the way women are perceived in terms of science. There was some furious backpedalling by its creators soon after its release, but it is shocking to think it got approved in the first place. But at least its desperately hackneyed approach lays bare some of the sexist, outdated and demeaning attitudes that women have to endure in male-dominated environments.
For restaurant operators, there’s no better hook than coffee to get repeat business. It’s a great scheme that seems to be working for some. Given what’s looming on the horizon, however, offering free coffee may no longer be an option for businesses.
As for Canada, numbers remain robust as more than 90 per cent of adult Canadians drink coffee. Several recent studies suggest coffee is a healthy choice, possibly one factor in the rise in coffee drinkers.
Either way, demand is strong in most Western countries, which puts more pressure on coffee-producing countries. However, as climate change looms, there’s a real threat to coffee’s global success story.
Coffee beans are grown in more than 60 countries and allow 25 million families worldwide to make a living. Brazil is by far the largest producer, followed by Vietnam and Colombia.
Globally, 2017 could be a record year, as the world will likely produce well over 153 million 60-kilogram bags of coffee. Coffee futures are down as a result, but we are far from seeing a bumper crop.
Production has been modestly shifting over the past few years. With good rainfalls in Brazil and favourable weather patterns in other regions of the world, Mother Nature has so far spared coffee growers, but their luck may be running out.
Despite not being a staple in any diet, coffee is big business. At the farm gate, coffee is worth over US$10 billion. In the retail sector, the coffee industry is worth US$100 billion.
But there is growing consensus among experts that climate change will severely affect coffee crops over the next 80 years. By 2100, more than 50 per cent of the land used to grow coffee will no longer be arable.
Ethiopia could be profoundly affected
A combination of effects, resulting from higher temperatures and shifting rainfall patterns, will make the land where coffee is currently grown unsuitable for its production.
According to the National Academy of Sciences, in Latin America alone, more than 90 per cent of the land used for coffee production could suffer this fate. It’s estimated that Ethiopia, the sixth-largest producer in the world, could lose over 60 per cent of its production by 2050. That’s only a generation from now.
As climate conditions become critical, the livelihoods of millions of farmers are at risk and production capacity is jeopardized. Other potential contributors to this predicted downfall are pests and diseases.
With climate change, pest management and disease control are serious issues for farmers who cannot afford to protect their crops. More than 80 per cent of coffee growers are peasant farmers.
Pests and diseases will migrate to regions where temperatures are adequate for survival, and most farmers won’t be ready. Many will simply choose to grow other crops less vulnerable to climate change. Others may attempt to increase their coffee production, but the quality will almost certainly be compromised.
Coffee quality will suffer
Higher temperatures will affect the quality of coffee. Higher-quality coffee is grown in specific regions of the world where the climate allows the beans to ripen at just the right time. Arabica coffee, for example, which represents 75 per cent of world coffee production, is always just a few degrees away from becoming a sub-par product.
This will undoubtedly affect coffee prices and quality for us all. Thanks to the so-called Starbucks Effect, the quality of the coffee we now enjoy is far superior to that of just a decade ago. Good beans may become more difficult to procure in the future.
Right now, coffee futures are valued at US$1.28 per pound and are under downward pressure. But if climate change squeezes supply as predicted, the record price of US$3.39 per pound, set in 1977, could return in just a few years.
The coffee wars we are seeing are not just about gaining market shares and getting consumers hooked on java. They are also about how we connect with a crop that is under siege by climate change.
Short of fighting climate change, we could be forced to alter our relationship with coffee. As current coffee-producing countries attempt to develop eco-friendly methods and embrace sustainable practices, Canada could be the next country where coffee is actually grown, not just roasted.
Within the next decade, with climate change and new technologies, perhaps producing coffee beans will be feasible in Canada. After all, if Elon Musk thinks we can start colonizing Mars by 2022, why can’t we grow coffee in Canada?
So if a coffee chain is offering free coffee, take it. It won’t be long before coffee could become a luxury.
British weather isn’t much to write home about. The temperate maritime climate makes for summers which are relatively warm and winters which are relatively cold. But despite rarely experiencing extremely cold weather, the UK has a problem with significantly more people dying during the winter compared to the rest of the year. In fact, 2.6m excess winter deaths have occurred since records began in 1950 – that’s equivalent to the entire population of Manchester.
Although the government has been collecting data on excess winter deaths – that is, the difference between the number of deaths that occur from December to March compared with the rest of the year – for almost 70 years, the annual statistics are still shocking. In the winter of 2014/15, there were a staggering 43,900 excess deaths, the highest recorded figure since 1999/2000. In the last 10 years, there has been only one winter in which fewer than 20,000 excess deaths occurred: 2013/14. Although excess winter deaths have been steadily declining since records began, in the winter of 2015/16 there were still 24,300.
According to official statistics, respiratory disease is the underlying cause for over a third of excess winter deaths, predominantly due to pneumonia and influenza. About three-quarters of these excess respiratory deaths occur in people aged 75 or over. Unsurprisingly, cold homes (particularly those below 16°C) cause a substantially increased risk of respiratory disease and older people are significantly more likely to have difficulty heating their homes.
Health and homes
The UK is currently in the midst of a housing crisis – and not just due to a lack of homes. According to a 2017 government report, a fifth of all homes in England fail to meet the Decent Homes Standard – which is aimed at bringing all council and housing association homes up to a minimum level. Despite the explicit guidelines, an astonishing 16% of private rented homes and 12% of housing association homes still have no form of central heating.
Even when people have adequate housing, the cost of energy and fuel can be a major issue. Government schemes, such as the affordable warmth grant, have been implemented to help low income households increase indoor warmth and energy efficiency. However, approximately 2.5m households in England (about one in nine) are still in fuel poverty – struggling to keep their homes adequately warm due to the cost of energy and fuel – and this figure is rising.
Poor housing costs the NHS a whopping £1.4 billion every year. Reports indicate that the health impact of poor housing is almost on a par with that of smoking and alcohol. Clearly, significant public health gains could be made through high quality, cost-effective home improvements, particularly for social housing. Take insulation, for example: evidence shows that properly fitted and safe insulation can increase indoor warmth, reduce damp, and improve respiratory health, which in turn reduces work and school absenteeism, and use of health services.
Warmth on prescription
In our recent research, we examined whether warmer social housing could improve population health and reduce use of NHS services in the northeast of England. To do this, we analysed the costs and outcomes associated with retrofitting social housing with new combi-boilers and double glazed windows.
After the housing improvements had been installed, NHS service use costs reduced by 16% per household – equating to an estimated NHS cost reduction of over £20,000 in just six months for the full cohort of 228 households. This reduction was offset by the initial expense of the housing improvements (around £3,725 per household), but if these results could be replicated and sustained, the NHS could eventually save millions of pounds over the lifetime of the new boilers and windows.
The benefits were not confined to NHS savings. We also found that the overall health status and financial satisfaction of main tenants significantly improved. Furthermore, over a third of households were no longer exhibiting signs of fuel poverty – households were subsequently able to heat all rooms in the home, where previously most had left one room unheated due to energy costs.
Perhaps it is time to think beyond medicines and surgery when we consider the remit of the NHS for improving health, and start looking into more projects like this. NHS-provided “boilers on prescription” have already been trialled in Sunderland with positive results. This sort of cross-government thinking promotes a nuanced approach to health and social care.
We don’t need to assume that the NHS should foot the bill entirely for ill health related to housing, for instance the Treasury could establish a cross-government approach by investing in housing to simultaneously save NHS money. A £10 billion investment into better housing could pay for itself in just seven years through NHS cost savings. With a growing need to prevent ill health and avoidable death, maybe it’s time for the government to think creatively right across the public sector, and adopt a new slogan: improving health by any means necessary.
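The seven-year figure follows directly from the numbers reported above. A back-of-envelope check in Python, assuming the full £1.4 billion annual NHS cost of poor housing could be recouped:

```python
# Payback period for the proposed housing investment, assuming the full
# £1.4 billion annual NHS cost of poor housing could be recouped.
investment = 10e9       # proposed investment in better housing, GBP
annual_saving = 1.4e9   # reported yearly NHS cost of poor housing, GBP

payback_years = investment / annual_saving
print(round(payback_years, 1))  # roughly 7 years
```

On that assumption, a £10 billion outlay is recovered in just over seven years, consistent with the claim.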
When humanitarian emergencies flare up, what should prompt the U.S. government to “send in the Marines”?
Disasters like Hurricane Harvey’s floods in Houston and Hurricane Maria’s devastation of Puerto Rico’s roads and power grid can quickly overwhelm civilian authorities and emergency responders. Military support can make a life-or-death difference in those emergencies.
As scholars at the U.S. Naval War College and Harvard Humanitarian Initiative, we have seen that the military can have a profound and positive impact on the immediate response to large-scale disasters such as Hurricanes Harvey, Irma and Maria or the Haiti earthquake in 2010.
But soldiers, sailors, marines and aviators are primarily trained to fight, not feed disaster victims. When they report for humanitarian duties, it typically costs far more than when civilians handle them. Does their muscle actually go to good use?
Why deploy the military
Nonprofits like the Red Cross and government agencies like FEMA simply don’t have the equipment required following disasters like the one unfolding in Puerto Rico – where millions of people may lack power and clean drinking water for months.
At the same time, many military personnel also report that aid missions are good for morale, as countless service members take pride in doing disaster relief.
Having soldiers or sailors airlift people from their flooded homes or distribute hot meals is also great public relations at a time when the U.S. military is engaged in several unpopular and protracted conflicts abroad.
While military missions can fill critical gaps in response to large-scale natural disasters like Hurricanes Harvey, Irma and Maria, there are also significant limits to the military’s ability to jump in.
Also, under a law known as the Stafford Act of 1988, the Department of Homeland Security may request military assistance as a last resort in major disasters and emergencies.
These restrictions have loosened up a little since the 9/11 terrorist attacks, granting the military and National Guard more leeway to support domestic counterterrorism operations. These changes made it easier for the military and National Guard to respond to the recent hurricanes.
But there are no such legal restrictions on how the U.S. military may respond to foreign disasters, as long as host governments request help or consent to it.
A common call
According to the Center for Naval Analyses, a federally funded defense research center, the U.S. military diverted units from “routine” operations to conduct humanitarian assistance operations 366 times from 1970 to 2000, compared with 22 times for combat missions.
Since 2000, the U.S. armed forces have conducted many massive humanitarian operations around the globe, such as responding to the 2004 Indian Ocean earthquake and tsunami and the 2015 Nepal earthquake, as well as Superstorm Sandy and Hurricane Katrina at home.
Given how frequently the military undertakes these missions, preparing for them should be a high priority. But that is not the case. With few notable exceptions, soldiers, sailors, marines and aviators spend little if any time training for disaster-response strategies, tactics, policies and procedures.
According to estimates by Aruna Apte at the Naval Postgraduate School and Keenan Yoho at Rollins College, the U.S. spent more than $17 million just to operate a single aircraft carrier nearby for 17 days – not counting personnel costs.
Aircraft carriers are essentially floating airfields that make it easier to access otherwise impossible-to-reach areas, facilitating evacuations. Although they can dispatch critical food, water and medicine, there are usually better ways to deliver aid after disasters.
Despite the big price tag, military involvement in disaster relief is bound to grow. That’s because global humanitarian organizations are already stretched thin by competing needs. Conflict-driven migration is growing, and severe storms are becoming more common as a result of climate change – along with the higher sea levels scientists say it is causing.
The High Court has rejected a judicial review challenging the current law which prohibits assisted dying in the UK. Noel Conway, a 67-year-old retired lecturer who was diagnosed with Motor Neurone Disease in 2014, was fighting for the right to have medical assistance to bring about his death. Commenting after the judgement on October 5, his solicitor indicated that permission will now be sought to take the case to the appeal courts.
Campaigners are often quick to highlight the strength of public support in favour of assisted dying, arguing that the current law is undemocratic. But there are reasons to question the results of polls on this sensitive and emotional issue.
There have been numerous surveys and opinion polls on public attitudes towards assisted dying in recent years. The British Social Attitudes (BSA) survey, which has asked the same question periodically since the 1980s, has shown slowly increasing public support. When asked in 1984: “Suppose a person has a painful incurable disease. Do you think that doctors should be allowed by law to end the patient’s life, if the patient requests it?”, 75% of people surveyed agreed. By 1989, 79% of people agreed with the statement, and by 1994 it had gone up to 82%.
Detail of the question matters
But not surprisingly, the acceptability of assisted dying varies according to the precise context. The 2005 BSA survey asked in more depth about attitudes towards assisted dying and end of life care. While 80% of respondents agreed with the original question, support fell to 45% for assisted dying for illnesses that were incurable and painful but not terminal.
A 2010 ComRes-BBC survey also found that the incurable nature of illness was critical. In this survey, while 74% of respondents supported assisted suicide if an illness was terminal, this fell to 45% if it was not.
It may not be surprising that support varies considerably according to the nature of the condition described, but it is important. First, because the neat tick boxes on polls belie the messy reality of determining prognosis for an individual patient. Second, because of the potential for drift in who might be eligible once assisted dying is legalised. This has happened in countries such as Belgium which became the first country to authorise euthanasia for children in 2014, and more recently in Canada where within months of the 2016 legalisation of medical assistance in dying, the possibility of extending the law to those with purely psychological suffering was announced.
It’s not just diagnosis or even prognosis that influences opinion. In the US, Gallup surveys carried out since the 1990s have shown that support for assisted dying hinges on the precise terminology used to describe it. In its 2013 poll, 70% of respondents supported “end the patient’s life by some painless means” whereas only 51% supported “assisting the patient to commit suicide”. This gap shrank considerably in 2015 – possibly as a result of the Brittany Maynard case. Maynard, a high-profile advocate of assisted dying who had terminal cancer, moved from California to Oregon to take advantage of the Oregon Death with Dignity law in 2014.
Even so, campaigning organisations for assisted dying tend to avoid the word “suicide”. Language is emotive, but if we want to truly gauge public opinion, we need to understand this issue, not gloss over it.
Information changes minds
Crucially, support for assisted dying is known to drop off when key information is provided. Back in the UK, a ComRes/CARE poll in 2014 showed 73% of people surveyed agreed with legalisation of a bill which would enable: “Mentally competent adults in the UK who are terminally ill, and who have declared a clear and settled intention to end their own life, to be provided with assistance to commit suicide by self-administering lethal drugs.” But 42% of these same people subsequently changed their mind when some of the empirical arguments against assisted dying were highlighted to them – such as the risk of people feeling pressured to end their lives so as not to be a burden on loved ones.
This is not just a theoretical phenomenon. In 2012, a question over legalising assisted dying was put on the ballot paper in Massachusetts, one of the most liberal US states. Support for legalisation fell in the weeks prior to vote, as arguments against legalisation were aired, and complexities became apparent. In the end, the Massachusetts proposition was defeated by 51% to 49%. Public opinion polls, in the absence of public debate, may gather responses that are reflexive rather than informed.
Polls are powerful tools for democratic change. While opinion polls do show the majority of people support legalisation of assisted dying, the same polls also show that the issue is far from clear. It is murky, and depends on the responder’s awareness of the complexities of assisted dying, the context of the question asked, and its precise language. If we can conclude anything from these polls, it is not the proportion of people who do or don’t support legislation, but how easily people can change their views.
Once known as multiple personality disorder, dissociative identity disorder remains one of the most intriguing but poorly understood mental illnesses. Research and clinical experience indicate people diagnosed with the condition have been victims of sexual abuse or other forms of criminal mistreatment.
Media references to dissociative identity disorder are also often highly stigmatising. The recent movie Split depicted a person with the condition as a psychopathic murderer. Even supposedly factual reporting can present people with dissociative identity disorder as untrustworthy and prone to wild fantasies and false memories.
But research hasn’t found people with the disorder are more prone to “false memories” than others. And brain imaging studies show significant differences in brain activity between people with dissociative identity disorder and other groups, including those who have been trained to mimic the disorder.
Dissociative identity disorder comes about when a child’s psychological development is disrupted by early repetitive trauma that prevents the normal processes of consolidating a core sense of identity. Reports of childhood trauma in people with dissociative identity disorder (that have been substantiated) include burning, mutilation and exploitation. Sexual abuse is also routinely reported, alongside emotional abuse and neglect.
In response to overwhelming trauma, the child develops multiple, often conflicting, states or identities. These mirror the radical contradictions in their early attachments and social and family environments – for instance, a parent who swings unpredictably between aggression and care.
These states display marked differences in a person’s behaviour, recollections and opinions, and ways of engaging with the world and other people. The person frequently experiences gaps in memory or difficulties recalling events that occurred while they were in other personality states.
The manifestations of these symptoms are subtle and well concealed for most patients. However, overt symptoms tend to surface during times of stress, re-traumatisation or loss.
People with the condition typically have a number of other problems. These include depression, self-harm, anxiety, suicidal thoughts, and increased susceptibility to physical illness. They frequently have difficulties engaging in daily life, including employment and interactions with family.
This is, perhaps, unsurprising, given people with dissociative identity disorder have experienced more trauma than any other group of patients with psychiatric difficulties.
Dissociative identity disorder is a relatively common psychiatric disorder. Research in multiple countries has found it occurs in around 1% of the general population, and in up to one fifth of patients in inpatient and outpatient treatment programs.
Trauma and dissociation
The link between severe early trauma and dissociative identity has been controversial. Some clinicians have proposed dissociative identity disorder is the result of fantasy and suggestibility rather than abuse and trauma. But the causal relationship between trauma and dissociation (alterations of identity and memory) has been repeatedly shown in a range of studies using different methodologies across cultures.
People with dissociative identity disorder are generally unresponsive to (and may deteriorate under) standard treatment. This may include cognitive behavioural treatment, or exposure therapy for post-traumatic stress disorder.
Phase-orientated treatment has been shown to improve dissociative identity disorder. This involves stages (or phases) of treatment, from an initial focus on safety and stabilisation, through to containment and processing of trauma memories and feelings, to the final phase of integration and rehabilitation. The goal of treatment is for the person to move towards better engaging in life without debilitating symptoms.
Critics have pointed to poor therapeutic practice causing dissociative symptoms as well as false memories and false allegations of abuse. Some are particularly concerned therapists are focused on recovering memories, or encouraging patients to speculate that they have been abused.
A recent literature analysis concluded that criticisms of dissociative identity disorder treatment are based on inaccurate assumptions about clinical practice, misunderstandings of symptoms, and an over-reliance on anecdotes and unfounded claims.
Dissociative identity disorder treatment is frequently unavailable in the public health system. This means people with the condition remain at high risk of ongoing illness, disability and re-victimisation.
The underlying cause of the disorder, which is severe trauma, has been largely overlooked, with little discussion of the prevention or early identification of extreme abuse. Future research should not only address treatment outcomes, but also focus on public policy around prevention and detection of extreme trauma.
If this article has raised concerns for you or anyone you know, call Lifeline 13 11 14, Suicide Call Back Service 1300 659 467 or Kids Helpline 1800 55 1800.
Welfare reform and austerity in the UK have led to reductions in public spending on services that support older people. Age UK has highlighted how nearly one million older people have unmet social care needs. This is of particular concern as the winter months approach.
In ongoing research on food insecurity in older age, my colleagues and I have analysed survey data and interviewed older people who use foodbanks. We’re finding that many older people are at risk of under-nutrition because of poverty, or because they don’t get the support they need to shop, cook and eat.
While many older people have been less affected by the recent recession than other age groups, in part because of the triple lock protection for pensions, poverty can persist in old age. Data from 2015 shows that 1.6m pensioners live below the relative poverty line, and 8% of pensioners are in persistent poverty – defined as having spent three years out of any four-year period in a household with below 60% of median income.
Poverty and social isolation
Around 20% of older people have little or no private pension, housing or material wealth and retiring with debt is also a growing problem. There are 3.8m people aged 65 and older living alone in the UK and evidence from Age UK suggests that nearly one million people in this age group always or often feel lonely.
Older people living alone tend to eat less. This can lead to under-nutrition – a major cause of functional decline among older people. It can lead to poorer health outcomes, falls, delays in recovery from illness and longer periods in hospital, including delayed operations.
Evidence from the National Nutrition Screening Survey suggests that an estimated 1.3m people aged over 65 in the UK are not getting adequate protein or energy in their diet. On admission to hospital, 33% of people in this age group are identified as being at risk of under-nutrition.
Data we are analysing from the 2014 English Longitudinal Study of Ageing suggests that for around 10% of people aged 50 and over “too little money stops them buying their first choice of food items” and this has increased consistently since 2004. Evidence from the Poverty and Social Exclusion Survey in 2012 found that 12% of people aged over 65 had often or sometimes: “skimped on food so others in the household would have enough to eat”.
Embarrassment and stigma
The Health Survey of England consistently highlights the issue of unmet need among some older people. For example, 6% of people aged over 65 reported that they had not received help from anyone with shopping for food in the last month. In addition, 19% of this age group reported needing help to leave their home.
Evidence suggests that as food insecurity has increased in the UK, many older people have become reliant on food banks. In 2016, the food redistribution charity FareShare said that 13% of its clients were aged over 65.
Our interviews with older people using food banks have highlighted the challenges many older people can face. Some were having food parcels delivered by the food banks as they were unable to go themselves or did not want to be seen going.
Embarrassment and stigma were also a concern for one 69-year-old man who told us how he preferred coming to the food bank to asking family or friends for help. “I don’t believe in asking others, I don’t want to upset people,” he said. Another 65-year-old man told us: “My family would help but I don’t like to ask them, they have their own families to look after.” Others, however, are either unable or too embarrassed to visit a food bank.
Food or warmth
One 54-year-old man said: “I can go for a couple of days without food… the gas is cut off and I get hot water from the kettle to wash.” There was also evidence that some older people were not fully recognising their nutritional needs. As one 60-year-old woman said: “When you are on your own… sometimes I don’t cook, depends how I feel.” Another 65-year-old man revealed his poor diet, stating how when he had no food he would: “Just eat cornflakes.”
Other people chose to cut back on food during the winter due to the costs of heating their home – suffering the cold as a result. As one 72-year-old woman stated: “Sometimes I just go without putting the heating on.”
An increasing number of older people are constrained in their spending on food, many are skipping meals and are not getting the social care support they need. Emergency food parcels are an inadequate and unsustainable way of addressing the issue of food insecurity.
There are currently 10m people in the UK aged over 65, but this is expected to increase to 19m by 2050 – that’s one in every four people.
As the size of the older population continues to grow, the reductions in local authority spending on social care raise concerns about their long-term welfare. Given the follow-on costs to the public purse, including in terms of healthcare, the government must do more to combat food insecurity amongst older people.
Then I decided that it was better to be for things than against things. More positive.
But you can’t be for the safety and well-being of children if you don’t also fight child abuse, which includes that you are against its acceptance in some circles and cultures. (As expressed by for instance a recent decision in Britain that child abuse victims by definition “consented” to their abuse if they were living in the same house as the abuser, and other nasty nonsense like that.)
Similarly, you can’t be for the creation of a better future if you’re not also against its destruction.
You can’t be for human rights for every human being if you’re not also against the taking away or diminishment of human rights of some people by some people (such as in the case of that abused apprentice who had the misfortune of working at a business with an approved abuse culture).
I see that now.
I am redefining myself as fiercely anti-abuse (etc) first and fiercely pro-flourishing (etc) second.
That is probably what I already was when I started out. I don’t like feeling angry, however. So I tend to avoid anger and tend to see it as something negative. But you can’t accomplish a thing in the world without anger. Ultimately, anger is what makes the world go round. Anger for instance makes people fight against (the effects of) abusive people in power, like Donald Trump, and fight for a better world.
Anger pushes people out of complacency and opens their eyes. And then it makes them decide to do something about what caused the anger and fight for what becomes possible without it. Anger makes people start food banks and raise funds for medical treatments in the presence of failing governments and corrupt politicians.
Anger is a tool that you can learn to use. The first step in that learning process is to stop avoiding and suppressing it so that you see how you can actually use it constructively. Anger makes people stop waffling and whining and begin to act instead. Anger is empowering. It is powerful.
Anger can also be very destructive (particularly if you suppress it and allow it to fester). That is the risk inherent in anger, and part of the reason why most people try to avoid it (and also why it’s generally seen as acceptable for men but not for women).
That’s why you have to tie it to something else. Compassion, for example. Anchor it.
See, when you get angry, you have a choice. That choice is whether to let the anger make you act for good or act for bad. Whether to make a cake to throw into a politician’s face or to make soup to hand out to strangers on a cold street. Whether to start a mud-slinging campaign on Twitter against some public figure or start a fund-raising campaign for someone’s medical treatment, or heck, sponsor the pill for an American woman.
An example of fighting for justice and against child abuse:
wow the gaurdian,the times news papers,not calling me a fantasist anymore after conifer report
However, as Musa al-Gharbi wrote in May in US News on the likely re-election of Donald Trump: “Even when people are unhappy with a state of affairs, they are usually disinclined to change it. In my area of research, the cognitive and behavioral sciences, this is known as the ‘default effect’.” Today, the same prediction was made by a different outlet.
People generally dislike taking responsibility. They don’t like stepping up. This is often connected to risk aversion. So they are angry, but don’t do anything with their anger. That causes stress.
Stepping up does not have to mean getting your face into the newspapers because of something you did or proclaiming that you want to rule the world. It does not have to involve huge risks. Stepping up can be as simple as driving your neighbor to the supermarket and back.
So to use anger, you have to look at your possibilities. If you don’t have a car, you can’t drive someone else to the supermarket. And I, for example, don’t have the power to vote against Trump or against Theresa May. So what can I do? And what can you do? Looking into that can force you to take other steps. Empowering steps. Steps that enable you to do something instead of nothing.
Can one form a friendship with a magpie – even when adult males are protecting their nests during the swooping season? The short answer is: “Yes, one can” – although science has just begun to provide feasible explanations for friendship in animals, let alone for cross-species friendships between humans and wild birds.
Ravens and magpies are known to form powerful allegiances among themselves. In fact, Australia is thought to be a hotspot for cooperative behaviour in birds worldwide. They like to stick together with family and mates, in the good Australian way.
Of course, many bird species may readily come to a feeding table and become tame enough to take food from our hand, but this isn’t really “friendship”. However, there is evidence that, remarkably, free-living magpies can forge lasting relationships with people, even without depending on us for food or shelter.
When magpies are permanently ensconced on human property, they are also far less likely to swoop the people who live there. Over 80% of all successfully breeding magpies live near human houses, which means the vast majority of people, in fact, never get swooped. And since magpies can live between 25 and 30 years and are territorial, they can develop lifelong friendships with humans. This bond can extend to trusting certain people around their offspring.
A key reason why friendships with magpies are possible is that we now know that magpies are able to recognise and remember individual human faces for many years. They can learn which nearby humans do not constitute a risk. They will remember someone who was good to them; equally, they remember negative encounters.
Why become friends?
Magpies that actively form friendships with people make this investment (from their point of view) for good reason. Properties suitable for magpies are hard to come by and the competition is fierce. Most magpies will not secure a territory – let alone breed – until they are at least five years old. In fact, only about 14% of adult magpies ever succeed in breeding. And based on extensive magpie population research conducted by R. Carrick in the 1970s, even if they breed successfully every single year, they may successfully raise only seven to eleven chicks to adulthood and breeding in a lifetime. There is a lot at stake with every magpie clutch.
The difference between simply not swooping someone and a real friendship manifests in several ways. When magpies have formed an attachment they will often show their trust, for example, by formally introducing their offspring. They may allow their chicks to play near people, not fly away when a resident human is approaching, and actually approach or roost near a human.
In rare cases, they may even join in human activity. For example, magpies have helped me garden by walking in parallel to my weeding activity and displacing soil as I did. One magpie always perched on my kitchen window sill, looking in and watching my every move.
On one extraordinary occasion, an adult female magpie gingerly entered my house on foot, and hopped over to my desk where I was sitting. She watched me type on the keyboard and even looked at the screen. I had to get up to take a phone call and when I returned, the magpie had taken up a position at my keyboard, pecked the keys gently and then looked at the “results” on screen.
The bird was curious about everything I did. She also wanted to play with me and found my shoelaces particularly attractive, pulling them and then running away a little only to return for another go.
Importantly, it was the bird (not hand-raised but a free-living adult female) that had taken the initiative and chosen to socially interact. Such behaviour, as research has shown particularly in primates, is affiliative and forms part of the basis of social bonds and friendships.
If magpies can be so good with humans how can one explain their swooping at people (even if it is only for a few weeks in the year)? It’s worth bearing in mind that swooping magpies (invariably males on guard duty) do not act in aggression or anger but as nest defenders. The strategy they choose is based on risk assessment.
A risk is posed by someone who is unknown and was not present at the time of nest building, which unfortunately is often the case in public places and parks. That person is then classified as a territorial intruder and thus a potential risk to its brood. At this point the male guarding the brooding female is obliged to perform a warning swoop, literally asking a person to step away from the nest area.
If warnings are ignored, the adult male may try to conduct a near contact swoop aimed at the head (the magpie can break its own neck if it makes contact, so it is a strategy of last resort only). Magpie swooping is generally a defensive action taken when someone unknown approaches who the magpie believes intends harm. It is not an arbitrary attack.
When I was swooped for the first time in a public place I slowly walked over to the other side of the road. Importantly, I allowed the male to study my face and appearance from a safe distance so he could remember me in future – a useful strategy, since we now know that magpies remember human faces. Offering a piece of mince, or giving the magpie’s nest a wide berth, may eventually convince the nervous magpie that he does not need to deter this individual any more because she or he poses little or no risk, and who knows, may even become a friend in future.
A sure way of escalating conflict is to fend them off with an umbrella or any other device, or to run away at high speed. This human approach may well confirm for the magpie that the person concerned is dangerous and needs to be fought with every available strategy.
In dealing with magpies, as in global politics, de-escalating a perceived conflict is usually the best strategy.
This article was co-written by Adeline Lacroix, who works with Fabienne Cazalis and was recently diagnosed with Asperger syndrome. A second year master’s student in psychology, she is working on a scientific literature review about the characteristics of high-functioning autistic women.
Let’s call her Sophie. The description we’ll give could be that of any woman who is on the autistic spectrum without knowing it. Because they’re intelligent and used to compensating for communication impediments they may not be consciously aware of, these women slip through the cracks of our still-too-inefficient diagnostic procedures.
Studies reveal one woman for every nine men is diagnosed with so-called “high-functioning” autism, that is, autism without intellectual disability. If we compare this to the one woman for every four men diagnosed with the more readily identified “low-functioning” autism, we can easily imagine many autistic women are left undiagnosed.
Today, Sophie, who lives in France, has a job interview. If you could see her nervously twisting her hair, you might think she’s anxious, like anyone would be in the circumstances. You would be wrong. Sophie is actually on the verge of a panic attack. At 27, she just lost her job as a salesperson due to repeated cash-register mistakes – and it’s the eighth time in the last three years. She loved maths at university and is deeply ashamed. She hopes the person hiring will not bring up the subject – she has no justification for her professional failures and knows that she is incapable of making one up.
Learning accounting by herself at home
Sophie’s wish is granted: the interviewer asks her instead about her time at university. Relieved, she happily launches into an explanation of her masters thesis on meteorological modelling, but he cuts her off abruptly, clearly irritated. He wants to know why she is applying for a temporary job as an accounting assistant when she has no experience or training. Although her heart is racing wildly, Sophie manages to keep her composure, explaining that she taught herself accounting at home in the evenings. She describes the excellent MOOC (online course) she found on the website of the French Conservatoire National des Arts et Métiers, and tells him how one of the questions she asked the teacher on the forum led to a fascinating debate on the concept of depreciation expenses.
Sophie is not good at guessing what people are thinking, but she understands from the way the man is staring at her that he believes she is lying. Overwhelmed, she feels weaker by the minute. She watches his lips move but does not understand what he’s saying. Ten minutes later she’s in the street, with no memory of how the interview ended. She is shaking and holding back tears. She curses herself, wondering how anyone could be so stupid and pathetic.
She climbs into a crowded bus, swaying under the heavy odours of perfumes worn by those pressed up around her. When the bus brakes suddenly, she loses her balance and bumps into a fellow passenger. She apologises profusely and hurriedly gets off. In her rush, she trips again and falls to the pavement. “I must get up, everyone is looking,” she thinks, but her body refuses to obey. She can no longer see properly and doesn’t even realise her own tears are blinding her. Someone calls an ambulance. Sophie wakes up in a psychiatric facility. She will be misdiagnosed with a psychological disorder and given medication that will solve none of her problems.
A unique way of thinking, a taste for solitude, intense passions
Sophie’s story is typical of the chaotic lives led by women whose autism remains undiagnosed because they are on that part of the spectrum where the signs are less obvious. In spite of her impressive cognitive capacities – like the ability to teach herself a totally new field of knowledge – Sophie has no idea of her own talents, and neither, for the most part, do those around her. Trapped in a social environment highly critical of what makes her unique, such as her unusual way of thinking, taste for solitude, and the intensity of her passions, Sophie is acutely aware that these are seen as shortcomings.
If Sophie could be given the correct diagnosis of high-functioning autism, she would at last understand the way her mind works. She could meet other autistic adults and learn from their experience to help her overcome her own difficulties.
Autism is characterised by social and communicative difficulties, specific interests that people with autism are capable of speaking about for hours (like meteorological modelling, in Sophie’s case), and stereotyped behaviours. There are also differences in perception, such as hypersensitivity to smells or sounds, or, conversely, reduced sensitivity to pain. Autism is thought to affect around one in one hundred people.
Some 70% of people with autism have either normal or superior intelligence. This form of autism is generally referred to as high-functioning autism, as per the latest version of the “bible” of psychiatric disorders, the DSM-5 (Diagnostic and Statistical Manual of Mental Disorders). In this version, all reference to older categories has been removed, including Asperger syndrome. The term Asperger’s is still used today in some countries, however, even though all types of autism are now grouped under a single spectrum and classified according to the severity of symptoms.
Appropriate support throughout schooling
Ideally, Sophie would have been diagnosed as a child. She could have benefited from specialised support throughout her schooling, as is legally required in France and other countries. This support would have made her less vulnerable, giving her the tools to defend herself from bullying in the schoolyard and helping her learn with teaching methods adapted to her way of thinking. Upon leaving school, her diagnosis would have opened up access to labour rights, such as disabled worker status, which would have helped her find suitable employment. Sophie’s life would have been simpler and she would be more at peace with herself.
But Sophie’s problems are twofold. Not only is she autistic, but she’s also a woman. If getting a diagnosis is already tricky for men, it’s even more difficult for women. Originally, autism was thought to only rarely affect women. This erroneous idea, which emerged from a 1943 study conducted by Léo Kanner (the first psychiatrist to describe the syndrome), has been reinforced by the long-dominant psychoanalytical approach. The criteria defining autistic symptoms were based on observations in boys.
Later, when science replaced psychoanalysis as the dominant model, studies were largely conducted on male children, thus reducing the chances of recognising autism as it’s manifested in females. This phenomenon, also present in other areas of science and medicine, has far-reaching implications today.
Similar test results for boys and girls
To diagnose autism spectrum disorder (ASD), doctors and psychologists evaluate quantitative criteria using tests and questionnaires, but also qualitative criteria, like interests, stereotyped movements, difficulties with eye contact and language, and social isolation. But while autistic girls show similar test results to autistic boys, the clinical manifestation of their condition differs, at least in cases where language has been acquired.
With social-imitation strategies, for example, autistic girls have less trouble making friends than autistic boys; they have seemingly more ordinary interests than boys (for example horses, rather than maps of the subway); while less restless than boys, they are more vulnerable to less-visible anxiety disorders, and more adept at camouflaging their stereotyped and soothing ritual behaviours. In other words, their autism is less obtrusive, which means their symptoms are less obvious to their families, teachers and doctors.
Biology and environment explain these differences, and in this case it’s impossible to separate nature from nurture. On the nature side of the argument, some hypothesise that girls are better equipped for social cognition and more drawn to caring roles. This would explain why they appear to be more interested in the animate (cats, celebrities, flowers) than the inanimate (cars, robots, rail networks).
When it comes to nurture, girls and boys are not brought up in the same way. Socially acceptable behaviours differ according to sex. Although autistic children are more resistant to this phenomenon, the pressure to conform is so strong it still ends up influencing their behaviour, as illustrated by the case of Gunilla Gerland. As a girl, this Swedish woman didn’t want to wear rings or bracelets because she hated the way metal felt on her skin. Observing that adults could not fathom that a little girl might not like these things, she resigned herself to getting gifts of jewellery, and even learned to thank the giver, before stashing the object away in a box at the earliest opportunity.
Skilled in the art of camouflage
As autistic girls grow up, the gap between how their condition manifests and how it manifests in boys widens. As adults, some autistic women can become highly skilled in the art of camouflage, which explains the use of the term “invisible disability” to describe certain types of high-functioning autism. Incidentally, this is the meaning of the title of Julie Dachez’s 2016 graphic novel, The Invisible Difference (Delcourt).
More and more women are discovering their condition later in life and sharing their experience. Since September 2016, the Francophone Association of Autistic Women (Association francophone des femmes autistes, or AFFA) has been fighting for recognition of the specific ways autism manifests in women. A learned society on autism in women is also being created in France, bringing together the general and scientific communities, with the goal of promoting dialogue between researchers and autistic women.
A specific questionnaire for girls
Historically, major figures in autism research suspected that autism was more prevalent in women than diagnoses suggested. The Austrian Hans Asperger (for whom the syndrome is named) put forward the idea as early as 1944, as did British psychiatrist Lorna Wing in 1981. But it’s only in recent years that the scientific community has really started examining the evidence.
Some researchers aim to better understand the specific characteristics of autism in women. Since the beginning of this year, volunteers have been invited to participate in a study on “autism in women” conducted by Laurent Mottron, a professor in the department of psychiatry at the University of Montreal (Canada), and Pauline Duret, a doctoral student in neuroscience, in collaboration with myself and Adeline Lacroix, working at the École des Hautes Études en Sciences Sociales (EHESS) in Paris (France). Adeline Lacroix is a master’s student in psychology and has herself been diagnosed with autism.
Other studies are attempting to adapt diagnostic tools for use with female subjects. A team made up of Australian scientists Sarah Ormond, Charlotte Brownlow, Michelle Garnett, and Tony Attwood, and Polish scientist Agnieszka Rynkiewicz, is currently perfecting a specific questionnaire for young girls, the Q-ASC (“Questionnaire for autism spectrum conditions”). They presented their work in May 2017 at a conference in San Francisco.
While there has been an initial trove of interesting results, current research into the specific characteristics of autism in women is raising more questions than it answers. However, the confusion could be considered a necessary step toward the acquisition of knowledge, provided the women affected can contribute to the research and share their point of view on the direction the work should take.
Ordinary citizens can also work towards ensuring autistic girls have the same rights as their male counterparts. By gaining a better understanding of the different forms of autism, everyone can contribute to a world in which children and adults with autism can find their place, and help fight exclusion by creating an inclusive society.
The recent mass shooting in Las Vegas that left dozens of people dead and hundreds injured raises two important questions: How do dangerous people get their guns? And what should the police and courts be doing to make those transactions more difficult?
The fact is that, even leaving aside the assault in Las Vegas and terrorist attacks like the one in San Bernardino, California, in 2015, gun violence is becoming almost routine in many American neighborhoods. The U.S. homicide rate increased more than 20 percent from 2014 to 2016, while last year’s 3.4 percent rise in the violent crime rate was the largest single-year gain in 25 years.
The guns carried and misused by youths, gang members and active criminals are more likely than not obtained through transactions that violate federal or state law. And, as I’ve learned from my decades of researching the topic, it is rare for the people who provide these guns to the eventual shooters to face any legal consequences.
How can this illicit market be policed more effectively?
Undocumented and unregulated transactions
The vast majority of gun owners say they obtained their weapons in transactions that are documented and for the most part legal.
When asked where and how they acquired their most recent firearm, about 64 percent of a cross-section of American gun owners reported buying it from a gun store, where the clerk would have conducted a background check and documented the transfer in a permanent record required by federal law. Another 14 percent acquired theirs in some other way that still involved a background check. The remaining 22 percent said they got their guns without a background check.
The same is not true for criminals, however, most of whom obtain their guns illegally.
A transaction can be illegal for several reasons, but of particular interest are transactions that involve disqualified individuals – those banned from purchase or possession due to criminal record, age, adjudicated mental illness, illegal alien status or some other reason. Convicted felons, teenagers and other people who are legally barred from possession would ordinarily be blocked from purchasing a gun from a gun store because they would fail the background check or lack the permit or license required by some states.
Anyone providing the gun in such transactions would be culpable if he or she had reason to know that the buyer was disqualified or was acting as a straw purchaser, or if the transaction violated state regulations pertaining to private sales.
The importance of the informal (undocumented) market in supplying criminals is suggested by the results of inmate surveys and data gleaned from guns confiscated by the police. A national survey of inmates of state prisons found that just 10 percent of youthful (age 18-40) male respondents who admitted to having a gun at the time of their arrest had obtained it from a gun store. The other 90 percent obtained them through a variety of off-the-book means: for example, as gifts or sharing arrangements with fellow gang members.
Similarly, an ongoing study of how Chicago gang members get their guns has found that only a trivial percentage obtained them by direct purchase from a store. To the extent that gun dealers are implicated in supplying dangerous people, it is more so by accommodating straw purchasers and traffickers than in selling directly to customers they know to be disqualified.
The supply chain of guns to crime
While criminals typically do not buy their guns at a store, all but a tiny fraction of those in circulation in the United States are first sold at retail by a gun dealer – including the guns that eventually end up in the hands of criminals.
That first retail sale was most likely legal, in that the clerk followed federal and state requirements for documentation, a background check and record-keeping. While there are scofflaw dealers who sometimes make under-the-counter deals, that is by no means the norm.
If a gun ends up in criminal use, it is usually after several more transactions. The average age of guns taken from Chicago gangs is over 11 years.
The gun at that point has been diverted from legal commerce. In this respect, the supply chain for guns is similar to that for other products that have a large legal market but are subject to diversion.
In the case of guns, diversion from licit possession and exchange can occur in a variety of ways: theft, purchase at a gun show by an interstate trafficker, private sales where no questions are asked, straw purchases by girlfriends and so forth.
What appears to be true is that there are few big operators in this domain. The typical trafficker or underground broker is not making a living that way but rather just making a few dollars on the side. The supply chain for guns used in crime bears little relationship to the supply chain for heroin or cocaine and is much more akin to that for cigarettes and beer that are diverted to underage teenagers.
There have been few attempts to estimate the scope or scale of the underground market, in part because it is not at all clear what types of transactions should be included. But for the sake of having some order-of-magnitude estimate, suppose we just focus on the number of transactions each year that supply the guns actually used in robbery or assault.
There are about 500,000 violent crimes committed with a gun each year. If the average number of times that an offender commits a robbery or assault with a particular gun is two, then (assuming patterns of criminal gun use remain constant) the total number of transactions of concern is 250,000 per year.
Actually, no one knows the average number of times a specific gun is used by an offender who uses it at least once. If it is more than two, then there are even fewer relevant transactions.
That compares with total sales volume by licensed dealers, which is upwards of 20 million per year.
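The order-of-magnitude estimate above can be reproduced in a few lines. This is only a sketch of the article’s back-of-envelope reasoning, using the figures cited in the text; the value of two uses per crime gun is the author’s illustrative assumption, not a measured quantity.

```python
# Back-of-envelope estimate of underground gun transactions "of concern",
# using the figures cited in the text above.
violent_gun_crimes_per_year = 500_000   # annual robberies/assaults with a gun
avg_crimes_per_gun = 2                  # illustrative assumption; a higher value means fewer transactions

transactions_of_concern = violent_gun_crimes_per_year // avg_crimes_per_gun
licensed_dealer_sales = 20_000_000      # annual retail sales by licensed dealers, per the text

print(transactions_of_concern)          # 250000
# The illicit transactions of concern amount to roughly 1% of legal retail volume:
print(transactions_of_concern / licensed_dealer_sales)
```

As the text notes, if each crime gun is used more than twice on average, the 250,000 figure shrinks further, so it should be read as an upper-end rough estimate rather than a measurement.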
All in the family
So how do gang members, violent criminals, underage youths and other dangerous people get their guns?
A consistent answer emerges from the inmate surveys and from ethnographic studies. Whether guns that end up being used in crime are purchased, swapped, borrowed, shared or stolen, the most likely source is someone known to the offender, an acquaintance or family member.
For example, Syed Rizwan Farook – one of the shooters in San Bernardino – relied on a friend to get several of the rifles and pistols he used because Farook doubted that he could pass a background check. That a friend and neighbor was the source is quite typical, despite the unique circumstances otherwise.
Also important are “street” sources, such as gang members and drug dealers, which may also entail a prior relationship. Thus, social networks play an important role in facilitating transactions, and an individual (such as a gang member) who tends to hang out with people who have guns will find it relatively easy to obtain one.
Effective policing of the underground gun market could help to separate guns from everyday violent crime. Currently it is rare for those who provide guns to offenders to face any legal consequences, and changing that situation will require additional resources to penetrate the social networks of gun offenders.
Needless to say, that effort is not cheap or easy and requires that both the police and the courts have the necessary authority and give this sort of gun enforcement high priority.
This is an updated version of an article originally published on Jan. 15, 2016.