Reblogged: Holding a placard outside court isn’t illegal, judge rules: is that the best British democracy has to offer?

Holding a placard outside court isn’t illegal, judge rules – is that the best British democracy has to offer?

Steven Cammiss, University of Birmingham and Graeme Hayes, Aston University

The UK High Court recently dismissed the case against environmental activist Trudi Warner, who was referred for contempt of court in March 2023. Civil liberties campaigners hailed the decision as a “huge win for democracy”, but is it?

Warner had stood outside the Old Bailey, England’s most important criminal court in central London, with a sign that read “Jurors have an absolute right to acquit according to their conscience”. She did so at the start of a trial of climate activists who had been charged with public nuisance for obstructing traffic. Warner’s sign paraphrased the text on a plaque on display at the Old Bailey itself.

Known as jury equity, the legal principle evoked by this statement dates back to 1670 and is often cited, not least by leading legal figures and in the decisions of the higher courts, as a cornerstone of English democracy: juries can decide according to their conscience, and cannot be bullied into finding as the law dictates.

Indeed, many legal commentators saw the case against Warner as perverse. Since the threat of contempt proceedings was brought by the solicitor general (a government minister responsible for legal advice), Warner’s protest has been repeated outside courtrooms throughout the country at the instigation of campaign group Defend our Juries.

Why have juries become so important for protesters in the UK – and are they any more secure in their right to protest as a result of the High Court’s decision?

Jury equity and protest trials

Among recent protest prosecutions, Warner’s case is unique: as she saw it, her aim was to educate jurors on their rights.

For most non-violent disruptive protests being dealt with in English courts, defendants (like Warner) typically accept they did what they are alleged to have done, but argue they had a lawful basis for doing so. This is the case in many trials, from Extinction Rebellion to Palestine Action.

Over the last five years, this basis has been whittled away through government referrals to the Court of Appeal and decisions by that court which have removed the protection of lawful excuse and necessity defences in protest cases.

Meanwhile, new public order legislation has turned minor acts of disruption (such as occupying the highway) into serious acts of criminality punishable by prison sentences. The Court of Appeal endorsed long sentences for two non-violent activists who closed the Queen Elizabeth II bridge on the M25 in October 2022. Such is the parlous state of the court system following a decade of austerity that judges are under pressure to manage trials quickly.

Warner’s case brings each of these dynamics into sharp focus. Activists now regularly find themselves in court unable to present a defence in law for their actions, but remain committed to justifying them, because being publicly accountable is important to them. The only way they can avoid potentially severe punishments is by persuading juries not to convict them through the sincerity of their arguments and the public utility of their actions.

As such, jury equity is now often their only recourse. But judges, seeking to manage trials, regularly impose limits on what defendants can say in court, and for how long they can say it, particularly when they have no defence in law. In fact, Warner’s action stemmed from the widely publicised rulings of Judge Silas Reid in several Insulate Britain trials, who forbade defendants from addressing the jury on the climate emergency, and imprisoned two defendants for contempt for defying his order.

Restoring faith in British justice?

Does the High Court’s denial of permission to prosecute Warner indicate that the courts now seek to give greater protections to non-violent, disruptive protesters? Warner herself seems to think so, saying the decision “has restored my faith a little in British justice”.

The High Court ruled that Warner’s actions did not meet the threshold for contempt and that it would not be in the public interest to prosecute her. In fact, the court noted it would be “a disproportionate approach to this situation in a democratic society”. This can be read as affirming that protest is central to democratic life, rather than an irritant existing outside of it, and certainly gives some support to Warner’s faith.

But other elements of the court’s reasoning are less supportive. By noting that jurors swear an oath to make decisions according to the law, the court upheld a principle we have seen in numerous climate activist trials: defendants cannot invite a jury to apply the equity principle, nor even to inform them of it. This decision may allow people not involved in a case to do what Warner did, but in the courtroom itself, jury equity is to remain something of a dirty secret to be kept from jurors.

In deciding whether Warner’s actions were sufficient for contempt, the court also made much of her passivity in simply holding her sign; Warner did not attempt to engage with anyone entering the Old Bailey. She was, in both her own words and those of the judge, simply “a human billboard”.

Would the court have decided differently had Warner been more assertive? Where is the line between her permissible actions and those that would be deemed an unlawful hindrance of jurors entering the court?

A closer reading of the judgment suggests that, despite Warner’s victory, little has changed in the law’s view of protest. There is a good chance that Warner’s actions were tolerated for the very qualities that made her case so compelling: through her deliberate passivity, in the eyes of the law, she corresponded to the ideal of how protesters should behave. The court’s decision very much fits with a tolerance only of protest which is not disruptive (and, we might argue, not particularly effective).

It is unlikely then that the Warner outcome signals a return to a more liberal understanding of the role of protest as a democratic right. The court’s decision, if welcome, serves rather to underline how diminished the opportunities for real democratic agency are in Britain today.




Steven Cammiss, Associate Professor, Birmingham Law School, University of Birmingham and Graeme Hayes, Reader in Political Sociology, Aston University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Older adults are getting smarter






Ageism appears rampant in the UK, with the over-50s losing their jobs at twice the rate of younger people and finding it three times more difficult to find employment after months of job hunting. While there are many reasons for this, perhaps one is the assumption that older people aren’t quite as sharp as they used to be.


We know that older adults, those over 65, don’t perform as well as younger people, aged 18-30, on tests of memory, spatial ability and speed of processing, which often form the basis of IQ tests. However, there’s good news for us all. New research suggests that this difference in ability between younger and older generations appears to be shrinking over time – with older people catching up with their younger peers.

(from the email in my inbox)

From my own observations as a younger person, I know that the impression that older adults are “slow” can be merely the result of the declining near-sight of older adults. It can take them – us – slightly longer to spot things on screens, for example, just because most older adults’ vision is not as good as it used to be.

I remember making this snap judgement about someone else once, when I was much younger. I am fortunate enough to be near-sighted, so while my near sight has declined with age, I still don’t need reading glasses unless I am wearing contact lenses for far sight.


Are young people smarter than older adults? My research shows cognitive differences between generations are diminishing

AshTproductions/Shutterstock

Stephen Badham, Nottingham Trent University

We often assume young people are smarter, or at least quicker, than older people. For example, we’ve all heard that scientists, and even more so mathematicians, carry out their most important work when they’re comparatively young.

But my new research, published in Developmental Review, suggests that cognitive differences between the old and young are tapering off over time. This is hugely important as stereotypes about the intelligence of people in their sixties or older may be holding them back – in the workplace and beyond.

Cognitive ageing is often measured by comparing young adults, aged 18-30, to older adults, aged 65 and over. There are a variety of tasks that older adults do not perform well on compared to young adults, such as memory, spatial ability and speed of processing, which often form the basis of IQ tests. That said, there are a few tasks that older people do better at than younger people, such as reading comprehension and vocabulary.

Declines in cognition are driven by a process called cognitive ageing, which happens to everyone. Surprisingly, age-related cognitive deficits start very early in adulthood, and declines in cognition have been measured in adults as young as 25.

Often, it is only when people reach older age that these effects add up to a noticeable amount. Common complaints consist of walking into a room and forgetting why you entered, as well as difficulty remembering names and struggling to drive in the dark.

The trouble with comparison

Sometimes, comparing young adults to older adults can be misleading though. The two generations were brought up in different times, with different levels of education, healthcare and nutrition. They also lead different daily lives, with some older people having lived through a world war while the youngest generation is growing up with the internet.

Most of these factors favour the younger generation, and this can explain a proportion of their advantage in cognitive tasks.

Indeed, much existing research shows that IQ has been improving globally throughout the 20th century. This means that later-born generations are more cognitively able than those born earlier. This is even found when both generations are tested in the same way at the same age.

Currently, there is growing evidence that increases in IQ are levelling off, such that, in the most recent couple of decades, young adults are no more cognitively able than young adults born shortly beforehand.

Together, these factors may underlie the current result, namely that cognitive differences between young and older adults are diminishing over time.

New results

My research began when my team started getting strange results in our lab. We found that the age differences we were getting between young and older adults were often smaller or absent compared with prior research from the early 2000s.

This prompted me to start looking at trends in age differences across the psychological literature in this area. I uncovered a variety of data that compared young and older adults from the 1960s up to the current day. I plotted this data against year of publication, and found that age deficits have been getting smaller over the last six decades.

Next, I assessed whether the average increase in cognitive ability over time seen across all individuals also applied to older adults specifically. Many large databases exist where groups of individuals are recruited every few years to take part in the same tests. I analysed studies using these data sets to look at older adults.

I found that, just like younger people, older adults were indeed becoming more cognitively able with each cohort. But if differences are disappearing, does that mean younger people’s improvements in cognitive ability have slowed down or that older people’s have increased?

I analysed data from my own laboratory that I had gathered over a seven-year period to find out. Here, I was able to dissociate the performance of the young from the performance of the older. I found that each cohort of young adults was performing to a similar extent across this seven-year period, but that older adults were showing improvements in both processing speed and vocabulary scores.

The figure shows data for a speed-based task where higher scores represent better performance.
CC BY-SA

I believe the older adults of today are benefiting from many of the factors previously most applicable to young adults. For example, the number of children who went to school increased significantly in the 1960s – with the system being more similar to what it is today than what it was at the start of the 20th century.

This is being reflected in that cohort’s increased scores today, now they are older adults. At the same time, young adults have hit a ceiling and are no longer improving as much with each cohort.

It is not entirely clear why the young generations have stopped improving so much. Some research has explored maternal age, mental health and even evolutionary trends. I favour the opinion that there is just a natural ceiling – a limit to how much factors such as education, nutrition and health can improve cognitive performance.

These data have important implications for research into dementia. For example, it is possible that a modern older adult in the early stages of dementia might pass a dementia test that was designed 20 or 30 years ago for the general population at that time.

Therefore, as older adults are performing better in general than previous generations, it may be necessary to revise definitions of dementia that depend on an individual’s expected level of ability.

Ultimately, we need to rethink what it means to become older. And there’s finally some good news: we can expect to be more cognitively able than our grandparents were when we reach their age.

Stephen Badham, Professor of Psychology, Nottingham Trent University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Chronic stress is bad for your health

How chronic stress changes the brain – and what you can do to reverse the damage

Stress can make your life considerably less colourful.
Semnic

Barbara Jacquelyn Sahakian, University of Cambridge; Christelle Langley, University of Cambridge, and Muzaffer Kaser, University of Cambridge

A bit of stress is a normal part of our daily lives, which can even be good for us. Overcoming stressful events can make us more resilient. But when the stress is severe or chronic – for example caused by the breakdown of a marriage or partnership, death in the family or bullying – it needs to be dealt with immediately.

That’s because repeated stress can have a huge impact on our brain, putting us at risk of a number of physical and psychological problems.

Repeated stress is a major trigger for persistent inflammation in the body. Chronic inflammation can lead to a range of health problems, including diabetes and heart disease. The brain is normally protected from circulating molecules by a blood-brain barrier. But under repeated stress, this barrier becomes leaky and circulating inflammatory proteins can get into the brain.

The brain’s hippocampus is a critical brain region for learning and memory, and is particularly vulnerable to such insults. Studies in humans have shown that inflammation can adversely affect brain systems linked to motivation and mental agility.

There is also evidence of chronic stress effects on hormones in the brain, including cortisol and corticotropin releasing factor (CRF). High, prolonged levels of cortisol have been associated with mood disorders as well as shrinkage of the hippocampus. It can also cause many physical problems, including irregular menstrual cycles.

Mood, cognition and behaviour

It is well established that chronic stress can lead to depression, which is a leading cause of disability worldwide. It is also a recurrent condition – people who have experienced depression are at risk for future bouts of depression, particularly under stress.

There are many reasons for this, and they can be linked to changes in the brain. The reduced hippocampus that persistent exposure to stress hormones and ongoing inflammation can cause is more commonly seen in depressed patients than in healthy people.

Chronic stress ultimately also changes the chemicals in the brain which modulate cognition and mood, including serotonin. Serotonin is important for mood regulation and wellbeing. In fact, selective serotonin reuptake inhibitors (SSRIs) are used to restore the functional activity of serotonin in the brain in people with depression.

Sleep and circadian rhythm disruption is a common feature in many psychiatric disorders, including depression and anxiety. Stress hormones, such as cortisol, play a key modulatory role in sleep. Elevated cortisol levels can therefore interfere with our sleep. The restoration of sleep patterns and circadian rhythms may therefore provide a treatment approach for these conditions.

Depression can have huge consequences. Our own work has demonstrated that depression impairs cognition in both non-emotional domains, such as planning and problem-solving, and emotional and social areas, such as creating attentional bias to negative information.

Burning out? Be careful.
Andrey_Popov

In addition to depression and anxiety, chronic stress and its impact at work can lead to burnout symptoms, which are also linked to increased frequency of cognitive failures in daily life. As individuals are required to take on an increased workload at work or school, this may lead to reduced feelings of achievement and increased susceptibility to anxiety, creating a vicious cycle.

Stress can also interfere with our balance between rational thinking and emotions. For example, the stressful news about the global spread of the novel Coronavirus has caused people to hoard hand sanitisers, tissues and toilet paper. Shops are becoming empty of these supplies, despite reassurance by the government that there is plenty of stock available.

This is because stress may force the brain to switch to a “habit system”. Under stress, brain areas such as the putamen, a round structure at the base of the forebrain, show greater activation. Such activation has been associated with hoarding behaviour. In addition, in stressful situations, the ventromedial prefrontal cortex, which plays a role in emotional cognition – such as evaluation of social affiliations and learning about fear – may enhance irrational fears. Eventually, these fears essentially override the brain’s usual ability for cold, rational decision-making.

Overcoming stress

So what should you do if you are suffering from chronic stress? Luckily there are ways to tackle it. The UK Government Foresight Project on Mental Capital and Wellbeing has recommended evidence-based ways to improve mental wellbeing.

We know, for example, that exercise has established benefits against chronic stress. Exercise tackles inflammation by leading to an anti-inflammatory response. In addition, exercise increases neurogenesis – the production of new brain cells – in important areas, such as the hippocampus. It also improves your mood, your cognition and your physical health.

Another key way to beat stress involves connecting with people around you, such as family, friends and neighbours. When you are under stress, relaxing and interacting with friends and family will distract you and help reduce the feelings of stress.

Learning may be a less obvious method. Education leads to a cognitive reserve – a stockpile of thinking abilities – which provides some protection when we have negative life events. In fact, we know that people are less likely to suffer from depression and problems in cognition if they have better cognitive reserve.

Other methods include mindfulness, allowing us to take notice and be curious of the world around us and spend time in the moment. Giving is another – volunteering or donating to a charity activates the reward system in your brain and promotes positive feelings about life.

Importantly, when you experience chronic stress, do not wait and let things get the better of you. Early detection and effective treatment are the key to a good outcome and good wellbeing. Remember to act in a holistic manner to improve your mood, your thinking and your physical health.

And you don’t have to wait until you are overwhelmed with stress. Ultimately, it is important that we learn from an early age to keep our brain fit throughout our whole life course.

Barbara Jacquelyn Sahakian, Professor of Clinical Neuropsychology, University of Cambridge; Christelle Langley, Postdoctoral Research Associate, Cognitive Neuroscience, University of Cambridge, and Muzaffer Kaser, Clinical Lecturer, University of Cambridge

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Why bystanders rarely speak up when they witness sexual harassment

 

If you see something, say something.
Photographee.eu

George B. Cunningham, Texas A&M University

The uproar over allegations that Hollywood producer Harvey Weinstein sexually abused and harassed dozens of the women he worked with is inspiring countless women (and some men) to share their own personal sexual harassment and assault stories.

With these issues trending on social media with the hashtag #MeToo, it’s getting harder to ignore how common they are on the job and in other settings.

I have studied sexual harassment and ways to prevent it as a diversity and inclusion researcher. My research on how people often fail to speak out when they witness these incidents might help explain why Weinstein could reportedly keep his despicable behavior an open secret for decades.


Witnessing sexual harassment

Of course, Weinstein’s alleged wrongdoings went well beyond sexual harassment, which University of British Columbia gender scholar Jennifer Berdahl defines as “behavior that derogates, demeans or humiliates an individual based on that individual’s sex.”

Some of the women speaking out in the U.S. and abroad are accusing him of rape – a crime – during encounters he says were always consensual.

But sexual harassment is such a chronic workplace problem that it accounts for a third of the 90,000 charges filed with the federal government’s Equal Employment Opportunity Commission (EEOC) in 2015. Since only one in four victims report it, however, the EEOC and other experts say the actual number of incidents is far higher than the official number of complaints would suggest.

The usual silence leaves most perpetrators of this toxic behavior free to prey on their co-workers and subordinates. If sexual harassment is pervasive on the job, and most women don’t report it, what can be done?

Some business scholars suggest that the best way to prevent sexual harassment, bullying and other toxic workplace behavior is to train co-workers to stand up for their abused colleagues when they witness incidents. One reason why encouraging intervention makes good sense is that some 70 percent of women have observed harassment in the workplace, according to research by psychologist Robert Hitlan.

The trouble is that most people who witness or become aware of sexual harassment don’t speak out. Screenwriter, producer and actor Scott Rosenberg has both admitted to and denounced how this dynamic enabled Weinstein to become an alleged serial abuser. “Let’s be perfectly clear about one thing,” he wrote in a private Facebook post published in the media. “Everybody-f—ing-knew.” He also said:

“in the end, I was complicit.
I didn’t say s—.
I didn’t do s—.
Harvey was nothing but wonderful to me.
So I reaped the rewards and I kept my mouth shut.
And for that, once again, I am sorry.”

Actor Matt Damon, right, has denied reports that he helped stifle reporting that would have exposed alleged sexual harassment and abuse by movie mogul Harvey Weinstein, left, years ago.
AP Photo/Matt Sayles

Researching how people respond

To understand why witnesses often don’t speak up, a colleague and I did a study in 2010 that asked participants to review hypothetical sexual harassment scenarios and indicate if they would respond.

The results seemed promising: Participants generally said they would take steps to stop harassing behavior if they saw it happen. People indicated they’d be more likely to respond if two conditions were met: It was a quid pro quo – that is, if the harasser promised benefits in exchange for sexual favors – and the workplace valued diversity and inclusion. In such cultures, there are open lines of communication, and leaders embrace diversity and inclusion.

There’s a potential problem with experiments using the kind of hypothetical scenario that we and others employed. People don’t always do what they think they will in real-life situations. For example, psychologists find that people tend to believe they’ll feel more distraught during an emotionally devastating event than they actually do when it occurs.

Other researchers find similar patterns with reactions to racism. People think they will recoil and experience distress when hearing racist comments. But when they actually hear those remarks, they don’t.

The same dynamics are at play when examining sexual harassment during job interviews, as illustrated in a study conducted by psychologists Julie Woodzicka and Marianne LaFrance.

Participants, all of whom were women, expected to feel angry, confront the harasser and refuse to answer the hypothetical interviewer’s inappropriate questions. Some of the questions, for example, included asking the job applicant if she had a boyfriend or if women should wear bras at work.

However, when they witnessed this simulated behavior during the experiment’s mock interviews, people responded differently. In fact, 68 percent of participants who only read about the incidents said they would refuse to answer questions. Yet all 50 of the participants who witnessed the staged hostile behavior answered them.

Drawing from these studies, my team conducted an experiment in 2012 to determine how harassment bystanders would react to hearing inappropriate comments about women.

Some of the female participants read about a hypothetical scenario in which harassment took place, while another group observed harassment occurring in a staged setting. We determined that the participants, who were college students, overestimated how they would respond to seeing someone else get harassed.

The reason this matters is that people who don’t feel distress are unlikely to take action.




Intervention training

What stops people from reacting the way they think they will?

Psychologists blame this disparity on “impact bias.” People overestimate the impact that all future events – be they weddings, funerals or even the Super Bowl – will have on them emotionally. Real life is messier than our imagined futures, with social pressures and context making a difference.

This suggests a possible solution. Since context matters, organizations can take steps to encourage bystanders to take action.

For example, they can train their staff to speak up with the Green Dot Violence Prevention Program or other approaches. The Green Dot program was originally designed to reduce problems like sexual assault and stalking by encouraging bystanders to do something. The EEOC says this “bystander intervention training might be effective in the workplace.”

Especially with workplace harassment, establishing direct and anonymous lines for reporting sexist incidents is essential. The EEOC also says employees should not fear negative reprisal or gossip when they do report harassment.

Finally, bystanders are more likely to intervene in organizations that make their refusal to tolerate harassment clear. For that to happen, leaders must assert and demonstrate their commitment to harassment-free workplaces, enforce appropriate policies and train new employees accordingly.

Until more people take a stand when they witness sexual harassment, it will continue to haunt American workplaces.

George B. Cunningham, Professor of Sport Management, Faculty Affiliate of the Women’s and Gender Studies Program, and Director, Laboratory for Diversity in Sport, Texas A&M University

This article was originally published on The Conversation. Read the original article.

How seeing problems in the brain makes stigma disappear

 

A pair of identical twins. The one on the right has OCD, while the one on the left does not.
Brain Imaging Research Division, Wayne State University School of Medicine, CC BY-SA

David Rosenberg, Wayne State University

As a psychiatrist, I find that one of the hardest parts of my job is telling parents and their children that they are not to blame for their illness.

Children with emotional and behavioral problems continue to suffer considerable stigma. Many in the medical community refer to them as “diagnostic and therapeutic orphans.” Unfortunately, for many, access to high-quality mental health care remains elusive.

An accurate diagnosis is the best way to tell whether or not someone will respond well to treatment, though that can be far more complicated than it sounds.

I have written three textbooks about using medication in children and adolescents with emotional and behavioral problems. I know that this is never a decision to take lightly.

But there’s reason for hope. Although brain imaging, genetics and other technologies cannot yet diagnose any psychiatric condition on their own, dramatic advances in these fields are helping us objectively identify mental illness.

Knowing the signs of sadness

All of us experience occasional sadness and anxiety, but persistent problems may be a sign of a deeper issue. Ongoing issues with sleeping, eating, weight, school and pathologic self-doubt may be signs of depression, anxiety or obsessive-compulsive disorder.

Separating out normal behavior from problematic behavior can be challenging. Emotional and behavior problems can also vary with age. For example, depression in pre-adolescent children occurs equally in boys and girls. During adolescence, however, depression rates increase much more dramatically in girls than in boys.

It can be very hard for people to accept that they – or their family member – are not to blame for their mental illness. That’s partly because there are no current objective markers of psychiatric illness, making it difficult to pin down. Imagine diagnosing and treating cancer based on history alone. Inconceivable! But that is exactly what mental health professionals do every day. This can make it harder for parents and their children to accept that they don’t have control over the situation.

Fortunately, there are now excellent online tools that can help parents and their children screen for common mental health issues such as depression, anxiety, panic disorder and more.

Most important of all is making sure your child is assessed by a licensed mental health professional experienced in diagnosing and treating children. This is particularly important when medications that affect the child’s brain are being considered.

Seeing the problem

Thanks to recent developments in genetics, neuroimaging and the science of mental health, it’s becoming easier to characterize patients. New technologies may also make it easier to predict who is more likely to respond to a particular treatment or experience side effects from medication.

Our laboratory has used brain MRI studies to help unlock the underlying anatomy, chemistry and physiology underlying OCD. This repetitive, ritualistic illness – while sometimes used among laypeople to describe someone who is uptight – is actually a serious and often devastating behavioral illness that can paralyze children and their families.

In children with OCD, the brain’s arousal center, the anterior cingulate cortex, is ‘hijacked.’ This causes critical brain networks to stop working properly.
Image adapted from Diwadkar VA, Burgess A, Hong E, Rix C, Arnold PD, Hanna GL, Rosenberg DR. Dysfunctional activation and brain network profiles in youth with Obsessive-Compulsive Disorder: A focus on the dorsal anterior cingulate during working memory. Frontiers in Human Neuroscience. 2015; 9: 1-11., CC BY-SA

Through sophisticated, high-field brain imaging techniques – such as fMRI and magnetic resonance spectroscopy – that have become available recently, we can actually measure a child’s brain to see malfunctioning areas.

We have found, for example, that children 8 to 19 years old with OCD never get the “all clear signal” from a part of the brain called the anterior cingulate cortex. This signal is essential to feeling safe and secure. That’s why, for example, people with OCD may continue checking that the door is locked or repeatedly wash their hands. They have striking brain abnormalities that appear to normalize with effective treatment.

We have also begun a pilot study with a pair of identical twins. One has OCD and the other does not. We found brain abnormalities in the affected twin, but not in the unaffected twin. Further study is clearly warranted, but the results fit the pattern we have found in larger studies of children with OCD before and after treatment as compared to children without OCD.

Exciting brain MRI and genetic findings are also being reported in childhood depression, non-OCD anxiety, bipolar disorder, ADHD and schizophrenia, among others.

Meanwhile, the field of psychiatry continues to grow. For example, new techniques may soon be able to identify children at increased genetic risk for psychiatric illnesses such as bipolar disorder and schizophrenia.

New, more sophisticated brain imaging and genetics technology actually allows doctors and scientists to see what is going on in a child’s brain and genes. For example, by using MRI, our laboratory discovered that the brain chemical glutamate, which serves as the brain’s “light switch,” plays a critical role in childhood OCD.

What a scan means

When I show families their child’s MRI brain scans, they often tell me they are relieved and reassured to “be able to see it.”

Children with mental illness continue to face enormous stigma. Often when they are hospitalized, families are frightened that others may find out. They may hesitate to let schools, employers or coaches know about a child’s mental illness. They often fear that other parents will not want to let their children spend too much time with a child who has been labeled mentally ill. Terms like “psycho” or “going mental” remain part of our everyday language.

The example I like to give is epilepsy. Epilepsy once had all the stigma that mental illness has today. In the Middle Ages, people with epilepsy were considered to be possessed by the devil. Then, more advanced thinking said that people with epilepsy were crazy. Who else would shake all over their body or urinate and defecate on themselves but a crazy person? Many patients with epilepsy were locked in lunatic asylums.

Then in 1924, psychiatrist Hans Berger discovered something called the electroencephalogram (EEG). This showed that epilepsy was caused by electrical abnormalities in the brain. The specific location of these abnormalities dictated not only the diagnosis but the appropriate treatment.

That is the goal of modern biological psychiatry: to unlock the mysteries of the brain’s chemistry, physiology and structure. This can help better diagnose and precisely treat childhood onset mental illness. Knowledge heals, informs and defeats ignorance and stigma every time.

David Rosenberg, Professor, Psychiatry and Neuroscience, Wayne State University

This article was originally published on The Conversation. Read the original article.

Why you need to get involved in the geoengineering debate – now

Atakan Yildiz/Shutterstock.com

Rob Bellamy, University of Oxford

The prospect of engineering the world’s climate system to tackle global warming is becoming more and more likely. This may seem like a crazy idea but I, and over 250 other scientists, policy makers and stakeholders from around the globe recently descended on Berlin to debate the promises and perils of geoengineering.

There are many touted methods of engineering the climate. Early, outlandish ideas included installing a “space sunshade”: a massive mirror orbiting the Earth to reflect sunlight. The ideas most in discussion now may not seem much more realistic – spraying particles into the stratosphere to reflect sunlight, or fertilising the oceans with iron to encourage algal growth and carbon dioxide sequestration through photosynthesis.

But the prospect of geoengineering has become a lot more real since the Paris Agreement. The 2015 Paris Agreement set out near universal, legally binding commitments to keep the increase in global temperature to well below 2°C above pre-industrial levels and even to aim for limiting the rise to 1.5°C. The Intergovernmental Panel on Climate Change (IPCC) has concluded that meeting these targets is possible – but nearly all of their scenarios rely on the extensive deployment of some form of geoengineering by the end of the century.

Some geoengineers take their inspiration from supervolcanic eruptions, which can lower global temperatures.
patobarahona/Shutterstock.com

How to engineer the climate

Geoengineering comes in two distinct flavours. The first is greenhouse gas removal: those ideas that would seek to remove and store carbon dioxide and other greenhouse gases from the atmosphere. The second is solar radiation management: the ideas that would seek to reflect a level of sunlight away from the Earth.

Solar radiation management is the more controversial of the two, doing nothing to address the root cause of climate change – greenhouse gas emissions – and raising a whole load of concerns about undesirable side effects, such as changes to regional weather patterns.

And then there is the so-called “termination problem”. If we ever stopped engineering the climate in this way then global temperature would abruptly bounce back to where it would have been without it. And if we had not been reducing or removing emissions at the same time, this could be a very sharp and sudden rise indeed.

Most climate models that see the ambitions of the Paris Agreement achieved assume the use of greenhouse gas removal, particularly bio-energy coupled with carbon capture and storage technology. But, as the recent conference revealed, although research in the field is steadily gaining ground, there is also a dangerous gap between its current state of the art and the achievability of the Paris Agreement on climate change.

The Paris Agreement – and its implicit dependence on greenhouse gas removal – has undoubtedly been one of the most significant developments to impact on the field of geoengineering since the last conference of its kind back in 2014. This shifted the emphasis of the conference away from the more controversial and attention-grabbing solar radiation management and towards the more mundane but policy relevant greenhouse gas removal.

Geoengineering measures.
IASS

Controversial experiments

But there were moments when sunlight reflecting methods still stole the show. A centrepiece of the conference was the solar radiation management experiments campfire, where David Keith and his colleagues from the Harvard University Solar Geoengineering Research Programme laid out their experimental plans. They aim to lift an instrument package to a height of 20km using a high-altitude balloon and release a small amount of reflective particles into the atmosphere.

This would not be the first geoengineering experiment. Scientists, engineers and entrepreneurs have already begun experimenting with various ideas, several of which have attracted a great degree of public interest and controversy. A particularly notable case was one UK project, in which plans to release a small amount of water into the atmosphere at a height of 1km using a pipe tethered to a balloon were cancelled in 2013 owing to concerns over intellectual property.

Such experiments will be essential if geoengineering ideas are to ever become technically viable contributors to achieving the goals of the Paris Agreement. But it is the governance of experiments, not their technical credentials, that has always been and still remains the most contentious area of the geoengineering debate.

Critics warned that the Harvard experiment could be the first step on a “slippery slope” towards an undesirable deployment and therefore must be restrained. But advocates argued that the technology needs to be developed before we can know what it is that we are trying to govern.

The challenge for governance is not to back either one of these extremes, but rather to navigate a responsible path between them.

How to govern?

The key to defining a responsible way to govern geoengineering experiments is accounting for public interests and concerns. Would-be geoengineering experimenters, including those at Harvard, routinely try to account for these concerns by appealing to their experiments being small in scale and limited in extent. But, as I argued at the conference, in public discussions the meaning of the scale and extent of geoengineering experiments has been subjective and always qualified by other concerns.

My colleagues and I have found that the public have at least four principal concerns about geoengineering experiments: their level of containment; uncertainty around what the outcomes would be; the reversibility of any impacts, and the intent behind them. A small scale experiment unfolding indoors might therefore be deemed unacceptable if it raised concerns about private interests, for example. On the other hand, a large scale experiment conducted outdoors could be deemed acceptable if it did not release materials into the open environment.

Under certain conditions the four dimensions could be aligned. The challenge for governance is to account for these – and likely other – dimensions of perceived controllability. This means that public involvement in the design of governance itself needs to be front and centre in the development of geoengineering experiments.

A whole range of two-way dialogue methods are available – focus groups, citizens juries, deliberative workshops and many others. And to those outside of formal involvement in such processes – read about geoengineering, talk about geoengineering. We need to start a society-wide conversation on how to govern such controversial technologies.

Public interests and concerns need to be drawn out well in advance of an experiment and the results used to meaningfully shape how we govern it. This will not only make the experiment more legitimate, but also make it substantively better.

Make no mistake, experiments will be needed if we are to learn the worth of geoengineering ideas. But they must be done with public values at their core.

Rob Bellamy, James Martin Research Fellow in the Institute for Science, Innovation and Society, University of Oxford

This article was originally published on The Conversation. Read the original article.

Why is it so hard for the wrongfully jailed to get justice?

Linda Asquith, Leeds Beckett University

Imagine for a moment you are wrongfully convicted of a crime. You get sent to prison, where you start to serve out your sentence – every minute of every day knowing you are innocent. Then the unthinkable happens and you are released. You are elated – this is the moment you’ve been waiting for.

But those feelings of elation and happiness quickly turn to fear and despair as you realise you have nowhere to go. Your old life as you knew it is gone, you have no way of supporting yourself, your relationships have broken down and you have nowhere to turn to for support.

Sadly, this is the reality many exonerees face when they are trying to put their lives back together. Many of these people – who have in some cases spent years behind bars – find upon release that their problems are only exacerbated. Wrongfully wrenched from their families, homes and communities, they struggle to reintegrate into society when they return.

And things seem to be made worse because unlike prisoners who have access to support to help them resettle when they are released from prison, those who suffer a miscarriage of justice do not get this.

“Rightfully convicted” individuals are provided with a plan for release from prison – often starting months in advance. This involves a range of activities, all of which are aimed at helping the person to resettle back into the community. But exonerees have none of these preparations – and often receive very little notice of their release.

Victor Nealon, for example, served 16 years in prison after he was wrongly convicted of rape. He received three hours’ notice of his release, and ended up in a bed and breakfast on his first night as a free man – he had nowhere else to go.

An unfamiliar world

The wrongfully convicted don’t receive any preparation for their release because of the way the prison system works. Prisoners have to show they are “tackling their offending behaviour” to gain parole. But if you haven’t committed the crime in the first place, this is not possible. The end result is that a person may spend longer in prison than if they had committed the offence and admitted it.

Upon release, the wrongfully convicted are thrust into a world they are unfamiliar with – and they have zero support or guidance. It’s common for exonerees to develop PTSD as a result of their wrongful conviction, alongside other mental and physical health problems requiring significant support.

This in part happens because as soon as the conviction is quashed, these people are no one’s responsibility. They are no longer a prisoner, or an ex-offender. There is no standard programme of support which is triggered at the point of release. And while probation would be well placed to support the wrongfully convicted, they cannot as they are not ex-offenders – ex-prisoners, yes, but not ex-offenders.

Say I’m innocent

There are only two specific organisations that provide support to exonerees. They are the Citizens Advice Bureau (CAB) based at the Royal Courts of Justice, and the Miscarriages of Justice Organisation (MOJO). This was founded by Paddy Hill – one of the six men wrongly convicted of the 1974 Birmingham pub bombings. He set it up in an attempt to provide the support to others that he was not given when released in 1991.

But both services are restricted by funding and staffing limitations, and while both organisations do superb work against a backdrop of austerity measures and extremely limited resources, both are at best a piecemeal response to what is, in reality, a government responsibility.

A recent BBC documentary called Fallout highlights these issues. The director of the documentary, Mark Mcloughlin, has launched the “Say I’m Innocent” campaign and is now fighting for all the services that are available to guilty prisoners on release to be made available to exonerees. The campaign is also calling for a public announcement of a person’s innocence upon their release, as well as other measures, including transition centres in both the UK and Ireland to give exonerees the time and help they need to reintegrate into society.

This is important because the key issue here is responsibility. The state assumed responsibility for these individuals when they were wrongfully convicted. It is therefore only right that the state continues to take responsibility for them once exonerated.

Linda Asquith, Senior Lecturer in Criminology, Leeds Beckett University

This article was originally published on The Conversation. Read the original article.

Whales and dolphins have rich cultures – and could hold clues to what makes humans so advanced


A pod of spinner dolphins in the Red Sea.
Alexander Vasenin/wikimedia, CC BY-SA

Susanne Shultz, University of Manchester

Humans are like no other species. We have constructed stratified states, colonised nearly every habitat on Earth and we’re now looking to move to other planets. In fact, we are so advanced that some of our innovations – such as fossil fuel technologies, intensive agriculture and weapons of mass destruction – may ultimately lead to our downfall.

Even our closest relatives, the primates, lack traits such as developed language, cumulative culture, music, symbolism and religion. Yet scientists still haven’t come to a consensus on why, when and how humans evolved these traits. But, luckily, there are non-human animals that have evolved societies and culture to some extent. My latest study, published in Nature Ecology & Evolution, investigates what cetaceans (whales and dolphins) can teach us about human evolution.

The reason it is so difficult to trace the origins of human traits is that social behaviour does not fossilise. It is therefore very hard to understand when and why cultural behaviour first arose in the human lineage. Material culture such as art, burial items, technologically sophisticated weapons and pottery is very rare in the archaeological record.

Previous research in primates has shown that a large primate brain is associated with larger social groups, cultural and behavioural richness, and learning ability. A larger brain is also tied to energy-rich diets, long life spans, extended juvenile periods and large bodies. But researchers trying to uncover whether each of these different traits are causes or consequences of large brains find themselves at odds with each other – often arguing at cross purposes.

One prevailing explanation is the social brain hypothesis, which argues that our minds and consequently our brains have evolved to solve the problems associated with living in an information rich, challenging and dynamic social environment. This comes with challenges such as competing for and allocating food and resources, coordinating behaviour, resolving conflicts and using information and innovations generated by others in the group.

Primates with large brains tend to be highly social animals.
Peter van der Sluijs/wikipedia, CC BY-SA

However, despite the abundance of evidence for a link between brain size and social skills, the arguments rumble on about the role of social living in cognitive evolution. Alternative theories suggest that primate brains have evolved in response to the complexity of forest environments – either in terms of searching for fruit or visually navigating a three dimensional world.

Under the sea

But it’s not just primates that live in rich social worlds. Insects, birds, elephants, horses and cetaceans do, too.

The latter are especially interesting: not only do we know that they do interesting things, some live in multi-generational societies and they have the largest brains in the animal kingdom. In addition, they do not eat fruit, nor do they live in forests. For that reason, we decided to evaluate the evidence for the social or cultural brain in cetaceans.

Another advantage with cetaceans is that research groups around the world have spent decades documenting and uncovering their social worlds. These include signature whistles, which appear to identify individual animals, cooperative hunting, complex songs and vocalisations, social play and social learning. We compiled all this information into a database and evaluated whether a species’ cultural richness is associated with its brain size and the kind of society they live in.

We found that species with larger brains live in more structured societies and have more cultural and learned behaviours. The group of species with the largest relative brain size are the large, whale-like dolphins. These include the false killer whale and pilot whale.

To illustrate the two ends of the spectrum, killer whales have cultural food preferences – some populations prefer fish and others prefer seals. They also hunt cooperatively and have matriarchs leading the group. Sperm whales have actual dialects, which means that different populations have distinct vocalisations. In contrast, some of the large baleen whales, which have smaller brains, eat krill rather than fish or other mammals, live fairly solitary lives and only come together for breeding seasons and at rich food sources.

The lives of beaked whales are still a big mystery.
Ted Cheeseman/wikipedia, CC BY-SA

We still have much to learn about these amazing creatures. Some of the species were not included in our analysis because we know so little about them. For example, there is a whole group of beaked whales with very large brains. However, because they dive and forage in deep water, sightings are rare and we know almost nothing about their behaviour and social relationships.

Nevertheless, this study certainly supports the idea that the richness of a species’ social world is predicted by their brain size. The fact that we’ve found it in an independent group so different from primates makes it all the more important.

Susanne Shultz, University Research Fellow, University of Manchester

This article was originally published on The Conversation. Read the original article.

‘You all look the same’: non-Muslim men targeted in Islamophobic hate crime because of their appearance


Men with beards have been called terrorists.
via shutterstock.com

Imran Awan, Birmingham City University and Irene Zempi, Nottingham Trent University

There has been a 29% rise in recorded hate crimes in the UK in the past year according to new figures released by the Home Office, which also showed a spike in offences following the EU referendum.

The consequences of hate crime are widespread. While Muslims in Britain are increasingly subject to Islamophobia, some non-Muslims are also being targeted because they are perceived to be Muslim.

In new research presented to the All-Party Parliamentary Group on British Muslims we looked at the experiences of non-Muslim men who reported being the target of Islamophobic hate crime.

We interviewed 20 non-Muslim men of different ages, race and religion, based in the UK. Our group included Sikhs, Christians, Hindus and atheists. Although their experiences were all different, they believed that their skin colour, their beard or turban meant that they were perceived to be Muslim – and targeted for it. We decided to only interview men in this study because we understand from our community work that men are more likely than women to be victims of Islamophobia due to mistaken identity.

Our findings backed up our previous research showing that a spike in hate crime is often triggered by a particular event. The men we interviewed, whose names we have anonymised here to protect their identities, described how they felt “vulnerable” and “isolated” after the EU referendum. Vinesh, a 32-year old, Indian British Hindu, told us:

People have been calling me names on Twitter like ‘You’re a p**i c**t’. I have also been threatened on Facebook like ‘Today is the day we get rid of the likes of you!’ I feared for my safety when I read this.

Some of the men noted how terrorist attacks including those in Manchester and London also triggered more Islamophobia. Others also noted how the Trump administration and its stance towards Muslims had promoted anti-Muslim sentiments globally.


In some cases, hate crimes are targeted at people’s homes or workplaces, with property damaged with Islamophobic graffiti because the perpetrators believe the victims are Muslim. In a recent case in Liverpool, “Allar Akbar” (sic) was painted on a Hindu family’s future home.

One 37-year-old man, called Paul, a white British atheist who is perceived to be a convert to Islam due to his beard, told us how he had been targeted:

I live on a rough estate. I had dog excrement shoved through the mailbox. They also threw paint over my door.

Nobody stepped in to help

Some of those we interviewed felt that their beard was a key aspect of why they were being targeted for looking Muslim. One 19-year-old, called Cameron, who is black British, said:

It’s happened to me ever since I grew a beard. I’m not a Muslim but people stare at me because they think I am.

Many of those we interviewed reported that they suffered anxiety, depression, physical illness, loss of income and employment as a result of being targeted. Raj, a 39-year-old British Indian, told us:

We live in fear every day. We face abuse and intimidation daily but we should not have to endure this abuse.

Such feelings of insecurity and isolation were exacerbated by the fact that these hate incidents usually took place in public places in front of passers-by who didn’t intervene to help. Mark, who is white and Christian and perceived to be Muslim due to his beard and Mediterranean complexion, said:

I was verbally abused by another passenger on the bus who branded me an ‘ISIS terrorist’ while passengers looked on without intervening. In another incident, I had ‘Brexit’ yelled in my face … I feel very lonely. No one has come to my assistance or even consoled me.

Identity questioned

The men we interviewed constantly felt the need to prove their identity, and differentiate themselves from Muslims in an attempt to prevent future victimisation. Many described it as emotionally draining. Samuel, a 58-year-old black British Christian, said:

My identity is always questioned because I look like a Muslim. It does make me feel low but I got used to it. As a black man with a beard you always get associated as being a Muslim terrorist.

The men we interviewed said they wanted much more public awareness of hate crimes and better police recording of these kinds of offences. They also called for training for bystanders and for people, such as teachers, who may need to deal with more of these situations. And they thought that an app through which all types of hate crime could be reported in real time could offer support for victims.

The rise in Islamophobic hate crime has made many Muslims live in fear. But this kind of hatred is pervasive, and can affect anyone perceived to be Muslim. “You all look the same”, one man was told after explaining that he wasn’t Muslim to somebody who abused him on the train. British society needs to get a better grip on understanding this often “invisible” form of hate crime and what to do about it.

Imran Awan, Associate Professor and Deputy Director of the Centre for Applied Criminology, Birmingham City University and Irene Zempi, Director of the Nottingham Centre for Bias, Prejudice & Hate Crime, Nottingham Trent University

This article was originally published on The Conversation. Read the original article.

PS
Hate crimes against disabled children, however, are also on the rise in Britain. – AS

Why the Indigenous in New Zealand have fared better than those in Canada


Maggie Cywink, of Whitefish River First Nation, holds up a sign behind Canadian Prime Minister Justin Trudeau during a summit in Ottawa in support of missing and murdered Indigenous women.
THE CANADIAN PRESS/Adrian Wyld

Dominic O’Sullivan, Charles Sturt University

Canadian Prime Minister Justin Trudeau’s recent speech to the United Nations brought Canada’s genocidal story to the world stage.

It gave historical context to an enduring colonialism.

The impact is widespread, but neatly summarized in a life expectancy differential between Indigenous and other Canadians of five to 15 years for men and 10 to 15 years for women. In New Zealand, by way of contrast, the differential between Maori and non-Maori is 7.3 years for men and 6.8 years for women.

These figures summarise the story of the power gap between Indigenous peoples and the settler state in both countries. Policy solutions lie beyond the liberal welfare state, beyond egalitarian justice. The origins of the persistent power gaps in each country are different, however, and reflect different understandings of relationships among sovereignty, citizenship, nationhood and self-determination.

The Indigenous peoples of Canada and New Zealand share similar experiences as subjects of British colonialism.

Yet there are profound differences both in the situation for Indigenous peoples in both countries and in the opportunities for resistance they’ve been able to pursue.

Maori have always held a greater share of the New Zealand national population than the Indigenous in Canada. Maori share a common language, and New Zealand’s smaller land mass makes resistance simpler to organize. Yet their place in the body politic is always contested, as state and public strategies of exclusion compete with the claim to self-determination.

‘Lead the lad to be a good farmer’

Historically, the greater Maori capacity for resistance did not dampen colonial resolve. But it did mean that assimilation, rather than genocide, was the intent of government policy. The purpose of New Zealand’s non-residential native schools, for example, was to “lead the lad to be a good farmer and the girl to be a good farmer’s wife,” as the director-general of education put it in 1931.

Following the Canadian Supreme Court ruling in 1997, Canada’s concern for “the reconciliation of the pre-existence of Aboriginal societies with the sovereignty of the Crown” was minimized by the previous Conservative government of Stephen Harper but rhetorically aligned with the “new beginning” that Trudeau spoke of at the United Nations.

Former Canadian Prime Minister Stephen Harper speaks with a Maori elder as he and his wife, Laureen, watch an official Maori powhiri during a visit to New Zealand in 2014.
THE CANADIAN PRESS/Adrian Wyld

Trudeau proposed that the UN Declaration on the Rights of Indigenous Peoples would now be Canada’s policy guide. It would rationalize stronger nation-to-nation, or government-to-government, relationships. Yet at the same time, the 2016 Canadian Human Rights Tribunal’s ruling, handed down a year after Trudeau’s election and urging the government to address discrimination against Indigenous children on reserves, has yet to be heeded.

Trudeau expressed concern at the UN about the self-determination of First Nations in Canada, but he didn’t speak of the individual Indigenous citizen’s self-determination.

He did not speak to the child on the reserve whose poverty is a direct result of lesser access to services that others in Canada take for granted as rights of citizenship.

Similar circumstances do exist in New Zealand, where racism in schooling, health, the labour market and criminal justice compromises citizenship. However, Maori in New Zealand can demand better with reference to the Treaty of Waitangi and the “rights and privileges of British subjects” that it confers.

Maori protected under treaty

That treaty gave the British Crown the right to establish government. In return, Britain offered protection of Maori authority over their own affairs and natural resources.

The promise has not been consistently kept, but the treaty does give moral and increasingly political and jurisprudential authority to the Maori claim to self-determination. The treaty means that Maori do not contest the post-settler presence, but they do contest the Crown exercising a unilateral sovereign authority.

In 2015, the Waitangi Tribunal, which hears claims against the Crown for breaches of the treaty, found that the agreement was not a cession of sovereignty as the Crown had always claimed. While the government does not accept the finding, and it’s not legally binding, it affirms the Maori position on self-determination.

It also affirms a Maori way of thinking about contemporary politics. It raises possibilities for deeper introspection about Maori as nations, and Maori as citizens, in ways that are not apparent in Trudeau’s interpretation of the UN’s Indigenous declaration as it pertains to Canada.

There is an argument that nation-to-nation relationships respect the fact that sovereignty was never ceded. There is perhaps also an argument that Indigenous Canadians claiming the full rights and capacities of state citizenship requires accepting the moral legitimacy of Crown sovereignty. However, if sovereignty means the capacity to function as a self-determining people, one needs to think about the relative and relational character of political authority, and about the sources of political possibility. These exist both inside and outside the state. They exist simultaneously. Neither is a site of political possibility for self-determination that can reach its potential without the support of the other.

Sharing sovereignty does not mean assimilation


If the Crown is sovereign, it exercises that sovereignty only as the people’s agent. The UN declaration is insistent that, if they wish, Indigenous peoples have a right to share that sovereignty.

Sharing sovereignty is not dependent on the Indigenous person’s assimilation into an homogenous body politic, but on the capacity to contribute to society as an Indigenous person.

That could include the ability to receive public education in one’s own language, to be elected to Parliament by one’s own people (as is the situation in New Zealand) or to receive health care in ways that are responsive to cultural preferences.

In these ways, state sovereignty is not an authority that exists over and above Indigenous citizens. Nor does state citizenship exist at the expense of the Indigenous nation. It complements and supports self-determination.

In the only book-length comparative study of Indigenous politics in Canada and New Zealand, Roger Maaka and Augie Fleras imagine Indigenous peoples as “sovereign in their own right yet sharing sovereignty with society at large.”

New Zealand continues to work out the terms of this kind of system.

Canada does not give it substantive thought, and that’s a serious constraint on the goal of self-determination for First Nations.

Dominic O’Sullivan, Associate Professor, Charles Sturt University

This article was originally published on The Conversation. Read the original article.

Why blaming ivory poaching on Boko Haram isn’t helpful


Talking about ivory-funded terrorism overlooks the real sources of income for terror groups.
Author supplied

Mark Moritz, The Ohio State University; Alice B. Kelly Pennaz, University of California, Berkeley; Mouadjamou Ahmadou, and Paul Scholte, The Ohio State University

In 2016, as part of a ceremony in Cameroon’s capital Yaoundé, 2,000 elephant tusks were burned to demonstrate the country’s commitment to fight poaching and illegal trade in wildlife. US Ambassador to the United Nations Samantha Power gave a speech at the event linking poaching to terrorism.

The idea that terror groups like Boko Haram fund their activities through ivory poaching in Africa is a simple and compelling narrative. It has been adopted by governments, NGOs and media alike. But it is undermining wildlife conservation and human rights.

The problem is that such claims hinge on a single document, which uses only one unnamed source to estimate terrorist profits from ivory. The study hasn’t been backed up elsewhere.

Similarly, there is little evidence that terrorist activities are funded by wildlife poaching in Cameroon. We have studied wildlife conservation and pastoralism in the Far North Region of Cameroon over the last two decades, and we have found it highly unlikely that Boko Haram is using ivory to survive financially. The elephant populations in the areas where Boko Haram operates are so low that this would be a faulty business plan, to say the least. Only 246 elephants were counted in Waza Park in 2007.

Talking about ivory-funded terrorism overlooks the real sources of income for these groups. In Cameroon and Nigeria evidence shows that Boko Haram is using profits from cattle raids to support its activities. Boko Haram’s plunder of the countryside leaves cattle herders destitute.

The dangers of militarisation

The wrong focus has implications for conservation and human rights. Linking poachers and terrorists has led to a further militarisation of conservation areas in Africa. More guns and guards have been sent into parks to stop poachers.

The military approach has also led to serious human rights violations. These take the form of shoot-on-sight policies and other violent tactics carried out against local populations. Law enforcement in protected areas is important for controlling poaching and terrorism alike but it is not a perfect solution.

And wildlife conservation can suffer if well-armed but underpaid park guards turn to poaching themselves.

It would be more helpful if properly paid and trained people provided security across the region rather than just in protected areas.

Consequences of the wrong connection

Ignoring the fact that cattle, not ivory, may be fuelling terrorism in places like Cameroon does a disservice to pastoralists. While livestock may compete with wildlife when pastoralists take refuge inside better-protected areas like parks, they do so only because their livelihoods are at risk.

Mistaking the true source of income for terrorist groups also means that their violent activities continue.

Finally, it diverts attention from corrupt conservation and government officials who may be complicit in poaching.

Of course, this is not to say that poaching is not happening. The dramatic declines in elephant populations in Cameroon and elsewhere in Africa indicate otherwise. The question is who is doing the poaching and why.

We challenge governments and organisations interested in wildlife, security and human rights to take a closer look at the evidence. Instead of sharing simple claims about terrorism and poaching, they should consider all the forms of economic support to terrorist organisations.

In Cameroon, this would mean offering better security for pastoralists and their cattle. Protecting cattle does not have the same appeal for Western audiences as protecting elephants. But it could be a way to conserve wildlife, protect human rights and stop funding for terrorism.

Mark Moritz, Associate Professor of Anthropology, The Ohio State University; Alice B. Kelly Pennaz, Researcher, University of California, Berkeley; Mouadjamou Ahmadou, Lecturer in Visual Anthropology, and Paul Scholte, Ecologist leading programs and organizations in conservation, The Ohio State University

This article was originally published on The Conversation. Read the original article.

The IQ test wars: why screening for intelligence is still so controversial

For over a century, IQ tests have been used to measure intelligence. But can it really be measured?
via shutterstock.com

Daphne Martschenko, University of Cambridge

John, 12 years old, is three times as old as his brother. How old will John be when he is twice as old as his brother?

Two families go bowling. While they are bowling, they order a pizza for £12, six sodas for £1.25 each, and two large buckets of popcorn for £10.86. If they are going to split the bill between the families, how much does each family owe?

4, 9, 16, 25, 36, ?, 64. What number is missing from the sequence?

These are questions from online Intelligence Quotient or IQ tests. Tests that purport to measure your intelligence can be verbal, meaning written, or non-verbal, focusing on abstract reasoning independent of reading and writing skills. First created more than a century ago, the tests are still widely used today to measure an individual’s mental agility and ability.

Education systems use IQ tests to help identify children for special education and gifted education programmes and to offer extra support. Researchers across the social and hard sciences study IQ test results, looking at everything from their relation to genetics and socio-economic status to academic achievement and race.

Online IQ “quizzes” purport to be able to tell you whether or not “you have what it takes to be a member of the world’s most prestigious high IQ society”.

If you want to boast about your high IQ, you should have been able to work out the answers to the questions. When John is 16 he’ll be twice as old as his brother. The two families who went bowling each owe £20.61. And 49 is the missing number in the sequence.
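For anyone who wants to check the arithmetic rather than take it on trust, here is a minimal sketch in Python; the figures are taken directly from the three questions above.

```python
# Quick arithmetic check of the three sample questions.

# Q1: John (12) is three times his brother's age, so the brother is 4.
# The 8-year age gap never changes; John is twice his brother's age
# when the brother is 8, i.e. when John is 16.
brother = 12 // 3
gap = 12 - brother
print(2 * gap)  # 16

# Q2: pizza £12, six sodas at £1.25 each, two buckets of popcorn at £10.86,
# with the total split between the two families.
bill = 12 + 6 * 1.25 + 2 * 10.86
print(round(bill / 2, 2))  # 20.61

# Q3: the sequence is the perfect squares 2^2 to 8^2, so the gap is 7^2.
print(7 ** 2)  # 49
```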

Despite the hype, the relevance, usefulness, and legitimacy of the IQ test is still hotly debated among educators, social scientists, and hard scientists. To understand why, it’s important to understand the history underpinning the birth, development, and expansion of the IQ test – a history that includes the use of IQ tests to further marginalise ethnic minorities and poor communities.

Testing times

In the early 1900s, dozens of intelligence tests were developed in Europe and America claiming to offer unbiased ways to measure a person’s cognitive ability. The first of these tests was developed by French psychologist Alfred Binet, who was commissioned by the French government to identify students who would face the most difficulty in school. The resulting 1905 Binet-Simon Scale became the basis for modern IQ testing. Ironically, Binet actually thought that IQ tests were inadequate measures for intelligence, pointing to the test’s inability to properly measure creativity or emotional intelligence.

At its conception, the IQ test provided a relatively quick and simple way to identify and sort individuals based on intelligence – which was and still is highly valued by society. In the US and elsewhere, institutions such as the military and police used IQ tests to screen potential applicants. They also implemented admission requirements based on the results.

The US Army Alpha and Beta Tests screened approximately 1.75m draftees in World War I in an attempt to evaluate the intellectual and emotional temperament of soldiers. Results were used to determine how capable a soldier was of serving in the armed forces and to identify which job classification or leadership position they were most suitable for. Starting in the early 1900s, the US education system also began using IQ tests to identify “gifted and talented” students, as well as those with special needs who required additional educational interventions and different academic environments.

Ironically, some districts in the US have recently employed a maximum IQ score for admission into the police force. The fear was that those who scored too highly would eventually find the work boring and leave – after significant time and resources had been put towards their training.

Alongside the widespread use of IQ tests in the 20th century was the argument that the level of a person’s intelligence was influenced by their biology. Ethnocentrics and eugenicists, who viewed intelligence and other social behaviours as being determined by biology and race, latched onto IQ tests. They held up the apparent gaps these tests illuminated between ethnic minorities and whites or between low- and high-income groups.

Some maintained that these test results provided further evidence that socioeconomic and racial groups were genetically different from each other and that systemic inequalities were partly a byproduct of evolutionary processes.

Going to extremes

The US Army Alpha and Beta test results garnered widespread publicity and were analysed by Carl Brigham, a Princeton University psychologist and early founder of psychometrics, in a 1922 book A Study of American Intelligence. Brigham applied meticulous statistical analyses to demonstrate that American intelligence was declining, claiming that increased immigration and racial integration were to blame. To address the issue, he called for social policies to restrict immigration and prohibit racial mixing.

A few years before, American psychologist and education researcher Lewis Terman had drawn connections between intellectual ability and race. In 1916, he wrote:

High-grade or border-line deficiency … is very, very common among Spanish-Indian and Mexican families of the Southwest and also among Negroes. Their dullness seems to be racial, or at least inherent in the family stocks from which they come … Children of this group should be segregated into separate classes … They cannot master abstractions but they can often be made into efficient workers … from a eugenic point of view they constitute a grave problem because of their unusually prolific breeding.

There has been considerable work from both hard and social scientists refuting arguments such as Brigham’s and Terman’s that racial differences in IQ scores are influenced by biology.

Critiques of such “hereditarian” hypotheses – arguments that genetics can powerfully explain human character traits and even human social and political problems – cite a lack of evidence and weak statistical analyses. This critique continues today, with many researchers resistant to and alarmed by research that is still being conducted on race and IQ.

But in their darkest moments, IQ tests became a powerful way to exclude and control marginalised communities using empirical and scientific language. Supporters of eugenic ideologies in the 1900s used IQ tests to identify “idiots”, “imbeciles”, and the “feebleminded”. These were people, eugenicists argued, who threatened to dilute the White Anglo-Saxon genetic stock of America.

A plaque in Virginia in memory of Carrie Buck, the first person to be sterilised under eugenics laws in the state.
Jukie Bot/flickr.com, CC BY-NC

As a result of such eugenic arguments, many American citizens were later sterilised. In 1927, an infamous ruling by the US Supreme Court legalised forced sterilisation of citizens with developmental disabilities and the “feebleminded,” who were frequently identified by their low IQ scores. The ruling, known as Buck v Bell, resulted in over 65,000 coerced sterilisations of individuals thought to have low IQs. Those in the US who were forcibly sterilised in the aftermath of Buck v Bell were disproportionately poor or of colour.

Compulsory sterilisation in the US on the basis of IQ, criminality, or sexual deviance continued formally until the mid 1970s when organisations like the Southern Poverty Law Center began filing lawsuits on behalf of people who had been sterilised. In 2015, the US Senate voted to compensate living victims of government-sponsored sterilisation programmes.

IQ tests today

Debate over what it means to be “intelligent” and whether or not the IQ test is a robust tool of measurement continues to elicit strong and often opposing reactions today. Some researchers say that intelligence is a concept specific to a particular culture. They maintain that it appears differently depending on the context – in the same way that many cultural behaviours would. For example, burping may be seen as an indicator of enjoyment of a meal or a sign of praise for the host in some cultures and impolite in others.

What may be considered intelligent in one environment, therefore, might not in others. For example, knowledge about medicinal herbs is seen as a form of intelligence in certain communities within Africa, but does not correlate with high performance on traditional Western academic intelligence tests.

According to some researchers, the “cultural specificity” of intelligence makes IQ tests biased towards the environments in which they were developed – namely white, Western society. This makes them potentially problematic in culturally diverse settings. The application of the same test among different communities would fail to recognise the different cultural values that shape what each community values as intelligent behaviour.

Going even further, given the IQ test’s history of being used to further questionable and sometimes racially-motivated beliefs about what different groups of people are capable of, some researchers say such tests cannot objectively and equally measure an individual’s intelligence at all.

Used for good

At the same time, there are ongoing efforts to demonstrate how the IQ test can be used to help those very communities who have been most harmed by them in the past. In 2002, the execution across the US of criminally convicted individuals with intellectual disabilities, who are often assessed using IQ tests, was ruled unconstitutional. This has meant IQ tests have actually prevented individuals from facing “cruel and unusual punishment” in the US court of law.

In education, IQ tests may be a more objective way to identify children who could benefit from special education services. This includes programmes known as “gifted education” for students who have been identified as exceptionally or highly cognitively able. Ethnic minority children and those whose parents have a low income are under-represented in gifted education.

There is ongoing debate about the use of IQ tests in schools.
via shutterstock.com

The way children are chosen for these programmes means that Black and Hispanic students are often overlooked. Some US school districts employ admissions procedures for gifted education programmes that rely on teacher observations and referrals or require a family to sign their child up for an IQ test. But research suggests that teacher perceptions and expectations of a student, which can be preconceived, have an impact upon a child’s IQ scores, academic achievement, and attitudes and behaviour. This means that teachers’ perceptions can also have an impact on the likelihood of a child being referred for gifted or special education.

The universal screening of students for gifted education using IQ tests could help to identify children who otherwise would have gone unnoticed by parents and teachers. Research has found that those school districts which have implemented screening measures for all children using IQ tests have been able to identify more children from historically underrepresented groups to go into gifted education.

IQ tests could also help identify structural inequalities that have affected a child’s development. These could include the impacts of environmental exposure to harmful substances such as lead and arsenic or the effects of malnutrition on brain health. All these have been shown to have a negative impact on an individual’s mental ability and to disproportionately affect low-income and ethnic minority communities.


Identifying these issues could then help those in charge of education and social policy to seek solutions. Specific interventions could be designed to help children who have been affected by these structural inequalities or exposed to harmful substances. In the long run, the effectiveness of these interventions could be monitored by comparing IQ tests administered to the same children before and after an intervention.

Some researchers have tried doing this. One US study in 1995 used IQ tests to look at the effectiveness of a particular type of training for managing Attention Deficit/Hyperactivity Disorder (ADHD), called neurofeedback training. This is a therapeutic process aimed at trying to help a person to self-regulate their brain function. Most commonly used with those who have some sort of identified brain imbalance, it has also been used to treat drug addiction, depression and ADHD. The researchers used IQ tests to find out whether the training was effective in improving the concentration and executive functioning of children with ADHD – and found that it was.

Since its invention, the IQ test has generated strong arguments in support of and against its use. Both sides are focused on the communities that have been negatively impacted in the past by the use of intelligence tests for eugenic purposes.

The use of IQ tests in a range of settings, and the continued disagreement over their validity and even morality, highlights not only the immense value society places on intelligence – but also our desire to understand and measure it.

Daphne Martschenko, PhD Candidate, University of Cambridge

This article was originally published on The Conversation. Read the original article.

The Irony of Susceptibility to Manipulations: Grooming Neurotypicals for Social Ineptitude

Henny Kupferstein, Ph.D.


The stereotypes of autistic people perpetuate a myth that they are socially inept. Yet non-autistics, also known as neurotypicals, portray ineptitudes on the basis of their susceptibility to body language, communication, and perceptual manipulations. How we learn these signals opens the debate for nature versus nurture, and the acquisition of social skill aptitude. Who is more socially equipped? The one who is capable of surrounding himself with pretentious body language, or the one who is mindful of her full spectrum of awareness? A neurotypical who communicates with learned body gestures is currently considered evolved, while the acquisition of those skills is a direct result of the inability to survive otherwise. The autistic who remains authentic in order to adapt to the current environment is potentially most equipped to function in society.

The cycle of life requires attracting a mate, reproduction, and adaptations for exploitation to those who threaten…

View original post 1,590 more words

The murky issue of whether the public supports assisted dying

Katherine Sleeman, King’s College London

The High Court has rejected a judicial review challenging the current law which prohibits assisted dying in the UK. Noel Conway, a 67-year-old retired lecturer who was diagnosed with Motor Neurone Disease in 2014, was fighting for the right to have medical assistance to bring about his death. Commenting after the judgement on October 5, his solicitor indicated that permission will now be sought to take the case to the appeal courts.

Campaigners are often quick to highlight the strength of public support in favour of assisted dying, arguing that the current law is undemocratic. But there are reasons to question the results of polls on this sensitive and emotional issue.

There have been numerous surveys and opinion polls on public attitudes towards assisted dying in recent years. The British Social Attitudes (BSA) Survey, which has asked this question regularly since the 1980s, has shown slowly increasing public support. In 1984, asked: “Suppose a person has a painful incurable disease. Do you think that doctors should be allowed by law to end the patient’s life, if the patient requests it?”, 75% of people surveyed agreed. By 1989, 79% of people agreed with the statement, and in 1994 it had gone up to 82%.

Detail of the question matters

But not surprisingly, the acceptability of assisted dying varies according to the precise context. The 2005 BSA survey asked in more depth about attitudes towards assisted dying and end of life care. While 80% of respondents agreed with the original question, support fell to 45% for assisted dying for illnesses that were incurable and painful but not terminal.

A 2010 ComRes-BBC survey also found that the incurable nature of illness was critical. In this survey, while 74% of respondents supported assisted suicide if an illness was terminal, this fell to 45% if it was not.

Wording counts.
from http://www.shutterstock.com

It may not be surprising that support varies considerably according to the nature of the condition described, but it is important. First, because the neat tick boxes on polls belie the messy reality of determining prognosis for an individual patient. Second, because of the potential for drift in who might be eligible once assisted dying is legalised. This has happened in countries such as Belgium, which became the first country to authorise euthanasia for children in 2014, and more recently in Canada, where within months of the 2016 legalisation of medical assistance in dying, the possibility of extending the law to those with purely psychological suffering was announced.

It’s not just diagnosis or even prognosis that influences opinion. In the US, Gallup surveys carried out since the 1990s have shown that support for assisted dying hinges on the precise terminology used to describe it. In its 2013 poll, 70% of respondents supported “end the patient’s life by some painless means” whereas only 51% supported “assisting the patient to commit suicide”. This gap shrank considerably in 2015 – possibly as a result of the Brittany Maynard case. Maynard, a high-profile advocate of assisted dying who had terminal cancer, moved from California to Oregon to take advantage of the Oregon Death with Dignity law in 2014.

Even so, campaigning organisations for assisted dying tend to avoid the word “suicide”. Language is emotive, but if we want to truly gauge public opinion, we need to understand this issue, not gloss over it.

Information changes minds

Crucially, support for assisted dying is known to drop off when key information is provided. Back in the UK, a ComRes/CARE poll in 2014 showed that 73% of people surveyed agreed with legalisation of a bill which would enable: “Mentally competent adults in the UK who are terminally ill, and who have declared a clear and settled intention to end their own life, to be provided with assistance to commit suicide by self-administering lethal drugs.” But 42% of these same people subsequently changed their mind when some of the empirical arguments against assisted dying were highlighted to them – such as the risk of people feeling pressured to end their lives so as not to be a burden on loved ones.

This is not just a theoretical phenomenon. In 2012, a question over legalising assisted dying was put on the ballot paper in Massachusetts, one of the most liberal US states. Support for legalisation fell in the weeks prior to the vote, as arguments against legalisation were aired and complexities became apparent. In the end, the Massachusetts proposition was defeated by 51% to 49%. Public opinion polls, in the absence of public debate, may gather responses that are reflexive rather than informed.

Polls are powerful tools for democratic change. While opinion polls do show the majority of people support legalisation of assisted dying, the same polls also show that the issue is far from clear. It is murky, and depends on the respondent’s awareness of the complexities of assisted dying, the context of the question asked, and its precise language. If we can conclude anything from these polls, it is not the proportion of people who do or don’t support legislation, but how easily people can change their views.

Katherine Sleeman, NIHR Clinician Scientist and Honorary Consultant in Palliative Medicine, King’s College London

This article was originally published on The Conversation. Read the original article.

When gun control makes a difference: 4 essential reads

Emily Schwartz Greco, The Conversation

Editor’s note: This is a roundup of gun control articles published by scholars from the U.S. and two other countries where deadly mass shootings are far less common.

An underresearched epidemic

Guns are a leading cause of death of Americans of all ages, including children. Yet “while gun violence is a public health problem, it is not studied the same way other public health problems are,” explains Sandro Galea, dean of Boston University’s School of Public Health.

That’s no accident. Congress has prohibited firearm-related research by the Centers for Disease Control and Prevention and the National Institutes of Health since 1996. Galea says:

“Unfortunately, a shortage of data creates space for speculation, conjecture and ill-informed argument that threatens reasoned public discussion and progressive action on the issue.”

The Australian model

The contrast with Australia is especially stark. Just as Congress was barring any research that might strengthen the case for tighter gun regulations, that country established very strict firearm laws in response to the Port Arthur massacre, which killed 35 people in 1996.

To clamp down on guns, the federal government worked with Australia’s states to ban semiautomatic rifles and pump action shotguns, establish a uniform gun registry and buy the now-banned guns from people who had purchased them before owning them became illegal. The country also stopped recognizing self-defense as an acceptable reason for gun ownership and outlawed mail-order gun sales.

These measures worked. Simon Chapman, a public health expert at the University of Sydney, writes:

“When it comes to firearms, Australia is a far safer place today than it was in the 1990s and in previous decades.”

There have been no mass murders since the Port Arthur massacre and the subsequent clampdown on guns, Chapman observes. In contrast, there were 13 of those tragic incidents over the previous 18 years – in which a total of 104 victims died. Other gun deaths have also declined.

Concerns about complacency

After so many years with no mass killings, some Australian scholars fear that their country may be moving in the wrong direction.

Twenty years after doing more than any other nation to strengthen firearm regulation, “many people think we no longer have to worry about gun violence,” say Rebecca Peters of the University of Sydney and Chris Cunneen at the University of New South Wales. They write:

“Such complacency jeopardizes public safety. The pro-gun lobby has succeeded in watering down the laws in several states. Weakening the rules on pistols so that unlicensed shooters can walk into a club and shoot without any waiting period for background checks has resulted in at least one homicide in New South Wales.”

In the UK

Like Australia, the U.K. tightened its gun regulations following its own 1996 tragedy – when a man killed 16 children and their teacher at Dunblane Primary School, near Stirling, Scotland.

Subsequently, the U.K. banned some handguns and bought back many banned weapons. There, however, progress has been less impressive, notes Helen Williamson, a researcher at the University of Brighton. On the one hand, the number of firearms offenses has declined from a high of 24,094 in 2004 to 7,866 in 2015. On the other, criminals are growing more “resourceful in identifying alternative sources of firearms,” she says, adding:

“Although the availability of high-quality firearms may have fallen, the demand for weapons remains. This demand has driven criminals to be resourceful in identifying alternative sources of firearms. There are growing concerns about how they could acquire instructions online on how to build a homemade gun, or even 3D-print a functioning pistol.”

Emily Schwartz Greco, Philanthropy and Nonprofits Editor, The Conversation

This article was originally published on The Conversation. Read the original article.

The science behind… coffee!

Brewing a great cup of coffee depends on chemistry and physics

What can you do to ensure a more perfect brew?
Chris Hendon, CC BY-ND

Christopher H. Hendon, University of Oregon

Coffee is unique among artisanal beverages in that the brewer plays a significant role in its quality at the point of consumption. In contrast, drinkers buy draft beer and wine as finished products; their only consumer-controlled variable is the temperature at which they are drunk.

Why is it that coffee produced by a barista at a cafe always tastes different than the same beans brewed at home?

It may be down to their years of training, but more likely it’s their ability to harness the principles of chemistry and physics. I am a materials chemist by day, and many of the physical considerations I apply to other solids apply here. The variables of temperature, water chemistry, particle size distribution, ratio of water to coffee, time and, perhaps most importantly, the quality of the green coffee all play crucial roles in producing a tasty cup. It’s how we control these variables that allows for that cup to be reproducible.

How strong a cup of joe?

Besides the psychological and environmental contributions to why a barista-prepared cup of coffee tastes so good in the cafe, we need to consider the brew method itself.

Science helps optimize the coffee.
Chris Hendon, CC BY-ND

We humans seem to like drinks that contain coffee constituents (organic acids, Maillard products, esters and heterocycles, to name a few) at 1.2 to 1.5 percent by mass (as in filter coffee), and also favor drinks containing 8 to 10 percent by mass (as in espresso). Concentrations outside of these ranges are challenging to execute. There are a limited number of technologies that achieve 8 to 10 percent concentrations, the espresso machine being the most familiar.
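To put those percentages into concrete terms, here is a rough back-of-the-envelope sketch; the serving sizes (250 g for a filter cup, 30 g for an espresso shot) are illustrative assumptions rather than figures from the article.

```python
# Approximate mass of dissolved coffee compounds per serving,
# using the concentration ranges quoted above.
filter_cup_g = 250.0    # assumed mass of a filter-coffee serving
espresso_shot_g = 30.0  # assumed mass of an espresso shot

filter_low, filter_high = (filter_cup_g * c for c in (0.012, 0.015))
espresso_low, espresso_high = (espresso_shot_g * c for c in (0.08, 0.10))

print(f"Filter: {filter_low:.1f} to {filter_high:.1f} g of dissolved solids")      # 3.0 to 3.8 g
print(f"Espresso: {espresso_low:.1f} to {espresso_high:.1f} g of dissolved solids")  # 2.4 to 3.0 g
```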

There are many ways, though, to achieve a drink containing 1.2 to 1.5 percent coffee. A pour-over, Turkish, Arabic, Aeropress, French press, siphon or batch brew (that is, regular drip) apparatus – each produces coffee that tastes good around these concentrations. These brew methods also boast an advantage over their espresso counterpart: They are cheap. An espresso machine can produce a beverage of this concentration: the Americano, which is just an espresso shot diluted with water to the concentration of filter coffee.

All of these methods result in roughly the same amount of coffee in the cup. So why can they taste so different?

When coffee meets water

There are two families of brewing device within the low-concentration methods – those that fully immerse the coffee in the brew water and those that flow the water through the coffee bed.

From a physical perspective, the major difference is that the temperature of the coffee particulates is higher in the full immersion system. The slowest part of coffee extraction is not the rate at which compounds dissolve from the particulate surface. Rather, it’s the speed at which coffee flavor moves through the solid particle to the water-coffee interface, and this speed increases with temperature.

The Coffee Taster’s Flavor Wheel provides a way to name various tastes within the beverage.
Specialty Coffee Association of America, CC BY-NC-ND

A higher particulate temperature means that more of the tasty compounds trapped within the coffee particulates will be extracted. But higher temperature also lets more of the unwanted compounds dissolve in the water, too. The Specialty Coffee Association presents a flavor wheel to help us talk about these flavors – from green/vegetative or papery/musty through to brown sugar or dried fruit.

Pour-overs and other flow-through systems are more complex. Unlike full immersion methods where time is controlled, flow-through brew times depend on the grind size since the grounds control the flow rate.

The water-to-coffee ratio matters, too, in the brew time. Simply grinding more finely to increase extraction invariably changes the brew time, as the water seeps more slowly through finer grounds. One can increase the water-to-coffee ratio by using less coffee, but as the mass of coffee is reduced, the brew time also decreases. Optimisation of filter coffee brewing is hence multidimensional and trickier than for full immersion methods.

What do they know that we don’t?
Redd Angelo on Unsplash, CC BY

Other variables to try to control

Even if you can optimize your brew method and apparatus to precisely mimic your favorite barista, there is still a near-certain chance that your home brew will taste different from the cafe’s. There are three subtleties that have tremendous impact on the coffee quality: water chemistry, particle size distribution produced by the grinder and coffee freshness.

First, water chemistry: Given coffee is an acidic beverage, the acidity of your brew water can have a big effect. Brew water containing low levels of both calcium ions and bicarbonate (HCO₃⁻) – that is, soft water – will result in a highly acidic cup, sometimes described as sour. Brew water containing high levels of HCO₃⁻ – typically, hard water – will produce a chalky cup, as the bicarbonate has neutralized most of the flavorsome acids in the coffee.

Ideally we want to brew coffee with water containing chemistry somewhere in the middle. But there’s a good chance you don’t know the bicarbonate concentration in your own tap water, and a small change makes a big difference. To taste the impact, try brewing coffee with Evian – one of the highest bicarbonate concentration bottled waters, at 360 mg/L.
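If you do track down a water report, converting its bicarbonate figure into alkalinity expressed “as CaCO₃” – the unit many water reports and brewing-water guidelines use – is a one-line calculation based on the equivalent weights of the two species. A minimal sketch, with Evian’s quoted 360 mg/L as the example input:

```python
# Convert a bicarbonate concentration (mg/L of HCO3-) into total alkalinity
# expressed "as CaCO3", using equivalent weights of ~50 g/eq for CaCO3
# and ~61 g/eq for HCO3-.
def alkalinity_as_caco3(hco3_mg_per_l: float) -> float:
    return hco3_mg_per_l * 50.04 / 61.02

print(round(alkalinity_as_caco3(360)))  # Evian: roughly 295 mg/L as CaCO3
```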

The particle size distribution your grinder produces is critical, too.

Every coffee enthusiast will rightly tell you that blade grinders are disfavored because they produce a seemingly random particle size distribution; there can be both powder and essentially whole coffee beans coexisting. The alternative, a burr grinder, features two pieces of metal with teeth that cut the coffee into progressively smaller pieces. They allow ground particulates through an aperture only once they are small enough.

Looking for a more even grind.
Aaron Itzerott on Unsplash, CC BY

There is contention over how to optimize grind settings when using a burr grinder, though. One school of thought supports grinding the coffee as fine as possible to maximize the surface area, which lets you extract the most delicious flavors in higher concentrations. The rival school advocates grinding as coarse as possible to minimize the production of fine particles that impart negative flavors. Perhaps the most useful advice here is to determine what you like best based on your taste preference.

Finally, the freshness of the coffee itself is crucial. Roasted coffee contains a significant amount of CO₂ and other volatiles trapped within the solid coffee matrix: Over time these gaseous organic molecules will escape the bean. Fewer volatiles means a less flavorful cup of coffee. Most cafes will not serve coffee more than four weeks out from the roast date, emphasizing the importance of using freshly roasted beans.

One can mitigate the rate of staling by cooling the coffee (as described by the Arrhenius equation). While you shouldn’t chill your coffee in an open vessel (unless you want fish finger brews), storing coffee in an airtight container in the freezer will significantly prolong freshness.
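The Arrhenius relation gives a feel for how much freezing slows staling. The activation energy in the sketch below is a purely illustrative assumption (real values differ between the many reactions involved), so treat the output as an order-of-magnitude estimate rather than a measurement.

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def rate_ratio(ea_j_per_mol: float, t_warm_k: float, t_cold_k: float) -> float:
    """Arrhenius ratio k(T_warm) / k(T_cold) for a single activation energy."""
    return math.exp(-ea_j_per_mol / R * (1.0 / t_warm_k - 1.0 / t_cold_k))

ea = 50_000.0                 # assumed activation energy, J/mol (illustrative only)
room, freezer = 293.0, 255.0  # roughly 20 C and -18 C, in kelvin

print(round(rate_ratio(ea, room, freezer)))  # staling roughly 20x faster at room temperature
```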

So don’t feel bad that your carefully brewed cup of coffee at home never stacks up to what you buy at the café. There are a lot of variables – scientific and otherwise – that must be wrangled to produce a single superlative cup. Take comfort that most of these variables are not optimized by some mathematical algorithm, but rather by somebody’s tongue. What’s most important is that your coffee tastes good to you… brew after brew.

Christopher H. Hendon, Assistant Professor of Computational Materials and Chemistry, University of Oregon

This article was originally published on The Conversation. Read the original article.

Northern Ireland abortion refugees: Supreme Court — UK Human Rights Blog

R (o.t.a A and B) v. Department of Health [2017] UKSC 41, 14 June 2017 – judgment here. Sometimes The Law comes to the rescue. And by this I do not mean constitutional law versus populism or the rule of law versus raw-knuckled fighting. It just happens that, occasionally, litigation drawn from ordinary life encapsulates more political […]

via Northern Ireland abortion refugees: Supreme Court — UK Human Rights Blog

Foreign criminals’ deportation scheme ruled unlawful — UK Human Rights Blog

R (Kiarie) v Secretary of State for the Home Department; R (Byndloss) v Secretary of State for the Home Department [2017] UKSC 42 In a nutshell The Government’s flagship scheme to deport foreign criminals first and hear their appeals later was ruled by the Supreme Court to be incompatible with the appellants’ right to respect for […]

via Foreign criminals’ deportation scheme ruled unlawful — UK Human Rights Blog

Human rights for just a few, that’s discrimination. Human rights apply to all human beings.

https://youtube.com/watch?v=nAFO4HQ6nI8%3Fversion%3D3

It has just been the 6th anniversary of an important human rights case, that of Mark and Steven Neary. Steven, who is autistic, was detained in local authority care for over a year before his dad used the Human Rights Act to get him home. RightsInfo has made a powerful short film to mark the […]

via A powerful new human rights film  — UK Human Rights Blog