Something you may want to watch

It may also shatter your illusions, however, if you still believe that police are the good ones, the ones (that you pay for through your council tax, in Britain) to help keep you safe and secure and protect your basic rights.

This morning, this caught my eye:

(Scottish) Police Pause Rollout Of Device That Hacks Into Phones After Fears ‘It Is Unlawful’

I suspect that police in England and Wales are already using these “kiosks” that hack into people’s phones and laptops, overriding passwords.

I am sure it can be great fun for some officers to play with these “kiosks”. You can almost hear them talk. “I knew it! She’s a lesbian!” and “Does he really think he stands a chance with that woman?” and “Oh my god! Trying to lose weight? Fat chance!”

Yep, very useful. </sarcasm>

We need an alternative to the police, because going to or contacting the police has become one of the worst things to do in almost any situation. (Unless your insurance company wants a copy of a report after a burglary or theft; but leave it at that, and do not ask the police to do anything other than give you a copy of the report.) How it got to this point is immaterial. It’s what we have in the here and now.

As Michael Doherty (a former aircraft engineer who made the mistake of reporting something to police and expecting police to follow up on it) says in the video below, you do have the right to investigate on your own, to try to detect and stop crime on your own. If your investigation is successful, you can also prosecute on your own. (I am talking about England and Wales.)

But before you choose this path, as I have stated several times before, look into the Protection from Harassment Act 1997, because police and others can use this Act against you, assuming that you are unaware of section 1(3)(a), which most people probably are. That means that, before you know it, you can have confessed to a crime that you didn’t actually commit. To prevent this, you need to know what the law says.

I repeat and highlight:

(3) Subsection (1) [F4 or (1A)] does not apply to a course of conduct if the person who pursued it shows—

(a) that it was pursued for the purpose of preventing or detecting crime,

(b) that it was pursued under any enactment or rule of law or to comply with any condition or requirement imposed by any person under any enactment, or

(c) that in the particular circumstances the pursuit of the course of conduct was reasonable.

(Whether it says “and” or “or” makes a difference. Because these subsections are joined by “or”, each of the conditions applies on its own; they do not all have to be met at once.)

The video below dates back to 2015, is rather academic and, particularly in the beginning, lacks a logical thread, in my opinion, but it does contain useful information.

You may want to read this as well:
The Human Rights Act Can Transform Lives Without Going To Court

(Also, if you want to protect yourself from police with a camera, you need to have one that does not have wifi or bluetooth.)

It is possible to resolve many situations, or at least make them somewhat liveable, without going to the police, and much more successfully and/or peacefully. If you try this after you’ve been to the police, however, police officers are likely to hold it against you. (This is mean, because most people who contacted the police in the past decade will have been told that the police wouldn’t investigate and would do nothing with the information, owing to a lack of resources, and/or will have been referred to their GP and the local civic offices.)

Unfortunately, most of us learn these things the hard way – and you can’t undo having contacted the police.

The war on women

I am in the middle of reading “The War on Women” by Sue Lloyd-Roberts. The book was finalized without her input after she suddenly passed away in 2015. I wish that I could still contact her.

Because then I would talk with her about her own bias. She sounds convinced that there is a division between the “liberal West and the traditional East”, and this made her slightly blind to what went on in, say, her own country, which has been assessed by the UN as perhaps the most openly misogynistic country in the world. That can probably be explained by the fact that she had been living on the Spanish island of Mallorca since 2003.

I can’t allow myself to be blind to the fact that people in the West who condemn what goes on in other countries but are blind to what goes on in their own culture may be helping their causes less than they think.


Very severe animal cruelty at Mahard Egg Farms in the US

Last evening, I saw a video and photos that I found shocking. They concern severe animal cruelty occurring near Sulphur in Oklahoma. The farm is part of Mahard Egg Farms, which appears to be headquartered in Texas. I searched LinkedIn and found nine accounts associated with the company, including that of its CFO, Kaitlin Mahard.

I believe that severe animal cruelty can be considered a “violent crime”, which would mean that LinkedIn should remove the accounts associated with Mahard Egg Farms. The LinkedIn Professional Community Policies state that “those who engage in violent crimes are not welcome and not permitted on the Services”.

In 2011, Mahard Egg Farm, Inc., indeed a Texas corporation, was ordered to pay a $1.9 million penalty to settle claims that the company had violated the Clean Water Act (CWA) at its egg production facilities in Texas and Oklahoma, according to the EPA:
https://www.epa.gov/enforcement/mahard-egg-farm-inc-clean-water-act-settlement

The latter apparently resulted in this:
https://www.epa.gov/sites/production/files/2013-09/documents/mahardegg-cd.pdf

That document includes the following:

C. MORTALITY MANAGEMENT
18. Defendant shall comply with the Mortality Management Requirements in Appendix D at the Vernon-Chillicothe Facility, the Springhill Facility, the Prosper Facility, the Boogie Hill Facility, the Nebo Ranch, and the Ravia Facility, unless such facility is not growing poultry.

Appendix D stated:

APPENDIX D:
MORTALITY MANAGEMENT

I. Texas
65. No later than the Effective Date of this Decree, Mahard shall cease any transfer of carcasses between Facilities unless a composting plan is in place that is consistent with 30 T.A.C. 332, Subchapter B, and has been approved by EPA and TCEQ.

66. Mahard shall ensure that all carcass disposal at the Vernon-Chillicothe, Prosper, and Springhill Facilities is conducted in accordance with TCEQ Regulatory Guidance, RG-326, Handling and Disposal of Carcasses from Poultry Operations (August 2009) and in accordance with 30 T.A.C. § 335.25. Mahard shall collect all carcasses within 24 hours of death and properly dispose of them within three (3) Days of death. Animals must not be disposed of in any liquid manure or process wastewater system. Disposal of diseased animals shall be conducted in accordance with Tex. Agric. Code § 161.004.

II. Oklahoma
67. Mahard shall comply with the terms and conditions in Mahard’s 4/29/09 Carcass Disposal Plan, as amended and supplemented by the letter from ODAFF, dated May 7, 2009, to Mahard (both attached here as the Appendix D Supplement).

The Kroger chain has meanwhile dropped Mahard’s eggs, and I’ve reached out on LinkedIn to its spokeswoman Kristal Howard to thank Kroger and to ask her to ensure that Kroger will never be associated with such severe animal cruelty again.

Kroger’s 2018 Sustainability Report includes an animal welfare policy, which states:

“Kroger has a long-standing commitment to responsible business practices, including the humane treatment of animals,” Kroger says in its policy. “We require our suppliers to adopt industry-accepted animal welfare standards that we endorse, and we monitor our suppliers for compliance with these standards. We align with the Food Marketing Institute’s industry-adopted and industry-aligned animal welfare standards for the following animal proteins: beef, pork, chicken, turkey and eggs. For nearly a decade, Kroger has convened our own independent panel of animal science experts to make recommendations on how we can work with the industry to improve animal welfare.”

I’ve also contacted the EPA.

Why do we often feel guilty?

Because we have been taught that something – whatever it is – is bad. If you let go of the idea that something is good or bad, you may feel a weight lift from your shoulders.

If you simply allow and observe the thing that is supposed to be bad, you may find that it is interesting – hence also good, right or even fun – all by itself.

Feeling depressed is bad, for instance. It is even considered a mental health problem these days. An illness. Feeling cheerful is good. Acting cheerful when you’re feeling depressed is good. Is it?

It can be, but there are times, after the death of a loved one for example, when we really have to allow feelings that are supposedly bad.

(Is mourning someone’s death truly “a mental health issue”? Or could it be a natural part of life?)

It is our resistance to “bad” feelings that often becomes the greater problem. As soon as you allow certain feelings and stop considering them bad, they can lose their power over you quickly.

And even moping can be a heck of a lot of fun too.

What always comes to my mind when I say something like that is an image from the original Swedish Pippi Longstocking TV series.

Pippi is in a foul mood and goes around angrily stamping her feet, probably in puddles of water, powerfully indulging in her foul mood, full of energy. Acceptance. A foul mood is just a foul mood, not the end of the world.

Puddles of water? So it must have rained. Rain! Rain is bad.

I too have my personal good/bad hangups. Ideas that make me feel vulnerable or guilty or inadequate or unhappy. What are yours?


Update on the Brexpat case

See this post


The illegality of British government actions

A pattern is starting to emerge. The British government does not display a lot of respect for the law.

At least one judge has commented that the government is wasting taxpayers’ money as well as judicial capacity.

The pattern shows unequivocally that the British government goes after the most vulnerable in British society and seeks to protect the wealthiest in society.

Apparently, the Lord Chancellor has the task of ensuring the government’s compliance with the rule of law. As of the beginning of this year, that is David Gauke, appointed by HM the Queen on the advice of the Prime Minister. So the Prime Minister recommends who gets to monitor the legality of her own government’s actions? Hmm.

His predecessors were Chris Grayling (2012-2015), Michael Gove (2015-2016), Elizabeth Truss (2016-2017) and David Lidington (2017-2018). All Conservatives.

We need a global guideline for eugenics – urgently

People are currently focusing on Trump and his silly comments, but perhaps they should be focusing on Britain.

A few days ago, British newspaper The Guardian reported on a eugenics meeting that had allegedly been convened in secret, involving someone who has previously advocated child rape. This meeting is supposed to have taken place at University College London, and white supremacists were supposedly present.

Tony Blair on social engineering

Interview with Mark Easton, BBC. Date unknown, but near the end of Tony Blair’s premiership.

Keep in mind that “hooliganism” and “anti-social behaviour” are often labels used to indicate (and reject) people from a lower socioeconomic class in Britain and that this “hooliganism” for example gets expressed in graffiti.

Of course, causing (increased) financial hardship for parents by taking any benefits away is most definitely not “in the best interest of the child”.

Tony Blair did consider graffiti “anti-social behaviour”. During a photo-op as part of his crusade, he hosed down graffiti and said that older generations of his family would have abhorred such behaviour. It then turned out that his own grandmother had been a “commie” graffiti vandal.

There probably is a work by Banksy somewhere in response to all of this.

Tony Blair also criminalized a lot of behavior that is essentially merely human behavior. That too was in nobody’s best interest and probably did nothing toward decreasing inequality in Britain.

It did not enable (more) people to flourish.

Why bystanders rarely speak up when they witness sexual harassment

If you see something, say something.
Photographee.eu

George B. Cunningham, Texas A&M University

The uproar over allegations that Hollywood producer Harvey Weinstein sexually abused and harassed dozens of the women he worked with is inspiring countless women (and some men) to share their own personal sexual harassment and assault stories.

With these issues trending on social media with the hashtag #MeToo, it’s getting harder to ignore how common they are on the job and in other settings.

I have studied sexual harassment and ways to prevent it as a diversity and inclusion researcher. My research on how people often fail to speak out when they witness these incidents might help explain why Weinstein could reportedly keep his despicable behavior an open secret for decades.


Witnessing sexual harassment

Of course, Weinstein’s alleged wrongdoings went well beyond sexual harassment, which University of British Columbia gender scholar Jennifer Berdahl defines as “behavior that derogates, demeans or humiliates an individual based on that individual’s sex.”

Some of the women speaking out in the U.S. and abroad are accusing him of rape – a crime – during encounters he says were always consensual.

But sexual harassment is such a chronic workplace problem that it accounts for a third of the 90,000 charges filed with the federal government’s Equal Employment Opportunity Commission (EEOC) in 2015. Since only one in four victims report it, however, the EEOC and other experts say the actual number of incidents is far higher than the official number of complaints would suggest.

The usual silence leaves most perpetrators of this toxic behavior free to prey on their co-workers and subordinates. If sexual harassment is pervasive on the job, and most women don’t report it, what can be done?

Some business scholars suggest that the best way to prevent sexual harassment, bullying and other toxic workplace behavior is to train co-workers to stand up for their abused colleagues when they witness incidents. One reason why encouraging intervention makes good sense is that some 70 percent of women have observed harassment in the workplace, according to research by psychologist Robert Hitlan.

The trouble is that most people who witness or become aware of sexual harassment don’t speak out. Screenwriter, producer and actor Scott Rosenberg has both admitted to and denounced how this dynamic enabled Weinstein to become an alleged serial abuser. “Let’s be perfectly clear about one thing,” he wrote in a private Facebook post published in the media. “Everybody-f—ing-knew.” He also said:

“in the end, I was complicit.
I didn’t say s—.
I didn’t do s—.
Harvey was nothing but wonderful to me.
So I reaped the rewards and I kept my mouth shut.
And for that, once again, I am sorry.”

Actor Matt Damon, right, has denied reports that he helped stifle reporting that would have exposed alleged sexual harassment and abuse by movie mogul Harvey Weinstein, left, years ago.
AP Photo/Matt Sayles

Researching how people respond

To understand why witnesses often don’t speak up, a colleague and I did a study in 2010 that asked participants to review hypothetical sexual harassment scenarios and indicate if they would respond.

The results seemed promising: Participants generally said they would take steps to stop harassing behavior if they saw it happen. People indicated they’d be more likely to respond if two conditions were met: It was a quid pro quo – that is, if the harasser promised benefits in exchange for sexual favors – and the workplace valued diversity and inclusion. In such cultures, there are open lines of communication, and leaders embrace diversity and inclusion.

There’s a potential problem with experiments using the kind of hypothetical scenario that we and others employed. People don’t always do what they think they will in real-life situations. For example, psychologists find that people tend to believe they’ll feel more distraught during an emotionally devastating event than they actually do when it occurs.

Other researchers find similar patterns with reactions to racists. People think they will recoil and experience distress when hearing racist comments. But when they actually hear those remarks, they don’t.

The same dynamics are at play when examining sexual harassment during job interviews, as illustrated in a study conducted by psychologists Julie Woodzicka and Marianne LaFrance.

Participants, all of whom were women, expected to feel angry, confront the harasser and refuse to answer the hypothetical interviewer’s inappropriate questions. Some of the questions, for example, included asking the job applicant if she had a boyfriend or if women should wear bras at work.

However, when they witnessed this simulated behavior during the experiment’s mock interviews, people responded differently. In fact, 68 percent of participants who only read about the incidents said they would refuse to answer questions. Yet all 50 of the participants who witnessed the staged hostile behavior answered them.

Drawing from these studies, my team conducted an experiment in 2012 to determine how harassment bystanders would react to hearing inappropriate comments about women.

Some of the female participants read about a hypothetical scenario in which harassment took place, while another group observed harassment occurring in a staged setting. We determined that the participants, who were college students, overestimated how they would respond to seeing someone else get harassed.

The reason this matters is that people who don’t feel distress are unlikely to take action.




Intervention training

What stops people from reacting the way they think they will?

Psychologists blame this disparity on “impact bias.” People overestimate the impact that all future events – be they weddings, funerals or even the Super Bowl – will have on them emotionally. Real life is messier than our imagined futures, with social pressures and context making a difference.

This suggests a possible solution. Since context matters, organizations can take steps to encourage bystanders to take action.

For example, they can train their staff to speak up with the Green Dot Violence Prevention Program or other approaches. The Green Dot program was originally designed to reduce problems like sexual assault and stalking by encouraging bystanders to do something. The EEOC says this “bystander intervention training might be effective in the workplace.”

Especially with workplace harassment, establishing direct and anonymous channels for reporting sexist incidents is essential. The EEOC also says employees should not fear negative reprisal or gossip when they do report harassment.

Finally, bystanders are more likely to intervene in organizations that make their refusal to tolerate harassment clear. For that to happen, leaders must assert and demonstrate their commitment to harassment-free workplaces, enforce appropriate policies and train new employees accordingly.

Until more people take a stand when they witness sexual harassment, it will continue to haunt American workplaces.

George B. Cunningham, Professor of Sport Management, Faculty Affiliate of the Women’s and Gender Studies Program, and Director, Laboratory for Diversity in Sport, Texas A&M University

This article was originally published on The Conversation. Read the original article.

How seeing problems in the brain makes stigma disappear

A pair of identical twins. The one on the right has OCD, while the one on the left does not.
Brain Imaging Research Division, Wayne State University School of Medicine, CC BY-SA

David Rosenberg, Wayne State University

As a psychiatrist, I find that one of the hardest parts of my job is telling parents and their children that they are not to blame for their illness.

Children with emotional and behavioral problems continue to suffer considerable stigma. Many in the medical community refer to them as “diagnostic and therapeutic orphans.” Unfortunately, for many, access to high-quality mental health care remains elusive.

An accurate diagnosis is the best way to tell whether or not someone will respond well to treatment, though that can be far more complicated than it sounds.

I have written three textbooks about using medication in children and adolescents with emotional and behavioral problems. I know that this is never a decision to take lightly.

But there’s reason for hope. While we cannot yet medically diagnose any psychiatric condition, dramatic advances in brain imaging, genetics and other technologies are helping us objectively identify mental illness.

Knowing the signs of sadness

All of us experience occasional sadness and anxiety, but persistent problems may be a sign of a deeper issue. Ongoing issues with sleeping, eating, weight, school and pathologic self-doubt may be signs of depression, anxiety or obsessive-compulsive disorder.

Separating out normal behavior from problematic behavior can be challenging. Emotional and behavior problems can also vary with age. For example, depression in pre-adolescent children occurs equally in boys and girls. During adolescence, however, depression rates increase much more dramatically in girls than in boys.

It can be very hard for people to accept that they – or their family member – are not to blame for their mental illness. That’s partly because there are no current objective markers of psychiatric illness, making it difficult to pin down. Imagine diagnosing and treating cancer based on history alone. Inconceivable! But that is exactly what mental health professionals do every day. This can make it harder for parents and their children to accept that they don’t have control over the situation.

Fortunately, there are now excellent online tools that can help parents and their children screen for common mental health issues such as depression, anxiety, panic disorder and more.

Most important of all is making sure your child is assessed by a licensed mental health professional experienced in diagnosing and treating children. This is particularly important when medications that affect the child’s brain are being considered.

Seeing the problem

Thanks to recent developments in genetics, neuroimaging and the science of mental health, it’s becoming easier to characterize patients. New technologies may also make it easier to predict who is more likely to respond to a particular treatment or experience side effects from medication.

Our laboratory has used brain MRI studies to help unlock the anatomy, chemistry and physiology underlying OCD. This repetitive, ritualistic illness – while the term is sometimes used among laypeople to describe someone who is uptight – is actually a serious and often devastating behavioral illness that can paralyze children and their families.

In children with OCD, the brain’s arousal center, the anterior cingulate cortex, is ‘hijacked.’ This causes critical brain networks to stop working properly.
Image adapted from Diwadkar VA, Burgess A, Hong E, Rix C, Arnold PD, Hanna GL, Rosenberg DR. Dysfunctional activation and brain network profiles in youth with Obsessive-Compulsive Disorder: A focus on the dorsal anterior cingulate during working memory. Frontiers in Human Neuroscience. 2015; 9: 1-11., CC BY-SA

Through sophisticated, high-field brain imaging techniques – such as fMRI and magnetic resonance spectroscopy – that have become available recently, we can actually measure a child’s brain and see malfunctioning areas.

We have found, for example, that children 8 to 19 years old with OCD never get the “all clear signal” from a part of the brain called the anterior cingulate cortex. This signal is essential to feeling safe and secure. That’s why, for example, people with OCD may continue checking that the door is locked or repeatedly wash their hands. They have striking brain abnormalities that appear to normalize with effective treatment.

We have also begun a pilot study with a pair of identical twins. One has OCD and the other does not. We found brain abnormalities in the affected twin, but not in the unaffected twin. Further study is clearly warranted, but the results fit the pattern we have found in larger studies of children with OCD before and after treatment as compared to children without OCD.

Exciting brain MRI and genetic findings are also being reported in childhood depression, non-OCD anxiety, bipolar disorder, ADHD and schizophrenia, among others.

Meanwhile, the field of psychiatry continues to grow. For example, new techniques may soon be able to identify children at increased genetic risk for psychiatric illnesses such as bipolar disorder and schizophrenia.

New, more sophisticated brain imaging and genetics technology actually allows doctors and scientists to see what is going on in a child’s brain and genes. For example, by using MRI, our laboratory discovered that the brain chemical glutamate, which serves as the brain’s “light switch,” plays a critical role in childhood OCD.

What a scan means

When I show families their child’s MRI brain scans, they often tell me they are relieved and reassured to “be able to see it.”

Children with mental illness continue to face enormous stigma. Often when they are hospitalized, families are frightened that others may find out. They may hesitate to let schools, employers or coaches know about a child’s mental illness. They often fear that other parents will not want to let their children spend too much time with a child who has been labeled mentally ill. Terms like “psycho” or “going mental” remain part of our everyday language.

The example I like to give is epilepsy. Epilepsy once carried all the stigma that mental illness carries today. In the Middle Ages, a person with epilepsy was considered to be possessed by the devil. Later, more advanced thinking held that people with epilepsy were crazy. Who else would shake all over their body or urinate and defecate on themselves but a crazy person? Many patients with epilepsy were locked in lunatic asylums.

Then in 1924, psychiatrist Hans Berger discovered something called the electroencephalogram (EEG). This showed that epilepsy was caused by electrical abnormalities in the brain. The specific location of these abnormalities dictated not only the diagnosis but the appropriate treatment.

That is the goal of modern biological psychiatry: to unlock the mysteries of the brain’s chemistry, physiology and structure. This can help better diagnose and precisely treat childhood onset mental illness. Knowledge heals, informs and defeats ignorance and stigma every time.

David Rosenberg, Professor, Psychiatry and Neuroscience, Wayne State University

This article was originally published on The Conversation. Read the original article.

Why you need to get involved in the geoengineering debate – now

Atakan Yildiz/Shutterstock.com

Rob Bellamy, University of Oxford

The prospect of engineering the world’s climate system to tackle global warming is becoming more and more likely. This may seem like a crazy idea but I, and over 250 other scientists, policy makers and stakeholders from around the globe recently descended on Berlin to debate the promises and perils of geoengineering.

There are many touted methods of engineering the climate. Early, outlandish ideas included installing a “space sunshade”: a massive mirror orbiting the Earth to reflect sunlight. The ideas most in discussion now may not seem much more realistic – spraying particles into the stratosphere to reflect sunlight, or fertilising the oceans with iron to encourage algal growth and carbon dioxide sequestration through photosynthesis.

But the prospect of geoengineering has become a lot more real since the Paris Agreement. The 2015 Paris Agreement set out near universal, legally binding commitments to keep the increase in global temperature to well below 2°C above pre-industrial levels and even to aim for limiting the rise to 1.5°C. The Intergovernmental Panel on Climate Change (IPCC) has concluded that meeting these targets is possible – but nearly all of their scenarios rely on the extensive deployment of some form of geoengineering by the end of the century.

Some geoengineers take their inspiration from supervolcanic eruptions, which can lower global temperatures.
patobarahona/Shutterstock.com

How to engineer the climate

Geoengineering comes in two distinct flavours. The first is greenhouse gas removal: those ideas that would seek to remove and store carbon dioxide and other greenhouse gases from the atmosphere. The second is solar radiation management: the ideas that would seek to reflect a level of sunlight away from the Earth.

Solar radiation management is the more controversial of the two, doing nothing to address the root cause of climate change – greenhouse gas emissions – and raising a whole load of concerns about undesirable side effects, such as changes to regional weather patterns.

And then there is the so-called “termination problem”. If we ever stopped engineering the climate in this way then global temperature would abruptly bounce back to where it would have been without it. And if we had not been reducing or removing emissions at the same time, this could be a very sharp and sudden rise indeed.

Most climate models that see the ambitions of the Paris Agreement achieved assume the use of greenhouse gas removal, particularly bio-energy coupled with carbon capture and storage technology. But, as the recent conference revealed, although research in the field is steadily gaining ground, there is a dangerous gap between the current state of the art and what achieving the Paris Agreement’s goals would require.

The Paris Agreement – and its implicit dependence on greenhouse gas removal – has undoubtedly been one of the most significant developments to impact on the field of geoengineering since the last conference of its kind back in 2014. This shifted the emphasis of the conference away from the more controversial and attention-grabbing solar radiation management and towards the more mundane but policy relevant greenhouse gas removal.

Geoengineering measures.
IASS

Controversial experiments

But there were moments when sunlight reflecting methods still stole the show. A centrepiece of the conference was the solar radiation management experiments campfire, where David Keith and his colleagues from the Harvard University Solar Geoengineering Research Programme laid out their experimental plans. They aim to lift an instrument package to a height of 20km using a high-altitude balloon and release a small amount of reflective particles into the atmosphere.

This would not be the first geoengineering experiment. Scientists, engineers and entrepreneurs have already begun experimenting with various ideas, several of which have attracted a great degree of public interest and controversy. A particularly notable case was one UK project, in which plans to release a small amount of water into the atmosphere at a height of 1km using a pipe tethered to a balloon were cancelled in 2013 owing to concerns over intellectual property.

Such experiments will be essential if geoengineering ideas are to ever become technically viable contributors to achieving the goals of the Paris Agreement. But it is the governance of experiments, not their technical credentials, that has always been and still remains the most contentious area of the geoengineering debate.

Critics warned that the Harvard experiment could be the first step on a “slippery slope” towards an undesirable deployment and therefore must be restrained. But advocates argued that the technology needs to be developed before we can know what it is that we are trying to govern.

The challenge for governance is not to back either one of these extremes, but rather to navigate a responsible path between them.

How to govern?

The key to defining a responsible way to govern geoengineering experiments is accounting for public interests and concerns. Would-be geoengineering experimenters, including those at Harvard, routinely try to address these concerns by pointing to their experiments’ small scale and limited extent. But, as I argued at the conference, in public discussions the meaning of the “scale” and “extent” of geoengineering experiments has been subjective and always qualified by other concerns.

My colleagues and I have found that the public have at least four principal concerns about geoengineering experiments: their level of containment; uncertainty around what the outcomes would be; the reversibility of any impacts; and the intent behind them. A small-scale experiment unfolding indoors might therefore be deemed unacceptable if it raised concerns about private interests, for example. On the other hand, a large-scale experiment conducted outdoors could be deemed acceptable if it did not release materials into the open environment.

Under certain conditions the four dimensions could be aligned. The challenge for governance is to account for these – and likely other – dimensions of perceived controllability. This means that public involvement in the design of governance itself needs to be front and centre in the development of geoengineering experiments.

A whole range of two-way dialogue methods is available – focus groups, citizens’ juries, deliberative workshops and many others. And for those outside formal involvement in such processes: read about geoengineering, talk about geoengineering. We need to start a society-wide conversation on how to govern such controversial technologies.

Public interests and concerns need to be drawn out well in advance of an experiment and the results used to meaningfully shape how we govern it. This will not only make the experiment more legitimate, but also make it substantively better.

Make no mistake, experiments will be needed if we are to learn the worth of geoengineering ideas. But they must be done with public values at their core.

Rob Bellamy, James Martin Research Fellow in the Institute for Science, Innovation and Society, University of Oxford

This article was originally published on The Conversation. Read the original article.

Why is it so hard for the wrongfully jailed to get justice?

Linda Asquith, Leeds Beckett University

Imagine for a moment you are wrongfully convicted of a crime. You get sent to prison, where you start to serve out your sentence – every minute of every day knowing you are innocent. Then the unthinkable happens and you are released. You are elated – this is the moment you’ve been waiting for.

But those feelings of elation and happiness quickly turn to fear and despair as you realise you have nowhere to go. Your old life as you knew it is gone, you have no way of supporting yourself, your relationships have broken down and you have nowhere to turn to for support.

Sadly, this is the reality many exonerees face when they are trying to put their lives back together. Many of these people – who have in some cases spent years behind bars – find upon release that their problems are only exacerbated. Wrongfully wrenched from their families, homes and communities, they struggle to reintegrate into society when they return.

And things are made worse because, unlike prisoners, who have access to support to help them resettle when they are released, those who suffer a miscarriage of justice get no such help.

“Rightfully convicted” individuals are provided with a plan for release from prison – often starting months in advance. This involves a range of activities, all of which are aimed at helping the person to resettle back into the community. But exonerees have none of these preparations – and often receive very little notice of their release.

Victor Nealon, for example, served 16 years in prison after he was wrongly convicted of rape. He received three hours’ notice of his release, and ended up in a bed and breakfast on his first night as a free man – he had nowhere else to go.

An unfamiliar world

The wrongfully convicted don’t receive any preparation for their release because of the way the prison system works. Prisoners have to show they are “tackling their offending behaviour” to gain parole. But if you haven’t committed the crime in the first place, this is not possible. The end result is that a person may spend longer in prison than if they had committed the offence and admitted it.

Upon release, the wrongfully convicted are thrust into a world they are unfamiliar with – and they have zero support or guidance. It’s common for exonerees to develop PTSD as a result of their wrongful conviction, alongside other mental and physical health problems requiring significant support.

This in part happens because as soon as the conviction is quashed, these people are no one’s responsibility. They are no longer a prisoner, or an ex-offender. There is no standard programme of support which is triggered at the point of release. And while probation would be well placed to support the wrongfully convicted, they cannot as they are not ex-offenders – ex-prisoners, yes, but not ex-offenders.

Say I’m innocent

There are only two organisations that provide specific support to exonerees: the Citizens Advice Bureau (CAB) based at the Royal Courts of Justice, and the Miscarriages of Justice Organisation (MOJO). The latter was founded by Paddy Hill – one of the six men wrongly convicted of the 1974 Birmingham pub bombings – in an attempt to provide others with the support that he was not given when released in 1991.

But both services are restricted by funding and staffing limitations, and while both organisations do superb work against a backdrop of austerity measures and extremely limited resources, both are at best a piecemeal response to what is, in reality, a government responsibility.

A recent BBC documentary called Fallout highlights these issues. The documentary’s director, Mark Mcloughlin, has launched the “Say I’m Innocent” campaign and is now fighting for all the services available to guilty prisoners on release to be made available to exonerees. The campaign is also calling for a public announcement of a person’s innocence upon their release, as well as other measures, including a transition centre in both the UK and Ireland to give exonerees the time and help they need to reintegrate into society.

This is important because the key issue here is responsibility. The state assumed responsibility for these individuals when they were wrongfully convicted. It is therefore only right that the state continues to take responsibility for them once exonerated.

Linda Asquith, Senior Lecturer in Criminology, Leeds Beckett University

This article was originally published on The Conversation. Read the original article.

Whales and dolphins have rich cultures – and could hold clues to what makes humans so advanced


A pod of spinner dolphins in the Red Sea.
Alexander Vasenin/wikimedia, CC BY-SA

Susanne Shultz, University of Manchester

Humans are like no other species. We have constructed stratified states, colonised nearly every habitat on Earth and we’re now looking to move to other planets. In fact, we are so advanced that some of our innovations – such as fossil fuel technologies, intensive agriculture and weapons of mass destruction – may ultimately lead to our downfall.

Even our closest relatives, the primates, lack traits such as developed language, cumulative culture, music, symbolism and religion. Yet scientists still haven’t come to a consensus on why, when and how humans evolved these traits. Luckily, there are non-human animals that have evolved societies and culture to some extent. My latest study, published in Nature Ecology & Evolution, investigates what cetaceans (whales and dolphins) can teach us about human evolution.

The reason it is so difficult to trace the origins of human traits is that social behaviour does not fossilise. It is therefore very hard to understand when and why cultural behaviour first arose in the human lineage. Material culture such as art, burial items, technologically sophisticated weapons and pottery is very rare in the archaeological record.

Previous research in primates has shown that a large primate brain is associated with larger social groups, cultural and behavioural richness, and learning ability. A larger brain is also tied to energy-rich diets, long life spans, extended juvenile periods and large bodies. But researchers trying to uncover whether each of these traits is a cause or a consequence of large brains find themselves at odds with each other – often arguing at cross purposes.

One prevailing explanation is the social brain hypothesis, which argues that our minds and consequently our brains have evolved to solve the problems associated with living in an information rich, challenging and dynamic social environment. This comes with challenges such as competing for and allocating food and resources, coordinating behaviour, resolving conflicts and using information and innovations generated by others in the group.

Primates with large brains tend to be highly social animals.
Peter van der Sluijs/wikipedia, CC BY-SA

However, despite the abundance of evidence for a link between brain size and social skills, the arguments rumble on about the role of social living in cognitive evolution. Alternative theories suggest that primate brains have evolved in response to the complexity of forest environments – either in terms of searching for fruit or visually navigating a three dimensional world.

Under the sea

But it’s not just primates that live in rich social worlds. Insects, birds, elephants, horses and cetaceans do, too.

Cetaceans are especially interesting because not only do we know that they do interesting things, but some live in multi-generational societies and they have the largest brains in the animal kingdom. In addition, they do not eat fruit, nor do they live in forests. For these reasons, we decided to evaluate the evidence for the social or cultural brain in cetaceans.

Another advantage with cetaceans is that research groups around the world have spent decades documenting and uncovering their social worlds. These include signature whistles, which appear to identify individual animals, cooperative hunting, complex songs and vocalisations, social play and social learning. We compiled all this information into a database and evaluated whether a species’ cultural richness is associated with its brain size and the kind of society they live in.

We found that species with larger brains live in more structured societies and have more cultural and learned behaviours. The group of species with the largest relative brain size are the large, whale-like dolphins. These include the false killer whale and pilot whale.

To illustrate the two ends of the spectrum: killer whales have cultural food preferences – some populations prefer fish, others seals. They also hunt cooperatively and have matriarchs leading the group. Sperm whales have actual dialects, meaning that different populations have distinct vocalisations. In contrast, some of the large baleen whales, which have smaller brains, eat krill rather than fish or other mammals, live fairly solitary lives and only come together for breeding seasons and at rich food sources.

The lives of beaked whales are still a big mystery.
Ted Cheeseman/wikipedia, CC BY-SA

We still have much to learn about these amazing creatures. Some of the species were not included in our analysis because we know so little about them. For example, there is a whole group of beaked whales with very large brains. However, because they dive and forage in deep water, sightings are rare and we know almost nothing about their behaviour and social relationships.

Nevertheless, this study certainly supports the idea that the richness of a species’ social world is predicted by its brain size. The fact that we’ve found this in an independent group so different from primates makes it all the more important.

Susanne Shultz, University Research Fellow, University of Manchester

This article was originally published on The Conversation. Read the original article.

‘You all look the same’: non-Muslim men targeted in Islamophobic hate crime because of their appearance


Men with beards have been called terrorists.
via shutterstock.com

Imran Awan, Birmingham City University and Irene Zempi, Nottingham Trent University

There has been a 29% rise in recorded hate crimes in the UK in the past year according to new figures released by the Home Office, which also showed a spike in offences following the EU referendum.

The consequences of hate crime are widespread. While Muslims in Britain are increasingly subject to Islamophobia, some non-Muslims are also being targeted because they are perceived to be Muslim.

In new research presented to the All-Party Parliamentary Group on British Muslims we looked at the experiences of non-Muslim men who reported being the target of Islamophobic hate crime.

We interviewed 20 non-Muslim men of different ages, races and religions, based in the UK. Our group included Sikhs, Christians, Hindus and atheists. Although their experiences were all different, they believed that their skin colour, their beard or turban meant that they were perceived to be Muslim – and targeted for it. We decided to only interview men in this study because we understand from our community work that men are more likely than women to be victims of Islamophobia due to mistaken identity.

Our findings backed up our previous research showing that a spike in hate crime is often triggered by a particular event. The men we interviewed, whose names we have anonymised here to protect their identities, described how they felt “vulnerable” and “isolated” after the EU referendum. Vinesh, a 32-year old, Indian British Hindu, told us:

People have been calling me names on Twitter like ‘You’re a p**i c**t’. I have also been threatened on Facebook like ‘Today is the day we get rid of the likes of you!’ I feared for my safety when I read this.

Some of the men noted how terrorist attacks including those in Manchester and London also triggered more Islamophobia. Others also noted how the Trump administration and its stance towards Muslims had promoted anti-Muslim sentiments globally.

https://datawrapper.dwcdn.net/XkO0U/2/

In some cases, hate crimes are targeted at people’s homes or workplaces, with property damaged with Islamophobic graffiti because the perpetrators believe the victims are Muslim. In a recent case in Liverpool, “Allar Akbar” (sic) was painted on a Hindu family’s future home.

One 37-year-old man, called Paul, a white British atheist who is perceived to be a convert to Islam due to his beard, told us how he had been targeted:

I live on a rough estate. I had dog excrement shoved through the mailbox. They also threw paint over my door.

Nobody stepped in to help

Some of those we interviewed felt that their beard was a key aspect of why they were being targeted for looking Muslim. One 19-year-old, called Cameron, who is black British, said:

It’s happened to me ever since I grew a beard. I’m not a Muslim but people stare at me because they think I am.

Many of those we interviewed reported that they suffered anxiety, depression, physical illness, loss of income and employment as a result of being targeted. Raj, a 39-year-old British Indian, told us:

We live in fear every day. We face abuse and intimidation daily but we should not have to endure this abuse.

Such feelings of insecurity and isolation were exacerbated by the fact that these hate incidents usually took place in public places in front of passers-by who didn’t intervene to help. Mark, who is white and Christian and perceived to be Muslim due to his beard and Mediterranean complexion, said:

I was verbally abused by another passenger on the bus who branded me an ‘ISIS terrorist’ while passengers looked on without intervening. In another incident, I had ‘Brexit’ yelled in my face … I feel very lonely. No one has come to my assistance or even consoled me.

Identity questioned

The men we interviewed constantly felt the need to prove their identity, and differentiate themselves from Muslims in an attempt to prevent future victimisation. Many described it as emotionally draining. Samuel, a 58-year-old black British Christian, said:

My identity is always questioned because I look like a Muslim. It does make me feel low but I got used to it. As a black man with a beard you always get associated as being a Muslim terrorist.

The men we interviewed said they wanted much more public awareness of hate crimes and better police recording of these kinds of offences. They also called for training for bystanders and for people, such as teachers, who may have to deal with more of these situations. And they thought that an app through which all types of hate crime could be reported in real time could offer support for victims.

The rise in Islamophobic hate crime has made many Muslims live in fear. But this kind of hatred is pervasive, and can affect anyone perceived to be Muslim. “You all look the same”, one man was told after explaining that he wasn’t Muslim to somebody who abused him on the train. British society needs to get a better grip on understanding this often “invisible” form of hate crime and what to do about it.

Imran Awan, Associate Professor and Deputy Director of the Centre for Applied Criminology, Birmingham City University and Irene Zempi, Director of the Nottingham Centre for Bias, Prejudice & Hate Crime, Nottingham Trent University

This article was originally published on The Conversation. Read the original article.

PS
Hate crimes against disabled children, however, are also on the rise in Britain. – AS

Why the Indigenous in New Zealand have fared better than those in Canada


Maggie Cywink, of Whitefish River First Nation, holds up a sign behind Canadian Prime Minister Justin Trudeau during a summit in Ottawa in support of missing and murdered Indigenous women.
THE CANADIAN PRESS/Adrian Wyld

Dominic O’Sullivan, Charles Sturt University

Canadian Prime Minister Justin Trudeau’s recent speech to the United Nations brought Canada’s genocidal story to the world stage.

It gave historical context to an enduring colonialism.

The impact is widespread, but neatly summarized in a life expectancy differential between Indigenous and other Canadians of five to 15 years for men and 10 to 15 years for women. In New Zealand, by way of contrast, the differential between Maori and non-Maori is 7.3 years for men and 6.8 years for women.

These figures summarise the story of the power gap between Indigenous peoples and the settler state in both countries. Policy solutions lie beyond the liberal welfare state, beyond egalitarian justice. The origins of the persistent power gaps in each country are different, however, and reflect different understandings of relationships among sovereignty, citizenship, nationhood and self-determination.

The Indigenous peoples of Canada and New Zealand share similar experiences as subjects of British colonialism.

Yet there are profound differences both in the situation for Indigenous peoples in both countries and in the opportunities for resistance they’ve been able to pursue.

Maori have always held a greater share of the New Zealand national population than the Indigenous in Canada. Maori share a common language, and New Zealand’s smaller land mass makes resistance simpler to organize. Yet their place in the body politic is always contested, as state and public strategies of exclusion compete with the claim to self-determination.

‘Lead the lad to be a good farmer’

Historically, the greater Maori capacity for resistance did not dampen colonial resolve. But it did mean that assimilation, rather than genocide, was the intent of government policy. The purpose of New Zealand’s non-residential native schools, for example, was to “lead the lad to be a good farmer and the girl to be a good farmer’s wife,” as the director-general of education put it in 1931.

Canada’s concern for “the reconciliation of the pre-existence of Aboriginal societies with the sovereignty of the Crown”, articulated in a 1997 Supreme Court ruling, was minimized by the previous Conservative government of Stephen Harper but is rhetorically aligned with the “new beginning” that Trudeau spoke of at the United Nations.

Former Canadian Prime Minister Stephen Harper speaks with a Maori elder as he and his wife, Laureen, watch an official Maori powhiri during a visit to New Zealand in 2014.
THE CANADIAN PRESS/Adrian Wyld

Trudeau proposed that the UN Declaration on the Rights of Indigenous Peoples would now be Canada’s policy guide. It would rationalize stronger nation-to-nation, or government-to-government, relationships. Yet at the same time, the 2016 Canadian Human Rights Tribunal’s ruling, handed down a year after Trudeau’s election and urging the government to address discrimination against Indigenous children on reserves, has yet to be heeded.

Trudeau expressed concern at the UN about the self-determination of First Nations in Canada, but he didn’t speak of the individual Indigenous citizen’s self-determination.

He did not speak to the child on the reserve whose poverty is a direct result of lesser access to services that others in Canada take for granted as rights of citizenship.

Similar circumstances do exist in New Zealand where racism in schooling, health, the labour market and criminal justice compromise citizenship. However, Maori in New Zealand can demand better with reference to the Treaty of Waitangi and the “rights and privileges of British subjects” that it confers.

Maori protected under treaty

That treaty gave the British Crown the right to establish government. In return, Britain offered protection of Maori authority over their own affairs and natural resources.

The promise has not been consistently kept, but the treaty does give moral and increasingly political and jurisprudential authority to the Maori claim to self-determination. The treaty means that Maori do not contest the post-settler presence, but they do contest the Crown exercising a unilateral sovereign authority.

In 2015, the Waitangi Tribunal, which hears claims against the Crown for breaches of the treaty, found that the agreement was not a cession of sovereignty as the Crown had always claimed. While the government does not accept the finding, and it’s not legally binding, it affirms the Maori position on self-determination.

It also affirms a Maori way of thinking about contemporary politics. It raises possibilities for deeper introspection about Maori as nations, and Maori as citizens, in ways that are not apparent in Trudeau’s interpretation of the UN’s Indigenous declaration as it pertains to Canada.

There is an argument that nation-to-nation relationships respect the fact that sovereignty was never ceded, and perhaps a counter-argument that Indigenous Canadians claiming the full rights and capacities of state citizenship requires accepting the moral legitimacy of Crown sovereignty. However, if sovereignty means the capacity to function as a self-determining people, one needs to think about the relative and relational character of political authority, and about the sources of political possibility.

Sharing sovereignty does not mean assimilation

Political authority and self-determination can’t reach their full potential without the support of each other. They exist both inside and outside the state. They exist simultaneously.

If the Crown is sovereign, it exercises that sovereignty only as the people’s agent. The UN declaration is insistent that, if they wish, Indigenous peoples have a right to share that sovereignty.

Sharing sovereignty is not dependent on the Indigenous person’s assimilation into an homogenous body politic, but on the capacity to contribute to society as an Indigenous person.

That could include the ability to receive public education in one’s own language, to be elected to Parliament by one’s own people (as is the situation in New Zealand) or to receive health care in ways that are responsive to cultural preferences.

In these ways, state sovereignty is not an authority that exists over and above Indigenous citizens. Nor does state citizenship exist at the expense of the Indigenous nation. It complements and supports self-determination.

In the only book-length comparative study of Indigenous politics in Canada and New Zealand, Roger Maaka and Augie Fleras imagine Indigenous peoples as “sovereign in their own right yet sharing sovereignty with society at large.”

New Zealand continues to work out the terms of this kind of system.

Canada does not give it substantive thought, and that’s a serious constraint on the goal of self-determination for First Nations.

Dominic O’Sullivan, Associate Professor, Charles Sturt University

This article was originally published on The Conversation. Read the original article.

Why blaming ivory poaching on Boko Haram isn’t helpful


Talking about ivory-funded terrorism overlooks the real sources of income for terror groups.
Author supplied

Mark Moritz, The Ohio State University; Alice B. Kelly Pennaz, University of California, Berkeley; Mouadjamou Ahmadou, and Paul Scholte, The Ohio State University

In 2016, as part of a ceremony in Cameroon’s capital Yaoundé, 2,000 elephant tusks were burned to demonstrate the country’s commitment to fighting poaching and the illegal trade in wildlife. US Ambassador to the United Nations Samantha Power gave a speech at the event linking poaching to terrorism.

The idea that terror groups like Boko Haram fund their activities through ivory poaching in Africa is a simple and compelling narrative. It has been adopted by governments, NGOs and media alike. But it is undermining wildlife conservation and human rights.

The problem is that such claims hinge on a single document, which relies on one unnamed source to estimate terrorist profits from ivory. The study has not been corroborated elsewhere.

Similarly, there is little evidence that terrorist activities are funded by wildlife poaching in Cameroon. We have studied wildlife conservation and pastoralism in the Far North Region of Cameroon over the last two decades, and we have found it highly unlikely that Boko Haram relies on ivory to survive financially. Elephant populations in the areas where Boko Haram operates are so low that this would be a faulty business plan, to say the least: only 246 elephants were counted in Waza Park in 2007.

Talking about ivory-funded terrorism overlooks the real sources of income for these groups. In Cameroon and Nigeria evidence shows that Boko Haram is using profits from cattle raids to support its activities. Boko Haram’s plunder of the countryside leaves cattle herders destitute.

The dangers of militarisation

The wrong focus has implications for conservation and human rights. Linking poachers and terrorists has led to a further militarisation of conservation areas in Africa. More guns and guards have been sent into parks to stop poachers.

The military approach has also led to serious human rights violations. These take the form of shoot-on-sight policies and other violent tactics carried out against local populations. Law enforcement in protected areas is important for controlling poaching and terrorism alike but it is not a perfect solution.

And wildlife conservation can suffer if well armed but underpaid park guards turn to poaching themselves.

It would be more helpful if properly paid and trained people provided security across the region rather than just in protected areas.

Consequences of the wrong connection

Ignoring the fact that cattle, not ivory, may be fuelling terrorism in places like Cameroon does a disservice to pastoralists. While livestock may compete with wildlife when pastoralists take refuge inside better-protected areas like parks, they do so only because their livelihoods are at risk.

Mistaking the true source of income for terrorist groups also means that their violent activities continue.

Finally, it diverts attention from corrupt conservation and government officials who may be complicit in poaching.

Of course, this is not to say that poaching is not happening. The dramatic declines in elephant populations in Cameroon and elsewhere in Africa indicate otherwise. The question is who is doing the poaching and why.

We challenge governments and organisations interested in wildlife, security and human rights to take a closer look at the evidence. Instead of sharing simple claims about terrorism and poaching, they should consider all the forms of economic support to terrorist organisations.

In Cameroon, this would mean offering better security for pastoralists and their cattle. Protecting cattle does not have the same appeal for Western audiences as protecting elephants. But it could be a way to conserve wildlife, protect human rights and stop funding for terrorism.

Mark Moritz, Associate Professor of Anthropology, The Ohio State University; Alice B. Kelly Pennaz, Researcher, University of California, Berkeley; Mouadjamou Ahmadou, Lecturer in Visual Anthropology, and Paul Scholte, Ecologist leading programs and organizations in conservation, The Ohio State University

This article was originally published on The Conversation. Read the original article.

The IQ test wars: why screening for intelligence is still so controversial

For over a century, IQ tests have been used to measure intelligence. But can it really be measured?
via shutterstock.com

Daphne Martschenko, University of Cambridge

John, 12 years old, is three times as old as his brother. How old will John be when he is twice as old as his brother?

Two families go bowling. While they are bowling, they order a pizza for £12, six sodas for £1.25 each, and two large buckets of popcorn for £10.86 each. If they split the bill between the two families, how much does each family owe?

4, 9, 16, 25, 36, ?, 64. What number is missing from the sequence?

These are questions from online Intelligence Quotient or IQ tests. Tests that purport to measure your intelligence can be verbal, meaning written, or non-verbal, focusing on abstract reasoning independent of reading and writing skills. First created more than a century ago, the tests are still widely used today to measure an individual’s mental agility and ability.

Education systems use IQ tests to help identify children for special education and gifted education programmes and to offer extra support. Researchers across the social and hard sciences study IQ test results, looking at everything from their relation to genetics, socio-economic status and academic achievement, to race.

Online IQ “quizzes” purport to be able to tell you whether or not “you have what it takes to be a member of the world’s most prestigious high IQ society”.

If you want to boast about your high IQ, you should have been able to work out the answers to the questions. When John is 16 he’ll be twice as old as his brother. The two families who went bowling each owe £20.61. And 49 is the missing number in the sequence.
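As a sanity check, the arithmetic behind those three answers can be verified in a few lines (a quick illustration added here, not part of the original article):

```python
# Q1: John is 12, three times his brother's age (so the brother is 4).
# After t years: 12 + t = 2 * (4 + t)  =>  t = 4, so John will be 16.
brother = 12 // 3
john_then = 12 + (12 - 2 * brother)
assert john_then == 16

# Q2: split the bowling bill between the two families.
bill = 12 + 6 * 1.25 + 2 * 10.86      # pizza + sodas + popcorn = 41.22
per_family = round(bill / 2, 2)
assert per_family == 20.61

# Q3: the sequence is the perfect squares 2**2 .. 8**2, so the gap is 7**2.
missing = 7 ** 2
assert missing == 49

print(john_then, per_family, missing)  # 16 20.61 49
```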

Despite the hype, the relevance, usefulness, and legitimacy of the IQ test is still hotly debated among educators, social scientists, and hard scientists. To understand why, it’s important to understand the history underpinning the birth, development, and expansion of the IQ test – a history that includes the use of IQ tests to further marginalise ethnic minorities and poor communities.

Testing times

In the early 1900s, dozens of intelligence tests were developed in Europe and America claiming to offer unbiased ways to measure a person’s cognitive ability. The first of these tests was developed by French psychologist Alfred Binet, who was commissioned by the French government to identify students who would face the most difficulty in school. The resulting 1905 Binet-Simon Scale became the basis for modern IQ testing. Ironically, Binet actually thought that IQ tests were inadequate measures for intelligence, pointing to the test’s inability to properly measure creativity or emotional intelligence.

At its conception, the IQ test provided a relatively quick and simple way to identify and sort individuals based on intelligence – which was and still is highly valued by society. In the US and elsewhere, institutions such as the military and police used IQ tests to screen potential applicants. They also implemented admission requirements based on the results.

The US Army Alpha and Beta Tests screened approximately 1.75m draftees in World War I in an attempt to evaluate the intellectual and emotional temperament of soldiers. Results were used to determine how capable a soldier was of serving in the armed forces and identify which job classification or leadership position one was most suitable for. Starting in the early 1900s, the US education system also began using IQ tests to identify “gifted and talented” students, as well as those with special needs who required additional educational interventions and different academic environments.

Ironically, some districts in the US have recently employed a maximum IQ score for admission into the police force. The fear was that those who scored too highly would eventually find the work boring and leave – after significant time and resources had been put towards their training.

Alongside the widespread use of IQ tests in the 20th century was the argument that the level of a person’s intelligence was influenced by their biology. Ethnocentrics and eugenicists, who viewed intelligence and other social behaviours as being determined by biology and race, latched onto IQ tests. They held up the apparent gaps these tests illuminated between ethnic minorities and whites or between low- and high-income groups.

Some maintained that these test results provided further evidence that socioeconomic and racial groups were genetically different from each other and that systemic inequalities were partly a byproduct of evolutionary processes.

Going to extremes

The US Army Alpha and Beta test results garnered widespread publicity and were analysed by Carl Brigham, a Princeton University psychologist and early founder of psychometrics, in a 1922 book A Study of American Intelligence. Brigham applied meticulous statistical analyses to demonstrate that American intelligence was declining, claiming that increased immigration and racial integration were to blame. To address the issue, he called for social policies to restrict immigration and prohibit racial mixing.

A few years before, American psychologist and education researcher Lewis Terman had drawn connections between intellectual ability and race. In 1916, he wrote:

High-grade or border-line deficiency … is very, very common among Spanish-Indian and Mexican families of the Southwest and also among Negroes. Their dullness seems to be racial, or at least inherent in the family stocks from which they come … Children of this group should be segregated into separate classes … They cannot master abstractions but they can often be made into efficient workers … from a eugenic point of view they constitute a grave problem because of their unusually prolific breeding.

There has been considerable work from both hard and social scientists refuting arguments such as Brigham’s and Terman’s that racial differences in IQ scores are influenced by biology.

Critiques of such “hereditarian” hypotheses – arguments that genetics can powerfully explain human character traits and even human social and political problems – cite a lack of evidence and weak statistical analyses. This critique continues today, with many researchers resistant to and alarmed by research that is still being conducted on race and IQ.

But in their darkest moments, IQ tests became a powerful way to exclude and control marginalised communities using empirical and scientific language. Supporters of eugenic ideologies in the 1900s used IQ tests to identify “idiots”, “imbeciles”, and the “feebleminded”. These were people, eugenicists argued, who threatened to dilute the White Anglo-Saxon genetic stock of America.

A plaque in Virginia in memory of Carrie Buck, the first person to be sterilised under eugenics laws in the state.
Jukie Bot/flickr.com, CC BY-NC

As a result of such eugenic arguments, many American citizens were later sterilised. In 1927, an infamous ruling by the US Supreme Court legalised forced sterilisation of citizens with developmental disabilities and the “feebleminded,” who were frequently identified by their low IQ scores. The ruling, known as Buck v Bell, resulted in over 65,000 coerced sterilisations of individuals thought to have low IQs. Those in the US who were forcibly sterilised in the aftermath of Buck v Bell were disproportionately poor or of colour.

Compulsory sterilisation in the US on the basis of IQ, criminality, or sexual deviance continued formally until the mid 1970s when organisations like the Southern Poverty Law Center began filing lawsuits on behalf of people who had been sterilised. In 2015, the US Senate voted to compensate living victims of government-sponsored sterilisation programmes.

IQ tests today

Debate over what it means to be “intelligent” and whether or not the IQ test is a robust tool of measurement continues to elicit strong and often opposing reactions today. Some researchers say that intelligence is a concept specific to a particular culture. They maintain that it appears differently depending on the context – in the same way that many cultural behaviours would. For example, burping may be seen as an indicator of enjoyment of a meal or a sign of praise for the host in some cultures and impolite in others.

What may be considered intelligent in one environment, therefore, might not in others. For example, knowledge about medicinal herbs is seen as a form of intelligence in certain communities within Africa, but does not correlate with high performance on traditional Western academic intelligence tests.

According to some researchers, the “cultural specificity” of intelligence makes IQ tests biased towards the environments in which they were developed – namely white, Western society. This makes them potentially problematic in culturally diverse settings. The application of the same test among different communities would fail to recognise the different cultural values that shape what each community values as intelligent behaviour.

Going even further, given the IQ test’s history of being used to further questionable and sometimes racially-motivated beliefs about what different groups of people are capable of, some researchers say such tests cannot objectively and equally measure an individual’s intelligence at all.

Used for good

At the same time, there are ongoing efforts to demonstrate how the IQ test can be used to help those very communities who have been most harmed by them in the past. In 2002, the execution across the US of criminally convicted individuals with intellectual disabilities, who are often assessed using IQ tests, was ruled unconstitutional. This has meant IQ tests have actually prevented individuals from facing “cruel and unusual punishment” in the US court of law.

In education, IQ tests may be a more objective way to identify children who could benefit from special education services. This includes programmes known as “gifted education” for students who have been identified as exceptionally or highly cognitively able. Ethnic minority children and those whose parents have a low income are under-represented in gifted education.

There is ongoing debate about the use of IQ tests in schools.
via shutterstock.com

The way children are chosen for these programmes means that Black and Hispanic students are often overlooked. Some US school districts employ admissions procedures for gifted education programmes that rely on teacher observations and referrals or require a family to sign their child up for an IQ test. But research suggests that teacher perceptions and expectations of a student, which can be preconceived, have an impact upon a child’s IQ scores, academic achievement, and attitudes and behaviour. This means that teachers’ perceptions can also have an impact on the likelihood of a child being referred for gifted or special education.

The universal screening of students for gifted education using IQ tests could help to identify children who otherwise would have gone unnoticed by parents and teachers. Research has found that those school districts which have implemented screening measures for all children using IQ tests have been able to identify more children from historically underrepresented groups to go into gifted education.

IQ tests could also help identify structural inequalities that have affected a child’s development. These could include the impacts of environmental exposure to harmful substances such as lead and arsenic or the effects of malnutrition on brain health. All these have been shown to have a negative impact on an individual’s mental ability and to disproportionately affect low-income and ethnic minority communities.


Identifying these issues could then help those in charge of education and social policy to seek solutions. Specific interventions could be designed to help children who have been affected by these structural inequalities or exposed to harmful substances. In the long run, the effectiveness of these interventions could be monitored by comparing IQ tests administered to the same children before and after an intervention.

Some researchers have tried doing this. One US study in 1995 used IQ tests to look at the effectiveness of a particular type of training for managing Attention Deficit/Hyperactivity Disorder (ADHD), called neurofeedback training. This is a therapeutic process aimed at trying to help a person to self-regulate their brain function. Most commonly used with those who have some sort of identified brain imbalance, it has also been used to treat drug addiction, depression and ADHD. The researchers used IQ tests to find out whether the training was effective in improving the concentration and executive functioning of children with ADHD – and found that it was.

Since its invention, the IQ test has generated strong arguments in support of and against its use. Both sides are focused on the communities that have been negatively impacted in the past by the use of intelligence tests for eugenic purposes.

The use of IQ tests in a range of settings, and the continued disagreement over their validity and even morality, highlights not only the immense value society places on intelligence – but also our desire to understand and measure it.

Daphne Martschenko, PhD Candidate, University of Cambridge

This article was originally published on The Conversation. Read the original article.

The Irony of Susceptibility to Manipulations: Grooming Neurotypicals for Social Ineptitude

Henny Kupferstein, Ph.D.


The stereotypes of autistic people perpetuate a myth that they are socially inept. Yet non-autistics, also known as neurotypicals, portray ineptitudes on the basis of their susceptibility to body language, communication, and perceptual manipulations. How we learn these signals opens the debate for nature versus nurture, and the acquisition of social skill aptitude. Who is more socially equipped? The one who is capable of surrounding himself with pretentious body language, or the one who is mindful of her full spectrum of awareness? A neurotypical who communicates with learned body gestures is currently considered evolved, while the acquisition of those skills are a direct result of the inability to survive otherwise. The autistic who remains authentic in order to adapt to the current environment is potentially most equipped to function in society.

The cycle of life requires attracting a mate, reproduction, and adaptations for exploitation to those who threaten…


The murky issue of whether the public supports assisted dying

Katherine Sleeman, King’s College London

The High Court has rejected a judicial review challenging the current law which prohibits assisted dying in the UK. Noel Conway, a 67-year-old retired lecturer who was diagnosed with Motor Neurone Disease in 2014, was fighting for the right to have medical assistance to bring about his death. Commenting after the judgement on October 5, his solicitor indicated that permission will now be sought to take the case to the appeal courts.

Campaigners are often quick to highlight the strength of public support in favour of assisted dying, arguing that the current law is undemocratic. But there are reasons to question the results of polls on this sensitive and emotional issue.

There have been numerous surveys and opinion polls on public attitudes towards assisted dying in recent years. The British Social Attitudes (BSA) Survey, which has asked this question sequentially since the 1980s, has shown slowly increasing public support. When asked in 1984: “Suppose a person has a painful incurable disease. Do you think that doctors should be allowed by law to end the patient’s life, if the patient requests it?”, 75% of people surveyed agreed. By 1989, 79% of people agreed with the statement, and in 1994 it had gone up to 82%.

Detail of the question matters

But not surprisingly, the acceptability of assisted dying varies according to the precise context. The 2005 BSA survey asked in more depth about attitudes towards assisted dying and end of life care. While 80% of respondents agreed with the original question, support fell to 45% for assisted dying for illnesses that were incurable and painful but not terminal.

A 2010 ComRes-BBC survey also found that the incurable nature of illness was critical. In this survey, while 74% of respondents supported assisted suicide if an illness was terminal, this fell to 45% if it was not.

Wording counts.
from http://www.shutterstock.com

It may not be surprising that support varies considerably according to the nature of the condition described, but it is important. First, because the neat tick boxes on polls belie the messy reality of determining prognosis for an individual patient. Second, because of the potential for drift in who might be eligible once assisted dying is legalised. This has happened in countries such as Belgium, which became the first country to authorise euthanasia for children in 2014, and more recently in Canada, where within months of the 2016 legalisation of medical assistance in dying, the possibility of extending the law to those with purely psychological suffering was announced.

It’s not just diagnosis or even prognosis that influences opinion. In the US, Gallup surveys carried out since the 1990s have shown that support for assisted dying hinges on the precise terminology used to describe it. In its 2013 poll, 70% of respondents supported “end the patient’s life by some painless means” whereas only 51% supported “assisting the patient to commit suicide”. This gap shrank considerably in 2015 – possibly as a result of the Brittany Maynard case. Maynard, a high-profile advocate of assisted dying who had terminal cancer, moved from California to Oregon to take advantage of the Oregon Death with Dignity law in 2014.

Even so, campaigning organisations for assisted dying tend to avoid the word “suicide”. Language is emotive, but if we want to truly gauge public opinion, we need to understand this issue, not gloss over it.

Information changes minds

Crucially, support for assisted dying is known to drop off when key information is provided. Back in the UK, a ComRes/CARE poll in 2014 showed 73% of people surveyed agreed with legalisation of a bill which enables: “Mentally competent adults in the UK who are terminally ill, and who have declared a clear and settled intention to end their own life, to be provided with assistance to commit suicide by self-administering lethal drugs.” But 42% of these same people subsequently changed their mind when some of the empirical arguments against assisted dying were highlighted to them – such as the risk of people feeling pressured to end their lives so as not to be a burden on loved ones.

This is not just a theoretical phenomenon. In 2012, a question over legalising assisted dying was put on the ballot paper in Massachusetts, one of the most liberal US states. Support for legalisation fell in the weeks prior to the vote, as arguments against legalisation were aired and complexities became apparent. In the end, the Massachusetts proposition was defeated by 51% to 49%. Public opinion polls, in the absence of public debate, may gather responses that are reflexive rather than informed.

Polls are powerful tools for democratic change. While opinion polls do show the majority of people support legalisation of assisted dying, the same polls also show that the issue is far from clear. It is murky, and depends on the responder’s awareness of the complexities of assisted dying, the context of the question asked, and its precise language. If we can conclude anything from these polls, it is not the proportion of people who do or don’t support legislation, but how easily people can change their views.

Katherine Sleeman, NIHR Clinician Scientist and Honorary Consultant in Palliative Medicine, King’s College London

This article was originally published on The Conversation. Read the original article.

When gun control makes a difference: 4 essential reads

Emily Schwartz Greco, The Conversation

Editor’s note: This is a roundup of gun control articles published by scholars from the U.S. and two other countries where deadly mass shootings are far less common.

An underresearched epidemic

Guns are a leading cause of death of Americans of all ages, including children. Yet “while gun violence is a public health problem, it is not studied the same way other public health problems are,” explains Sandro Galea, dean of Boston University’s School of Public Health.

That’s no accident. Congress has prohibited firearm-related research by the Centers for Disease Control and Prevention and the National Institutes of Health since 1996. Galea says:

“Unfortunately, a shortage of data creates space for speculation, conjecture and ill-informed argument that threatens reasoned public discussion and progressive action on the issue.”

The Australian model

The contrast with Australia is especially stark. Just as Congress was barring any research that might strengthen the case for tighter gun regulations, that country established very strict firearm laws in response to the Port Arthur massacre, which killed 35 people in 1996.

To clamp down on guns, the federal government worked with Australia’s states to ban semiautomatic rifles and pump action shotguns, establish a uniform gun registry and buy the now-banned guns from people who had purchased them before owning them became illegal. The country also stopped recognizing self-defense as an acceptable reason for gun ownership and outlawed mail-order gun sales.

These measures worked. Simon Chapman, a public health expert at the University of Sydney, writes:

“When it comes to firearms, Australia is a far safer place today than it was in the 1990s and in previous decades.”

There have been no mass murders since the Port Arthur massacre and the subsequent clampdown on guns, Chapman observes. In contrast, there were 13 of those tragic incidents over the previous 18 years – in which a total of 104 victims died. Other gun deaths have also declined.

Concerns about complacency

After so many years with no mass killings, some Australian scholars fear that their country may be moving in the wrong direction.

Twenty years after doing more than any other nation to strengthen firearm regulation, “many people think we no longer have to worry about gun violence,” say Rebecca Peters of the University of Sydney and Chris Cunneen at the University of New South Wales. They write:

“Such complacency jeopardizes public safety. The pro-gun lobby has succeeded in watering down the laws in several states. Weakening the rules on pistols so that unlicensed shooters can walk into a club and shoot without any waiting period for background checks has resulted in at least one homicide in New South Wales.”

In the UK

Like Australia, the U.K. tightened its gun regulations following its own 1996 tragedy – when a man killed 16 children and their teacher at Dunblane Primary School, near Stirling, Scotland.

Subsequently, the U.K. banned some handguns and bought back many banned weapons. There, however, progress has been less impressive, notes Helen Williamson, a researcher at the University of Brighton. On the one hand, the number of firearms offenses has declined from a high of 24,094 in 2004 to 7,866 in 2015. On the other, criminals are growing more “resourceful in identifying alternative sources of firearms,” she says, adding:

“Although the availability of high-quality firearms may have fallen, the demand for weapons remains. This demand has driven criminals to be resourceful in identifying alternative sources of firearms. There are growing concerns about how they could acquire instructions online on how to build a homemade gun, or even 3D-print a functioning pistol.”

Emily Schwartz Greco, Philanthropy and Nonprofits Editor, The Conversation

This article was originally published on The Conversation. Read the original article.

The science behind… coffee!

Brewing a great cup of coffee depends on chemistry and physics

What can you do to ensure a more perfect brew?
Chris Hendon, CC BY-ND

Christopher H. Hendon, University of Oregon

Coffee is unique among artisanal beverages in that the brewer plays a significant role in its quality at the point of consumption. In contrast, drinkers buy draft beer and wine as finished products; the only consumer-controlled variable is the temperature at which they are drunk.

Why is it that coffee produced by a barista at a cafe always tastes different than the same beans brewed at home?

It may be down to their years of training, but more likely it’s their ability to harness the principles of chemistry and physics. I am a materials chemist by day, and many of the physical considerations I apply to other solids apply here. The variables of temperature, water chemistry, particle size distribution, ratio of water to coffee, time and, perhaps most importantly, the quality of the green coffee all play crucial roles in producing a tasty cup. It’s how we control these variables that allows for that cup to be reproducible.

How strong a cup of joe?

Besides the psychological and environmental contributions to why a barista-prepared cup of coffee tastes so good in the cafe, we need to consider the brew method itself.

Science helps optimize the coffee.
Chris Hendon, CC BY-ND

We humans seem to like drinks that contain coffee constituents (organic acids, Maillard products, esters and heterocycles, to name a few) at 1.2 to 1.5 percent by mass (as in filter coffee), and also favor drinks containing 8 to 10 percent by mass (as in espresso). Concentrations outside of these ranges are challenging to execute. There are a limited number of technologies that achieve 8 to 10 percent concentrations, the espresso machine being the most familiar.

There are many ways, though, to achieve a drink containing 1.2 to 1.5 percent coffee. A pour-over, Turkish, Arabic, Aeropress, French press, siphon or batch brew (that is, regular drip) apparatus – each produces coffee that tastes good around these concentrations. These brew methods also boast an advantage over their espresso counterpart: They are cheap. An espresso machine can produce a beverage of this concentration: the Americano, which is just an espresso shot diluted with water to the concentration of filter coffee.
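The Americano dilution mentioned above is simple mass arithmetic. Here is a rough sketch with illustrative numbers (the 30 g shot size and the exact strengths are assumptions for the example, not figures from the article):

```python
# Dilute an espresso shot down to filter-coffee strength.
shot_mass = 30.0          # grams of espresso (assumed shot size)
shot_strength = 0.09      # 9% dissolved coffee by mass (middle of the 8-10% range)
target_strength = 0.0135  # midpoint of the 1.2-1.5% filter range

solids = shot_mass * shot_strength     # 2.7 g of dissolved coffee in the shot
final_mass = solids / target_strength  # 200 g total beverage at filter strength
water_to_add = final_mass - shot_mass  # 170 g of water

print(f"add ~{water_to_add:.0f} g of water")  # add ~170 g of water
```

The dissolved-solids mass stays fixed while the total mass grows, which is why a concentrated shot stretches to a full cup.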

All of these methods result in roughly the same amount of coffee in the cup. So why can they taste so different?

When coffee meets water

There are two families of brewing device within the low-concentration methods – those that fully immerse the coffee in the brew water and those that flow the water through the coffee bed.

From a physical perspective, the major difference is that the temperature of the coffee particulates is higher in the full immersion system. The slowest part of coffee extraction is not the rate at which compounds dissolve from the particulate surface. Rather, it’s the speed at which coffee flavor moves through the solid particle to the water-coffee interface, and this speed is increased with temperature.

The Coffee Taster’s Flavor Wheel provides a way to name various tastes within the beverage.
Specialty Coffee Association of America, CC BY-NC-ND

A higher particulate temperature means that more of the tasty compounds trapped within the coffee particulates will be extracted. But higher temperature also lets more of the unwanted compounds dissolve in the water, too. The Specialty Coffee Association presents a flavor wheel to help us talk about these flavors – from green/vegetative or papery/musty through to brown sugar or dried fruit.

Pour-overs and other flow-through systems are more complex. Unlike full immersion methods where time is controlled, flow-through brew times depend on the grind size since the grounds control the flow rate.

The water-to-coffee ratio also affects the brew time. Simply grinding more finely to increase extraction invariably changes the brew time, as the water seeps more slowly through finer grounds. One can increase the water-to-coffee ratio by using less coffee, but as the mass of coffee is reduced, the brew time also decreases. Optimization of filter coffee brewing is hence multidimensional and trickier than for full immersion methods.

What do they know that we don’t?
Redd Angelo on Unsplash, CC BY

Other variables to try to control

Even if you can optimize your brew method and apparatus to precisely mimic your favorite barista, there is still a near-certain chance that your home brew will taste different from the cafe’s. There are three subtleties that have tremendous impact on the coffee quality: water chemistry, particle size distribution produced by the grinder and coffee freshness.

First, water chemistry: Given coffee is an acidic beverage, the acidity of your brew water can have a big effect. Brew water containing low levels of both calcium ions and bicarbonate (HCO₃⁻) – that is, soft water – will result in a highly acidic cup, sometimes described as sour. Brew water containing high levels of HCO₃⁻ – typically, hard water – will produce a chalky cup, as the bicarbonate has neutralized most of the flavorsome acids in the coffee.

Ideally we want to brew coffee with water whose chemistry falls somewhere in the middle. But there’s a good chance you don’t know the bicarbonate concentration in your own tap water, and a small change makes a big difference. To taste the impact, try brewing coffee with Evian – one of the highest bicarbonate concentration bottled waters, at 360 mg/L.

The particle size distribution your grinder produces is critical, too.

Every coffee enthusiast will rightly tell you that blade grinders are disfavored because they produce a seemingly random particle size distribution; there can be both powder and essentially whole coffee beans coexisting. The alternative, a burr grinder, features two pieces of metal with teeth that cut the coffee into progressively smaller pieces. They allow ground particulates through an aperture only once they are small enough.

Looking for a more even grind.
Aaron Itzerott on Unsplash, CC BY

There is contention over how to optimize grind settings when using a burr grinder, though. One school of thought supports grinding the coffee as fine as possible to maximize the surface area, which lets you extract the most delicious flavors in higher concentrations. The rival school advocates grinding as coarse as possible to minimize the production of fine particles that impart negative flavors. Perhaps the most useful advice here is to determine what you like best based on your taste preference.

Finally, the freshness of the coffee itself is crucial. Roasted coffee contains a significant amount of CO₂ and other volatiles trapped within the solid coffee matrix: Over time these gaseous organic molecules will escape the bean. Fewer volatiles means a less flavorful cup of coffee. Most cafes will not serve coffee more than four weeks out from the roast date, emphasizing the importance of using freshly roasted beans.

One can mitigate the rate of staling by cooling the coffee (as described by the Arrhenius equation). While you shouldn’t chill your coffee in an open vessel (unless you want fish finger brews), storing coffee in an airtight container in the freezer will significantly prolong freshness.
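The Arrhenius point can be made concrete with a back-of-envelope estimate. The activation energy used here (~50 kJ/mol for volatile loss) is an assumed, illustrative value, not a figure from the article, so treat the result as an order-of-magnitude sketch:

```python
import math

R = 8.314          # gas constant, J/(mol*K)
Ea = 50_000.0      # assumed activation energy for staling, J/mol (illustrative)
T_room = 295.0     # ~22 C, in kelvin
T_freezer = 255.0  # ~-18 C, in kelvin

# Arrhenius: k(T) is proportional to exp(-Ea / (R*T));
# taking the ratio of the two rates cancels the unknown prefactor.
slowdown = math.exp(Ea / R * (1 / T_freezer - 1 / T_room))
print(f"staling roughly {slowdown:.0f}x slower in the freezer")
```

Under that assumption, freezer storage slows staling by roughly an order of magnitude or more, which is consistent with the practical advice to freeze beans in an airtight container.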

So don’t feel bad that your carefully brewed cup of coffee at home never stacks up to what you buy at the café. There are a lot of variables – scientific and otherwise – that must be wrangled to produce a single superlative cup. Take comfort that most of these variables are not optimized by some mathematical algorithm, but rather by somebody’s tongue. What’s most important is that your coffee tastes good to you… brew after brew.

Christopher H. Hendon, Assistant Professor of Computational Materials and Chemistry, University of Oregon

This article was originally published on The Conversation. Read the original article.

The tomato growers

Once there were two tomato growers. One was called James and the other one Gordon.

Gordon was very disappointed with his tomatoes. Every day, he would go to them and water them and check how much they had grown. Sadly, his tomatoes stayed pitifully small. He would twist them and squeeze them to feel if they were at least ripening a little bit, and on occasion would accidentally dislodge one from the vine. It would drop to the ground and rot away.

Gordon felt something had to be done. So he purchased the best fertilizer he could find, with the right amount of potassium and all the other nutrients a tomato could wish for, and placed it in front of his tomatoes. He told them: “If you grow really really well, I will give you this fertilizer as a reward. This shall be your motivation.” It seemed to have no effect on the tomatoes. If anything, they were only growing at an even slower pace.

Gordon became even more dissatisfied with his tomatoes and started withholding water to see if that would convince the tomatoes to grow. But all that happened was that the tomato plants became infested with pests and he had to spray them with pesticides. (“Damn, that stuff is expensive,” Gordon grumbled.) It was too late. The tomato plants turned yellow and started dying. Gordon got very frustrated and kicked at the plants.

James, on the other hand, adored his tomatoes. He loved them! Every day, he went to them, and removed all those little sprouts from the armpits of the tomato plants and enjoyed that typical spicy tomato smell. That way, all the nutrition went to the little tomato fruits, not into making sprouts. He watered them every day, and made sure the quantity of water was just so.

He took care that they got the right amount of nice warm sunshine and on days without sunshine, he would provide artificial sunshine. He also gave them the right amount of fertilizer whenever they needed it. His tomatoes became famous. Everyone admired them. They were so beautiful, so healthy! His tomatoes seemed to be shining with joy. It was almost as if they loved James back and wanted to make him really really happy.

Gordon commented that life just is not fair and that there is nothing you can do about it and also that James had started growing his tomatoes a year earlier, hadn’t he, and that there were no pests at James’s location, and probably also a lot more sunshine. He knew it! Life ain’t fair! And he had never liked James much anyway.

James was not aware of Gordon’s grumblings at all. He found more than enough joy in caring for his tomatoes.


The above is from my e-book FCQ. It’s available from Amazon and other retailers.

Will future parents need a license?

I ran into a discussion on Kialo. I quickly contributed the first paragraph below and penned the rest of what follows, all within about five minutes. I later edited it a bit to make it easier to read.

I am so pleased someone started this discussion. I promote non-discrimination of embryos and fetuses. A child is not a consumer product but a human being who must be loved and encouraged to flourish. How can you love one child but not another if the latter is non-mainstream? I’ve been thinking about that and it’s made me wonder if it actually means that the parents aren’t fit to be parents. I haven’t dared say that out loud yet, but this discussion clears the road for me.

So yes, maybe parents-to-be should require vetting.

Within a few decades, we will no longer require sex to create babies, but will make our offspring in the lab, possibly on the basis of skin cells from each of the parents. We’ll probably look after our little gestating (incubating) children as if they are rare orchids that we want to bring to bloom.

(So by that time, women will no longer have a need for abortions and they won’t have to menstruate and experience PMS any longer either.)

I can imagine very well that you will require a license in the future in order to have a child. Somehow, that feels like an automatic consequence of the possibilities we will have then.

And also, indeed, why should adoptive parents be scrutinized while natural parents are free to do whatever they want?

And after all, in that distant future, anyone who wants can probably have a child (technically speaking). Even adoption may slowly become a thing of the past, that is, if we get to the point that we no longer succumb to illnesses and accidents and maybe even can choose when our lives end.

I hasten to add that at the moment, natural parents are not always free to do as they please either, of course. For example, in countries with a great deal of inequality, the state may step in on the basis of what is no more than prejudice in practice.

Nowadays, some children suffer horribly, either because of their parents or because of someone else. Sometimes before children are removed from their parents and sometimes afterward.

In practice, perhaps it won’t be an actual license but a training program that must be completed with good results. If that training is tough and long enough, that alone will already sort committed parents from parents who aren’t ready for a child.

Would they have to get a license or go through some kind of training program every time they want to have a child? Yes, I think so. Insights change.

It’s even possible that parenting will eventually become a profession.

PS
Unfortunately, Kialo may not work very well with Linux. I was able to post my contribution, but seem unable to comment on other people’s contributions. Maybe it’s part of the learning curve, but I did see the intro video and the comment option mentioned in it simply does not seem to exist for me.

My response to Dr Seidel’s post on the BMJ blog (Baby genome screening—paving the way to genetic discrimination?)

I just submitted the following comment, here:
http://blogs.bmj.com/bmj/2017/07/05/markus-g-seidel-baby-genome-screening-paving-the-way-to-genetic-discrimination/
It has been accepted. Yes, it was far from flawless – I wrote most of it on the spur of the moment – but I think that what I mean is clear enough. I have done some editing in the version below.

Dear Dr Seidel, thank you for making these very important points.

I am taking the opportunity to offer a few suggestions for discussion and invite more views on these issues. Some of what I write below only emerged during the writing of this response and may not be watertight. Can you withhold initial judgement, think along with me and see it as an exercise in exploring the various angles?

But first of all, please forgive me my shortcomings; I phrase various concepts differently than you do as my background is not in medicine and I tend to shy away from jargon. Also, what I say is not limited to newborns, but that will be obvious to this audience. The principles largely remain the same, whether we are talking about a pre-embryo, a fetus or a newborn, and whether I call them person, individual or child. (Legally, this is currently much more complex, as you know.) My focus in this discussion does not extend to persons beyond the age of majority (likely not even beyond 8 or 10, in practice) and I am also keeping the concept of euthanasia out of the discussion even though it is related. Worst of all, I throw all techniques related to genetic material into one big pot because it enables me to see the bigger picture better.

I write from my own perspective of an opinionated white woman in the west, but when I say “we”, my intention is to refer to the human species. People from other cultures will undoubtedly spot biases in my western views; I would like those people to point out those biases.

You ask whether genome screening for newborns will pave the way to genetic discrimination. You also raise the question of the interpretation (and reliability) of such data and you have privacy concerns.

With regard to the latter, I think that we will slowly have to accept that the digital age comes with the loss of privacy in many ways. That does not have to be as dramatic as it sounds. Privacy is a changing concept anyway, which also has a cultural angle to it. The realization that people from different generations and from different cultures have slightly different views on what privacy is may add some perspective that can make us breathe easier. So we should probably become more relaxed about the loss of privacy as we knew it and focus more on preventing and ameliorating potential negative consequences of that loss, if any. The real issue is not the loss of privacy, but abuse of personal information.

In my opinion, what we need to do is ensure non-discrimination and make certain that genomic information will only be used to improve any individual’s (medical) care. (The data can become part of studies, anonymized or not; we also need to redefine consent, but I am going to leave that out of this discussion too.) In other words, genomic information must only be used to enable and allow human beings to flourish.

Even a word like “flourish” or “thrive” is highly ambiguous, though. I mean it in a non-materialistic manner, whereas some others do not at all. Perhaps I can break it down into stages to show what I mean within this specific context.

You mention the Hippocratic Oath, which some define as “Do no harm”. Harm is another concept that we don’t agree on yet and that we – therefore? – haven’t been able to define well.

I think that we need to start applying the principle of non-discrimination to all new human life. I believe that we should consider every human individual just as valuable – in a non-materialistic manner – as every other human individual.

When I toss this around, I run into a peculiar dilemma. While I must see a deaf or a blind person (as an example) as equally valuable as a hearing or sighted person, I cannot accept it when a hearing or sighted person is deliberately made (permanently) deaf or blind, for instance during a mugging or a work-related accident. This also applies with regard to so-called augmentations. I cannot take a human being against his or her wishes and carry out a nose reconstruction or even inject botox. That makes me realize that harm done to a human appears to be any interference or change that occurs against that human being’s wishes and is implemented by someone else.

For now, I have to limit this to physical changes because the area of psychological changes is too complicated. (Just think of schools; we do not take bad teachers to court for being bad teachers, but we do take bad surgeons and physicians to court for being bad doctors, also because the evidence related to the latter is often much clearer.) Physical interference that occurs against a person’s wishes can of course also result in psychological changes, but that does not actually matter for the concept of harm within this context.

The next problem I then run into is the fact that particularly an embryo, fetus or newborn has a very limited ability to express wishes, and that also holds for young children. If I try to put myself in the shoes of a child, however, it becomes possible to define harm in spite of that limitation.

This – putting themselves in the shoes of the child, as adults – is what parents, guardians and other carers do all the time, of course. They sometimes have to make the decisions for the child and express the child’s wishes for the child, as if they were the child, using the knowledge they have as adults, knowledge that the child will have in the future but does not possess yet.

So, let’s step into a child’s shoes, then. It is hard to imagine a sick or injured child that would want to get sicker and sicker and sicker or want to have a permanently festering wound resulting from an injury caused by a fall. So it is fair to say that anything we do toward remedying such a situation is in accordance with the child’s wishes, in essence, even in cases in which the child cannot even say “please make the pain go away”. It is what the child would want if it possessed the knowledge and abilities of an adult.

So, the first part of enabling a human – a child – to flourish is to attempt to prevent any deterioration of the child’s health.

We may have to start agreeing that this cannot be considered harm within this context, even if the chance of success is small, certainly in cases for which there are no alternative remedies. We may even have to decide that doing nothing constitutes harm when there is still an option of doing something.

If a child has appendicitis, a surgeon will have to cut into the child’s abdomen in order to remove the appendix to prevent deterioration of the child’s health or even death. Strictly speaking, cutting into a child’s abdomen constitutes inflicting an injury, but in this case, as it is done with the intention of preventing greater harm, namely the deterioration of the child’s health, we do not see it as harm within this context. (This may be an example of where I display a western bias?)

(Of course, we can still take the surgeon to court if his or her work fails to meet professional standards, but that is a different type of harm. We certainly need professional standards.)

We can also take a child to the dentist and the dentist may have to inflict some discomfort in order to prevent deterioration of the child’s health.

We should not, however, drag a child along kicking and screaming to have its ears pierced, as this is not done with the aim of preventing a deterioration of health. (If a child asks to have its ears pierced, there is a clear wish on the side of the child.)

Note that the intention matters. When a procedure is carried out with the intention of wanting to prevent deterioration of health, we never have 100% certainty that the intended result will be achieved. (This may have implications for how we think about practices carried out in other cultures. Keep this at the back of your mind. Our own western views are not the only views that hold value.)

The second vital part of enabling a human being to flourish is to do everything we can within a daily-life context to allow that person to thrive on the basis of the person’s given physical (and mental) situation.

We send children to playgrounds to let them play with other children and test their physical limits, we feed them, clothe them and provide shelter as well as love and all those other concepts that are hard to measure but easy to grasp. In essence, this is no different for children who are, say, blind or deaf or who have Down syndrome.

The BBC news site just highlighted a very nice albeit exceptional example of what I mean by flourishing within this context: http://www.bbc.co.uk/news/m…

To do everything we can to allow that child to thrive is also required for children who are born with a medical condition that requires some form of medication or extra nutritional care to prevent deterioration of health. This, I think, is where standard genomic testing of newborns can play a pivotal role. These days, parents still too often have to conclude that something is seriously genetically wrong with their child on the basis of the deterioration of the child’s health, which in some cases means that irreversible damage has already occurred to the child’s health.

So, failure to provide such testing (screening) from the point in the future at which we know how to do and use this properly and reliably could perhaps also be seen as harm as it could lead to the preventable deterioration of a child’s health and would not encourage the child to thrive.

The next level within this context of enabling someone to flourish – and this is where it gets even trickier – is interfering with the child’s genetic make-up.

We may feel that the child is flawed, whereas the child is actually viable and will not suffer a deterioration of health or be at great risk of certain complications if we allow it to live. At the moment, we often prevent such a child from coming into the world. This is where, I think, we need to draw the line and have to take a step back. It is a discriminatory practice because it appears to express a value judgement.

I also think that because of limited resources, we may need to approach this in a stepped manner.

What I mean is that if we initially limit techniques like CRISPR and gene therapy to situations in which a resulting child would have “a life not worth living”, we might have a fairly just and affordable way to start implementing CRISPR, gene therapy and anything else that may come along. Once we’ve done that, we can slowly extend it to other conditions. The costs of such techniques will come down, and if we start with rare diseases that are currently incurable, we also limit the initial costs of implementation.

The loss of privacy may actually become an advantage because openness makes it also much easier to detect abuse of information and to safeguard against discrimination.

One of the reasons why I strongly believe that we need to start implementing non-discrimination for all new human life is the following. Once humans start interfacing with technology, other so-called impairments – which are currently often either biased opinions or restrictions imposed by society – cease to be impairments, taking away much of the motivation for “correcting” these individuals.

Moreover, not only do we – the human race as well as society – need diversity, we may have future needs for abilities of which we currently don’t realize that some people possess them. Those may well be people who are currently considered “impaired” or “flawed”. Junk DNA was once considered just that, too.

As I already indicated, we need a workable definition of what constitutes a life not worth living and once we have one (I may have found one, by the way, based on the principle of humanity), we may end up concluding that these are the primary cases in which we actually have a duty to interfere with the child’s genetic make-up.

So I agree with you that we have to exercise restraint, in spite of all the enormously exciting developments we currently see around us. Discrimination is not the only concern and neither are interpretation and costs. We don’t know all the possible consequences yet of the application of any of those new developments, even if we think we do.

We have made many decisions in the past without asking questions that now are so blatantly obvious in hindsight. Did nobody foresee that insecticides might also affect bees and birds and amphibians, to name just one example of a past mistake, albeit a highly significant one that now also affects human fertility?

We have another reason to take it slow, namely the fact that laws and regulations lag behind, evolve in response to situations arising in real life, and rarely anticipate what may happen in the future. Legal professionals, too, tend to think conservatively and in a geographically limited manner. It is probably the UN and the WHO that should start taking the lead in this area and guide us into the future. Do they need a push? Should we apply pressure?

Because perhaps more than anything else, we need to work toward reaching a global consensus (including legislation) on such important matters, irrespective of how challenging and impossible that may seem. It was also once completely unimaginable that we’d land humans on the moon, so if we managed that, we can accomplish so much more than we think we can.