Reblogged: Super-intelligence and eternal life: transhumanism’s faithful follow it blindly into a future for the elite

Distant Earth.

Alexander Thomas, University of East London

The rapid development of so-called NBIC technologies – nanotechnology, biotechnology, information technology and cognitive science – is giving rise to possibilities that have long been the domain of science fiction. Disease, ageing and even death are all human realities that these technologies seek to end.

They may enable us to enjoy greater “morphological freedom” – we could take on new forms through prosthetics or genetic engineering. Or advance our cognitive capacities. We could use brain-computer interfaces to link us to advanced artificial intelligence (AI).

Nanobots could roam our bloodstream to monitor our health and enhance our propensities for joy, love and other emotions. Advances in one area often raise new possibilities in others, and this “convergence” may bring about radical changes to our world in the near future.

“Transhumanism” is the idea that humans should transcend their current natural state and limitations through the use of technology – that we should embrace self-directed human evolution. If the history of technological progress can be seen as humankind’s attempt to tame nature to better serve its needs, transhumanism is the logical continuation: the revision of humankind’s nature to better serve its fantasies.

As David Pearce, a leading proponent of transhumanism and co-founder of Humanity+, says:

If we want to live in paradise, we will have to engineer it ourselves. If we want eternal life, then we’ll need to rewrite our bug-ridden genetic code and become god-like … only hi-tech solutions can ever eradicate suffering from the world. Compassion alone is not enough.

But there is a darker side to the naive faith that Pearce and other proponents have in transhumanism – one that is decidedly dystopian.

There is unlikely to be a clear moment when we emerge as transhuman. Rather, technologies will become more intrusive and integrate seamlessly with the human body. Technology has long been thought of as an extension of the self. Many aspects of our social world, not least our financial systems, are already largely machine-based. There is much to learn from these evolving human/machine hybrid systems.

Yet the often Utopian language and expectations that surround and shape our understanding of these developments have been under-interrogated. The profound changes that lie ahead are often talked about in abstract ways, because the evolutionary “advancements” are deemed so radical that discussion of them ignores the reality of current social conditions.

In this way, transhumanism becomes a kind of “techno-anthropocentrism”, in which transhumanists often underestimate the complexity of our relationship with technology. They see it as a controllable, malleable tool that, with the correct logic and scientific rigour, can be turned to any end. In fact, just as technological developments are dependent on and reflective of the environment in which they arise, they in turn feed back into the culture and create new dynamics – often imperceptibly.

Situating transhumanism, then, within the broader social, cultural, political and economic contexts from which it emerges is vital to understanding how ethical it is.

Competitive environments

Max More and Natasha Vita-More, in their edited volume The Transhumanist Reader, claim that transhumanism has a need “for inclusivity, plurality and continuous questioning of our knowledge”.

Yet these three principles are incompatible with developing transformative technologies within the prevailing system from which they are currently emerging: advanced capitalism.

Perpetual doper or evolutionarily defunct?
Shutterstock

One problem is that a highly competitive social environment doesn’t lend itself to diverse ways of being. Instead it demands increasingly efficient behaviour. Take students, for example. If some have access to pills that allow them to achieve better results, can other students afford not to follow? This is already a quandary. Increasing numbers of students reportedly pop performance-enhancing pills. And if pills become more powerful, or if the enhancements involve genetic engineering or intrusive nanotechnology that offer even stronger competitive advantages, what then? Rejecting an advanced technological orthodoxy could potentially render someone socially and economically moribund (perhaps evolutionarily so), while everyone with access is effectively forced to participate to keep up.

Going beyond everyday limits is suggestive of some kind of liberation. However, here it is an imprisoning compulsion to act a certain way. We literally have to transcend in order to conform (and survive). The more extreme the transcendence, the more profound the decision to conform and the imperative to do so.

The systemic forces cajoling the individual into being “upgraded” to remain competitive also play out on a geopolitical level. One area where technology R&D has the greatest transhumanist potential is defence. DARPA (the US Department of Defense agency responsible for developing emerging technologies for the military), which is attempting to create “metabolically dominant soldiers”, is a clear example of how the vested interests of a particular social system could determine the development of radically powerful transformative technologies that have destructive rather than Utopian applications.

Designing super-soldiers.
Shutterstock

The rush to develop super-intelligent AI by globally competitive and mutually distrustful nation states could also become an arms race. In Radical Evolution, Joel Garreau describes a scenario, drawing on novelist Vernor Vinge, in which superhuman intelligence is the “ultimate weapon”. Ideally, mankind would proceed with the utmost care in developing such a powerful and transformative innovation.

There is quite rightly a huge amount of trepidation around the creation of super-intelligence and the emergence of “the singularity” – the idea that once AI reaches a certain level it will rapidly redesign itself, leading to an explosion of intelligence that will quickly surpass that of humans (futurist Ray Kurzweil predicts human-level AI by 2029 and the singularity itself by 2045). If the world takes the shape of whatever the most powerful AI is programmed (or reprograms itself) to desire, it even opens the possibility of evolution taking a turn for the entirely banal – could an AI destroy humankind out of a desire to produce as many paperclips as possible, for example?

It’s also difficult to conceive of any aspect of humanity that could not be “improved” by being made more efficient at satisfying the demands of a competitive system. It is the system, then, that determines humanity’s evolution – without taking any view on what humans are or what they should be. One of the ways in which advanced capitalism proves extremely dynamic is in its ideology of moral and metaphysical neutrality. As philosopher Michael Sandel says: markets don’t wag fingers. In advanced capitalism, maximising one’s spending power maximises one’s ability to flourish – hence shopping could be said to be a primary moral imperative of the individual.

Philosopher Bob Doede rightly suggests it is this banal logic of the market that will dominate:

If biotech has rendered human nature entirely revisable, then it has no grain to direct or constrain our designs on it. And so whose designs will our successor post-human artefacts likely bear? I have little doubt that in our vastly consumerist, media-saturated capitalist economy, market forces will have their way. So – the commercial imperative would be the true architect of the future human.

System-led evolution.
Shutterstock

Whether the evolutionary process is determined by a super-intelligent AI or advanced capitalism, we may be compelled to conform to a perpetual transcendence that only makes us more efficient at activities demanded by the most powerful system. The end point is predictably an entirely nonhuman – though very efficient – technological entity derived from humanity that doesn’t necessarily serve a purpose that a modern-day human would value in any way. The ability to serve the system effectively will be the driving force. This is also true of natural evolution – technology is not a simple tool that allows us to engineer ourselves out of this conundrum. But transhumanism could amplify the speed and least desirable aspects of the process.

Information authoritarianism

For bioethicist Julian Savulescu, the main reason humans must be enhanced is for our species to survive. He says we face a Bermuda Triangle of extinction: radical technological power, liberal democracy and our moral nature. As a transhumanist, Savulescu extols technological progress, also deeming it inevitable and unstoppable. It is liberal democracy – and particularly our moral nature – that should alter.

The failings of humankind to deal with global problems are increasingly obvious. But Savulescu neglects to situate our moral failings within their wider cultural, political and economic context, instead believing that solutions lie within our biological make-up.

Yet how would Savulescu’s morality-enhancing technologies be disseminated, prescribed and potentially enforced to address the moral failings they seek to “cure”? This would likely reside in the power structures that may well bear much of the responsibility for these failings in the first place. He’s also quickly drawn into revealing how relative and contestable the concept of “morality” is:

We will need to relax our commitment to maximum protection of privacy. We’re seeing an increase in the surveillance of individuals and that will be necessary if we are to avert the threats that those with antisocial personality disorder, fanaticism, represent through their access to radically enhanced technology.

Such surveillance allows corporations and governments to access and make use of extremely valuable information. In Who Owns the Future, internet pioneer Jaron Lanier explains:

Troves of dossiers on the private lives and inner beings of ordinary people, collected over digital networks, are packaged into a new private form of elite money … It is a new kind of security the rich trade in, and the value is naturally driven up. It becomes a giant-scale levee inaccessible to ordinary people.

Crucially, this levee is also invisible to most people. Its impacts extend beyond skewing the economic system towards elites to significantly altering the very conception of liberty, because the authority of power is both radically more effective and dispersed.

Foucault’s notion that we live in a panoptic society – one in which the sense of being perpetually watched instils discipline – is now stretched to the point where today’s incessant machinery has been called a “superpanopticon”. The knowledge and information that transhumanist technologies will tend to create could strengthen existing power structures that cement the inherent logic of the system in which the knowledge arises.

This is in part evident in the tendency of algorithms toward race and gender bias, which reflects our already existing social failings. Information technology tends to interpret the world in defined ways: it privileges information that is easily measurable, such as GDP, at the expense of unquantifiable information such as human happiness or well-being. As invasive technologies provide ever more granular data about us, this data may in a very real sense come to define the world – and intangible information may not maintain its rightful place in human affairs.

Systemic dehumanisation

Existing inequities will surely be magnified with the introduction of highly effective psycho-pharmaceuticals, genetic modification, super-intelligence, brain-computer interfaces, nanotechnology, robotic prosthetics, and the possible development of life extension. They are all fundamentally inegalitarian, based on a notion of limitlessness rather than the standard level of physical and mental well-being we’ve come to assume in healthcare. It’s not easy to conceive of a way in which these potentialities could be enjoyed by all.

Will they come along for the ride?
Shutterstock

Sociologist Saskia Sassen talks of the “new logics of expulsion” that capture “the pathologies of today’s global capitalism”. The expelled include the more than 60,000 migrants who have lost their lives on fatal journeys in the past 20 years, and the victims of the racially skewed profile of the increasing prison population.

Grenfell Tower, London, 2017.
EPA/Will Oliver

In Britain, they include the 30,000 people whose deaths in 2015 were linked to health and social care cuts and the many who perished in the Grenfell Tower fire. Their deaths can be said to have resulted from systematic marginalisation.

These expulsions occur alongside an unprecedented, acute concentration of wealth. Advanced economic and technical achievements enable both this wealth and the expulsion of surplus groups. At the same time, Sassen writes, they create a kind of nebulous centrelessness as the locus of power:

The oppressed have often risen against their masters. But today the oppressed have mostly been expelled and survive a great distance from their oppressors … The “oppressor” is increasingly a complex system that combines persons, networks, and machines with no obvious centre.

Surplus populations removed from the productive aspects of the social world may rapidly increase in the near future as improvements in AI and robotics potentially result in significant automation unemployment. Large swaths of society may become productively and economically redundant. For historian Yuval Noah Harari “the most important question in 21st-century economics may well be: what should we do with all the superfluous people?”

We would be left with a scenario in which a small elite holds an almost total concentration of wealth and access to the most powerfully transformative technologies in world history, while a redundant mass of people, no longer suited to the evolutionary environment in which they find themselves, depends entirely on the benevolence of that elite. The dehumanising treatment of today’s expelled groups shows that prevailing liberal values in developed countries don’t always extend to those who don’t share the same privilege, race, culture or religion.

In an era of radical technological power, the masses may even represent a significant security threat to the elite, which could be used to justify aggressive and authoritarian actions (perhaps enabled further by a culture of surveillance).

Life in the Hunger Games.
© Lionsgate

In their transhumanist tract, The Proactionary Imperative, Steve Fuller and Veronika Lipinska argue that we are obliged to pursue techno-scientific progress relentlessly, until we achieve our god-like destiny or infinite power – effectively to serve God by becoming God. They unabashedly reveal the incipient violence and destruction such Promethean aims would require: “replacing the natural with the artificial is so key to proactionary strategy … at least as a serious possibility if not a likelihood [it will lead to] the long-term environmental degradation of the Earth.”

The extent of suffering they would be willing to gamble in their cosmic casino is only fully evident when analysing what their project would mean for individual human beings:

A proactionary world would not merely tolerate risk-taking but outright encourage it, as people are provided with legal incentives to speculate with their bio-economic assets. Living riskily would amount to an entrepreneurship of the self … [proactionaries] seek large long-term benefits for survivors of a revolutionary regime that would permit many harms along the way.

Progress on overdrive will require sacrifices.

God-like elites.
Shutterstock

The economic fragility that humans may soon be faced with as a result of automation unemployment would likely prove extremely useful to proactionary goals. In a society where vast swaths of people are reliant on handouts for survival, market forces would determine that less social security means people will risk more for a lower reward, so “proactionaries would reinvent the welfare state as a vehicle for fostering securitised risk taking” while “the proactionary state would operate like a venture capitalist writ large”.

At the heart of this is the removal of basic rights for “Humanity 1.0”, Fuller’s term for modern, non-augmented human beings, replaced with duties towards the future augmented Humanity 2.0. Hence the very code of our being can and perhaps must be monetised: “personal autonomy should be seen as a politically licensed franchise whereby individuals understand their bodies as akin to plots of land in what might be called the ‘genetic commons’”.

The neoliberal preoccupation with privatisation would thus extend to human beings. Indeed, the lifetime of debt that is already the reality for most citizens in advanced capitalist nations takes a further step when you are born into debt – simply by being alive, “you are invested with capital on which a return is expected”.

Socially moribund masses may thus be forced to serve the technoscientific super-project of Humanity 2.0, which uses the ideology of market fundamentalism in its quest for perpetual progress and maximum productivity. The only significant difference is that the stated aim of godlike capabilities in Humanity 2.0 is overt, as opposed to the undefined end determined by the infinite “progress” of an ever more efficient market logic that we have now.

A new politics

Some transhumanists are beginning to understand that the most serious limitations to what humans can achieve are social and cultural – not technical. However, all too often their reframing of politics falls into the same trap as their techno-centric worldview. They commonly argue that the new political poles are not left and right but techno-conservative and techno-progressive (or even techno-libertarian and techno-sceptic). Meanwhile, Fuller and Lipinska argue that the new political poles will be up and down rather than left and right: those who want to dominate the skies and become all-powerful, and those who want to preserve the Earth and its species-rich diversity. It is a false dichotomy: preserving the latter is likely to be necessary for any hope of achieving the former.

Transhumanism and advanced capitalism are two processes that value “progress” and “efficiency” above everything else – the former as a means to power, the latter as a means to profit. Humans become vessels to serve these values. Transhuman possibilities urgently call for a politics with more clearly delineated and explicit humane values, to provide a safer environment in which to foster these profound changes. Where we stand on questions of social justice and environmental sustainability has never been more important. Technology doesn’t allow us to escape these questions – it doesn’t permit political neutrality. On the contrary: it means that our politics have never been more important. Savulescu is right when he says radical technologies are coming. He is wrong in thinking they will fix our morality. They will reflect it.

Alexander Thomas, PhD Candidate, University of East London

This article was originally published on The Conversation. Read the original article.

How do we approach the future?

In the science, health and environment section of thehindu.com, an article appeared under the heading “Do we understand the genome well enough to let Big Pharma jump into it?”.

I left the following brief reply.

You make important points.

Markus G. Seidel, who works at the Department of Pediatrics and Adolescent Medicine of the Medical University of Graz in Austria, recently wrote something similar on the BMJ blog, with regard to babies. He asks whether genome screening for newborns will pave the way to genetic discrimination. He too raises questions about the interpretation (and reliability) of such data. He also discusses privacy issues.

http://blogs.bmj.com/bmj/2017/07/05/markus-g-seidel-baby-genome-screening-paving-the-way-to-genetic-discrimination/

But I wanted to write more…

With regard to the privacy issues, I think that humanity will slowly have to accept that the digital age comes with the loss of privacy in many ways. Privacy is a changing concept and there is also a cultural angle to it, so people from different generations and different cultures have slightly different views on what privacy is. We should probably become more relaxed about the loss of privacy as we knew it and focus more on preventing and ameliorating potential negative consequences.

In my opinion, what we need to do is ensure non-discrimination and ensure that genomic information will only be used to improve any individual’s (medical) care. In other words, genomic information must only be used to enable and allow human beings to flourish. All human beings. In a non-materialistic way.

(Note that this is not the same as eradicating everything we may not like. But we seem to have a tendency to want to do that, unfortunately, and we need to curb that urge. We need a great deal of diversity to function well as a species and as a society, for many reasons. Good and bad cannot exist without each other – as cheesy as it may sound. There simply is too much we don’t know yet, and we therefore cannot foresee all possible consequences of everything we do. Eradicating everything that seems bad to us may be bad too.)

That will require two things: good legislation and regulations, and a global consensus on these issues.

The latter in particular is a major challenge. That is why we need to discuss these topics broadly and entice people to move out of their mental comfort zones, allowing them to explore other people’s views without instantly rejecting them. Our own views aren’t the only valid or even valuable views, but they tend to feel that way to us.

Legislation, however, has a problem of its own: it currently tends to lag far behind what is technologically possible. It does not anticipate (much), but responds only after what is happening in practice forces it to respond. Also, legal scholars still tend to contemplate situations and consequences only with regard to their own jurisdictions.

So it looks like there is a great need for discussions pervaded by a spirit of tolerance (the willingness to step out of one’s mental comfort zone and listen to people from other cultures and generations) and a forward-thinking attitude.

By “forward-thinking”, I don’t mean “blindly embracing everything science and technology have to offer”, because in the past we have often forgotten to ask many questions we should have asked. That, for example, appears to have happened when we embraced pesticides. They seemed such a good thing, initially, that we never considered their potential for harm.

Do you agree or do you see it differently? Do you think we also need to change big pharma, and if so, in which ways, and how could we approach that?

PS
I write from my own perspective of an opinionated white woman in the west without ties to big pharma.

Abortion

Writing the first edition of my essay “We need to talk about this” – the second edition is in the works – forced me to think in depth about issues I had never thought about before, and I had to leave many of them untouched at the time.

For example, I am a feminist and I have always believed in a woman’s right to abortion. While I was considering how we could regulate the new eugenics, I ran into boundaries. One of them was having to think about how abortion fits into the topic. That was a significant hurdle.

I was no longer able to say “of course women should be able to have abortions” – which I had always done in the past – but had to think about why and when they should, regardless of my own personal feelings, because what I was writing about selecting pre-embryos and fetuses clashed with the general ideas that I had always entertained about abortion but had never examined in detail.

Legislation and protocols can sound very cold to people, but it’s not enough to just state something like “we think this is very very good” or “we think this is very bad”. That wouldn’t work in practice. If you want to make sure legislation is solid and leaves little room for abuse (deliberate misinterpretation), you end up with language that can come across as heartless. But that does not mean that the legislation (or protocol) is heartless or that the people who wrote it are!

As I have seen in various online comments (on the Groningen Protocol, for example), it can be difficult to get that across. It works the same way for traffic rules or rules for building skyscrapers. The law can’t just say something vague like “drivers should be careful” or “buildings should be safe” and leave it at that.

When Obamacare was introduced, a staunch Republican (and stauncher Libertarian) wrote to me that it was ridiculous that its legislation was taking up more than 2,000 pages or something like that. (Who would ever read that?)

I replied to him that I know a jurist who works in precisely that area in the Netherlands and explained what that kind of legislation has to include. Fortunately, he listened to that explanation.

Unfortunately, I have found that even people who see themselves as the voice of reason (and sometimes as having absolute wisdom, too) aren’t always willing to listen to what someone “from the other side” is saying.