[Column] Welcome to the desert of post-humanity!

Posted on : 2023-04-17 17:30 KST Modified on : 2023-04-17 17:30 KST
By Slavoj Žižek, Global Eminent Scholar at Kyung Hee University

A letter published on March 29, 2023, by the Future of Life Institute demands that any artificial intelligence lab working on systems more powerful than GPT-4 should “immediately pause” work for at least six months so that humanity can take stock of the risks such advanced systems pose. Labs are “locked in an out-of-control race” to develop and deploy increasingly powerful systems that no one — including their creators — can understand, predict or control. The letter has already been signed by thousands, among them big corporate names like Elon Musk.

What should we make of this new outburst of panic? It is about control and regulation — but whose? During the proposed six-month pause, humanity can supposedly “take stock of the risks.” What is really at stake here?

In his Homo Deus, Yuval Harari predicted that the most realistic option in the development of AI is a radical division, much stronger than the class division, within human society itself. In the near future, biotechnology and computer algorithms will join their powers in producing bodies, brains and minds, with the gap exploding between those who know how to engineer bodies and brains and those who do not.

“Those who ride the train of progress will acquire divine abilities of creation and destruction,” Harari writes, “while those left behind will face extinction.” The panic the letter expresses is sustained by the fear that even those who “ride the train of progress” will no longer control development — in short, it expresses the fear of our new digital feudal masters.

Obviously, then, what the Future of Life Institute’s letter aims at is far from a broad public debate. It is an agreement between governments and corporations. The threat of expanded AI is very serious, not least for those in power and for those who develop, own, and control AI. On the horizon is nothing less than the end of capitalism as we know it: the prospect of a self-reproducing AI system that will need human agents less and less.

Many lonely (and not-so-lonely) individuals spend their evenings chatting extensively with a chatbot, exchanging friendly messages about new movies and books, debating political and ideological questions, and more. No wonder they find such an exchange relaxing and satisfying. What they get is the AI version of decaffeinated coffee or a sugar-free soft drink — a neighbor without its opaque monstrosity, an Other that simply accommodates itself to my needs.

There is a structure of fetishist disavowal at work here: “I know very well that I am not talking to a real person, but nonetheless it feels as if I am doing it, without any risks involved in talking to a real person!”

Upon a close reading, we can easily see that the attempts to “take stock” of the threats of AI tend to repeat the old paradox of prohibiting the impossible: “A truly post-human AI is impossible, that’s why we should prohibit its development.” To orient ourselves in this mess, we should urgently raise here Lenin’s old question: Freedom for whom, to do what? In what sense were we free till now? Were we not already controlled much more than we were aware of? Instead of just complaining about the threat to our freedom and dignity, we should thus also consider what freedom means, how it will have to change.

Ray Kurzweil predicts that, due to the exponential growth of the capacity of digital machines, we will soon be dealing with machines which will not only display all the signs of self-awareness but also surpass human intelligence. We should not confuse this “posthuman” stance with the paradigmatically modern belief in the possibility of total technological domination over nature.

What we are witnessing today is an exemplary dialectical reversal: The slogan of today’s “posthuman” sciences is no longer domination but surprising (contingent, unplanned) emergence. Jean-Pierre Dupuy detected a weird reversal of the traditional Cartesian anthropocentric arrogance which grounded human technology, a reversal clearly discernible in today’s robotics, genetics, nanotechnology, artificial life and AI research:

“How are we to explain that science became such a ‘risky’ activity that, according to some top scientists, it poses today the principal threat to the survival of humanity? Some philosophers reply to this question by saying that Descartes’s dream — ‘to become master and possessor of nature’ — has gone wrong, and that we should urgently return to the ‘mastery of mastery.’ They have understood nothing. They don’t see that the technology taking shape on our horizon through the ‘convergence’ of all disciplines aims precisely at nonmastery. The engineer of tomorrow will not be a sorcerer’s apprentice out of negligence or ignorance, but by choice. He will ‘give’ himself complex structures or organizations, and he will try to learn what they are capable of by exploring their functional properties — an ascending, bottom-up approach. He will be an explorer and experimenter at least as much as an executor. The measure of his success will be the extent to which his own creations surprise him rather than the conformity of his realization to a list of preestablished tasks.”

While the outcome cannot be clearly predicted, one thing is clear: If something resembling “post-humanity” effectively emerges as a massive fact, then all three (overlapping) moments of our spontaneous world-view (humans, god, nature) will disappear. Our being-human can only exist against the background of impenetrable nature, and if, through bio-genetic science and practices, life becomes something that can be technologically fully manipulated, human and natural life lose their “natural” character. What humans experience as “god” is something that has meaning only from the standpoint of human finitude and mortality.

The tech-gnostic visions of a posthuman world are ideological fantasies that obfuscate the abyss of what awaits us.

Please direct questions or comments to [english@hani.co.kr]
