Let’s stop fearing, and inventing, imaginary scenarios of a deadly conflict between Artificial Intelligence and Human Intelligence.
Artificial intelligence (AI) is making massive advances. AI has radically changed the realm of justice with the emergence of “predictive justice”. Surgical robots are currently working wonders, which could lead to major breakthroughs in medical technology. Is AI on the verge of surpassing human, or biological, intelligence?
This is the thesis defended, in 2017, by Dr Laurent Alexandre in his book The War of Intelligences, whose success has just earned it a new paperback edition. Reading it gives us an opportunity to reflect, through the question of intelligence, on the specificity and future of humanity. Is humanity at a decisive turning point that could see it disappear?
A deadly face-to-face?
For Laurent Alexandre, the “NBIC revolution” (nanotechnologies, biotechnologies, computer science and cognitive sciences) is reflected in the emergence of an AI which, opening up “extraordinary perspectives”, is already in a position to surpass biological intelligence, even to enslave it. The machine could triumph over the human being.
But why imagine and fear a conflict, and a deadly one at that? Are AI and BI really competitors with opposing interests? AI could only wish to go to war against BI if two conditions were met: that it prove more efficient than BI, and that it be aware of its superiority. In other words: that it be endowed with a power of decision, which would be the mark of real autonomy, and with real consciousness.
The first condition appears to be met. “The AI tsunami” is already moving “too fast and too high”, Laurent Alexandre considers. “Every day the neuron loses more ground to the transistor.” In the race for productivity and efficiency, “the speed and infallibility of execution of intelligent machines” are such as to make human labor “absolutely uncompetitive”. The fight here is unequal. AI “gallops” ahead, while BI stumbles along, awaiting a hypothetical genetic mutation.
A very hypothetical “singularity”
But what about the second condition? Alexandre writes that “AI could become superior to humanity”, for which it takes on “the appearance of twilight”. What constitutes humanity’s present superiority (though is it in great danger?) is the existence of a conscious will. Even if its choices are not always good, the human being is able to make choices, to set goals for itself, and to question the value of those goals.
Can AI be endowed (or worse: endow itself?) with such a capacity? Can it become capable of reflection, of ethics, of politics? This is the whole question of the “singularity”, “the moment when the intelligence of machines will exceed that of men”. Is such a moment inevitable? Is it anything more than a figment of the imagination? Can we really entertain the hypothesis of a “strong AI”, one that “would have the capacity to hide its own goals”, and therefore would already have some?
If biological intelligence is limited and fallible, it is at least accompanied by self-awareness and a capacity for critical thinking, which we could only attribute to AI by a magic trick, like the one that turns the puppet Pinocchio into a real boy.
The belief in a possible “shift into a world where robots are as intelligent as humans” testifies to a reductive conception of intelligence, limited to “computing power”; it also manifests a childish credulity before a fable worthy of Carlo Collodi. How can we seriously believe that BI could be the Geppetto giving birth to an AI endowed with “free will” and “artificial consciousness”?
The real challenge
However, the “galloping” development of AI does raise questions. Without going so far as to think that, too weak in the face of machines, we are going to become their slaves, or even be exterminated by them, we must recognize that the prospect of growing automation of increasingly complex human tasks, and of invasive management of high-level cognitive activities by algorithms, leads us toward a triple crisis: social, ethical and existential.
If even doctors are threatened with disappearance, what remains of human work? If the integrity of our brains is in danger, how can we protect our freedom to think? If Artificial Intelligence challenges us in who we are as human beings, how do we preserve our humanity?
But this, precisely, is undoubtedly the real issue. Artificial Intelligence challenges us to prove our humanity! What is the essential thing that we should preserve with all our strength? Through the question of intelligence, it is indeed the question of human specificity that is raised. “What do we want as human beings? Do we have a specific trait to promote?”
Finally, where is the enemy?
Laurent Alexandre suggests “sanctifying a few red lines that are the basis of our humanity”. He identifies three pillars: the physical body, the individual mind, and chance. This allows him to conclude, optimistically: “No, biological intelligence will not die because of Artificial Intelligence.”
To this, we would gladly add a condition: that it know who its real enemy is, and where that enemy is hiding. For the fear of the misdeeds of a “strong AI” is the fruit of an externalization of our terrors. We fear an external enemy, whereas, as Paul Valéry warned us, the real enemy is within us.
The greatest enemy of human intelligence, and therefore, ultimately, of humanity, lurks within the human being. It has a name: stupidity. Or, to put it even more bluntly, with all due respect to the readership: bullshit…