Robots in the age of transhumanism

Published on 1 March 2017

At a time when Europe is questioning the status to be given to robots, Emmanuel Brochier, professor of philosophy at the IPC in Paris, answers Gènéthique’s questions on the relationship between men and robots.

 

Gènéthique: Paris hosted a robot orchestra conductor which did the job perfectly… Do you think humans will be asked to start acting like robots? Isn’t there a risk that men will no longer tolerate the imperfect nature of human beings?

Emmanuel Brochier: Yes, on May 17th, 2008, the Urban orchestra performed Sibelius under a robot conductor at the Cité des sciences in Paris, a world first, back when the debate on transhumanism had not yet begun in Europe… The Japanese, fascinated for years by humanoid robots, had already produced trumpet- and violin-playing robots (see Toyota’s Partner range). But in the case of Robot Orchestra, it was a simple industrial arm of the kind until then used on assembly lines, not a humanoid robot, as would later be the case in 2009 with Honda’s ASIMO or in 2014 with Pepper, the Franco-Japanese robot. Using motion sensors, Pascal Gautier, Orchestra’s maker, tried to mimic the greatest orchestra conductors by reproducing the way their hips, shoulders, elbows and wrists moved. It had no artificial intelligence and was incapable of “reading” a score or correcting a musician’s mistake; it was only capable of conducting an orchestra, or even several at once, like a true maestro. As far as I know, since then we have simply made its appearance a little more human. So although the robot behaves like a conductor, it is far from being his equal.

But let us suppose that one day progress in artificial intelligence enables men to make robots as good as the great orchestra conductors. We would probably face what already happened in assembly plants after industrial arms were invented: robots ended up replacing the workers. I do not think one can expect a man to act like a robot, because human work, what human beings need, consists less in performance than in know-how and soft skills. Basically, what is expected is a form of wisdom in its original, philosophical meaning. Performance, as we know all too well, leads to burnout. As soon as a robot starts doing better than men, it replaces them. We are not only dealing with a decrease in cost or a gain in competitiveness; human work also obeys a different logic. A gigantic replacement has been announced, and it seems urgent that we anticipate it. In January 2016, the organizers of the World Economic Forum forecast a loss of 7.1 million jobs by 2020 in developed countries as a consequence of what is already being called the 4th industrial revolution. This loss cannot be offset by job creation, estimated at 2.1 million! And the report specifies that the tertiary sector will also be hit. Moreover, a UN report (published in October 2016) forecasts the replacement of two thirds of jobs in developing countries. There is therefore a high risk of leaving entire populations unemployed.

The challenge will be getting people to understand that robots can never replace human work, because its existence has a reason other than mere economic growth. To be convinced of this, one must reflect on its deeper meaning. Robots may one day be better than our orchestra conductors… but they will never acquire their culture. That day, the mistakes voluntarily made by our flesh-and-blood maestros will be so many signs of this forever unique culture. Even now, these mistakes can appear as a sign of hope, a desire to return to the meaning of human work. Human work will always be possible alongside robotic performance.

 

G: Should robots be given specific rights (for example, by granting them legal personality), or should existing laws simply be adapted?

EB: The European Parliament will vote on this next February 16th. The debate has been prepared by Mady Delvaux [1], who recommends in §31f the creation of a legal personality specific to robots, on the grounds that the most sophisticated autonomous robots (cars, drones, surgical robots, etc.) are capable of making “intelligent autonomous decisions” and of interacting “independently with third parties”. According to the report adopted last January 12th by the Legal Affairs Committee, these robots should be regarded as “electronic persons”.

There is no doubt that current legislation is insufficient, insofar as the behaviour of new-generation robots is unpredictable not only for users and owners but also for their designers and manufacturers. Autonomous robots can now interact with their environment in a unique way and are endowed with surprising learning abilities. Hence the idea, which in itself is not a bad one, of creating a specific insurance scheme. That is not the problem; the problem lies in the intention driving the report, which is clearly transhumanist. In §I of the introduction, Mrs Delvaux indeed wrote: “At the end of the day, it is possible that in a few decades, artificial intelligence will surpass human intellectual capabilities, which could, if we are not cautious, make it difficult for mankind to control its own creation and, from there on, to be the master of its own destiny and to ensure the survival of the species.” Can we vote on the basis of a philosophical opinion that cannot be imposed? Written from a transhumanist perspective, the report clearly endorses the idea that there is no radical difference between a robot and a man, and suggests, while pretending not to believe it, the possibility that robots could become “conscious of their own existence” [sic!] (ibid., §l), and thus be subjects of law just like men. It is therefore hardly surprising that further on, in §18, the report discusses the “great potential of robotics […] for the improvement of the human body”. I do not know what the Parliament’s decision will be, but if the concept of “electronic personality” implies transhumanism to that extent, its acceptance poses serious problems. As the European Commission suggested in 2006 in its report “Technology Assessment: On Converging Technologies”, we first need to debate the question of transhumanism. A philosophy cannot be imposed in the same way as a scientific theory. Not only is a debate necessary, there must also be room for possible disagreement.

 

G: Serge Tisseron (author of The Day My Robot Will Love Me, Albin Michel, 2015) explains that robots will not detect bad feelings in us: “The robot will always agree with you. It will not detect a feeling such as shame, and will therefore not make you feel ashamed when you have done something wrong.” Do you believe that artificial intelligence will corrupt the notions of good and evil?

EB: Perhaps the notion of good and evil has more to fear from mankind than from artificial intelligence. That notion has already been lost if we believe that a thing, whatever it may be, is good simply because we feel, or could feel, a legitimate desire for it. Of truth, Augustine of Hippo said: “The mind does not make the truth, it finds it.” I believe the same goes for goodness. It precedes us, even when it depends on us; it therefore never depends exclusively on us. We are responsible for it, but we are not its source. We will always have to look for goodness and learn to recognize it. I see many who seek only what is useful and measure goodness only through that lens. When they feel generous, such people look for what will be most advantageous to the greatest number. No doubt artificial intelligence will be even better at that. Indeed, if goodness were always a matter of usefulness, so much the better for utilitarianism. But experience shows that the better is sometimes the enemy of the good. Imagine us never feeling ashamed; we would remain incorrigible, which is worse. People need to be reminded: what makes goodness grow is not usefulness but truth. And first of all, the truth about man. I have noticed that we did not wait for the emergence of artificial intelligence to forget that.

 

G: Laurent Alexandre speaks a lot about “technological determinism”. He believes the “all-technological” to be inevitable. Do you believe that the world he describes will really come to exist? What could prevent such a world from emerging?

EB: The expression “technological determinism” is an oxymoron, because technology falls precisely within our responsibility. For Laurent Alexandre, as for most transhumanists who take the long view, the problem is the survival of humankind. If it is up to men to avoid extinction, then the “all-technological” becomes necessary. But one could turn the reasoning around and say that if the “all-technological” is unacceptable, then the objective of saving the human species is no longer an obligation, because no one is bound to the impossible. I am afraid we may be collectively falling into a trap. And yet Karl Popper warned us by showing, in The Poverty of Historicism (1956), that history is contingent and gives rise to no law, only points of view. Some, I might add, are more pertinent than others; and today it is up to philosophy to assist in this urgent and irreplaceable discernment.

 

[1] Cf. European Parliament: make robots the equals of men?
