On Monday, in the US town of Tempe, near Phoenix, Arizona, a female pedestrian was knocked down by a self-driving Uber. “The vehicle was operating in autonomous mode when the collision happened, with an operator behind the wheel.” The woman, who “was crossing the road without using a pedestrian crossing”, was taken to hospital, where she died.
While “those in favour of self-driving vehicles believe that this technology can reduce the number of accidents, precisely because the machines would be more reliable than a human being,” the accident raises many questions. It is not the first case “involving a vehicle with this type of function”.
In 2016, an American man in his forties died at the wheel of a Tesla Model S saloon. The car was equipped with autonomous-mode technology, which allowed a certain number of manoeuvres to be carried out without involving the driver. At the time, the US National Transportation Safety Board (NTSB) concluded that the driver’s “excessive dependence” on the Autopilot had “led to prolonged disengagement” from the driving task, resulting in the collision.
For Jothi Periasamy, Data Manager at Experfy and Artificial Intelligence lecturer at the Massachusetts Institute of Technology (MIT), “artificial intelligence is still in its infancy in the area of self-driving cars and other contexts”. He acknowledged that “situations may arise where the model does not understand the context. I would not condemn the system simply because an accident has occurred. We have to establish the circumstances of the accident and discover its cause in order to improve the model”.
Uber, one of many firms involved in the international race to develop a self-driving vehicle, has halted its autonomous-car tests.
For further reading:
Le Devoir, Karl Rettino-Parazelli (20/03/2018); Le Devoir, Glenn Chapman and Julie Charpentrat (20/03/2018)