Happily for those seeking a universal rule book for AVs, the researchers discovered that we all share a “big three” of basic rules: we choose to prioritise human lives over animal lives; we would save more lives over fewer; and we would save children before others.
But then things get more complicated. In Asia and the East, where age is honoured more than in the West, the instinct to save the young over the old was much less clear cut. In South America and the Francophone world, women were favoured, proof perhaps that chivalry is not dead. And in countries with high economic inequality, the homeless were sacrificed far more readily than elsewhere.
Many in the AV world complain about the Trolley Problem and its ethical implications for their future products. They say that such situations never arise. But it is clear from reports of accidents noting that the driver “swerved to avoid a pedestrian” that they do. Moreover, the problem bleeds into other issues AVs must face. If a driverless vehicle were navigating a narrow road, for example, with a group of children on one side and an adult on the other, should it allow more space for the children, or position itself perfectly in the middle?
Human drivers make ethical judgments in such cases all the time. But they tend to be fuzzy – and computers don’t do fuzzy. “There are so many moral decisions that we usually make during the day we don’t realise,” said Edmond Awad, one of the researchers involved in the Moral Machine. “In driverless cars, these decisions will have to be implemented ahead of time.”
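Awad’s point, that fuzzy human judgments must be made explicit before any emergency occurs, can be illustrated with a deliberately simplified sketch. Nothing here reflects any real AV system: the function names, the scoring weights, and the priority ordering are all invented for illustration, loosely echoing the “big three” preferences reported above.

```python
# Toy illustration only: a hard-coded priority rule of the kind that
# would have to be specified *ahead of time*. The weights below are
# invented for this example, not drawn from any real system.

def harm_score(group):
    """Sum a penalty for each person or animal that would be harmed.

    'group' is a list of dicts describing who is in the path.
    Lower total score means the outcome the rule prefers.
    """
    score = 0
    for being in group:
        if being["species"] != "human":
            score += 1      # animals weighted lowest
        elif being.get("child"):
            score += 100    # children weighted highest
        else:
            score += 10     # other humans in between
    return score

def choose_path(option_a, option_b):
    """Pick the option whose harm score is lower; ties default to A."""
    return "A" if harm_score(option_a) <= harm_score(option_b) else "B"

# Option A harms one adult; option B harms two children.
a = [{"species": "human", "child": False}]
b = [{"species": "human", "child": True},
     {"species": "human", "child": True}]
print(choose_path(a, b))  # → "A": the rule spares the children
```

The point of the sketch is not the particular weights, which are arbitrary, but that a machine forces every such trade-off to be written down explicitly, in advance, where a human driver would decide it fuzzily in the moment.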
It is one of the great challenges holding back fully autonomous vehicles (rather than the cars, such as Teslas, currently available with various driver assistance modes). For years, we have been told that such AVs are just around the corner, ready to take us wherever we want. They never are. In some confined, well-regimented places like ports, or even the suburbs of Phoenix, Arizona, driverless vehicles already shuttle about. “The problem is,” says Jack Stilgoe, professor of science policy at UCL, “most of the world does not look like Phoenix.” It’s much more complicated. And the learning-by-doing methods that often inform AI are simply impossible on the roads.
It was trial and error, for example, that helped a Google computer learn, and then master, the complexities of first chess and then the board game Go. Other researchers have even let a tiny aircraft crash almost 12,000 times as its computer learnt to fly it. But the skies are comparatively empty. And letting computers learn by getting it wrong as children jump out from behind parked cars is unthinkable.
The result is that forecasts have grown more pessimistic. Chris Urmson, former head of Google’s AV team, thinks such vehicles will only be phased in over the next 50 years.