Self-driving cars and the imposition of a world-wide moral code  
 
 
 


On 24 October 2018, the journal Nature published the results of a massive survey by MIT researchers into people’s moral priorities.  Some forty million decisions were gathered from participants around the world.  In this instance, the participants gave their views as to the choices to be made in the event of brake failure by a self-driving car.  The responses to the research have prompted a somewhat curious academic reaction to the whole concept of people deciding their own moral codes.  This article also underlines the dystopia we would necessarily be part of if we actually tried to apply such a moral code in practice.


As we know, although religions require absolute obedience to the rules of their morality, for the non-religious one might imagine that a different approach could be taken. It seems that this is not necessarily so. The issues involved in this are nicely illustrated by the sort of decision it seems we shall have to make in the future. It is all about a revised version of the well-known trolley problem. This time, we are asked how we should program a self-driving car to react in the event of a brake failure.

The example given was of an autonomous car travelling along a road when its brakes fail. Should it carry straight on and hit a pregnant woman, a doctor and a criminal on a pedestrian crossing, or crash into a barrier, so avoiding the people on the crossing, but instead killing all the occupants of the self-driving car, a family of four?

This was the kind of scenario included in the ‘Moral Machine’ experiment, an internet survey of millions of people in 233 countries and territories worldwide, the results of which were published on 24th October in the respected journal Nature.

Participants were asked to consider different scenarios in which those saved by the car’s decision might be, for example, obese or fit, young or old, pets or criminals or those with important jobs. In all, 40 million decisions in 10 languages were collected. So, an impressive gathering of data.

As a subsequent article in the New Scientist explained, generally speaking, people preferred to save humans rather than animals, and young people rather than the elderly. Least likely to be saved were cats, followed by criminals and then dogs. The results also demonstrated variations between different areas of the world, with a less marked preference in the East for saving young people rather than the elderly. Decisions to save humans in preference to dogs or cats were less common than the average in Central and South America and in countries with a French influence, but in these areas women and fit people were more likely than others to be spared.

The researchers involved clearly thought that these results could form the basis for the decisions necessary to regulate the transport of the future. But it seems that it's not that simple. Many researchers and ethicists apparently told the journalist writing the article that the results should not in fact be used to create policy or to regulate the design of autonomous vehicles. "That would perpetuate cultural biases that might not reflect moral decisions. The fact that there are different cultural patterns should not surprise us, but it has nothing to do with whether something is right or wrong", Professor Peter Steeves, an ethicist from DePaul University in Chicago, is quoted as saying.  He added: "The instinct to save the lives of women or children, for example, is rooted in the patriarchal view that these groups have less autonomy and are therefore more worthy of being saved."

Really?  Could it not equally be that evolutionary pressure is the cause? After all, in days of yore, to lose a man or two did not matter very much in the great scheme of things, as long as the next tribe had a similar male attrition rate. But to lose a woman, the one who actually bears multiple children, or to lose a child, part of the next generation, was an obvious impediment to the flourishing of that tribe.

But what is really quite extraordinary about the Professor's opinion is that it dismisses the views of the masses as to the validity of their own moral choices!  Such dismissal of people’s understanding of what is moral seems to confirm my impression that, despite the diminution in the number of believers in the traditional religions in the West, there is still perceived to be a need for a set of rules that are somehow ‘right’ in an absolute sense. So then, it is not only religious people who see morality as an absolute coming from above. Apparently, supposedly liberal thinkers have a similar need.

It is unsurprising that very many non-religious people still think that there are some fundamental, universal principles which ought to form a part of our lives. It has been part of received wisdom for as long as we have had recorded history that what is moral is not ours to choose.  That there is a similar view amongst those most involved with ethics as academics is, however, quite surprising.  I have the impression that there is a desire to find a ‘natural law’ of morality, a bit like the natural laws of science, fixed and invariable. How to find it is perhaps not clear, but it seems that we must have, above all, world-wide uniformity – a uniformity that complies with the norms of political correctness and avoids the influence of patriarchy. That our norms have changed constantly with time and place, however, indicates that there is no reason to believe in a universal or fixed morality – the natural law apparently being sought.

Surely it is not difficult to see instead that our morality is a tacit agreement, between the members of different societies throughout the world and of sub-sets of those societies, as to what behaviour is acceptable at any given time and in any given circumstances? And I include the acceptability of all forms of behaviour, whether spitting in public or honour killings. Our morality over the years has been a changing set of rules, resulting from the circumstances in which we have found ourselves, rules enforced by social pressure and by our laws.  As we in the West have progressed towards a better standard of living and have discovered more about how we and the world around us actually work, our moral codes have changed. They have changed in ways which would have shocked our forebears, although I suspect that our descendants will, in the light of their knowledge, think that our way of looking at things is actually quite primitive.

On the other hand, the very fact that there are indeed similarities in the way we behave as societies at any given time indicates the probability of an evolutionary benefit from our behaviour, just as with the behaviour of non-human species.  But, as we have seen, acceptable behaviour varies according to the place in the world in which it takes place. So whilst we may not approve of how people in other places act, we have to accept that it is possible, at least, that the evolutionary benefit of the same action is not the same everywhere in the world. It can depend on the context, whether of place or of time.

So, in the case of self-driving cars, why not accept the democratic verdict of the people in each country of the world regarding the morality of programming the self-driving cars of that country? Why does an academic (male) ethicist in Chicago, for example, have the right to impose his morality on the rest of the world? Is that not itself an example of patriarchy?

Now, having said all of this, it is quite clear that the garnering of people’s opinions in the Moral Machine experiment was not exactly open to all. It was confined to those with access to the internet who happened both to spot the survey and to decide to respond to it. So then, it is not really very good guidance as to the morality of a particular area. It is, rather, a view of what the better-off, more computer-savvy people of the areas with internet access think to be ‘right’.


Neither is it obvious to me that we need to, or could realistically, program cars to try to make such decisions. Even if we wanted our autonomous vehicles to follow our local moral codes as revealed by the survey, how could a car possibly work out exactly who was on the pedestrian crossing in order to calculate the relative merits of killing me and my family or of ploughing into the people crossing the road? Surely it would have to know all about not only me, but also the pedestrians in the line of fire and the occupants of all the other cars with which it might choose to collide.

This would require us all to be instantly identifiable, whether as pregnant women, doctors, criminals, young or old. We would have to carry transponders or have chips inserted into us to reveal who we were, frequently updated to indicate to the car's algorithm what we were “worth” in moral terms.  A society with no privacy, and all of our data held by our friend Mr Google?

No, to contemplate the sort of dystopia which would even begin to allow a self-driving car to make moral decisions on our behalf is not something I would wish to be a part of. Better perhaps to concentrate on having more reliable brakes – secondary systems which cut in when things go wrong to provide a fail-safe back-up. Now that’s something which doesn’t require a major moral debate or a complete loss of privacy.

So then, a thank you to the researchers behind the Moral Machine for enlightening us as to the attitudes of those who took part in their mega-survey, but I don’t think that we have any obvious application for the results of their research as yet.


Paul Buckingham


4th November 2018


 
 