The article linked below takes an interesting look at one of the most difficult problems facing society in the age of AI: who should decide on the moral programming of the machines we are creating?
In November 2017, the UK government published its white paper setting out a long-term plan to boost the productivity and earning power of people throughout the UK. In this paper the government made clear its intention to "become a world leader in shaping the future of mobility".
To incentivise businesses to come to the UK to develop this technology, the government set up the Centre for Connected and Autonomous Vehicles, which provides matched funding, and introduced the Automated and Electric Vehicles Act 2018 on 19 July 2018.
The Act sought to position the UK as the go-to location to develop, test and drive automated vehicles. It brought automated vehicle insurance in line with long-standing motor insurance practices, dealing with one of the most controversial issues surrounding the topic: fault and liability.
The Act provides guidance on who will be deemed responsible in the eyes of the law where there is an incident, but it does not provide any guidance on how we should program the cars to respond when faced with a moral dilemma.
The article provides a brief statistical analysis of how different countries might approach this issue if it were left to the public to decide. But is this the correct approach? Should we be asking the public at large, or might we see the development of morality as a profession?
With Addison Lee saying that we will see fully autonomous vehicles on the road by 2021, we will need to address these questions very quickly.
The results from 40 million decisions suggested that people preferred to save humans rather than animals, to spare as many lives as possible, and to save the young over the elderly.