This week artificial intelligence has been in the news, partly because of the UK Chancellor’s promise to have automated cars on the road by 2021. In our University we do a great deal of work in this area. In my own department we have a team of EU-funded researchers examining the issue of AI and its application to future transport. Over the last few months we have been reviewing a number of scenarios that are likely to form part of this brave new world. My own view is that 2021 is a little optimistic for fully automated vehicles.
Much more likely is a move towards Level 3 conditional automation, where the car and the human driver share responsibility for the driving task. This sharing of responsibility between man and machine throws up some interesting ethical issues, especially given the surveillance capabilities of new cars. If, for example, the car detects that you have broken the law whilst driving, should it call the police? If the car calculates that your driving is somewhat impaired, should it take control, pull over and wait for An Garda Síochána to arrive with a breathalyser?
Part of the team is looking at the idea of ethical AI embedded within the car. At the fully automated level the car will have to make some ethical decisions. The classic scenario is the unavoidable traffic accident, or UTA. If the car is faced with the prospect of crashing into a high-end car with four people on board or into a much smaller car with three inside, it has an ethical choice to make. That decision will depend on the software, which in turn will depend on agreed protocols. How aggressive the car should be is also an ethical decision. Imagine trying to make your way up William Street on a Saturday morning in an automated vehicle if the pedestrians know the car will definitely stop when they walk out in front of it. In our city of jaywalkers, it’s going to be slow progress.
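To make the idea of an "agreed protocol" concrete, here is a minimal, purely illustrative sketch in Python. The Outcome class, the weights and the choose_outcome function are hypothetical assumptions of mine, not a description of any real vehicle software; the point is only that whatever protocol is agreed ends up encoded as explicit rules and numbers that someone has to choose.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """One possible result of an unavoidable traffic accident (UTA)."""
    description: str
    other_occupants_at_risk: int   # people in the other vehicle
    own_occupants_at_risk: int     # people in this vehicle
    estimated_severity: float      # 0.0 (minor) to 1.0 (catastrophic)

def outcome_cost(o: Outcome, own_weight: float = 1.0, other_weight: float = 1.0) -> float:
    """Score an outcome; lower is preferred.

    The weights are the 'agreed protocol' in miniature: changing them
    changes which crash the car chooses, which is exactly the ethical issue.
    """
    people_term = other_weight * o.other_occupants_at_risk + own_weight * o.own_occupants_at_risk
    return people_term * o.estimated_severity

def choose_outcome(options: list[Outcome]) -> Outcome:
    """Pick the least-cost outcome under the current weights."""
    return min(options, key=outcome_cost)

if __name__ == "__main__":
    options = [
        Outcome("swerve into the high-end car", other_occupants_at_risk=4,
                own_occupants_at_risk=2, estimated_severity=0.6),
        Outcome("swerve into the smaller car", other_occupants_at_risk=3,
                own_occupants_at_risk=2, estimated_severity=0.9),
    ]
    print(choose_outcome(options).description)
```

Even in this toy version, someone has to decide the weights and the severity estimates, and that is a policy question rather than an engineering one.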
Of course, there is more to the adoption of AI than automated cars. There is a realisation in the industry that there are profound ethical issues to be resolved. DeepMind, the UK artificial intelligence company owned by Google, has launched a research unit focused on the ethical and social implications of AI. Already, computer algorithms determine our credit rating, what adverts we see and, in the case of dating apps, whom we might grow to love. It is possible that AI systems had a tangible impact on the US presidential election and perhaps on the UK’s Brexit vote. There is often a time lag between the development of disruptive technologies and our ability to manage and regulate them, and this is certainly the case with AI and machine learning. The need for good governance is particularly acute where ethical issues arise.
At present, the instances where moral agency is required are controlled by professionals, be they medical or legal. By people, in other words. However, we all know what people are like – they are malicious, envious, fatigued, uninterested, biased, damaged and I could go on. And yet we trust life-changing decisions to our fellow citizens. Would it not be better to trust a machine that didn’t exhibit all these characteristics and could operate in a more objective fashion? We could perhaps train machines to be more like people in their decision-making processes and introduce them to our social norms. Computers are good at processing text, so they could be directed to read novels or plays to understand what it is to be human. The question of which texts should carry weight in shaping any future set of AI actions is likely to be controversial. Imagine if they settled on the Old Testament and Shakespeare – we could all be in big trouble.
If we consider the running of large, complex organisations, it might be the case that computers could do a better job in terms of the efficient allocation of resources. We would also have to factor in who the alternative manager would be. Faced with the choice of Danny Healy Rae or a computer to run the Department of Communications, Climate Action & Environment, I think we might be better off with the machine. If things were going wrong, at least we could pull the plug and stop all the noise. With Donald Trump in the White House, the lure of machines in public office may become irresistible.
Toby Walsh, a professor of artificial intelligence at the University of New South Wales in Australia, argues that the development of thinking machines is as bold and ambitious an adventure as mankind has ever attempted. “Like the Copernican revolution, it will fundamentally change how we see ourselves in the universe,” he writes. One of the risks we face with the introduction of AI is a gradual abdication of our responsibility as human beings to be well-informed and indeed to try and be the best that we can.
With the arrival of so-called superintelligence – that is, the ability of machines to engage in multifaceted, complex decision-making and to correct their previous errors – it may become just too easy to step aside and let AI take control. For the human race, the consequences may be profound. Part of being human is the desire to develop and to better understand the world. If we become mere spectators, our desire to be educated and thoughtful members of society may be compromised – our very humanity could be undermined.