‘We are discussing artificial intelligence tonight,’ said Newsnight’s Kirsty Wark, ‘and we have some very intelligent humans with us to help us do it.’
That, by the way, is a sort of Newsnight joke. I think.
Artificial intelligence and the driverless car
To be fair, there followed a very intelligent discussion by the very intelligent humans on the increasing impact of artificial intelligence and the ethical dilemmas raised, for example in the emerging era of the driverless car.
Unsurprisingly, one expert had been brought in to reassure us that a disaster scenario of the ‘computers will take us over’ kind was unlikely. Another took the contrary, more cautious view. The debaters showed even more respect towards each other than was shown earlier in the day at PMQs by David Cameron and Jeremy Corbyn.
At the core of the discussion was agency theory. I don’t mean the narrower idea of corporate control between owners and their managers or agents. I mean the great sociological issues of the nature of structures and the potential of humans to act as free agents.
On Newsnight, human agency was examined as potentially under threat from computers taking decisions. This raises the question of whether computers, in cars and elsewhere, will be able to deal with ethical issues on our behalf.
To connect this to a problem of immediate practical importance, the case of driverless cars was introduced. The experts gently considered the possibility, concluding that it did not alter the positions they had outlined. But the Newsnight production team had their own secret weapon, introduced by David Grossman, their excellent culture and technology editor.
The trolley problem
David had set up an experiment to replicate one of the famous ethical dilemmas, known as the runaway rail truck or trolley problem.
Scientific American also had a look at it a few years ago, and I seem to remember a few references in The Economist. David, drawing on the BBC’s vast budget, had obtained what looked like a bit of model rail track, complete with a little red truck and a switch that could be used to divert the truck away from the line on which it would kill five people and on to a branch line on which only one person would be killed.
Grossman’s volunteers had the life-or-death choice of pulling the switch, and after that the trickier task of reflecting on the ethical dilemma to which they had been exposed. The volunteers conformed to the behaviours of countless laboratory subjects who have taken part in such experiments in the past. Yes, mostly they preferred to act. They also confirmed that it is jolly difficult to sort out that darn moral dilemma. What right had someone to take a life? Or not to intervene to save five lives?
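The choice the volunteers faced can be caricatured in a few lines of code, which is worth doing because it shows just how crude the ‘obvious’ utilitarian rule really is. To be clear: the function below and its casualty-count logic are my own illustrative invention, not anything from the programme, and real driverless-car software looks nothing like this.

```python
# A deliberately crude, purely illustrative model of the trolley choice:
# a naive utilitarian rule that simply minimises the casualty count.
# (Hypothetical sketch; not how any real autonomous system decides.)

def choose_track(casualties_if_no_action: int, casualties_if_switch: int) -> str:
    """Return 'switch' if diverting the truck kills fewer people,
    otherwise 'do nothing'."""
    if casualties_if_switch < casualties_if_no_action:
        return "switch"
    return "do nothing"

# The Newsnight setup: five on the main line, one on the branch.
print(choose_track(casualties_if_no_action=5, casualties_if_switch=1))  # → switch
```

Three lines of logic, and yet the whole moral weight of the experiment lies in whether a machine (or a person) has any right to apply them.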
Hmm. What do you think?
When reintroduced to the viewers, the experts in ethics and artificial intelligence were given a chance to consider the implications of the experiment for philosophy, and for the ethical problems of driverless cars. They tactfully avoided mentioning that a genius called Ludwig Wittgenstein had more or less drawn the poison out of such ‘mind games’ by exposing them as a bunch of linguistic traps.
More interestingly, one discussant pointed out a fundamental principle of creativity that applies whenever anyone faces a tricky either-or decision. The concept is found repeatedly in my textbook Dilemmas of Leadership. A dilemma can be effectively re-framed if the binary nature of the ‘either-or’ is examined and its assumptions tested. You can apply that principle to the practically important issues of driverless cars, loss of human agency and the ethical resolution of dilemmas.
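The re-framing principle can be sketched in the same toy style as the trolley rule: the choice is only binary because the option set was assumed closed, and questioning each assumption may open it up. Again, the function and the example assumptions below are hypothetical illustrations of mine, not anything from the programme or the textbook.

```python
# Toy illustration of re-framing an either-or dilemma: list the assumptions
# that made the choice look binary, and each challenged assumption suggests
# an extra option. (Hypothetical examples; my own, not from the programme.)

def reframe(options: list[str], challenged_assumptions: dict[str, str]) -> list[str]:
    """Expand a binary option set with the alternatives uncovered by
    questioning the assumptions behind it."""
    return options + list(challenged_assumptions.values())

dilemma = ["switch the points", "do nothing"]
assumptions = {
    "the truck cannot be stopped": "brake or derail the truck",
    "nobody can be warned in time": "shout a warning",
}
print(reframe(dilemma, assumptions))
```

The point is not the code but the move it represents: once the ‘either-or’ is shown to rest on testable assumptions, the dilemma stops being a dilemma.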
I welcome comments and will elaborate on the conclusions later in an update to this post.