Wednesday, November 28, 2012

Moral Machines

Interesting hypothetical that will eventually become relevant.  Thoughts?

Moral Machines--From Whose Point of View?

A friend sent me a link to a New Yorker piece--link below--pointing out that the self-driving cars Google is developing will sometimes have to make "moral" decisions. The author, Gary Marcus, provides this example: "Your car is speeding along a bridge at fifty miles per hour when an errant school bus carrying forty innocent children crosses its path." Should you swerve, with the expectation that your car will fly off the bridge and you will die, or simply slam on the brakes, with the expectation that you will hit the bus hard enough to kill many children (you being protected by your airbag)?

Marcus points out that the computers that control cars will have to make such judgment calls in a split second. My concern is: how should they do it? In particular, whose perspective should they take on?

One perspective is that of you, the driver. It seems to me that you are not required to swerve if you expect to die as a result. It's not your fault that the bus cut in front of you, and I'll suppose that going 50 mph is within the speed limit. It would be heroic of you to sacrifice yourself for the children, but it's beyond the call of duty. I will suppose that you would not do it.

The other perspective is that of a neutral party (of course there's also the perspective of the children and their loved ones, but it's hard to see why the computer would take their perspective). I think it would be permissible, and that there would be positive moral reason, for someone with the power to do so to flip a switch and cause your car to swerve off the bridge in order to save some number of children. You and your car constitute an innocent threat to the children, but I think innocent threats can be killed to save a greater number of innocent victims. I will suppose that a neutral party would divert your car, thereby killing you, to save them.

Should your car take your perspective, as though it were your agent? Or should it take the neutral perspective, as we would want state-installed machines to be programmed if they could intervene in such situations? I can see reasons on both sides, but I'd love to hear the thoughts of those of you who read this post.