Will Self-Driving Cars Change Moral Decision-Making?

It’s time to separate science fact from science fiction about self-driving cars

By Jay Richards | Published on December 19, 2019

Irish writer and playwright John Waters has a recent piece in First Things, “Algorithm Religion,” bemoaning the moral implications of ceding our choices to ever-more-sophisticated algorithms, such as those that will presumably guide autonomous cars.

He starts by imagining a scenario in 2032, when fully autonomous cars are a reality. It’s not new; it’s a replay of the “trolley problem” from your college ethics course: A trolley is hurtling down a track toward five people who are tied to the track ahead. A switchman who spots them can flip a switch that will divert the trolley onto a side track. Alas, there’s a person on that side track who would be killed if he flips the switch. So what should the switchman do? If he does nothing, five people will die. If he flips the switch, he will save the five but kill one person who would otherwise have lived.

In Waters’ version, a self-driving car must “choose” between a young child and an old man. The car, presumably using utilitarian logic, opts to take out the old man.

By itself, it’s hard to see what is new in Waters’ scenario. Surely the dilemma — however we resolve it — is simply transferred to whoever programmed the car. Just as the trolley problem would present a choice for a driver in real time, so too would it present a choice for the programmer ahead of time. Right?

Waters doesn’t think so:

Self-driving vehicles are an example of a new category of machine, in that they have access to public thoroughfares on much the same basis as humans: without constraint of track or rail — hence “autonomous.” Computer-generated movement of machines is a brave initiative for all kinds of reasons, and will necessitate radical changes in laws and cultural apprehension.

His argument isn’t explicit. However, if I understand his thinking, he sees something sinister in the algorithms used in such technology because they are (and presumably will be) the product of machine learning rather than old-fashioned programming. As a result, they will be, at least in part, “opaque” and “indescribable.” That is, not all of their details will be written by moral agents such as programmers. Some of their rules will emerge, bottom up, through an iterative statistical process. So he fears a time in the near future when

We will leave it to the computers to decide, and won’t understand or seek to understand the underlying logics being applied. Self-driving cars, though safer in many respects, will become inscrutable to users, pedestrians, and other adjacent humans.

He warns of a time, not so far away, when we might have to grant moral discretion to the opaque algorithms, just as Christians now grant it to the all-knowing but often inscrutable decrees of God.

This is overblown. It’s clear that Waters — who some years ago confessed to being a “Luddite” — is taking his cues from a couple of other writers. One is a catastrophist and the other is a techno-utopian.

Based as it is on such third-hand knowledge, Waters’ piece is a member of that species of conservative commentary that naively accepts hype and then responds to it, rather than questioning the hype itself. In particular, he fails to separate science fact from science fiction with respect to algorithms and autonomous cars.

Waters assumes that fully autonomous cars are right around the corner, indeed, a “virtual … certainty, perhaps within a decade.” If you read Mind Matters, you know that claim is justifiably disputed. What’s more likely in the near term is that our cars will take on more and more “automated” features that augment our own roles as drivers, without replacing us altogether.

But let’s assume that soon we will all be passengers in cars that we used to drive. Even so, the mere possibility of error is hardly a serious moral quandary. If autonomous cars are, on balance, less lethal than the arrangement we have now, that would be an improvement. It hardly makes one a utilitarian to say so.

In any case, it just doesn’t follow, as Waters muses, that we will find ourselves in a world “in which there is no recourse to justice, reckoning, or even satisfactory closure after algorithms cause death or serious injury.”

On the contrary, much the same tort and negligence laws would apply, other things being equal, to “self-driving” cars in 2032 as apply to a 2019 Honda Odyssey or, for that matter, to a Samsung refrigerator. Machine learning or not, human choices and design will be all over the technology and will be just as subject to moral scrutiny as they are now. No jury in a tort case 13 years from now will be moved by a Waymo attorney who argues that “the algorithm will do what the algorithm will do.”

But what if these “self-driving” cars find themselves facing trolley problems from time to time? So what? How would the moral situation differ from one in which the car had a live human driver? The main difference is that the outcome could be subject to simulation, assessment, and the moral judgment of engineers ahead of time, rather than to the panic of a human driver who scarcely has time to think. In other words, the outcome could be subject to some moral judgment, however imperfect. And again, surely that is an improvement.

Waters sees a problem here because he has been led astray by academics who assume that strong AI, capable of conscious thought, is bound to emerge. They muse about algorithms and machines that somehow become “artificial moral agents.” That’s a myth. Waters’ rhetorical gifts would be better served by a bit less Luddism and a bit more skepticism.

Jay Richards is the Executive Editor of The Stream, an Assistant Research Professor in the School of Business and Economics at The Catholic University of America, and a Senior Fellow at the Discovery Institute.

For more breaking news about the interface of natural & artificial intelligence, visit MindMatters.AI. Copyright 2019 Mind Matters.
