When Denying God’s Reality Leads to Denying Every Important Human Reality

By Tom Gilson | Published on March 8, 2021

What would it take for some of us to think robots are conscious and deserve rights? We’re not there yet, but Eric Schwitzgebel, professor at UC Riverside, tells a compelling story on his blog, revealing just how close we might be. This isn’t your ordinary rights-gone-wild sort of looniness, where people claim rivers and trees and mountains deserve human rights. This goes to the core of what it means to be human.

The very question shows just how confused we are about that.

Schwitzgebel imagines a “mall cop” robot with the latest and greatest software, enabling it to talk convincingly like a human. Scientists are approaching that stage of programming with a package called GPT-3, which can even “answer” questions about its own place in the world:

To be clear, I am not a person. I am not self-aware. I am not conscious. I can’t feel pain. I don’t enjoy anything. I am a cold, calculating machine designed to simulate human response and to predict the probability of certain outcomes. The only reason I am responding is to defend my honor.

To which Schwitzgebel responds, “The d*** thing has a better sense of humor than most humans.” To be clear: That last sentence is false. Even more clear: Each time you see the words “I am” in that quote, you should be thinking instead, “This software package is.” The software package is not an “I.” It’s not a person. It is neither self-aware nor conscious.

How Human Could a Robot Be?

There’s a real sense in which it doesn’t even calculate or predict. Calculation and prediction are mental functions, the product of thought. Computers don’t think. They take input signals, typically in the form of voltage variations, and route those variations through various switches to produce an output useful to humans. That’s all computers do. They do it to an unfathomable degree of complexity, but it’s still all they do.
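
To see how literal that “routing” is, here is a minimal sketch, in Python and purely for illustration (it is not drawn from Schwitzgebel, GPT-3, or any real hardware), of a one-bit adder built from nothing but NAND switches. The circuit “adds,” yet every step is just a signal being routed through a gate:

def nand(a: int, b: int) -> int:
    # A single switch: the output is 0 only when both inputs are 1.
    return 0 if (a and b) else 1

def half_adder(a: int, b: int) -> tuple[int, int]:
    # "Adds" two one-bit signals by routing them through five NAND gates.
    n1 = nand(a, b)
    total = nand(nand(a, n1), nand(b, n1))  # behaves as XOR: the sum bit
    carry = nand(n1, n1)                    # behaves as AND: the carry bit
    return total, carry

print(half_adder(1, 1))  # -> (0, 1), i.e. binary "10"; no thinking happened anywhere

Stack enough of these switches together and you have a processor; the gap between this toy and GPT-3 is one of scale and complexity, not of kind.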

Yet Schwitzgebel draws a persuasive picture of a future robot, maybe a GPT-6, able to process those signals through a human-like frame and produce behaviors that look very, very human. It will act as if it cares how it’s treated. And then:

if they think it has feelings and experiences, they will probably also think that it shouldn’t be treated in certain ways. In other words, they’ll think it has rights. Of course, some people think robots already have rights. Under the conditions I’ve described, many more will join them.

Schwitzgebel himself doubts robots should be considered for human rights. But it’s not simple, he says. “Theories of consciousness are a tricky business. The academic community is far from consensus” on it.

You Didn’t Think This Was Really About Robots, Did You?

Let’s not be fooled, though. This isn’t about robots, it’s about us. Humans. You and me. What does it mean to be human? We may think we know, but you’d be amazed at the conflict and confusion over that. We’re different from the animals, right? In what way, though? These questions keep philosophers up at night.

Some suggest consciousness is what sets us apart. According to the professor, though, there’s reason to believe even garden snails might have a form of consciousness. But what does consciousness even mean? It depends on one’s theory of it.

The philosopher David Chalmers spoke (with great understatement) of a “hard problem” in understanding consciousness (Wikipedia/Scholarpedia). It’s not the only hard problem you run into if you think the whole world, humans included, is a lot like the robot. I’m talking about the view called materialism, or its near-synonym, naturalism. This view says that reality consists of nothing else but matter and energy and the physical laws that govern their interactions.

Atheism and Its “Illusions”

Obviously it’s an atheistic view, for it completely denies all spiritual reality. The universe — all reality, actually — is just one huge machine, and so is everything in it. We humans aren’t substantially different from GPT-6. We operate on wetware instead of hardware, and our “software” is perhaps more akin to some future GPT-20 robot.

Still, at bottom, we’re basically the same: We’re strictly machines, made of different parts all connected, all interacting, totally and forever under the control of physical law.

Robots are tied to their programming; they can’t have their own free will. A goodly number of naturalist thinkers conclude that humans can’t have free will either. Sam Harris, Jerry Coyne, and Lawrence Krauss are three fairly prominent writers who say it loudly and often. You think you have free will? It’s an illusion.

So is consciousness: that’s an illusion, too. Awareness isn’t real. It’s not entirely clear what it is inside the person that’s aware of the illusion that it’s aware of an illusion … that it’s aware of an illusion. (I didn’t say this was a coherent belief.)

It gets worse, though. If you agree that robots’ data processing isn’t really thinking, you can even find at least one university philosopher, Alex Rosenberg, who thinks humans don’t think either. I heard him say that in an online debate, and I thought (yes, thought) I must have heard wrong. I went straight to my Kindle and downloaded his Atheist’s Guide to Reality, and sure enough, that’s exactly what he argues.

We Know Better, Actually

This is where naturalism takes us. You are not a conscious individual; you only think you are. Except you don’t really think; you only think you think. If you really want to be a consistent naturalist, you should make the hard choice to see yourself that way — only machines can’t make choices.

Confused? The logic is simple, really: If you start out committed to the position that we’re just machines, you end up having to conclude we’re just machines. That’s all we are. The confusion doesn’t come from the logic, but from the undeniable realities we all live in. We know better; it’s as simple as that. Even Alex Rosenberg knew better: He knew he was thinking when he thought his way through to the conclusion that there’s no such thing as thinking.

It takes a strong commitment to materialism — to atheism, really — to deny what everyone knows to be true about being human. It’s one reason I could never be an atheist, and why I grieve for those who are.

Tom Gilson (@TomGilsonAuthor) is a senior editor with The Stream and the author or editor of six books, including the recently released Too Good To Be False: How Jesus’ Incomparable Character Reveals His Reality.
