Will 2016 Be the Year We Approach The Great and Terrible Singularity?

Will super-intelligent computers soon spell our doom, or have futurists forgotten something fundamental?

By William M. Briggs | Published on January 1, 2016

The chilling news is that killer robots are marching this way. PayPal founder Elon Musk and physicist Stephen Hawking assure us that Artificial Intelligence (AI) is more to be feared than a Hillary Clinton presidency. Google’s futurist Ray Kurzweil and Generational Dynamics’s John J. Xenakis are sure the Singularity will soon hit.

When any of these things happens, humanity is doomed. Or enslaved. Or cast into some pretty deep and dark kimchee. Or, in Kurzweil’s vision, we’ll leave our mortal coil behind and upload ourselves to some descendant of the Internet. Or so we’re told.

It makes sense to worry about the government creating self-mobilized killing machines, or the government doing anything, really, but what’s The Singularity? Remember The Terminator? An artificially intelligent computer network became self-aware and so hyper-intelligent that it decided “our fate in a microsecond: extermination.” Sort of like that. Computers will become so fast and smart that they will soon realize they don’t need us to help them progress. They’ll be able to design their own improvements, and at such a stunning rate that there will be an “intelligence explosion” — and maybe literal explosions, too, if James Cameron was on to something.

Xenakis says, “The Singularity cannot be stopped. It’s as inevitable as sunrise.” But what if we decided to stop building computers right now? Xenakis thought about that: “Even if we tried, we’d soon be faced by an attack by an army of autonomous super-intelligent computer soldiers manufactured in China or India or Europe or Russia or somewhere else.”

As I said, we surely will build machines, i.e. robots, to do our killing for us, but robots with computer “minds” will never be like humans. Why? Because computer “minds” will forever be stuck behind human minds. The dream of “strong” AI where computers become superior creatures with consciousness is and must be just that: a dream. I’ll explain why in a moment. Machines will become better at certain tasks than humans, but this has long been true.

Consider that one of the first computers, the abacus, though it had no batteries and “ran” on muscle power, could calculate sums more easily and quickly than humans alone. These devices are surely computers in the sense that they take “states,” i.e. fixed positions of their beads, that have meaning when examined by a rational intelligence, i.e. a human being. But nobody would claim an abacus can think.

An abacus can do sums or multiplications only as fast as somebody’s fingers can move. Electronic calculators are much faster, and they keep getting faster. Suppose I wanted to sum 132 and 271. Those who didn’t go to public school can do it quickly in their heads, while those not as fortunate may reach for an abacus, calculator or computer to do it for them.

As time goes by and our technological prowess grows, we will be able to design machines to figure this sum faster and faster. It takes something like a second to do in your head or on an abacus and far less than a second on a computer. Let the time required to do the sum on a computer approach zero. It won’t reach zero, of course, but will be limited by the characteristics of the circuitry.

It’s easy to see the abacus has no idea what it’s doing. How could it? It’s just a wooden frame with some dowels and beads, and dowels and beads do not produce rational thought. Computers might, though. Instead of dowels and beads they have wires and transistors, and lots of them. And they’re very fast. Here’s the big question: at what point does the calculating machine become aware that it is calculating numbers?

The answer is: never. The computer does not know what it is doing and never will. A computer is a mere machine that is one moment in this state, i.e. a fixed configuration of its circuitry just like the beads on an abacus, and the next moment it is in a different state. It progresses, barring accidents or mishaps, from one state to the next in rigorously determined ways. There is no overriding intellect behind these states. There are only the states.
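The state-to-state picture can be made concrete with a toy sketch (a hypothetical illustration, not any real computer architecture): a “machine” is nothing but a rule that maps one configuration to the next. Here the state is a pair of counters, and a mechanical rule shuffles units from one counter to the other until nothing changes — the machine ends up “computing” a sum without any awareness that it is doing arithmetic.

```python
# A minimal sketch: a machine as a bare transition rule between states.
# The "state" is a pair of counters, like bead positions on an abacus.

def step(state):
    """Purely mechanical rule: produce the next configuration."""
    a, b = state
    # Move one unit at a time from b into a; no intellect, just the rule.
    return (a + 1, b - 1) if b > 0 else (a, b)

def run(state):
    """Progress from state to state until the rule produces no change."""
    while True:
        nxt = step(state)
        if nxt == state:
            return state
        state = nxt

print(run((132, 271)))  # → (403, 0): the sum falls out of blind state transitions
```

Only a rational observer reading the final state as “403” gives it any meaning; to the machine, it is just another configuration.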

But aren’t human brains nothing but faster, more complex computers? The brain is made of neurons and these may be said to take states, i.e. each neuron is nothing but a certain configuration of chemicals, and the change from state to state is, just like in the computer, governed by the “laws” of physics. That means we can eventually figure out how to make a computer as intelligent as we are, right? No. Humans are different because we are rational creatures with intellects that rely on the meat-machines in our head, but we also are more than just our brains. Why? Because our intellects are not material.

It’s true we use the material that is our brains, but only as a means to an end. How do we know that our rational intellects are not made of physical stuff? That’s complicated, but you can get a glimpse by asking yourself these questions: What are numbers? What is a logical relation? A logical relation is the kind of thing that lets us conclude, “I’ll get wet” given “If I go swimming I’ll get wet. I’ll go swimming.” The relation is the immaterial “glue” between these two sentences.
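The swimming example is an instance of the inference rule logicians call modus ponens. A machine can be made to exploit that relation by pure symbol-shuffling — the sketch below (a hypothetical toy, with sentence formats of my own choosing) pattern-matches strings without grasping the relation it uses, which is the distinction the paragraph above draws.

```python
# A toy symbol-shuffler: applies modus ponens by string pattern-matching.
# The program manipulates sentence strings; it never "knows" the logical
# relation that makes the inference valid.

def modus_ponens(premises):
    """Given sentences, return a conclusion licensed by modus ponens, or None."""
    # Plain sentences are treated as asserted facts.
    facts = {p for p in premises if not p.startswith("If ")}
    for p in premises:
        if p.startswith("If ") and ", then " in p:
            antecedent, consequent = p[3:].split(", then ", 1)
            if antecedent in facts:
                return consequent
    return None

print(modus_ponens(["If I go swimming, then I'll get wet", "I go swimming"]))
# → I'll get wet
```

The program reaches the right conclusion, but only because we, the rational intelligences, built the relation into its pattern-matching and read meaning into its output.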

Numbers and logical relations are not material, yet they exist and we know of them. And they are the very stuff of higher, rational thought. Where numbers, logical relations and a host of other similar things exist, how they exist as immaterial objects, and how our immaterial intellects interact with our material brains are excellent questions which we can’t tackle here. Suffice it to say that the answer takes us into the densest thickets of philosophy.

Even if you don’t follow all the details of this argument, the main point is this: since becoming rational requires an intellect, and intellects are not made of physical stuff, but computers are, computers can never become rational. Strong AI is not merely unlikely or not yet here; it is impossible. There thus can be no Singularity.

It is impossible that there can come a point at which computers — mere machines — will take over. The government becoming all powerful? Well, that is a different story.
