Will 2016 Be the Year We Approach The Great and Terrible Singularity?

Will super-intelligent computers soon spell our doom, or have futurists forgotten something fundamental?

By William M. Briggs | Published on January 1, 2016

The chilling news is that killer robots are marching this way. PayPal co-founder Elon Musk and physicist Stephen Hawking assure us that Artificial Intelligence (AI) is more to be feared than a Hillary Clinton presidency. Google’s futurist Ray Kurzweil and Generational Dynamics’ John J. Xenakis are sure the Singularity will soon hit.

When any of these things happens, humanity is doomed. Or enslaved. Or cast into some pretty deep and dark kimchee. Or in Kurzweil’s vision, we’ll leave our mortal coil behind and upload ourselves to some ancestor of the Internet. Or so we’re told.

It makes sense to worry about the government creating self-mobilized killing machines, or the government doing anything, really, but what’s The Singularity? Remember The Terminator? An artificially intelligent computer network became self-aware and so hyper-intelligent that it decided “our fate in a microsecond: extermination.” Sort of like that. Computers will become so fast and smart that they will soon realize they don’t need us to help them progress. They’ll be able to design their own improvements and at such a stunning rate that there will be an “intelligence explosion,” and maybe literal explosions, too, if James Cameron was on to anything.

Xenakis says, “The Singularity cannot be stopped. It’s as inevitable as sunrise.” But what if we decided to stop building computers right now? Xenakis thought about that: “Even if we tried, we’d soon be faced by an attack by an army of autonomous super-intelligent computer soldiers manufactured in China or India or Europe or Russia or somewhere else.”


As I said, we surely will build machines, i.e. robots, to do our killing for us, but robots with computer “minds” will never be like humans. Why? Because computer “minds” will forever be stuck behind human minds. The dream of “strong” AI where computers become superior creatures with consciousness is and must be just that: a dream. I’ll explain why in a moment. Machines will become better at certain tasks than humans, but this has long been true.


Consider that one of the first computers, the abacus, though it had no batteries and “ran” on muscle power, could calculate sums more easily and faster than humans alone could. These devices are surely computers in the sense that they take “states,” i.e. fixed positions of their beads, that have meaning only when examined by a rational intelligence, i.e. a human being. But nobody would claim an abacus can think.

An abacus can do sums or multiplications only as fast as somebody’s fingers can move. Electronic calculators are much faster, and they keep getting faster over time. Suppose I wanted to sum 132 and 271. Those who didn’t go to public school can do it quickly in their heads, while those not as fortunate may reach for an abacus, calculator, or computer to do it for them.
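To make the “states” talk concrete, here is a minimal sketch in Python (my illustration, not the author’s) of an abacus-style adder: each column is just a digit, and addition is nothing but a fixed rule for moving beads and carries from one configuration to the next. The function names, such as add_on_abacus, are invented for this example.

```python
# A minimal sketch (an illustration, not the author's example) of the
# abacus-as-computer idea: each column holds a "state" (a digit 0-9), and
# addition is a fixed rule for stepping from one configuration to the next.

def to_columns(n, width=4):
    """Represent a number as abacus columns, least-significant column first."""
    return [(n // 10**i) % 10 for i in range(width)]

def add_on_abacus(a, b, width=4):
    """Add two numbers column by column with carries, like sliding beads."""
    columns = to_columns(a, width)
    carry = 0
    for i, digit in enumerate(to_columns(b, width)):
        total = columns[i] + digit + carry
        columns[i] = total % 10      # beads left standing in this column
        carry = total // 10          # carry a bead into the next column
    return columns

state = add_on_abacus(132, 271)
print(state)                                        # [3, 0, 4, 0] -- just bead positions
print(sum(d * 10**i for i, d in enumerate(state)))  # 403 -- the meaning we read into them
```

The final list is only bead positions; reading it as 403 is something the human observer does, not something the device does.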

As time goes by and our technological prowess grows, we will be able to design machines to figure this sum faster and faster. It takes something like a second to do in your head or on an abacus and far less than a second on a computer. Let the time required to do the sum on a computer approach zero. It won’t reach zero, of course; it will always be limited by the characteristics of the circuitry.

It’s easy to see the abacus has no idea what it’s doing. How could it? It’s just a wooden frame with some dowels and beads, and dowels and beads do not produce rational thought. Computers might, though. Instead of dowels and beads they have wires and transistors, and lots of them. And they’re very fast. Here’s the big question: at what point does the calculating machine become aware that it is calculating numbers?

The answer is: never. The computer does not know what it is doing and never will. A computer is a mere machine that is one moment in this state, i.e. a fixed configuration of its circuitry just like the beads on an abacus, and the next moment in a different state. It progresses, barring accidents or mishaps, from one state to the next in rigorously determined ways. There is no overriding intellect behind these states. There are only the states.
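As a toy illustration of “there are only the states” (again mine, not the author’s), here is about the smallest deterministic machine one can write down: a parity checker whose entire existence is a current state plus a fixed transition table.

```python
# A toy deterministic machine (an illustration added for this article): its whole
# "life" is a current state and a lookup table saying which state comes next.

TRANSITIONS = {
    ("even", 0): "even",
    ("even", 1): "odd",
    ("odd", 0): "odd",
    ("odd", 1): "even",
}

def run(bits):
    state = "even"
    for bit in bits:
        state = TRANSITIONS[(state, bit)]   # the next state is rigorously determined
    return state

print(run([1, 0, 1, 1]))  # "odd" -- an odd count of 1s, though the machine "knows" nothing of counts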

But aren’t human brains nothing but faster, more complex computers? The brain is made of neurons, and these may be said to take states, i.e. each neuron is nothing but a certain configuration of chemicals, and the change from state to state is, just as in the computer, governed by the “laws” of physics. Doesn’t that mean we can eventually figure out how to make a computer as intelligent as we are? No. Humans are different because we are rational creatures with intellects that rely on the meat-machines in our heads, but we are also more than just our brains. Why? Because our intellects are not material.

It’s true we use the material that is our brains, but only as a means to an end. How do we know that our rational intellects are not made of physical stuff? That’s complicated, but you can get a glimpse by asking yourself these questions: What are numbers? What is a logical relation? A logical relation is the kind of thing that lets us conclude “I’ll get wet” given “If I go swimming I’ll get wet. I’ll go swimming.” The relation is the immaterial “glue” between these two sentences.
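For readers who like to see the relation written out, here is a small sketch (my addition, using Python rather than formal notation) of the rule at work in the swimming example, traditionally called modus ponens: whenever “if P then Q” and “P” are both true, “Q” must be true as well.

```python
# A small check (an added illustration) of modus ponens: in every truth
# assignment where both premises hold, the conclusion holds too.

from itertools import product

def implies(p, q):
    return (not p) or q

valid = all(q for p, q in product([True, False], repeat=2)
            if implies(p, q) and p)
print(valid)  # True
```

The exhaustive check is something we can mechanize, but the validity of the rule itself is not a property of any particular wires or beads; that is the author’s point about the relation being immaterial.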

Now numbers and logical relations are not material, yet they exist and we know of them. And they are the very stuff of higher, rational thought. Where numbers, logical relations and a host of other similar things exist, how they exist as immaterial objects, and how our immaterial intellects interact with our material brains are excellent questions which we can’t tackle here. Suffice it to say that the answer takes us into the densest thickets of philosophy.

Even if you don’t follow all the details of this argument, the main point is this: since becoming rational requires an intellect, and intellects are not made of physical stuff while computers are, computers can never become rational. Strong AI is not merely unlikely or not here yet; it is impossible. There thus can be no such Singularity.

It is impossible that there can come a point at which computers — mere machines — will take over. The government becoming all powerful? Well, that is a different story.


Comments
  • mtc7

    “Those who didn’t go to public school can do it (easy math addition) quickly in their heads…”

    Ouch…

  • Roy Denio

    Just my own opinion. The brain, with all of its Brodmann regions, is not the substance of a person: it is merely an interface to the real “us.”

    I look at it as an interface. An interface between the physical and the soul, core, essence or whatever you wish to call it. It links our physical world to another, whether it be physical, spiritual or something else.

  • Roy Denio

    So as we only have a mass of storage cells to work with, and no intellect, I don’t see where enhancing stupidity is much of an answer.

  • Gary Feierbach

    The author is making the same mistake as a fellow who ran the analog computer simulation of helicopters at NASA Ames many years ago. He said that digital computers could never simulate helicopters because it’s an analog process. Ten years later we were simulating helicopters and now we can do it on a cell phone. We are biological machines with a lot of noise and irrelevant data to work with. We make decisions, reflect, dream, love, enjoy and fear. There is no physical reason why digital machines won’t be able to do the same. Procreation? That too, but not the way we do it. The only thing we have to fear is ignorant or malicious use of intelligent machines. As in the past, our mortal enemies will be humans, not machines.

    • illuvitus

      The mistake is in thinking that analog and digital have the same relationship that material and immaterial things do. They do not, so the analogy is flawed.

      • Gary Feierbach

        Illuvitus, you clearly believe in an immaterial soul while I do not, so we do not have a common basis on which to discuss this. I hope this gap does not cause you to be prejudiced against future AI beings or uploaded humans when that happens. If you are under thirty, it will surely happen in your lifetime.

  • Peter A.

    “Suppose I wanted to sum 132 and 271. Those who didn’t go to public school can do it quickly in their heads, while those not as fortunate may reach for an abacus, calculator, or computer to do it for them.”

    Do you actually think private schools are better simply because they’re far more expensive and exclusive? 403 – that answer took me less than 3 seconds, without having to resort to using a calculator or even a pen and pad. The public schools I attended were first-class, although that was way back in the 1970’s and 80’s, and as I understand it the quality of teaching – in both private and public schools – isn’t what it used to be.

  • That which is fallen, sinful, and broken, will never be able to write a sophisticated program that is not broken. There will always be flaws in them. The code of the human body is written on various media, including, but not limited to, DNA. DNA is coded in such a way that it works in at least 6 different dimensions at the same time and in the same code space, and that’s just the physical coding. The spiritual is far, far, far more sophisticated yet.

    Humans can’t even code a significant program that works forward and backward in the same code space at the same time, much less the 6 dimensions of DNA. How do you propose that humans will ever be able to code what they simply do not understand?

    It cannot happen.

  • To Post

    “We will never make flying machines because we do not fully understand birds.”
    Non sequitur.
    “We will never make thinking machines because we do not fully understand brains.”
    Non sequitur.
