Elon Musk Dreams of Artificial Intelligence But Misses a Key Question About Human Intelligence

By Michael Cook | Published on June 20, 2016

In the 19th century and for a good part of the 20th century, many people feared that humanity was destined to become lapdogs of bloated industrialists. The world would be divided between the haves and the have-nots, the capitalists and the proletariat.

The fear persists, but nowadays capitalists, à la Mark Zuckerberg, wear hoodies and tennis shoes like the rest of us. Apart from North Korea, there is universal agreement that “to get rich is glorious.” So the great divide of the 21st century and beyond will be based not on money, but on intelligence. And the superior intellects will not even be humans, but machines.

Or at least that’s what Elon Musk, one of the leading figures in Silicon Valley and one of America’s richest men, says.

Musk is a co-founder of PayPal who has since launched visionary projects, from Tesla electric cars to plans to send men to Mars on a one-way ticket.

One of his visions, though, is a nightmare in which computers take over from humanity and begin to think for themselves at ever-increasing speed and sophistication. A guru at Google, Ray Kurzweil, calls this “The Singularity” and predicts that it will happen in 2045.

To forestall this existential risk, Musk has launched yet another project, OpenAI, to create “friendly” artificial intelligence. “The benign situation with ultra-intelligent AI is that we would be so far below in intelligence we’d be like a pet, or a house cat,” he told a programmers’ conference recently. “I don’t love the idea of being a house cat.”

Another of his strategies for the house cat problem is intriguing. He plans to invest in technology which will integrate the brain with the internet. Although this might sound preposterous, he recently identified a promising technology: the neural lace.

This is a term invented by science fiction writer Iain M. Banks to describe a device which acts as an interface between technology and the brain.

In a recent issue of Nature Nanotechnology, researchers report that they have injected a fine mesh of wire and plastic into the brains of mice, where it integrates into brain tissue and “eavesdrops” on neural chatter. This could help the disabled or enhance performance. “We have to walk before we can run, but we think we can really revolutionize our ability to interface with the brain,” says a co-author of the article, Charles Lieber, a nanotechnologist at Harvard University.

The novelty of neural lace is that it can be injected into the bloodstream through an artery and does not require complicated surgery.

Futurist Thomas Frey predicts that the human brain will eventually become integrated into computers. The task of retrieving information that once took 10 hours of poring over books became 10 minutes on Wikipedia, and will become 10 milliseconds for a person with neural lace.

“Somebody’s gotta do it, I’m not saying I will. If somebody doesn’t do it then I think I should probably do it,” Musk told the conference.

The idea was received with gasps of wonder amongst the digerati — and the medical benefits are obvious. The blind could see; the deaf hear; and the lame walk. But Musk’s ultimate goal is not to make disabled people normal; it is to make normal people superhuman. So there are also some serious ethical issues to work through.

Take privacy. If you can hack a computer, you can hack a brain. Integrating your memories and cognitive activity with the internet allows other people to see what you are doing and thinking 24/7 — a kind of upscale parole bracelet.

Take autonomy. In our culture, this is the most cherished of our personal values. But once brains are integrated into an information network, they can be manipulated in increasingly sophisticated ways. And since technology always serves its owner, we could easily become the tools of Google or the government. It’s bad enough suffering from the effects of a cold virus; what if our brains were taken over by a computer virus?

Take responsibility. There might be no crime, as the neural lace could shut down the “hardware” whenever passions threaten to overwhelm social norms — as defined by the network. We might be living in a very “moral” society of highly intelligent sheep.

Musk and others are asking: Should we hack the brain so that we can be the masters, and not the slaves, of artificial intelligence? But this is the wrong question. What they should be asking is: Is there anything that makes us uniquely human, anything that AI can never replicate? If not, the issues of privacy, autonomy and responsibility are meaningless anyway.

Why isn’t Musk asking these questions? Probably because he has a very impoverished idea of what it means to be human. To give you an insight into his views on this, Musk also told the conference that he believes we are living in a computer simulation run by people living 10,000 years from now.

… given that we’re clearly on a trajectory to have games that are indistinguishable from reality, and those games could be played on any set-top box or on a PC or whatever, and there would probably be billions of such computers or set-top boxes, it would seem to follow that the odds that we’re in base reality is one in billions.

This is a philosophical argument put forward by Oxford philosopher Nick Bostrom. But one expects peculiar metaphysical theories from academics — not from influential billionaires. If Musk really believes that there is only a one-in-a-billion chance that we are not part of a super-intelligent human’s computer game, what kind of respect can he have for human life? What sort of social policies will he back? The corollary of living as characters in a computer game is that humanity is not precious because it is not real. Life would be a game played for very low stakes.
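To see the arithmetic behind that “one in billions” figure, here is a minimal sketch of the probability claim. It assumes, purely for illustration, that there are N indistinguishable simulated realities alongside a single base reality, and that we have no evidence favoring any one of them:

\[
P(\text{base reality}) = \frac{1}{N + 1} \approx \frac{1}{10^{9}} \qquad \text{for } N \approx 10^{9}.
\]

On that assumption, the “one in billions” figure simply records how heavily the hypothetical simulated worlds outnumber the single unsimulated one. Bostrom’s own argument is more careful, adding the further conditions that advanced civilizations survive long enough to run such simulations and choose to do so at all.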

Musk is an extreme example of the technological mindset which is deeply admired in the developed world. But technological progress has to walk hand in hand with moral progress. President Barack Obama gave some wise advice on this very topic recently, when he spoke at Hiroshima, where the first atom bomb was dropped:

Technological progress without an equivalent progress in human institutions can doom us. The scientific revolution that led to the splitting of an atom requires a moral revolution, as well.

What part is Silicon Valley playing in that moral revolution? Lamentably, it may not be very much at all.

 

The column originally appeared at Mercatornet.com, by whose kind permission it is reprinted here.

Comments
  • davidrev17

    But according to the standard “inhuman” philosophical anthropology unique to Musk et al.’s materialistic worldview of a strictly (faith-based) “a priori” metaphysical naturalism, we Homo sapiens represent nothing more than the “emergent” biochemical outcome of a purely fortuitous/unguided, “self-organizing” cosmic evolutionary process. “But is it science?”

    And yet these metaphysically-motivated, atheistic adherents of “AI” (like Musk, Bostrom, Kurzweil et al.) still seem to be strangely ignorant of the scientific fact that this currently “falsified” view of nature – i.e., the “old physics” of deterministic/reductionist classical physics – has long since been overturned, replaced, or supplanted by the “new physics” of quantum mechanics over the last 80-plus years.

    So it has been demonstrably clear in scientific circles during the 21st century that the human brain is NOT anything like a computer, no matter how one slices it (no pun intended) – especially when viewed from the sound scientific perspective of 21st-century quantum physics.

    Don’t ya’ just love science fiction?

    “Facts are stubborn things…”

    * * *

    “The classical-physics-based claim that science has shown us to be ESSENTIALLY MECHANICAL AUTOMATA has had a large impact upon our lives: our teachers teach it; our courts uphold it; our governmental and official agencies accept it; and our pundits proclaim it.

    “Consequently, WE ARE INCESSANTLY BEING TOLD THAT WE ARE PHYSICALLY EQUIVALENT TO MINDLESS ROBOTS, AND ARE TREATED AS SUCH. Even we ourselves are confused, and disempowered, by this supposed verdict of science, which renders our lives meaningless.

    “We are now in the twenty-first century. It is time to abandon the mechanical conception of ourselves fostered by empirically invalidated nineteenth-century physics.

    “Contemporary physics is built on conscious experience, not material substance. Its mathematically described physical aspect enters as potentialities for future experiences. The unfolding of the future is governed by von Neumann’s mathematical laws, into which our conscious free choices enter as essential inputs.” (My emphasis, of course.)

    — Distinguished “Orthodox/von Neumann” mind-brain quantum physicist Dr. Henry P. Stapp, “Quantum Theory of Consciousness,” Paris talk (2013). Dr. Stapp’s academic training traces back through four Nobel laureates, including such luminaries as Wolfgang Pauli and Werner Heisenberg.

  • Kaleb VonBerg

    I find the entire AI “apocalypse” discussion rather interesting. From my very limited understanding, the perceived threat, according to many experts in technology, hinges on the belief that AI will one day surpass human intelligence. This way of thinking seems to conflict with the notion that a created thing cannot be greater or more intelligent than its creator. Now, perhaps it is possible for a created thing to surpass its creator in every way (though I am very skeptical of this), and if so, the AI “apocalypse” discussion becomes very important.

    Even if a created thing can surpass its creator, I also find it very hard to believe that AI will actually be able to think for itself. Admittedly, I am anything but an expert in AI, but as far as I know computers, even the most powerful and “intelligent,” strictly operate on the basis of the computer code and information we as humans have put into them. Even if we can truly create a computer that can “think” for itself, is it possible for the computer to exert control over us? Could we not cut off its power source, effectively killing it?
