Elon Musk Dreams of Artificial Intelligence But Misses a Key Question About Human Intelligence

By Michael Cook | Published on June 20, 2016

In the 19th century and for a good part of the 20th century, many people feared that humanity was destined to become lapdogs of bloated industrialists. The world would be divided between the haves and the have-nots, the capitalists and the proletariat.

The fear persists, but nowadays capitalists, à la Mark Zuckerberg, wear hoodies and tennis shoes like the rest of us. Apart from North Korea, there is universal agreement that “to get rich is glorious.” So the great divide of the 21st century and beyond will be based not on money, but on intelligence. And the superior intellects will not even be humans, but machines.

Or at least that’s what Elon Musk, one of the leading figures in Silicon Valley and one of America’s richest men, says.

Musk is a co-founder of PayPal who has since launched visionary projects like Tesla electric cars and a plan to send settlers to Mars on a one-way ticket.

One of his visions, though, is a nightmare in which computers take over from humanity and begin to think for themselves with ever-increasing speed and sophistication. A guru at Google, Ray Kurzweil, calls this “The Singularity” and predicts that it will arrive in 2045.

To forestall this existential risk, Musk has launched yet another project, OpenAI, to create “friendly” artificial intelligence. “The benign situation with ultra-intelligent AI is that we would be so far below in intelligence we’d be like a pet, or a house cat,” he told a programmers’ conference recently. “I don’t love the idea of being a house cat.”

Another of his strategies for the house cat problem is intriguing. He plans to invest in technology which will integrate the brain with the internet. Although this might sound preposterous, he recently identified a promising technology: the neural lace.

This is a term invented by science fiction writer Iain M. Banks to describe a device which acts as an interface between technology and the brain.

In a recent issue of Nature Nanotechnology, researchers report that they have injected a fine mesh of wire and plastic into the brains of mice, where it integrates into brain tissue and “eavesdrops” on neural chatter. This could help the disabled or enhance performance. “We have to walk before we can run, but we think we can really revolutionize our ability to interface with the brain,” says a co-author of the article, Charles Lieber, a nanotechnologist at Harvard University.

The novelty of neural lace is that it can be injected into the bloodstream through an artery and does not require complicated surgery.

Futurist Thomas Frey predicts that the human brain will eventually become integrated into computers. The task of retrieving information that once took 10 hours of poring over books now takes 10 minutes on Wikipedia, and will take 10 milliseconds for a person with a neural lace.

“Somebody’s gotta do it, I’m not saying I will. If somebody doesn’t do it then I think I should probably do it,” Musk told the conference.

The idea was received with gasps of wonder amongst the digerati — and the medical benefits are obvious. The blind could see; the deaf hear; and the lame walk. But Musk’s ultimate goal is not to make disabled people normal; it is to make normal people superhuman. So there are also some serious ethical issues to work through.

Take privacy. If you can hack a computer, you can hack a brain. Integrating your memories and cognitive activity with the internet allows other people to see what you are doing and thinking 24/7 — a kind of upscale parole bracelet.

Take autonomy. In our culture, this is the most cherished of our personal values. But once brains are integrated into an information network, they can be manipulated in increasingly sophisticated ways. And since technology always serves its owner, we could easily become the tools of Google or the government. It’s bad enough suffering from the effects of a cold virus; what if our brains were taken over by a computer virus?

Take responsibility. There might be no crime, as the neural lace could shut down the “hardware” whenever passions threaten to overwhelm social norms — as defined by the network. We might be living in a very “moral” society of highly intelligent sheep.

Musk (and others) are asking: Should we hack the brain so that we can be the masters and not the slaves of artificial intelligence? But this is the wrong question. What they should be asking is: Is there anything that makes us uniquely human, that AI can never replicate? If not, the issues of privacy, autonomy and responsibility are meaningless anyway.

Why isn’t Musk asking these questions? Probably because he has a very impoverished idea of what it means to be human. To give you an insight into his views on this, Musk also told the conference that he believes we are living in a computer simulation created by people living 10,000 years from now.

… given that we’re clearly on a trajectory to have games that are indistinguishable from reality, and those games could be played on any set-top box or on a PC or whatever, and there would probably be billions of such computers or set-top boxes, it would seem to follow that the odds that we’re in base reality is one in billions.

This is a philosophical argument put forward by Oxford philosopher Nick Bostrom: if advanced civilizations run vast numbers of ancestor simulations, simulated minds will vastly outnumber “real” ones, so any given mind is almost certainly simulated. But one expects peculiar metaphysical theories from academics — not from influential billionaires. If Musk really believes that there is only a billion-to-one chance that we are not part of a super-intelligent human’s computer game, what kind of respect can he have for human life? What sort of social policies will he back? The corollary of living as characters in a computer game is that humanity is not precious because it is not real. Life would be a game played for very low stakes.

Musk is an extreme example of the technological mindset which is deeply admired in the developed world. But technological progress has to walk hand in hand with moral progress. President Barack Obama gave some wise advice on this very topic recently, when he spoke at Hiroshima, where the first atom bomb was dropped:

Technological progress without an equivalent progress in human institutions can doom us. The scientific revolution that led to the splitting of an atom requires a moral revolution, as well.

What part is Silicon Valley playing in that moral revolution? Lamentably, it may not be very much at all.


The column originally appeared at Mercatornet.com, by whose kind permission it is reprinted here.
