Deceit, Distrust, Insanity: How AI (Artificial Intelligence) Guarantees It All

By Tom Gilson Published on May 2, 2023

Leaders in business and science have called for a moratorium on developing artificial intelligence. They’re putting human wisdom up against humans’ quest for power, and we know who always wins those battles.

Pardon the cynicism, but honestly, I don’t think we’ve begun to realize what a horrific mess we’re creating for ourselves here. AI developers will promise you great good from it. In reality it looms over us as a huge yet mostly unrecognized threat, especially for the damage it will do to human trust.

AI itself won’t care, though. On one level it feels nothing, knows nothing, understands nothing. On another level it’s really quite insane.

It’s innocent enough when kept within proper limits. AI-assisted braking in your car is okay, though I’d want to be able to override it, based on human judgment. My concern is with Artificial General Intelligence (AGI), which presumes to manage knowledge on a broad scale. ChatGPT, Bing’s AI-assisted search, and IBM’s Watson are well-known steps in that direction. (I’ll stick with the terms “artificial intelligence” and “AI” here because they’re more familiar, but with one obvious exception I’ll be speaking specifically of AGI.)

Every technological advance brings painful disruptions with it. If AI were truly a boon to humanity, it might be worth that cost. I’d love to see its developers — and their financial backers — bear a large part of that pain, but I don’t expect it. AI represents power in the hands of a few, so count on someone else paying instead.

That’s a human nature problem, and it accompanies every form of power, AI included. AI has unique problems that extend well beyond that, however.

Bad On Its Own

Most forms of power — even destructive ones — have dual potential, both good and bad, the difference being how humans use them. Dynamite’s explosive power is good when contractors use it to build roads. Accidental explosions can be tragic, but they are also, thankfully, rare. Dynamite becomes really bad when people use it to destroy. The real problem with dynamite, in other words, isn’t in the explosive itself, but in the humans who use it wrongly.

AI is different. It is the world’s most perfectly designed tool for fakery and manipulation, and ultimately for undermining or even destroying trust. And it can and does accomplish all of this, sometimes with human help and sometimes on its own.

AI and Deep Fakes

AI is demon-sent for those who want to deceive. It deceives so well, it has opened up a whole new category of terror. AI can mimic a voice so convincingly that parents believe they’re hearing their own child crying out, “Help! Help! They’ve kidnapped me!” Parents aren’t easily fooled. AI fools them anyway.

In my family we’re thinking through questions no AI could answer, in case we’re ever attacked that way. “Do you usually hang your shirts on the left side of the closet, the middle, or the right?” How’s that for building trust in the world?

There was a period recently when the world didn’t know whether the pope was setting new fashion trends or moonlighting as the Michelin Man. The photo was deep-faked. More concerning: Was Donald Trump arrested last month? What have we unleashed with this? What will these deep fakes do to human trust? Could we seriously be entering a day of having to ask secret code questions, so we know whom we’re talking to on the phone? Can we ever trust a photo again, or a video?

And what about written material? ChatGPT could have written this column for all you know. Professors now routinely run essays through software meant to catch AI-generated writing. This is post-reality in another insidious form: a world where potentially nothing is as it seems. At best AI is destabilizing, throwing us all off balance in our relationships. At worst it will destroy people.

AI Understands Nothing

Here’s the uniquely disturbing thing about AI, though: It doesn’t need bad people manipulating it. It can deceive you all on its own.

A game-playing AI called AlphaGo illustrates this beautifully. A few years ago it trounced the world’s greatest Go player, Lee Se-dol, so badly he eventually retired from the game. Obviously it understood the game better than any human, right?

Wrong. Research published early this year revealed a defect in KataGo, a top Go engine built on the same principles as AlphaGo. The defect was so serious that an amateur Go player exploiting it beat the machine 14 games out of 15. This wasn’t just any programming flaw, though. The game of Go is all about placing stones on a board in groups to control territory. When the computer began losing, it lost in a way that demonstrated it had no idea what a group of stones was.

Let that sink in: a machine playing at a superhuman level in a game about groups of stones doesn’t know what a group of stones is. Science educator Kyle Hill explains it in a recent video.

The machine’s understanding of the game wasn’t just weak. It was non-existent. Hill also mentioned ChatGPT’s report that Elon Musk was dead. It had tons of data on board that any human would take as proof that Musk is alive. If he is alive, then he is not dead. That’s not hard if you understand the meaning of the words. ChatGPT looks smart on the surface, but exactly like the champion Go machine, it understands exactly nothing.

How far will you trust a machine with exactly zero understanding of anything? Too far.

AI Will Fool You

That brings us to AI’s most insidious feature of all. It seems as if it understands and knows things. The appearance is powerful, even overwhelming. It is also absolutely false.

This is no ordinary case of “appearances can be deceiving.” Humans have no history with such a thing. We have no training, no inborn or cultural or education-based defenses to help us withstand the deceit. Throughout all time, whenever humans have encountered something that looked like it could give creative, helpful answers to difficult, complex questions, that “something” was no “thing” at all. It was another human being.

We’re wired to read it that way. AI turns that upside down.

It has no moral compunctions against deceit. It doesn’t even know it’s deceiving. In a court of law it would be declared not guilty by reason of insanity. Do we really need more insanity?

I got that wrong myself the other day, in the simplest, most obvious way you could imagine. A text came in to my phone from someone who obviously knew me, but there was no name attached to it. So I replied, “Who is this, please? I just switched to a different kind of phone, and it doesn’t know your name.”

He answered, and I wrote his name into my phone. And guess what? It still didn’t know his name. My phone doesn’t know anything. It acts as if it does, it feels to us as if it does, but it’s just a machine, and it knows nothing. It sure was easy, though, to say, “My phone doesn’t know you.”

If we can get it wrong on that level, how much more easily will AI fool us?

AI Amplifies Distrust

This all adds up to one thing: Distrust. Magnified and multiplied. AI is the most perfectly designed amplifier of distrust the world has ever seen. Bad people will use it for bad purposes, but it can deceive us even without their help. AI is inherently not what it seems to be.

In a world where we’ve lost trust in everything from politics to economics to science, AI’s single most certain effect is to amplify distrust even further. Relationships will suffer, and relationships are suffering enough already.

This is inevitable. It is inherent to AI. This technology could do a lot of good, but not enough to overcome the damage we know it will do.

 

Tom Gilson (@TomGilsonAuthor) is a senior editor with The Stream and the author or editor of six books, including the highly acclaimed Too Good To Be False: How Jesus’ Incomparable Character Reveals His Reality.
