The Problem With AI Is People

Misusing Tech to Become Superhuman

By John Stonestreet & G.S. Morris Published on May 11, 2024

There is no shortage of films predicting the dangers of rogue artificial intelligence — Terminator; The Matrix; I, Robot; Avengers: Age of Ultron, to name a few. Perhaps these dystopian science fictions will one day become fact, but for now, at least, AI is not poised to take over the world.

A recent, hilarious example is when X’s “Grok” AI tried to summarize Klay Thompson’s poor shooting night in the NBA playoff “play-in” game. The typically reliable Golden State Warrior went 0 for 10 from the three-point line in a loss to the Sacramento Kings. Pulling from online fan content that teased Thompson for “throwing up bricks,” Grok generated a story reporting that Thompson vandalized several people’s houses with actual bricks.

The Real Danger of AI

As USA Today recently observed, AI, on its own at least, is clearly still working out some kinks. But, of course, AI is not on its own, so for now the real threat remains fallen humans using AI to spread falsehoods, gain power, hurt others, and try to become superhuman.
Examples of this also abound. Writing in The Wall Street Journal, Jack Brewster recently described “How I Built an AI-Powered, Self-Running Propaganda Machine for $105.” It took him just two days to launch his so-called “pink slime” news site capable of generating and publishing thousands of false stories every day using AI, which could self-fund with ads.

The whole process required, as he put it, “no expertise whatsoever,” and could be tailored to suit whatever political bias or candidate he chose. Readers would have no clear way of distinguishing the auto-generated fake news from real journalism, or of knowing that Buckeye State Press, the name Brewster assigned his phony website, was only a computer making things up according to his political specifications. Even worse, the one human that Brewster spoke with in the process of setting up the site was a web designer in Pakistan who claimed to have already built more than 500 similar sites (and likely not for reporters interested in exposing this problem). The news and information rating service NewsGuard has identified more than a thousand such “pink slime” news sites so far, and claims that many are “secretly funded and run by political operatives.”

What Does It Mean to Be Human?

Questions like, “What is the truth?” and “Who is actually telling it?” will become more important than ever as AI technology takes off and is used by unscrupulous people to flood the internet, newsfeeds, and airwaves with misinformation. Christians will need a great deal more discernment than we currently cultivate, and a hesitancy to believe everything we see, especially when it reinforces our biases and assumptions.

More importantly, we’ll need to carefully weigh which tasks and activities are irrevocably human and shouldn’t be outsourced to machine learning. This question is already urgent. The Associated Press reported last year on an AI avatar that “preached” a “sermon” to a gathering of German liberal Protestants. Recently, a major Catholic apologetics website announced an “interactive AI” chatbot named “Father Justin,” which supposedly provides “a new and appealing way for searchers to begin or continue their journey of faith.” It didn’t take long for “Father Justin” to be “demoted.” If it needs to be said, following spiritual advice from AI is a terrible idea.
In his book 2084: Artificial Intelligence, the Future of Humanity, and the God Question, Dr. John Lennox argued that it’s not alarmist to note how some of the main AI pioneers openly espouse transhumanism. At the heart of this worldview is a very old lie, first whispered by a snake in a Garden, that humans “shall be like God.” We can acknowledge that without denying the legitimate, helpful, and humane uses of AI.

The only way to distinguish the legitimate from the dangerous, the false, and the nefarious will be to know clearly what it means to be human. Perhaps, then, the real danger of AI is timing. Years ago, philosopher Peter Kreeft mourned that just as our sticks and stones had turned into thermonuclear bombs, we had become moral infants. In the same way, just as the Western world lost touch with who we are as embodied image bearers, with the inherent relationship between language and truth, and with the limits of our humanity, our technologies have handed us this incredibly important and powerful tool. As with most powerful tools, the real danger of AI is those who use, or rather misuse, it.

Christians, then, have a necessary and important gift for the world: the truth about what it means to be human, including our physical and moral limits.
John Stonestreet serves as president of the Colson Center for Christian Worldview. He’s a sought-after author and speaker on topics related to faith and culture, theology, worldview, education, and apologetics.

Shane Morris is a senior writer at the Colson Center and host of the Upstream podcast as well as cohost of the BreakPoint podcast.

Originally published on Breakpoint.org: BreakPoint Commentaries. Republished with permission of The Colson Center for Christian Worldview.