AI Legal Theories

By Richard Stevens | Published on May 11, 2023

In a recent article on The Stream, “How to Stop Troubling Abuse From Artificial Intelligence,” Professor Robert J. Marks reported how artificial intelligence (AI) systems had made false reports or given dangerous advice:

  • The Snapchat ChatGPT-powered AI feature “told a user posing as a 13-year-old girl how to lose her virginity to a 31-year-old man she met on Snapchat.”
  • Snapchat’s ChatGPT reportedly advised a user posing as age 15 how to have an “epic birthday party” by giving “advice on how to mask the smell of alcohol and pot.”
  • When a 10-year-old child asked Amazon’s Alexa for a “challenge to do,” Alexa reportedly suggested: “Plug in a phone charger about halfway into a wall outlet, then touch a penny to the exposed prongs.”
  • Jonathan Turley, the nationally known George Washington University law professor and commentator, discovered he had been terribly defamed. ChatGPT had published a sexual harassment charge supposedly made against him, which included entirely false statements of “fact” and referenced a non-existent newspaper article.

Prof. Marks suggested that instead of having government grow even bigger trying to “regulate” AI systems such as ChatGPT: “How about, instead, a simple law that makes companies that release AI responsible for what their AI does. Doing so will open the way for both criminal and civil lawsuits.”

Strict Liability for AI-Caused Harms

Prof. Marks has a point. Making AI-producing companies responsible for the actions of their software is feasible using two existing legal ideas. The better known of the two is strict liability. Under general American law, strict liability holds a defendant responsible for committing an action regardless of whether the defendant acted with intent, recklessness, or negligence.


That concept appeared in the 20th century as strict products liability. A person harmed by a product need only prove that a defect in the product’s design or manufacture caused the harm, or that the user wasn’t adequately warned of the product’s risks. The harmed person doesn’t have to prove the product supplier was “at fault.”

Strict products liability could apply to AI systems, including bots like ChatGPT. Following Prof. Marks’ thought process, if ChatGPT gave information or advice that a user reasonably relied upon, and the user suffered or caused damage as a result, then the bot’s maker would be liable for the harm.

Untethered AI Systems May Be Exceptionally Dangerous

Another possible legal concept is abnormally dangerous activity liability. Dating back to the 19th century, the rule applies in situations where a landowner maintains a hazard, for example, a toxic waste processing plant next to a residential area. Toxic waste processing (1) is not a “common” activity, but (2) does create a foreseeable and highly significant risk of physical harm to people who do not benefit from the activity, and (3) can harm such people even if everyone involved acts reasonably. With these three factors present, the toxic plant is an abnormally dangerous activity. Its owners and operators can be strictly liable for harms caused to other people, regardless of fault.

Those three factors could apply to AI. People fear that AI systems like ChatGPT and other bots, along with other online information and advising systems, can invade and even direct the worlds of youngsters and naively trusting adults. AI systems are manufactured and hosted by a small number of providers, so they are not a “common activity” that everyday people engage in or know all about. Already we see bots giving objectively dangerous information to people who do not benefit from such information. (Not to mention deliberate criminality using bots.)

Moreover, AI systems can harm users, especially kids, even if the AI programmers thought they were acting reasonably when designing the systems. AI systems might well be classified as abnormally dangerous activities, and that means the manufacturers and providers could be held strictly liable for harms the AI systems cause.

Sue the Bums vs. Call the Cops

Society will have to decide whether laws addressing AI-caused harm should be enforced by police and criminal prosecutions, or by injured victims suing for damages in civil courts. Typically, strict liability concepts make it easier to sue manufacturers and other providers in civil court for damages caused by their products. A typical civil lawsuit, however, costs a lot to maintain and can take years. Special fast-track courts could be created to focus on getting victims relief from AI-caused harm.

Laws could conceivably be passed to authorize police and prosecutors to investigate and charge people with the crimes their AI products commit. Because governments can prosecute crimes speedily, and victims don’t have to pay for lawyers to make a case, the criminal law system could effectively deter AI misconduct.

Yet the idea of government police rounding up programmers for chat bot messages seems a tad authoritarian. Recent reports say Chinese authorities arrested a man for using ChatGPT to write a news story falsely claiming a train accident had killed nine people. Indications are the writer faces five to ten years in prison.

The threat of strict liability lawsuits or prosecutions, as Prof. Marks suggested, may well reduce the dangers of AI systems and bots. Applying those existing concepts of law, rather than inventing new laws and government bureaucracies, can meet the challenge of AI bot misconduct.


Richard W. Stevens is a lawyer, author, and a Fellow of Discovery Institute’s Walter Bradley Center for Natural and Artificial Intelligence. He has written extensively on how code and software systems evidence intelligent design in biological systems. He has authored or co-authored four books and has written numerous articles and spoken on subjects including the Bill of Rights, artificial intelligence, genocide studies, intelligent design, and Christian apologetics. His fifth book, Investigation Defense, is forthcoming.
