Here’s What Facebook and Google Did Not Discuss During Hearing on White Nationalism

By Chris White | Published on April 10, 2019

  • Facebook executives did not provide many details during a congressional hearing Tuesday about why their algorithms are unable to distinguish between hate speech and legitimate conservative content.
  • Tech experts argue that the tools Facebook, Twitter and Google use to moderate content might not be up to the task.
  • Google and Facebook told Congress Tuesday that their algorithms make mistakes and sometimes nix content that does not specifically violate company policies.

Google and Facebook executives were grilled Tuesday over their inability to police white nationalism on their platforms, but they gave few answers about why their algorithms are dinging conservative content.

Neil Potts, Facebook’s director of public policy, and Alexandria Walden, counsel for free expression and human rights at Google, spoke to the House Judiciary Committee alongside activists from the Anti-Defamation League, among other groups. House lawmakers asked the two executives about the effectiveness of their artificial intelligence programs.


“That’s why hate speech and violent extremism have no place on YouTube,” Walden said during the House Judiciary Committee hearing, describing what Google is doing to combat white nationalism on its platform. Walden and Potts also acknowledged that their algorithms sometimes have a difficult time telling the difference between legitimate forms of speech and language that is not permitted.

“We don’t and we won’t always get it right, but we’ve improved significantly,” Potts added, referring to Facebook’s new stance on nixing white nationalism. He noted that the company is not prohibiting people from expressing their love for country and community, but it does not permit bigotry and hatred. Walden made similar comments.

“Hate speech removals can be particularly complex compared to other types of content,” she said. “Hate speech, because it often relies on spoken rather than visual cues, is sometimes harder to detect than some forms of branded terrorist propaganda. It’s intensely context specific.” Neither executive gave a detailed explanation as to whether either company is capable of making these distinctions.

Conservatives, meanwhile, argue that Facebook is targeting them because of their politics. President Donald Trump’s social media director, Dan Scavino Jr., for instance, was temporarily blocked in March from making public Facebook comments.

The notice accompanying the block claimed that “some of your comments have been reported as spam,” and that “to avoid getting blocked again,” he should “make sure your posts are in line with the Facebook Community Standards.” Trump assured his supporters in a March 19 tweet that he “will be looking into this!”


Tech experts have expressed concerns that Facebook and Google are not up to the task of distinguishing between legitimate content and hate speech. Emily Williams, a data scientist and founder of Whole Systems Enterprises, for one, argues that Facebook’s lack of transparency about the frailties of its AI and deep-learning tools makes it difficult for people to understand why and how content is being throttled.

“I think that is a very big stretch,” she told The Daily Caller News Foundation in March, referring to media reports that Facebook might be using code designed to deboost suicide content as a way to target conservatives. “If Facebook came out and was transparent, that would be one thing, but a lot of people are arrogant and don’t want to admit their algorithms are imperfect,” Williams said.

She added: “If I were trying to weed out extremists I would not use this code. These codes are very good at what they are trained for but not very good at anything else.” Facebook is a profit-driven corporation, so even if it wanted to target conservatives or liberals, it would probably not use code for a purpose other than the one it was designed for, Williams noted.

Williams believes that Facebook’s algorithm likely has a 70 percent success rate. That means roughly 30 percent of the time, the company’s moderators are nixing conservatives who are sharing provocative content that does not actually violate the Silicon Valley company’s rules.

Walden and Potts were asked repeatedly during the congressional hearing, which was chaired by Democratic Rep. Jerry Nadler of New York, about their handling of the New Zealand shooting. Facebook and Twitter struggled to remove video footage of the shootings at two mosques in March, and some analysts criticized the companies’ inability to immediately ding such content.

 

Follow Chris White on Facebook and Twitter.

Copyright 2019 Daily Caller News Foundation

Content created by The Daily Caller News Foundation is available without charge to any eligible news publisher that can provide a large audience. For licensing opportunities of our original content, please contact [email protected].
