On Sept. 12, I had the opportunity to participate in a small group discussion with Pulitzer Prize-winning journalist Glenn Greenwald, who came into the international spotlight for assisting Edward Snowden in exposing the surveillance tactics of the United States' National Security Agency.
Last Thursday, I attended a panel discussion featuring Nadine Strossen, the former president and first woman to head the American Civil Liberties Union, the nation's largest and oldest civil liberties organization.
Both speakers engaged with the topic of free speech, an issue that has been hotly contested on the Williams campus recently. Strossen holds that speech must not be censored but should instead be confronted with counter-speech and activism. Greenwald also opposes censorship, arguing that censoring speech on college campuses will produce graduates who are unprepared to engage with the harmful ideas that will likely confront them after graduation.
Both Greenwald and Strossen have faith in the human capacity to engage with hate speech, to pick it apart and move on. Their argument echoes John Stuart Mill's thinking: Speech that you disagree with provides a valuable opportunity either to strengthen your conviction or to change your mind. Greenwald, however, extends this logic into the realm of social media, where algorithms rule.
Internet-based algorithms serve one purpose: to engage users and generate revenue. They achieve this by promoting content that is sensational and shocking, content that will generate the most clicks and keep users on the site as long as possible. They are frighteningly effective; YouTube estimates that 70 percent of the videos users watch are driven by its algorithmic recommendations. The human element of engaged "empathetic listening" championed by Strossen and Greenwald disappears. Online discourse no longer involves two humans in conversation. Instead, young children are targeted with radical videos on YouTube and subsequently driven to commit acts of violence, and seniors are targeted with disinformation on Facebook intended to influence how they vote, with disturbing success.
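To make the dynamic concrete, consider a minimal sketch of an engagement-first feed ranker. This is a hypothetical illustration, not any platform's actual code; the field names and weights are invented for the example.

```python
# Hypothetical sketch of an engagement-first feed ranker.
# Field names and weights are invented for illustration;
# real platform rankers are vastly more complex.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_clicks: float      # model's estimate of click-through
    predicted_watch_time: float  # estimated seconds of attention
    outrage_score: float         # 0 to 1: how provocative the content reads

def engagement_score(post: Post) -> float:
    # Revenue tracks attention, so every term rewards holding the user.
    # Provocative content earns a multiplier because it reliably
    # produces clicks and shares.
    return (post.predicted_clicks
            + 0.5 * post.predicted_watch_time) * (1 + post.outrage_score)

def rank_feed(posts: list[Post]) -> list[Post]:
    # Note what is absent: no term asks whether the content is true,
    # civil, or safe. The objective is attention alone.
    return sorted(posts, key=engagement_score, reverse=True)
```

Nothing in such an objective distinguishes a careful rebuttal from a conspiracy theory; the more provocative item simply scores higher.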
Greenwald's extension of absolute free speech to the internet is reckless, and its flaw is captured by a metaphor he uses to explain his stance. Greenwald argues that someone standing on a street corner with a megaphone spewing hate speech deserves the same protection as someone posting hate speech on Facebook. It is the classic argument that you should blame the person, not the medium. What Greenwald neglects is how profoundly the medium shapes the reach of those messages. Even standing on the most crowded street corner in the world, you would have no chance of reaching as many people as a single viral post on Facebook.
Strossen's premise of meeting hate speech with counter-speech is much more difficult to execute online. People do not take the time to critically engage with the content they see. There is no real person in front of them with whom they could empathize and perhaps reach a mutual understanding. Not only is there a danger of consumers accepting what they read at face value, but there is also a threat that the content was never produced by a real person at all. Greenwald argues that hate speech and disinformation online need only be met with more persuasive content. This again neglects the core function of algorithms: to generate revenue. A well-developed critical analysis of white nationalism will not generate the clicks that a vitriolic conspiracy theory will, simply because the latter is more provocative.
Now, I am not advocating for government oversight and censorship in the traditional sense. We have seen that our own Congress does not even understand how these sites work. In one famous exchange during Mark Zuckerberg's testimony on Capitol Hill, Senator Orrin Hatch asked the Facebook CEO how the company sustains itself when users do not have to pay to join. Zuckerberg smirked and replied flatly, "Senator, we run ads." Instead, these companies need to find ways to minimize the reach of content that can cause real-world harm.
Facebook has taken the lead on this front, introducing measures that limit the spread of posts that approach its line of prohibited content. More recently, Zuckerberg announced the introduction of an "Oversight Board" that will decide what content is allowed on Facebook through a case-by-case system reminiscent of our own courts. When the board finds that a piece of content should not exist on the platform, its ruling sets a precedent that Facebook can then use to remove similar content.
To have faith in the human ability to engage with hateful or damaging content is shortsighted and idealistic given how algorithms function. We have seen the disastrous effects of unregulated and anonymous speech on 8chan. We have also seen how authoritarian governments abuse online censorship. Large social media and internet platforms therefore have a responsibility to ensure that the content amplified on their platforms is fair. Hateful and dangerous content should not be broadcast to the top of anyone's newsfeed. It can still exist on the platform, but it should not be promoted by an algorithm seeking to generate engagement.
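The distinction between hosting content and amplifying it can be expressed just as simply. In a hypothetical ranker like the one sketched above, demotion is a small policy change: flagged content remains on the platform but receives no algorithmic push. The flagged_harmful field and the demotion policy below are assumptions for illustration, not a description of any platform's actual system.

```python
# Hypothetical sketch: host flagged content, but never amplify it.
# Field names and the demotion policy are invented for illustration.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    engagement_score: float  # output of an attention-maximizing model
    flagged_harmful: bool    # set by moderators or a classifier

def rank_feed(posts: list[Post]) -> list[Post]:
    # Flagged posts are not deleted -- they stay on the platform --
    # but they get no ranking boost, so the algorithm never
    # broadcasts them to the top of anyone's feed.
    promotable = [p for p in posts if not p.flagged_harmful]
    demoted = [p for p in posts if p.flagged_harmful]
    promotable.sort(key=lambda p: p.engagement_score, reverse=True)
    return promotable + demoted
```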
Bernal Cortés ’22 is from South Bend, Ind.