Examining the Past, Protecting the Future: Redefining an Open Internet and Online Privacy

Written by Melat Asmerom

Edited by Alexandra Portmann Muñoz, Eryn Rhodes, and Giulia Silverstein

Unforeseen challenges have emerged from the swift and dynamic evolution of the internet. One of the most prevalent is the dissemination of radical content through online platforms. Extremists use online spaces such as Instagram and Facebook to fundraise, share information, mobilize, and recruit potential members, threatening U.S. national and international security. The risk is real: in cases such as Clayborn v. Twitter (2021), Twitter v. Taamneh (2022), and Gonzalez v. Google (2022), U.S. victims were killed in terrorist attacks associated with online radicalization.

Gonzalez v. Google is especially relevant, as it is the first case in which the Supreme Court was asked to examine the immunity granted to Internet Service Providers (ISPs) under Section 230 of the Communications Decency Act (CDA). This 1996 legislation recognized that ISPs cannot be expected to moderate all user-generated content on online platforms because the sheer volume and frequency of content greatly complicates regulation of its distribution. This standard of immunity outlines the legal expectations of ISPs.

Despite recent technological strides, digital law has remained essentially untouched since the early 2000s. Legislators feared that developing digital law would stifle innovation, when thoughtful regulation could instead promote it. This article analyzes the implications of Section 230 in the context of online terrorism, arguing that additional judicial review is necessary to strike a balance between online freedom and national security. First Amendment rights should be meaningfully accounted for, but a more active judiciary would increase ISP liability and effectively diminish the propagation of malicious content.

Section 230 of the Communications Decency Act has significantly shaped the nature of the internet and online communication. Enacted in 1996, it aimed to provide legal protection to ISPs and online platforms. Until Gonzalez v. Google in 2022, the Supreme Court had not directly interpreted or ruled on the Act's provisions; ISPs operated under a regime of legal ambiguity.

The authoring of Section 230 stemmed from two cases concerning internet liability: Cubby, Inc. v. CompuServe Inc. (1991) and Stratton Oakmont, Inc. v. Prodigy Services Co. (1995). In Cubby, the District Court for the Southern District of New York decided that online messaging boards were passive distributors of content rather than publishers, granting the service provider broader immunity. In Stratton, the New York Supreme Court ruled that online bulletin boards were liable as publishers because the forums exercised editorial control over content. These opposing rulings raised the question of whether ISPs were publishers or distributors; because each faces differing liability standards, appropriate categorization is key. Section 230 diminishes this uncertainty, in part through two critical features known as the “Good Samaritan provisions.” The first underlines that “providers or users of ISPs cannot be treated as publishers,” and the second extends standards of good faith to ISPs, outlining that they are not liable for “voluntary action to remove obscene, lewd, violent, and harassing language.”

While publishers and distributors are regulated under tort law, ISPs are subject only to Section 230. Publishers face greater liability than distributors, particularly for illegal content or defamatory statements: more editorial control over content leads to increased liability. Distributors tend to have little control over what is republished and therefore bear less liability, although a distributor can be held just as liable as a publisher if it is aware of illegal material and does not act to remove it. ISPs, including search engines, websites, and social media platforms, are immune from liability because the content they host is generated by third parties. They are further protected by the First Amendment: while citizens are guaranteed freedom of speech, private businesses retain the right to review content as they see fit.

Because ISPs lack editorial control, they are not legally obligated to review or remove general content unless it is explicitly illegal, such as copyright infringement or sex trafficking. Furthermore, due to the sheer scale of the internet, ISPs are not expected to monitor all content posted, allowing lower-level self-organizing and radical communication to occur unchecked and threatening national security. According to Gill et al.’s United Kingdom-based study on terrorism and online activities, “54% of all cases used the Internet for learning, 44% for the spread of extremist online media, and 32% for attack preparation.”

A newer stance on the issue, supported by Justice Clarence Thomas, entails labeling and regulating online platforms as public goods. Under common law, public goods are “U.S. government-controlled spaces that transmit a vast range of information, people, or goods, in which they have zero control over the content disseminated.” Under this framework, online platforms could not fully review content or enforce content standards, putting the United States in a much more vulnerable position: the State, rather than online distributors, would become responsible and liable for the content of public goods.

The rise of social media and online platforms dedicated to user-generated content created a breeding ground for users to share whatever opinions they may hold, including controversial or extremist ones. In 1998, the U.S. State Department reported 15 websites run by terrorist groups; by 2005, there were over 4,000. This leap coincides with the increased usage of online platforms globally and within radical communities.

The dangers of social media were laid bare in the 2008 terrorist attack in Mumbai. Ten members of Lashkar-e-Taiba, an Islamic terrorist organization, struck Mumbai while communicating with group members in Pakistan via Twitter (now X). Because Twitter serves as a public forum, terrorist affiliates in Pakistan could monitor happenings in Mumbai and advise their associates on courses of action. Through the platform, the terrorists were able to discern key information “such as the movements and positioning of Indian counter-terrorism units,” effectively aiding their mission. This unregulated communication and dissemination of sensitive information illustrates the lack of content oversight on social media platforms.

Furthermore, these platforms create online communities by uniting like-minded individuals, which can inadvertently increase the incidence of terrorism and violence. Social media’s targeted algorithms create an echo chamber in which posts aligning with one’s existing viewpoints predominate. This platform-induced cycle perpetuates intense polarization and radical political and religious ideologies. Facebook, for example, serves as a recruitment tool: members can join sub-groups within the platform based on shared interests, most notably political affiliations. Radical organizations use these groups to recruit participants and communicate with members on a large scale. Such enterprises exploit the impartial facade of social media by centering a group on a moderate cause and then slowly introducing extremist material to reinforce existing members’ beliefs and indoctrinate new recruits.

Similarly, sociologist Zeynep Tufekci found that relying on YouTube’s recommendation algorithms and autoplay feature led her to more radical content than anticipated. Using various YouTube accounts, Tufekci watched innocuous political videos, but over time the platform recommended extremist content on both ends of the political spectrum. While social media can be used for well-intentioned advocacy and mass communication, such as during elections or social justice campaigns, it can also serve as a tool for “organizing and instigating major political riots and even revolutions.”

In Gonzalez v. Google, the social media platform in question is Google-owned YouTube, which primarily serves as a space for video sharing. This feature differentiates YouTube from other platforms, adding another dimension to the spread of harmful content. Videos such as bomb-making tutorials, firearm assembly and shooting instructions, and calls to violent action create a dangerous environment on YouTube that contributes to the training of current and future terrorists and radicals. Although YouTube’s guidelines prohibit actively displaying violence, users can still upload and widely distribute illicit videos before moderators find and remove them. Additionally, videos involving firearm instructions and demonstrations technically evade YouTube’s guidelines, as they do not “actively incite violence.” This loosely enforced system leaves gaping holes for malicious exploitation.

Gonzalez v. Google alleged that Google's platforms are used to spread extremist ideologies and coordinate acts of violence. According to the petitioners, YouTube’s targeted algorithms amplified Islamic State of Iraq and Syria (ISIS) content that encouraged the 2015 Paris attacks, which killed Nohemi Gonzalez. Attorneys representing the Gonzalez family argued that Google could be held secondarily liable under the Anti-Terrorism Act (ATA) for “aiding and abetting a terrorist attack” through its algorithmic recommendations. The ATA was created in 1990 to compensate U.S. nationals for damages incurred through acts of international terrorism. In 2016, it was broadened by the Justice Against Sponsors of Terrorism Act (JASTA), which provided civilians broader relief against persons who have provided support to foreign organizations that engage in terrorist activities against the United States. Moving forward, this implicates ISPs by narrowing the scope of immunity granted by Section 230 and allowing civilians to sue ISPs for facilitating terrorism.

In a closely watched decision, the Supreme Court declined to narrow the Good Samaritan provisions of Section 230, emphasizing the importance of maintaining the balance between online freedom and liability. These provisions hold ISPs to a lower standard than publishers and shield them from liability for voluntarily removing harmful language from their platforms. This continued immunity exemplifies digital law’s stagnation since the birth of Section 230. However, the decision left room for discussion about the necessity of additional judicial scrutiny, and the serious implications of the lack of liability assigned to ISPs raise concerns about public welfare.

The internet, including major platforms like Google, provides a haven for extremist ideologies to spread without significant consequences. Critics argue that some ISPs wield Section 230 as a shield to evade responsibility for the harmful content they host. In the context of online terrorism, this has far-reaching consequences: platforms become safe spaces for malicious coordination and radicalization.

Additional judicial review would tailor Section 230’s provisions to individual cases. Section 230 seeks a compromise between free expression and legal accountability, but that balance has grown tenuous in recent years because of the increasing misuse of online platforms for extremist purposes. The argument for additional judicial review centers on assessing each case individually to determine whether immunity should apply and, if so, the nature of that application. By considering the specific content, intent, and context in which online terrorism is propagated, it would prevent platforms from becoming vehicles of harm. According to the Department of Justice, reforming Section 230 by removing the good faith portion of the Good Samaritan provisions would deny immunity to malicious actors; to qualify as a malicious actor, an online platform would have to facilitate content that violates federal criminal law. This is only one of many proposals for creating a more structured process to evaluate whether immunity applies to ISPs, particularly in cases involving extremist content and online terrorism.

Ensuring that internet service providers are held accountable when their platforms are used to facilitate terrorism fosters trust in the online ecosystem and allows non-violent free speech to flourish. Striking the right balance through additional judicial review can help protect the internet's core principles while mitigating harm. If the scope of Section 230 is narrowed, ISPs could become as liable as publishers, leading to greater censorship of online content. Conversely, if the scope of Section 230 is broadened, ISPs will avoid even the liability assigned to distributors, and dangerous content will go largely unchecked.

Gonzalez highlights an urgent need to reevaluate the immunity provided to internet service providers under Section 230 of the Communications Decency Act. Online terrorism is a grave national security concern. Increased ISP regulation may appear to infringe on online freedom, but allowing radical content to persist creates a hostile and harmful environment. Drawing the line between free use and malevolence is imperative to ensure a safe and vibrant digital space in the 21st century. As the internet evolves, so must the legal framework surrounding it to address the unprecedented challenges of the information age. As illuminated by Gonzalez, an expanded, circumstance-dependent review is a promising solution.
