
ACFCS Special Contributor Op-Ed: Gonzalez v. Google Allows SCOTUS to Curb the Rise of Terrorism on Social Media


The Skinny:

  • When the Supreme Court agreed to hear the case Gonzalez v. Google, Inc. this term, speculations and theories exploded across the Internet. Why? The case could usher in a watershed moment and bring a painful, long-delayed reckoning for social media and tech titans like Google, which owns YouTube, and Facebook, now Meta, among others.
  • These companies and others have used broad protections crafted at the dawn of the Internet – at issue, Section 230 of the 1996 Communications Decency Act – as a shield against the hateful and dangerous words used by others on their platforms. The chief legal argument: these companies merely provide a service for third-party users; they are not content creators themselves.
  • The counterargument: These companies are platforms for words that, when amplified and made available to billions of users around the globe, have caused suffering and death through terror attacks at home and abroad, fomented hate among a bevy of seething groups, and become the tipping point for murderous acts in the real world, like mass shootings.

Freedom of speech is among the most important rights guaranteed by the U.S. Constitution.

Many believe it’s what makes America, America. But it’s time to draw the line at big tech companies, which are now awaiting a Supreme Court ruling that may change online platforms’ liability for terrorist activity on social media.

When the Supreme Court agreed to hear the case Gonzalez v. Google, Inc. this term, speculations and theories exploded across the Internet, with commentators analyzing past decisions and statements made by the various Justices to predict how this case may turn out.

The case could usher in a watershed moment for the Internet and bring a painful, long-delayed reckoning for social media and tech titans like Google, which owns YouTube, and Facebook, now Meta, among others.

These companies and their ilk have used broad protections crafted at the dawn of the Internet as a shield against the hateful and dangerous words used by others on their platforms.

Words that, when amplified and made available to billions of users around the globe, have caused suffering and death through terror attacks at home and abroad, fomented hate among a bevy of seething groups, and become the tipping point for murderous acts in the real world, like mass shootings.

The case before the Supreme Court will weigh many legal and free speech issues, including the availability and ease of use of online message platforms, the platforms’ responsibility for screening and scrutinizing speech that violates their own terms of service, and their accountability when these forums are used for illicit ends.

Moreover, the Justices will explore the legal nuances between a neutral platform for third parties, where billions of people make comments all the time, and a content creator, which thoughtfully, or algorithmically, curates and creates new content for users.

Curated content that, even if assembled by a software program, ends up spreading a message of hate to indoctrinate others – while the companies involved profit from the burgeoning views, even sharing ad revenue with the blacklisted groups, like ISIS, that create the toxic messages and gruesome videos.

The families suing YouTube, for example, believe the company’s placement of targeted ads and algorithmic recommendations of videos transforms YouTube from an interactive computer service (ICS) into an information content provider (ICP), stripping the platform of its Section 230 immunity.

As defined by Section 230(f)(3), an ICP is any “person or entity that is responsible, in whole or in part, for the creation or development of information provided through the Internet.”

So how can you view these legal theories in a more tangible way?

Think of it like this: when the Internet was still in its infancy and sites allowed people to make comments on forums or in chat rooms, if a user wrote something that could be perceived as hate speech, it was highly unlikely the company or Internet service provider would get sued – for many reasons.

The sites back in those days had little reach. The only people seeing the message would be users or visitors, and they had to find it – it wasn’t automatically generated and suggested based on a video they were watching.

Further, whatever improper message was on that old school forum, it wasn’t then picked up and shot all over the world to billions of users – making money for the poster and the company itself.

It just kind of sat there, and whoever happened upon the site and scrolled back far enough might find the post.

To take the concept beyond the Internet – to a physical medium, such as a newspaper – think of it like this: if you don’t like what is printed in a newspaper’s op-ed section, or even the paper itself, you don’t sue the manufacturer of the printing press.

The printing press printed the paper, but it didn’t create the content. It is a neutral party, just a non-thinking machine – without any fancy algorithms.

In another, more concrete, if slightly ridiculous, example to illustrate the legal theories at play in this case: if someone puts hate speech on the side of a building, you don’t sue the owner for providing the wall upon which the words are displayed.

The building, while having many walls that could be used for hate speech and could be seen by many passing cars, is a neutral party.

Now, if the owner of the building goes outside, sees the hate speech, and adds a few more letters, that is a different story. That individual is not a neutral party and, in fact, created new content to view – analyzed by human eyes and edited by a human brain.

These are some of the issues Gonzalez may resolve in the virtual world.  


When it comes to immunity from third-party messages, times, they are a-changin’

However, until Gonzalez is heard before the Court, until each side makes its case before all nine Justices, and until each Justice has an opportunity to interrogate and probe each legal theory – the outcome remains unknown.

Notably, where once the law favored allowing tech companies more deference and less liability, the climate around these kinds of cases has been trending in the other direction.

White supremacist manifestos are published on Facebook well in advance of a school shooting; the YouTube “Incel” community grows and indoctrinates more young men; the Islamic State continues to recruit, radicalize, and grow on Twitter.

Congress has also taken notice, with a rising chorus calling for more oversight and accountability.

Lawmakers from both sides of the aisle have condemned Section 230, and the current Congress has introduced more than 20 bills that propose to amend or repeal it, according to published reports.  

Some examples: the See Something, Say Something Online Act of 2021 and the Protecting Americans from Dangerous Algorithms Act, both aimed squarely at creating accountability for interactive computer services that contribute to the proliferation of terrorist content online, according to analysis by Lawfare.

“It remains to be seen whether the legislature or the Supreme Court will address these questions first, and to what extent one may affect the outcome of the other,” the analysis concluded. Meanwhile, not one of the tech companies giving a platform to these voices has been held to account for whatever role they may have played in encouraging violence and terrorism.

People are rightfully angry, and big tech gains more power and wealth, profiting from death.

Gonzalez is the Supreme Court’s opportunity to impose a legal requirement on tech companies to combat the rise of terrorism on social media and other online mediums.

Or, at the very least, open the door to accountability and legal avenues for compensation when they provide online platforms that play a role in terror attacks.  


“The twenty-six words that created the Internet.”

Enacted right as the Internet began its massive boom into the mainstream, the Communications Decency Act of 1996 (CDA) was passed in an effort to regulate the new digital space.

It prohibits any individual from knowingly transmitting “obscene or indecent” messages to a recipient under the age of 18.

It also outlaws the “knowing” display of “patently offensive” materials in a manner “available” to those under 18. Given that the Internet is available to people of all ages, online platforms must abide by the CDA.

However, in an effort to protect the freedom of speech that is seen by many as essential to the American identity, Congress wove an exception into the law that protects internet platforms from liability for third-party speech.

Section 230(c)(1) of the Communications Decency Act says, “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

Therefore, all of the big tech companies – Twitter, Meta, Google, to name a few – cannot be held liable for the posts of their users.

And in so few words, many credit the passing of this law as the moment that the Internet, as we know it, was born.


Gonzalez raises necessary and important questions about Section 230.

The Gonzalez lawsuit was brought by the family members of Nohemi Gonzalez, an American victim of a 2015 terror attack in Paris. The day after her death, the Islamic State (IS), also known as ISIS, claimed responsibility on YouTube for the attack.

The Gonzalez family alleges that YouTube had become integral to the IS’s recruitment, radicalization, and coordination, and that Google, which owns YouTube, knew this.

Specifically, the Gonzalez plaintiffs point to two of the twelve IS actors responsible for Nohemi Gonzalez’s death, and videos those two individuals posted to recruit jihadi fighters to join the IS.

The Gonzalez family argues that the YouTube algorithm recommended IS videos to users and assisted users in finding IS content, leading to recruitment and radicalization through social networking.

Adding insult to injury, the plaintiffs allege, Google profited from this propagation of IS content through targeted advertising that promoted further IS content.

Because a portion of YouTube advertising revenue goes to the content creator, Google effectively paid the IS for its videos.

The Gonzalez plaintiffs argue that Google reviewed, approved, and promoted content posted by the IS.

The counterargument from the tech companies: these videos were not analyzed and approved by humans, which is what would essentially amount to creating and publishing new content.

Instead, these were computational decisions driven by various data points, views, and likes, merely suggesting what billions of users may or may not like – and algorithmic choices do not an information content provider make.
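
To make these “computational decisions” concrete, below is a minimal, purely illustrative sketch of engagement-driven ranking, written in Python. It is not YouTube’s actual recommender – the Video fields, the scoring weights, and the recommend function are all invented for illustration – but it captures the crux of the ICS-versus-ICP dispute: the program ranks and surfaces existing third-party content based on signals like views and likes, without ever reading or authoring the content itself.

from dataclasses import dataclass

@dataclass
class Video:
    # Hypothetical fields; a real system tracks far richer signals.
    video_id: str
    views: int
    likes: int
    watch_overlap: float  # 0.0-1.0: overlap with this user's viewing history

def score(video: Video) -> float:
    # Rank purely on engagement signals; the content itself is never examined.
    like_rate = video.likes / max(video.views, 1)
    popularity = min(video.views / 1_000_000, 1.0)
    return 0.5 * video.watch_overlap + 0.3 * like_rate + 0.2 * popularity

def recommend(candidates: list[Video], k: int = 3) -> list[Video]:
    # Surface the k highest-scoring videos, whatever their subject matter.
    return sorted(candidates, key=score, reverse=True)[:k]

# The recommender has no idea what the videos contain: a niche video with
# high overlap with the user's history outranks a far more popular one.
feed = recommend([
    Video("cat-compilation", views=2_000_000, likes=90_000, watch_overlap=0.2),
    Video("extremist-recruitment", views=50_000, likes=4_000, watch_overlap=0.9),
])

Whether surfacing content this way is closer to the neutral printing press or to the building owner who adds a few letters of his own is precisely the question before the Court.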

The Ninth Circuit Court of Appeals held that Google was shielded from liability by Section 230.

The Gonzalez family raised several arguments to challenge Section 230’s application to their case, including the geographic limitations of Section 230’s reach and Congress’ lawmaking intentions, but to no avail.

The Ninth Circuit was more persuaded by Google’s arguments that the conduct in question, despite having international implications, took place in the U.S. and was therefore covered by Section 230.

The court also agreed with Google that recent amendments to other laws, including the Anti-Terrorism Act (ATA), did not imply that Section 230 was repealed.

Such a repeal would be Congress’s prerogative.

For the Supreme Court to get involved, one law would have to essentially supersede or cancel out another, creating a conflict with no easy legal recourse.

In this instance, the court believed that the ATA and Section 230 could coexist – but that peace may have an expiration date.

While the court did not accept Google’s assertion that the Gonzalez family was trying to treat Google as a publisher of user content, the court still emphasized that Google was not a publisher of content by virtue of offering a content platform.

The court also did not agree that Google materially contributed to or supported IS activity through its algorithm or its profit from IS content.

Moreover, the court wrote that Google’s algorithms did not treat the IS and its content any differently than any other user and their content, and this meant that Google was entitled to Section 230 protection.

Ultimately, the court held that economic self-enrichment is not sufficient motivation to allege that Google was liable for international terrorism.

The Gonzalez family then appealed to the Supreme Court to hear their case, and over Google’s opposition, the Supreme Court agreed.

The Internet has long been a place where violent extremism festers and attracts newcomers to violent communities.

In 1999, Eric Harris and Dylan Klebold murdered thirteen people in the tragic and infamous Columbine High School shooting.

Three years before murdering his classmates, Harris started a blog on a website hosted by America Online (AOL). While originally used mostly to discuss gaming, the blog devolved into a space glorifying and promoting acts of mass violence.

These written ideas became reality when the blog’s author opened fire on his peers in what was, at the time, the deadliest high school shooting in American history.

In 2014, before murdering six people and injuring fourteen others in Isla Vista, California, Elliot Rodger uploaded a video to YouTube outlining his plan of attack and his motives, including the desire to punish women for rejecting him.

He published a written manifesto to accompany it.

In 2015, before killing nine people at Umpqua Community College in Oregon, Christopher Harper-Mercer handed a student a package containing written admiration for Rodger, his manifesto, and his murders.

In 2018, Alek Minassian committed the Toronto van attack, the deadliest vehicle-ramming in Canadian history, killing ten people and injuring sixteen others.

Before this attack, Minassian posted on Facebook about how he subscribed to Incel culture and wrote a post specifically praising Rodger.

Also in 2018, Nikolas Cruz murdered seventeen people and injured seventeen more at Marjory Stoneman Douglas High School in Parkland, Florida, following countless violent social media posts warning of the coming massacre.

He, too, praised Rodger on social media.


Beyond becoming a place where violent extremism lurks in its dark corners, the Internet has also become a place where terrorist organizations recruit and radicalize new members.

Al Qaeda was one of the earliest known foreign terrorist organizations to implement “cyber-planning” of terror attacks, including 9/11, and online recruitment of new members.

The Taliban began using Twitter in May 2011, and this utilization of social media has been credited as one of the reasons the Taliban was able to swiftly regain power in Afghanistan.

Famously, the Islamic State (IS) relies on social media to recruit, radicalize, and even further terrorize.  

The IS is known for publishing photos and videos of beheadings on social media, along with threats and warnings of future violence. The IS has also grown in its ability to recruit and radicalize new members online, using Facebook, Twitter, and YouTube to expand its numbers across geographic and national boundaries.

This has given rise to one of the most dangerous – and difficult to uncover – types of terror plots: lone wolf attacks.

In these, a person with, say, a normal job and legal income will use those funds to buy weapons and engage in attacks that appear to come out of the blue.

The IS’s expansion and radicalization of those who may never even have traveled to interact with the group physically has taken root in the United States.

This was seen in the 2015 San Bernardino terrorist attack, where terrorist perpetrators Syed Rizwan Farook and Tashfeen Malik killed fourteen people and injured twenty-two others after being radicalized online.

Malik posted a pledge of allegiance to the IS on Facebook before committing the attack.

It was also seen in the 2016 Pulse Nightclub shooting, where radicalized IS member Omar Mateen killed forty-nine people and injured fifty-three more in an LGBTQIA+ nightclub in Orlando, Florida.

Studies show that nearly 90% of organized terrorism on the Internet takes place on social media.

There are many reasons why social media has become an optimal tool for terrorist extremists to spread their message and further their ideology.

Social media is free to use and easily accessed by the public. It allows for the broad dissemination of messages without any of the filters imposed by mainstream news outlets and forums.

Studies also suggest that social media has contributed to the acceleration of the radicalization of U.S. extremists.

The time that it takes to radicalize terrorist actors in the United States has dramatically decreased over the last decade, as social media has considerably increased in popularity.[1]


[1] Data from the National Consortium for the Study of Terrorism and Responses to Terrorism (START) at the University of Maryland, which researches the causes and consequences of terrorism in the United States and around the world.


Governments are heavily reliant on the private sector to address terrorist activities online.

Despite the clear link between social media and the rise of terrorist activity online, governments have declined to impose significant compliance demands on big tech companies.

On May 15, 2019, following the terrorist attacks on several mosques in Christchurch, New Zealand, New Zealand Prime Minister Jacinda Ardern and French President Emmanuel Macron brought together governments, civil society, and big tech companies to adopt the Christchurch Call to Action to Eliminate Terrorist and Violent Extremist Content Online.

Those who join pledge to address internet terrorist activity by addressing the role of algorithms in driving radicalization, implementing positive interventions to combat the radicalization process, identifying emergent crises, and crafting responses from members of the Christchurch Call community.

Notably, the Christchurch Call emphasizes the need for voluntary transparency from internet platform providers and seemingly relies on voluntary cooperation from big tech partners in order to accomplish its goals.

Tech companies that have joined the Christchurch Call include Meta (formerly Facebook), Google, YouTube and Twitter, among others.

The U.S. government joined the Christchurch Call in May 2021 and has consistently relied on big tech companies to self-monitor, without intervening or placing compliance requirements on tech companies for their role in allowing terrorism to flourish online.

In June 2021, the U.S. National Security Council announced its National Strategy for Countering Domestic Terrorism, which lists “[a]ddress[ing] online terrorist recruitment and mobilization to violence by domestic terrorists” as one of its strategic goals.

However, the plan of action listed within is merely to “assist online platforms with their own initiatives to enforce their own terms of service that prohibit the use of their platforms for domestic terrorist activities.”

Effectively, the U.S. strategy is to hope that big tech companies will keep their promises to combat rising terrorism on their platforms.

But no law forces them to act this way, and U.S. courts have affirmed that there is no duty for these companies to act at all.

In a similar vein, banks are subject to a host of financial crime compliance duties to find and fight illicit finance tied to organized crime, corruption, terror finance and more, with regulators reviewing their anti-money laundering (AML) programs and fining them for missteps. Tech titans have no such duties, and no watchdogs are watching.

Section 230 continues to protect big tech from liability for inaction and for profiting from terror-related content. Gonzalez creates a new and profound opportunity to change this standard.


The Gonzalez plaintiffs are not the first nor the only ones to criticize Section 230.

In the last ten years, several similar lawsuits have been initiated by the aggrieved families of victims of terror attacks in which the attacker published related content on social media.

Section 230 has repeatedly been used to shield Twitter, Meta/Facebook, Google, and others from any liability.

The U.S. Department of Justice recently recommended that Section 230 be amended to include a carve-out exception for terrorism.

Even Judge Marsha Berzon, one of the Ninth Circuit judges who concurred with the Gonzalez decision to protect Google from liability under Section 230, wrote a separate opinion to voice her concern that Section 230 needs to be limited, especially concerning the use of algorithms to connect users to content.

As written in the Ninth Circuit decision, Judge Berzon “joined the growing chorus of voices calling for a more limited reading of the scope of [Section] 230 immunity.”

The Supreme Court has finally heard the call.


The Supreme Court has the power to limit Section 230, create liability for platform providers, and impact the way terrorism operates online.

Times have changed significantly since 1996 when the CDA became law.

Back then, lawmakers were afraid of the vast expansion of the Internet, particularly the advent of online pornography, and the impact of this new unknown on young internet explorers.

This was the same era of chatrooms and the rising tide of online stranger danger.

No one expected that the Internet would evolve so greatly and so rapidly that today’s young internet users already paint the Y2K era as vintage and retro.

With the Internet deeply integrated into everyday life and easily accessed by most people through the smartphones in their pockets, we must treat online communications with the same urgency and relevance as in-person communications.

If an algorithm pushes an IS recruitment video onto your feed, the effect is no different than if an IS fighter handed you a pamphlet – except online.

In short, that algorithm and platform involve a third-party tech company that facilitated your contact with IS recruitment.

Speech online should not be limited, but where algorithms promote radicalization so social media platforms may profit from views, there must be some way to hold these platforms liable for the role they play.

It is obvious that the law has not caught up to reality, as lawmakers are incapable of keeping up with changing technology.

Lawmaking and amending existing law is a lengthy process – most bills, even bipartisan initiatives with broad support on both sides of the aisle, die on the vine – which makes judicial review optimal for reading limitations into the law as issues arise.

Recently, the IS used Telegram to discuss and take credit for the bombing near the Russian Embassy in Kabul.

Andrew Tate’s promotion of violence against women on TikTok has inspired boys as young as eight and nine years old.

More recently, Kanye West threatened violence against Jewish people on Twitter, followed by former President Trump’s threatening language against Jewish people on Truth Social.

At this point, we can already anticipate what the effects of this content will be, and the algorithmic “rabbit holes” this kind of content is attached to.

It is beyond reason to allow big tech companies to continue hiding from liability behind Section 230 and to continue profiting from user engagement with violent and indoctrinating content – leading to acts of real-life violence.

Gonzalez presents the Supreme Court with the opportunity to review and limit the most contentious internet law, one that has yet to catch up to the modern digital age, and to mark the end of a violent era of the Internet.

We need the Supreme Court to limit Section 230 in its Gonzalez decision, to create real liability for big tech companies, and to begin building the framework for a new, safer internet age.

About the Author


Heidi Sandomir is the Editor-in-Chief of the Cardozo Journal of Equal Rights and Social Justice (CJERSJ) and a current J.D. Candidate and Public Service Scholar at the Benjamin N. Cardozo School of Law in New York City.
