"No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."
These are the 26 words of Section 230 of Title 47 of the United States Code, enacted as part of the Communications Decency Act of 1996. Maybe not as famous as George Carlin's seven dirty words, but this section had a profound impact on how the modern internet took shape. You can read more about this in Jeff Kosseff's book The Twenty-Six Words That Created the Internet. Decency, obscenity, and related laws are fascinating topics in media studies. I had to deal with some of this content and law when I had my radio show at WEFT 90.1 FM in Champaign, IL. Now, with streams of content on the internet, algorithmic manipulation, and huge ad revenue, these regulations (at least in the U.S.) are taking on a different dimension. And this week the fate of Section 230 is up to the Supreme Court (see Gonzalez v. Google).
Today’s post is a short note about this lawsuit. Oral arguments just concluded on 21 Feb 2023; more analysis and reports on this will be coming out soon. I link to a few of those here.
What is Section 230? It protects companies from liability for most content contributed by third parties. This means that when someone writes and uploads really nasty stuff on the internet, companies like Twitter, Facebook, and YouTube are not legally responsible for it. You may ask why. The law was enacted in 1996, and as with several other pieces of media legislation, Section 230 had the idea of “family empowerment” behind it. The following excerpts are from a 2021 article by former Representative Chris Cox, one of the two authors of the legislation.
The Cox-Wyden bill, first known as the Internet Freedom and Family Empowerment Act before it was folded into the Telecommunications Act of 1996 and rechristened as Section 230, was an exemplar of this bipartisanship. Its two authors, one a Republican and the other a Democrat, joined with the overwhelming majority of our colleagues on both sides of the aisle in adapting for the internet age what Clinton called “outdated laws, designed for a time when there was one phone company, three TV networks, [and] no such thing as a personal computer.”
But things have changed:
We also know that by empowering billions of people to speak their minds, we have unleashed the whirlwind. The law that gives to anyone and everyone the opportunity to say what they will—limited only by what the platforms hosting this speech find objectionable—has come with costs in the form of obnoxious speech, dangerous speech, hate speech and violent speech. Do the unquestioned benefits of user-created internet content outweigh these very real costs?
In the world of algorithms, Section 230 now needs to distinguish between content created by a platform and content uploaded by a third party (anything from Peppa Pig to ISIS). Both types of content can be promoted by the company's algorithms, but the issue is whether they should be treated equally from a legal point of view. One perspective holds that platforms cannot be held responsible for content uploaded by others; others argue that recommendation systems and algorithms allow harmful content to reach a wider audience. The ongoing Gonzalez v. Google LLC lawsuit addresses this issue, and the circumstances behind it are undoubtedly tragic. Without delving into excessive analysis, it is clear that the case highlights the debate over the responsibility of platforms for the content they promote, the role of algorithms, and free speech.
Here is a summary of Gonzalez v. Google LLC (it includes an interview with Jeff Kosseff):
The Supreme Court will hear oral arguments in Gonzalez v. Google on Tuesday [Feb 21, 2023]. The case was brought forward by the family of Nohemi Gonzalez, a 23-year-old American college student who was one of nearly 130 people killed in Paris in 2015 by members of ISIS. Gonzalez’s family argues that Google aided ISIS when it recommended the terrorist group’s videos on YouTube, a violation of federal anti-terrorism law. Google, meanwhile, claims Section 230 protects it from such claims. The court is expected to deliver its decision on the case this summer.
Another good summary here (it discusses both Gonzalez v. Google and Twitter v. Taamneh):
Together, the cases symbolize what some on the Court feel is the problem with CDA 230: it’s an undefeatable liability shield that protects already too-powerful internet giants. And it’s a protection that Congress could not have possibly intended.
While Congress intended CDA 230 to protect websites from liability for publishing someone else’s speech, courts have interpreted CDA 230 far more broadly. And if the Court’s last term is any indication of its commitment to history and tradition, we can expect it to zero in on that fact.
Lower courts have held that algorithmic recommendations are a tool for directing content, but they aren’t content in and of themselves. Yet, sweeping rulings have essentially immunized the platforms from curation that foments genocide, incites mass shootings, and propagates election misinformation. This odd imbalance in the law has led Justice Thomas in particular to believe that lower courts have strayed too far from the “natural reading” of the statute. It’s not a stretch to think that many of his compatriots on the Court think the same.
In reality, these cases are a scapegoat for our frustration with the tech giants. We like the promotions, news articles, and new outfit suggestions; but we dislike all the hate, the vitriol and the fact that they seem to make billions of dollars on top of it all. But the Court is not the right vehicle to vent our frustrations.
It is still not clear how the decision will go, but here are a few early reactions; apparently the justices were quite confused at points:
Tim Wu has some reactions here:
I am closely following these two cases and hope to write more next week. Stay tuned. Here are a few other tweets to follow for updates and analysis: