When the First Amendment meets the Digital Services Act

Hassan Abdullah Niazi

Senior Fellow

A Global South View of Transatlantic Speech Myths

When the U.S. House Judiciary Committee held a hearing on “Europe’s Threat to American Speech and Innovation” earlier this month, one could have been forgiven for expecting a serious debate about the jurisprudential clash between the U.S. First Amendment and Europe’s approach to regulating free expression under the Digital Services Act (DSA). 

In fact, the U.S. and EU approaches to free expression have never aligned. Consider what are known as the ‘Skokie’ cases, in which multiple forums, including the U.S. Court of Appeals for the Seventh Circuit, allowed the neo-Nazi National Socialist Party of America to hold a march displaying swastikas through a predominantly Jewish neighborhood. In Europe, with its history of the Holocaust, the result would undoubtedly have been different. Europe’s approach to free expression has historically not followed the famous U.S. “marketplace of ideas” concept. It has always been grounded more in an analysis of the historic harm that hate speech has caused to specific marginalized communities in the region. 

Unfortunately, this was not what the hearing was about. It was instead a platform for Republican rhetoric, with crumbs thrown to appease a conservative base lapping up the narrative of foreign conspiracies against U.S. values, and billionaire tech CEOs fed up with regulation despite decades of unchecked power. 

The specific target was the EU’s DSA: a law designed to foster greater transparency and oversight over online platforms operating in the European Union. This, according to Republican lawmakers, was an existential threat to the American values of free and open debate. 

The Republican argument boils down to two things: (i) European regulation of online content will censor speech of American citizens; and (ii) Europe should embrace tech companies as “town squares”, avoid regulation, and let the marketplace of ideas “self-correct” any harmful content. 

First, the core narrative is false: EU regulators cannot censor speech in the U.S. under the DSA. The DSA has no extra-territorial application to block speech in the U.S.; only users in the EU may have their speech restricted, and only within the EU. Nor is blocking speech the DSA’s core purpose. Fundamentally, it is about giving people more transparency into how social media companies moderate their content. This is no authoritarian law enacted by a dictatorial regime to repress dissent. It is a law derived from a broad-based consensus and backed by a democratic process. 

Realistically, what social media companies do when faced with content that is unlawful in a particular jurisdiction is called “geo-blocking”: restricting that content within the country’s territorial limits. This means that if speech is found unlawful in the EU, it would be restricted only within the EU; people in the U.S. would still be able to access it. There is precedent: in Google v. CNIL (2019), the Court of Justice of the European Union (CJEU) held that EU laws, such as the General Data Protection Regulation (GDPR), could not compel a search engine to de-list search results globally. 

Of course, the argument could still be made that this allows the EU to restrict a U.S. citizen’s speech within EU borders. But that is hardly something U.S. lawmakers can object to. Every country has sovereign laws that apply within its borders. The U.S. may have some of the most permissive laws on free expression in the world, but they are just that: U.S. laws, not global ones. It cannot impose those laws on other countries. As scholars in the U.S. have put it: “The DSA does not do anything to give Europe more power over speech that we can say and see here in the United States…It is intended as a law about what gets seen in Europe.”

The threat of censorship here is hyperbolic. The EU has a robust judicial review mechanism that can keep any abuse of power by an EU authority in check. For example, an EU Commissioner lost their job after falsely claiming that the DSA gave them the power to block lawful speech on the social media platform X. The threat of censorship is remote, given the substantial speech protections built into the EU’s human rights framework. The DSA’s own provisions refer to protecting freedom of expression. It is clear that restrictions on speech are meant to be the exception, not the norm.

There are generous helpings of irony in the recent House Judiciary Committee hearing. The U.S. flagging a threat of foreign influence on its values is insincere, given that most U.S. tech companies have built their content policies around what is acceptable to those currently in power at home. Essentially, U.S. foreign policy plays a dominant role in how social media companies approach, for example, the designation of dangerous organizations and individuals, or the conflict in the Middle East and other regions in the Global Majority. The sudden shift by Meta in its fact-checking policy was heavily influenced by Donald Trump’s presidency, as was the change in its hate speech rules. The Trump administration has even gone so far as to propose sanctions on the EU over enforcement of the DSA. 

Furthermore, if this were truly about censorship of speech across borders, it would not be Europe in the crosshairs. It would be countries like Brazil or India, whose courts have, with varying degrees of success, attempted to globally restrict certain categories of content that are unlawful in their jurisdictions. Yet, instead of addressing growing authoritarian abuse, this hearing has given authoritarian governments a new weapon in their arsenal. They, too, can now deploy the rhetoric of foreign values being imposed on them with greater zeal, much as the government of Pakistan seized on the narrative of “national sovereignty” in the aftermath of the U.S. plan to ban TikTok, using it as a pretext for passing draconian laws. 

Let’s address the second aspect of the debate: the mythical self-regulating market. The U.S. “marketplace of ideas” outlook was captured in Oliver Wendell Holmes’s spirited dissent in Abrams v. United States (1919) and later cemented in U.S. jurisprudence in Brandenburg v. Ohio (1969). The problem is that even though the First Amendment says Congress shall make “no law” restricting speech, the reality is that the Supreme Court of the United States has held many categories of speech, such as obscenity, true threats, and incitement, to fall outside its protection. 

The marketplace has therefore always needed restrictions. We have always understood that free expression must be balanced against the possibility of harm to individuals and society. How to achieve that balance will be up for debate, but international human rights law, such as the ICCPR, can play a big role in bridging that divide in a borderless internet landscape. 

There may be doubts about how to achieve this balance, but there should remain no doubt that “self-regulation” by Big Tech companies has failed. We know this because we have seen the harm that content on social media has done to vulnerable communities everywhere, such as in Myanmar and Ethiopia.  

It is idealistic to believe that the self-regulation model can work when it comes to massive social media companies. These are not “town squares.” They are commercial entities, driven by commercial incentives. They will adapt or comply with regulations based on a business calculus, not a principled one. Self-regulation fails when its aims conflict with global profit. Europe and most of the world understands this, and even within the U.S., there is a push to do more. 

Europe’s DSA is not a perfect attempt to regulate Big Tech. However, it has broken new ground in providing mechanisms of oversight, transparency, and empowerment to users rather than centralizing power to governments. This model, like any model, is not without its criticism, and will be refined with time. In contrast, the U.S. has so far failed to address the problems democratic systems face when confronted with genuine online harms such as disinformation and violence. Vaccine disinformation, which came up repeatedly in the hearing, is a real problem. Consider Pakistan, one of the last few countries battling polio, struggling annually with countering disinformation about the vaccine. This is disinformation that is placing real lives in harm’s way. 

Moreover, half of the companies with the most obligations under the DSA (categorized as Very Large Online Platforms) are non-U.S. 

The only difference is that while the EU is trying to empower users and states, the U.S. is empowering tech CEOs. In the U.S., difficult content decisions are ultimately made by CEOs who may know nothing about the context, culture, or reality of speech in countries abroad. UNESCO has already flagged growing concern that major technology platforms, particularly U.S.-based firms, may be walking back earlier commitments to user safety and moving toward a less regulated environment. 

The U.S. perception of the DSA as a specter haunting free speech is misplaced. Contrarily, these perceptions empower authoritarians in Global Majority countries to misconstrue free speech principles, including those outlined in the ICCPR, as Western hegemony over developing countries. We need to understand the DSA not as a perfect law, but as an attempt to check decades of unchecked power wielded by private corporations. It is this unchecked power that is under threat, not free speech. 

As former UN Special Rapporteur David Kaye said in his testimony, “This [US] administration is directly attacking freedom of expression. [We] are only discussing these issues now, in the wake of significant fines to major technology companies coming from the European Union for failure to meet the expectations of lawmakers and the public. This hearing addresses mainly harms to companies, not to users in Europe and certainly not to the American people.”

The real threat to free expression does not come from Brussels, but from decades of billionaire CEOs deciding what gets said on the world’s digital platforms. If the U.S. truly wants to protect speech, it should start asking hard questions at home. 


Hassan Abdullah Niazi is a partner at Common Law Chambers (CLC) in Pakistan. His practice covers constitutional law, anti-trust & competition, data protection & privacy, and Internet & technology regulation, among other areas.