Jawboning and lawboning: comparing platform-state relations in the US and Europe
The concept of ‘jawboning’ refers to scenarios where the government pressures online services to moderate user content in ways that go beyond its legal authority to do so. Jawboning has become a highly generative topic of debate in US law schools. European commentary is comparatively sparse (though not for a lack of government overreach). Before importing American scholarship to critique EU policy, I think it is worth reflecting on some important differences in the European legal system.
My analysis starts with Bambauer’s original definition of jawboning as compulsion without authority. Here I tease out a latent paradox: how can the government compel platforms if it has no authority to do so? Reviewing subsequent scholarship, I identify three distinct theories of jawboning with different normative thrusts, which I refer to as bluff, blackmail, and domination.
On this basis I compare jawboning debates in Europe and in the US. My argument is that jawboning is overall a smaller problem in the EU simply because European states’ legal authority to regulate speech is greater. Jawboning is a distinctly American problem insofar as the US government is uniquely incapable of binding regulation, and uniquely capable of exercising informal, political-economic power. In Europe, by contrast, government demands on platforms tend to be articulated in legal terms; not extralegal jawboning but – excusez le mot – intralegal lawboning. The Digital Services Act (DSA)’s systemic risk rules, in particular, act as a totalising framework that can translate almost any content-regulatory demand into legal obligation, and as such severely blur the boundaries between authorized and unauthorized government action. In this new context, the priority for critical scholarship should not be jawboning but good old rule of law and due process: how the state employs its enforcement powers against platforms and users, and what rights they have to contest decisions.
Jawboning as compulsion without authority
The basic idea of jawboning as compulsion without authority goes back to the foundational, eponymous article by Derek Bambauer. He provides a very useful heuristic for platform-state interactions based on the degree of compulsion (from mere persuasion to hard sanctions) and the degree of authorization (from formally specified to entirely absent). This yields the following compass, with jawboning in the bottom-right quadrant.
Image adapted from Bambauer, Against Jawboning
This basic definition still leaves open many questions about the nature of jawboning; Daphne Keller’s 2023 symposium post, to give one example, offers a dizzying list of jawboning parameters such as the moderation demands (e.g. policy change, removal); potential sanctions (e.g. prosecution, legislation, reputational harm); originating entity (executive, judiciary, legislature); and many more. But the core concept of unauthorized compulsion seems to have maintained its relevance. Much US commentary on jawboning continues to debate the nature of compulsion (e.g. distinguishing illicit coercion from permissible persuasion) and the nature of authority (e.g. assessing whether the government has acted within its authority in specific cases).
The jawboning paradox: bluff, blackmail or domination?
I see something of a paradox in the concept of jawboning as compulsion without authority: how can the government in fact coerce platforms if it has no binding authority to make any demand? Several theories seem to be in play.
Some cases might involve a bluff, where the government invokes authorities it doesn’t actually have. The textbook case here is Bantam Books, where the Rhode Island Commission to Encourage Morality in Youth threatened booksellers with prosecution under obscenity laws, despite lacking any legal enforcement powers. Other cases might come closer to blackmail, abuse of power or détournement de pouvoir, in which a government official threatens to exercise powers intended for other purposes. Bambauer offers a marvellous example: Mississippi Attorney General Jim Hood threatened to prosecute Google for various criminal offences (e.g. under trafficking and pharmaceutical regulations), as part of a scheme concocted by Sony and other entertainment business interests to pressure the platform into removing copyrighted content.
Other theories of jawboning are more expansive and accept that government doesn’t need any express authority to wield power over platforms. Practitioners such as Katie Harbath point to the importance of reputation; the threat of public criticism is often an important jawboning weapon. Hannah Bloch-Wehba notes the broader codependencies of ‘new governance’ which encourage platforms to maintain good relations with government. As Daphne Keller puts it, coercion should therefore ‘not be the sine qua non for jawboning claims’, since government often achieves censorship through softer means. This I refer to as the ‘domination’ theory of jawboning: it highlights power relationships that trouble the persuasion/sanction distinction and compel platforms even in the absence of overt threats.
Bluff, blackmail and domination on Bambauer’s jawboning compass
These different theories of coercion matter because they tend to support different normative positions on jawboning: should it be allowed, and if so when? When it comes to bluff or blackmail, the answer is almost invariably no. Since the government is misstating or abusing its authorities, these types of jawboning seem to be, definitionally, a form of wrongdoing which the law ought to prevent.
Much of that normative clarity is lost in domination theories of jawboning. Hardliners might want governments to refrain from all speech that could potentially influence platform moderation, but most recent commentary is dedicated to showing just how untenable that position is (e.g. #1, #2, #3). Governments constantly coordinate with platforms in ways that are relatively innocuous or positively necessary. The challenge for legal scholarship on jawboning is to pick through relevant factors that might distinguish the good from the bad; the content, institutions, processes, transparency safeguards and other circumstances that all matter normatively when assessing government-platform coordination. Such is the price of nuance; in many cases, domination theories of jawboning seem to be descriptively valid but normatively inconclusive.
In the second half of this post, I’d like to reflect on how questions of jawboning authority play out differently in the US and EU legal systems.
Jawboning in the US
I think I can explain why jawboning has become such a big topic in the US, and less so in other countries. It’s not just that US scholars seem to be especially prolific, and excel at coming up with catchy nicknames for their theories. It’s that the US has unique means and motives to engage in jawboning, since it is distinctly incapable of legislating on content moderation and distinctly capable of informally pressuring platforms.
The US has a hard time legislating content moderation for both political and constitutional reasons. Politically, the US legislature faces incessant partisan gridlock, which, barring some exceptions, has thwarted almost all attempts over the past decade to meaningfully reform online content laws at the federal level. A recent smattering of state-level initiatives has broken the spell, but they face an uphill battle of judicial challenges.
That brings me to the other major constraint on US legal reform: the constitution. The First Amendment imposes uniquely strict limits on government action. Categories of speech that are illegal in most countries, such as hate speech, aren’t just lawful in the US but constitutionally protected. And besides this uniquely narrow approach to unlawful content, the First Amendment also massively restricts the state’s capacity to regulate corporate conduct and impose duties on platforms, as part of a broader trend of pro-corporate ‘free speech Lochnerism’.
Besides a clear motive, the US government also has tremendous means for jawboning platforms. To state the obvious: the US is the global hegemon and wields the greatest economic and political power of any state in the world. For platforms, specifically, it is the largest advertising market and also the country of establishment. Most major platforms are owned and staffed in large part by Americans. And those that aren’t, like TikTok, the US Congress threatens to ban entirely unless American owners are found. Of course, institutions still matter, and a threat from a small-town mayor isn’t the same as one from Joe Biden or Mitt Romney. But the general rule holds that American institutions have privileged access to major platforms, relative to their equivalents in other states.
One might even say that jawboning has always been a part of US media governance. Brenda Dvoskin has shown how American media’s 20th-century settlement of minimal government intervention and broad free speech rights developed alongside systems of moralistic corporate self-regulation, from radio’s National Board of Review to Hollywood’s Hays Office and the National Broadcasting Council. All these were created under the looming threat of regulation and sought to regulate speech with the “explicit goal of persuading lawmakers that state intervention was unnecessary”.
Lawboning in Europe
In Europe the situation is markedly different. Amongst its various fundamental rights traditions (national constitutions, ECHR, EU Charter), none is as restrictive as the First Amendment. The range of potentially unlawful speech is far broader, and keeps expanding – for better and for worse. The Netherlands has in recent years enacted specific criminal penalties for doxing and for non-consensual sexual imagery (i.e. ‘revenge porn’). Hungary, for its part, is introducing a ban on ‘LGBTQ propaganda’, and numerous jurisdictions have enacted anti-disinformation and ‘false news’ laws. Some of these may yet fall to fundamental rights challenges, but the baseline scope of ‘illegal content’ is clearly far broader than in the US.
Regulatory obligations for big tech are also proliferating. In stark contrast to US gridlock, the EU has produced a bewildering excess of new legislation. Most readers will be familiar with the DSA and DMA, but fewer will have heard of the Terrorist Content Online Regulation, Audiovisual Media Services Directive, Platform-to-Business Regulation, Political Advertising Regulation, European Media Freedom Act, or forthcoming Digital Fairness Act. The widely-cited Draghi report now calls for a ‘regulatory pause’ on new initiatives, indicating that even Europeans are feeling the fatigue.
The EU’s legirrhoea could be interpreted as a sign of desperation; a reflection of its relative impotence to sway platforms through other means. Indeed, governments across the EU express the sentiment that platforms are difficult to access and unresponsive to requests for assistance. As Rob Gorwa would put it, governments that can’t convince or cooperate with platforms resort to coercion instead.
This being said, the EU also boasts an impressive record of more informal coordination with platforms, from the early and obscure days of the European Commission’s Internet Forum to the more formalized and public-facing Codes on hate speech and disinformation. As discussed by Jacob van de Kerkhof, these quasi-voluntary coregulatory instruments also bear the hallmarks of jawboning.
Compared to the US, however, Europe’s initiatives have always been backed by a more credible and explicit threat of binding regulation – first at the national level in initiatives such as the NetzDG, and finally in the DSA framework. In effect, the DSA’s systemic risk framework now swallows all these past interactions and brings them into the remit of formal, though vague, legal obligations. The Code of Practice on Disinformation, for instance, is now being embedded into the DSA as a means of complying with its risk management obligations. The DSA’s systemic risk provisions thus act as the main fulcrum of EU regulatory power over content moderation; first it was the threat of the DSA’s enactment that cajoled platforms into agreeing to these codes, and now it is the threat of DSA enforcement that holds them in place. The clearest example here is Thierry Breton’s widely criticised actions under the DSA’s systemic risk framework. His bully pulpit approach to online platforms on topics like Ukraine, Palestine, and the platforming of Donald Trump bears many resemblances to classic jawboning cases – except that it is clearly rooted in an underlying legal framework.
EU relations with platforms are thus gradually legalising, from a language of responsibility and cooperation to one of risk management and obligation. What distinguishes this from your typical ‘bluff’ jawboning scenario, in my view, is the extreme vagueness and flexibility of the DSA’s obligations. Who is to say, really, whether Breton has overstepped his authority? The Court of Justice of the European Union might overrule some of these actions, but its involvement, if any, will be slow and sparse. For now, I believe the better criticism of these interactions is not so much that the Commission is overstepping its authority but rather that the scope of its authority is almost impossible to discern, and therefore of questionable legality. This ‘lawboning’, if you will, has less to do with ambiguously phrased veiled threats, and everything to do with ambiguously phrased legal statutes.
DSA systemic risks on the Bambauer jawboning compass
As long as the Commission’s underlying authority is so thoroughly indeterminate, devising any kind of norm against jawboning seems somewhat beside the point. Can the Commission truly be said to be bluffing about its enforcement powers, or abusing them, when nobody can agree what systemic risks demand of platforms in the first place? Framing these problems as ‘jawboning’ could even be counterproductive, since it could constrain the Commission further in trying to clarify its interpretations of this framework. The Commission should be able to communicate its findings and interpretations without being accused of wrongdoing or untoward bullying. The threat of sanction is an essential feature of every regulatory framework, and guiding its proper application is not a radically novel problem of jawboning but a familiar matter of legality and due process.
What Europe needs is to start making rule of law and due process demands of the new systemic risk framework it has erected. A useful starting point here is Martin Husovec’s outline of a viewpoint neutrality requirement for the Commission’s disinformation actions under the DSA. Besides such substantive analyses of the scope of risk mitigation duties, there are also procedural questions about the Commission’s handling of, and communications about, its findings. Incendiary open letters needn’t always be the first resort.
Truth be told, I am not optimistic that these projects to clarify the DSA’s risk framework will bear fruit any time soon; it may take years before the CJEU has any chance to provide authoritative clarity – if ever. Failing that, maybe the entire framework ought to be challenged on grounds of legality and foreseeability. These are questions I can’t resolve here. But I do hope to have established that the concept of jawboning isn’t quite apposite here; when the EU threatens platforms, the calls are coming from inside its legal system.