The BACL Annual Seminar at the SLS Annual Conference took place on 31st August 2021 (a recording is available). The panel was formed following a call for papers concerning the definition of hate speech, the appropriate balance between rights and interests in this field, and the different modes of regulation found across jurisdictions with distinct media and socio-political environments. This included identifying the commonalities and assumptions that underpin such differences, including the relevance of differences in enforcement and its effectiveness, the location of regulators and platforms relative to one another, and regional variations in conceptions of hate speech and harm.
Professor Uta Kohl examined whether discordant US and European conceptions of free speech, rooted in their distinctive historical starting points and background understandings, could be reconciled through approaches like the German Network Enforcement Act. She contrasted the different constructions in US and European approaches of dignity, liberty and equality, in the concept of an empowered citizenry, and in the role of the state. Kohl argued that these approaches were neither reconcilable nor resolvable because of their fundamentally different understandings of the state. The German Network Enforcement Act, and similar efforts in the UK and France, had only created an “awkward halfway house”. As a “public framework for private censorship”, Kohl argued, this would leave both sides unsatisfied, though for different reasons. From a US perspective, the objection was governmental censorship of content, with private bodies acting as agents of the state. From a European perspective, there was insufficient public oversight over private censorship and insufficient accountability for the processes by which speech was censored.
Professor Mathias Hong presented on the equal right to free speech in the US, Europe and Germany, and the different tools that are, or are not, used to address the regulatory challenge of online hate speech. Hong argued that the European approach could learn from the US approach regarding the prohibition on viewpoint discrimination, whereas the US approach lacked the protection of equal and positive freedom of expression that was present in Europe.
Hong explained that Germany reflects a European approach with additional viewpoint-neutrality restrictions. Although the German Network Enforcement Act was a “commendable… first step to regulate online hate speech”, it provided insufficient protection against the over-blocking of content by private platforms. This “one-sided regulatory approach” to free speech was problematic because it lacked “equally effective” remedies to restore lawful speech that had been removed in error. Obligations of fairness to provide for a diversity of viewpoints, like those imposed on public broadcasters, were also necessary for private platforms.
Professor Thomas Hochmann presented on the role of Government as a “speaker” on social networks and the issues this raises when considering the regulation of hate speech on such platforms. He considered the first attempt by the French Parliament to legislate for the regulation of hate speech on online platforms, which was held to be unconstitutional. He explained that it had imposed obligations of both result and means within a concrete system, punishing every failure to remove content within 24 hours. This was an “inducement to suppress” speech, created a problem of over-blocking, and lacked consideration of freedom of expression. The new legislation now in place provides only for obligations of means and sanctions for systemic failure. Hochmann argued that this takes freedom of expression more seriously, and recognises platforms as “new public forums” in which respect for freedom of expression must be imposed under the supervision of public authorities. However, where Government is the speaker, such instruments can be used against elected officials, which raises additional problems. Hochmann argued that Government speech can be more harmful than private speech, as it has more impact when it engages in hate speech; but there is also a public interest in knowing what elected officials have to say, and their silencing by a private platform poses a democratic problem.
Dr Peter Coe asked in his presentation whether the UK Government’s proposed Online Harms Bill, in seeking to protect against both illegal and legal-but-harmful speech, may unintentionally open Pandora’s box by allowing social media platforms the opportunity to take greater “control of the message”. He explained that the Online Safety Bill raises free speech concerns. It refers to a range of existing offences, whose definitions are unclear and raise difficulties in application for regulators. Hard duties to take down illegal or harmful content are combined with general duties to “have regard to” or to “take into account” interests including the protection of free speech, privacy, and democratic or journalistic content. Coe argued that there was a risk that only “lip service” may be paid to these softer duties while platforms nevertheless successfully demonstrate compliance, making them “de facto gatekeepers to the online world”. In addition to the risk of over-blocking, this could provide “an opportunity or even an excuse to remove content that does not conform with their ideological values on the basis that it could be illegal or harmful.”
Professor Jake Rowbottom’s presentation concerned soft controls on speech online, those going beyond criminal, content-based prohibitions. He explained that the traditional focus on criminal prohibition carried a danger of giving a “misleading impression of an all-or-nothing approach”. Debates concerning the Online Harms Bill reflect this in criticisms that it targets “lawful but harmful” speech. The focus on criminal law prohibitions of speech is the source of “all-or-nothing” thinking, and reflects the drastic nature of the criminal sanction, which would normally require a high threshold to be reached before prohibition. Rowbottom argued that there was no simple line that could be drawn between speech that is fully permitted and speech that is prohibited. There is a “grey area” of many types of hate speech that, while not prohibited, are not fully legitimate either. He suggested that different degrees of hate speech may require different types of response. Two examples of softer controls were the denial of a benefit, such as denying political parties state funding or opportunities to make public broadcasts based on their engagement in hate speech, and government counter speech.
The denial of a benefit still engages free speech principles, as it represents a governmental assessment of the legitimacy of viewpoints and whether they deserve state support. Where there is a positive obligation to support free speech, it would also raise a further dilemma: Rowbottom asked whether this amounted to a positive obligation to provide resources to support hate speech that is not prohibited. He suggested that this felt problematic, but that denial raised a question of viewpoint discrimination; he concluded that it was, nonetheless, government action short of an outright prohibition. Government counter speech, while not prohibiting speech, uses government resources to sponsor a countervailing message. Rowbottom explained that this raised similar problems, in that the government uses its powers to disadvantage certain viewpoints, and, if the issue is political, raises additional problems of democratic legitimacy, for example by countering the views of a political opponent. To accept it in the context of hate speech must be based on the view that hate speech is different from political views with which one disagrees. Rowbottom noted that soft controls would become more important, and argued that viewpoint discrimination might be more acceptable for measures short of prohibition, though this might vary in different free speech cultures.
Dr Ge Chen argued that China has echoed global calls for the regulation of hate speech and the protection of equality rights, while acting within an authoritarian constitutional framework. He explained that this model entails three layers: first, political censorship through prior restraint; second, a model of rule by law, with administrative penalties and criminal sanctions for hate speech; third, a notice-and-take-down regime within the civil code, including filtering mechanisms, amounting to a “rights-based governance model”. Although China’s approach is carried out under the “aegis of the protection of equality rights”, in fact “the root of online hate speech is part of a broader, decades-long” process of “statist propaganda”, in which the “targets of online harassment” would be stigmatised by the government itself, including advocates of liberal democracy.
Professor Andrew Kenyon and Dr Anjalee de Silva argued that, while negative approaches to free speech can suggest that counter speech is a prima facie more legitimate response than negative restrictions on speech, it does not follow that “without prohibitive laws speech is free, equal or democratically sufficient”. This is because vilifying speech can silence target group members, “marginalising, devaluing and debilitating their speech even where they can and do speak”. This suggests that what is required for effective free speech includes prohibitions, even viewpoint-discriminatory ones, regarding vilifying speech. Free speech must be “thought of in positive and not just negative terms”. This has implications for “sustained plural public speech”, which is required to form the public in a given democracy through deliberation. This is a reason for seeking diversity, as the formation of any public “both excludes as well as includes”. Sustained plural public speech requires a “structurally mixed media environment”, with “diverse media entities, funding, missions and people”. They argued that this could not happen without state action and support. Anti-vilification laws are required where they enable both targets and other actors to “speak back”, and enable the “reformation of ‘publics’ of targets impacted on by vilification”.
Some important common themes emerged from these presentations. They speak to a common set of problems concerning the relationship between governmental actors and private platforms in shaping agendas and narratives online and in promoting or suppressing online speech. This goes beyond the traditional focus on prohibition and the distinction between moderation and curation, to consider a wider range of tools and strategies, used by both governmental and private actors, to promote or suppress speech that, while not criminal, is considered illegitimate by those actors. Regional differences in conceptions of dignity, equality, and liberty, rooted in different historical experiences, are reflected in conceptions of hate speech, harm, and the appropriate role of state and private actors. The extent of those historical regional differences, and their continuing relevance, should be questioned to ask whether common approaches to the problem of regulating speech online can emerge.
Plans are in development for a workshop in spring 2022 to present and discuss final written papers based on these panel presentations with a view to their publication in a special issue of the Journal of Media Law.
Posted by Dr Oliver Butler (Assistant Professor in Law, University of Nottingham)