Sunday 30 October 2016

A Tale of Two Organs: Hate Speech Regulation in the European Context



Clotilde Pégorier, Lecturer in Law, University of Essex

The issue of hate speech regulation has again moved in recent years to the forefront of legal and political debate in Europe. To note that questions in this area are complex, and often generate diverging opinions as to the appropriate balance between legislation and the protection of rights, is no novelty. What is striking, however, is the marked difference in the tendencies of those “natural born twins” (Gabriel Toggenburg), the EU and the Council of Europe, in their respective approaches to hate speech. How might this be explained? And what, crucially, might be the wider legislative implications at European level?

The EU and the Fight Against Online Hate Speech

First, let us consider the EU’s efforts in this context, which might here be exemplified in relation to the battle against online hate speech. In response to the threat of terrorism and radicalisation, and prompted in particular by the attacks in Brussels on 22 March 2016, the EU decided to intensify its work on fighting hate speech – a campaign upon which it had embarked some eight years earlier with the adoption of Council Framework Decision 2008/913/JHA on combating certain forms and expressions of racism and xenophobia by means of criminal law. As part of its security agenda for the period 2015-2020, the Commission presented, on 14 June 2016, a communication outlining action in seven specific areas where cooperation at EU level could effectively support Member States in preventing and countering radicalisation. Alert to the ever greater role played by the internet in the dissemination of views and ideologies, the European Commission took the step of consulting IT companies with the intention of creating legislation designed to inhibit the online spread of illegal content inciting violence.

In pursuing such an initiative, the Commission was, in fact, building upon a longer-standing awareness of the importance of preventing the spread of hate speech via the media. As Advocate General Yves Bot concluded in his Opinion of 5 May 2011 in Cases C-244/10 and C-245/10:

[The provision under which] Member States are to ensure that television broadcasts do not contain any incitement to hatred on grounds of race, sex, religion or nationality, must be interpreted as also prohibiting broadcasts which, in attempting to justify a group classified as a ‘terrorist’ organisation by the European Union, may create reactions of animosity or rejection between communities of different ethnic or cultural origin (para 93).

In May 2016, the European Commission had, moreover, already signed a Code of Conduct on countering illegal hate speech online with four of the biggest internet companies – namely, Twitter, Facebook, YouTube and Microsoft. The code is not legally binding, yet it would appear to indicate a willingness on the part of the named IT companies to support the EU’s drive to prevent online hate – a willingness that owes in some measure, no doubt, to the protections supplied by Articles 12 to 14 of the e-Commerce Directive of 8 June 2000, commonly known as the ‘safe harbour’ provisions. According to Article 12, the provider of a service cannot be held liable for any information it transmits – including hate speech – as long as it: (a) does not initiate the transmission; (b) does not select the receiver of the transmission; and (c) does not select or modify the information contained in the transmission. Article 14 limits the liability of providers of “information society services” still further when such services consist only of the “storage of information” provided by a recipient of the services. This provision applies only where the provider does not have knowledge of or control over the illegal activity or information, or where, having gained knowledge or awareness of such illegal activity, it acts expeditiously to remove or disable access to it.

However we speculate on the primary motives of the IT companies, of prime significance is that they are assisting the EU in its fight against online hate speech. The Code encourages social media companies to take quick action as soon as a valid notification of online hate speech has been received, e.g. by removing or disabling access to the content. It also underlines that, in order to combat the spread of illegal hate speech, “it is essential to ensure that relevant national laws transposing the Council Framework Decision 2008/913/JHA are fully enforced by Member States in the online as well as in the offline environment.” With the adoption of the Council Framework Decision, the EU required Member States to ensure the punishability of anyone:

publicly condoning, denying or grossly trivialising crimes of genocide, crimes against humanity and war crimes as defined in Articles 6, 7 and 8 of the Statute of the International Criminal Court, directed against a group of persons or a member of such a group defined by reference to race, colour, religion, descent or national or ethnic origin when the conduct is carried out in a manner likely to incite to violence or hatred against such a group or a member of such a group.

Reading such provisions takes us to the heart of one of the key dilemmas in current debates on hate speech – namely, the definition and understanding of the concept itself. A brief excursus on this point seems warranted here. For the question of definition remains somewhat thorny – hate speech is a term that is, at once, both over- and underdetermined. As Anne Weber puts it in her Manual on Hate Speech, published by the Council of Europe in 2009:

No universally accepted definition of the term “hate speech” exists, despite its frequent usage. Though most States have adopted legislation banning expressions amounting to “hate speech”, definitions differ slightly when determining what is being banned.

This is undeniably true. Yet there are international and national sources that provide useful guidance. The Council of Europe’s Committee of Ministers’ Recommendation 97(20) on “Hate Speech” defined it as follows:

[T]he term “hate speech” shall be understood as covering all forms of expression which spread, incite, promote or justify racial hatred, xenophobia, anti-Semitism or other forms of hatred based on intolerance, including: intolerance expressed by aggressive nationalism and ethnocentrism, discrimination and hostility against minorities, migrants and people of immigrant origin.

Also relevant here are the provisions of Art. 20, para. 2 of the International Covenant on Civil and Political Rights (ICCPR) of 1966, which stipulate that “any advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence shall be prohibited by law.” An authoritative interpretation of Art. 20, para. 2 is supplied by General Comment No. 34 of the Human Rights Committee, which reads:

What distinguishes the acts addressed in article 20 from other acts that may also be subject to limitations, is that for the acts addressed in article 20, the Covenant indicates the specific response required from the State: their prohibition by law. It is only to this extent that article 20 may be considered as lex specialis with regard to article 19 [which establishes other limitations on freedom of expression]. The acts referred to in article 20, paragraph 2, must cumulatively (a) advocate, (b) be for purposes of national, racial or religious hatred, and, (c) constitute incitement to discrimination, hostility or violence. By “advocacy” is meant public forms of expression that are intended to elicit action or response. By “hatred” is meant intense emotions of opprobrium, enmity and detestation towards a target group. “Incitement” refers to the need for the advocacy to be likely to trigger imminent acts of discrimination, hostility or violence. It would be sufficient that the incitement relate to any one of the three outcomes: discrimination, hostility or violence (para. 51).

This interpretation provides perhaps the fullest, and most useful, elucidation of hate speech – one that does most to capture its particular power to harm. Read in conjunction with modern understandings of the potential of online media to contribute to the dissemination of political views, and to generate and spread ‘hatred’, it casts particularly sharp light, moreover, on how, by enlisting the support of IT companies, the EU is taking a progressive – and legitimate – stand in trying to confront modern hate speech in one of its most threatening forms. 

The Council of Europe: The Protection of Freedom of Expression Over the Fight Against (Online) Hate Speech?

The situation is somewhat different, however, in the case of the other main European organ, the Council of Europe, which appears to be taking a much more cautious approach. The most recent manifestation of this came in the Perinçek case, where the European Court of Human Rights (ECtHR) decided, on 15 October 2015, that Switzerland’s criminal conviction of Doğu Perinçek for genocide denial constituted a violation of Article 10 of the European Convention on Human Rights (ECHR). The Court’s finding here was that the restriction on freedom of expression imposed by the Swiss authorities was not proportionate.

This is but the latest sign of a divergence in the attitudes and responses of the two European organs to the issue of hate speech, reflecting a breach within Europe with regard to the status of hate speech in relation to freedom of expression, the latter itself a fundamental notion of both the ECHR and the Charter of Fundamental Rights of the European Union (Article 11).

The prevention and prohibition of online hate speech has been on the agenda of the Council of Europe since at least 2001, when the Convention on Cybercrime was adopted. In 2003, this was supplemented by an Additional Protocol concerning the criminalisation of acts of a racist and xenophobic nature committed through computer systems. According to this Additional Protocol:

1. Each Party shall adopt such legislative measures as may be necessary to establish the following conduct as criminal offences under its domestic law, when committed intentionally and without right: distributing or otherwise making available, through a computer system to the public, material which denies, grossly minimises, approves or justifies acts constituting genocide or crimes against humanity, as defined by international law and recognised as such by final and binding decisions of the International Military Tribunal, established by the London Agreement of 8 August 1945, or of any other international court established by relevant international instruments and whose jurisdiction is recognised by that Party. […]

In June 2016, however, at the same time that the EU Code of Conduct was adopted, the Council of Europe Secretary General, concerned about internet censorship, insisted that rules for blocking and removing illegal content must be transparent and proportionate. This intervention came after the publication of his report on the state of democracy, human rights and the rule of law, which was based on a study conducted by the Swiss Institute of Comparative Law and identified a number of shortcomings in some states.

In the report, the Secretary General clearly stated that:

In the majority of member states, the legal framework on blocking, filtering and removal of Internet content meets the requirements of being prescribed by law, pursuing legitimate aims and being necessary in a democratic society, in accordance with Article 10 of the Convention. Exceptions remain however, notably with regard to laws regulating hate speech and counter-terrorism (p. 33).

In view of this, one can understand why the Grand Chamber of the ECtHR decided in the Perinçek case that the Swiss criminal provision was disproportionate and did not fulfil the criterion of being necessary in a democratic society. Yet Art. 261bis of the Swiss penal code provides that ‘any person who publicly denigrates or discriminates against another or a group of persons on the grounds of their race, ethnic origin or religion in a manner that violates human dignity, whether verbally, in writing or pictorially, by using gestures, through acts of aggression or by other means, or any person who on any of these grounds denies, trivialises or seeks justification for genocide or other crimes against humanity, […] is liable to a custodial sentence not exceeding three years or to a monetary penalty’. It is difficult to see how a criminal law could be clearer or more transparent than this. The decision to uphold Perinçek’s claim of a violation of Art. 10 ECHR certainly delivered a blow to the fight against hate speech at the EU level – as was duly noted by Judges Spielmann (President of the Grand Chamber), Casadevall, Berro, De Gaetano, Sicilianos, Silvis and Kūris in their joint dissenting opinion:

With regard to the finding that there was no obligation on Switzerland to criminalise the applicant’s statements (see paragraphs 258-68), we confess to having serious doubts as to the relevance of the reasoning. Can it not be maintained, on the contrary, that a (regional) custom is gradually emerging through the practice of States, the European Union (Framework Decision 2008/913/JHA) or ECRI (Policy Recommendation no. 7)? We would also note that beyond Europe, the United Nations Committee on the Elimination of Racial Discrimination has repeatedly recommended criminalising negationist discourse. Can all these developments be disregarded at a stroke by examining the case in terms of an alleged conflict of obligations? (para 10)

It thus seems that the Council of Europe is taking a retrogressive step in the fight against hate speech – both offline and online – since the laws in place regulating hate speech do not appear to be in line with the ECHR. The approach of the Council also stands in opposition to that being taken by the EU, rendering the position of EU Member States difficult: should they criminalise online hate speech, or should they rather grant greater weight to Art. 10 ECHR? Indeed, what if Switzerland were an EU Member State? By criminalising genocide denial as a form of hate speech liable to incite violence, as it initially did in the Perinçek case, Switzerland acted in line with the Council Framework Decision. In so doing, however, it contravened Art. 10 of the ECHR and was found by the ECtHR to have committed a violation.

Conclusion

The question of how to square the protection of freedom of expression with the imposition of criminal sanctions for hate speech is, doubtless, a difficult one. Yet wherever one draws the line between acceptable and unacceptable limits on freedom of expression, it seems apparent that, at the European level, the EU and the Council of Europe should be working together much more coherently in attempting to confront the issue of online (and offline) hate speech.

To this end, the Council of Europe should liaise more closely with the EU – not least as the Secretary General, in his 2016 report, commented that:

In addition to calling on member states to implement in full the recommendations in this report, I urge them to make clear their commitment to the European Convention on Human Rights and the Strasbourg Court. Our Convention system can never be taken for granted: it depends on the active and constructive engagement of all governments. By embedding these fundamental freedoms into the legal, political and social fabric of their nations, Europe’s leaders can build democracies which are more open and inclusive and, as a result, more secure (p. 5)

In order to facilitate a more consistent approach across Europe, it seems clear that the European Court of Human Rights itself has to be prepared to allow for greater restrictions to be placed on freedom of expression, precisely as noted by the judges in their dissenting opinion in the Perinçek case. As long as the Strasbourg Court continues to permit freedom of expression to be used as a catch-all defence, it will remain extremely difficult to combat online hate speech and to develop a common European standard. Two measures thus seem necessary. Firstly, a common understanding of what hate speech is and entails should be sought – the interpretation supplied by the Human Rights Committee in General Comment No. 34 provides useful initial orientation, not least in the manner that it explicates the key notions of ‘incitement’ and ‘hatred’, and in the way that it outlines the possible effects of hate speech beyond physical violence. Secondly, there needs to be common agreement on the way in which such forms of speech threaten democratic values – how they violate ‘the respect of the rights or reputations of others’ and may imperil ‘national security’, ‘public order’ or ‘public health or morals’, and thus fall within the scope of legitimate restrictions on freedom of expression.

Barnard & Peers: chapter 9
JHA4: chapter II:6

Photo credit: European Centre for Press and Media Freedom

3 comments:

  1. Comment from Adrian Hunt, Birmingham Law School, University of Birmingham (part 1)

    Thanks for this thought-provoking piece, Clotilde. I would like to make a comment on the premise of the article that the decision in Perinçek reflects a difference between the EU position as reflected in the Framework Decision [FD] and the position adopted under the Convention.

    It is not at all clear to me - and I cannot see where this article identifies - a clear division between the EU and the ECHR, as regards the relevant rules and principles in a situation such as this.

    The decision in Perinçek was, as this piece explains, based upon proportionality grounds. However, the subsequent critique of it set out in the blog focuses not on proportionality, but rather on the "prescribed by law" requirement, which did not form the basis for the reasoning of the European Court of Human Rights in the case at all. The critique by the Council of Europe Secretary General referred to in the blog post raised concerns about the method of internet censorship, and was primarily premised on the prescribed by law requirement, not the proportionality-type issue considered in Perinçek. This is because some EU countries, when dealing with speech on the internet, have preferred to operate informal processes whereby public authorities contact ISPs etc. expressing concern about particular types of content, “suggesting” the content’s removal, which the ISPs may then often do. The interference with speech in such a way operates unofficially and arguably/obviously therefore presents problems in terms of the prescribed by law requirement.

    1. part 2:

      But that was not the matter at issue in Perinçek, which accepted that the Swiss law did not offend in the prescribed by law sense. Furthermore, Perinçek accepted that each category of offence set out in the FD is capable of being compatible with the Convention.

      Thus, Perinçek accepts that speech (such as genocide denial) which is carried out in a manner likely to incite to violence or hatred may be criminalized without such criminalization being incompatible with the Convention (i.e. the provisions in Article 1(a) to (d) of the Framework Decision are compatible). However, in the instant case, on the facts (for reasons explained below), the court concluded that the speech was not likely to incite violence or hatred.

    2. part 3:


      Perinçek also accepts that measures which criminalise such speech ‘carried out in a manner likely to disturb public order’ (FD Article 1(2)) are also permitted by Article 10 (but in that context this is to be given the narrow meaning denoted by “the prevention of disorder” in the English text, as distinct from the possible wider notion argued for by the Swiss Government on the basis of the French language text). Since the Swiss Government’s specific arguments on this ground were not able to point to the fact that the speech in the particular case was likely to provoke “disorder” in the narrow sense, that part of their argument failed.

      Perinçek also accepts that criminalization of speech of this kind which is threatening, abusive or insulting (FD Article 1(2)) (and which does not incite hatred or provoke public disorder) may be compatible with the Convention, since

      'the negative stereotyping of an ethnic group was capable, when reaching a certain level, of having an impact on the group’s sense of identity and on its members’ feelings of self-worth and self-confidence. It could thus affect their “private life” within the meaning of Article 8 § 1 of the Convention' [Perinçek para 200 - relying on Aksu v. Turkey].

      And the court concluded on the facts that

      ‘statements bore on a matter of public interest and did not amount to a call for hatred or intolerance, that the context in which they were made was not marked by heightened tensions or special historical overtones in Switzerland, that the statements cannot be regarded as affecting the dignity of the members of the Armenian community to the point of requiring a criminal law response in Switzerland, …, that the Swiss courts appear to have censured the applicant for voicing an opinion that diverged from the established ones in Switzerland, and that the interference took the serious form of a criminal conviction – the Court concludes that it was not necessary, in a democratic society, to subject the applicant to a criminal penalty in order to protect the rights of the Armenian community at stake in the present case.’

      You and I may disagree with this assessment but it does not in and of itself signify a difference between the position in EU Law and under the Convention. It is rather a specific decision of a court on its facts. The approach of the dissenting judges does not as such offer a different basis in principle to the majority, but rather indicates that they concluded differently as regards the proportionality of the measures in the light of their reading of the facts.

      Thus, the assertion which the article makes that the ‘Strasbourg Court continues to permit freedom of expression to be used as a catch-all defence’ arguably seriously oversimplifies the process of reasoning adopted in resolving this case, and the many other cases referred to in Perinçek, some of which went one way, and some of which went the other.

      The call which the article therefore makes for some sort of dialogue to arrive at some sort of agreed position seems strange, because it is difficult to discern from the article what it is that the EU and the ECHR differ on, which might be the focus of a dialogue, such as to produce a different rule or set of principles that might definitively lead to outcomes different from the one arrived at in this case. Which part of the FD points to a situation where the conclusion in this case should inevitably lead to a finding that the conviction of Perinçek should be compatible with the Convention? Indeed the FD (Article 7) itself provides that the Framework Decision shall not have the effect of modifying the obligation to respect fundamental rights and fundamental legal principles, including freedom of expression, referring both to Art. 6 TEU and the concept more generally (this would include Article 10 ECHR - see the 14th preambular para).
