
Hate speech, and the role played by internet service providers, such as social media companies, in its regulation, have been the subject of much discussion. We examine the heightened regulatory activity concerning hate speech online and the potential encroachment on the traditional immunities of online intermediaries for user-generated content.

Immunity for user-generated content

Internet service providers, as ‘mere conduits’ and ‘hosts’ of information, generally have legal immunity for user-generated content posted on their sites. This qualified guarantee of immunity has contributed to the development of the internet and social media around the world. In the EU, this protection is provided by Articles 12 to 15 of the eCommerce Directive 2000 and, in the US, by Section 230 of the Communications Decency Act of 1996.

Generally, to maintain such immunity, an internet service provider must block or disable access to unlawful speech or information once made aware of it, often within a strict time frame. This obligation puts the service provider in a conflicted position. As a facilitator of communication, treating all users, and the content they post, equally is an important aspect of its business. The requirement to remove unlawful speech, and at such speed, may at times undermine its efforts at due process and neutrality.

Boundaries of speech

For a global internet service provider, the unlawful speech regulations of more than 180 different jurisdictions complicate those efforts. Insulting Thai royalty, an offence in Thailand, would not be considered harmful or hateful communication in South Africa. However, the site may be expected to comply with Thai law if it enables users to access the site in Thailand. Accordingly, the site may disable access to the unlawful speech or information only for users located in that country (here, Thailand), leaving it accessible elsewhere.

However, if a user’s access to unlawful speech raises a national security concern, or the content is subject to court-ordered removal, the platform may be required (by the courts or a national security body) to delete the content not just locally but from the site entirely. Consequently, if the company complies with this requirement, users in one country become subject to another country’s unlawful speech laws.

Contrasting approaches to hate speech

Even countries with a close historical relationship, such as the US and Ireland, hold disparate views on hate speech and its regulation.

In the US, hate speech is typically protected under the First Amendment to the US Constitution, which states that “Congress shall make no law . . . abridging the freedom of speech, or of the press.” While the government may impose limits on certain categories of speech, such as obscenity or ‘fighting words’, it may not regulate speech solely because of its hateful or racist content. As the Supreme Court recently put it, speech “that demeans on the basis of race, ethnicity, gender, religion, age, disability, or any other similar ground is hateful; but the proudest boast of our free speech jurisprudence is that we protect the freedom to express ‘the thought that we hate’” (Matal v Tam, quoting Justice Holmes’s 1929 dissent in United States v Schwimmer).

The recent Supreme Court decision in Matal v Tam confirms this approach. There, the Supreme Court held that the ‘disparagement clause’ of the Lanham Act, under which the US Patent and Trademark Office would refuse to register a trademark that ‘disparage[d] . . . or br[ought] . . . into contempt or disrepute’ any ‘persons, living or dead’, violated the First Amendment’s protection of free speech. In that case, a rock band, the Slants, had sought to register their name as a trademark but were refused by the examining attorney on the basis that the mark would likely be found offensive to people of Asian descent.

In Ireland, the guarantees of freedom of expression are more qualified. The State guarantees liberty for the exercise of the right to express convictions and opinions freely, but subject to public order and morality. Publication of blasphemous, seditious, or indecent matter is also an offence. Historically, the courts have shown substantial deference to the Irish parliament, known as the Oireachtas, on questions of government restriction of speech.

Unlike in the US, hate speech is not protected in Ireland. The Prohibition of Incitement to Hatred Act 1989 makes it a criminal offence to publish, distribute or broadcast material that incites hatred. ‘Hatred’ is defined to mean “hatred against a group of persons in the State or elsewhere on account of their race, colour, nationality, religion, ethnic or national origins, membership of the travelling community or sexual orientation.” So far, few prosecutions have occurred under the Act.

Focus on hate speech online

Recent initiatives taken by the Irish government to target hate speech and harassment online may signal an increase in enforcement. This increased activity may affect the many US internet services companies that have established their international headquarters in Ireland.

Minister for Communications Denis Naughten recently proposed appointing a Digital Safety Commissioner. The proposed office would have the authority to enforce stronger measures requiring regulated service providers to take down offensive content more quickly. Ireland has also recently published a draft bill, the Harassment, Harmful Communications and Related Offences Bill 2017. On a plain reading, the Bill’s scope of application is broad. In its present draft, the Bill does not address the liability, or otherwise, of an internet intermediary in the transmission of hate speech, so the position of internet intermediaries under the Bill is currently unclear.

Neighbouring EU countries are also taking a more proactive legislative approach to targeting hate speech online. A 2016-17 House of Commons Committee report, ‘Hate crime: abuse, hate and extremism online’, recommended that the UK government consult on a system of escalating sanctions with fines for certain companies which fail to remove illegal content within a strict timeframe.

The German government has also backed a draft law that would fine certain internet services companies up to €50 million if they do not remove illegal hate speech within 24 hours of a complaint being made.

Similar legislation is emerging in other parts of the world. In Vietnam, frequently visited sites that do not take down hate speech within 48 hours risk being blocked at a national level.

Potential narrowing of online intermediary immunity

Coinciding with the increased regulatory focus on hate speech online is the possible narrowing of an intermediary’s immunity for user-generated content.

Last year, the EU Commission publicly communicated that, while it remains committed to the existing liability regime for internet intermediaries, it will seek to encourage more effective self-regulation by them.

The recent judgment of the European Court of Human Rights (“ECtHR”) in Delfi AS v Estonia also appears to support a narrowing of the scope of intermediary immunity for user-generated content, and of the entities that may avail of it. There, the Estonian Supreme Court had found that a news portal could not be considered an intermediary entitled to immunity under the eCommerce Directive. Accordingly, the news portal was liable for user-generated content despite promptly taking it down upon notice. The ECtHR upheld that outcome, finding that the decision did not violate the news portal’s right to freedom of expression under Article 10 of the European Convention on Human Rights.

Whether the Delfi decision and recent Commission statements will have limited impact, or instead signal a narrowing of the scope of intermediary immunity, is unclear.

Change for internet services companies?

Given the potential changes in the regulation of hate speech, internet services companies will need to invest more in content moderation, whether carried out by individuals or on an automated basis. It remains to be seen how proactive such intermediaries will be expected to be, but the existing immunities do appear to be narrowing.

For more information, contact a member of our Technology team.

The content of this article is provided for information purposes only and does not constitute legal or other advice.
