A blog on why norms matter online

Tuesday, March 26, 2013

To Delete or Not To Delete Comments - Is that a Question? Worrying Liability Trends for Online Content (I)

Leaving a comment is a great way to interact with an 
article, its author and the broader public. But who 
should be liable if the comment is derogatory? 
(c) Kettemann, 2011
Should Google (or other Internet platform providers) be held liable for content uploaded by users? 

Yes, said an Italian tribunal - even the managers can be held personally liable. 
No, said an Italian higher court. 

Yes, said a British court, if they do not react immediately. 

We see: the question of publisher's liability is a tricky one. Should it lie with the blogger or the company that provides a blogging platform? 

Italian courts (briefly) allow personal (criminal) liability for online content

In September 2006 an individual posted a video on Google Videos that showed the taunting of a disabled child by other children. The video was online for three months before being removed by Google. The authors of the video were prosecuted (after Google provided identifying information), but so were four executives of Google for, as an article in the International Journal of Law and Information Technology has it, “defamation and violation of data protection rules” in the form of “co-participation” and for illicitly processing personal and health data for profit. 

The Tribunale di Milano in 2010 (case no. 1972/2010) passed suspended prison sentences for three of the executives for the data protection violations. 
The tribunal did not find any guilt regarding co-participation in defamation, as Italian legislation at the time did not provide for Internet Service Providers’ liability for negligence regarding the delayed removal of postings. 

After outspoken criticism of the decision, an appeals court, on 21 December 2012, reversed the convictions and acquitted the three men. It argued, inter alia, that 
“[t]he possibility must be ruled out that a service provider, which offers active hosting can carry out effective, pre-emptive checks of the entire content uploaded by its users. […] An obligation for the Internet company to prevent the defamatory event would impose on the same company a pre-emptive filter on all the data uploaded on the network, which would alter its own functionality.”
Or, as Reuters put it in the title of an article reporting on the published judgement on 27 February 2013: 
"Google not expected to check every upload says Italian court". 
Such a pre-emptive filtering system would not only alter the network’s functionality but also violate freedom of expression, at least if such a system was imposed by a state, as the European Court of Justice ruled in SABAM v. Netlog NV (16 February 2012), C-360/10.

If some Google executives could breathe a sigh of relief, others had to worry. 

UK courts confirm publisher's liability for Google

On 14 February 2013, the Court of Appeal of England and Wales ruled, in 
Payam Tamiz v. Google Inc., that Google can be held liable for comments published on Blogger, its online blogging platform, unless it reacts immediately to a complaint.

The appeals judgment reversed a 2012 ruling which had considered, in line with international jurisprudence, that an Internet platform should not be treated as a publisher. 

Google had received complaints regarding certain comments on a blog post and had forwarded them to the blogger, who waited five weeks to delete them. The British NGO Article 19 considered the judgment to be a “serious step back for free speech online”. 

The judgment, in effect, strengthens the notice and takedown system. This system encourages content hosts, such as Google (but also individual bloggers who have activated their comment function), to delete even potentially defamatory material immediately after having been notified, even if the material is not illegal at all. 

This can have negative chilling effects. According to Article 19, this creates a situation where intermediaries will be more likely to censor "perfectly legitimate speech". 

(I'm not sure I agree with the notion of "legitimate" speech. I'd call the speech just 'perfectly legal'). 

Indeed, the ruling is bad news for free speech online, but - given the circumstances of the case (the connection to an election, the long period of five weeks without deletion of the comment) - probably not surprising. 

Future judgements will most likely draw a finer line. 

The fear that intermediaries will be more likely to censor "perfectly legitimate speech" is not a new one - and definitely not one connected only to this judgement. 

Intermediaries have always censored perfectly legitimate speech for a variety of reasons, notably because they want a clean, safe and happy platform on which users stay long, pay attention to ads and, ideally, also spend money. 

The trend, though, is worrying. 

A further worry is the divergence between judgements even within Europe, which is bound by the European Convention on Human Rights and (for almost all EU states) the Charter of Fundamental Rights. 

Strasbourg might want to have its say. And it can. 

For more on that, wait for the next posting.

And by the way: Comments are, as usual, enabled.
