A blog on why norms matter online

My photo

I'm a Post-Doc Fellow at the Cluster of Excellence "Normative Orders" of the University of Frankfurt and a lecturer at the Institute of International Law of the University of Graz, Austria. I've studied international law in Graz, Geneva and at Harvard Law School. I enjoy thinking and writing about Internet Governance and discussing and shaping the future of the Internet.

Wednesday, March 27, 2013

To Delete or Not To Delete Comments - Is that a Question? Worrying Liability Trends for Online Contents (II)

Leaving a comment is a great way to interact with an 
article, its author and the broader public. But who 
should be liable if the comment is derogatory? 
(c) Kettemann, 2012
In the last blog post I argued that diverging liability judgements can lead to insecurity and hyper-sensitive intermediaries who will delete content even if it is not illegal. 

I called upon Strasbourg's European Court of Human Rights to provide some guidance. It has (some time back). And it will again.

But let's go back first. In 1999, the European Court of Human Rights could confidently claim in Sürek v. Turkey (1999) that the owner of a journal was responsible for having published aggressively written letters to the editor, even if he had not personally associated himself with them. His conviction did not violate Article 10 because the letters contained threats against particular individuals.

Extending Sürek to Internet intermediaries would mean burdening them with an impossible task. 

Imagine: Google would be responsible for all comments on all sites of all of its services.

Clearly, however, Internet intermediaries are not the prima facie editors of the information contained on their sites.

Even the webmaster of the site of an organization is not necessarily responsible for all content published on that site, as the case of Renaud v. France (2010) shows, where the Court deemed exaggerated the conviction of a webmaster for remarks published, within an emotional public debate, on the association’s site.

This sounds promising if one wants to make the case against publisher's liability for Internet intermediaries based on jurisprudence from Strasbourg.

But this is not the end of the story.  

There is still a pending case, Delfi AS v. Estonia (communicated in 2011), which has the potential for trouble on the liability front. 

In that case the operators of an Internet news portal were held responsible in national courts for defamatory comments posted by a non-identifiable user below an article. Commenting was possible through a non-moderated system, though technology was in place to delete messages at the request of third parties and to filter out certain language. The portal deleted the impugned comments without delay but was nevertheless convicted. 

Applying Renaud mutatis mutandis would make the Court's decision in Delfi AS seem like a foregone conclusion. 

The Court should clarify, when deciding Delfi, what limits can be set for Internet intermediaries, just as it has so admirably shown the limits of state censorship of Google sites in Yildirim v. Turkey.

Limiting the ex ante content moderation obligations of Internet intermediaries is essential for keeping the flow of ideas on the Internet open. Navigating between state laws and its own content moderation rules is often difficult for international Internet intermediaries, and especially social networking sites, who are faced with conflicting demands and threats by states to disallow access altogether in case of non-removal of impugned information.

That a wholesale ban of an entire service in reaction to illegal content on a certain site violates Article 10 ECHR (freedom of expression) has been confirmed in Yildirim as well.

Attempts by some states, such as India, to oblige Internet intermediaries to pre-censor content have been met with strong international opposition. The Internet thrives on openness and the quick and free exchange of ideas. Therefore the responsibilities of Internet Service Providers cannot be understood to extend to ex ante moderation. 

Distinguishing Sürek and relying on Renaud – this is what Delfi AS can be expected to come down to.

That decision would also allow us to assess more clearly national liability decisions and help develop a trans-European liability regime - or rather, hopefully, a liability-minimizing and liberty-maximizing regime. 

Tuesday, March 26, 2013

To Delete or Not To Delete Comments - Is that a Question? Worrying Liability Trends for Online Contents (I)

Leaving a comment is a great way to interact with an 
article, its author and the broader public. But who 
should be liable if the comment is derogatory? 
(c) Kettemann, 2011
Should Google (or other Internet platform providers) be held liable for content uploaded by users? 

Yes, said an Italian tribunal - even the managers can be held personally liable. 
No, said an Italian higher court. 

Yes, said a British court, if they do not react immediately. 

We see: the question of publisher's liability is a tricky one. Should it lie with the blogger or the company that provides a blogging platform? 

Italian courts (briefly) allow personal (criminal) liability for online content

In September 2006 an individual posted a video on Google Videos that showed the taunting of a disabled child by other children. The video was online for three months before being removed by Google. The authors of the video were prosecuted (after Google provided identifying information), but so were four executives of Google for, as an article in the International Journal of Law and Information Technology has it, “defamation and violation of data protection rules” in the form of “co-participation” and for “illicitly processing personal and health data for profit”. 

The Tribunale di Milano in 2010 (case no. 1972/2010) passed suspended prison sentences on three of the executives for the data protection violations. 
The tribunal did not find any guilt regarding co-participation in defamation, as the Italian legislation in force did not provide for Internet Service Providers’ liability for negligence regarding the delayed removal of postings. 

After outspoken criticism of the decision, an appeals court, on 21 December 2012, reversed the convictions and acquitted the three men. It argued, inter alia, that 
“[t]he possibility must be ruled out that a service provider, which offers active hosting can carry out effective, pre-emptive checks of the entire content uploaded by its users. […] An obligation for the Internet company to prevent the defamatory event would impose on the same company a pre-emptive filter on all the data uploaded on the network, which would alter its own functionality.”
Or, as Reuters put it in the title of an article reporting on the published judgement on 27 February 2013: 
"Google not expected to check every upload says Italian court". 
Such a pre-emptive filtering system would not only alter the network’s functionality but also violate freedom of expression, at least if such a system was imposed by a state, as the European Court of Justice ruled in SABAM v. Netlog NV (16 February 2012), C-360/10.

If some Google executives could breathe a sigh of relief, others had to worry. 

UK courts confirm publisher's liability for Google

On 14 February 2013, the Court of Appeal of England and Wales ruled, in 
Payam Tamiz v. Google Inc., that Google can be held liable for comments published on Blogger, its online blogging platform, unless it reacts immediately to a complaint.

The appeals judgment reversed a 2012 ruling which had considered, in line with international jurisprudence, that an Internet platform should not be treated as a publisher. 

Google had received complaints regarding certain comments on a blog post and had forwarded them on to the blogger, who waited five weeks to delete them. The British NGO Article 19 considered the judgment to be a “serious step back for free speech online”.

The judgment means, in effect, that the notice and takedown system is strengthened. This system encourages content hosts, such as Google (but also individual bloggers who have activated their comment function), to delete even potentially defamatory material immediately after being notified, even if the material is not illegal at all. 

This can have negative chilling effects. According to Article 19, this creates a situation where intermediaries will be more likely to censor "perfectly legitimate speech". 

(I'm not sure I agree with the notion of "legitimate" speech. I'd call the speech just 'perfectly legal'). 

Indeed, the ruling is bad news for free speech online, but - given the circumstances of the case (the connection to an election, the long period of five weeks without deletion of the comment) - probably not surprising. 

Future judgements will most likely draw a finer line. 

The fear that intermediaries will be more likely to censor "perfectly legitimate speech" is not a new one - and definitely not one connected only to this judgement. 

Intermediaries have always censored perfectly legitimate speech for a variety of reasons, notably because they want a clean, safe and happy platform on which users stay long, pay attention to ads and, ideally, also spend money. 

The trend, though, is worrying. 

And what is further worrying is the divergence between judgements even within Europe, which is bound to the European Convention on Human Rights and (for almost all EU states) the Fundamental Rights Charter. 

Strasbourg might want to have its say. And it can. 

For more on that, wait for the next posting.

And by the way: Comments are, as usual, enabled.

Sunday, March 24, 2013

Looking back to look ahead: In 1993, the Internet was “suddenly the place to be”

Going back in time can lead to 
interesting insights. Be it on the 
challenges facing the Internet or the 
advantages of growing a beard.
In my research for a book on Freedom of Expression and the Internet that I’m co-authoring for the Council of Europe, I came across an article published 20 years ago that takes us back in time: 

Philip Elmer-Dewitt, First Nation in Cyberspace. Twenty million strong and adding a million new users a month, the Internet is suddenly the place to be, TIME International, 6 December 1993, no. 49, available online thanks to – of all places – the chemistry department at FU Berlin.

In 1993, Time magazine ran an article on the emergence of the Internet. It seems to come from a completely different world. “Suddenly the Internet is the place to be,” Time writes,

“American college students are queuing up outside computing centers to get online. Executives are ordering new business cards that show off their Internet addresses. Millions of people around the world are logging on to tap into libraries […]. Even the U.S. President and Vice President have their own Internet accounts.”
Imagine that: Students are queuing up to get online. Today they will be angry if the WLAN is slow. And they will only queue up to get new devices to go online.

What we consider today to be one of the key features of the Internet, namely the ubiquity of information and its uncoordinated, decentralized provision, was a major issue 20 years ago. Time again:

“But the Internet is not ready for prime time. There are no TV Guides to sort through the 5,000 discussion groups or the 2,500 electronic newsletters or the tens of thousands of computers with files to share.”

Oh dear: there is no one ‘guide’ to the Internet.

Back in 1993, companies were not yet active online. The Internet, as Time wrote, 
“will have to go through some radical changes before it can join the world of commerce. […] It does not take kindly to unsolicited advertisements; use electronic mail to promote your product and you are likely to be inundated with hate mail […] ‘It's a perfect Marxist state, where almost nobody does any business,’ says [University of Pennsylvania information science professor] Farber. But at some point that will have to change.”
As we all know, this has indeed changed substantially. Now, everybody does business online. And hate mail is no longer sent to spammers; indeed, they would probably appreciate that as it would signal that a spammed e-mail account was active.

Yet all was not well in 1993’s Internet. Early on, it contained speech that was deemed problematic:

“People […] may be in for a shock. Unlike the family-oriented commercial services, which censor messages they find offensive, the Internet imposes no restrictions. Anybody can start a discussion on any topic and say anything.”
Imagine that: Anybody can say anything. We know, of course, that it is not true. Laws (e.g. against hate speech) that apply offline also apply online. They may just be more difficult to enforce.

But even twenty years later this general right of anybody to “start a discussion on any topic and say anything” remains at the center of the right to freedom of expression online. A lot has changed in two decades, but free speech continues to fuel the Internet as a catalyst for human rights.

The Internet, as far as it can be personalized as ‘The Internet’, supports human rights protection online through its foundational principles, including net neutrality, the open architecture of the network and the end-to-end principle. As Internet activist John Gilmore put it in the Time article: “The Net interprets censorship as damage and routes around it”.

A growing number of states apply national policies to the Internet that limit Internet freedom and destroy in part or in whole the potential of the Internet as a catalyst for change and for reaching a higher level of human rights protection.

In retrospect, 1993 – though it was two years after the introduction of the World Wide Web in 1991 – seems like a long time ago. But we should pay attention: We do not know what the future holds.

The speed at which the Internet develops intensifies; a version of Moore’s Law is applicable not only to data processing but to data availability as well. We do not know what challenges will exist for freedom of expression in one year, five years or 20 years.

Four lessons can be drawn from the Time article:

  1. The technological innovations of the future are impossible to predict. 
  2. What seems exciting, revolutionary and new can – in retrospect – look tiny, puny and unimportant. 
  3. To understand the key challenges of today, it makes sense to go back in time. 
  4. Technologies change, but law lasts. 
The standards developed by the European Court of Human Rights (and its institutional predecessor, the Commission) over more than 60 years hold true today and will hold true, with some adaptations, tomorrow. The core of that standard is the commitment to safeguarding freedom of expression and accepting interferences only when they are legal, pursue a legitimate goal and are necessary and proportionate with regard to the goal pursued.
As a post-script: If you liked the Time article, you’ll love this interview, also from 20+ years ago, with Isaac Asimov, who talks in glowing terms about the potential of the Internet. Everyone can have access to all human knowledge, he says. “Every student has his or her private school and it belongs to them. […] They can be dictators of what they want to study.” 

If I had only known that back in 1993, sitting in school at 10, fidgeting because I was looking forward to soccer practice.

Friday, March 22, 2013

Does multistakeholderism make decisions more legitimate?

The involvement of stakeholders in normative processes
has an impact on the legitimacy of their outcome.
There is an interesting discussion going on, on a list I am a member of, about the true meaning of multistakeholderism and its relationship to legitimacy.

In a post, Mike Gurstein set out to defend multistakeholder processes as a framework of decision-making, but not as a means to - necessarily - increase legitimacy.

He writes:
"Multistakeholder processes could and should enhance democracy by increasing opportunities for effective participation by those most directly impacted by decisions and particularly those at the grassroots who so often are voiceless in these processes"
"To do this means shifting away from multistakeholderism as a “means of legitimation” to being one among many strategies for making democracy more workable in this era of enhanced communications, enhanced interactivity and accelerated change."
While I agree with Mike on the importance of enhancing democratic participation in the development of norms, I feel that the legitimating dimension of multistakeholder processes may be underestimated.
I've written at length on the relationship of multistakeholderism and legitimacy in my recent book, but I'll restate my points here.
Building on Thomas M. Franck, The Power of Legitimacy Among Nations (Oxford: Oxford University Press, 1990), I argue that how legitimate a norm is can be measured by referring to its determinacy (ascertainable normative content), symbolic validation through an authority figure, coherence, and adherence to a broader system of rules.

These legitimacy criteria can be refined and regrouped for application with regard to the law of Internet Governance. 

I've suggested in my thesis that an International Internet norm is legitimate if it meets a formal and a material legitimacy requirement:
- formally, it needs to be symbolically validated through its emergence in a multi-stakeholder process (the input and throughput dimension of legitimacy),
- materially, it needs to be determinate enough for its purpose (thus allowing for non-binding instruments), cohere with the Internet’s core principles and be consonant with the values of Internet Governance, and adhere systematically to the broader normative system of Internet Governance (the output dimension of legitimacy).
Multistakeholderism provides a strong legitimation base for norms flowing out of representative and inclusive norm-making processes because of the triad of legitimating sources: the three key stakeholder groups (states, the private sector, and civil society).

Multistakeholderism as an approach is thus the best approximation of an ideal discourse we have. And an ideal discourse on norms is what we should strive for, because the norms developed in such a discourse are legitimate in light of the criteria developed above. 

One example of that approach (and the consequences of ignoring it) is ACTA.

One of the main arguments brought forth by civil society against ACTA was that it was debated in secret without civil society involvement. The European Commission argues that this was untrue, but it was - also for reasons of EU competence - mainly a Commission- and state-led exercise. 

I conclude in my book that any multistakeholder approach must ensure equilibrium between the actors and their normative inputs to the greatest extent possible. Therefore, the provision of clear procedural rules on how different stakeholders can contribute is necessary. Developing this on an international level is one of the more important challenges international law will face in the years to come.

By now, Internet Governance Law has developed to a point where individuals have a heightened expectation of legitimacy. There is an expectation of consultation with stakeholder groups; and there are – in certain areas of norm production – corresponding commitments to multistakeholderism by governments. These go back to the World Summit on the Information Society and have been reified in the declarations of rights and principles. 

Even though the European Commission was able to show that it had consulted other stakeholders (but barely so) and that the European Parliament was involved (to a limited degree) in the review of the results of the ACTA negotiations, this was not perceived to be enough by certain civil society forces who, motivated by the emotionalizing power of an envisaged ‘assault’ on the Internet, organized a powerful movement against ACTA. This campaign ultimately led the norm entrepreneurs – states – to hold back from signing and ratifying ACTA. That ACTA included certain multistakeholder elements, though it was led by the Commission and thus could only claim technocratic-rational legitimation, did not sufficiently allow for an actualization of the expectation of legitimacy with regard to the normative output.

The implication for international treaty negotiations is this: There is a certain consonance between the post-interposition character of a regime and the level of multistakeholder participation expected by the community. The more individual-centric a regime traditionally is (or the greater individuals feel their involvement should be), the higher the level of multistakeholder participation must be provided for, for both forces to be in consonance. In civil society’s view, the result of the ACTA negotiations exhibited legitimatory dissonance. 

The integration of all stakeholders is essential for discovering, in the pre-normative phase, the challenges that regulatory attempts need to overcome and the regulatory demand they set out to answer. The multi-stakeholder approach, therefore, to which the international community is firmly committed with regard to Internet Governance law, has serious implications for the way in which international treaties should (and will) be negotiated in the future. 

Thursday, March 21, 2013

Al(i)as! No Right to Pseudonymity?

Can social media users request anonymity from
social media companies? A German Data
Protection Office thinks so. (c) Kettemann 2011
Wolfgang Benedek and I have been invited to write a book for the Council of Europe on "Freedom of Expression and the Internet". As we were finalizing the manuscript I was struck again by the breadth of human rights challenges online. It seems that every day brings new decisions, new directions, new answers (but also new questions). 

In the few days since we handed in our manuscript, for instance, new developments occurred in the French #UnBonJuif case, and Microsoft followed Google and Twitter in releasing its transparency report on law enforcement requests.

Privacy on social networks is valued deeply by some and considered superfluous by others (or at least their careless approach to personal data suggests as much). 

The NY Times reported with regard to Skype that 
"In 4,713 cases last year, Microsoft disclosed administrative details of Skype accounts — like a user’s Skype ID, name, e-mail address and billing information, as well as call detail records if a person subscribed to a Skype service that connects to a telephone number. But Microsoft said it had released no content from Skype transmissions last year. It has said that the peer-to-peer nature of Skype’s Internet conversations means the company does not store and has no access to past conversations."
This leads to the question of how we can protect our privacy on social networks. One approach is anonymity or pseudonymity. Social media service providers dislike both, because they make interacting with users (and personalizing ads) more difficult. For them, an identifiable user is a more valuable user.

But Facebook's real name policy also leads to interesting legal questions, especially since the regional data protection office of the German state of Schleswig-Holstein started an initiative to safeguard freedom of expression online.

Back in December 2012, the Office ordered Facebook to change its real name policy and allow for the use of pseudonyms. 

The Office based its arguments on para. 13 (6) of the German Telemediengesetz (TMG; Telemedia Act) which obliges online service providers 
“to enable the anonymous or pseudonymous use of telecommunications media […], as far as technically possible and reasonable”. (my translation)
According to the Office, the German legislation is compliant with European law and serves to protect “in particular the fundamental right to freedom of expression on the Internet”. 

Though identity theft and abuse of social networks are problems, the real name obligation does not prevent them effectively. Therefore, the Office concluded, “[t]o ensure the data subjects' rights and data protection law in general, the real name obligation must be immediately abandoned by Facebook”. 

Facebook did not go down without a fight.

Two months after the decision by the Data Protection Office, on 14 February 2013, the Upper Administrative Court of the German state of Schleswig-Holstein agreed to suspend the ruling of the Office on the grounds that German data protection law was not applicable, as the relevant collection of data takes place in Ireland (where Facebook Ltd. is incorporated).

The Office announced that it would appeal against the suspension.

The two decisions raise the larger issue of how international Internet companies should react to different standards in national and regional decisions and legislation. It is important to clarify that certain standards have to be met and that international human rights commitments, and especially the commitment to freedom of expression online, are respected. It also raises the question of how to ensure that an authoritative standard of interpretation of freedom of expression, as developed by the ECtHR, can be translated for local and regional offices and judiciaries.

As I wrote earlier, human rights-related developments online happen quickly. A great overview is provided by the Internet & Jurisdiction project. Both their 2012 annual report and their summary of the key trends they see emerging are worth reading.