
Learning to Forget: The ECJ's Decision on the Right to be Forgotten and its Implications

Posted by Divij Joshi at Aug 14, 2014 06:00 AM
“The internet never forgets” is a proposition which is equally threatening and promising.

The phrase reflects the dichotomy presented by the internet's extension of the lease on public memory: as information becomes more accessible and more permanent, letting go of the past becomes increasingly difficult. The question of how to govern information on the internet – a space growing ever more important to society, and one that presents a unique social environment – is one that persistently challenges courts and policy makers. A recent decision by the European Court of Justice (ECJ), the highest judicial authority of the European Union, encapsulates the way the evolution of the internet is constantly reshaping our conceptions of individual privacy and the realm of information. On 13 May 2014, the ECJ, in its ruling in Google Spain v Costeja,[1] effectively read a "right to be forgotten" into existing EU data protection law. The right, broadly, provides that an individual may control the information available about them on the web by removing such information in certain situations – known as the right to erasure. In some situations such a right is uncontroversial, for example, the deletion of a social media profile by its user. However, the right to erasure has serious implications for the freedom of information on the internet when it extends to the removal of information not created by the person to whom it pertains.

Privacy and Perfect Memory

The internet has, in a short span, become the biggest and arguably the most important tool for communication on the planet. However, a peculiar and essential feature of the internet is that it acts as a repository and a reflection of public memory – usually, whatever is once made public and shared on the internet remains available for access across the world, without an expiry date. From public information on social networks to comments on blog posts, home addresses, telephone numbers and candid photos, personal information is disseminated all across the internet, perpetually ready for access – and often without the possibility of correcting or deleting what was divulged. The internet is thus now an ever-growing repository of personal data, indexed and permanently filed. This unlimited capacity for information has a profound impact on society and on the shaping of social relations.

The core of the internet lies in its openness, its accessibility and the ease with which information can be shared – almost any information is now a Google search away for any person. The openness of information on the internet prevents history from being corrupted and facts from being manipulated, and enables unprecedented freedom of information. However, these virtues can become a peril when one considers the vast amount of personal data the internet now holds. This "perfect memory" of the internet means that people are perpetually at risk of being scrutinized and tied to their pasts, particularly a generation of users who have been active on the internet since childhood.[2] Consider the example of online criminal databases in the United States, which regularly and permanently host the criminal records of convicted offenders even after their release, accessible to all future employers;[3] or the Canadian psychotherapist who was permanently banned from the United States after an internet search revealed that he had experimented with LSD in his past;[4] or the cases of "revenge porn" websites, which (in most cases legally) publicly host deeply private photos or videos of persons, often along with their personal information, for the specific purpose of causing them deep embarrassment.[5]

These examples show that, because of the radically unrestricted spread of personal data across the web, people are no longer able to control how, by whom and in what context their personal data is viewed. This creates two vulnerabilities: that the data is collectively "mined" for purposes of surveillance, and that individuals, unable to control the way their personal data is revealed online, lose autonomy over that information.

The Right to be Forgotten and the ECJ judgement in Costeja

The problems highlighted above informed both the European Union's draft data protection regulation of 2012, which specifically provides for a right to be forgotten, and the judgement of the European Court of Justice in Google Spain v Mario Costeja González.

The petitioner in this case sought the removal of links relating to attachment proceedings against his property, which appeared upon entering his name in Google's search engine. After Google refused to remove the links, he approached the Spanish Data Protection Agency (the AEPD) for an order directing their removal. The AEPD accepted the complaint against Google Inc. and ordered the removal of the links. On appeal to the Spanish High Court, three questions were referred to the European Court of Justice. The first related to the applicability of the Data Protection Directive (Directive 95/46/EC) to search engines, i.e. whether they could be said to be "processing personal data" under Articles 2(a) and (b) of the directive,[6] and whether they could be considered data controllers under Article 2(d). The court found that, because search engines retrieve, record and organize data and make it available for viewing (as a list of results), they can be said to process data. Further, interpreting the definition of "data controller" broadly, the court found that 'it is the search engine operator which determines the purposes and means of that activity and thus of the processing of personal data that it itself carries out within the framework of that activity and which must, consequently, be regarded as the "controller"'[7] and that 'it is undisputed that that activity of search engines plays a decisive role in the overall dissemination of those data in that it renders the latter accessible to any internet user making a search on the basis of the data subject's name, including to internet users who otherwise would not have found the web page on which those data are published.'[8] The latter reasoning highlights the particular role of search engines, as indexers of data, in increasing the accessibility and visibility of data from multiple sources – the "database" effect, which could allow the structured profiling of an individual – and therefore justifies imposing the same (and even higher) obligations on search engines as on other data controllers, notwithstanding that the search engine operator has no knowledge of the personal data it is processing.

The second question related to the territorial scope of the directive, i.e. whether Google Inc., the parent company based in the US, came within the directive's jurisdiction, which applies only to member states of the EU. The court held that Google Spain, a subsidiary which promotes and sells advertising for the parent company, was an "establishment" of Google Inc. in the EU even though it did not itself carry on the specific activity of processing personal data; and because Google Inc. processed data "in the context of the activities" of that establishment, which were specifically directed towards the inhabitants of a member state (here, Spain), it came within the scope of EU law. The court also reaffirmed a broad interpretation of data protection law in the interests of the fundamental right to privacy, thereby importing policy considerations into its interpretation of the directive.[9]

The third question was whether Google Spain was in breach of the Data Protection Directive, specifically Articles 12(b) and 14(1)(a), which provide that a data subject may object to the processing of data by a data controller, and may enforce such a right against the controller, as long as the conditions for removal are met. The reasoning for enforcing such a claim against search engines in particular can be found in paragraphs 80 and 84 of the judgement, where the court holds that "[a search engine] enables any internet user to obtain through the list of results a structured overview of the information relating to that individual that can be found on the internet — information which potentially concerns a vast number of aspects of his private life and which, without the search engine, could not have been interconnected or could have been only with great difficulty — and thereby to establish a more or less detailed profile of him" and that "given the ease with which information published on a website can be replicated on other sites and the fact that the persons responsible for its publication are not always subject to European Union legislation, effective and complete protection of data subjects could not be achieved if the latter had to obtain first or in parallel the erasure of the information relating to them from the publishers of websites." In fact, the court appears to apply a higher threshold to search engines because of their peculiar nature as indexes and databases.[10]

Under the court's conception of the right of erasure, search engines are mandated to remove content upon request by individuals when the information is deemed to be personal data that is "inadequate, irrelevant or excessive in relation to the purposes of the processing, that they are not kept up to date, or that they are kept for longer than is necessary unless they are required to be kept for historical, statistical or scientific purposes",[11] notwithstanding that the publication itself is lawful and causes no prejudice to the data subject. The court reasoned that data qualifying on any of the above grounds would violate Article 6 of the directive, which requires that data be processed "fairly and lawfully", that they be "collected for specified, explicit and legitimate purposes and not further processed in a way incompatible with those purposes", that they be "adequate, relevant and not excessive in relation to the purposes for which they are collected and/or further processed", that they be "accurate and, where necessary, kept up to date" and, finally, that they be "kept in a form which permits identification of data subjects for no longer than is necessary for the purposes for which the data were collected or for which they are further processed".[12] The court therefore held that, owing to the nature of the information, the data subject has a right to no longer have such information linked to his or her name in a list of results following a search made on that name. The grounds laid down by the court – relevancy, inadequacy, and so on – are very broad, yet such a broad conception is necessary to deal effectively with problems of the nature described above.

The judgement of the ECJ concludes by applying a balancing test between the rights of the data subject, on one hand, and the economic interests of the data controller and the general right of the public to information, on the other. It states that, as a general rule, so long as the information meets the criteria laid down by the directive, the right of the data subject trumps both of these rights. However, it adds an important caveat: the right may not apply where the balance depends, "in specific cases, on the nature of the information in question and its sensitivity for the data subject's private life and on the interest of the public in having that information, an interest which may vary, in particular, according to the role played by the data subject in public life." This crucial point – the balancing of the two rights most directly affected by the judgement – was dealt with only summarily by the ECJ, without any real clarity as to what standards apply, and without laying down specific guidelines for the application of the new rule.[13] In doing so, the court effectively left the decision of what is in the public interest, and of how the rights are to be balanced, to the search engines themselves. Delegating such a task to a private party detracts from the idea of the internet as a common resource to be developed for the benefit of the larger internet community as a whole, by allowing it to be governed and controlled by private stakeholders, and therefore paves an uncertain path for this crucial aspect of internet governance.

Implications of the ECJ ruling

The decision has far-reaching consequences for both privacy and freedom of information on the internet. Google began implementing the decision through a form submission process, which requires individuals to specify which links to remove and why, and verifies via photo identification that the request comes from the individuals themselves; it has also constituted an expert panel to oversee implementation (similar to its process for removing links that infringe copyright law).[14] Google has since received more than 91,000 removal requests, pertaining to 328,000 links, of which it has approved more than half.[15] Given such large volumes of requests, the practical implementation of the ruling has inevitably been problematic. The implementation has been criticized both for impinging on free speech on the internet and for disregarding the spirit of the right to be forgotten. On the first count, Google has been criticized for taking down several links whose public availability is clearly in the public interest, including several opinion pieces on politicians and corporate leaders, which amounts to censorship of a free press.[16] On the second count, EU privacy watchdogs have criticized Google's decision to notify the sources of removed content, which prompts further speculation on the issue; privacy regulators have also challenged Google's position that removals are restricted to the localised versions of its websites, since the same content can be accessed through any other version of the search engine, for example by switching over to "Google.com".[17]

This second question also raises complicated issues about the standards of free speech and privacy that should apply on the internet. If the EU wishes Google Inc. to remove all links from all versions of its search engine, it is, in essence, applying a balancing of privacy and free speech peculiar to the EU (one that evolved from a specific historical and social context, and from laws emerging out of the EU) across the entire world – a balance radically different from the standard applicable in, for example, the USA or India. Therefore, although the judgement seeks in spirit to protect individual privacy, the vagueness of the ruling and the lack of guidelines have had enormous negative implications for the freedom of information. In light of these problems, the uproar in the two months since the decision is unsurprising, especially among the news media sites most affected by the ruling. However, the faulty application of the ruling does not necessarily mean that the right to be forgotten is a concept which should be buried. Proposed solutions such as archiving of data, or limited restrictions instead of erasure, may help maintain a balance between the two rights.[18] EU regulators hope to end the confusion by drafting comprehensive guidelines for search engines, pursuant to meetings with various stakeholders, which should come out by the end of the year.[19] Until then, the confusion will most likely continue.

Is there a Right to be Forgotten in India?

Indian law is notorious for its lackadaisical approach towards both freedom of information and privacy on the internet. The law, mostly governed by the Information Technology Act, is vague and broad, and the essence of most of it is contained in rules enacted by non-legislative bodies pursuant to various sections of the Act. A "right to be forgotten" in India can probably be located within this framework, specifically under Rule 3(2) of the Information Technology (Intermediaries Guidelines) Rules, 2011, framed under Section 79 of the IT Act. Under this rule, intermediaries may be held liable for hosting content which is "invasive of another's privacy". Read with the broad definitions of "intermediary" (which specifically includes search engines) and of "affected person" under the same framework, the applicable law for the takedown of online content is far broader and vaguer than the standard laid down in Costeja. It remains to be seen whether the EU's interpretation of privacy and the "right to be forgotten" will deepen the chilling effect already caused by these rules.


[1] Google Spain v Mario Costeja González, C‑131/12, available at http://curia.europa.eu/juris/document/document.jsf?text=&docid=152065&pageIndex=0&doclang=en&mode=req&dir=&occ=first&part=1&cid=264438.

[2] See Viktor Mayer-Schönberger, Delete: The Virtue of Forgetting in the Digital Age (Princeton, 2009).

[3] See, for example, http://mugshots.com/ and http://www.peoplesearchpro.com/resources/background-check/criminal-records/

[4] LSD as Therapy? Write about It, Get Barred from US (April 2007), available at http://thetyee.ca/News/2007/04/23/Feldmar/

[5] It's nearly impossible to get revenge porn off the internet (June 2014), available at http://www.vox.com/2014/6/25/5841510/its-nearly-impossible-to-get-revenge-porn-off-the-internet

[6] Article 2(a) - “personal data” shall mean any information relating to an identified or identifiable natural person (“data subject”); an identifiable person is one who can be identified, directly or indirectly, in particular by reference to an identification number or to one or more factors specific to his physical, physiological, mental, economic, cultural or social identity;

Article 2(b) - "processing of personal data" ("processing") shall mean any operation or set of operations which is performed upon personal data, whether or not by automatic means, such as collection, recording, organisation, storage, adaptation or alteration, retrieval, consultation, use, disclosure by transmission, dissemination or otherwise making available, alignment or combination, blocking, erasure or destruction;

[7] ¶36, judgment.

[8] The court also recognizes the implications on data profiling through the actions of search engines organizing results in ¶37.

[9] ¶74, judgment.

[10] In ¶83, the court notes that processing by a search engine affects the data subject in addition to the publication on a webpage; ¶87: "Indeed, since the inclusion in the list of results, displayed following a search made on the basis of a person's name, of a web page and of the information contained on it relating to that person makes access to that information appreciably easier for any internet user making a search in respect of the person concerned and may play a decisive role in the dissemination of that information, it is liable to constitute a more significant interference with the data subject's fundamental right to privacy than the publication on the web page."

[11] ¶92, judgment.

[12] ¶72, judgment.

[13] ¶81, judgment.

[14] The form is available at https://support.google.com/legal/contact/lr_eudpa?product=websearch

[15] Is Google intentionally overreacting on the right to be forgotten? (June, 2014), available at http://www.pcpro.co.uk/news/389602/is-google-intentionally-overreacting-on-right-to-be-forgotten.

[16] Will the right to be forgotten extend to Google.com?, (July, 2014), available at http://www.pcpro.co.uk/news/389983/will-right-to-be-forgotten-extend-to-google-com.

[17] The right to be forgotten is a nightmare to enforce, (July, 2014), available at http://www.forbes.com/sites/kashmirhill/2014/07/24/the-right-to-be-forgotten-is-a-nightmare-to-enforce.

[18] Michael Hoven, Balancing privacy and speech in the right to be forgotten, available at http://jolt.law.harvard.edu/digest/privacy/balancing-privacy-and-speech-in-the-right-to-be-forgotten#_edn15

[19] EU poses 26 questions on the right to be forgotten, (July, 2014), available at http://www.cio-today.com/article/index.php?story_id=1310024135B0
