'The Pathway to Google Spain': Jef Ausloos
Duration: 14 mins 50 secs
About this item
Description: Jef Ausloos, KU Leuven, delivers the second lecture from the "The Pathway to Google Spain" section of the "EU Internet Regulation After Google Spain" conference.
This conference was held at the Faculty of Law, University of Cambridge on 27 March 2015, and brought together leading experts on Data Protection and Privacy from around the world. The conference was held with the support of the Centre for European Legal Studies (CELS). This entry provides an audio source for iTunes U.
Created: 2015-04-14 11:28
Collection: EU Internet Regulation After Google Spain: Conference 2015 MOVED
Publisher: University of Cambridge
Copyright: Jef Ausloos, Mr D.J. Bates
Language: eng (English)
Transcript:
So let me start by thanking David and Julia for inviting me here today. As a matter of fact, two years ago I was here as well, talking about the exact same case at the time the hearing had just taken place, and it's great to see that, at least from an academic's perspective (maybe not from an industry perspective), there's still so much attention to all these issues.
So earlier this month some of you might have seen this big live-streamed debate in New York between Paul Nemitz, Jonathan Zittrain, and two others. I was very surprised at the time that even between these people, but also in many other debates, there are still people mixing up different issues, different concepts, different rights. They're not talking about the same stuff, actually. So many misunderstandings are injected into the debate, which often does not make for a constructive debate. That's why I thought I'd take this opportunity at the beginning of the day to clarify and delineate some of the issues, so we are all on the same wavelength for the rest of the day. Of course there are many things I could talk about, so I have tried to centre them around two main topics: first, the conceptual issues, and secondly, the whole censorship argument that always comes back.
So first of all, the conceptual issues. As you all know, the right to be forgotten is a very evocative concept, void of any clear legal meaning, but we are all kind of guilty of using it in this debate. I'm sure most of you know the movie this picture is from. [Indicates the presentation.] Men in Black: they have this device called the Neuralyzer, which makes witnesses to an alien incident forget what they have just seen. In debates this right to be forgotten is often considered to be the legal equivalent of such a Neuralyzer, which frankly is a bit absurd; we cannot make people forget what they have just seen, let alone with a legal instrument. So if you look more closely at this term, I think it makes more sense to look at it as an umbrella term for a number of already existing rights: the droit à l'oubli, or right to oblivion if you will, the right to object, the right to erasure, and now the right to be delisted. I'll quickly run through them.
Droit à l'oubli: as you might have guessed, it is of French origin. It's case law-based; there's no clear legal ground. Depending on the facts of the case, judges have used the general right to privacy, IP rights, even tort law, so it all depends. The underlying rationale is actually to prevent the republication of information that would have a disproportionate impact on an individual. The classic example is the ex-convict who, ten years after being released from prison, sees information popping up again. So it's this whole idea of starting anew, with a clean slate. By definition there will always be a conflict with information freedoms, but if you look at the case law, courts have always found a balance, and there is only a limited number of cases in which this right has been accorded. Traditional media outlets have also developed codes of conduct to deal with these kinds of requests. And of course, with the digitization of news archives, and the Internet in general, the potential number of cases has increased dramatically in the last decade or two.
The right to erasure and the right to object, contrary to this droit à l'oubli or right to oblivion, have a specific legal ground in the Data Protection Directive. Rather than focusing on avoiding publication of information, they are intended to empower data subjects in their relationship with data controllers, to exercise some control over what happens with your data. To that extent you could look at them as tools in the data protection toolbox that can be used for a variety of purposes.
And then, finally, the right to be delisted originated in the Google Spain case, though it was never explicitly used as such by the Court of Justice itself. It's only in the aftermath that people started using it, but now it's commonly accepted as the term to use in this context, even by the Article 29 Working Party. Here the rationale, at least on the Court's side, is that search engines create such detailed profiles of information on whatever you're looking for, combining information from all over the Internet and compiling it into a very detailed profile. Of course this is the main reason why we use search engines in the first place, but it also explains why they have such a potentially big impact on whatever you're looking for, especially if that's a person.
And in a way, this is a great example of where all of the previous rights overlap, right? To a certain extent, the goal is similar to the droit à l'oubli: avoid further publication. But it is specifically based on data protection rights, targeting a very particular processing operation, and (I cannot stress this enough, and you still see this misunderstanding in debates) it has a very narrow scope of application: it's really about the link between a name search, between a search term and a search result. I'm certain we will talk later in the day about to what extent this might translate to other kinds of search engines, such as internal, website-specific search engines, or to other information intermediaries like social networks, for example.
Okay, the censorship issue. Unsurprisingly, the ruling was welcomed with, you know, a massive panic attack about how it would be the end of freedom of expression online. And indeed, many find it very surprising that the Court did not once mention the fundamental right to freedom of expression. So what I will try to do in the last five to ten minutes of my presentation is say: don't panic. I'll do this by going into four different kinds of arguments that we often see returning.
First of all, the public versus private nature of personal data. This is often presented as a binary: even if personal information is published in the tiniest corner of the Internet, it's part of the public domain and there would be no limits to its further dissemination; you can freely link to it, et cetera. In this line of reasoning, soon everything will become public, right? Because so much is being digitized today, and with our smartphones all of our interactions happen online. Bruce Schneier has called it 'the loss of the ephemeral': everything is being stored today. This whole line of argumentation, in my opinion, ignores that the public versus private nature of data is not a binary; it's actually a continuum, with many different in-between states. You shouldn't look at it as one or the other: depending on the nature of the information and the nature of the publisher, data will sit at a different place on this continuum. This is also the idea of practical obscurity that Woodrow Hartzog has written about.
Secondly, the position of Internet search engines. As you all know, we're becoming increasingly dependent on search engines, or any information intermediary for that matter, to access all kinds of information online. Google is often the first page we go to when browsing the web, when looking for something. In that regard you could consider search engines as a funnel or strainer through which we access most information online. And this is, as I said before, their most valuable characteristic: to find the information you need, they compile a profile as detailed as possible about your search term, with all the information that is out there. That's also the reason why they have such a potentially big impact on the person you're looking for, if you're using a person's name as a search term. It's important to keep in mind that in this funnel in the middle, the underlying decision-making process for compiling this profile on the basis of whatever search term is entirely or largely secret. It's by no means neutral. So we have to be aware that these are corporate black boxes almost unilaterally deciding what we get to see. Sure, they do a very good job of it, but we have to be aware that it's based on algorithms designed with a specific purpose in mind.
Alright, next, and this is related to the previous points. One of the arguments you see returning in the discussions is that Google, or search engines, equal the Internet and, by extension, all information out there: if something were removed from Google, even just on the basis of a name search, you would alter history. At one of these Google hearings, I'm not sure where, I think someone from Index on Censorship even said that the ruling would endanger investigative journalism. I'd be very wary of the investigative journalist who would only use Google as his primary resource. Search engines might be looked at as the strainer through which we access information, but we should be very wary of considering them our window to the Internet.
Paul Bernal, who I think is here, argued in a blog post that maybe we should look at search engines as we already look at Wikipedia: it's a good start if you're looking for information about a certain topic, but by no means the definitive, authoritative source. On that note, it's also interesting to see that Wikipedia has very strong, heavily enforced guidelines in place on deleting or maintaining personal data on its pages, which makes it rather strange that Jimmy Wales was so heavily opposed in these Google hearings.
Alright, finally, the rights of publishers, an often-returning point not really touched upon by the Court of Justice: what about the publishers? Don't they have a right? First of all, I think this is a largely overplayed point. Looking at the limited numbers available in Google's transparency report, we see that the top ten websites that are targeted are by no means legitimate news sources; it's social networks, it's people-search engines, so actually all third parties themselves. Do you really want to give these actors a voice? Secondly, and more importantly, this argument seems to presume that publishers have a right to be indexed in the first place. Of course no one disputes that search engines play a very important role in exercising one's freedom of expression, and there's a lot of European Court of Human Rights case law protecting the means to effectively exercise one's right to freedom of expression, but does this imply that publishers have a right to be included in a search engine based on specific search terms? Should Google allow publishers to put their information in the top rankings? They actually do this already; it's called Google Ads. As for the organic results, as I said before, it's a black box. Anyone even trying to game these algorithms risks the so-called Google death penalty: being entirely banned from the search engine. So with this whole argument, and the other arguments as well: aren't we giving Google too much credit? In a democratic, open society, don't we want diversity in our sources of information? In January this year, at CPDP in Brussels, Marc Rotenberg of EPIC said that the news media suffer from a Stockholm Syndrome vis-à-vis Google: they're taken hostage by them but cannot live without them. So I think I'll stop there, and I welcome any questions later on.