
Securitising Putin IV: the rationale behind Russia’s new “digital laws”


Russia’s internet watchers can hardly get bored. Over the last five years, while the domestic landscape was subjected to political “consolidation” and the international environment was largely shaped by digital and cyber-related events, Russian decision-makers implemented a myriad of “defensive measures” to tighten state control over the internet. From a nascent though fragile “digital public sphere” under Dmitry Medvedev’s presidency to a “dictatorship-of-the-law” approach to any digital issue since 2012,1 it is easy to see the internet as one of the main “losers” in Russian political life.

The recent series of laws passed by the State Duma2 is thus part of a cycle that may well bring Russia’s internet not merely under close scrutiny by state officials or into a legislative grey area, but under firm government control. The parallel with the Chinese internet – long less relevant than it might have appeared at first sight, given the great disparities between the two peoples’ mentalities and online behaviours – now even matches in terms of schedule: Moscow and Beijing nearly simultaneously adopted repressive legislation against instant-messaging services such as Telegram and WhatsApp, as well as against anonymising tools such as VPNs and the Tor network. All these services are increasingly popular among Russian internet users.

Russian legislative frenzy comes just in time for elections

Today the internet is no longer considered a “social decompression chamber” that would keep the Russian population out of politics:3 the 2011-2012 mass demonstrations in the streets of Moscow and Saint Petersburg are still fresh in the memories of Vladimir Putin and his entourage. The legislative frenzy of Russian decision-makers must therefore be seen through the prism of the upcoming electoral sequence: gubernatorial elections in September and, more importantly, the presidential election in March 2018. Ahead of that election, the Kremlin unsurprisingly seeks to prevent any mass online-coordinated street protest that might stymie Putin’s upcoming fourth presidency and undermine its legitimacy.

Thus, this umpteenth tightening of the screws in the digital sphere is set to artificially inflate the costs of any contestation – for civil society and for the local industry that provides it with tools and services for political emancipation. “Information warfare” starts at home: the set of repressive laws known as the “Yarovaya legislation”, signed by Putin in July 2016, had a huge psychological impact on both the industry and internet users.4 Until recently, permanent judicial harassment was “limited” to figures who rose to prominence in the public sphere through sophisticated uses of the internet; prosecution now also targets ordinary Russian citizens “guilty” of what they published online, even mere pictures or reposts.

Documentary distributed online causes panic

The nervousness of the Russian authorities has been amplified by the great resonance on the internet of the documentary film about alleged corruption by Prime Minister Dmitry Medvedev.5 The film, narrated by the unpredictable and tech-savvy oppositionist Alexey Navalny, shows the yachts, luxurious mansions and a seventeenth-century villa in Tuscany, all controlled by Medvedev through a complex network of acquaintances, charity funds and offshore companies. First posted on YouTube in March, the video has received more than 23.6 million views so far (1.5 million on its first day alone). The unexpected popularity of the film logically aroused anxiety in the Kremlin, reviving the fear that internet-based communications might help topple key political figures – in other words, the regime itself.6

For the past decade and a half, many developments in Russian domestic politics can be effectively explained and analysed through the discourse of “securitisation” and the political leadership’s use of the rhetoric of existential threat to justify actions increasingly perceived as authoritarian. The internet does not escape this rationale: by strengthening their hold on the digital sphere and stepping up repression of dissenting voices online, Russian leaders are about to shut one of the last open windows in the political realm.

Footnotes

1. Julien Nocetti, "Russia’s dictatorship-of-the-law approach to internet policy", Internet Policy Review, 4:4, 2015.

2. Vera Kholmogorova, Maria Makutina, "Glava FSB poprosil Dumu uskorit’ prinyatie zakonov o regulirovanii v Seti" [FSB head asked Duma to accelerate the adoption of laws on regulating the internet], RBK, 23 June 2017, http://www.rbc.ru/politics/23/06/2017/594ceb609a7947265009bfc8.

3. Unlike China, where online censorship is also about distracting the public to avoid discussing controversial issues. See Gary King, Jennifer Pan, Margaret Roberts, "How the Chinese government fabricates social media posts for strategic distraction, not engaged argument", American Political Science Review, forthcoming (2017).

4. Julien Nocetti, "Yarovaya laws, or the political and economic costs of anti-terror fight", Working paper, October 2016.

5. The video can be accessed at https://www.youtube.com/watch?v=qrwlk7_GF9g&yt:cc=on.

6. In April a Moscow-based Levada Center poll found that 45% of surveyed Russians support the resignation of Dmitry Medvedev, and indicated that 67% held Vladimir Putin personally responsible for high-level corruption.


The Israeli Digital Rights Movement's campaign for privacy

Acknowledgements: I would like to thank the participants of the Early Stage Researchers Colloquium (ESRC) of the Humboldt Institute for Internet and Society (HIIG) from 2014 and especially Ulrike Hoeppner and Jörg Pohle for their insightful ideas and advice.

 

Introduction

The digital era has expanded the boundaries and meanings of basic human rights such as freedom of expression, the right to privacy, and the right to information. These changes have triggered constant deliberations among national governments, global internet corporations, and inter- and non-governmental entities over the scope of these rights (Benedek, 2008; Kay, 2014). This paper focuses on one of these actors: civil society organisations that advocate for digital rights, also known as digital rights advocates. These organisations advocate for computer- and internet-related civil liberties on parallel tracks: on the one hand, they confront governments and internet corporations in the constitutional, political, and judicial arenas; on the other, they educate the public about their rights. Thus, they are among the few social actors with the potential to challenge and sometimes even change the rules decided upon by powerful social actors (Breindl, 2011; Postigo, 2008).

To achieve their goals, digital rights advocates have to persuade other stakeholders, including the public. Yet such persuasion is not easy and usually requires them to reframe issues to their advantage. This is why, for example, the American Electronic Frontier Foundation (EFF) frames copyright issues as issues of fair use in order to legitimise expanding consumer privileges in copyrighted works (Postigo, 2008). This is also why, when dealing with net neutrality, digital rights advocates worldwide have recently framed their campaigns as essential to saving the internet (Fernández Pérez, 2015; Kosoff, 2017; Panwar, 2015). Yet only a few studies have explored in depth the persuasion techniques used by digital rights advocates, especially concerning the right to privacy (Bennett, 2008). This study seeks to contribute to the literature in the field by asking: “What are the persuasion techniques employed by Israel’s Digital Rights Movement organisation (DRM) in its campaign for privacy and against the biometric database in Israel?”

To do so, I have analysed the organisation’s textual products and its involvement in legislation initiatives, judicial rulings, and public discourse in 2009-2017. First, this research sheds light on the role civil society organisations can play in constructing the boundaries of digital rights. Second, it contributes to the literature dealing with the right to privacy in a specific sociocultural context. Finally, it deepens our understanding of the global issue of privacy governance.

In what follows, I will elaborate on the role civil society organisations play in protecting digital rights, especially the right to privacy. I will then address the Israeli case, and present the research questions and methods. My findings describe the main activities of the DRM against the biometric database, as well as the persuasion techniques it employed. I will conclude by discussing how the study of civil society privacy campaigns can assist in conceptualising and understanding issues of privacy governance.

Civil society organisations and privacy: learning to sail against the wind

Governing privacy – and even the very definition of privacy – has become controversial, as new technological and socio-political forms emerge around the globe. Most studies explore privacy governance by analysing national or international laws and regulations (Newman, 2008; Regan, 1995). Others focus on the possible influence of technological developments on privacy governance (DeNardis, 2010; Lessig, 2006). Still others examine the social interaction between the different stakeholders involved in issues of privacy (Bennett, 2008; Solove & Hartzog, 2014). While the latter line of analysis is still uncommon within the study of privacy governance, it coincides with contemporary trends in internet governance research, which explore the role of various social actors in internet governance processes and decisions (DeNardis & Raymond, 2013; Mueller, 2010). This paper follows this line by analysing the activities of civil society organisations in constructing privacy governance.

Civil society organisations advocating for the right to privacy differ from one another on several issues, such as their framing of privacy, the nature of their activities, and even their objectives. While some consider advocating for privacy a way of preserving a basic human right, others frame it as a way to fight surveillance. Some organisations focus on the individual level, while others focus on the societal level. Some fight against a wide span of technologies, while others focus on specific intrusive technologies and practices (Bennett, 2008). Despite these differences, they are all united in their belief that even in the twenty-first century, privacy is not dead and is worth preserving.

However, advocating for privacy is a challenge. In their privacy-related campaigns, organisations often find themselves isolated for two main reasons. First, during campaigns concerning other digital rights such as internet access, net neutrality, or the right to fair use, the interests of digital rights advocates have often coincided with those of powerful stakeholders, such as internet corporations or governments. This was evident in the campaign against the Stop Online Piracy Act and Protect IP Act (SOPA/PIPA) in the US (Benkler, Roberts, Faris, Solow-Niederman & Etling, 2013), and in the protests against the Anti-Counterfeiting Trade Agreement (ACTA) in Europe (Losey, 2014). When it comes to privacy, however, governments and internet corporations have no interest in assisting civil society organisations, since both have proven to use technological innovations in ways that violate citizens’ privacy, whether for security reasons or for financial and political gain (Greenwald, 2014; Rauhofer, 2008).

The second reason relates to the ability of civil society organisations to mobilise the public to their causes. To begin with, the decline in political and civic engagement (Norris, 2002) distances people from participating in the organisations’ activities. Second, most citizens do not have sufficient knowledge or understanding of the topic (Livingstone, 2008; Osenga, 2013). This is of special importance when it comes to the right to privacy. Technological developments, along with violations committed by governments and internet corporations, have altered citizens’ personal understanding of and social expectations for privacy (Andrews, 2012; Worthington, Fitch-Hauser, Välikoski, Imhof & Kim, 2011), so much so that the right to privacy might no longer seem important or relevant to most people. Finally, since most digital rights advocates subscribe to a Western viewpoint (Tăbușcă, 2010), non-Western countries may perceive them as hostile strategic communicators (Monroe, 2015). Thus, to achieve their goals, the organisations have to adjust their activities to fit the local society – or, to put it differently, learn how to sail against the wind.

Despite these obstacles, in the past decade there have been several successful privacy campaigns by digital rights advocates worldwide, as documented by the EFF (2017a). For example, in 2005 in the UK, No2ID and its affiliates managed to derail a government plan for creating a biometric ID database (EFF, 2017b). In 2008, Derechos Digitales in Chile protected the privacy of internet users by opposing police plans for retrieving personal information about web commenters from internet corporations (EFF, 2017c). Finally, in 2012, OpenMedia.ca in Canada managed to put online surveillance legislation on hold (EFF, 2017d). This is not to say that these small victories have ended all privacy violations. However, each represents a reconstruction of the boundaries of privacy in these countries – if only for a short while. Against this background, I now turn to examining the way the DRM coped with similar obstacles in Israel.

The Israeli case: the (non)importance of privacy

When addressing privacy in Israel, one needs to take into consideration not only the legal right to privacy as enshrined in the country's legislation, but also the status of privacy as a cultural and social norm, since the two influence one another (Birnhack, 2010). A key cultural distinction in this regard is that between collectivism and individualism (Hofstede, 2001). In collectivist cultures, citizens are more likely to accept privacy intrusions in return for in-group belonging. Conversely, individualistic cultures are more concerned with online privacy, because their citizens place a higher value on private life and independence (Cho, Rivera-Sanchez, & Lim, 2009; Milberg, Burke, Smith & Kallman, 1995).

Israel was established as a collectivist society: the value of privacy is thus not rooted in its culture, since it contradicts the culture of collectivism and the local ethos of sharing (Ribak & Turow, 2003; Ribak, 2007). However, despite its collectivist nature, Israel is a democratic state, and the legal right to privacy is protected by law. First, under Article 7 of the Basic Law: Human Dignity and Liberty (1992), which is part of the country's constitutional law, everyone is entitled to privacy; the Protection of Privacy Law (1981), in turn, deals exclusively with the limits of the right to privacy in Israel. Second, there are several specific laws dealing with the right to privacy, among other issues, including the Wiretap Law (1979); the Basic Law: The Judiciary (1984); the Patients' Rights Act (1996); the Criminal Procedure Law: Enforcement Powers – Body Search of Suspect (1996); the Freedom of Information Act (1998); the Prevention of Sexual Harassment Law (1998); and the Genetic Information Law (2000). Finally, in 2006, the Israeli government established the Israeli Law, Information, and Technology Authority and tasked it with strengthening the protection of personal data and tightening enforcement in cases of privacy violations; it is concerned with issues such as database protection, electronic signatures, and credit card information (Israeli Law Information and Technology Authority, 2017). However, socio-cultural norms in Israel lag far behind this legal-normative discourse (Karniel & Lavie-Dinur, 2012). Birnhack & Elkin-Koren (2009) demonstrate the gap by showing how most Israeli websites, including public and government websites, still do not provide users with the privacy protection required by law.

This gap only widens when considered in the specific Israeli security context. Long before the digital revolution, Israel responded to security fears with laws and regulations that violate privacy in the name of national security (Ribak, 2003; Ribak & Turow, 2003). For example, according to the Identity Card Carrying and Displaying Law (1982), all adult citizens are obligated to carry their government-issued ID card and must present it to any representative of the police or military on demand, even without probable cause. In addition, upon entering a public place, Israelis are often obliged to open their bags for security inspection and pass through a metal detector as their belongings are X-rayed (Israeli, 2013). Another example is the amendment to the Criminal Procedure Law (Enforcement Powers – Communication Data) (2007), which allows security agencies to acquire citizens’ private communication data from internet and mobile service providers without any judicial oversight. Finally, in recent years, there has been a growing stream of legislative initiatives ostensibly designed to protect Israelis at the cost of violating citizens' privacy. The latest example is the Minister of Interior’s initiative to compile a database of citizens who support the boycott, divestment, and sanctions (BDS) movement (Ravid, 2017).

Although these laws, regulations, and initiatives violate privacy on a regular basis, the annual surveys of the Israel Democracy Institute indicate that most Israelis are willing to accept these violations, including online state surveillance, in exchange for security (Hermann, Heller, Cohen, Be’ery, & Lebel, 2015; Hermann, Heller, Cohen, & Bublil, 2016; Hermann, Heller, Cohen, Bublil & Omar, 2017).

The upshot is that the right to privacy in Israel is considered of limited importance: Israeli institutions are less sensitive to violations, and Israeli citizens more tolerant of them, than in other Western societies, especially in exchange for personal security (Israeli, 2013; Shamah, 2013). As Ribak (2003, p. 20) puts it, privacy in Israel is “an unaffordable luxury that is willingly, unquestioningly surrendered and sacrificed”. Thus, claims Ribak, it is no wonder that criticism of privacy violations is rare in Israel. Nevertheless, as elaborated in the next section, the recent creation of a national biometric database did encounter resistance from Israeli civil society.

The DRM: aiming to be the Israeli EFF

As in many countries, the Israeli government has initiated various well-meaning programmes that rely on surveillance and database technologies and, to some extent, violate people’s privacy. For example, the Credit Score Law (Zarhia & Izesko, 2015) provides lending institutions with access to financial information about prospective clients; City Without Violence involves the widespread deployment of surveillance cameras; and the National Traffic Management Centre involves installing surveillance cameras on highways and at crossroads (City Without Violence, 2017; Netivei Israel, 2017). One of the largest and most controversial projects of this kind is The Inclusion of Biometric Means of Identification in Identity Documents and in an Information Database Law (2009).

According to the law, each citizen is to be issued smart documents (ID card and passport) which include fingerprints and computerised tags of facial features. In addition, these biometric data are to be stored in encrypted form in a database supervised by the Biometric Database Management Authority (BDMA). As announced by the prime minister at the time, Ehud Olmert, the transition to smart ID and the creation of a biometric database served two purposes: reducing forgery and identity theft and providing better government services (Somfalvi & Ronen, 2008). The law provided for a two-year pilot in which the database was to operate on a trial basis and registration would be voluntary. During this period, the BDMA was tasked with examining the necessity of the database, designing measures of success, and looking into possible alternatives (due to possible violations of privacy). Only after this period was it to be decided whether to make it obligatory.

The initiative to establish the DRM came in 2009, in response to the creation of the biometric database. Its founders, whose expertise was mostly technological, feared the privacy implications of the database (Yaron, 2011). This led, in 2011, to the creation of the DRM as an official NGO dedicated to advocating for all digital rights. Prior to the establishment of the DRM, several civil society organisations in Israel – including the Association for Civil Rights in Israel (ACRI) and the Israel Internet Association (ISOC-IL) – had addressed digital rights among their other activities. The DRM, however, distinguishes itself by dealing exclusively with digital rights (Yaron, 2011). As its founders declare, their aim is for the organisation to become the Israeli equivalent of the EFF. This ambition is manifested, for example, in the similarity between the organisations’ founding declarations, both of which emphasise civil liberties and technology. The founders of the EFF define the aims of their organisation thus:

The Electronic Frontier Foundation is the leading nonprofit organisation defending civil liberties in the digital world… We work to ensure that rights and freedoms are enhanced and protected as our use of technology grows. (EFF, 2017e)

And this is how the DRM defines its goals:

The DRM is engaged in protecting and promoting the rights of the individual and the community in the digital age. The organisation is engaged in protecting the right to privacy, freedom of expression, the right to equality, consumer rights, and the like, and relates to the possible infringement of these rights by information technologies… The organisation has set itself the goal to be a focal point of knowledge at points of interaction between technology and the rights of the individual and the community, and to promote those rights within the framework of its activities. (DRM, 2009a)

Interestingly, the similarities between the declarations also highlight the absence of any reference to local social or political aspects in the DRM’s founding text: Israel is mentioned neither in the name of the organisation nor in its declaration. This seemingly neutral stance also marks the organisation as an apolitical entity. I will refer to this point again when analysing its activities vis-à-vis the biometric database.

At the time of writing, the DRM has begun to deal with issues like consumer rights and freedom of speech, but its main concern remains the right to privacy. So far, the organisation has documented and acted against six major privacy violations, mostly by government institutions. These include the Pet App, a database of dog owners created by the Ministry of Agriculture that exposed personal information (DRM, 2014a); the smartcard system for public transportation (DRM, 2011); and, most importantly, the biometric database. The latter was the only violation to give rise to a full-scale campaign. Given the aforementioned challenges civil society organisations face when advocating for privacy, as well as the unique situation in Israel, this study asks: what were the persuasion tactics used by the DRM in its campaign against the biometric database?

Methodology

To answer the research question, I collected texts concerning DRM activities between 2009 and 2017. These materials included the organisation’s official publications and announcements, retrieved from its website (n=22 documents); journalistic reports on the movement's work in 2009-2017 (n=76 documents); and minutes of the Joint Committee1 (n=37 documents).

The analysis was carried out in two stages: the first entailed mapping all the actions taken by the DRM in 2009-2017, and the second involved analysing its arguments throughout the campaign. The analysis was based on the persuasion tactics typology suggested by Keck and Sikkink (1999) in their work on transnational advocacy networks, combined with Aristotelian definitions of the modes of persuasion (Tausig, 2015). In what follows, I present the evolution of the campaign, followed by an analysis of the DRM’s arguments.

Campaigning against biometrics: three arenas, three stories

Three arenas - From the beginning of the legislative process, the DRM opposed the law and began advocating against the database in three different arenas: political, judicial, and public. During the first stage of the campaign, the organisation focused on the political arena. Even before its official establishment, its activists had been engaged in lobbying and discussions about the legislation in various committees of the Knesset. One of the NGO’s first official acts was to send a letter to Knesset members stressing the potential problems of the database, in hopes of persuading them to vote against the law (DRM, 2009b). As the campaign progressed, members of the organisation continued their lobbying work in the political arena, participating in 26 of the 37 Joint Committee meetings on the database (The Joint Committee, 2009-2017).

During the next stage, in 2012, the DRM operated in the judicial arena by appealing to the High Court of Justice to overrule the Knesset and abolish the biometric database (H.C. 1516/12, 2012). The court ruled that there was no reason to abolish the database during the pilot stage, but that the DRM could re-appeal afterwards (Zarhin, 2012). Following the ruling, the pilot began in July 2013, and the state launched a massive media campaign encouraging people to join the database, claiming it would protect them against identity theft (Keinan & Zilber, 2013). Since the ads failed to mention that those who registered would be joining the biometric database, the DRM appealed once again to the High Court of Justice to force the state to make full disclosure. The ruling in favour of the organisation received mainstream media coverage, which the DRM used to publicise the controversy surrounding the database (Zarhin, 2012).

Furthermore, in response to the state's campaign, the organisation turned to the third arena, the mediated public sphere, and launched its first social media campaign, aimed at convincing people not to register for the database. Although its activists had continuously lobbied against the law in the mediated public arena during previous years, this was the first time they had mounted an official campaign. To finance it, the DRM ran a successful small-scale crowdfunding campaign to raise money for viral videos (DRM, 2013a). In January 2014, using the money it had raised, the organisation produced two such videos – "Why anti?" and "Why shouldn't you join the biometric database?" Their launch received mainstream media attention on a national scale, which helped the DRM gain some public attention (Golan, 2014).

From that point on, the organisation continued to operate in all three arenas, recognising that in order to succeed it could not withdraw from any of them. For example, in February 2015, prior to a discussion in the Knesset, the BDMA published a partial report mapping the use of biometric databases around the world. In response, the DRM publicly crowdsourced a large-scale internet search for complete and accurate information on the matter, and publicised the results in various online media outlets (Lilien, 2015). In November 2016, the Ministry of Interior announced that, following the completion of the pilot stage, the database would become permanent and all Israeli citizens would be obligated to register. In response, the DRM initiated a combined campaign that included lobbying politicians (and encouraging citizens to ask members of Knesset to vote against the database); launching another crowdfunding campaign to raise money for another appeal; giving interviews in various media outlets; recruiting volunteers; and organising public meetings and demonstrations (Kabir, 2016). The use of all three arenas demonstrates the gradually growing efforts of the DRM to mobilise all relevant stakeholders.

Three stories - In all these arenas, the DRM attempted to persuade various stakeholders to act against the database. In their work on persuasion tactics in transnational advocacy, Keck and Sikkink (1999) defined two tactics relevant to an analysis of the DRM’s arguments: information politics and symbolic politics. The tactic of information politics relies on activists’ ability to generate politically relevant information and to move it, by the most effective means, to the place where it will have the most impact at the most critical time (Keck & Sikkink, 1999). Bennett (2008) elaborated on this tactic, reasoning that the politics of information in the context of privacy advocacy relies on the ability of privacy activists to produce reliable and accurate information about the possible harm caused by a certain intrusive technology or a new policy – for example, by stressing its potentially hazardous consequences based on previous experience with similar surveillance systems at other times and places, or by pointing to its long-term ineffectiveness. In contrast, symbolic politics operates by evoking symbols, actions, values, beliefs, and stories so as to invest a situation with a meaning that resonates with a particular audience within a particular culture (Keck & Sikkink, 1999). By applying the Aristotelian modes of persuasion (Tausig, 2015) to the various stories of symbolic politics, I suggest that one can identify three avenues of persuasion these stories trigger: logos (logic), ethos (the guiding beliefs of a person, group, or institution), and pathos (emotion).

In their work, Keck and Sikkink (1999) referred to each tactic separately; yet, when analysing the arguments raised by the DRM, it appears that each factual argument was backed up by a symbolic persuasion technique, whether explicitly or implicitly. The combination of the two tactics created what I define as cultural informational framing (Daskal, 2017). This means that the organisation's arguments, as demonstrated below, were accurate and credible, but at the same time resonated with people's experiences, emotions, and knowledge, as well as with their socio-cultural expectations and norms.

1. Why the database should be abolished: because it's not necessary - As the organisation highlighted repeatedly throughout the campaign, with the backing of cyber experts, there is a significant difference between issuing smart documents and creating a database. Issuing smart documents effectively solves the problem of the theft and forgery of official documents, but does it necessarily entail the creation of a database? The activists’ answer is no: they declared that while they support the transition to smart documents (passports and ID cards) for Israeli citizens, they object to the creation of a database because it violates citizens' privacy.

The symbolic layer of this argument holds that the right to privacy is essential in a democracy; the creation of the database would thus erode Israeli democracy. In terms of the Aristotelian typology, by raising this argument the organisation appealed to a key ethos in Israel: its pride in being a democratic state. This is how the argument was phrased in the organisation's letter to the Knesset members: “Collecting biometric features means that the state treats citizens as suspects… This is a disproportional assault on privacy, which is a fundamental right according to the Basic Laws of Israel” (DRM, 2009b, para. 3). The letter also stresses the importance of privacy in a democratic society by invoking the Western perspective, arguing: “There are no such databases in any Western country… such a database would put Israel on the same plane as states such as Yemen, Pakistan, and Indonesia, which are not examples of enlightened regimes” (DRM, 2009b, para. 3). The same argument was brought to bear in the organisation’s 2012 appeal to the High Court of Justice (H.C. 1516/12, 2012, p. 2): “a biometric database… constitutes an unprecedented mechanism of control and surveillance. It inflicts severe and unnecessary harm to human dignity, its freedom and right to privacy. It undermines the basis of democracy”.

2. Why the database should be abolished: because it's ineffective - Unlike the first argument, this one justified the database’s abolition on the grounds of ineffectiveness. From an informational point of view, in its very first appeal to the court in 2012, the organisation pointed out that the state had failed to carry out the actions required by law concerning the creation of the database: appointing an external monitor, establishing criteria for success, defining measures for testing reliability and validity, and evaluating alternatives to the biometric database (H.C. 1516/12, 2012). Later in the campaign, on at least four separate occasions, the organisation pointed out various shortcomings in the construction of the database that might compromise its professional, safe, and secure functioning. For example, in June 2013, the DRM sent a letter to the Attorney General claiming that the tender terms for securing the database contravened the law by allowing private companies to perform hacking tests on it (DRM, 2013b). In March 2014, it sent a letter to the Minister of Interior and the Minister of Justice asking them to delay the operation of the biometric database, since the security confirmation process was not yet complete (DRM, 2014b). Finally, in June 2015, the DRM published a special report summarising all the problems and malfunctions of the database as analysed by cyber experts. Among the report’s arguments were that

In 2014, 71 cases of phishing and forgery were discovered ... Not one was prevented by a biometric database. The planning of the system is incorrect in several respects... The Biometric Authority did not examine alternatives that have worldwide credibility, and as for the alternatives that were examined, their results made no sense… Thus we call on the Israeli government and Members of Knesset to abolish the biometric database (DRM, 2015).

This last sentence nicely captures the symbolic frame that accompanies this argument – the perspective of logic. By repeatedly pointing out the disparity between the law on paper and its application in practice during the pilot stage, as well as the problems with the database, the activists appealed to the politicians' logic, trying to persuade them not to approve a database that did not make sense.

3. Why the database should be abolished: because it will be breached - The final argument was that the database should be abolished because the government would not be able to guarantee protection against security breaches, and hence against possible identity theft. This argument first appeared in the first letter addressed to Knesset members, in which the DRM made the following statement: “Past experience and reports from the General Ombudsman have proved that State authorities cannot be trusted to maintain the security of the database” (DRM, 2009b, para. 4). In this sentence, the organisation set into motion both the informational frame ("past experience and reports from the General Ombudsman") and the symbolic frame ("cannot be trusted").

In the judicial arena, within the framework of the appeal, the organisation explained the meaning of past experience and reinforced the informational frame. It wrote: “Past performance of the State in this field is not a source of pride: Not many countries in the world allow the downloading of sensitive census databases from file-sharing sites, as is possible with the Israeli census” (H.C. 1516/12, 2012, p. 15). In addition, the lawsuit referred to the leak of the adoption database and to General Ombudsman reports critical of the state’s failure to protect its citizens’ privacy.

This symbolic frame concerning lack of trust was especially emphasised in the commercials the DRM produced as part of its publicity campaign. The "Why anti?" commercial presented a futuristic horror scenario in which the biometric database leaked and the information fell into the hands of criminals; it showed a criminal using this information to track down a potential victim – a young woman in a pub. In the "Why shouldn't you join the biometric database?" commercial, a presenter delivered the message by again stressing the argument that the government could not be trusted with the private information of its citizens. It emphasised how each citizen could become a victim (of extortion or assault) if the database were to be breached, and it assured the audience that, based on past experience (specifically mentioning the state’s inability to keep the information about Israel’s nuclear reactor safe), it was likely to be breached. Thus, concluded the presenter, if you wish to maintain your privacy and your security, do not register.

Through this framing of privacy, the DRM tried to subvert the Israeli equation according to which security means a lack of privacy. According to the campaign, by contrast, only by holding on to your privacy can you keep yourself secure. Interestingly, despite the differences between the public campaigns of the government and the DRM, both used the Aristotelian persuasion technique of pathos, arousing fear among the public: the former regarding identity theft, and the latter regarding the risk of criminals obtaining the information.

Overall, all of the arguments appeared in all of the arenas. However, one can distinguish between the first two arguments – which were specifically directed at the judicial and political arenas and were heard and seen in the mediated public sphere only because of media coverage – and the third argument, which was specifically directed at the mediated public sphere. This means that while in the political and judicial arenas the DRM acknowledged the importance of privacy as a democratic value, the problem of state surveillance, and the technical as well as administrative problems associated with the database, in the mediated public arena the organisation spotlighted privacy in the context of personal security, lack of trust, and governmental incompetence.

Since the second argument involves complex technical and administrative jargon, it is understandable why the organisation refrained from using it in the mediated public arena. After all, it was addressed mostly to the members of the Knesset who voted on the law, not to the public. However, the decision to avoid the first argument and highlight the third in the mediated public sphere coincides with the local perspective, which values security over privacy as a democratic value and does not trust the government (Hermann, Heller, Cohen, Be’ery, & Lebel, 2015; Hermann, Heller, Cohen, & Bublil, 2016; Hermann, Heller, Cohen, Bublil & Omar, 2017). Furthermore, in Israel, organisations that advocate for issues such as human rights, civil liberties, and democracy are usually considered to be on the left of the political map (for example, ACRI). Thus, framing the biometric database as a violation of civil rights, especially in the mediated public sphere, might alienate people from the centre and the right of the political map in Israeli society. Framing the biometric database in an apolitical frame, as in the third argument, blurs traditional political divisions and coincides with the politically neutral position the DRM tries to maintain in order to increase its public support.

Concluding remarks: exploring the national models of privacy governance

As mentioned above, on 30 November 2016, the Ministry of Interior declared that, despite the criticism, the database would become obligatory for all Israeli citizens. On the same day, the DRM initiated another crowdfunding campaign (DRM, 2016). Within 24 hours, the target of about €15,000 was achieved. Furthermore, donations continued to arrive throughout the month: all told, some 1,000 people donated about €26,500. In comparison, the first crowdfunding campaign against the biometric database (DRM, 2013a) drew only 200 people, who donated about €5,000. The results of this campaign indicate not only that the DRM has begun to situate itself as a significant social actor in Israeli society, but also that, in the Israeli context, the issue of privacy has grown in importance in the last few years, possibly due to the work of the DRM. As of now, the DRM has appealed to the High Court to abolish the database by voicing all three arguments. Only time will tell if the movement will succeed in its campaign.

While this study focuses on one case, important insights can be garnered from it, concerning not only the role of civil society organisations in constructing privacy governance, but also its research. Digital rights are interpreted differently in every culture and society, but we must still differentiate between the natures of these rights. For example, the meaning and boundaries of rights such as access to the internet and the preservation of net neutrality are comparatively clear: while some stakeholders might object to defining them as rights to begin with, their meaning remains the same in different countries. In contrast, liberties such as the right to privacy and freedom of speech are more controversial, and their meaning and boundaries are inconsistent across cultures. Thus, when advocating for these rights in a given society, civil society organisations have to be flexible in the arguments they present and promote in order to achieve the political, public, and judicial support they need. The case of the DRM provides an example of such flexibility, manifested in the three different cultural informational framings the organisation presented concerning the biometric database: the unnecessity of a biometric database in a democracy; the database’s ineffectiveness; and governmental incompetence in securing it. The organisation's ability to navigate between these arguments allowed it to maintain its image as a non-political organisation that transcends political disagreements, possibly enabling it to recruit more support for its cause.

While Israel’s security situation is unique, it is not the only country whose government violates citizens' privacy in the name of security. In Europe, the refugee crisis and ISIS terrorist attacks have led to a series of national legislative initiatives that infringe on citizens' civil liberties, not so different from the Israeli situation. For example, Germany, France, and the UK have passed laws granting their surveillance agencies autonomous power to conduct bulk interception of communications across Europe and beyond, almost without oversight. By doing so, they joined countries such as Poland, Austria, Italy, and Sweden, whose parliaments had already adopted extensive domestic and foreign surveillance legislation (Lubin, 2017). Yet at the same time, in these countries there are various civil society organisations that advocate for digital rights and against these legislative initiatives, such as the Open Rights Group in the UK; La Quadrature du Net in France; Digitale Gesellschaft in Germany; the Panoptykon Foundation in Poland; DFRI in Sweden; Initiative für Netzfreiheit (IfNf) in Austria; and many more. These organisations collaborate ad hoc on digital human rights issues in the regional context (Losey, 2014), and in 2002 civil rights organisations in Europe established the European Digital Rights (EDRi) advocacy group, which today brings together over 30 member organisations. Based in Brussels, it functions as an umbrella organisation, allowing for more systematic collaboration between the national organisations. Follow-up studies in this direction might explore how organisations in these countries use different persuasion techniques in their campaigns for privacy, and what power these techniques might have in an age in which “privacy is dead”.

Finally, I wish to address the research of civil society campaigns within the broader perspective of privacy governance studies. Researching the point of view offered by a civil society organisation sheds light, from an emic perspective, on the existing boundaries of privacy governance in a given society, as well as on the aspects of these boundaries perceived as problematic. Furthermore, such a line of inquiry can also reveal the kind of privacy governance civil society wishes to create, and its desirable boundaries. In this case, since the DRM focuses only on violations committed by political and public institutions, the model of privacy governance it is pursuing is based on protecting private information from political and public entities, but not necessarily from internet corporations. This model probably differs from the models of privacy governance suggested by civil society organisations in other countries, especially in Europe, where internet corporations are more restricted in their work (Fioretti, 2017; Gibbs, 2017). Thus, following studies that define how national internet governance models are created, such as the US or the Chinese model (Powers & Jablonski, 2015; MacKinnon, 2010), I would like to suggest a different avenue for future research: analysing and classifying models of national privacy governance based on the study of privacy advocates. Such analysis could help us understand more about the models of privacy governance that exist in a given society, how they are constructed and developed, how they can be modified, and why, sometimes, they will never change.

References

Andrews, L. (2012). I know who you are and I saw what you did: Social networks and the death of privacy. New York, NY: Free Press.

Benedek, W. (2008). Internet governance and human rights. In W. Benedek, V. Bauer & M. Kettemann (Eds.), Internet governance and the information society. The Netherlands: Eleven International Publishing.

Benkler, Y., Roberts, H., Faris, R., Solow-Niederman, A. & Etling, B. (2013). Social mobilization and the networked public sphere: Mapping the SOPA-PIPA debate. Cambridge, MA: Berkman Center Research Publication. Retrieved from http://cyber.law.harvard.edu/sites/cyber.law.harvard.edu/files/MediaCloud_Social_Mobilization_and_the_Networked_Public_Sphere_0.pdf.

Bennett, C. (2008). The privacy advocates: Resisting the spread of surveillance. Cambridge, MA: The MIT Press.

Birnhack, M. (2010). Private space: The right to privacy, law and technology. Israel: Bar Ilan University Press & Nevo Press.

Birnhack, M. & Elkin-Koren, N. (2009). Does law matter online? Empirical evidence on privacy law compliance, Social Science Research Network, August 5-46.

Breindl, Y. (2011). Promoting openness by “patching” European directives: Internet based activism & EU telecommunication reform. Journal of Information, Technology and Politics, 8(3), 346-366.

Cho, H., Rivera-Sanchez, M., & Lim, S. S. (2009). A multidimensional study on online privacy: Global concerns and local responses. New Media & Society, 11(3), 395-416.

City Without Violence (2017). Municipal Command and Control Center. Retrieved from http://www.cwv.gov.il/Enforcement/Pages/MunicipalControlCenter.aspx

Daskal, E. (2017). Let’s be careful out there…: How digital rights advocates educate citizens in the digital age. Information, Communication & Society. doi:10.1080/1369118X.2016.1271903

DeNardis, L. (2010). The emerging field of internet governance. Yale Information Society Project Working Paper Series. doi:10.2139/ssrn.1678343

DeNardis, L. & Raymond, M. (2013). Thinking clearly about Multistakeholder internet governance. Paper Presented at 8th Annual GigaNet Symposium Bali, Indonesia. Retrieved from http://www.phibetaiota.net/wp-content/uploads/2013/11/Multistakeholder-Internet-Governance.pdf

Digital Rights Movement (2009a). Who are we? Retrieved from https://www.digitalrights.org.il/who/

Digital Rights Movement (2009b). The Digital Rights Movement is calling on the members of the Knesset to vote in favour of the reservations concerning the creation of a central biometric database. Retrieved from https://www.digitalrights.org.il/2009/11/%D7%94%D7%95%D7%93%D7%A2%D7%94-%D7%9C%D7%A2%D7%99%D7%AA%D7%95%D7%A0%D7%95%D7%AA-151109/

Digital Rights Movement (2011). Position paper about the "Rav Kav" cards. Retrieved from https://www.digitalrights.org.il/2011/06/%D7%A0%D7%99%D7%99%D7%A8-%D7%A2%D7%9E%D7%93%D7%94-%D7%91%D7%A0%D7%95%D7%92%D7%A2-%D7%9C%D7%9B%D7%A8%D7%98%D7%99%D7%A1%D7%99-%D7%94%D7%A8%D7%91-%D7%A7%D7%95/

Digital Rights Movement (2013a). Crowdfunding for a campaign against the biometric database. Retrieved from https://www.digitalrights.org.il/2013/08/%D7%9E%D7%99%D7%9E%D7%95%D7%9F-%D7%94%D7%9E%D7%95%D7%A0%D7%99%D7%9D-%D7%9C%D7%A7%D7%9E%D7%A4%D7%99%D7%99%D7%9F-%D7%A0%D7%92%D7%93-%D7%94%D7%9E%D7%90%D7%92%D7%A8-%D7%94%D7%91%D7%99%D7%95%D7%9E%D7%98/

Digital Rights Movement (2013b). The Digital Rights Movement: the biometric database authority privatises the database information. Retrieved from https://www.digitalrights.org.il/2013/06/%D7%A4%D7%A0%D7%99%D7%99%D7%94-%D7%9C%D7%99%D7%95%D7%A2%D7%A5-%D7%94%D7%9E%D7%A9%D7%A4%D7%98%D7%99-%D7%9C%D7%9E%D7%9E%D7%A9%D7%9C%D7%94-%D7%91%D7%A0%D7%95%D7%92%D7%A2-%D7%9C%D7%9E%D7%90%D7%92%D7%A8/

Digital Rights Movement (2014a). Personal details leaked through the "dogs database" application of the Ministry of Agriculture. Retrieved from https://www.digitalrights.org.il/2014/11/%D7%A4%D7%A8%D7%98%D7%99%D7%9D-%D7%90%D7%99%D7%A9%D7%99%D7%99%D7%9D-%D7%93%D7%9C%D7%A4%D7%95-%D7%93%D7%A8%D7%9A-%D7%90%D7%A4%D7%9C%D7%99%D7%A7%D7%A6%D7%99%D7%99%D7%AA-%D7%9E%D7%90%D7%92%D7%A8/

Digital Rights Movement (2014b). The Digital Rights Movement approached the Ministry of Interior over the lack of sufficient security standards. Retrieved from https://www.digitalrights.org.il/2014/03/%D7%94%D7%9E%D7%90%D7%92%D7%A8-%D7%94%D7%91%D7%99%D7%95%D7%9E%D7%98%D7%A8%D7%99-%D7%90%D7%99%D7%A0%D7%95-%D7%A2%D7%95%D7%9E%D7%93-%D7%91%D7%AA%D7%A7%D7%A0%D7%99-%D7%94%D7%90%D7%91%D7%98%D7%97%D7%94/

Digital Rights Movement (2015). Experts' report of the Digital Rights Movement: Fear of a deliberate omission of information and an attempt to mislead the members of the Knesset and the public by presenting false information concerning the biometric database. Retrieved from https://www.digitalrights.org.il/2015/06/%D7%94%D7%AA%D7%A0%D7%95%D7%A2%D7%94-%D7%9C%D7%96%D7%9B%D7%95%D7%99%D7%95%D7%AA-%D7%93%D7%99%D7%92%D7%99%D7%98%D7%9C%D7%99%D7%95%D7%AA-%D7%9E%D7%A4%D7%A8%D7%A1%D7%9E%D7%AA-%D7%93%D7%95%D7%B4%D7%97/

Digital Rights Movement (2016). Crowdfunding for a high court appeal to abolish the biometric database. Retrieved from https://www.digitalrights.org.il/2016/11/%D7%92%D7%99%D7%95%D7%A1-%D7%94%D7%9E%D7%95%D7%A0%D7%99%D7%9D-%D7%9C%D7%9E%D7%A2%D7%9F-%D7%91%D7%92%D7%B4%D7%A5-%D7%9C%D7%94%D7%A4%D7%9C%D7%AA-%D7%94%D7%9E%D7%90%D7%92%D7%A8-%D7%94%D7%91%D7%99%D7%95/

Electronic Frontier Foundation (2017a). Counter-Surveillance Success Stories. Retrieved from https://www.eff.org/csss

Electronic Frontier Foundation (2017b). Success Story: Dismantling UK’s Biometric ID Database. Retrieved from https://www.eff.org/pages/success-story-dismantling-uk%E2%80%99s-biometric-id-database

Electronic Frontier Foundation (2017c). Success Story: Protecting Privacy of Web Commenters (Chile). Retrieved from https://www.eff.org/pages/success-story-protecting-privacy-web-commenters-chile

Electronic Frontier Foundation (2017d). Success Story: Turning the Tide Against Online Spying. Retrieved from https://www.eff.org/pages/success-story-turning-tide-against-online-spying

Electronic Frontier Foundation (2017e). About EFF. Retrieved from https://www.eff.org/about

Fernández Pérez, M. (2015). The final countdown for net neutrality in the EU. EDRi. Retrieved from https://edri.org/the-final-countdown-for-net-neutrality-in-the-eu/

Fioretti, J. (2017, July 24). EU increases pressure on Facebook, Google and Twitter over user terms. Reuters. Retrieved from http://www.businessinsider.com/r-eu-increases-pressure-on-facebook-google-and-twitter-over-user-terms-2017-7

Gibbs, S. (2017, January 10). WhatsApp, Facebook and Google face tough new privacy rules under EC proposal. The Guardian. Retrieved from https://www.theguardian.com/technology/2017/jan/10/whatsapp-facebook-google-privacy-rules-ec-european-directive

Golan, A. (2014, February 2). A virtual campaign was launched against the biometric database. Nrg. Retrieved from http://www.nrg.co.il/online/13/ART2/548/154.html?hp=13&cat=131&loc=51 [Heb].

Greenwald, G. (2014). No place to hide: Edward Snowden, the NSA, and the U.S. surveillance state. New York, NY: Metropolitan Books.

H.C. 1516/12 (2012). Nahon v. the Knesset. Retrieved from http://www.acri.org.il/he/wp-content/uploads/2012/02/hit1516.pdf

Hermann, T., Heller, E., Cohen, C., Be’ery, G., & Lebel, Y. (2015). The Israeli Democracy Index 2014. Israel: The Israel Democracy Institute. Retrieved from https://www.idi.org.il/media/3667/democracy_index_2014.pdf [Heb].

Hermann, T., Heller, E., Cohen, C., & Bublil, D. (2016). The Israeli Democracy Index 2015. Israel: The Israel Democracy Institute. Retrieved from https://www.idi.org.il/media/3573/democracy_index_2015.pdf [Heb].

Hermann, T., Heller, E., Cohen, C., Bublil, D. & Omar, F. (2017). The Israeli Democracy Index 2016. Israel: The Israel Democracy Institute. Retrieved from https://www.idi.org.il/media/7799/democracy-index-2016.pdf [Heb].

Hofstede, G. (2001). Culture’s consequences: Comparing values, behaviors, institutions and organizations across nations. Thousand Oaks, CA: Sage Publications.

Israeli, T. (2013). Who is afraid of “Google”: Attitudes towards privacy on-line. Mida’at, 9, 28-45 [Heb].

Israeli Law Information and Technology Authority (2017). About The Israeli Law, Information and Technology Authority. Ministry of Justice. Retrieved from http://www.justice.gov.il/Units/ilita/Odot/Pages/Odot.aspx

Kabir, O. (2016, November 30). The digital rights movement will appeal to the high court against the biometric database decision. Calcalist. Retrieved from https://www.calcalist.co.il/internet/articles/0,7340,L-3702896,00.html [Heb].

Karniel, Y. & Lavie‐Dinur, A. (2012). Privacy in new media in Israel: How social networks are helping to shape the perception of privacy in Israeli society. Journal of Information, Communication and Ethics in Society, 10(4), 288-304. doi:10.1108/14779961211285908

Kay, M. (2014). Human rights for the digital age. Journal of Mass Media Ethics, 29(1), 2-18.

Keck, M. E. & Sikkink, K. (1999). Transnational advocacy networks in international and regional politics. International Social Science Journal, 51, 89-101.

Keinan, I., & Zilber, J. (2013, October 9). The biometric database: Paid talkbackers and product placement on the way to smart ID. Haaretz. Retrieved from https://www.haaretz.co.il/captain/room404/.premium-1.2136462 [Heb].

Kosoff, M. (2017, May 18). The battle to save the internet from Trump begins. Vanity Fair. Retrieved from https://www.vanityfair.com/news/2017/05/inside-the-battle-to-save-the-internet-from-donald-trump

Kulesza, J. (2008). Freedom of information in the global information society – the question of The Internet Bill of Rights. UWM Law Review, 1, 81-95. doi:10.2139/ssrn.1446771

Lessig, L. (2006). Code is law: On liberty in cyberspace. Harvard Magazine. Retrieved from http://harvardmagazine.com/2000/01/code-is-law-html

Lilien, N. (2015, February 11). Where in the world are there biometric databases? The Uplink: A Hebrew technology magazine. Retrieved from https://www.lnk.co.il/shorty/world-biometric-database [Heb].

Livingstone, S. (2008). Internet literacy: Young people’s negotiation of new online opportunities. In T. McPherson (Ed.), Digital youth, innovation, and the unexpected (pp. 101-122). Cambridge, MA: The MIT Press.

Losey, J. (2014). The Anti-Counterfeiting Trade Agreement and European civil society: A case study on networked advocacy. Journal of Information Policy, 4, 205-227.

Lubin, A. (2017, January 9). A new era of mass surveillance is emerging across Europe. Just Security. Retrieved from https://www.justsecurity.org/36098/era-mass-surveillance-emerging-europe/

MacKinnon, R. (2010). Networked authoritarianism in China and beyond: Implications for global internet freedom (White paper). Stanford, CA: Stanford University. Retrieved from http://fsi-media.stanford.edu/evnts/6349/MacKinnon_Libtech.pdf

Milberg, S. J., Burke, S. J., Smith, J. H., & Kallman, E.A. (1995). Rethinking copyright issues and ethics on the net: Values, personal information privacy, and regulatory approaches. Communications of the ACM,38(12), 65-73.

Monroe, E. P. (2015). Free expression, globalism and the new strategic communication. UK: Cambridge University press.

Mueller, M. L. (2010). Network and States: The Global Politics of Internet Governance. MA: MIT press.

Netivei Israel (2017). National Traffic Management Center. Retrieved from https://www.iroads.co.il/en/content/national-traffic-management-center

Newman, A. (2008). Protectors of privacy: Regulating personal data in the global economy. Ithaca: Cornell University Press.

Norris, P. (2002). Democratic Phoenix: Reinventing Political Activism. New York: Cambridge University Press

Osenga, K. J. (2013). The internet is not a super highway: Using metaphors to communicate information and communications policy. J. Info. Pol'y, 3, 30-54.

Panwar, P. (2015, April 15). Know all about #netneutrality in India & save the internet: Explained. OneIndia Retrieved from http://www.oneindia.com/feature/know-what-is-net-neutrality-and-save-the-internet-explained-1713980.html

Patient's Rights Act (1996). Knesset Israel, 1591:327-336. Retrieved from http://fs.knesset.gov.il//13/law/13_lsr_211755.PDF 

Postigo, H. (2008). Capturing fair use for the Youtube generation: The digital rights movement, the Electronic Frontier Foundation, and the user-centered framing of fair use. Information, Communication & Society,11(7), 1008-1027.

Powers, S. M. & Jablonski, M. (2015). The real cyber war: the political economy of Internet Freedom. IL: University of Illinois Press.

Ravid, B. (2017, March 21). Israeli ministry trying to compile database of citizens who support BDS. Haaretz. Retrieved from http://www.haaretz.com/israel-news/1.778516 [Heb].

Rauhofer J. (2008). Privacy is dead, get over it! Information privacy and the dream of a risk-free society. Information & Communications Technology Law, 17(3), 185-197.

Regan, P. M. (1995) Legislating Privacy: Technology, Social Values, and Public Policy. Chapel Hill: The University of North Carolina Press.

Ribak, R. (2003, May). Parents’ concerns over the internet: A cross-cultural comparison. Paper presented at the annual meeting of the International Communication Association, San Diego, CA. Retrieved from http://www.allacademic.com/meta/p112185_index.html

Ribak, R. (2007). Privacy is a basic American value: Globalization and the construction of web privacy in Israel. Communication Review, 10(1), 1-27.

Ribak, R. & Turow, J. (2003). Internet power and social context: A globalization approach to web privacy concerns. Journal of Broadcasting & Electronic Media, 47(3), 328-349.

Shamah, D. (2013, June 9). Israelis are used to being spied on all the time. The Times of Israel. Retrieved from http://www.timesofisrael.com/israeli-authorities-use-far-wider-surveillance-powers-than-those-causing-storm-in-us/ [Heb].

Solove, D. J. & Hartzog, W. (2014). The FTC and the New Common Law of

Privacy. Columbia Law Review, 114, 583-676.

Somfalvi, A. & Ronen, E. (2008, August 3). The government approves: Biometric database for Israeli citizens. Ynet. Retrieved from http://www.ynet.co.il/articles/1,7340,L-3576961,00.html [Heb].

Tăbușcă, S. M. (2010). The internet access as a fundamental right. Journal of Information Systems and Operations Management, 4(2), 206 – 212.

Tausig, D. (2015). Living proof: Autobiographical political argument in We Are the 99 Percent and We Are the 53 Percent. International Journal of Communication, 9, 1256–1274

The Basic Law: Human Dignity and Liberty (1992). Knesset Israel, 1391, 150. Retrieved from http://fs.knesset.gov.il/12/law/12_lsr_211801.PDF

The Basic Law: The Judiciary, (1984). Knesset Israel, 1123, 198-218. Retrieved from http://fs.knesset.gov.il/11/law/11_lsr_311021.PDF

The Criminal Procedure Law (Enforcement Powers – Body Search of Suspect), (1996). Knesset Israel, 1573: 136-149. Retrieved from http://fs.knesset.gov.il/13/law/13_lsr_211315.PDF

The Criminal Procedure Law (Enforcement Powers – Communication Data), (2007). Knesset Israel, 2122: 72-78. Retrieved from http://fs.knesset.gov.il/17/law/17_lsr_300150.pdf

The Freedom of Information Act, (1998). Knesset Israel, 1667, 226-232. Retrieved from http://fs.knesset.gov.il/14/law/14_lsr_211487.PDF

The Genetic Information Law, (2000). Knesset Israel, 1766, 62-74. Retrieved from http://fs.knesset.gov.il/15/law/15_lsr_300291.pdf

The Identity Card Carrying and Displaying Law, (1982). Knesset Israel, 1070: 20. Retrieved from http://fs.knesset.gov.il//10/law/10_lsr_210028.PDF

The Inclusion of Biometric Means of Identification in Identity Documents and in an Information Database Law (2009). Knesset Israel, 2217, 255-272. Retrieved from http://fs.knesset.gov.il/18/law/18_lsr_300928.pdf

The Joint Committee of the Science and Technology Committee and the Interior and Environmental Protection Committee protocols (2009-2017). The official protocols of the Knesset. Retrieved from http://main.knesset.gov.il/Activity/Legislation/Laws/Pages/lawlaws.aspx?t=lawlaws&st=lawlaws

The Prevention of Sexual Harassment Law, (1998). Knesset Israel, 1661: 166-170. Retrieved from http://fs.knesset.gov.il/14/law/14_lsr_211481.PDF

The Protection of Privacy Law (1981). Knesset Israel, 1011: 128-134. Retrieved from http://fs.knesset.gov.il//9/law/9_lsr_208332.PDF 

The Wiretap Law (1979). Knesset Israel, 938: 118-120. Retrieved from http://fs.knesset.gov.il//9/law/9_lsr_208328.PDF 

Worthington D., Fitch-Hauser, M., Välikoski, T.R., Imhof, M. & Kim, S.H. (2011). Listening and privacy management in mobile phone conversations: A cross-cultural comparison of Finnish, German, Korean and United States students. Empedocles: European Journal for the Philosophy of Communication, 3(1), 43-60.

Yaron, O. (2011, August 12). Fighting to keep privacy alive. Haaretz. Retrieved from http://www.haaretz.co.il/captain/net/1.1372639 [Heb].

Zarhin, T. (2012, July 23). Justices of the High Court of Justice: The necessity of the biometric database should be examined. Haaretz. Retrieved from http://www.haaretz.co.il/news/law/1.1783741 [Heb].

Zarhia, T. & Izesko, S. (2015, September 6). The law that will change the credit market. TheMarker. Retrieved from http://www.themarker.com/news/1.2725229 [Heb].

Footnotes

1. The Joint Committee is a parliamentary committee formed by the Knesset (the Israeli parliament) to deal with the biometric database.

Accountability challenges confronting cyberspace governance


Introduction

What a little more than forty years ago started as a government-sponsored network research project has evolved into a “global [...] substrate that [...] underpins the world’s critical socio-economic systems” (Demchak & Dombrowski, 2013, p. 29; Weber, 2013). Cyberspace has become a key domain of power execution and a core issue of global politics (Nye, 2010). Initially construed as a space free from regulation and intervention (Barlow, 1996; Johnson & Post, 1996), the rising tide of threats to the stability and future development of cyberspace has spurred calls for more expansive governance.

Over the course of the past two decades, the term governance has enjoyed widespread use across a great number of discourses (Enderlein, Wälti, & Zürn, 2010). In the context of cyberspace, governance has come to refer to the sum of regulatory efforts put forward with regard to addressing and guiding the future development and evolution of cyberspace (Baldwin, Cave, & Lodge, 2010, p. 525). Cyberspace governance is characterised by a large number of actors, issue areas, and fora involved in processes of steering. Accountability structures are often incoherent in settings of this nature and questions such as who is accountable to whom for what by which standards and why remain opaque, and warrant closer examination (Bovens, Goodin, & Schillemans, 2014). For purposes of illustration, it is worth considering the following: while critically important to the workings of the digital realm, the activities of some of the largest cyberspace governance entities, including, among others, the Internet Corporation for Assigned Names and Numbers (ICANN), the Internet Governance Forum (IGF), or the Internet Engineering Task Force (IETF), are not based on or mandated by international legal instruments. Furthermore, "there are no clear [or only few] existing structures such as courts, legislative committees, national auditors, ombudsmen, and so on, to which recourse can be made to render [these cyberspace governance institutions] accountable" (Black, 2008, p. 138).

Taking note of the complexities related to processes of account rendering in the context of cyberspace governance, this paper asks the following interrelated research questions:

  • Conceptually, what are the key accountability challenges confronting cyberspace governance?
  • How can these accountability challenges be addressed?

Attaining a better understanding of how accountability structures play out in cyberspace governance is key for increasing transparency, assessing processes of legitimisation, and scrutinising impending models of regulation.

This paper is structured in four sections: Section I reviews relevant background information and concepts, and lays out the methodology. Section II highlights key accountability challenges confronting cyberspace governance. Section III stipulates a set of policy recommendations geared towards addressing the accountability challenges identified as part of Section II. Section IV summarises the findings of this paper and offers some concluding remarks.

Conceptual frame and methodology

In order to grasp the accountability challenges confronting cyberspace governance, it is necessary to establish a common point of departure and lay out key concepts, i.e. cyberspace and accountability.

Cyberspace

Coined by William Gibson in the mid-1980s (Ottis & Lorents, 2010), cyberspace is the most elemental concept with regard to cyberspace governance (Kello, 2013, p. 17). It lays out the domain within which cyberspace governance can be construed. Even though cyberspace has become deeply embedded in everyday life, there is little clarity on what it comprises (Murray, 2007). The understanding of cyberspace is still nascent and the concept remains riddled with terminological ambiguity. The number of definitional accounts pertaining to cyberspace is bewilderingly large, ranging from technological to socio-political and economic descriptions (NATO Cooperative Cyber Defence Centre of Excellence, 2017).

Cyberspace is often equated with the World Wide Web but the two are not the same. Cyberspace can be thought of as a complex, highly distributed network infrastructure (Clarke & Knake, 2012). In contrast, the World Wide Web denotes a collection of resources (e.g. webpages) identifiable by means of global Uniform Resource Identifiers (URI), and accessible via cyberspace (World Wide Web Consortium, 2004).
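To make this distinction concrete, the following minimal sketch (our illustration, not drawn from the cited sources) uses Python's standard urllib.parse module to show that a URI merely identifies a resource; actually retrieving the resource depends on the underlying network substrate of name servers and transport protocols.

# A minimal sketch illustrating the distinction drawn above: a URI
# *identifies* a Web resource, while retrieving it relies on the
# underlying network infrastructure (DNS, TCP/IP).
from urllib.parse import urlparse

uri = "https://www.w3.org/TR/webarch/#identification"
parts = urlparse(uri)

print(parts.scheme)    # 'https' -> the access protocol
print(parts.netloc)    # 'www.w3.org' -> a host on the network substrate
print(parts.path)      # '/TR/webarch/' -> the resource on that host
print(parts.fragment)  # 'identification' -> a view within the resource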

The view of cyberspace adopted in this paper is consistent with Chris Demchak's and Peter Dombrowski's understanding of cyberspace as a "global [...] substrate that [...] underpins the world's critical socio-economic systems" (Demchak & Dombrowski, 2013, p. 29). Their definition underscores the economic, social, and political importance of the network infrastructure, and alludes to the multitude of docking points for governance and policy interventions, as well as stakeholder concerns.

Accountability

In terms of conceptual coherence, accountability struggles with similar definitional ambiguity to that of cyberspace. Over the past decade, accountability has become something of a catchword, and has been assigned various meanings by scholars of different disciplines, impairing consistent and comprehensive terminological application and research (Bovens et al., 2014). Although scholars seem to agree on the concept’s overall importance, they appear to be less unified apropos its constitutive elements.

Consciously abstaining from advancing yet another definition or reconceptualisation of accountability, and from further increasing the term's elusiveness, this paper relies on what Bovens, Goodin and Schillemans call the minimal conceptual consensus:

“The minimal conceptual consensus entails, first of all, that accountability is about providing answers; is about answerability towards others with a legitimate claim to demand an account. Accountability is then a relational concept, linking those who owe an account and those to whom it is owed. Accountability is a relational concept in another sense as well, linking agents and others for whom they perform tasks or who are affected by the tasks they perform” (Bovens et al., 2014, p. 6).

Emphasising the concept’s socio-relational core, i.e. the onus of an actor or body to give reasons for or defend conduct to another set of actors, the minimal definitional consensus is concise, yet broad enough to ascertain empirical validity and operationalisation in complex analytical environments, such as cyberspace governance (Bovens, 2007, p. 13).

Far from a coherent system, cyberspace governance resembles a jungle of different, at times competing, regulatory endeavours. Such endeavours can take many forms: they can be hierarchical with clear sanctions attached, e.g. legal rules and ordinances, international and national contracts and agreements, or softer, e.g. voluntary technical standards and protocols, and informal codes of conduct (Levi-Faur, 2011, p. xvi). In order to counter tendencies of disintegration and ensure continuous openness and stability of the digital environment, tangible accountability structures are of critical importance (Scholte, 2008, p. 15; Weber, 2014, p. 78).

Methods

From a methodological point of view, this paper employs qualitative means of data collection and analysis. It is grounded in a review of policy documents and secondary academic literature on accountability, cyberspace governance, and international relations. Data was collected by means of online desk research. Databases queried included, among others: Taylor & Francis Online, EBSCOhost, Elsevier Science Direct, Google Scholar, Google Books, as well as Search Oxford Libraries Online (SOLO). The sources identified were grouped and examined by means of content analysis.

Building on existing accountability scholarship and engaging in further theorisation, this paper serves as a steppingstone for thinking more rigorously about accountability in the context of cyberspace governance. Its goal is to contribute to current scholarly debates, and formulate relevant policy recommendations.

The findings of this paper are contextually and temporally specific and need to be understood as such. Much of the topic under investigation is still very much in flux. Conceptually, the governance of cyberspace is a field that is likely to remain under construction for the foreseeable future (Dutton & Peltu, 2007).

Key challenges

Cyberspace governance involves a great number of different constituencies, spans across various issue areas, and exhibits a high degree of institutional malleability (Kleinwächter, 2011; Mueller, Mathiason, & Klein, 2007, p. 237; Raymond & DeNardis, 2015, p. 41). Cumulatively, these factors contribute to a rise in complexity apropos basic structures of accountability.

A juxtaposition of the concepts of cyberspace and accountability reveals the following accountability challenges with regard to the governance of the virtual domain: the problem of many hands, the profusion of issue areas, and the hybridity and malleability of institutional arrangements.

The problem of many hands refers to a condition of accountability obfuscation caused by a great number of actors engaged in concurrent regulatory ventures (Bovens, 2007; Papadopoulos, 2003). "Because many different officials contribute in many ways to decisions and policies […] it is difficult even in principle to identify who is morally responsible for political [and technical] outcomes" (Thompson, 1980, p. 905). In the context of cyberspace governance, the number of stakeholders contributing to policy outcomes and regulatory deliberations is immense. To illustrate, a question such as "who is accountable for the current and future development of the virtual realm?" may yield any of the following answers: the Internet Corporation for Assigned Names and Numbers (ICANN), the Internet Engineering Task Force (IETF), the World Wide Web Consortium (W3C), the Internet Governance Forum (IGF), the International Telecommunication Union (ITU), large Internet Service Providers (ISPs) such as AT&T, powerful nation states or departments, such as the US Department of Commerce or the US National Security Agency, influential software companies, as well as civil society groups and individual experts who take part in and contribute to the operations of organisations, such as ICANN or the IETF (DeNardis, 2014; Scholte, 2008, p. 19). While the abundance of actors involved in cyberspace governance does not (necessarily) imply an absence of accountability mechanisms, it does mean higher degrees of complexity.

The heterogeneity of stakeholder configurations can aggravate questions of agency and contribution. Accountability structures are more difficult to determine because actors co-produce outcomes and contribute to the end-product in hybrid constellations. Accountability structures can further be complicated by the conflation of stakeholder-specific traditions, standards, and expectations (Koppell, 2005, p. 94). Not only is the variety of actors contributing to governance ventures and their goals larger, making the identification of accountability objects more difficult (i.e. for which goals should accountability be rendered?), but their expectations can diverge and complicate the emergence of clear lines of responsibility or accountability (Bovens et al., 2014; Carr, 2016, p. 43). Indeed, environments characterised by multiple stakeholders tend to provide opportunities for blame-shifting (Papadopoulos, 2010, p. 1039).

The problem of many hands represents but one accountability challenge in the context of cyberspace governance. The profusion of issue areas, spanning across technical, socio-political, and economic spheres, constitutes another conundrum. In the context of cyberspace governance, the excess and coming together of technical and non-technical issue areas can severely complicate accountability structures. Seemingly unrelated issue areas may suddenly converge. Examples of such convergence can, among others, be found in areas related to intellectual property rights protection and address naming and numbering:

“The names and numbers given to Internet entities, such as domain names used in Internet addresses, may seem to be a [solely technical] issue to be managed by the Internet Corporation for Assigned Names and Numbers (ICANN). But, the registration of a well-known trademark as a domain name with the intention of selling it back to the owner, called ‘cyber-squatting’, has led to governance issues that are also the concern of international organisations, like the World Intellectual Property Organisation (WIPO), and national and international legislation and regulations which also cover more traditional trademark and related concerns” (Dutton & Peltu, 2007, p. 8).
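The technical layer at stake in such disputes can be illustrated with a short, hypothetical sketch: the "names and numbers" the quote refers to are, at bottom, mappings from human-readable domain names to numeric addresses, coordinated through ICANN and the regional Internet registries. The domain below is a reserved example name, not tied to any actual dispute.

# A hypothetical illustration of the "names and numbers" layer:
# resolving a human-readable domain name to a numeric address.
import socket

domain = "example.org"  # reserved example domain, used here as a placeholder
address = socket.gethostbyname(domain)
print(f"{domain} -> {address}")  # prints the currently assigned IP address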

The confluence of issue areas can lead to “tangled web[s] of relationships” (Dubnick & Frederickson, 2014, p. xxi). Left untangled, these intertwined webs of relationships can have fatal consequences for accountability structures. For one thing, they can result in the erosion of (pre-existing) accountability structures and cause accountability deficits. For another thing, they can lead to dysfunctional amalgamations of accountability arrangements and bring about situations of accountability overcrowding (Bovens, 2007, p. 462).

The hybridity of institutional arrangements pertaining to cyberspace governance poses yet another accountability challenge. Cyberspace governance is characterised by the absence of a coherent regime or organisation in charge of enacting globally consistent and comprehensive norms and policies. A considerable number of institutions involved in cyberspace governance exhibit characteristics of fluidity and ad-hocism. Accountability structures tend to suffer from the dispersion of topics across different organisational settings and related institutional volatility. They are further aggravated by the fact that stakeholders can take on different roles across different fora of interaction.

The propensity for role-shifting means that certain actors may be involved in the production of outcomes in one forum (be accountors) but may play the part of accountees in other institutional settings. For example, an academic research group may contribute substantially to the development of new security protocols, e.g. in the context of IETF meetings, but may hold private sector companies accountable for faulty implementation/commercialisation of said security protocols, e.g. in circumstances of dispute resolution (Dickinson, 2014). "Insofar as accountability mechanisms are present, […] mechanisms [can] become mixed. The [jumble] of accountability mechanisms that results from this [can give] rise to uncertainty, confusion, or shirking" (Bovens et al., 2014, p. 250).

The hybridity of institutional setups also makes developments hard to track and procedural access for some stakeholders, including civil society, uneven, thereby undermining processes of public account giving (Jayawardane, Larik, & Jackson, 2015, p. 7). Civil society organisations have voiced concerns about unequal participation and the fact that decisions of sensitive, yet far-reaching nature are made behind closed doors across several I* organisations, including, for example, the Internet Society (ISOC), IETF, ICANN, W3C, the Internet Architecture Board (IAB), as well as the regional Internet registries (RIRs), and country code domain name registries (APNIC, 2017).

Policy recommendations

In the context of cyberspace governance, the heterogeneity of stakeholders, the profusion of issue areas, as well as the malleability and distribution of institutional arrangements generate deep-rooted accountability tensions that are not easy to resolve. However, these tensions should not discourage researchers and policymakers from thinking about potential solutions and devising relevant strategies (Black, 2012). The subsequent paragraphs offer a set of policy recommendations geared towards addressing the three challenges identified above.

Heterogeneity of stakeholders

Cyberspace governance is not a unitary undertaking but exhibits characteristics of post-sovereignty. Processes of steering are "institutionally diffuse and lack a single locus of supreme, absolute, and comprehensive authority" (Scholte, 2008, p. 18). Given the complexity of the realm and the absence of a final arbiter, policy prescriptions centring on hierarchical command and control mechanisms appear ill-suited to resolve the tensions identified. Accountability structures should be reflective of the diversity of stakeholders, and be established on a collective basis. In view of the dominance of sovereigntist (hierarchical) accountability artefacts, the implementation of shared accountability structures may entail a deliberate reworking of account-rendering functions and processes. While the call for collective accountability structures does not imply the participation of the entirety of stakeholders, it does mean the enfranchisement of all relevant parties (Malcolm, 2015, p. 2). The enlistment of stakeholders essential to the resolution of specific cyberspace governance problems presents an important first step with regard to streamlining collective accountability structures and identifying corresponding responsibilities.

In terms of accountability enforcement, the institutionalisation of multistakeholder-oriented checks and balances is key. Independent, constitutionally inspired oversight mechanisms, such as ombudsmen or multistakeholder-versed third-party supervisory and review authorities, together with clear standards, provide useful instruments in this regard. Such standards support the introduction of meaningful benchmarks of expected behaviour and set criteria against which conduct can be assessed (Weber, 2009, p. 159). Given the heterogeneity of stakeholders, relevant standards need to be flexible, yet specific enough to take effect in the respective cyberspace governance arenas.

The adoption of constitutionally inspired enforcement mechanisms has proven fruitful in various cases. In the context of ICANN, for example, the appointment of an ombudsman has helped clarify otherwise murky accountability structures, and provided community members with a useful mechanism of recourse. The ICANN ombudsman evaluates complaints about the organisation (including staff, board, supporting organisations, and advisory committees) lodged by community members, and promotes understanding of pertinent community issues (Davidson, 2009, p. 137).

Profusion of issue areas

The intertwining of political, technical, economic, and cultural dimensions requires a conscious re-calibration of cyberspace governance debates. Given the scale and scope of the cyberspace governance landscape, accountability arrangements cannot meaningfully be established based on broadly framed, overarching legal instruments, e.g. global treaties or covenants. Rather, discussions of accountability should be organised around specific, manageable issue areas, and include stakeholders from different backgrounds who are capable of flagging areas of intersection and convergence. The identification of relevant issue areas around which procedures and actor expectations can converge is critical for the emergence of tangible accountability structures (Krasner, 1985, p. 2). Issue specificity helps to reduce ambiguity apropos actor relations, incentives, and goals, and allows for the strategic construction and connection of different cyberspace governance debates, as well as for the attribution of stakeholder responsibilities (Slack, 2016, p. 76).

In the absence of clearly defined processes of account rendering, issue-specific policy networks can offer a useful corrective. In the context of the IGF, for example, so-called Dynamic Coalitions have served as critical means for creating accountability-related anchor points. Dynamic Coalitions are informal, issue-oriented groups of stakeholders working on specific cyberspace governance topics, e.g. freedom of expression and freedom of the media on the internet, network neutrality, or the internet of things. To be recognised, they have to "produce a written statement which [outlines] the need for the coalition, an action plan, a mailing list, the contact person(s), [as well as] a list of representatives from at least three stakeholder groups" (Internet Governance Forum, 2016). Such thematic groupings go some way in creating a collective identity and sense of responsibility among stakeholders (Harlow & Rawlings, 2007, p. 560).

Malleability and distribution of institutional arrangements

To avoid forum-related accountability confusion, institutions and stakeholders involved in processes of cyberspace governance are well advised to clearly specify their mission and openly communicate their role (Malcolm, 2015, p. 4). Well-defined mission statements and mandates help to create longer-term commitment and guidance, and reduce the risk of ad-hocism and agenda shifting brought about by changing stakeholder configurations.

Institutional inaccessibility and discrimination should be addressed through proactive engagement and resourcing, as well as through flexible institutional set-ups. Cyberspace governance bodies need to be procedurally and structurally open to admit the participation of all stakeholders who are significantly affected by specific policy problems, or interested in the deliberation and resolution of cyberspace governance issues (Malcolm, 2015). “Proactive dissemination of pertinent, appropriate and quality information […] at the right time, in the right format, and through the right channels increases the likelihood of uptake by [relevant stakeholders and decreases the possibility of defection and exclusion]” (World Health Organisation, 2015, p. 10). Organisational transparency and certainty, as well as meaningful stakeholder inclusion structured around specific issue areas are of critical importance for the creation of clear accountability structures and the assurance of continuous stakeholder buy-in.

Conclusion

In as complex and dispersed an environment as cyberspace, the examination and institutionalisation of accountability structures is not a straightforward undertaking. Researchers and policymakers are confronted with tangled webs of accountability relationships of different texture and design. Untangling these webs requires conscious and concerted efforts at process and institutional levels (Bovens et al., 2014, p. 251).

This paper has argued that accountability structures are contested by the very elements that are constitutive of cyberspace governance, namely, the number of stakeholders contributing to regulatory ventures, the multiplicity of issue areas concerned, and the hybridity and distribution of institutional arrangements involved. Taken together, these factors bring about the following accountability challenges: the problem of many hands, the profusion of issue areas, as well as the malleability of institutional arrangements.

With a view to addressing the challenges identified, this paper has reasoned that in accordance with the distributed nature of the realm, accountability needs to be exercised and structured in a collective fashion. Given the polycentric nature of cyberspace governance, one-dimensional, sovereigntist conceptions of accountability that intend to attach ultimate responsibility to a unitary source of authority are misplaced. In the absence of a single locus of authority, accountability structures need to be consciously reframed, involving all relevant stakeholders. “All nodes in a given [cyberspace governance venture] must play their part in delivering transparency, consultation, evaluation, and correction” (Scholte, 2008, p. 20). Clear communication of and clarity about institutional and stakeholder-related roles, goals, and expectations are key success factors for establishing accountability structures in complex governance settings. Greater organisational transparency, proactive stakeholder engagement, and procedural openness are key prerequisites for tackling institutional malleability and elusiveness.

No claim is made that the recommendations stipulated by this paper will resolve all accountability challenges pertaining to the governance of the digital realm. On the contrary, this paper recognises that much of what has been discussed is still very much terra incognita and requires continuing research. Establishing accountability structures in polycentric governance environments is a demanding and difficult enterprise which requires concerted and sustained efforts by scholars and practitioners alike.

References

APNIC. (2017). I* organizations – APNIC. Retrieved May 31, 2017, from https://www.apnic.net/community/ecosystem/iorgs/

Baldwin, R., Cave, M., & Lodge, M. (Eds.). (2010). The Oxford Handbook of Regulation. Oxford: Oxford University Press. doi:10.1093/oxfordhb/9780199560219.001.0001

Barlow, J. P. (1996). A Declaration of the Independence of Cyberspace. Retrieved from https://projects.eff.org/~barlow/Declaration-Final.html

Black, J. (2008). Constructing and contesting legitimacy and accountability in polycentric regulatory regimes. Regulation & Governance, 2(2), 137–164. doi:10.1111/j.1748-5991.2008.00034.x

Black, J. (2012). Calling Regulators to Account: Challenges, Capacities and Prospects. SSRN Electronic Journal. doi:10.2139/ssrn.2160220

Bovens, M. (2007). Analysing and assessing accountability: a conceptual framework. European Law Journal, 13(4), 447–468. doi:10.1111/j.1468-0386.2007.00378.x

Bovens, M., Goodin, R. E., & Schillemans, T. (Eds.). (2014). The Oxford Handbook of Public Accountability. Oxford: Oxford University Press. Retrieved from http://www.oxfordhandbooks.com/view/10.1093/oxfordhb/9780199641253.001.0001/oxfordhb-9780199641253

Carr, M. (2016). Public-private partnerships in national cyber-security strategies. International Affairs, 92(1), 43–62. doi:10.1111/1468-2346.12504

Clarke, R. A., & Knake, R. K. (2012). Cyber War: The Next Threat to National Security and What To Do About It. New York: HarperCollins.

Davidson, A. (2009). The Law of Electronic Commerce. Cambridge University Press. Retrieved from https://books.google.co.uk/books?id=VfIfAwAAQBAJ

Demchak, C., & Dombrowski, P. (2013). Cyber Westphalia: Asserting State Prerogatives in Cyberspace. Georgetown Journal of International Affairs. Retrieved from http://www.jstor.org/stable/43134320

DeNardis, L. (2014). The Global War for Internet Governance. New Haven, CT: Yale University Press. doi:10.12987/yale/9780300181357.001.0001

Dickinson, S. (2014). Background Paper (IGF 2014 Workshop 96: Accountability challenges facing Internet governance today). Retrieved from http://www.intgovforum.org/cms/wks2014/uploads/proposal_background_paper/internet-governance-accountability-challenges-background-paper.pdf

Dubnick, M. J., & Frederickson, H. G. (2014). Accountable Governance: Problems and Promises. M.E. Sharpe. Retrieved from https://books.google.co.uk/books?id=M32XUtMBSh4C

Dutton, W. H., & Peltu, M. (2007). The emerging Internet governance mosaic: connecting the pieces. Information Polity, 12(1–2), 63–81. Retrieved from https://www.oii.ox.ac.uk/archive/downloads/publications/FD5.pdf

Enderlein, H., Wälti, S., & Zürn, M. (2010). Handbook on Multi-Level Governance. Edward Elgar Publishing Limited. Retrieved from https://books.google.ch/books?id=YlmoCs207UAC

Harlow, C., & Rawlings, R. (2007). Promoting Accountability in Multilevel Governance: A Network Approach. European Law Journal, 13(4), 542–562. doi:10.1111/j.1468-0386.2007.00383.x

Internet Governance Forum. (2016). Dynamic Coalitions. Retrieved September 14, 2017, from http://www.intgovforum.org/cms/dynamiccoalitions

Jayawardane, S., Larik, J., & Jackson, E. (2015). Cyber Governance: Challenges, Solutions, and Lessons for Effective Global Governance. Retrieved from http://www.thehagueinstituteforglobaljustice.org/information-for-policy-makers/policy-brief/cyber-governance-challenges-solutions-and-lessons-for-effective-global-governance/

Johnson, D. R., & Post, D. (1996). Law and Borders: The Rise of Law in Cyberspace. Stanford Law Review, 48(5), 1367. doi:10.2307/1229390

Kello, L. (2013). The Meaning of the Cyber Revolution: Perils to Theory and Statecraft. International Security, 38(2), 7–40. doi:10.1162/ISEC_a_00138

Kleinwächter, W. (2011). A new Generation of Regulatory Frameworks: The Multistakeholder Internet Governance Model. In Kommunikation: Festschrift für Rolf H. Weber zum 60. Geburtstag (pp. 559–580). Stämpfli Verlag.

Koppell, J. G. S. (2005). Pathologies of accountability: ICANN and the challenge of “Multiple Accountabilities Disorder.” Public Administration Review, 65(1), 94–108. doi:10.1111/j.1540-6210.2005.00434.x

Krasner, S. D. (1985). International regimes (Vol. 3a). Cornell University Press. Retrieved from https://books.google.de/books?id=WIYKBNM5zagC

Levi-Faur, D. (Ed.). (2011). Handbook on the Politics of Regulation. Cheltenham: Edward Elgar Publishing. doi:10.4337/9780857936110

Malcolm, J. (2015). Criteria of meaningful stakeholder inclusion in internet governance. Internet Policy Review, 4(4). doi:10.14763/2015.4.391

Mueller, M., Mathiason, J., & Klein, H. (2007). The Internet and Global Governance: Principles and Norms for a New Regime. Global Governance, 13, 237–254.

Murray, A. (2007). The Regulation of Cyberspace. Taylor & Francis. doi:10.4324/9780203945407

NATO Cooperative Cyber Defence Centre of Excellence. (2017). Cyber Definitions. Retrieved May 31, 2017, from https://ccdcoe.org/cyber-definitions.html

Nye, J. S. (2010). Cyber Power. Belfer Center for Science and International Affairs, (May), 1–31. Retrieved from http://belfercenter.ksg.harvard.edu/files/cyber-power.pdf

Ottis, R., & Lorents, P. (2010). Cyberspace: Definition and Implications. In Proceedings of the 5th International Conference on Information Warfare and Security, Dayton, OH, US, 8-9 April (pp. 267–270).

Papadopoulos, Y. (2003). Cooperative forms of governance: Problems of democratic accountability in complex environments. European Journal of Political Research, 42(4), 473–501. doi:10.1111/1475-6765.00093

Papadopoulos, Y. (2010). Accountability and Multi-level Governance: More Accountability, Less Democracy? West European Politics, 33(5), 1030–1049. doi:10.1080/01402382.2010.486126

Raymond, M., & DeNardis, L. (2015). Multistakeholderism: anatomy of an inchoate global institution. International Theory, 7(3), 572–616. doi:10.1017/S1752971915000081

Scholte, J. A. (2008). Global governance, accountability and civil society. doi:10.1017/CBO9780511921476

Slack, C. (2016). Wired yet Disconnected: The Governance of International Cyber Relations. Global Policy, 7(1), 69–78. doi:10.1111/1758-5899.12268

Thompson, D. F. (1980). Moral Responsibility of Public Officials: The Problem of Many Hands. American Political Science Review, 74(4), 905–916. doi:10.2307/1954312

Weber, R. H. (2009). Accountability in Internet Governance. International Journal of Communications Law and Policy, 13, 152–167.

Weber, R. H. (2013). The legitimacy and accountability of the internet’s governing institutions. In Research Handbook on Governance of the Internet (pp. 99–120). Edward Elgar Publishing Limited.

Weber, R. H. (2014). Realizing a New Global Cyberspace Framework: Normative Foundations and Guiding Principles. Springer. Retrieved from https://books.google.co.uk/books?id=YemZBAAAQBAJ

World Health Organisation. (2015). WHO Accountability Framework. Retrieved from http://www.who.int/about/who_reform/managerial/accountability-framework.pdf

World Wide Web Consortium. (2004). Architecture of the World Wide Web. (I. Jacobs & N. Walsh, Eds.). W3C. Retrieved from https://www.w3.org/TR/webarch/

Two crates of beer and 40 pizzas: the adoption of innovative political behavioural targeting techniques


This paper is part of 'A Manchurian candidate or just a dark horse? Towards the next generation of political micro-targeting research’, a Special issue of the Internet Policy Review.

Introduction

As political campaigns compete, they try to outsmart each other by all sorts of actions: from dropping witty puns during a televised debate, to strategically knocking on doors and convincing voters. Technological innovation can help political parties improve the effectiveness of their campaigns. By using technology to collect, process, and analyse information about voters, campaigns can improve their knowledge about the electorate. Subsequently, technology can extend campaigns' capabilities of targeting specific groups with tailored messages resulting in more efficient campaigning. We call this phenomenon 'political behavioural targeting' (PBT).

Several scholars have researched political behavioural targeting in the US context (e.g., Kreiss, 2012, 2016; Nielsen, 2012; Hersh, 2015). However, the US differs in several obvious ways from most European countries. One can imagine that differences in electoral systems, privacy laws, and party financing influence campaigns’ ability to collect, process, and use personal voter data. Therefore, the findings from these studies do not necessarily apply to European countries. As there is little research in a European context, it remains unclear to what extent and how campaigns in a multiparty democracy, such as the Netherlands, use PBT-techniques. Also, it is unclear if and why there are differences between parties. In line with Colin Bennett (2016, p. 261), we wonder: "can political parties campaign in Europe as they do in North America?"

Such a question is relevant, as some scholars fear that the use of data and targeting techniques hinders public deliberation (Gorton, 2016), weakens the mandate of elected officials (Barocas, 2012), has negative effects on citizens’ privacy (Howard, 2006; Rubinstein, 2014; Tene, 2011), and enables campaigns to send tailored messages directly to citizens, thereby avoiding scrutiny from journalists (Jamieson, 2013). As a result, campaigns can potentially make opposite promises to different people, without anyone noticing.

This article sheds light on how Dutch political campaigns adopt and use PBT-techniques. Through interviews with campaign leaders, using a grounded theory approach, we answer the following overarching research question: What barriers and facilitators for the adoption and use of PBT-techniques do Dutch political parties perceive?

Theoretical framework

We will first summarise innovations in political campaigns over time, leading up to the advent of political behavioural targeting. Then, we identify the factors influencing the adoption of PBT on a campaign team level. Finally, we explore the factors that can shape the adoption of PBT on the level of national systems.

Innovations in political campaigns

Political campaigns have continuously been adapting to technological developments. Pippa Norris (2000) describes how the advent of television and the shift from partisan newspapers to national television news triggered a process of modernisation in the way political campaigns operated. Notable consequences of this shift were the adoption of a media-centred strategy in order to set the agenda, the rise of political marketing, and the collection and use of data (such as opinion polls) to "shape, fine-tune and monitor campaign efforts" (Blumler, Kavanagh, & Nossiter, 1996, p. 53). Another shift came with the internet and the new possibilities for party-voter interaction that came along with the medium, which led campaigns to a new stage of the modernisation process: the postmodern campaign (Norris, 2000).

It would be an oversimplification to point to 'the internet' as a game-changer in political communication, because of the rapidly changing nature of the internet itself. As David Karpf (2012, p. 640) notes: "the internet of 2002 has important differences from the internet of 2005, or 2009, or 2012". Accordingly, much more than the advent of the internet itself, it is the advent of social media such as Facebook (2004), YouTube (2005) and Twitter (2006) that provided political campaigns with new ways of communicating with the electorate (e.g., Gibson & McAllister, 2011; Conway, Kenski, & Wang, 2015; Vaccari, 2012). Together with companies such as Google, whose core business is actually not its well-known search engine but rather its advertisement business, social media not only facilitate new ways of communication, but also the tracking and collection of behavioural data of internet users (Zuiderveen Borgesius, 2016). This technique ("behavioural targeting") originates from the advertisement business. Ad agencies monitor people's online behaviour and combine this information with consumer data provided by data brokers to target them individually with tailored ads (Turow, 2011, p. 75). When applying this concept to the political realm, we can dub this phenomenon political behavioural targeting (PBT).
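For illustration, the targeting logic described above can be sketched in a few lines of toy code. Every name, data field, and rule below is an assumption made for the example; real ad-tech systems rely on large-scale tracking and statistical models rather than hand-written rules.

# A minimal, hypothetical sketch of behavioural targeting: observed
# browsing behaviour is combined with purchased consumer data to assign
# a user to a segment, which then selects a tailored message.

browsing_log = ["news/economy", "forum/parenting", "shop/garden"]
broker_data = {"age_bracket": "35-49", "homeowner": True}

def assign_segment(log, profile):
    """Toy rule-based profiler standing in for real ad-tech models."""
    if profile.get("homeowner") and any("garden" in page for page in log):
        return "suburban_homeowners"
    return "general_audience"

tailored_ads = {
    "suburban_homeowners": "Lower property taxes for families like yours!",
    "general_audience": "Vote for a fairer future.",
}

segment = assign_segment(browsing_log, broker_data)
print(tailored_ads[segment])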

Of course, PBT is not about selling products but about winning votes. And political campaigns have different means at their disposal than advertising agencies do (e.g. canvassing efforts), which means that PBT happens offline as well as online. We distinguish PBT-canvassing from traditional canvassing as follows: canvassing becomes PBT when campaigns are able to process information about individual conversations (such as a voter's likelihood to vote for a party or her most important voting consideration), and subsequently use that information to gain strategic insights about the electorate and/or to target the voter at a later stage with a tailored message, while skipping the 'wrong' doors in a neighbourhood (Kreiss, 2016; Nielsen, 2012).
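To ground this distinction, here is a deliberately simple sketch of the data loop that turns ordinary canvassing into PBT-canvassing: conversation notes are stored per address, and the walk list for the next round skips the 'wrong' doors. The record fields, scores, and threshold are hypothetical choices for the example, not a description of any party's actual system.

# A hypothetical sketch of the PBT-canvassing workflow described above.
canvass_records = [
    {"address": "Main St 1", "support_score": 0.8, "top_issue": "housing"},
    {"address": "Main St 2", "support_score": 0.1, "top_issue": "taxes"},
    {"address": "Main St 3", "support_score": 0.5, "top_issue": "climate"},
]

# Only revisit supportive or persuadable households (score >= 0.4),
# and tailor the opening line to the issue recorded last time.
walk_list = [r for r in canvass_records if r["support_score"] >= 0.4]
for record in walk_list:
    print(f"{record['address']}: open with {record['top_issue']}")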

Arguably, the use of PBT can be seen as the latest step within the modernisation of political campaigns. However, as we have seen in earlier phases, not all parties in all countries adopt new techniques at the same pace and rate. Below, we identify the factors influencing the adoption of PBT. We organise these factors at two levels: (1) the individual campaign around a candidate/party and (2) the national system (i.e., the electoral system, regulatory framework, and culture). This translates into the model shown in Figure 1, which will be elaborated on in the next paragraphs.

Figure 1: Factors influencing the adoption of PBT

The campaign team level

In his extensive research of US political campaigns, Daniel Kreiss (2016) identified four factors concerned with technological innovation within political campaigns. There are resource factors, such as campaign budgets and the number of volunteers a campaign can employ; infrastructural factors, such as technological tools or skills within the organisation; organisational factors, such as organisational culture and structure; and structural electoral cycle factors, such as election results. Building upon Kreiss' factors, we add an additional four (one campaign team level factor and three system level factors) to examine the use of PBT. On a campaign team level, the factor is ethical and legal concerns, such as normative reservations towards PBT. On a system level, the factors are electoral context, regulatory framework, and culture (discussed below). These new factors were identified through a review of literature about innovation in data-driven political campaigning techniques (e.g., Anstead, 2017; Kreiss, 2016; Jungherr, 2016; Hersh, 2015; Nielsen, 2012), and literature about (hybridisation of) campaign evolvement (e.g., Lijphart, 2012; Plasser & Plasser, 2002; Karlsen, 2010; Norris, 2000).

Resource factors

The main elements within this factor that could influence the extent to which campaigns can use PBT-techniques are the budget and the effort needed to carry out a PBT-operation. A large budget enables campaigns to hire skilled personnel, acquire data, or buy targeted ads. The same dynamic applies to the number of volunteers a campaign can mobilise: having a lot of them facilitates a campaign in collecting data by canvassing and sending potential voters targeted messages (the use of volunteers, of course, is dependent on their skills). Having a small budget and few volunteers, consequently, can be a barrier for campaigns because it bars them from acquiring the same capabilities or from carrying out an operation on a large scale. This is in line with normalisation theory (Margolis & Resnick, 2000), according to which the possibilities of the internet will not upset traditional power structures, but will rather develop along traditional lines as in the 'offline world'.

We can also view PBT as a means of using a campaign's resources as efficiently as possible, to ensure parties do not spend money and effort on voters who will vote for another party anyway, or on citizens who will not vote at all. Seen this way, parties with limited resources could be more inclined to use PBT so as not to waste precious money, time, and labour. This is in accordance with the idea of equalisation, which views the internet as an empowering tool for smaller parties due to its low costs and its new ways of direct communication with the electorate (Margolis, Resnick, & Levy, 2003; Bimber & Davis, 2003; Stanyer, 2010). A meta-analysis found evidence for the existence of both normalisation and equalisation in election campaigns (Strandberg, 2008). The occurrence of either process can differ per country and is dependent on several contextual factors, which will be discussed later on.

Organisational factors

The elements in this factor are about how campaign leaders perceive campaigning. Do they rely on proven best practices from previous campaigns or is there a culture of innovation? John Padgett and Walter Powell (2012) describe the concept of network folding. Applied to the political realm, this entails the extent to which campaigns employ skilled personnel from non-political sectors and integrate that expertise into their existing institutions. An example is the hiring of Google engineer Stephanie Hannon as chief technology officer by the Clinton campaign (Easton, 2015). The 'cognitive diversity' following from network folding can lead to creative ideas (De Vaan, Stark, & Vedres, 2015). Furthermore, the organisational structure can be expected to reflect the way the campaign perceives PBT. A campaign with an autonomous data department is probably more prone to rolling out a PBT-strategy than a campaign that sees 'data' as only one of the many tasks of a communication staffer. Also, a change in leadership can be a facilitator for innovation (Gibson & Römmele, 2001).

Infrastructural factors

The elements in this factor are the technological tools available to campaigns, which enable them to roll out a PBT-operation. For instance, such tools might assist volunteers in the field by enabling them to collect data. They can be developed in-house or outsourced; in fact, there are specialised third-party consultancies that offer off-the-shelf tools, which in turn allow campaigns to employ innovative technology even if the campaigners do not have any technical expertise.

Structural electoral factors1

The actions of rival campaigns fall under the umbrella of structural electoral factors. A successful PBT-campaign of a rival can facilitate innovation in other campaigns, especially if those other campaigns themselves look back on an unsuccessful election. This connects with the 'critical event' (Kreiss, 2016), such as losing an election that should have been won, or with the experience of an 'external shock', which can be an incentive for professionalisation (Gibson & Römmele, 2001).

A second element influencing campaigns' likelihood to use PBT-techniques is issue ownership (Petrocik, 1996), and the subsequent statements of party candidates propagating standpoints of the party. A political campaign 'caught' using privacy-infringing PBT-techniques while its candidates present themselves as privacy champions is likely to come across as hypocritical. Being perceived as such should be avoided, considering the negative electoral consequences of perceived political-ideological hypocrisy (Bhatti, Hansen, & Olsen, 2013).

Ethical and legal concerns

Elements within this factor consist of ethical and legal restrictions on how campaigns operate. For example, a political party could believe that PBT is ethically wrong as it infringes on citizens' right to privacy, and citizens' autonomy to form their own opinions. As a result, the party 'self-regulates' and refrains from using campaigning techniques violating its ethical beliefs.

Another element is the legal uncertainty that occurs when a campaign does not know how to behave in accordance with data protection and election laws, because of a lack of internal expertise. Such confusion can result in differences in the actions taken by comparable actors (e.g. Raskolnikov, 2017). Legal uncertainty can lead to 'overcompliance', which can be seen as a barrier towards the adoption of PBT-techniques, or to 'undercompliance', which facilitates the adoption of PBT-techniques (Calfee & Craswell, 1984). For instance, Anstead (2017) notes how parties felt disadvantaged by targeting possibilities facilitated by the perceived undercompliance with UK campaign finance law during the 2015 general elections.

The system level

Aside from campaign level factors, we look at contextual factors as well. These factors may limit the extent to which (US-American) campaigning techniques can be adopted in other countries (Karlsen, 2010). Therefore, we add three new contextual factors to our model. We expect that the electoral system, the regulatory framework, and the culture of a democracy influence the extent to which the campaign team level factors are applicable. Below, we explore how the adoption of PBT-techniques can be influenced by properties of different systems. We will later apply our model (see Figure 1) to one specific case.

Electoral system

The three dominant electoral systems are first-past-the-post (FPTP), proportional representation (PR), and two-round (TR) (Birch, 2001, 2003). How these systems function can influence how campaigns are run. The FPTP-system, first, can lead to an overvaluation of some key districts. Such districts sometimes 'swing' to one party and sometimes to another party, whereas other districts go to the same party in each election. As a result, campaigns in a FPTP-system are inclined to spend a disproportionate amount of money and labour in these key districts in the hope of swinging the election their way (e.g., Anstead, 2017; Lipsitz, 2004). The PR-system, second, does not favour a select group of voters in a few key districts (Plasser & Plasser, 2002). This is especially true when the PR-system consists of only one district, in which every vote counts equally. As a result, campaigns have to spread their means more equally over the country. The TR-system, third, makes for a relatively unpredictable campaign, since it is often unclear which candidates will make it to the second round. Furthermore, the TR-system makes it important for campaigns to collect the votes of the supporters of the losing candidates of the first round. Therefore, campaigns should not only focus on their own base but on other candidates' bases as well (Blais & Indridason, 2003). This has consequences for PBT, since campaigns should not only correctly classify potential voters as their own, but the other voters as well, in order to target them in the next round.

A different aspect of electoral systems that influences how a campaign is run is the degree of fractionalisation in a democracy (Duverger, 1959; Lijphart, 2012; Wang, 2012). FPTP-systems favour relatively few candidates/parties. PR-systems, in contrast, enable a large number of parties to run in an election. The first round of a TR-system can consist of many different candidates. As a result, campaigns that operate in a PR or a TR-system are less likely to launch attack campaigns against competitors. This is because PR-systems generally require a coalition of parties working together after the elections (Plasser & Plasser, 2002). And in a TR-system, campaigns should not attack competing candidates too harshly, because winning campaigns have to court the bases of losing candidates in order to win the second round. Furthermore, in a highly fractionalised democracy, parties represent different (minority) groups within the electorate. This results in a high risk of 'mistargeting', in which campaigns approach a member of group A with appeals meant for a member of group B. Mistargeting can lead to voters penalising the campaign for its mistake (Hersh, 2013). These contextual circumstances may call for different PBT-strategies.
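The arithmetic behind this risk can be made explicit with a stylised calculation. The accuracy figures, gains, and penalties below are invented for illustration; the point is the direction of the effect, not the magnitudes: as the number of parties grows and classification accuracy drops, the expected payoff of a tailored appeal can turn negative.

# An illustrative back-of-the-envelope model (our assumption, not drawn
# from the cited studies) of why fractionalisation raises mistargeting risk.

def expected_payoff(accuracy, gain, penalty):
    """Expected effect of one targeted message on one voter."""
    return accuracy * gain - (1 - accuracy) * penalty

# Assumed numbers: a correct appeal gains 0.05 'vote probability',
# a mistargeted one costs 0.08 (voters penalise the mistake).
for parties, accuracy in [(2, 0.85), (5, 0.65), (10, 0.45)]:
    ev = expected_payoff(accuracy, gain=0.05, penalty=0.08)
    print(f"{parties}-party race, accuracy {accuracy:.2f}: EV = {ev:+.4f}")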

Regulatory framework

We distinguish strictly regulated campaign environments, moderately regulated campaign environments and minimally regulated campaign environments (Plasser & Plasser, 2002). Strictly regulated environments are characterised by "severe restrictions on the contact and communication between candidates and their constituencies" (p. 137). Moderately regulated environments typically focus on regulating access to TV-advertising and campaign funding. Minimally regulated environments impose few regulatory restrictions on political campaigns. It may be infeasible to implement innovative PBT-techniques in strictly regulated environments. Legal uncertainty can play a role on a system level too (e.g. because of a gap in the law).

Culture

Differences in the adoption and use of innovative PBT-practices can also be influenced by the culture or tradition in a democracy. For example, turnout culture is important because campaigns operating in countries where turnout is high will focus more on convincing voters than on getting out the vote, compared with campaigns operating in a low-turnout culture. In a low-turnout culture, campaigns sometimes target specific groups of voters (e.g. the elderly, who are more likely to turn out) more than other groups (e.g. the poor, who are unlikely to turn out) (Herrnson, 2001). The turnout culture can influence the data campaigns collect on someone (and how campaigns tailor their messages), because a campaign message meant to convince someone typically leans on more data than a message meant to mobilise a voter does. Furthermore, cultural norms can dictate the strategy of political campaigns. In Japan, for example, posting dark post attack ads, such as the 'super predator' ad Trump launched against Clinton (Green & Issenberg, 2016), is improbable because of the cultural convention of avoiding direct conflict (Plasser & Plasser, 2002).

System level context is likely to affect campaign level factors. A campaign operating in a multiparty PR-system needs to pour more resources into identifying potential supporters than a campaign in an FPTP-system: identifying potential Republicans or Democrats is easier than identifying potential voters in a ten-party race. Moreover, unlike US campaigns, most European campaigns are unable to access voter registration files provided by an electoral register. In many countries citizens can simply show up at the voting booth, which means that the whole act of 'registering' to vote, as is the case in the US, does not exist. Since Hersh (2015) has found that voter lists are one of the most valuable pieces of data for US campaigns, this in-principle unavailability, or outright non-existence, of such data poses a challenge for the PBT-capabilities of campaigns. But this challenge by no means implies that a PBT-operation in Europe is impossible. We would argue that while the lack of access to voter lists makes it very difficult to achieve the same level of granularity in PBT as in the US, it can be possible to come reasonably close to the desired level by using other commercially available or self-collected data (and that desired level may be more modest for European campaign leaders than for American ones). The extent to which there is an actual difference in the degree of granularity between US and European campaigns, however, is outside the scope of this paper, as we focus on perceptions and strategies of campaign leaders.

System level context also affects infrastructure: should the groundwork be spread equally across the nation, or focused on a number of battleground states? A campaign operating in a heavily regulated context is also likely to encounter legal barriers, for example campaign financing regulations (which influence resource factors) and data protection regulations (which influence infrastructural factors). The absence of regulations, conversely, can facilitate PBT. Cultural context, finally, can influence campaigns' ethical considerations regarding PBT. Campaigns operating in a culture that favours privacy, for example, can be expected to avoid PBT-techniques (or use less invasive ones) compared with campaigns run in a culture in which privacy is less important. In sum, there are several factors, on both campaign and system level, which can form a barrier to or facilitate the extent to which campaigns are able to use PBT-techniques and how they use them.

Extending existing research to a European context, we have developed and will apply an improved model (applicable in different electoral contexts) to analyse barriers and facilitators to innovative PBT-practices by political campaigns. As the context of the research case differs from the US, we expect to contribute to the framework and to shed light on how contextual factors influence innovation of political campaigns. Furthermore, in answering our research question, we provide insight into the way political campaigns in a multiparty democracy organise, communicate and innovate. Given these considerations, our key question is: What barriers and facilitators for the adoption and use of PBT-techniques do Dutch political parties perceive?

Method

This study focuses on campaigns in the Netherlands because of the national elections taking place in the research period (15 March 2017), the country's advanced technological infrastructure (Coy, 2015), and its distinctive contextual factors. The Dutch electoral system is one of open list proportional representation (PR), in which all members of parliament come from one nationwide district (Lijphart, 2012). This means that in the Netherlands, every vote counts equally. Moreover, the system of PR (and the very low de facto threshold) enables a relatively large number of political parties to run in an election. Twenty-eight parties participated in the 2017 national election ("Partijen nemen deel", 2017). Of these parties, 13 actually gained a seat in parliament ("Officiële uitslag", 2017).

The Dutch national elections have a relatively high turnout: around 80% in the previous two elections ("Officiële uitslag", 2017). But where US presidential campaigns can spend hundreds of millions of dollars (Narayanswamy, Cameron, & Gold, 2017), the Dutch campaign with the biggest budget (VVD) has no more than 5 million dollars to spend. And even if the budgets were sufficiently large, the question is whether voter data would be usable for a political campaign. The Dutch data protection law categorises political preference as sensitive personal data, which means that campaigns are only allowed to process such information if the potential voter gives explicit permission to do so.
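
To make this legal constraint concrete: below is a minimal sketch, entirely ours and not any party's actual code, of how a campaign database could gate the storage of political preference on the explicit consent the Dutch law requires.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class VoterRecord:
    name: str
    explicit_consent: bool = False               # opt-in required by Dutch law
    political_preference: Optional[str] = None   # sensitive personal data


def store_preference(record: VoterRecord, preference: str) -> None:
    """Store a political preference only if the voter explicitly consented."""
    if not record.explicit_consent:
        raise PermissionError("No explicit consent: cannot process political preference.")
    record.political_preference = preference
```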

Finally, as party membership in the Netherlands steadily decreases, political campaigns can rely less on their members to do labour-intensive tasks (such as canvassing). In 2016, the number of party members of all political parties combined was at its lowest point since the Second World War. Although this number has since picked up slightly, party membership remains quite low ("Membership Dutch parties still low", 2017).

After approval from the ethics committee of the University of Amsterdam, we carried out eight in-depth interviews with 11 campaign leaders, belonging to eight political parties in total (three interviews were double interviews). In addition, we held two background interviews (with one local campaigner for the municipality of Amsterdam and one political consultant offering PBT-services). The eight elite interviews lasted 53 minutes on average. Two were conducted by phone, the others face-to-face. We took a qualitative research approach for several reasons: the small group of people concerned with the coordination of political campaigns in the Netherlands, the lack of knowledge on this topic in the Netherlands, and the suitability of interviewing as a method for understanding the mechanisms behind and perceptions of a phenomenon (Boeije, 2005). As we want to understand how campaigns see PBT, what they are actually doing, and how they perceive possible barriers and facilitators to the adoption and use of PBT-techniques, the interview is a suitable data-collection method. Using an interview guide (see Appendix A), we held semi-structured interviews, allowing follow-up questions.

Interviewees

We selected the interviewees via purposive sampling. Campaign leaders qualified for an interview when they had a coordinating role in the campaign and were campaigning for a party that gained at least one seat in the 2012 national parliamentary elections. Eleven campaigns satisfied this second criterion (see Table 1). We contacted interviewees via email, explaining the objective of the study. The interviewees signed an informed consent document before the interview started. We also promised the campaign leaders anonymity, and confidentiality until after election day (15 March 2017). By doing so, we tried to provide the interviewees with a safe environment in which they felt free to speak, without concern of somehow 'leaking' strategic information. Because no information would become public before election day, the risk that interviewees might provide biased information due to a strategic agenda was minimised. Another large advantage of interviewing the campaign leaders before the elections took place is the prevention of hindsight bias. Unfortunately, three parties declined to participate (VVD [right on the political spectrum], PVV [right-wing nationalist party], PvdD [Party for the Animals; left wing]), either because they considered the risk of leaking their strategy too great, or without offering an explanation.

Table 1. Interviewees

| Interviewee | Date of interview | Political party | Description |
| --- | --- | --- | --- |
| Campaign leader 1 | 01-11-2016 | PvdA | Social Democratic Party (left wing) |
| Campaign leader 2 | 01-11-2016 | PvdA | Social Democratic Party (left wing) |
| Campaign leader 3 | 02-11-2016 | D66 | Liberal Democrat Party (right of centre) |
| Campaign leader 4 | 08-11-2016 | ChristenUnie | Christian party (right of centre) |
| Campaign leader 5 | 08-11-2016 | ChristenUnie | Christian party (right of centre) |
| Campaign leader 6 | 15-11-2016 | 50PLUS | Seniors party (left of centre) |
| Campaign leader 7 | 22-11-2016 | GroenLinks | Green party (left wing) |
| Campaign leader 8 | 22-11-2016 | CDA | Christian Democrats (right of centre) |
| Campaign leader 9 | 09-01-2017 | SGP (Reformed Political Party) | Orthodox Calvinist party (right wing) |
| Campaign leader 10 | 09-01-2017 | SGP (Reformed Political Party) | Orthodox Calvinist party (right wing) |
| Campaign leader 11 | 10-01-2017 | Socialistische Partij (SP) | Socialist Party (left wing) |

Analysis

Using a grounded theory approach, this study passed through four phases: the exploration phase, the specification phase, the reduction phase, and the integration phase (Wester, 1995). In the exploration phase, two background interviews took place (with a campaigner for the municipality of Amsterdam and with a political consultant offering PBT-services). These were coded using ATLAS.ti, 'tentatively labelling' relevant information (Glaser, 1978). Thereafter, the first interviews with campaign leaders took place. These were transcribed and open-coded. Fellow researchers also coded these interviews and discussed the content (peer debriefing). In the next phase, new interviews took place and the data were subject to axial coding. The first dimensions were identified (e.g., what forms a barrier and what facilitates the use of PBT-techniques?). The reduction phase saw the emergence of the core category (innovation). In the integration phase, we completed the conceptual framework, finalised our analysis, and had the campaign leaders approve the quotes used (member checking). This means the campaign leaders agreed with the way they were quoted, and with the publication of the names of the political parties. Member checking increased the willingness of campaign leaders to cooperate with the study.

Results

We first describe the field: to what extent do campaigns use PBT-techniques? Then we explain differences between parties by focusing on the five campaign level factors concerning the use of PBT (resource factors, infrastructural factors, organisational factors, structural electoral cycle factors, and ethical and legal concerns). Finally, we zoom out to the system level and discuss the influence of contextual factors (electoral system, regulatory framework and culture) on the adoption of PBT-techniques.

PBT in Dutch campaigns

As campaigns in the Netherlands have recourse to relatively detailed public census data and detailed election results, all campaigns adopt a PBT-approach to some extent. Furthermore, Facebook is an important tool for all parties, but the parties differ in how they use Facebook's capabilities. Some parties occasionally post content targeted to broad age groups, while other campaigns frequently post content tailored to more specific groups. Two campaigns stand out, as they have developed their own PBT-tools, which they can use to continuously refine their knowledge of the electorate. We will now use our model to explain the differences between campaigns.

Resource factors

All campaigns cite financial costs as a barrier. Table 2 shows that budgets are modest, and differ between parties.

Table 2. Party budgets

| Party | Budget in 2012 national election (€) |
| --- | --- |
| VVD | 3,227,038 |
| PvdA | 2,192,641 |
| CDA | 1,619,919 |
| SP | 1,589,300 |
| D66 | 884,693 |
| GroenLinks | 873,831 |
| ChristenUnie | 393,661 |
| PvdD | 289,437 |
| SGP | 181,290 |
| 50PLUS | Not available |
| PVV | Not available |

Source: parties' annual financial reports, on file with authors.

These small budgets form a barrier to cooperation with expert political consultants (such as Blue State Digital) who could enhance their PBT-operations. Parties refer to the financial costs as the main reason not to hire consultants. Campaign leader 1 of the Social Democratic Party (PvdA) explains why he does not work with Blue State Digital (BSD):

Their system is very expensive, that's a factor. And you need the people to carry out the work for you. In an ideal world, such a cooperation would be really cool though.

Liberal democrat party D66 agrees: "because it costs a lot of money and we don't have that kind of money. And if we spend it on a consultant, we can't spend it on the campaign itself."

The same barrier appears when campaigns speak about other technological means, such as canvassing apps, allowing campaigns to directly process information from canvassers. Christian Democrats CDA, for instance, would like such an app. Campaign leader 8: "yes, but that would demand a financial investment that we can't afford." Green Party GroenLinks has a contrasting perspective: "I believe it usually costs around €100,000 to build an app such as our own. (..) We, however, paid our programmers two crates of beer and 40 pizzas." Several facilitators help GroenLinks and also socialist party SP to overcome this barrier of financial costs. First: the personal network of the campaign leader. This facilitator is especially prominent for GroenLinks, where campaign leader 7 employs his own network to optimise the BSD-systems, but also to help him with setting up other parts of the campaign:

We had to adjust it [the BSD system] somewhat for the Netherlands. The people with whom I did so, Swedish folks ... they are simply a little network of people of around my own age, and some people who are a bit older and have already set up a similar campaign in their own countries. A guy who set up the grassroots organization for Trudeau, for example, he's a couple of years older than I am, but I Skype with him to talk about how I should handle certain things.

Campaign leader 7's personal network plays (or at least played) an important role in cheaply setting up technological tools and creating content:

Through the network, I'm aware of the crowdfunding streams for a normal campaign. (..) I'm meeting a friend tomorrow, who has experience with mail flows. (..) I have a network of volunteering writers, poets, freelance journalists who write for us for free. (..) So partly, I just have a good personal network.

However, as PvdA notes, having lots of data is of no use if you don't have the capacity to use it. GroenLinks tries to overcome this barrier by organising their campaigns, to some extent, in a citizen-initiated manner (Gibson, 2015). A citizen-initiated campaign (CIC) devolves "power over core tasks to the grassroots" (p. 183). As campaign leader 7 puts it:

Grassroots is about creating an infrastructure to enable as many sympathizers as possible to volunteer as canvassers on a large scale. So voter contact on a large scale, but also – and that's Bernie's [Sanders] lesson – to have places in which a few people make stuff by themselves without us having any control over it. (..) Embracing people's creativity without managing it.

SP has less need of a citizen-initiated campaign, because of their relatively large number of active party members. "The big difference [with GroenLinks] is the fact that we already have the volunteers. Many other parties lack the numbers. We have thousands of party members who gladly canvass for us two weekdays and on Saturday as well." [Campaign leader 11, SP]

Infrastructural factors

Having a good infrastructure allows campaigns to actually collect data and send tailored messages. What kind of PBT-infrastructure can parties rely on and how does it facilitate their use of PBT?

All campaigns use the PBT-infrastructure Facebook offers, although some more than others. Nearly all campaigns use its lookalike audiences function to find new potential voters. Campaign leader 8: "we search for profiles of people who look like the ones who’ve already liked our Facebook page, and then serve them with advertisements." Campaigns also look at people who like pages that are close to the values of the political parties. Christian party ChristenUnie, for instance, tries to target voters who like the page of evangelical broadcaster EO. So does the Calvinist Political Party SGP, which tries to find out people’s interests on Facebook: "For example… farming, or Israel, off the top of my head; you try to approach people along the lines of their interest, or the region in which they reside." [Campaign leader 9]

Some campaigns also employ 'dark posts': a Facebook function that enables campaigns to target specific audiences opaquely, with messages that are not visible to untargeted Facebook users. Campaign leader 1 gives an example:

We’ve managed to get something done related to gas extraction in Groningen. It doesn’t make sense to share that on the national Facebook page, because it was only important news locally. So we put out a dark post, only for Groningen residents. Sometimes we can specify it even more.
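
The three Facebook techniques the campaign leaders describe (lookalike audiences, interest-based targeting, and dark posts) can be summarised in a short sketch. The AdsClient class below is a hypothetical wrapper written for illustration only; the real Facebook Marketing API uses different endpoints and parameters.

```python
# Schematic illustration only: AdsClient is a hypothetical wrapper, not the
# real Facebook Marketing API, whose endpoints and parameters differ.
class AdsClient:
    def create_lookalike_audience(self, seed_page, country, ratio):
        """Audience of users who resemble the seed page's existing fans."""
        return {"type": "lookalike", "seed": seed_page, "country": country, "ratio": ratio}

    def create_interest_audience(self, interests):
        """Audience of users who like pages matching the given interests."""
        return {"type": "interest", "interests": interests}

    def run_dark_post(self, audience, message, region=None):
        """Targeted post that untargeted users never see in their feeds."""
        print(f"Serving {audience['type']} audience in {region or 'NL'}: {message}")


ads = AdsClient()

# Lookalike route (campaign leader 8): people resembling existing page fans.
lookalike = ads.create_lookalike_audience("party_page", country="NL", ratio=0.01)

# Interest route (ChristenUnie, SGP): fans of broadcaster EO, farming, Israel.
by_interest = ads.create_interest_audience(["EO", "farming", "Israel"])

# Dark post route (campaign leader 1): a regional message on gas extraction,
# shown only to residents of Groningen.
ads.run_dark_post(by_interest, "Regional message on gas extraction", region="Groningen")
ads.run_dark_post(lookalike, "General campaign advertisement")
```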

Using Facebook for PBT-purposes, campaigns do not actually gather or own data themselves. There are a few campaigns that do gather their own data, by using canvassing apps. Campaign leader 7:

We use the election results per voting location and use that information to establish the GroenLinks mindedness of a neighbourhood. Then we can prioritise which addresses to visit and which to ignore. When we visit addresses, our volunteers use the app to answer the following questions: 1. Is anyone home? 2. Does she want to talk? 3. Is she going to vote? 4. Is she planning to vote for GroenLinks? 5. What is the most important theme to her? 6. How GroenLinks minded was she? If she is considering voting for GroenLinks, two questions follow: 1. Do you want to stay informed of our campaign by e-mail? 2. Can I have your phone number, so we can ask you to do canvassing talks?

The GroenLinks app facilitates large scale collection of information about people's political preferences, thereby informing strategic decisions. Also, the personal data can facilitate accurate PBT on an individual level. The secondary objective of the app is to provide an infrastructure for volunteers to campaign on their own terms, whenever they feel like doing so:

Our app, built by hackers, enables others to campaign for us. (..) Someone in [small town] Lutjebroek can install our app and go ahead and work for our campaign. No campaign leader needed. [Campaign leader 7]
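
The question flow campaign leader 7 quotes above suggests the data model such a canvassing app might capture. The sketch below is our reconstruction for illustration only, not GroenLinks' actual schema.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class CanvassVisit:
    """One doorstep conversation, mirroring the question flow quoted above."""
    address: str
    anyone_home: bool
    wants_to_talk: Optional[bool] = None
    will_vote: Optional[bool] = None
    considers_party: Optional[bool] = None  # planning to vote for the party?
    main_theme: Optional[str] = None        # most important theme to the voter
    mindedness: Optional[int] = None        # how party-minded, e.g., on a 1-5 scale
    # Follow-ups, asked only when the voter considers the party:
    email_opt_in: Optional[bool] = None
    phone_number: Optional[str] = None      # for recruiting as a canvasser


def warm_contacts(visits: List[CanvassVisit]) -> List[CanvassVisit]:
    """Sympathetic voters who agreed to stay informed; useful for follow-up."""
    return [v for v in visits if v.considers_party and v.email_opt_in]
```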

Some campaigns monitor the visitors of their own websites. Campaign leader 1: “What are people searching for on our website, how do they get to our website, how much time do they spend, (..) which button should you colour red? How does that work?” At the time of the interview, CDA was not yet tracking their website visitors, but: “we’ve just migrated to a new website, on which we want to start collecting more data on our visitors. I’m curious what kinds of people are visiting the website. And what kinds of people don’t, and therefore have to be reached through different channels.”

SP has built a system which combines previous election results, census data and their own membership Constituent Relationship Management (CRM) data. Plotted on a Google Map, these data help them identify interesting areas to canvass. This system facilitates efficient use of means:

We would do nothing more happily than knocking on every single door in every city, but unfortunately, we do not yet have that kind of manpower. So we do an analysis: What kinds of neighbourhoods are especially interesting for us? We have built our own system to help us make that decision [Campaign leader 11].
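
The combination campaign leader 11 describes can be sketched as a simple scoring function over the three sources (past results, census data, CRM membership counts). The weights and the scoring rule below are invented for illustration and are not SP's.

```python
def canvass_priority(party_vote_share, turnout, members_per_1000, census_match):
    """Toy priority score for a neighbourhood, combining the three sources
    campaign leader 11 mentions: past election results, census data, and
    the party's own CRM. All weights are illustrative, not SP's."""
    latent_support = party_vote_share * (1.0 - turnout)  # supporters staying home
    volunteer_base = min(members_per_1000 / 10.0, 1.0)   # local canvassing capacity
    return 0.6 * latent_support + 0.2 * volunteer_base + 0.2 * census_match


# Example: 30% vote share, 70% turnout, 8 members per 1,000 residents,
# and a census profile that matches the party's base fairly well (0.7).
print(round(canvass_priority(0.30, 0.70, 8.0, 0.7), 3))  # -> 0.354
```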

Organisational factors

Circumstances within the campaign’s organisation itself can form a barrier to the uptake of PBT-techniques. Less innovative parties, for instance, do not have a dedicated data, tech, or digital department. As a campaign leader notes: "The department responsible for that [tech/data/digital] is our Communication department. So that's four or five people. And sometimes someone from the department picks it up, but there's not one specific person who's responsible." This contrasts with GroenLinks, which has a Digital and Grassroots department, and with SP's Digital department.

The "state of mind" within a campaign can also be seen as a barrier: "In the sense that internally, people are still very much inclined to think offline. The culture within the campaign is quite offline." [anonymous campaign leader]2

New leadership and younger staffers can play a facilitating role in political organisations. Campaign leader 11 argues that their new party chair, being young, brings a more tech-savvy vision than his predecessor did. According to the campaign leader, younger staffers are more likely to implement tech and data in their work procedures.

A final organisational barrier is the primary goal a political party pursues. Campaign leader 10:

Maybe the strange thing about SGP is that we do not care that much about seat maximization. For us, it's about the impact of our principles. And sure, we would rather have four seats than three, but if we have to settle for three seats: that's fine too. And that's, in my opinion, a reason why we have a feeling like: do we really need data?

Structural electoral cycle factors

These circumstances are largely beyond the control of the campaigns, but they can influence the uptake of PBT-techniques. Campaign leaders see the PBT-actions of other political campaigns as a motivational factor. As campaign leader 11 notes about the development of their app: "I've looked a little bit at how GroenLinks have their app and canvassing system." Or as campaign leader 2 concludes: "If every party does it, you don't win very much by it. But if you're the only party that does nothing..."

Ethical and legal concerns

D66 and the seniors’ party 50PLUS, especially, take a principled stance against the collection of data and the use of PBT. Where D66 presents itself as a privacy champion and therefore will never gather and use information about (groups of) voters, 50PLUS campaign leader 6 warns about the risk of irresponsible use of data gathered through the "almost stalking of people", which he calls "morally irresponsible".

Furthermore, a lack of internal legal expertise appears to contribute to a feeling of legal uncertainty, which affects the likelihood of adopting PBT-techniques: "Legislation has grown so very comprehensive and complex. It's almost impossible to cope for us as a small organisation." [Campaign leader 5]

While ethical and legal concerns can form a barrier, a left- or right-wing orientation does not seem to play a role therein. After all, we have seen left-wing parties GroenLinks and SP develop relatively advanced PBT-tools, and we have seen right-of-centre party CDA express clear interest in advancing its own PBT capabilities. At the same time, left-of-centre 50PLUS and right-of-centre D66 both oppose the use of PBT.

System level

Electoral system

Although the Dutch one-district PR-system should make for a rather equal distribution of campaign efforts, campaigns still divide the country into smaller areas of interest called 'key areas'. These areas differ per party, but in each case receive a relatively large share of campaign attention. Campaign leader 1 describes these as areas "where we know the turnout is low, but the number of PvdA-voters is high". All campaigns use data provided by the Electoral Council, showing the election results per party, per voting location, to establish key areas. Campaign leader 3 explains:

Using that [the election results], you see: Okay, we do well in this neighbourhood or this street. And then you combine that information with the CBS3 data, to find out what kind of neighbourhood it is, what kind of people live there, what are their backgrounds, how much do they earn, what does the family composition look like, et cetera.

Facilitated by these public data, campaigns enrich their knowledge of specific areas. A next step would be to use those data to make personalised appeals to (subgroups of) people living in those specific key areas.
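
Campaign leader 1's definition of a key area (strong support, weak turnout) amounts to a simple filter over the per-location results the Electoral Council publishes. All numbers and thresholds below are invented for illustration; no campaign provided them.

```python
# Results per voting location, as published by the Electoral Council
# (all numbers invented for illustration).
results = [
    {"location": "Booth A", "turnout": 0.62, "party_share": 0.28},
    {"location": "Booth B", "turnout": 0.85, "party_share": 0.30},
    {"location": "Booth C", "turnout": 0.58, "party_share": 0.11},
]

# Key area per campaign leader 1: support is high but turnout is low.
# The thresholds are illustrative, not taken from any campaign.
key_areas = [r["location"] for r in results
             if r["party_share"] >= 0.25 and r["turnout"] <= 0.70]

print(key_areas)  # ['Booth A']: many sympathisers, many of them staying home
```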

Regulatory framework

Although the Netherlands would qualify as a minimally regulated environment (Plasser & Plasser, 2002; Esser & Strömbäck, 2012), campaigns all experience regulatory pressure and legal uncertainty on a system level. They cite an abundance of regulations that forms a barrier to their ability to innovate.

The technological developments have been taking place so very quickly. And, in that timeframe, to adjust all your procedures and everything. And also to meet the privacy regulations, I think many parties face a huge challenge in that respect. [Campaign leader 4]

Campaigns sometimes face a dilemma, having to decide between innovative techniques and privacy regulations. Campaign leader 11:

Regulations sometimes are unclear, which leads us to decide to go for the safe option because you do not know where the red line is. And you never want to abuse someone's personal data. So yes, regulations sometimes cause us to hit the brake and that's a good thing.

Culture

There is a recurring worry about the perceived low level of political knowledge of the average Dutch voter. PBT-techniques can facilitate campaigns' efforts to convince or educate such low-information voters, for example by "having a conversation with someone, especially if you share some characteristics" [Campaign leader 7] or by interesting "people for things that are relevant to them and to make them aware of the political dimension of those things" [Campaign leader 8]. Campaign leader 3, in contrast, concludes that the electorate's low level of political knowledge (together with its perceived volatility, the decrease in political trust, and its focus on persons instead of parties) forms an insurmountable barrier, rendering PBT-techniques irrelevant.

Discussion and conclusion

In the 2017 elections, used here as a case study, all campaigns use PBT through Facebook, but some parties are more advanced than others, and have even developed their own PBT-tools. We have established what the main barriers and facilitators for PBT are, using five factors on a campaign level and three factors on a system level. Not only does this study shed light on the conditions under which these barriers and facilitators manifest themselves, it also gives insight into their different workings across parties. Our study provides information about the data collected by parties and the PBT-techniques used to attract voters. We demonstrate how personal networks and cognitive diversity within a campaign can lower barriers. We show how PBT is perceived as useful not only for campaigns in an FPTP-system, but in a PR-system as well. And we show how regulatory pressure is perceived both as an obstacle and as a welcome 'normative red line'.

A triangulated research approach could improve our understanding of the campaign leaders' constructs. Observation of their (use of) PBT-tools and of how these tools help campaigns make strategic decisions can give more insight into the workings of these techniques. Another approach would be to interview canvassers and identify 'field-level' barriers and facilitators. Furthermore, ideally, we would have spoken to all parties holding a seat in parliament. Unfortunately, three parties did not cooperate, two of which became the largest (VVD) and second largest (PVV; in a very close field) party. Since we did have access to eight of 11 parties, we are confident about our findings, and we do not expect that interviews with the remaining parties would have revealed additional factors influencing the adoption of PBT.

Compared to related recent studies by Anstead (2017), Hersh (2015), Kreiss (2016), and Nielsen (2012), this study makes a number of contributions. In general, we focus our exploratory research on a PR-system instead of an FPTP-system, and we develop a model that takes system level contextual factors into account. Specifically, unlike Anstead (2017) we have found evidence for equalisation (which occurs when smaller parties take advantage of the internet's low costs and direct communication possibilities, and, in doing so, use the internet as a tool of empowerment [e.g. Margolis, Resnick, & Levy, 2003]). This evidence is especially clear in the case of GroenLinks, which was, at the time of the campaign, one of the smallest parties in parliament (now the fifth party). Furthermore, we offer a perspective on Anstead's question of whether "parties develop data-driven capabilities more rapidly in electoral systems with a tendency towards disproportionate outcomes" (2017, p. 23). In comparison with Hersh (2015), we focus less on how differences in data-availability lead to different strategic decisions, and more on how differences in the perception of campaign level and system level factors lead to variation in the occurrence of PBT-innovation. With regard to Kreiss (2016), we have extended his model and applied it to a multiparty democracy. In comparison with Nielsen (2012), we focus solely on the perceptions of campaign leaders and not on canvassers. Furthermore, we focus on PBT on online as well as offline platforms.

Our attention to system level factors has enabled us to identify the perceived influence of the PR-system on the adoption of PBT. Contrary to theoretical expectations (Plasser & Plasser, 2002), campaigns in a one-district PR-system do identify key areas that are campaigned in more heavily than other areas. These key areas differ from 'battleground states' in FPTP-systems in the sense that a key area does not sometimes swing one way and sometimes the other; rather, potential voters in key areas are supportive of a certain party, but not very likely to show up at the polls. Campaigns use PBT-techniques to convince these potential voters of the personal relevance of politics and to motivate them to cast their vote. Areas with stable turnout numbers and clear support for a certain party, in contrast, are perceived as less decisive and less of a priority. This leads to a hierarchy of areas, which differs per party. Also, as a PR-system typically leads to a relatively large number of parties partaking in an election, PBT can be seen as an asset that helps a campaign organise in a more efficient manner. Moreover, according to the campaign leaders, PBT-techniques offered by Facebook allow smaller parties a degree of visibility that they are unable to achieve through traditional media.

On a campaign level, in the coming years, we expect more citizen-initiated campaigning (Gibson, 2015) by campaigns low in labour resources. This requires a solid infrastructure, which opens the door for third party intermediaries offering off-the-shelf infrastructure. In this regard, it would be interesting to track the development of PvdA, which has suffered its biggest loss in history. This critical event could lead to the prototyping (Kreiss, 2016) of GroenLinks' innovative campaign by PvdA. As the party's chairman has resigned, the door is open to a more cognitively diverse party structure (Du Pre, 2017; De Vaan et al., 2015). Of course, these developments might apply less to parties that are officially more cognisant of campaign ethics (e.g. D66). This is why ethics and legal aspects are important factors to take into consideration. It would be interesting to see how these campaigns act as the PBT-capabilities of rival parties improve. Their self-imposed barrier can limit their future chances, but can also attract voters who are growing more aware of the value of privacy. In the former case, this could lead to an overhaul of their privacy principles, or perhaps to a legislative push towards the restriction of PBT (similar to Hersh, 2015). In the latter case, campaigns can be expected to develop innovative, non-privacy-invasive campaigning techniques. Either way, our model would provide the tools to study the process.

So 'can political parties campaign in Europe as they do in North America' (Bennett, 2016)? We would say 'mostly yes'. We agree with Bennett (2015) that there are important differences between the US and Europe, and indeed, they influence how PBT is used. But based on our findings, we are hesitant to conclude that those differences (severely) constrain the export of PBT-practices to European multiparty systems. We have shown that relatively small campaign budgets need not bar parties from engaging in PBT-practices (or even from cooperating with BSD, an 'expensive' American political consultancy). The same is true of the electoral system: campaign leaders generally perceive PBT-techniques as useful in a PR-system. What remains is the relatively strict Dutch data protection law, labelling political preference as 'sensitive personal data', which can only be processed with explicit consent from the potential voter. 'Explicit consent', however, sounds stricter on paper than it is in practice and is easily obtained (e.g., Beales & Muris, 2008; Calo, 2012; Joergensen, 2014). Of course, because of data regulations and/or their non-existence, European campaigns are unable to consult voting lists showing whether an individual showed up at the polls in the last elections; in most European countries, the electoral register is inaccessible to political parties. One might argue that, from a campaign's perspective, US voter data are superior to European voter data. We would argue that European data are different, but that they do not bar European campaigns from using PBT-techniques. Dutch campaigns, for instance, can (and do) rely on election results at voting booth level (a voting booth area comprises a couple of streets). They can (and do) combine these results with a multitude of detailed and accurate data about the neighbourhoods surrounding those voting booths. And then there is Facebook, facilitating easy targeting of its users with personalised messages. As PBT brings potential challenges for democracy, such as the ignoring of 'less valuable' citizens (e.g. reliable non-voters), more research into the workings and effects of PBT is needed.

References

Anderson, M., & Perrin, A. (2016, September 7). 13% of Americans don't use the internet. Who are they? Pew Research Center. Retrieved from http://www.pewresearch.org/fact-tank/2016/09/07/some-americans-dont-use-the-internet-who-are-they/

Anstead, N. (2017). Data-driven campaigning in the 2015 UK general election. The International Journal of Press/Politics, 22(3), 294–313. doi:10.1177/1940161217706163

Barocas, S. (2012). The price of precision: voter microtargeting and its potential harms to the democratic process. Proceedings of the First Edition Workshop on Politics, Elections and Data - PLEAD ’12, 31. doi:10.1145/2389661.2389671

Beales, H., & Muris, T. (2008). Choice or consequences: protecting privacy in commercial information. The University of Chicago Law Review, 75(1), 109–135.

Bennett, C. J. (2015). Trends in voter surveillance in western societies: privacy intrusions and democratic implications. Surveillance and Society, 13(3), 370–384.

Bennett, C. J. (2016). Voter databases, micro-targeting, and data protection law: can political parties campaign in Europe as they do in North America? International Data Privacy Law, 6(4), 261–275. doi:10.1093/idpl/ipw021

Bhatti, Y., Hansen, K. M., & Olsen, A. L. (2013). Political hypocrisy: The effect of political scandals on candidate evaluations. Acta Politica, 48, 408–428. doi:10.1057/ap.2013.6

Bimber, B. (2014). Digital media in the Obama campaigns of 2008 and 2012: adaptation to the personalized political communication environment. Journal of Information Technology & Politics, 11(2), 130–150. doi:10.1080/19331681.2014.895691

Bimber, B., & Davis, R. (2003). Campaigning online: The internet in U.S. elections. New York, NY: Oxford University Press.

Birch, S. (2001). Electoral systems and party systems in Europe East and West. Perspectives on European Politics and Society, 2(3), 355–377. doi:10.1080/1570585018458768

Birch, S. (2003). Two-round electoral systems and democracy. Comparative Political Studies, 36(3), 319–344. doi:10.1177/0010414002250678

Blais, A., & Indridason, I. H. (2007). Making candidates count: The logic of electoral alliances in two-round legislative elections. Journal of Politics, 69(1), 193–205. doi:10.1111/j.1468-2508.2007.00504.x

Blumler, J. G., & Kavanagh, D. (1999). The third age of political communication: influences and features. Political Communication, 16(3), 209–230. doi:10.1080/105846099198596

Blumler, J. G., Kavanagh, D., & Nossiter, T. J. (1996). Modern communications versus traditional politics in Britain: Unstable marriage of convenience. In D. L. Swanson & P. Mancini (Eds.), Politics, media, and modern democracy (pp. 49–72). Westport, CT: Praeger Publishers.

Boeije, H. (2005). Analyseren in kwalitatief onderzoek: Denken en doen [Analysing in qualitative research: Thinking and doing]. Amsterdam, Netherlands: Boom.

Calfee, J. E., & Craswell, R. (1984). Some effects of uncertainty on compliance with legal standards. Virginia Law Review, 70(5), 965–1003.

Calo, R. (2012). Against notice skepticism in privacy (and elsewhere). Notre Dame Law Review, 87(3), 1027–1072.

Conway, B. A., Kenski, K., & Wang, D. (2015). The rise of Twitter in the political campaign: Searching for intermedia Agenda-setting effects in the presidential primary. Journal of Computer-Mediated Communication, 20(4), 363–380. doi:10.1111/jcc4.12124

Coy, P. (2015). Bloomberg's 2015 ranking of the world's 50 most innovative countries. Bloomberg. Retrieved from https://www.bloomberg.com/graphics/2015-innovative-countries/

De Vaan, M., Stark, D., & Vedres, B. (2015). Game changer: the topology of creativity. American Journal of Sociology, 120(4), 1144-1194. doi:10.1086/681213

Du Pre, R. (2017, March 17). PvdA-partijvoorzitter Hans Spekman stapt later dit jaar op [PvdA party chairman Hans Spekman to step down later this year]. Volkskrant. Retrieved from http://www.volkskrant.nl/politiek/pvda-partijvoorzitter-hans-spekman-stapt-later-dit-jaar-op~a4475904/

Duverger, M. (1959). Political parties: Their organization and activity in the modern state (2nd English rev. ed.). London: Methuen & Co.

Easton, N. (2015, May 29). Meet the ex-Googler making Hillary Clinton more tech-savvy. Fortune. Retrieved from http://fortune.com/2015/05/29/stephanie-hannon-hillary-clinton/

Esser, F., & Strömbäck, J. (2012). Comparing election communication. In F. Esser & T. Hanitzsch (Eds.), Handbook of comparative communication research (pp. 289–307). New York, NY: Routledge.

Garrett, R. K., & Danziger, J. N. (2011). The internet electorate. Communications of the ACM, 54(3), 117-123. doi:10.1145/1897852.1897881

Gibson, R. K. (2015). Party change, social media and the rise of “citizen-initiated” campaigning. Party Politics, 21(2), 183–197. doi:10.1177/1354068812472575

Gibson, R. K., & McAllister, I. (2011). Do online election campaigns win votes? The 2007 Australian “YouTube” election. Political Communication, 28(2), 227–244. doi:10.1080/10584609.2011.568042

Gibson, R. K., & McAllister, I. (2015). Normalising or equalising party competition? Assessing the impact of the web on election campaigning. Political Studies, 63(3), 529–547. doi:10.1111/1467-9248.12107

Gibson, R., Römmele, A., & Williamson, A. (2014). Chasing the digital wave: international perspectives on the growth of online campaigning. Journal of Information Technology & Politics, 11(2), 123–129. doi:10.1080/19331681.2014.903064

Glaser, B. G. (1978). Theoretical sensitivity: Advances in the methodology of grounded theory. Mill Valley, CA: Sociology Press.

Gorton, W. A. (2016). Manipulating citizens: How political campaigns’ use of behavioural social science harms democracy. New Political Science, 38(1), 61–80. doi:10.1080/07393148.2015.1125119

Green, J., & Issenberg, S. (2016, September). Inside the Trump bunker, with days to go. Bloomberg. Retrieved from https://www.bloomberg.com/news/articles/2016-10-27/inside-the-trump-bunker-with-12-days-to-go

Hansen, K. M., & Kosiara-Pedersen, K. (2014). Cyber-campaigning in Denmark: application and effects of candidate campaigning. Journal of Information Technology & Politics, 11(2), 206–219. doi:10.1080/19331681.2014.895476

Hersh, E. D., & Schaffner, B. F. (2013). Targeted campaign appeals and the value of ambiguity. The Journal of Politics, 75(2), 520–534. doi:10.1017/S0022381613000182

Hersh, E. (2015). Hacking the electorate: how campaigns perceive voters. New York, NY: Cambridge University Press.

Herrnson, P. S. (2009). The roles of party organizations, party-connected committees, and party allies in elections. The Journal of Politics, 71(4), 1207–1224. doi:10.1017/S0022381609990065

Howard, P. N. (2006). New media campaigns and the managed citizen. Cambridge: Cambridge University Press.

Hynes, A. (2007). What is Deanspace? In J. Lebowsky & M. Ratcliffe (Eds.), Extreme democracy (pp. 315–323). Raleigh, NC: Lulu Press.

Jamieson, K. H. (2013). Messages, micro-targeting, and new media technologies. Forum (Germany), 11(3), 429–435. doi:10.1515/for-2013-0052

Janjigian, L. (2016, November 9). You don’t need a tech team to win an election – you need a Twitter account. Business Insider. Retrieved from https://www.businessinsider.nl/you-only-need-a-twitter-account-to-win-an-election-2016-11/

Joergensen, R. F. (2014). The unbearable lightness of user consent. Internet Policy Review: Journal of Internet Regulation, 3(4), 1–14. doi:10.14763/2014.4.330

Jungherr, A. (2015). The role of the internet in political campaigns in Germany. German Politics, 24(4), 427–434. doi:10.1080/09644008.2014.989218

Jungherr, A. (2016). Four functions of digital tools in election campaigns: the German case. The International Journal of Press/Politics, 21(3), 358–377. doi:10.1177/1940161216642597

Jungherr, A. (2017). Book review. The International Journal of Press/Politics.

Karlsen, R. (2010). Does new media technology drive election campaign change? Information Polity, 15(3), 215–225. doi:10.3233/IP-2010-0208

Karpf, D. (2012). Social science research methods in internet time. Information Communication & Society, 15(5), 639–661. doi:10.1080/1369118x.2012.665468

Kreiss, D. (2016). Prototype politics: Technology-intensive campaigning and the data of democracy. New York, NY: Oxford University Press.

Kreiss, D. (2012). Taking our country back: The crafting of networked politics from Howard Dean to Barack Obama. New York, NY: Oxford University Press.

Lapowsky, I. (2016, July 14). Clinton has a team of Silicon Valley stars. Trump has Twitter, Wired. Retrieved from: https://www.wired.com/2016/07/clinton-team-silicon-valley-stars-trump-twitter/

Lebowsky, J. (2007). Deanspace, social networks, and politics. In J. Lebowsky & M. Ratcliffe (Eds.), Extreme democracy (pp. 299–314). Raleigh, NC: Lulu Press.

Lee, B., & Campbell, V. (2016). Looking out or turning in? Organizational ramifications of online political posters on Facebook. The International Journal of Press/Politics, 21(3), 313–337. doi:10.1177/1940161216645928

Lijphart, A. (2012). Patterns of democracy: Government forms and performance in thirty-six countries. New Haven, CT: Yale University Press.

Lipsitz, K. (2004). Democratic theory and political campaigns. Journal of Political Philosophy, 12(2), 163–189. doi:10.1111/j.1467-9760.2004.00196.x

Lohr, S., & Singer, N. (2016, November 10). The data said Clinton would win. Why you shouldn’t have believed it. The New York Times. Retrieved from http://www.nytimes.com/2016/11/10/technology/the-data-said-clinton-would-win-why-you-shouldnt-have-believed-it.html

Margolis, M., & Resnick, D. (2000). Politics as usual: The cyberspace "revolution". Los Angeles, CA: SAGE.

Margolis, M., Resnick, D., & Levy, J. (2003). Major parties dominate, minor parties struggle: US elections and the internet. In R. Gibson, P. Nixon, & S. Ward (Eds.), Political parties and the internet: Net gain? (pp. 51–69). London: Routledge.

McGill, H. M., & Scola, N. (2016, August 24). Clinton quietly amasses tech policy corps. Politico. Retrieved from http://www.politico.com/story/2016/08/hillary-clinton-technology-policy-227381

McKeown, C. A., & Plowman, K. (1999). Reaching publics on the web during the 1996 presidential campaign. Journal of Public Relations Research, 11(4), 321-347. doi:10.1207/s1532754xjprr1104_03

Membership Dutch parties still low (2017, February 8). Retrieved from http://pub.dnpp.eldoc.ub.rug.nl/FILES/root/DNPPpersberichten/pers_lt2016.pdf

Metz, C. (2016, November 9). Trump’s win wasn’t the death of data – it was flawed all along. Wired. Retrieved from https://www.wired.com/2016/11/trumps-win-isnt-death-data-flawed-along/

Narayanswamy, A., Cameron, D., & Gold, M. (2017, February 1). How much money is behind each campaign. Washington Post. Retrieved from: https://www.washingtonpost.com/graphics/politics/2016-election/campaign-finance/

Nielsen, R. K. (2012). Ground wars: Personalized communication in political campaigns. Princeton, NJ: Princeton University Press.

Norris, P. (2000). A virtuous circle: Political communications in postindustrial societies. Cambridge: Cambridge University Press.

Officiële uitslag [Official results]. (2017, March 21). Retrieved from https://www.kiesraad.nl/actueel/nieuws/2017/03/20/officiele-uitslag-tweede-kamerverkiezing-15-maart-2017

Park, H. S., & Choi, S. M. (2002). Focus group interviews: The internet as a political campaign medium. Public Relations Quarterly, 47(4), 36-41.

Partijen nemen deel [Parties participating]. (2017, February 2). Retrieved from https://www.kiesraad.nl/actueel/nieuws/2017/02/03/partijen-nemen-deel-aan-tweede-kamerverkiezing-2017

Petrocik, J. R. (1996). Issue ownership in presidential elections, with a 1980 case study. American Journal of Political Science, 40(3), 825–850. doi:10.2307/2111797

Plasser, F., & Plasser, G. (2002). Global political campaigning: A worldwide analysis of campaign professionals and their practice. Westport, CT: Praeger Publishers.

Raskolnikov, A. (2017). Probabilistic Compliance. Yale Journal on Regulation, 34, 101–154.

Rubinstein, I. (2014). Voter privacy in the age of big data. Wisconsin Law Review, 861–936.

Schweitzer, E. J. (2011). Normalization 2.0: a longitudinal analysis of German online campaigns in the national elections 2002-9. European Journal of Communication, 26(4), 310–327. doi:10.1177/0267323111423378

Stanyer, J. (2005). Political parties, the internet and the 2005 general election: from web presence to e-campaigning? Journal of Marketing Management, 21(9-10), 1049–1065. doi:10.1362/026725705775194094

Strandberg, K. (2008). Online electoral competition in different settings: a comparative meta-analysis of the research on party websites and online electoral competition. Party Politics, 14(2), 223–244. doi:10.1177/1354068807085891

Tene, O. (2011). Privacy: The new generations. International Data Privacy Law, 1(1), 15–27. doi:10.1093/idpl/ipq003

Turow, J. (2011). The daily you. New Haven, CT: Yale University Press.

Vaccari, C. (2012). From echo chamber to persuasive device? Rethinking the role of the Internet in campaigns. New Media & Society, 15(1), 109–127. doi:10.1177/1461444812457336

Vergeer, M., Hermans, L., & Sams, S. (2011). Online social networks and micro-blogging in political campaigning: The exploration of a new campaign tool and a new campaign style. Party Politics, 19(3), 477–501. doi:10.1177/1354068811407580

Wang, C.-H. (2014). The effects of party fractionalization and party polarization on democracy. Party Politics, 20(5), 687–699. doi:10.1177/1354068812448691

Wester, F. (1995). Strategieën voor kwalitatief onderzoek [Strategies for qualitative research]. Bussum, Netherlands: Coutinho.

Wright, S. (2011). Politics as usual? Revolution, normalization and a new agenda for online deliberation. New Media & Society, 14(2), 244–261. doi:10.1177/1461444811410679

Zuiderveen Borgesius, F. J. (2016). Singling out people without knowing their names – Behavioural targeting, pseudonymous data, and the new Data Protection Regulation. Computer Law & Security Review, 32(2), 256–271. doi:10.1016/j.clsr.2015.12.013

Appendix A - Translated interview guide (was originally in Dutch)

[potential follow-up questions are in italic]

General introduction

Organisation

I would like to talk a bit about the way the campaign is organised.

Data use and targeting

Now, I would like to talk about the use of personal data in political campaigns. I am curious about the types of data the campaign uses to send political messages.

Democratic implications

  1. Thank you for cooperating with this study. I am quite curious about your daily professional activities. Can you tell me what your function entails?
  2. Is there a dedicated tech, data (or something similar) department in the campaign? (How autonomously does the department operate? How many people are part of that department? What kind of backgrounds do they have?)
  3. What kind of data does the campaign use? (How large is the database?)
  4. How does the campaign collect personal data? (Does the campaign use consumer data from commercial databases?)
  5. How does the campaign use its data in practice? (Does the campaign construct voter profiles based on personal data? How do those profiles come about? Does the campaign construct profiles on an individual level or on a group level? What kinds of techniques does the campaign use to analyse the data?)
  6. How do you decide who to target in the campaign? (and how do you try to reach them?)
  7. Does the campaign send tailored messages to specific voter groups? (How does this work in practice? What role do data play herein? How do you decide which message you send to whom? Does the campaign target its data-driven messages to individuals, households, or larger subgroups?)
  8. What kind of role does Facebook play in the campaign? (How do you use Facebook to reach specific voters? Do you use lookalike audiences? Dark posts? Other techniques? Other social media?)
  9. A campaign can use several campaigning instruments: from TV-advertisements, to newspaper ads or posters. In relation to other campaigning instruments: how important are data for the campaign? (And how will this be in four years, do you think?)
  10. How big is the budget for data-driven campaigning?
  11. What is needed for a good data-driven campaign?
  12. What kind of circumstances obstruct data use?
  13. What kind of circumstances enable data use?
  14. What kind of role do commercial consulting organizations such as Politieke Academie or Blue State Digital play in the campaign?
  15. To what extent do you find the present campaign advanced?
  16. What are the differences concerning data use between the present campaign and the previous national campaign?
  17. To what extent does the party exchange data-driven campaigning techniques with foreign political parties?
  18. What kind of measures does the campaign have in place to safeguard its data? (Are there guidelines for the fair use of data? What do those guidelines look like? Does the campaign train people to handle personal voter information? Are campaign staffers obliged to sign non-disclosure forms? Does the campaign share data with third parties [commercial or political]? Does the campaign inform voters about the fact that they receive personalised messages?)
  19. To what extent do the current data protection regulations influence the use of data in the campaign? (How does this work? Do laws and regulations make it more difficult for a campaign to carry out a data-driven campaign? How? To what extent are the current regulations up to date?)
  20. To what extent can the use of data improve the election results?
  21. How do you feel about a possible increase in the use of data by political campaigns in general? (And when do campaigns cross the red line to unacceptable practices?)
  22. Thank you very much for this interview. I have one last, practical, question: with whom can I seek contact when I have additional questions?

Footnotes

1. We find this term a bit ambiguous, but have decided not to alter Kreiss' terminology. The word 'electoral' here refers to the context of a specific electoral cycle.

2. During the member-check, the campaign leader stressed that the state of mind within the campaign has started to turn for the better after the 2017 campaign.

3. CBS stands for 'Statistics Netherlands', and is financed by the Dutch Ministry of Economic Affairs. It operates autonomously.

Micro-targeting, the quantified persuasion


Disclaimer: This guest essay in the Special issue on political micro-targeting has not been peer reviewed. It is treated here as a reflection.

During the past three decades there has been a persistent, and dark, narrative about political micro-targeting. Phil Howard (2006) vividly described a present and future where politicians would use data to “redline” the citizens that received political information, manufacturing attitudes and beliefs, leading to “managed citizenship”. In the years since Howard wrote his monumental book, the concerns over micro-targeting have only grown. The explosion of data about the electorate in Western democracies such as Australia, Canada, the UK, and the United States (Howard & Kreiss, 2010) has triggered deep unease among scholars and privacy advocates alike. Sophisticated voter databases now contain everything from political party data gleaned through millions of interactions with the electorate and public data obtained from state agencies to commercial marketing information that is bought and sold on international open markets. The 2016 US presidential election revealed the new ways that individuals can be profiled, identified, found, tracked, and messaged on social media platforms such as Facebook and YouTube, processes which these companies themselves help facilitate (Kreiss & McGregor, 2017).

While it might seem that the micro-targeting practices of campaigns have massive, and un-democratic, electoral effects, decades of work in political communication should give us pause. Although we lack the first-hand data from political campaigns, consultancies, and technology firms such as Facebook to know for sure, previous research tells us that people are seldom the unwitting dupes of strategic political communication. Partisanship shapes much of how people vote, and decades of research reveal that it is very hard to change people’s minds through campaigns (Kalla & Broockman, 2017; Henderson & Theodoridis, 2017). This has large implications for the effectiveness of micro-targeting. For example, Eitan Hersh’s (2015) carefully researched, ground-breaking study using data from a major vendor to the US Democratic Party finds that campaign practitioners find it very hard to persuade voters, because they lack reliable and identifiable data on cross-pressured and low-information voters. Given this, campaigns often focus on known voters rather than risk targeting and messaging the wrong people. Indeed, Hersh reveals that despite hundreds of data points on members of the electorate, it is a small cluster of publicly available data – such as turnout history, party identification, and demographic data – that matters far more for predicting vote choice.
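
Hersh's point, that a small cluster of public variables does most of the predictive work, can be illustrated with a toy model. The data below are synthetic and the coefficients arbitrary; this is not Hersh's analysis, only its general shape.

```python
# Synthetic illustration, not Hersh's analysis: a few public variables
# (party registration, turnout history, age) carry most of the signal.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
party_reg = rng.integers(0, 2, n)    # 1 = registered with the party
voted_last = rng.integers(0, 2, n)   # turnout history
age = rng.integers(18, 90, n)

# Vote choice driven mainly by registration, a bit by turnout history and age.
logit = 2.5 * party_reg + 0.8 * voted_last + 0.01 * (age - 50) - 1.5
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X = np.column_stack([party_reg, voted_last, (age - 50) / 10.0])
print(LogisticRegression().fit(X, y).coef_)  # registration dominates
```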

The lesson is that micro-targeted campaign ads are likely most effective in the short run when campaigns use them to mobilise identified supporters or partisans, spurring volunteerism, donations, and ultimately turnout – hardly the image of a managed, manipulated, or duped public (Baldwin-Philippi, 2017). Ironically, campaigns often use micro-targeting to further these forms of democratic participation, making appeals to targeted subsets of voters on the basis of the parties and issues they already care about. Campaigns also use micro-targeting in the attempt to decrease voter turnout on the opposing side, sending negative messages to the oppositions’ likely voters in the hopes this will make them less excited to turn out for their candidate. But two decades of social science suggests that this can be a risky strategy given that partisans can rally behind their candidate who is being attacked (Dunn & Tedesco, 2017).

What explains the outsized concerns about micro-targeting in the face of the generally thin evidence of its widespread and pernicious effects? This essay argues that we have anxieties about micro-targeting because we have anxieties about democracy itself. Or, to put it differently, that scholars often hold up an idealised vision of democracy as the standard upon which to judge all political communication. In a world where many scholars and journalists both hope and ardently believe, in the face of all available evidence, that members of the public are fundamentally rational, seek to be informed, and consider the general interest, micro-targeting appears to be manipulative, perverting the capacity of citizens to reason about politics. Meanwhile, for many scholars and journalists, political elites are fundamentally opposed to members of the public, seeking domination or control as opposed to representing their interests. In this world, much of the concern over micro-targeting reads as a classic “third-person effect”, where scholars and journalists presume that members of the public are more affected by campaign advertising than they themselves are.

And yet, this idealised version is not how democracy really is, nor necessarily how it should be. The argument of this brief essay is that, as a quantifiable practice premised on strategically identifying targeted groups of voters and crafting messages designed to appeal to them, micro-targeting is broadly reflective of the fact that democracy is often partisan, identity-based, and agonistic – in short, political. Following communication scholar Michael Schudson’s (1986) study of commercial advertising three decades ago, this essay asks the following questions in the US context: what work does micro-targeting do, where does it fit into the political culture, and what kind of political culture has given rise to it? I argue that micro-targeting is only imaginable, and efficacious, in a polity that prizes partisan mobilisation, group solidarity, agonism, and the clash of opposing moral views in its politics. Following from this, I suggest different democratic concerns about micro-targeting that relate to its cultural power, over time, to create a powerful set of representations of democracy that undermines the legitimacy of political representation, pluralism, and political leadership.

The cultural work of micro-targeting

To analyse the role that micro-targeting plays in politics, first we need to understand how and why citizens vote. In their recent book Democracy for Realists, political scientists Christopher Achen and Larry Bartels (2016) offer a sustained critique of what they call the “folk theory” of American democracy. According to this “folk theory” that underlies conceptions of popular sovereignty, Americans have identifiable and consistent policy preferences. During the course of an election, they inform themselves about the policy positions of candidates and make rational decisions as to which best represents their preferences, which in turn leads parties to be responsive to the wishes of the public.

As Achen and Bartels (ibid.) argue, this is a fiction. They outline a “group theory of democracy”, in which social attachments and group identification largely determine both partisanship and vote choice. Achen and Bartels argue that people see themselves in relation to the groups that they belong to and those that they do not. Identity is so strong, in this account, that it conditions not only what partisans believe parties stand for but also their interpretation of facts (ibid., 267; see also Prasad et al., 2009). As Achen and Bartels demonstrate, this identity and group theory of politics has expansive empirical support: seventy years of research demonstrates, time and again, that people have little knowledge about politics and yet detailed understandings of the social groups that the Democratic and Republican parties are perceived to represent. It is in this context that candidate performances of partisan and social identity become more important for electoral outcomes than the informational content of journalism. Events and candidates make identity more or less salient and strengthen group attachments. During campaigns, parties and candidates work to remind voters of their partisan and social attachments and to strengthen them so that voters are mobilised to participate in the election. As Achen and Bartels (ibid., 311) argue:

Political campaigns consist in large part of reminding voters of their partisan identities – “mobilizing” them to support their group at the polls. Formal communications by the groups and informal communication networks among group members also help citizens understand how their identity groups connect to the candidates and parties.

In this context, what is important about political campaigns is this work of communicating the partisan and social identities of candidates to voters. Candidates and their campaigns use micro-targeting, along with other strategic communications, to accomplish this. Micro-targeting is both a campaign practice of using data to craft and deliver strategic messages to subsets of the electorate (historically across many different media), and a genre of campaign communications that, much like political advertising more broadly, reinforces and amplifies the partisan, group, and identity conflicts at the heart of US politics. There has been extensive research on how micro-targeting works as a data-driven and quantifiable practice (see, for instance, Karpf, 2016). What these messages do as a genre of campaign communications, however, has received considerably less scrutiny. Drawing on my own previous work in the US context (Kreiss, 2016), the first argument that I develop here is that micro-targeting furthers the mobilisation that Achen and Bartels (2016) identify, primarily by reminding citizens of, and shoring up, their partisan and group identities. I then discuss the potential democratic consequences of this in a more expansive, cultural sense.

Micro-targeted ads have an aesthetic of what I call “political realism”, building on Michael Schudson’s work on commercial advertising. In Advertising, The Uneasy Persuasion, Schudson (1986) compared commercial advertising with Socialist realist art (the official state-sanctioned art of the former Soviet Union), arguing that it offers a form of “capitalist realism”. As capitalist realism, commercial advertising “simplifies and typifies” (215); advertising is abstracted, presenting the world as it should be, not as it is, and it depicts individuals as members of larger social groups. As it does so, “the aesthetic of capitalist realism — without a masterplan of purposes — glorifies the pleasures and freedoms of consumer choice in defense of the virtues of private life and material ambitions” (ibid., 218).

We can see micro-targeted digital advertising as a cultural form of ‘political realism’ that reflects, reinforces, and celebrates a political culture, at least in the United States, premised on identity, moral certainty, and mobilisation – not weighty considerations of the general interest or deliberation. Micro-targeted digital content shares a few central characteristics, which I adapt here for politics from Schudson’s (1986) work on capitalist realism:

  • It presents social and political life in simplified and typified ways;
  • It presents life as it should become, or for negative ads, as it must not become;
  • It presents reality in its larger social significance, not in its actual workings;
  • It presents progress towards the future and positive social struggle, or for negative ads, the ideas of the other party as negative steps back into the past. It carries a message of optimism for one partisan side, and takes a stance of pessimism towards political opponents; and,
  • It tells us that political conflict is necessary, a clash of different groups and worldviews; moral certainty is assured, political identity is certain, and political agonism is reality.

For example, micro-targeted ads present social life in simplified ways, not presenting actual lives but abstract, stylised ones designed to be rife with larger meaning. A depiction of a farmer’s daily work in a campaign ad, for instance, is not about actual events or daily labours; it is meant to be an abstract, simplified symbol of the American values of hard work and cultivation of the earth, and a celebration of ordinary people in a democratic society. The farmer here is typified; the campaign ad is not about a real person who farms. The farmer is a representation of the larger social categories, values, and ideas the ad presents as desirable or worthy of emulation for all Americans. At the same time, the two dominant US political parties often stress different themes in their ads, a recognition that they have different visions of what life should become, what progress is, and what worldviews and moral claims the public should embrace. In doing so, political micro-targeting is inherently pluralist. It reflects a basic claim that “everyone has interests to defend and opinions to advance about his or her own good, or the group’s good, or the public good, and every interest was at least potentially a political interest group” (Rosenblum, 2010, 259).

While it is impossible to know the full range of micro-targeted ads run during the course of an election cycle, consider some of the examples culled from the non-profit and non-partisan Democracy in Action website that chronicles US campaigns and the Hillary for America Design 2016 website that compiles the creative design from the campaign. To start, much of political micro-targeting is about building campaign databases by finding supporters online, signing them up for the cause through email, and repeatedly messaging them to enlist them in becoming a volunteer or a donor.

Take, for instance, the declarative “I am a Hillary Voter” digital ad (see Figure 1), presumably directed at the candidate’s supporters (although, without the campaign’s data, we can never know for sure). What separates micro-targeted political ads from their mass broadcast counterparts is the data that lies behind them: campaigns can explicitly try to find and send messages to their partisan audiences or intra-party supporters, linking the names in their databases to identities online or on social media platforms such as Facebook. Campaigns can also try to find additional partisans and supporters by starting with the online behaviours, lifestyles, or likes and dislikes of known audiences and then seeking out ‘look-alike audiences’, to use industry parlance. And what people do when they see these ads is quantified in terms of performance, measured through metrics such as engagement and click-throughs (a stylised sketch of this matching-and-measurement loop follows the figures below). Micro-targeting is about mobilisation through conveying and building social solidarity. While there is much concern over candidates speaking out of both sides of their mouths to the electorate through hyper-targeted digital ads, campaigns likely far more often use micro-targeting to provide occasions for social identification and group belonging, conveying and constructing the sense of shared identity and group membership at the heart of politics. The “Wish Hillary a Happy Mother’s Day” ad captures this (see Figure 2). Not only is this appeal directed at supporters (what Republican will want to wish Hillary a happy Mother’s Day, after all), it constructs a sense of what social identification with Hillary Clinton means: motherhood, family, warmth, care, and nurturing.

"I'm a Hillary Voter"
Figure 1: Hillary Clinton digital campaign advertisements
"Wish Hillary a Happy Mother's Day! – Sign the card"
Figure 2: Hillary Clinton digital campaign advertisement

Source: Hillary for America Design 2016
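
To make the matching-and-measurement loop described above concrete, the following is a minimal, purely illustrative Python sketch. The voter file, platform audience, creatives, and counts are all invented for this example; actual campaign and platform systems are proprietary and far more elaborate.

```python
# Hypothetical sketch: match a voter file to a platform audience, then
# compare ad creatives by click-through rate (CTR). All emails and counts
# are invented for illustration.
voter_file = {"ann@example.com", "ben@example.com", "cara@example.com"}
platform_users = {"ben@example.com", "cara@example.com", "dev@example.com"}

matched_audience = voter_file & platform_users  # supporters found online
print("matched:", sorted(matched_audience))

ad_performance = {  # (impressions, clicks) per creative (made up)
    "I am a Hillary Voter": (10_000, 420),
    "Wish Hillary a Happy Mother's Day": (8_000, 510),
}
for creative, (impressions, clicks) in ad_performance.items():
    print(creative, "CTR:", round(clicks / impressions, 3))
```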

Micro-targeting is also about the marking of difference. This is, perhaps, the most common trope in micro-targeted digital campaign ads. Campaigns look not only to establish the cultural meaning of their candidates and supporters, but also that of their opposition (Alexander, 2010). Donald Trump’s ads during the 2016 election reflected his rhetoric from the campaign trail in stressing themes of safety and security, in addition to the need to draw boundaries around civic incorporation (i.e., who should be allowed to be a citizen). For Hillary Clinton, micro-targeted ads were celebrations of diversity and multi-culturalism, especially the empowerment of women and racial and ethnic minorities. Political advertisements attempt to connect the candidates they promote with the demographic and social groups they seek to represent (in the United States these connections are at times drawn along racial and ethnic lines: whites for Republicans and a more diverse coalition for Democrats; see the discussion in Grossmann & Hopkins, 2016, 43-45).

In this, micro-targeting reflects and reinforces political agonism, the clash of competing social groups, interests, and values. Through micro-targeting, candidates stake out their claim to be on the civil side of the moral binary of the political sphere and strive to paint their opponents as anti-civil (Alexander, 2010). More colloquially, micro-targeted advertisements offer the beautiful affirmation of our values and the sharp critique of those of our opponents. Hillary Clinton’s campaign, for instance, clearly sought to portray Trump in terms of anti-civil racism, xenophobia, and sexism. And, the campaign used issues, such as abortion rights, and values, such as autonomy and choice, to build group identity and social solidarity around opposition to Trump: “Let’s stand together, join millions of women” (see Figure 3). This Facebook ad pits Clinton and her supporters against Trump and his supporters. Trump, in turn, combined nationalist and security appeals with an implicit construction of the American body politic in white identity terms (Figure 4). These ads capture the reality that political conflict is not only inevitable, but necessary: there are opposing views in politics on fundamental questions such as life, autonomy, and country. The audiences for these ads are not being presented with information to help them make up their own minds, they are being invited into a political struggle with clear opposing worldviews and moral values (see Figure 5). This is why mobilisation ads are directed towards identity-congruent audiences.

"Join Women for Hillary"
Figure 3: Hillary Clinton Facebook advertisement
"Immigration Reform – Build a Wall"
Figure 4: Donald Trump digital advertisement

Source: Democracy in Action

"Nope" / "Stop Trump"
Figure 5: Anti-Trump Hillary Clinton digital advertisements

Source: Hillary for America Design 2016

In these advertisements, it is also clear that micro-targeted ads present life as it should become, or as it must not become, linking the preferred candidate and political party with a civil vision of the future and the opposition with an anti-civil vision of the future, to use Alexander’s (2010) framework. As an example, for Ted Cruz (see Figure 6), the opposing side wants to infringe on the Bill of Rights, the fundamental liberty of Americans to defend their lives, liberties, families, and properties. Candidates run these issue ads to stake out their stance on the conflicting values, visions of the good life, plans for the future, and ends that are desirable in politics – whether it is embracing the freedom and security of gun rights for American Republicans or autonomy and choice in the context of reproductive rights for Democrats. These appeals are designed to mobilise the committed around the candidate’s vision of America’s past and future – they are designed for a world where we are sure of who we are and committed to our values and the ends we pursue.

"Obama wants your guns!"
Figure 6: Ted Cruz digital campaign advertisement

Source: Democracy in Action

Conclusion: democratic anxieties

I believe that there is such democratic anxiety about micro-targeting because citizens are supposed to be independent, autonomous, and rational. Micro-targeted advertising works to reinforce group identities and solidarity, mobilise partisans, and further the clash of political values. These things are all suspect from the perspective of the powerful and potent “folk theory” of democracy, as Achen and Bartels phrase it. As these realists argue, however, it is far better to grapple with the reality of group-based democracy, with its attendant ingrained social allegiances and conflicts over values and power, than to wish for a transcendent and pure form of democracy without politics. These authors argue that we need to make peace with conflictual and competitive forms of group-based and pluralistic democracy premised on institutionally organised opposition. As Achen and Bartels (2016, 318) conclude:

Freedom is to faction what air is to fire, Madison said. But ordinary citizens often dislike the conflict and bickering that comes with freedom. They wish their elected officials would just do the people’s work without so much squabbling amongst themselves. They dislike the compromises that result when many different groups are free to propose alternative policies, leaving politicians to adjust their differences. Voters want “a real leader, not a politician,” by which they generally mean that their own ideas should be adopted and other people’s opinions disregarded, because views different from their own are obviously self-interested and erroneous. To the contrary, politicians with vision who are also skilled at creative compromise are the soul of successful democracy, and they exemplify real leadership.

My own view is that micro-targeting comes in the necessary service of this “conflict and bickering”. At its normative best, micro-targeting strengthens the hands of opposing factions, enabling them to identify and mobilise partisans to their cause, providing them with resources in terms of boots on the ground and money in the coffers. When opposing politicians and parties square off, they carry these resources into battle trying to advance their agendas or win concessions for their side. Compromise may be harder in a world of stronger factions, their hands steadied by the resources that micro-targeting can deliver, but that does not make compromise any less necessary or essential.

On the other hand, there are reasons for democratic concern about micro-targeting, but they look a bit different from narratives about public manipulation. Schudson (1986, 232) concludes that “advertising does not make people believe in capitalist institutions or even in consumer values, but so long as alternative articulations of values are relatively hard to locate in the culture, capitalist realist art will have some power.” I suspect that the same is true of political micro-targeting. The cultural power of political micro-targeting, but also political advertising more generally, lies in its creation of a set of ready-to-hand representations of democracy that citizens can express easily and fall back on. Taken to its extreme in a polarised political climate, micro-targeting can work to undermine the legitimacy of conflicts over opposing values and claims in democratic life. For example, in an undemocratic political culture micro-targeting can portray the other side as crooked and dangerous to the polity, political compromise as selling out, political expertise and representation as not to be trusted, and partisans’ own beliefs and identities as the only legitimate ones, not simply those among many in a pluralistic democracy. Micro-targeting also melds symbolic and social power in new ways, culturally legitimating and furthering the fortunes of autonomous and independent candidates, divorced from their parties and taking their appeals directly to voters (see Hersh, 2017).

References

Achen, C. H., & Bartels, L. M. (2016). Democracy for realists: Why elections do not produce responsive government. Princeton University Press.

Alexander, J. C. (2010). The performance of politics: Obama's victory and the democratic struggle for power. Oxford University Press.

Baldwin-Philippi, J. (2017). The myths of data-driven campaigning. Political Communication, 34(4), 627-633. doi:10.1080/10584609.2017.1372999

Dunn, S., & Tedesco, J. C. (2017). Political Advertising in the 2016 Presidential Election. In The 2016 US Presidential Campaign (pp. 99-120). Palgrave Macmillan, Cham.

Grossmann, M., & Hopkins, D. A. (2016). Asymmetric politics: Ideological Republicans and group interest Democrats. Oxford University Press.

Henderson, J. A., & Theodoridis, A. G. (2017). Seeing Spots: Partisanship, Negativity and the Conditional Receipt of Campaign Advertisements. Political Behavior, 1-23. doi:10.1007/s11109-017-9432-6

Hersh, E. D. (2015). Hacking the electorate: How campaigns perceive voters. Cambridge University Press.

Hersh, E. D. (2017). Political Hobbyism: A Theory of Mass Behavior.

Howard, P. N. (2006). New Media Campaigns and the Managed Citizen. Cambridge University Press.

Howard, P. N., & Kreiss, D. (2010). Political Parties and Voter Privacy: Australia, Canada, the United Kingdom, and United States in Comparative Perspective. First Monday, 15(12).

Kalla, J. L., & Broockman, D. E. (2017). The Minimal Persuasive Effects of Campaign Contact in General Elections: Evidence from 49 Field Experiments. American Political Science Review, 1-19. doi:10.1017/S0003055417000363

Karpf, D. (2016). Analytic activism: Digital listening and the new political strategy. Oxford University Press.

Kreiss, D. (2016). Prototype politics: Technology-intensive campaigning and the data of democracy. Oxford University Press.

Kreiss, D., & McGregor, S. C. (2017). Technology Firms Shape Political Communication: The Work of Microsoft, Facebook, Twitter, and Google With Campaigns During the 2016 US Presidential Cycle. Political Communication, 1-23. doi:10.1080/10584609.2017.1364814

Prasad, M., Perrin, A. J., Bezila, K., Hoffman, S. G., Kindleberger, K., Manturuk, K., … Payton, A. R. (2009). The Undeserving Rich: “Moral Values” and the White Working Class. Sociological Forum, 24(2), 225–253. doi:10.1111/j.1573-7861.2009.01098.x

Rosenblum, N. L. (2010). On the side of the angels: an appreciation of parties and partisanship. Princeton University Press.

Schudson, M. (1986). Advertising, the uneasy persuasion: its dubious impact in American Society. New York: Routledge.

The role of digital marketing in political campaigns

This paper is part of 'A Manchurian candidate or just a dark horse? Towards the next generation of political micro-targeting research’, a Special issue of the Internet Policy Review.

Introduction

Political campaigns in the United States have employed digital technologies for more than a decade, developing increasingly sophisticated tools and techniques during each election cycle, as “computational politics” has become standard operating procedure (Tufekci, 2014; Kreiss, 2016). However, the most recent election marked a critical turning point, as candidates, political action committees, and other interest groups were able to take advantage of significant breakthroughs in data-driven marketing techniques, such as cross-device targeting, developed since the previous presidential election (“Bernie Sanders”, 2016; Edelman Digital, 2016). Electoral politics has now become fully integrated into a growing, global commercial digital media and marketing ecosystem that has already transformed how corporations market their products and influence consumers (Chahal, 2013; LiveRamp, 2015; Rubinstein, 2014; Schuster, 2015). The strategies, technologies, and tools of digital political marketing are more complex and far-reaching than anything we have seen before, with further innovations already underway (WARC, 2017). But because most commercial and political digital operations take place below the radar, they are not fully understood by the public.1

In the following pages, we briefly describe the growth and maturity of digital marketing, highlighting its basic features, key players, and major practices. We then document how data-driven digital marketing has moved into the centre of American political operations, along with a growing infrastructure of specialised firms, services, technologies and software systems. We identify the prevailing digital strategies, tactics, and techniques of today’s political operations, explaining how they were employed during the most recent US election cycle. Finally, we explore the implications of their use for democratic discourse and governance, discussing several recent policy developments aimed at increasing transparency and accountability in digital politics.

Our research for this paper draws from our extensive experience tracking the growth of digital marketing over the past two decades in the United States and abroad, monitoring and analysing key technological developments, major trends, practices and players, and assessing the impact of these systems in areas such as health, financial services, retail, and youth (Chester, 2007; Montgomery, 2007; Montgomery, Chester, & Kopp, 2017). During the 2016 US presidential election, we monitored commercial digital advertising and data use by candidates, parties and special interest groups across the political spectrum. We collected examples of these ads, along with technical and market impact information from the developers of the applications. We also reviewed trade journals, research reports, and other industry documents, and attended conferences that were focused on digital technologies and politics. In the process, we identified all of the major providers of political digital data targeting applications (e.g., Google, Facebook, data clouds, ad agencies) and analysed all their key materials and case studies related to their 2016 operations. The source for much of this work was our ongoing gathering and analysis of cross-sectional commercial digital marketing practices worldwide.

Marriage of politics and commerce

Since the mid-20th century, advertising has been an increasingly powerful and pervasive presence in US political campaigns, as a growing cadre of ad agencies, public relations firms, and consultants perfected the use of opinion polls, focus groups, and psychographics to reach and influence voters through radio, television, direct mail, and other media outlets (A. Jamieson, 2016; K. H. Jamieson, 1996; Sabato, 1981). With the rise of the internet, campaign operatives began to harness digital technologies and tools to mobilise voters, engage young people, raise money, and support grassroots ground operations (Karpf, 2016; Kreiss, 2016; Tufekci, 2014). Both major political parties in the United States developed large, sophisticated data and digital operations (Kreiss, 2016).

Many of the digital strategies, tools, and techniques employed in the 2016 election were initially developed, deployed, tested, and refined by the commercial sector (Tufekci, 2014). Since its origins in the mid-1990s, digital marketing has operated with a core business model that relies on continuous data collection and monitoring of individual online behaviour patterns (Montgomery, 2011). This system emerged in the United States amid a political culture of minimal government interference, and within a prevailing laissez-faire ethos regarding the internet and new technologies (Barlow, 1996). In the earliest days of the “dot-com boom”, a strong political alliance was forged between the digital media companies and their partners in the advertising and media business, enabling the nascent industry to effectively ward off any attempts to restrain its business operations through privacy regulation or other public policies (Solon & Siddiqui, 2017). As a consequence, the advertising industry played a central role in shaping the operations of platforms and applications in the digital media ecosystem. Digital marketing is now well established and thriving, with expenditures reaching nearly $72.5bn in 2016 for the US alone, and worldwide spending predicted to reach more than $223bn this year (eMarketer, 2017; IAB, n.d.-d).

Ongoing innovations over the years have increased the capacity of data and digital marketing applications. Data collection, analysis, and targeting were further woven into the daily lives of consumers with the rise of social media platforms and mobile devices. Because of the unique role that they play in users’ lives, these platforms are able to sweep up enormous amounts of information, including not only what users post about themselves, but also what is collected from them throughout their daily activities (Smith, 2014). A growing arsenal of software and analytic tools has enhanced the ability of digital media companies and their advertisers to glean valuable insights from the oceans of data they generate (Smith, 2014). Predictive analytics introduced an expanded set of tools for scoring, rating, and categorising individuals, based on an increasingly granular set of behavioural, demographic, and psychographic data (“What is Predictive Intelligence”, 2017). US digital marketers have helped popularise and spur the successful adoption of digital advertising platforms and applications in nearly every geographical location with an internet connection or a link to a mobile device (IAB, n.d.-c). Google, Facebook, and other major players in the digital marketing industry have also developed a global research infrastructure to allow them, and especially their major advertising clients, to make continuous improvements in reaching and influencing the public, and to measure with increasing accuracy the success of their efforts (Facebook IQ, n.d.-a). These developments have created what some observers have called the “surveillance economy” (Singer, 2012).

The growth of data-driven political marketing

Though political campaigns have employed micro-targeting techniques—which use an array of personalised and other data sets and marketing applications to influence the actions of individuals—during the last several election cycles, recent technological innovations and industry advances have created a much more robust system than what was in place in 2012 (IAB, n.d.-b; Rubinstein, 2014). For years, political campaigns have been able to combine public voter files with commercial information from data brokers to develop detailed and comprehensive dossiers on American voters (Rubinstein, 2014). With recent advances in the advertising technology and data industries, they can now take advantage of a growing infrastructure of specialty firms offering more extensive resources for data mining and voter targeting. Among the new entities are data marketing clouds. Developed by well-known companies such as Adobe, Oracle, Salesforce, Nielsen, and IBM, these clouds sell political data along with exhaustive, detailed consumer information on each potential target, including, for example, credit card use, personal interests, consumption patterns, and TV viewing habits (Salesforce DMP, 2017).

Some of these massive cloud services also operate what has become a new and essential component for contemporary digital targeting—the data management platform (DMP) (Chavez, 2017). DMPs provide marketers with “centralized control of all of their audience and campaign data” (BlueKai, 2011). They do this by collecting and analysing data about individuals from a wide variety of online and offline sources, including first-party data from a customer’s own record, such as the use of a supermarket loyalty card, or their activities captured on a website, mobile phone, or wearable device; second-party data, information collected about a person by another company, such as an online publisher, and sold to others; and third-party data drawn from thousands of sources, comprising demographic, financial, and other data-broker information, including race, ethnicity, and presence of children (O’Hara, 2016). All of this information can be matched to create highly granular “target audience segments” and to identify and “activate” individuals “across third party ad networks and exchanges”. DMPs are quickly becoming a critical tool for political campaigns (Bennett, 2016; Kaye, 2016, July; Regan, J., 2016).
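
To illustrate the joining-and-segmenting logic a DMP performs, here is a minimal, hypothetical Python sketch. It is not any vendor’s actual API; the field names, identifier, and segment rule are all invented, and real platforms operate at vastly larger scale with probabilistic matching.

```python
# Illustrative DMP-style merge: join first-, second-, and third-party
# records on a shared identifier, then emit audience segments for
# "activation". All data and rules below are hypothetical.

first_party = {  # e.g., a campaign's own CRM records
    "u123": {"email_hash": "ab12", "donated": True},
}
second_party = {  # e.g., data bought from an online publisher
    "u123": {"read_politics_section": True},
}
third_party = {  # e.g., data-broker demographics
    "u123": {"age_band": "35-44", "household_income": "high"},
}

def build_profile(uid: str) -> dict:
    """Merge whatever each source knows about one identifier."""
    profile = {"uid": uid}
    for source in (first_party, second_party, third_party):
        profile.update(source.get(uid, {}))
    return profile

def segment(profile: dict) -> list[str]:
    """Assign granular 'target audience segments' from merged attributes."""
    segments = []
    if profile.get("donated") and profile.get("read_politics_section"):
        segments.append("engaged-donor-news-reader")
    if profile.get("household_income") == "high":
        segments.append("high-income")
    return segments

profile = build_profile("u123")
print(profile, "->", segment(profile))
```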

Facebook and Google now play a central role in political operations, offering a full spectrum of commercial digital marketing tools and techniques, along with specialised ad “products” designed for political use (Bond, 2017). Not surprisingly, these companies have also made generating revenues from political campaigns an important “vertical” category within their ad business (Facebook, n.d.-d; Facebook IQ, n.d.-b; Stanford, 2016). Facebook’s role in the 2016 election was particularly important. With users required to give their real names when they sign up as members, Facebook has created a powerful “identity-based” targeting paradigm, enabling political campaigns to access its more than 162 million US users and to target them individually by age, gender, congressional district, and interests (Facebook, n.d.-b). Its online guide for political campaign marketing urges campaigns to use all the tools the platform makes available to advertisers—including through Instagram and other properties—in order to track individuals, capture their data through various “lead-generation” tactics, and target them by uploading voter files and other data (Facebook, n.d.-a-c-f). The company also employs teams of internal staff aligned with each of the major political parties to provide technical assistance and other services to candidates and their campaigns (Chester, 2017; Kreiss & McGregor, 2017). Google heavily promoted the use of YouTube, as well as its other digital marketing assets, during the 2016 US election, reaching out to both major political parties (YouTube, 2017).

The growth and increasing sophistication of the digital marketplace has enhanced the capacities of political campaigns to identify, reach, and interact with individual voters. Below we identify seven key techniques that are emblematic of this new digital political marketing system, providing brief illustrations of how they were employed during the 2016 election.

Cross-device targeting

Getting a complete picture of a person’s persistent “identity” through an “identity-graph” has become a key strategy for successfully reaching consumers across their “omnichannel” experience (use of mobile, TV, streaming devices, etc.) (Winterberry Group, 2016). “Cross-device recognition” allows marketers to determine if the same person who is on a social network is also using a personal computer and later watching video on a mobile phone. Through data “onboarding,” a customer record that may contain a physical and email address is linked through various matching processes, associating it with what is believed to be that individual’s online identification—cookies, IP addresses, and other persistent identifiers (Levine, 2016). Cross-device targeting is now a standard procedure for political initiatives and other campaigns. Voter files are uploaded into the onboarding process, enabling the campaigns to find their targets on mobile devices and at specific times when they may be more receptive to a message (Kaye, 2016, April; L2, n.d.-b). Such granularity of information also enables a more tailored advertisement—so-called “dynamic creative”—which can be changed over time to “deliver very specific messaging” to individuals (Schuster, 2015). Leading cross-device marketing company Drawbridge offered a suite of election services in 2016 that provided campaigns a number of ways to impact voters, including through “Voter-Centric Cross Device Storytelling”, “Political Influencer Identification”, and via “Real-Time Voter Attribution Measurement” (Drawbridge, n.d.).
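
The hashed-identifier matching at the heart of onboarding can be sketched in a few lines. This is a simplified illustration under the assumption of deterministic email matching (real systems also use probabilistic device graphs); the record, emails, and identifiers are invented.

```python
# Illustrative data "onboarding": link an offline voter/customer record to
# online identifiers by comparing hashed email addresses. The record layout
# and identifiers here are hypothetical.
import hashlib

def h(email: str) -> str:
    """Normalise and hash an email, as onboarding services commonly do."""
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

offline_record = {"name": "J. Doe", "email": "j.doe@example.com"}

# Identifiers a platform might hold, keyed by hashed email (made up).
online_ids = {
    h("j.doe@example.com"): ["cookie:9f3a", "mobile_ad_id:77c1"],
}

matched = online_ids.get(h(offline_record["email"]), [])
print("linked identifiers:", matched)  # the same person across devices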

Programmatic advertising

Programmatic advertising refers to new automated forms of ad buying and placement on digital media, using computer programmes and algorithmic processes to find and target a customer wherever she goes. The process can also involve real-time “auctions” that occur in milliseconds in order to “show an ad to a specific customer, in a specific context” (Allen, 2016). The use of programmatic advertising was one of the major changes in political campaign digital operations between 2012 and 2016—“the first time in American History”, according to one ad company, “that such precise targeting has ever been made available at such great scale” (Briscoe, 2017; Kaye, 2015). Programmatic advertising has itself grown in its capability to reach individuals, taking advantage of new sources of data to reach them on all of their devices (Regan, T., 2016). In 2016, for example, global ad giant WPP’s Xaxis system—“the world’s largest programmatic and technology platform”—launched “Xaxis Politics”. Capable of “reaching US voters across all digital channels”, the system is said to “segment audiences by hundreds of hot button issues as well as by party affiliation”, including via “real-time campaigns tied to specific real-world events” (Xaxis, 2015). Candidates were able to use the services of a growing list of companies, including Google, Rubicon, AOL, PubMatic, AppNexus and Criteo, that offered programmatic advertising platforms (“Political Campaigns”, 2016; Yatrakis, 2016).
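
To make the auction mechanics concrete, here is a toy sketch of a second-price auction, one common real-time bidding design. The bidders, bids, and floor price are invented; real exchanges evaluate many bids from demand-side platforms within milliseconds for every single impression.

```python
# Toy second-price auction of the kind RTB exchanges run per impression.
# Bidder names, bids, and the floor price are made up for illustration.
def run_auction(bids: dict[str, float], floor: float = 0.10):
    """Return (winner, clearing_price) under second-price rules."""
    eligible = {b: v for b, v in bids.items() if v >= floor}
    if not eligible:
        return None, 0.0
    ranked = sorted(eligible.items(), key=lambda kv: kv[1], reverse=True)
    winner, _top_bid = ranked[0]
    second = ranked[1][1] if len(ranked) > 1 else floor
    return winner, second  # the winner pays the runner-up's bid

winner, price = run_auction({"campaign_a": 2.40, "campaign_b": 1.95})
print(winner, "wins and pays", price)
```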

Lookalike modelling

The use of big data analytics enables marketers to acquire information about an individual without directly observing behaviour or obtaining consent. They do this by “cloning” their “most valuable customers” in order to identify and target other prospective individuals for marketing purposes (LiveRamp, 2015). For example, Stirista (n.d.), a digital marketing firm that also serves the political world, offers lookalike modelling to identify people who are potential supporters and voters. The company claims it has matched 155 million voters to their “email addresses, online cookies, and social handles”, as well as “culture, religion, interests, political positions and hundreds of other data points to create rich, detailed voter profiles”. Facebook offers a range of lookalike modelling tools through its “Lookalike Audiences” ad platform. For example, Brad Parscale, the Trump campaign’s digital director, used the Lookalike Audiences tool to “expand” the number of people the campaign could target (Green & Issenberg, 2016). Facebook’s “Custom Audiences” product, similarly, enables marketers to upload their own data files so that they can be matched to Facebook users, who can then be targeted (Facebook, n.d.-e).
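
One simple way to implement the “cloning” idea is to fit a classifier on known supporters versus a background sample, then score unseen prospects by how supporter-like they look. The sketch below uses synthetic data and scikit-learn; the features and threshold are invented, and commercial systems use far richer attributes (the “hundreds of data points” above).

```python
# Minimal lookalike-modelling sketch on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
seeds = rng.normal(loc=1.0, size=(200, 5))       # known supporters
background = rng.normal(loc=0.0, size=(200, 5))  # everyone else
X = np.vstack([seeds, background])
y = np.array([1] * 200 + [0] * 200)

model = LogisticRegression(max_iter=1000).fit(X, y)

prospects = rng.normal(loc=0.5, size=(5, 5))     # new, unlabelled people
scores = model.predict_proba(prospects)[:, 1]    # "looks like a supporter"
print(np.round(scores, 2))  # top-scoring prospects become the target list
```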

Geolocation targeting

Mobile devices continually send signals that enable advertisers (and others) to take advantage of an individual’s location—through the phone’s GPS (global positioning system), Wi-Fi, and Bluetooth communications. All of this can be done with increasing speed and efficiency. Through a host of new location-targeting technologies, consumers can now be identified and targeted wherever they go—while driving a car, pulling into a mall, or shopping in a store (Son, Kim, & Shmatikov, 2016). A complex and growing infrastructure of geolocation-based data-marketing services has emerged, with specialised mobile data firms, machine-learning technologies, measurement companies, and new technical standards to facilitate on-the-go targeting (Warrington, 2015). Mobile geo-targeting techniques played a central role in the 2016 election cycle, with a growing number of specialists offering their services to campaign operatives. For example, L2 (n.d.-a) made its voter file, along with HaystaqDNA modelling data, available for mobile device targeting, offering granular profile data on voters based on their interest in such contested topics as gun laws, gay marriage, voter fraud, and school choice, among others. Advance Publications, the parent company of Conde Nast, worked with campaigns on election advertising to append geo-location, profile data, and buying behaviour “to sculpt a very specific voter profile and target down to few hundred readers in a given geo location” (Ellwanger, 2016).
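
A basic building block of such systems is the geofence check: is a device’s reported position within some radius of a point of interest? The sketch below computes great-circle distance with the haversine formula; the venue, coordinates, and radius are invented for illustration.

```python
# Illustrative geofence check of the sort location-targeting systems run.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * asin(sqrt(a))

rally_venue = (40.7580, -73.9855)   # hypothetical point of interest
device_ping = (40.7614, -73.9776)   # GPS/Wi-Fi derived position (made up)

within_fence = haversine_km(*rally_venue, *device_ping) <= 1.0
print("serve location-targeted ad:", within_fence)
```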

Online video advertising

Digital video, via mobile and other devices, is perceived as a highly effective way of delivering emotional content on behalf of brands and marketing campaigns (IAB, n.d.-a). There are a variety of online video ad formats that provide both short- and long-form content, and that work well for political and other marketing efforts. Progressive political campaign company Revolution Messaging, which worked for the Sanders campaign, developed what it calls “smart cookies”, which it says take video and other ad placement “to the next level, delivering precision and accuracy” (Revolution Messaging, n.d.). Google’s YouTube has become a key platform for political ads, with the company claiming that today, voters make their political decisions not in “living rooms” in front of a television but in what it calls “micromoments” as people watch mobile video (DoubleClick, n.d.). According to the company’s political ad services research, mobile devices were used in nearly 60 percent of election-related searches during 2016. Content producers (which it calls “Creators”) on YouTube were able to seize on these election micro-moments to influence the political opinions of potential voters aged 18-49 (“Letter from the Guest Editors,” 2016).

Targeted TV advertising

Television advertising, which remains a linchpin of political campaign strategy, is undergoing a major transformation, as digital technologies and “addressable” set-top boxes have turned cable and broadcast TV into powerful micro-targeting machines, capable of delivering the same kinds of granular, personalised advertising messages to individual voters that have become the hallmark of online marketing. Political campaigns are at the forefront of using set-top box “second-to-second viewing data”, amplified with other sources, such as “demographic and cross-platform data from a multitude of sources” obtained via information brokers, to deliver more precise ads (Fourthwall Media, n.d.; Leahey, 2016; NCC Media, n.d.). NCC Media, the US cable TV ad platform owned by Comcast, Cox, and Spectrum, provided campaigns the ability to target potential voters by integrating its set-top box viewing information with voter and other data from Experian and others (Miller, 2017). Deals between TV viewing-data companies and organisations representing both Republican- and Democratic-leaning groups brought the “targeting capabilities of online advertising to TV ad buys…bringing what was once accessible only to large state-wide or national campaigns to smaller, down-ballot candidates”, explained Advertising Age (Delgado, 2016).

Psychographic, neuromarketing, and emotion-based targeting

Psychographics, mood measurement, and emotional testing have been used by advertisers for many decades, and have also been a core strategy in political campaign advertising (Key, 1974; Packard, 2007; Schiller, 1975). The digital advertising industry has developed these tools even further, taking advantage of advances in neuroscience, cognitive computing, data analytics, behavioural tracking, and other recent developments (Crupi, 2015). Granular messages that trigger a range of emotional and subconscious responses, to better “engage” with individuals and deepen relationships with commercial brands, have become part of the DNA of digital advertising (McEleny, 2016). Facebook (2015), Nielsen, and most leading brands use “neuromarketing” services worldwide, which utilise neuroscience tools to determine the emotional impact of advertising messages. There is also a growing field, recently promoted by Google, of “Emotion Analytics” that takes advantage of “new types of data and new tracking methods” to help advertisers “understand the impact of campaigns—and their individual assets—on an emotional level…” (Kelshaw, 2017). Scholars have found that the use of “psychological targeting” in advertising makes it possible to influence large groups of people by “tailoring persuasive appeals to the psychological needs” of specific audiences (Matz et al., 2017). Experian Marketing Services offered political campaigns data that wove together “demographic, psychographic and attitudinal attributes” to target voters digitally. Experian claims its data enables campaigns to examine a target’s “heart and mind” via attributes related to their “political persona” as well as “attitudes, expectations, behaviours, lifestyles, purchase habits and media preferences” (Experian, 2011, 2015).

One of the most well-publicised and controversial players in the 2016 election was Cambridge Analytica (CA), a prominent data analytics and behavioural communications firm that claimed to be a key component in Donald Trump’s victorious campaign. The company used a “five-factor personality model” aimed at determining “the personality of every single adult in the United States of America” (Albright, 2016; Kranish, 2016). Known as OCEAN, the model rated individuals on five key traits: openness, conscientiousness, extroversion, agreeableness, and neuroticism. Drawing from digital data, voter history, and marketing resources supplied by leading companies, including Acxiom, Experian, Nielsen, GOP firm Data Trust, Aristotle, L2, Infogroup, and Facebook, CA was able to develop an “internal database with thousands of data points per person”. Its research also identified key segments that were considered “persuadable”, and shaped the advertising content placed “across multiple digital channels” (with the most effective ads also appearing on television) (Advertising Research Foundation, 2017; Nix, 2016). The strategy was based on developing messages that were tailored to the vulnerabilities of individual voters (Nix, 2016; Schwartz, 2017). CA has become the subject of much scrutiny and debate, and has itself made conflicting claims, with critics raising concerns over its techniques and expressing scepticism about the extent of its impact (Confessore & Hakim, 2017; Karpf, 2017). However, the company’s work was sufficiently convincing to the leading advertising industry research organisation, the Advertising Research Foundation (2017, March), that it honoured the firm with a “Gold” award in 2017 under its “Big Data” category.
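
As a schematic illustration of five-factor scoring, consider the toy sketch below. The five traits are the real OCEAN dimensions, but the people, scores, and the “persuadability” rule are entirely invented; nothing here reproduces CA’s proprietary (and disputed) methods.

```python
# Toy OCEAN scoring: each person gets a score per trait, and a hypothetical
# rule flags a "persuadable" segment for tailored messaging.
TRAITS = ["openness", "conscientiousness", "extroversion",
          "agreeableness", "neuroticism"]

voters = {
    "voter_1": dict(zip(TRAITS, [0.8, 0.4, 0.3, 0.6, 0.7])),
    "voter_2": dict(zip(TRAITS, [0.2, 0.9, 0.7, 0.5, 0.1])),
}

def persuadable(profile: dict) -> bool:
    # Invented rule: high neuroticism plus high openness is taken to
    # suggest receptivity to fear- or novelty-framed appeals.
    return profile["neuroticism"] > 0.6 and profile["openness"] > 0.6

targets = [v for v, p in voters.items() if persuadable(p)]
print("persuadable segment:", targets)
```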

Discussion

The above description provides only a brief overview of the data-driven marketing system that is already widely in use by candidate and issue campaigns in the United States. The increasingly central role of commercial digital marketing in contemporary political campaigns is reshaping modern-day politics in fundamental ways, altering relationships among candidates, parties, voters, and the media. We acknowledge that digital technologies have made important positive contributions to the vibrancy of the political sphere, including greatly expanding sources of news and information, significantly increasing opportunities for citizen participation, and empowering people from diverse backgrounds to form coalitions and influence policy. The same tools developed for digital marketing have also helped political campaigns substantially improve voter engagement, enhance their capacities for “small-donor” fundraising, and more efficiently generate turnout (Moonshadow Mobile, n.d.; Owen, 2017). However, many of the techniques we address in this paper raise serious concerns—over privacy, discrimination, manipulation, and lack of transparency.

Several recent controversies over the 2016 election have triggered greater public scrutiny over some of the practices that have become standard operating procedure in the digital media and marketing ecosystem. For example, “fake news” has a direct relationship to programmatic advertising, the automated system of “intelligent” buying and selling of individuals and groups (Weissbrot, 2016). These impersonal algorithmic machines are focused primarily on finding and targeting individual consumers wherever they are, often with little regard for the content where the ads may appear (Maheshwari & Isaac, 2016). As a consequence, in the middle of the 2016 election, many companies found themselves with ads placed on “sites featuring pornography, pirated content, fake news, videos supporting terrorists, or outlets whose traffic is artificially generated by computer programs”, noted the Wall Street Journal (Nicas, 2016; Vranica, 2017). As a major US publisher explained in the trade publication Advertising Age,

Programmatic’s golden promise was allowing advertisers to efficiently buy targeted, quality, ad placements at the best price, and publishers to sell available space to the highest bidders…. What was supposed to be a tech-driven quality guarantee became, in some instances, a “race to the bottom” to make as much money as possible across a complex daisy chain of partners. With billions of impressions bought and sold every month, it is impossible to keep track of where ads appear, so “fake news” sites proliferated. Shady publishers can put up new sites every day, so even if an exchange or bidding platform identifies one site as suspect, another can spring up (Clark, 2017).

Criticism from news organisations and civil society groups, along with a major backlash by leading global advertisers, led to several initiatives to place safeguards on these practices (McDermott, 2017; Minsker, 2017). For example, in an effort to ensure “brand safety”, leading global advertisers and trade associations demanded changes in how Google, Facebook and others conduct their data and advertising technology operations. As a consequence, new measures have been introduced to enable companies to more closely monitor and control where their ads are placed (Association of National Advertisers, 2017; Benes, 2017; IPA, 2017; Johnson, 2017; Liyakasa, 2017; Marshall, 2017; Timmers, 2015).

The Trump campaign relied heavily on Facebook’s digital marketing system to identify specific voters who were not supporters of Trump in the first place, and to target them with psychographic messaging designed to discourage them from voting (Green & Issenberg, 2016). Campaign operatives openly referred to such efforts as “voter suppression” aimed at three targeted groups: “idealistic white liberals, young women and African Americans”. The operations used standard Facebook advertising tools, including “custom audiences” and so-called “dark posts”—“nonpublic paid posts shown only to the Facebook users that Trump chose” with personalised negative messages (Green & Issenberg, 2016). Such tactics also took advantage of commonplace digital practices that target individual consumers based on factors such as race, ethnicity, and socio-economic status (Google, 2017; Martinez, 2016; Nielsen, 2016). Civil rights groups have had some success in getting companies to change their practices. However, for the most part, the digital marketing industry has not been held sufficiently accountable for its use of race and ethnicity in data-marketing products, and there is a need for much broader, industry-wide policies.

Conclusion

Contemporary digital marketing practices have raised serious issues about consumer privacy over the years (Schwartz & Solove, 2011; Solove & Hartzog, 2014). When applied to the political arena, where political information about individuals is only one of thousands of highly sensitive data points collected and analysed by the modern machinery of data analytics and targeting, the risks are even greater. Yet, in the United States, very little has been done in terms of public policy to provide any significant protections. In contrast to the European Union, where privacy is encoded in law as a fundamental right, privacy regulation in the US is much weaker (Bennett, 1997; Solove & Hartzog, 2014; U.S. Senate Committee on Commerce, Science, and Transportation, 2013). The US is one of the only developed countries without a general privacy law. As a consequence, except in specific areas, such as children’s privacy, consumers in the US enjoy no significant data protection in the commercial marketplace. In the political arena, there is even less protection for US citizens. As legal scholar Ira S. Rubinstein (2014) explains, “the collection, use and transfer of voter data face almost no regulation”. The First Amendment plays a crucial role in this regard, allowing the use of political data as a protected form of speech (Persily, 2016).

The political fallout over how Russian operatives used Facebook, Twitter, and other sites in the 2016 presidential campaign has triggered unprecedented focus on the data and marketing operations of these and other powerful digital media companies. Lawmakers, civil society, and many in the press are calling for new laws and regulations to ensure transparency and accountability for online political ads (“McCain, Klobuchar & Warner Introduce Legislation”, 2017). The U.S. Federal Election Commission, which regulates political advertising, has asked for public comments on whether it should develop new disclosure rules for online ads (Glaser, 2017). In an effort to head off regulation, both Facebook and Twitter have announced their own internal policy initiatives designed to provide the public with more information, including which organisations or individuals paid for political ads and who the intended targets were. These companies have also promised to establish publicly accessible archives of political advertising (Falck, 2017; Goldman, 2017; Koltun, 2017). The US online advertising industry trade association is urging Congress not to legislate in this area, but to allow the industry to develop new self-regulatory regimes in order to police itself (IAB, 2017). However, relying on self-regulation is not likely to address the problems raised by these practices and may, in fact, compound them. Industry self-regulatory guidelines are typically written in ways that do not challenge many of the prevailing (and problematic) business practices employed by their own members. Nor do they provide meaningful or effective accountability mechanisms (Center for Digital Democracy, 2013; Gellman & Dixon, 2011; Hoofnagle, 2005). It remains to be seen what the outcome of the current policy debate over digital politics will be, and whether any meaningful safeguards emerge from it.

While any regulation of political speech must meet the legal challenges posed by the First Amendment, limiting how commercial data can be mined and used in the first place can serve as a critically important new electoral safeguard. Advocacy groups should call for consumer privacy legislation in the US that would place limits on what data can be gathered by the commercial online advertising industry, and on how that information can be used. Americans currently have no way to decide for themselves (for example, via an opt-in) whether data collected on their finances, health, geolocation, race, or ethnicity can be used for digital ad profiling. Certain online advertising practices, such as the use of psychographics and lookalike modelling, also call for rules to ensure they are used fairly.

Without effective interventions, the campaign strategies and practices we have documented in this paper will become increasingly sophisticated in coming elections, most likely with little oversight, transparency, or public accountability. The digital media and marketing industry will continue its research and development efforts, with an intense focus on harnessing the capabilities of new technologies, such as artificial intelligence, virtual reality, and cognitive computing, for advertising purposes. Advertising agencies are already applying some of these advances to the political field (Facebook, 2016; Google, n.d.-a; Havas Cognitive, n.d.). Academic scholars and civil society organisations will need to keep a close watch on all these developments, in order to understand fully how these digital practices operate as a system, and how they are influencing the political process. Only through effective public policies and enforceable best practices can we ensure that digital technology enhances democratic institutions, without undermining their fundamental goals.

References

Advertising Research Foundation. (2017, March 21). Cambridge Analytica receives top honor in the 2017 ARF David Ogilvy Awards. Retrieved from http://www.prnewswire.com/news-releases/cambridge-analytica-receives-top-honor-in-the-2017-arf-david-ogilvy-awards-300426997.html

Advertising Research Foundation. (2017). Cambridge Analytica: Make America number one. Case study. Retrieved from https://thearf.org/2017-arf-david-ogilvy-awards/winners/

Albright, J. (2016, November 11). What’s missing from the Trump election equation? Let’s start with military-grade psyops. Medium. Retrieved from https://medium.com/@d1gi/whats-missing-from-the-trump-election-equation-let-s-start-with-military-grade-psyops-fa22090c8c17

Allen, R. (2016, February 8). What is programmatic marketing? Smart Insights. Retrieved from http://www.smartinsights.com/internet-advertising/internet-advertising-targeting/what-is-programmatic-marketing/

Association of National Advertisers. (2017, March 24). Statement from ANA CEO on Suspending Advertising on YouTube. Retrieved from http://www.ana.net/blogs/show/id/mm-blog-2017-03-statement-from-ana-ceo

Barlow, J. P. (1996, February 8). A declaration of the independence of cyberspace. Electronic Frontier Foundation. Retrieved from https://www.eff.org/cyberspace-independence

Benes, R. (2017, August 29). Ad buyers blast Facebook Audience Network for placing ads on Breitbart. Digiday. Retrieved from https://digiday.com/marketing/ad-buyers-blast-facebook-audience-network-placing-ads-breitbart/

Bennett, C. J. (1997). Convergence revisited: Toward a global policy for the protection of personal data? In P. Agre & M. Rotenberg (Eds.), Technology and privacy: the new landscape (pp. 99–124). Cambridge, MA: MIT Press.

Bennett, C. J. (2016). Voter databases, micro-targeting, and data protection law: Can political parties campaign in Europe as they do in North America? International Data Privacy Law, 6(4), 261-275. doi:10.1093/idpl/ipw021. Retrieved from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2776299

Bernie Sanders: 2016 presidential campaign. (2016). Facebook Business. Retrieved from https://www.facebook.com/business/success/bernie-sanders

BlueKai. (2011). Whitepaper: Data management platforms demystified. Retrieved from http://www.bluekai.com/files/DMP_Demystified_Whitepaper_BlueKai.pdf

Bond, S. (2017, March 14). Google and Facebook build digital ad duopoly. Financial Times. Retrieved from https://www.ft.com/content/30c81d12-08c8-11e7-97d1-5e720a26771b

Briscoe, G. (2017, March 7). How political digital advertising lessons of 2016 apply to 2017. Centro. Retrieved from https://www.centro.net/blog/political-digital-advertising-lessons-2016-applies-2017/

Center for Digital Democracy. (2013, May 29). U.S. online data trade groups spin digital fairy tale to USTR about US consumer privacy prowess—CDD says privacy out of bounds in TTIP. Retrieved from http://www.democraticmedia.org/us-online-data-trade-groups-spin-digital-fairy-tale-ustr-about-us-consumer-privacy-prowess-cdd-say-0

Chahal, G. (2013, May). Election 2016: Marriage of big data, social data will determine the next president. Wired. Retrieved from https://www.wired.com/insights/2013/05/election-2016-marriage-of-big-data-social-data-will-determine-the-next-president/

Chavez, T. (2017, May 17). Krux is now Salesforce DMP. Salesforce Blog. Retrieved from https://www.salesforce.com/blog/2017/05/krux-is-now-salesforce-dmp.html

Chester, J. (2007). Digital destiny: New media and the future of democracy. New York: The New Press.

Chester, J. (2017, January 6). Our next president: Also brought to you by big data and digital advertising. Moyers and Company. Retrieved from http://billmoyers.com/story/our-next-president-also-brought-to-you-by-big-data-and-digital-advertising/

Clark, J. (2017, April 25). Fake news: New name, old problem. Can premium programmatic help? Advertising Age. Retrieved from http://adage.com/article/digitalnext/fake-news-problem-premium-programmatic/308774/

Confessore, N., & Hakim, D. (2017, March 6). Data firm says “secret sauce” aided Trump; many scoff. New York Times. Retrieved from https://www.nytimes.com/2017/03/06/us/politics/cambridge-analytica.html?_r=0

Crupi, A. (2015, May 27). Nielsen buys neuromarketing research company Innerscope. Advertising Age. Retrieved from http://adage.com/article/media/nielsen-buys/298771/

Delgado, M. (2016, April 28). Experian launches audience management platform to make programmatic TV a reality across advertising industry. Experian. Retrieved from http://www.experian.com/blogs/news/2016/04/28/experian-launches-audience-management-platform/

DoubleClick. (n.d.). DoubleClick campaign manager. Retrieved from https://www.doubleclickbygoogle.com/solutions/digital-marketing/campaign-manager/

Drawbridge. (n.d.). Cross-device election playbook. Retrieved from https://drawbridge.com/c/vote

Edelman Digital (2016, April 1). How digital is shaking up presidential campaigns. Retrieved from https://www.edelman.com/post/how-digital-is-shaking-up-presidential-campaigns/

Ellwanger, S. (2016, September 15). Advance Local's Sutton sees big demand for digital advertising in politics. Retrieved from http://www.beet.tv/2016/09/jeff-sutton.html

eMarketer. (2017, April 12). Worldwide ad spending: The eMarketer forecast for 2017.

Experian. (2011, December). Political affiliation and beyond. Retrieved from https://www.experian.com/assets/marketing-services/product-sheets/das-political-data-sheet.pdf

Experian. (2015, March). Audience guide. Retrieved from https://www.experian.com/assets/marketing-services/product-sheets/attitudinal-and-psychographic-audiences.pdf

Facebook. (2016, June 16). Inside marketing science at Facebook. Retrieved from https://www.facebook.com/notes/facebook-careers/inside-marketing-science-at-facebook/936165389815348/

Facebook. (n.d.-a). Activate. Facebook Elections. Retrieved from https://politics.fb.com/ad-campaigns/activate/

Facebook. (n.d.-b). Advanced strategies for performance marketers. Facebook Business. Retrieved from https://www.facebook.com/business/a/performance-marketing-strategies; https://www.facebook.com/business/help/202297959811696

Facebook. (n.d.-c). Impact. Facebook Elections. Retrieved from https://politics.fb.com/ad-campaigns/impact/

Facebook. (n.d.-d). Mobilize your voters. Facebook Business. Retrieved from https://www.facebook.com/business/a/mobilizevoters

Facebook. (n.d.-e). Toomey for Senate. Facebook Business. Retrieved from https://www.facebook.com/business/success/toomey-for-senate

Facebook. (n.d.-f). Turnout. Facebook Elections. Retrieved from https://politics.fb.com/ad-campaigns/turnout/

Facebook IQ. (n.d.-a). Unlock the insights that matter. Retrieved from https://www.facebook.com/iq

Facebook IQ. (n.d.-b). Vertical insights. Retrieved from https://www.facebook.com/iq/vertical-insights

Falck, B. (2017, October 24). New transparency for ads on Twitter. Twitter Blog. Retrieved from https://blog.twitter.com/official/en_us/topics/product/2017/New-Transparency-For-Ads-on-Twitter.html

Fourthwall Media. (n.d.). Solutions: Analytics firms. Retrieved from http://www.fourthwallmedia.tv/analytics-firms

Gellman, R., & Dixon, P. (2011, October 14). Many failures: A brief history of privacy self-regulation in the United States. World Privacy Forum. Retrieved from http://www.worldprivacyforum.org/wp-content/uploads/2011/10/WPFselfregulationhistory.pdf

Glaser, A. (2017, October 17). Should political ads on Facebook include disclaimers? Slate. Retrieved from http://www.slate.com/articles/technology/future_tense/2017/10/the_fec_wants_your_opinion_on_transparency_for_online_political_ads.html

Goldman, R. (2017, October 27). Update on our advertising transparency and authenticity efforts. Facebook Newsroom. Retrieved from https://newsroom.fb.com/news/2017/10/update-on-our-advertising-transparency-and-authenticity-efforts/

Google. (2017, May 1). Marketing in a multicultural world: 2017 Google's marketing forum. Google Agency Blog. Retrieved from https://agency.googleblog.com/2017/05/marketing-in-multicultural-world-2017.html

Google. (n.d.-a). Google NYC algorithms and optimization. Research at Google. Retrieved from https://research.google.com/teams/nycalg/

Google. (n.d.-b). Insights you want. Data you need. Think with Google. Retrieved from https://www.thinkwithgoogle.com

Green, J., & Issenberg, S. (2016, October 27). Inside the Trump bunker, with days to go. Bloomberg Businessweek. Retrieved from https://www.bloomberg.com/news/articles/2016-10-27/inside-the-trump-bunker-with-12-days-to-go

Havas Cognitive. (n.d.). EagleAi has landed. Retrieved from http://cognitive.havas.com/case-studies/eagle-ai

Hoofnagle, C. (2005, March 4). Privacy self-regulation: A decade of disappointment. Electronic Privacy Information Center. Retrieved from http://epic.org/reports/decadedisappoint.html

IPA. (2017, August). IPA issues direct call to action to Google YouTube and Facebook to clean up safety, measurement and viewability of their online video. Retrieved from http://www.ipa.co.uk/news/ipa-issues-direct-call-to-action-to-google-youtube-and-facebook-to-clean-up-safety,-measurement-and-viewability-of-their-online-video-#.Wa126YqQzQj

IAB. (2017, October 24). IAB President & CEO, Randall Rothenberg testifies before Congress on digital political advertising. Retrieved from https://www.iab.com/news/read-the-testimony-from-randall-rothenberg-president-and-ceo-iab/

IAB. (n.d.-a). The digital video advertising landscape. Retrieved from https://video-guide.iab.com/digital-video-advertising-landscape

IAB. (n.d.-b). Global digital advertising revenue reports. Retrieved from https://www.iab.com/global/

IAB. (n.d.-c). Glossary: Digital media planning & buying. Retrieved from https://www.iab.com/wp-content/uploads/2016/04/Glossary-Formatted.pdf

IAB. (n.d.-d). IAB internet advertising revenue report conducted by PricewaterhouseCoopers (PWC). Retrieved from https://www.iab.com/insights/iab-internet-advertising-revenue-report-conducted-by-pricewaterhousecoopers-pwc-2

Jamieson, A. (2016, April 5). The first Snapchat election: How Bernie and Hillary are targeting the youth vote. The Guardian. Retrieved from https://www.theguardian.com/technology/2016/apr/05/snapchat-election-2016-sanders-clinton-youth-millennial-vote

Jamieson, K. H. (1996). Packaging the presidency: A history and criticism of presidential campaign advertising. New York: Oxford University Press.

Johnson, L. (2017, April 16). How brands and agencies are fighting back against Facebook and Google’s measurement snafus. Adweek. Retrieved from http://www.adweek.com/digital/how-brands-and-agencies-are-fighting-back-against-facebooks-and-googles-measurement-snafus/

Karpf, D. (2016, October 31). Preparing for the campaign tech bullshit season. Civicist. Retrieved from https://civichall.org/civicist/preparing-campaign-tech-bullshit-season/

Karpf, D. (2017, February 1). Will the real psychometric targeters please stand up? Civicist. Retrieved from https://civichall.org/civicist/will-the-real-psychometric-targeters-please-stand-up/

Kaye, K. (2015, June 3). Programmatic buying coming to the political arena in 2016. Advertising Age. Retrieved from http://adage.com/article/digital/programmatic-buying-political-arena-2016/298810/

Kaye, K. (2016, April 15). RNC's voter data provider teams up with Google, Facebook and other ad firms. Advertising Age. Retrieved from http://adage.com/article/campaign-trail/rnc-voter-data-provider-joins-ad-firms-including-facebook/303534/

Kaye, K. (2016, July 13). Democrats' data platform opens access to smaller campaigns. Advertising Age. Retrieved from http://adage.com/article/campaign-trail/democratic-data-platform-opens-access-smaller-campaigns/304935/

Kelshaw, T. (2017, August). Emotion analytics: A powerful tool to augment gut instinct. Think with Google. Retrieved from https://www.thinkwithgoogle.com/nordics/article/emotion-analytics-a-powerful-tool-to-augment-gut-instinct/

Key, W. B. (1974). Subliminal seduction. New York: Berkeley Press.

Koltun, N. (2017, October 27). Facebook significantly ramps up transparency efforts to cover all ads. Mobile Marketer. Retrieved from https://www.mobilemarketer.com/news/facebook-significantly-ramps-up-transparency-efforts-to-cover-all-ads/508380/

Kranish, M. (2016, October 27). Trump’s plan for a comeback includes building a “psychographic” profile of every voter. The Washington Post. Retrieved from https://www.washingtonpost.com/politics/trumps-plan-for-a-comeback-includes-building-a-psychographic-profile-of-every-voter/2016/10/27/9064a706-9611-11e6-9b7c-57290af48a49_story.html?utm_term=.28322875475d

Kreiss, D. (2016). Prototype politics: Technology-intensive campaigning and the data of democracy. New York: Oxford University Press.

Kreiss, D., & Mcgregor, S.C. (2017). Technology firms shape political communication: The work of Microsoft, Facebook, Twitter, and Google with campaigns during the 2016 U.S. presidential cycle, Political Communication, 1-23. doi:10.1080/10584609.2017.1364814

L2. (n.d.-a). Digital advertising device targeting. Retrieved from http://www.l2political.com/products/data/digital-advertising/device-targeting/

L2. (n.d.-b). L2 voter file enhancements. Retrieved from http://www.l2political.com/products/data/voter-file-enhancements/

Leahey, L. (2016, July 15). (Ad) campaign season: How political advertisers are using data and digital to move the needle in 2016. Cynopsis Media. Retrieved from http://www.cynopsis.com/cyncity/ad-campaign-season-how-political-advertisers-are-using-data-and-digital-to-move-the-needle-in-2016/

Letter from the guest editors: Julie Hootkin and Frank Luntz. (2016, June). Think with Google. Retrieved from https://www.thinkwithgoogle.com/marketing-resources/guest-editors-political-consultants-julie-hootkin-frank-luntz

Levine, B. (2016, December 2). Report: What is data onboarding, and why is it important to marketers? Martech Today. Retrieved from https://martechtoday.com/report-data-onboarding-important-marketers-192924

LiveRamp. (2015, August 5). Look-alike modeling: The what, why, and how. LiveRamp Blog. Retrieved from http://liveramp.com/blog/look-alike-modeling-the-what-why-and-how/

Liyakasa, K. (2017, August 24). Standard media index: YouTube’s direct ad spend down 26% in Q2 amid brand safety crackdown. Ad Exchanger. Retrieved from https://adexchanger.com/ad-exchange-news/standard-media-index-youtubes-direct-ad-spend-26-q2-amid-brand-safety-crackdown/

Maheshwari, S., & Isaac, M. (2016, November 6). Facebook will stop some ads from targeting users by race. New York Times. Retrieved from https://www.nytimes.com/2016/11/12/business/media/facebook-will-stop-some-ads-from-targeting-users-by-race.html?mcubz=0

Marshall, J. (2017, January 30). IAB chief calls on online ad industry to fight fake news. Wall Street Journal. Retrieved from https://www.wsj.com/articles/iab-chief-calls-on-online-ad-industry-to-fight-fake-news-1485812139

Martinez, C. (2016, October 28). Driving relevance and inclusion with multicultural marketing. Facebook Business. Retrieved from https://www.facebook.com/business/news/driving-relevance-and-inclusion-with-multicultural-marketing

Matz, S. C., Kosinski, M., Nave, G., & Stillwell, D. J. (2017, October 17). Psychological targeting as an effective approach to digital mass persuasion. Proceedings of the National Academy of Sciences (PNAS Early Edition). Retrieved from http://www.michalkosinski.com/home/publications

McCain, Klobuchar & Warner introduce legislation to protect integrity of U.S. elections & provide transparency of political ads on digital platforms. (2017, October 19). Retrieved from https://www.mccain.senate.gov/public/index.cfm/2017/10/mccain-klobuchar-warner-introduce-legislation-to-protect-integrity-of-u-s-elections-provide-transparency-of-political-ads-on-digital-platforms

McDermott, M. J. (2017, May 12). Brand safety issue vexes marketers. ANA Magazine. Retrieved from http://www.ana.net/magazines/show/id/ana-2017-05-brand-safety-issue-vexes-marketers

McEleny, C. (2016, October 16). Ford and Xaxis score in Vietnam using emotional triggers around the UEFA Champions League. The Drum. Retrieved from http://www.thedrum.com/news/2016/10/18/ford-and-xaxis-score-vietnam-using-emotional-triggers-around-the-uefa-champions

Miller, S. J. (2017, March 24). Local cable and the future of campaign media strategy. Campaigns & Elections. Retrieved from https://www.campaignsandelections.com/campaign-insider/local-cable-and-the-future-of-campaign-media-strategy

Minsker, M. (2017, August 30). Advertisers want programmatic tech players to fight fake news. eMarketer. Retrieved from https://www.emarketer.com/Article/Advertisers-Want-Programmatic-Tech-Players-Fight-Fake-News/1016406

Montgomery, K. C. (2007). Generation digital: Politics, commerce, and childhood in the age of the internet. Cambridge, MA: MIT Press.

Montgomery, K. C. (2011). Safeguards for youth in the digital marketing ecosystem. In D. G. Singer & J. L. Singer (Eds.), Handbook of children and the media (2nd ed., pp. 631-648). Thousand Oaks, CA: Sage Publications.

Montgomery, K. C., Chester, J., & Kopp, K. (2017). Health wearable devices in the big data era: Ensuring privacy, security, and consumer protection. Center for Digital Democracy. Retrieved from https://www.democraticmedia.org/sites/default/files/field/public/2016/aucdd_wearablesreport_final121516.pdf

Moonshadow Mobile. (n.d.). Ground Game is a groundbreaking battle-tested mobile canvassing app. Retrieved from http://www.moonshadowmobile.com/products/ground-game-mobile-canvassing/

NCC Media. (n.d.). The essential guide to political advertising. Retrieved from https://nccmedia.com/PoliticalEssentialGuide/html5/index.html?page=1&noflash

Nicas, J. (2016, December 8). Fake-news sites inadvertently funded by big brands. Wall Street Journal. Retrieved from https://www.wsj.com/articles/fake-news-sites-inadvertently-funded-by-big-brands-1481193004

Nielsen. (2016, October 4). Nielsen and Ethnifacts introduce intercultural affinity segmentation to drive deeper understanding of total U.S. cultural landscape for brand marketers. Retrieved from http://www.nielsen.com/us/en/press-room/2016/nielsen-and-ethnifacts-introduce-intercultural-affinity-segmentation.html

Nix, A. (2016, September). The power of big data and psychographics in the electoral process. Presented at the Concordia Annual Summit, New York. Retrieved from https://www.youtube.com/watch?v=n8Dd5aVXLCc

O’Hara, C. (2016, January 25). Data triangulation: How second-party data will eat the digital world. Ad Exchanger. Retrieved from http://adexchanger.com/data-driven-thinking/data-triangulation-how-second-party-data-will-eat-the-digital-world/

Owen, D. (2017). New media and political campaigns. New York: Oxford University Press.

Packard, V. (2007). The hidden persuaders (reissue ed.). New York: Ig Publishing.

Persily, N. (2016, August 10). Facebook may soon have more power over elections than the FEC. Are we ready? Washington Post. Retrieved from https://www.washingtonpost.com/news/in-theory/wp/2016/08/10/facebook-may-soon-have-more-power-over-elections-than-the-fec-are-we-ready/?utm_term=.ed10eef711a1

Political campaigns in 2016: The climax of digital advertising. (2016, May 10). Media Radar. Retrieved from https://www.slideshare.net/JesseSherb/mediaradarwhitepaperdigitalpoliticalfinpdf

Regan, J. (2016, July 29). Donkeys, elephants, and DMPs. Merkle. Retrieved from https://www.merkleinc.com/blog/donkeys-elephants-and-dmps

Regan, T. (2016, January). Media planning toolkit: Programmatic planning. WARC. Retrieved from https://www.warc.com/content/article/bestprac/media_planning_toolkit_programmatic_planning/106391

Revolution Messaging. (n.d.). Smart cookies. Retrieved from https://revolutionmessaging.com/marketing/smart-cookies

Rubinstein, I. S. (2014). Voter privacy in the age of big data. Wisconsin Law Review. Retrieved from http://wisconsinlawreview.org/wp-content/uploads/2015/02/1-Rubinstein-Final-Online.pdf

Sabato, L. J. (1981). The rise of political consultants: New ways of winning elections. New York: Basic Books.

Salesforce DMP. (2017, October 20). Third-party data marketplace. Retrieved from https://konsole.zendesk.com/hc/en-us/articles/217592967-Third-Party-Data-Marketplace

Schiller, H. I. (2007). The mind managers. New York: Beacon Press.

Schuster, J. (2015, October 7). Political campaigns: The art and science of reaching voters. LiveRamp. Retrieved from https://liveramp.com/blog/political-campaigns-the-art-and-science-of-reaching-voters/

Schwartz, M. (2017, March 30). Facebook failed to protect 30 million users from having their data harvested by Trump campaign affiliate. The Intercept. Retrieved from https://theintercept.com/2017/03/30/facebook-failed-to-protect-30-million-users-from-having-their-data-harvested-by-trump-campaign-affiliate/

Schwartz, P. M., & Solove, D. J. (2011). The PII problem: Privacy and a new concept of personally identifiable information. New York University Law Review, 86, 1814-1895. Retrieved from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1909366

Singer, N. (2012, October 13). Do not track? Advertisers say “don’t tread on us.” New York Times. Retrieved from http://www.nytimes.com/2012/10/14/technology/do-not-track-movement-is-drawing-advertisers-fire.html?_r=0

Smith, C. (2014, March 20). Reinventing social media: Deep learning, predictive marketing, and image recognition will change everything. Business Insider. Retrieved from http://www.businessinsider.com/social-medias-big-data-future-2014-3

Solon, O., & Siddiqui, S. (2017, September 3). Forget Wall Street—Silicon Valley is the new political power in Washington. The Guardian. Retrieved from https://www.theguardian.com/technology/2017/sep/03/silicon-valley-politics-lobbying-washington

Solove, D. J., & Hartzog, W. (2014). The FTC and the new common law of privacy. Columbia Law Review, 114, 583-677. Retrieved from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2312913

Son, S., Kim, D., & Shmatikov, V. (2016). What mobile ads know about mobile users. NDSS ’16. Retrieved from http://www.cs.cornell.edu/~shmat/shmat_ndss16.pdf

Stanford, K. (2016, March). How political ads and video content influence voter opinion. Think with Google. Retrieved from https://www.thinkwithgoogle.com/marketing-resources/content-marketing/political-ads-video-content-influence-voter-opinion/

Stirista. (n.d.). Political data. Retrieved from https://www.stirista.com/what-we-do/data/political-data

Timmers, B. (2015, December). Everything you wanted to know about fake news. IAS Insider. Retrieved from https://insider.integralads.com/everything-wanted-know-fake-news/

Tufekci, Z. (2014, July 7). Engineering the public: Big data, surveillance and computational politics. First Monday, 19(7). Retrieved from http://firstmonday.org/article/view/4901/4097

U.S. Senate Committee on Commerce, Science, and Transportation. (2013, December 18). A review of the data broker industry: Collection, use, and sale of consumer data for marketing purposes. Staff report for Chairman Rockefeller. Retrieved from https://www.commerce.senate.gov/public/_cache/files/0d2b3642-6221-4888-a631-08f2f255b577/AE5D72CBE7F44F5BFC846BECE22C875B.12.18.13-senate-commerce-committee-report-on-data-broker-industry.pdf

Vranica, S. (2017, June 18). Advertisers try to avoid the web’s dark side, from fake news to extremist videos. Wall Street Journal. Retrieved from https://www.wsj.com/articles/advertisers-try-to-avoid-the-webs-dark-side-from-fake-news-to-extremist-videos-1497778201

WARC. (2017, December). Toolkit 2018: How brands can respond to the year's biggest challenges. Retrieved from https://www.warc.com/content/article/Toolkit_2018_How_brands_can_respond_to_the_yearamp;39;s_biggest_challenges/117399

Warrington, G. (2015, November 18). Tiles, proxies and exact places: Building location audience profiles. LinkedIn. Retrieved from https://www.linkedin.com/pulse/tiles-proxies-exact-places-building-location-audience-warrington

Weissbrot, A. (2016, June 20). MAGNA and Zenith: Digital growth fueled by programmatic, mobile and video. Ad Exchanger. Retrieved from https://adexchanger.com/agencies/magna-zenith-digital-growth-fueled-programmatic-mobile-video/

What is predictive intelligence and how it's set to change marketing in 2016. (2016, February 11). Smart Insights. Retrieved from http://www.smartinsights.com/digital-marketing-strategy/predictive-intelligence-set-change-marketing-2016/

Winterberry Group. (2016, November). The state of consumer data onboarding: Identity resolution in an omnichannel environment. Retrieved from http://www.winterberrygroup.com/our-insights/state-consumer-data-onboarding-identity-resolution-omnichannel-environment

Xaxis. (2015, November 9). Xaxis brings programmatic to political advertising with Xaxis politics, first ad targeting solution to leverage offline voter data for reaching U.S. voters across all digital channels. Retrieved from https://www.businesswire.com/news/home/20151109006051/en/Xaxis-Brings-Programmatic-Political-Advertising-Xaxis-Politics

Yatrakis, C. (2016, June 28). The Trade Desk partner spotlight: Q&A with Factual. Factual. Retrieved from https://www.factual.com/blog/partner-spotlight

YouTube. (2017). The presidential elections on YouTube. Retrieved from https://think.storage.googleapis.com/docs/The_Presidential_Elections_On_YouTube.pdf

Footnotes

1. The research for this paper is based on industry reports, trade publications, and policy documents, as well as review of relevant scholarly and legal literature. The authors thank Gary O. Larson and Arthur Soto-Vasquez for their research and editorial assistance.

On democracy

Disclaimer: This guest essay in the Special issue on political micro-targeting has not been peer reviewed. It is an abbreviated version of a speech delivered by the Member of the European Parliament (MEP) Sophie in 't Veld in Amsterdam in May 2017 at Data & Democracy, a conference on political micro-targeting.

Democracy

Democracy is valuable and vulnerable, which is reason enough to remain alert to new developments that can undermine it. In recent months, we have seen enough examples of the growing impact of personal data in campaigns and elections. It is important and urgent that we publicly debate this development. It is easy to see why we should take action against the extremist propaganda of hatemongers aiming to recruit young people for violent acts. But we euphemistically speak of 'fake news' when lies, half-truths, conspiracy theories, and sedition insidiously poison public opinion.

The literal meaning of democracy is 'the power of the people'. 'Power' presupposes freedom. Freedom to choose and to decide. Freedom from coercion and pressure. Freedom from manipulation. 'Power' also presupposes knowledge. Knowledge of all facts, aspects, and options. And knowing how to balance them against each other. When freedom and knowledge are restricted, there can be no power.

In a democracy, every individual choice influences society as a whole. Therefore, the common interest is served with everyone's ability to make their choices in complete freedom, and with complete knowledge.

The interests of parties and political candidates who compete for citizens' votes may differ from that higher interest. They want citizens to see their political advertising, and only theirs, not that of their competitors. Not only do parties and candidates compete for the voter's favour; they also contend for the voter's exclusive time and attention.

Political targeting

No laws dictate what kind of information a voter should rely on to be able to make a sound judgement. For lamb chops, toothpaste, mortgages, or cars, by contrast, producers are required to state the origin and properties of their products. This enables consumers to make a responsible decision. Providing false information is illegal. All ingredients, properties, and risks have to be mentioned on the label.

Political communication, however, is protected by freedom of speech. Political parties are allowed to use all kinds of sales tricks.

And, of course, campaigns do their utmost and continuously test the limits of the socially acceptable.

Nothing new, so far. There is no holding back in getting voters to cast their vote for your party or your candidate: from temptation with attractive promises to outright bribery, from applying pressure to straightforward intimidation.

What matters here is how and where you can reach the voter. In the old days it was easy: Catholics were told on Sundays in church that they had no other choice in the voting booth than the Catholic choice. And no righteous Catholic dared to think about voting differently. At home, the father told the mother how to vote. The children received their political preference from home and from school. Catholics learned about current affairs via a Catholic newspaper, and through the Catholic radio broadcaster. In Dutch society, which consisted of a few such pillars, one was only offered the opinions of one's own pillar1. A kind of filter bubble avant la lettre.

Political micro-targeting

Nowadays, political parties have a different approach. With new technologies, the sky is the limit.

Increasingly advanced techniques allow the mapping of voter preferences, activities, and connections. Using endless amounts of personal data, any individual on earth can be reconstructed in detail. Not only can personal beliefs be distilled from large troves of data; it is even possible to predict a person's beliefs before they have formed them themselves. And, subsequently, it is possible to subtly steer those beliefs, while leaving the person thinking they made their decision all by themselves.

As is often the case, the Americans lead in the use of new techniques. While we Europeans, touchingly old-fashioned, knock on doors and hand out flyers at the Saturday market, the Americans employ the latest technology to identify, approach, and influence voters.

Of course, trying to find out where voters can be reached and how they can be influenced is no novelty. Political parties map which neighbourhoods predominantly vote for them, which neighbourhoods have potential, and in which neighbourhoods campaigning would be a wasted effort. Parties work with detailed profiles and target audiences, for which they can tailor their messages.

But the usage of personal data on a large scale has a lot more to offer. Obviously, this is a big opportunity for political parties, and for anyone else, who runs campaigns or aims to influence the elections.

However, the influencing techniques are becoming increasingly opaque. As a result of the alleged filter bubble, voters are reaffirmed in their own beliefs and hardly receive information anymore about the beliefs and arguments of other groups. This new kind of segmentation may stifle critical thinking. There may not be enough incentive to test one's own ideas, to find new arguments, or to critically reflect on the truthfulness of information.

I am a social and economic liberal D66 politician, and I get suggestions for news articles from websites like The Guardian or Le Monde. My colleague from the right wing nationalist PVV, may well receive URLs from Breitbart.

Pluralism is essential for a healthy, robust democracy. In a polarised society, people live in tightly knit groups, which hardly communicate with each other. In a pluralist society people engage in the free exchange, confrontation, and fusion of ideas.

The concept of pluralism is under pressure. Populist parties declare themselves representative of The People. In their vision, The People is uniform and homogeneous. There is a dominant cultural norm, dictated from the top down, to which everyone must conform. Whoever refuses gets chewed out. Often, it is about one-dimensional symbolism such as Easter eggs and Christmas trees. There is no place for pluralism in the world of the populists. But when there is no pluralism, there is no democracy. Without pluralism, democracy is nothing more than a simple tribal dispute, instead of the expression of the will of all citizens together.

Voter data

European privacy legislation limits the use of personal data. In the world of ‘big data’, one of the explicit goals of regulation is to prevent restriction of the consumer's choice. Oddly enough, lawmakers do not explicitly aspire to guarantee voters as broad a choice as possible. But in politics, individual choices have consequences for society as a whole.

In 2018, the General Data Protection Regulation (GDPR) comes into effect. We worked on the GDPR for five years. At this moment, we are working on the modernisation of the e-Privacy Directive, which is mainly about the protection of communication. As was the case with the GDPR, companies from certain sectors scream bloody murder. European privacy protection would mean certain death for European industry. According to some corporate Cassandras, entire European industries will move to other continents. That very same death of corporate Europe is also predicted for any measure concerning, say, environmental norms, procurement rules, or employee rights. All those measures are in place, but, as far as I know, the nightmare scenario has never occurred...

There are some corporate sectors, such as publishing and marketing, which have a huge impact on the information supply to citizens. They are the ones who now cry wolf. It is understandable that they are unhappy with stricter rules concerning their activities, but as the potential impact of the use of personal data and ‘big data’ increases, so does their social responsibility.

At the moment, there is not much public debate about the new techniques. Peculiar. Thirty years ago, 'subliminal advertising', as we called it then, was prohibited because people found it unethical to influence people without their knowledge. We need to have a similar debate today. What do we think of opaque influencing? Do we need ethical norms? Should such norms apply only to political campaigns, or should we look at this from a broader perspective? In the 'big data' debate, we tend to speak in technical or legal terms, while the issue is actually a fundamentally ethical one, with far-reaching consequences for the vitality of our democracy.

Such a public debate demands more clarity on the impact of 'big data', profiling, targeting, and similar techniques on the individual, her behaviour, and her choices, which determine in what direction society progresses. Which voters are being reached? How susceptible are they to subtle influencing, and what makes them resilient? How do people who are hardly reached compare to the others? How do voters and non-voters compare? Is the voter truly predictable? Can we identify or influence the floating voter? Do voters actually float between different parties? Or do they float especially within their own party, their own bubble, their own segment? How important are other factors, such as the social context? If the new influencing techniques are indeed as potent as we think, how can polls get it so wrong? What can we learn from advertisers who return to contextual advertising, because targeting turns out to be less effective than they thought?

We need to stay cool-headed. New technologies have a huge impact, but human nature will not suddenly change because of 'big data' and its use. Our natural instincts and reflexes will certainly not evolve in a few years; that would take many thousands of years, and even in the 21st century we seem to have more than a few caveman traits, so internalised behaviour is not easily shed. Humans are resilient, but democracy is vulnerable. In the short term, the societal impact is large. This gives us every reason to reflect on how to deal with the new reality, and how we can uphold our values within it.

The use of personal data, clearly, is not reserved solely for decent political parties. Other persons and organisations, from the Kremlin to Breitbart, can bombard European voters with information and misinformation. But European governments, which control endless amounts of personal data about their citizens, can also manipulate information, or circulate utter nonsense to advance their own interests. A random example: the Hungarian government influencing its voters with lies and manipulation about the so-called consultation on asylum seekers.

Beyond voter data

This issue is not only about the personal data of voters, but also about the personal data of political competitors, opponents, and critics, which are increasingly being exploited. Recently, we have seen efforts by external parties to influence the results of the 2017 French elections: a large-scale hack of the Emmanuel Macron campaign, and the spread of false information, obviously coming from the Kremlin and the American alt-right, meant to discredit Macron's candidacy.

The American elections, too, showed the shady game of hacking, leaking, and manipulating. The issue of the Hillary Clinton e-mails will undoubtedly occupy our minds for years. Who knows how the elections would have turned out without this affair?

Other pillars of democracy can be corrupted by the misuse of data as well. Critical voices, opposition, and checks and balances are democracy's oxygen. Democracy is in acute jeopardy when data are employed to attack, undermine, discredit, blackmail, or persecute journalists, judges, lawyers, NGOs, whistleblowers, and opposition parties.

In Europe, we tend to shrug our shoulders at these dangers. "Oh well, we'll see; such things occur only in banana republics, not right here". Of course, this trust in our democratic rule of law is wonderful. But if we treat our rule of law so neglectfully, we will lose it eventually.

Within the European Union, we currently see this happening in Poland and Hungary. The governments of both nations ruthlessly attack independent judges, critical media, and inconvenient NGOs. They do so with quasi-lawful means. Under the banner of transparency, they force NGOs to register, misusing laws against money laundering and terrorism financing. Or the governments release compromising information about judges or politicians at strategic moments.

But critical voices struggle in other member states as well. Lawyers are being monitored, even without a legal basis. In the years after 9/11, we created far-reaching new powers for intelligence services, police, and justice departments to spy on citizens, even without suspicion and without the signature of a judge. The companies to which we unwittingly surrender our personal data, in exchange for services, are forced to hand over all information to the government, or forced to build in backdoors. Governments hack computers in other countries. Usually, it starts out with unlawful practices, but soon enough laws are put in place to legalise those practices. The magic word 'terrorism' silences any critique of such legislation.

But when politicians, journalists, NGOs, whistleblowers, lawyers, and many others cannot perform their tasks freely and without worry, our democracy withers. Not only must they be able to operate without someone keeping an eye on them; they have to know that nobody is in fact watching them. The mere possibility of being watched results in a chilling effect.

For this principal reason, I have contested a French mass surveillance law before the French Conseil d'Etat. Since, as a Member of the European Parliament, I spend four days a month on French soil (in Strasbourg), I could potentially be a target of the French eavesdropping programme. This is not totally imaginary, as I am not only a politician, but also a vocal critic of certain French anti-terror measures. It is not that I actually worry about being spied on; it is the fact that I might be spied on. Luckily, I am not easily startled, but I can imagine that many politicians are vulnerable. That is a risk for democracy.

I do not rule out a ruling by the European Court of Human Rights on my case. In that event, it would lead to case law valid in the entire EU (and in the geographical area covered by the Council of Europe).

But, of course, whether politicians, NGOs, journalists, and others can do their jobs fearlessly and fulfil their watchdog role should not depend on the actions of one obstinate individual.

It is my deep personal conviction that the biggest threat to our democracy is that we have enabled the powerful to access, almost without limitation, the personal data of those who should control those very same powerful entities.

What can we do?

Some propose new forms of democracy in which universal suffrage is weakened or even abolished. In his book 'Against elections: the case for democracy', David Van Reybrouck proposes appointing representatives by lot, and in his book 'Against democracy', Jason Brennan wants to give the elite more votes than the lower classes, presuming that people with more education or development make better choices. Others want to replace representative democracy with direct democracy.

I oppose those ideas. Universal suffrage and representative democracy are great achievements, which have led to enormous progress in society.

First of all, we have to make sure our children grow up to be critical, independent thinkers. Think differently, deviate, provoke: this must be encouraged instead of condemned. A democracy needs non-conformists.

We must teach our children to contextualise information and to compare sources.

The counterpart of 'big data' must be 'big transparency'. We need not just open administration, but also insight into the techniques of influence.

The regulation and limitation of the use of personal data, as I hope to have argued effectively, is not a game for out-of-touch privacy activists. It is essential for democracy. We need safeguards, not only to be sure people really are free in their choices, but also to protect the necessary checks and balances. As such, I plead for a rigorous application of the GDPR, and in the European Parliament, I will work for a firm e-Privacy Directive.

And yes, perhaps we should examine whether the rules for political campaigning are still up-to-date. In most countries, those rules cover a cap on campaign expenditures, a prohibition of campaigning or polling on the day before election day, or a ban on publishing information that may influence the election results, such as the leaked e-mails in France. But these rules have little impact on the use of personal data to subtly influence elections.

Last year, the European Parliament supported my proposal for a mechanism to guard democracy, the rule of law, and fundamental rights in Europe.2

On this day (editor's note: 9 May, Europe Day) of European democracy, I plead for equal, high standards across Europe. The last years have shown that national elections are European elections. It is crucial that we can trust that all elections in EU member states are open, free, and honest, free of improper influencing.

Over the last sixty years, the European Union has developed into a world leader in democracy and freedom. If we start a public debate, Europe can remain a world leader.

Footnotes

1. 'Pillars' here refers to the segments of society divided along ideological or religious lines.

2. The report I refer to is a legislative initiative of the European Parliament, of which I was the initiator and the rapporteur. It is a proposal to guard democracy, the rule of law, and fundamental rights in the EU. The Commission, at first, did not want to proceed with the initiative. Recently, however, the Commission has announced a legislative proposal for such a mechanism. I suspect this proposal will look quite different from Parliament's. But the fact that there will be a mechanism is what matters most. The realisation that the EU is a community of values, and not just on paper, is spreading quickly. The proposal was approved in the EP in October 2016, with 404 votes in favour and 171 against. Source (last accessed 15 January 2018): http://www.europarl.europa.eu/sides/getDoc.do?pubRef=-%2f%2fEP%2f%2fNONSGML%2bREPORT%2bA8-2016-0283%2b0%2bDOC%2bWORD%2bV0%2f%2fEN

Political micro-targeting: a Manchurian candidate or just a dark horse?

Papers in this special issue

EDITORIAL: A Manchurian candidate or just a dark horse? Towards the next generation of political micro-targeting research
Balázs Bodó, Natali Helberger, & Claes H. de Vreese, University of Amsterdam

The role of digital marketing in political campaigns
Jeff Chester, Center for Digital Democracy
Kathryn C. Montgomery, American University

WhatsApp in Brazil: mobilising voters through door-to-door and personal messages
Mauricio Moura, The George Washington University
Melissa R. Michelson, Menlo College

Two crates of beer and 40 pizzas: the adoption of innovative political behavioural targeting techniques
Tom Dobber, Damian Trilling, Natali Helberger, & Claes H. de Vreese, University of Amsterdam

Restrictions on data-driven political micro-targeting in Germany
Simon Kruschinski, Johannes Gutenberg University Mainz
André Haller, University of Mainz

On democracy
Sophie in 't Veld, European Parliament

Micro-targeting, the quantified persuasion
Daniel Kreiss, University of North Carolina at Chapel Hill

Editorial: Towards the next generation of political micro-targeting research

It is time to take a critical look at political micro-targeting

Political targeting is not a new phenomenon. Political parties, candidates, and campaigns have a long history of classifying and segmenting the 'voter market' in order to optimise their messages to different profiles. Political micro-targeting (PMT), however, only became prominent in recent election cycles. PMT refers to the use of different communication channels (direct mail, phone, canvassing, social media advertising, etc.) to communicate and build a relationship with prospective voters. At the core of the concept is the use of data and analytics to craft and convey a tailored message to a subgroup or to individual members of the electorate. The 2008 US presidential election was the first major election to perfect micro-targeting models, by, for example, using information about voters to direct volunteers to 'scripted conversations with specific voters at the door or over the phone' (Issenberg, 2012).

The popular myth about PMT is that it is about voter persuasion. Of course, it can be used towards the 'ultimate' goal of influencing candidate and party preferences, and ultimately behaviour, but it would be a mistake to think of this as its only use. PMT can be used to encourage or discourage political participation, including election turnout. It can encourage or discourage donations and contributions to candidates and campaigns. It can be employed to create energy and interest in a campaign, election, and candidate, but it can also be used to create disinterest and apathy.

In the past decade, references to and discussion of PMT have increased. The most significant public discussion took place in the aftermath of the 2016 US presidential elections. As the public realised the full scope of PMT's possibilities, the public debates and news coverage of targeted ads on Facebook, the use of algorithms and foreign funding, and the centrality of data analytics companies led to moral panic, congressional hearings, and a quest for more transparency and understanding of this particular election, and of PMT more generally.

At the end of 2017, we have the UK, French, German, and Dutch elections behind us, and the Italian one before us. All of these elections used one type of PMT or another, marking the point at which the PMT techniques developed in the highly innovative and liberal (i.e., under-regulated) US context are being adapted to all kinds of political systems across Europe and, ultimately, across the globe. The task of scrutinising the use and impact of PMT techniques on local elections and political systems in Europe could not be more timely.

Tensions in research

Given the historical development of PMT and today’s uncertainties about the prevalence, impact, and possible lack of regulation of PMT, it becomes important to highlight some of the tensions and inconsistencies in current thinking about the phenomenon. There is a whole range of issues that research needs to address in order to properly grasp PMT, such as how new (or old) the phenomenon really is, whether it represents more of an opportunity than a threat, and finally, whether PMT means the same in different political and regulatory contexts around the globe.

How much is new, how much old?

PMT does not come out of a vacuum. Political advertising and the professionalisation of politics and campaigns have been topics of investigation for decades. On the one hand, it is tempting to consider PMT a natural evolution of existing practices in campaigning and advertising. On the other hand, PMT coincides with other fast-paced, almost disruptive developments. PMT is not just a change in approach and methods; it also speaks to a change of institutions. As campaign spending shifts from billboards, broadcast, and print to online, social, and data, so do the institutions that connect parties and politicians to (and separate them from) the electorate change (see also Montgomery and Chester in this special issue). These institutions - online media platforms, data brokers, technology providers, analytics companies - are perhaps the most novel aspect of today's PMT, compared to other, more organically evolving aspects of campaigning, data usage, and advertising in general.

As research on PMT matures, it becomes a relevant question whether PMT is solely an election phenomenon. In the burgeoning literature on how politics and campaigns professionalised in the 1980s and 1990s, it became apparent that some of these developments apply not only to election time, but also to governing (Cook, 2005). Holding an office today does not mean a full electoral cycle's break from monitoring, interacting with, and nudging constituencies (Csigó, 2017). In the same vein, PMT can conceivably be used as part of government communication and thereby unfold not only as a feature of campaigning but also of governing.

Risks vs opportunities: utopian vs dystopian perspectives 

The most important opportunities offered by political micro-targeting techniques coincide with the promises of digital advertising. Digital technologies offer more data points by which to profile individual consumers. Digital communication channels offer more precise targeting. Taken together, better profiling and better targeting are hoped to provide consumers (voters) with more relevant information, with which they are more likely to engage. Political advertisers, mainly political parties, hope to get more efficient and effective services out of those same technologies, for a fraction of the cost. Lower costs and more efficient targeting could, in turn, lower the entry barriers to the political communication market for smaller parties, and enable parties with limited resources or with a more specific message to reach out to constituents.

All things considered, these benefits may lead to a more diverse political marketplace in terms of products and sellers (i.e., issues, parties, candidates), and to better informed, more conscious political consumers making better informed political choices (Borgesius et al., forthcoming).

Some of the risks associated with PMT also mirror concerns raised in the commercial advertising domain: profiling entails a loss of user privacy, and targeting opens the door to selective information exposure and potential manipulation (Bennett, 2015). Political parties, who have to deal with new digital intermediaries that are often in exclusive control of access to voters, may face monopoly prices and experience a loss of power. Taken together, these buyer and seller effects may result in the fragmentation of the marketplace (of ideas), and make it costly for consumers (voters) to be sufficiently informed about the true nature of the available supply (of ideas and candidates) on the political marketplace (Borgesius et al., forthcoming).

Yet, at the end of 2017, it seems that the greatest concrete threats of PMT were not what an analogy with commercial advertising would have suggested. First, as we discuss in more detail in the next section, different rules apply to commercial and political advertising. While in the US, the UK, and the EU sophisticated regulatory frameworks protect consumers from false and misleading commercial advertisements, political speech is not judged by the same standards. As a result, negative advertising, smear campaigns, and the like are widely used campaign strategies (Johnson-Cartee & Copeland, 2013). Second, as was realised too late in the 2016 US presidential elections, and just in time in the French ones, local political parties are not the only actors to see PMT as an effective tool to influence the outcome of elections. PMT techniques were used by foreign state actors, independent non-governmental organisations, and entrepreneurial individuals alike to take advantage of election processes in various ways. Balkan teens were found to exploit the heated US campaign by spreading fake news for commercial gain (Subramanian, 2017); the independent tech-libertarian organisation WikiLeaks had a substantial impact on multiple elections, with its sometimes curiously timed releases of confidential information (Bodó, 2014); while Russian meddling with political events, from the 2016 US presidential elections via the 2017 French elections to the current tensions around Catalonia, is under heavy scrutiny by both local security services and global publics (Director of National Intelligence, 2017; Emmott, 2017; Greenberg, 2017). Third, to make things worse, the opacity of PMT techniques makes it close to impossible to detect and fully assess such activities.

Finally, it is increasingly apparent that the interests of the digital intermediaries who enable PMT techniques are not necessarily aligned with the interests of voters, political parties, or society at large (Caplan, 2016). Such a conflict of interest among the commercial arbitrators of political speech was first described by Robert McChesney (1999) in the context of US commercial broadcast media. He suggested that the fact that commercial media would rather see political speech in the form of a revenue-generating ad than a cost-generating news programme is detrimental to democracy. This conflict of interest is especially acute for digital intermediaries, most of whom do not produce any content in the first place. There is no guaranteed "organic reach" even for those civil movements, causes, organisations, political parties, candidates, and activists who otherwise successfully nurture large and well-managed online communities; serious news media need to pay if they want to reach audiences. There is, in other words, no "public service", commercially disinterested, non-profit marketplace for political speech on digital intermediaries (Madrigal, 2017; Wagner, 2017). All political audiences are ultimately sold to the highest bidder.

In other words, the biggest opportunity of PMT is also its biggest threat: sophisticated technologies allow anyone to reach any individual or group in an electorate with any message. Moreover, this access is on commercial terms, and on commercial terms only: audiences are sold to the highest bidder, without any regard for wider societal concerns, such as the protection of voters from fraud; a diverse, balanced, and competitive marketplace of ideas; a well-informed citizenry; or a level playing field for political parties.

Context vs conditions: legal frameworks matter

Political campaigning and the electoral system are at the heart of the democratic process. Ensuring the fairness, transparency, and openness of this process is paramount to its success. Accordingly, democratic societies have a long tradition of devising rules and procedures to guide political campaigning and political advertising, with the goal of promoting and defending democratic values and fundamental rights in this very process. In Europe, the Council of Europe's Venice Commission - with representatives from 60 countries - established five fundamental electoral principles in 2002: universal, equal, free, secret, and direct suffrage. These principles have guided the way national governments design and enable their electoral systems, including the way political advertising is regulated.

Political advertising is protected speech in the sense of Art. 10 ECHR (European Court of Human Rights, VgT Verein gegen Tierfabriken v. Switzerland), and citizens have the right to inform themselves and receive political information from a diversity of sources and perspectives. Council of Europe member states are under the obligation to enable “free elections at reasonable intervals by secret ballot, under conditions which will ensure the free expression of the opinion of the people in the choice of the legislature.” (Article 3 of Protocol No. 1 to the European Convention on Human Rights). This obligation has resulted in rules on various aspects of political campaigning in general, and political advertising specifically.

So far, much of the legal debate around political advertising has concentrated on the scope and affordances of data protection law (Bennett, 2016). Data protection law, however, can only go so far in protecting the universality, equality, and freedom of the political, data-driven process. In other words: data protection law is no 'all-purpose tool', equipped to deal with all challenges of political micro-targeting simply because they involve data. Instead, it is high time to include other legal areas in the discussion, and to consider their role in addressing the effects of data analytics: election law, party financing law, and media law.

With regard to political advertising, one particularly relevant set of rules concerns party financing, which must ensure transparency and fairness in the way parties receive and spend public and private money to fund their (advertising) campaigns (Anstead and Chadwick, 2016). In response to the use of PMT in the UK Brexit campaign, these provisions stood central in an investigation by the UK Electoral Commission into the extent to which expenditures on PMT were potentially in conflict with the British rules on campaign financing. Political advertising is also subject to a battery of rules in national media laws - rules that determine meticulously the ways in which political parties may or may not advertise in the (traditional) media (many national broadcasting laws impose a ban in principle on political advertising in broadcasting), who has to be granted how much advertising time, what the limits are, and who exercises oversight (for an overview, see Apa et al., 2017).

These rules were written for the traditional mass media and mass-media campaigning. They apply to the broadcasting media, not to platforms, the new central players in political advertising. The challenge is to translate the provisions on equality and fairness, transparency and oversight to the new digital realities of PMT, and to the new central players: digital platforms, but also data brokers and analytics firms. The discussion about the regulation of PMT on platforms is part of a broader discussion about the social responsibility of platforms, and the extent to which we can oblige large commercial Silicon Valley companies to observe democratic values, such as media diversity, fairness and equality in the democratic process. It is clear and established that public service media and newspapers have an important task in providing a platform for the diverse encounter of ideas and opinions. The role and public responsibility of platforms still needs to be carved out. Platforms have come to play a central role in gathering information and forming (political) opinions. They have also become part of the mission-critical campaign infrastructure, next to the traditional media. And yet, many of these platforms are driven by an advertising business model, and from their perspective as advertising companies, political micro-targeting is ‘just’ another form of advertising. Current laws, including e-commerce law, do little to correct that assumption. PMT is another example showing that the discrepancy in current regulation - demanding responsible behaviour from the offline media while granting maximum leeway and ‘room to play’ to the online media as the ‘youngest kid’ in the family - is no longer tenable. Online media, and platforms in particular, have grown up, and it is high time to create more similar standards across different media and different forms of advertising, and more guidance on ‘fair targeting practices’.

For the past decade, a central ambition of regulatory reforms in the area of data protection law, but also consumer law, has been to create a fairer level playing field for the commercial sister of political advertising: behavioural targeting (BT). Similarly to PMT, behavioural targeting seeks to utilise data-driven insights into consumer preferences and characteristics to optimise advertising and make it more personally relevant, efficient and persuasive. Regulatory reforms in the European General Data Protection Regulation (GDPR) are meant to give consumers more information and control rights vis-à-vis data-driven advertisers. As outlined in a symposium1 contribution by Maximilian von Grafenstein and Jessica Schmeiss, the provisions of the GDPR apply to the collection and processing of personal data in general (irrespective of whether this is done for purposes of commercial or political targeting), as well as to the use of such data to target users with personalised messages. In this respect, data protection law affords consumers and voters a comparable level of protection in terms of the collection and use of their personal data.

This is different for the protection from false and misleading claims. Here, the European Commission addressed in its latest guidance on the Unfair Commercial Practices Directive a range of potentially unfair forms of behavioural targeting, and the ongoing reform of European consumer law is meant to afford more protection to consumers in the digital marketplace. False, misleading or aggressive advertising claims are banned under the provisions on unfair commercial practices. These rules apply to commercial communication, but not necessarily to ideological advertising (also depending on the country in question).

The recent debate about the obligation of platforms to disclose political advertising, too, is, from the perspective of consumer law, nothing new under the sun: the obligation to label commercial advertising messages and to disclose sponsored content is an iron rule in advertising law and unfair commercial practices law. For political micro-targeting on social networks, such obligations are in many countries still missing, or only just about to be drafted.

In other words, many of the rules that are meant to guarantee informed and autonomous decision-making in the light of behavioural targeting do not apply to ideological advertising. Similarly, while the commercial advertising industry has long since developed codes of conduct and bodies to guard the fairness and truthfulness of advertising, including digital advertising, such self-regulatory initiatives in the field of political advertising still need to be developed. The question is: by whom? Political parties? Platforms? Analytics companies such as Cambridge Analytica? Fairness, truthfulness and transparency are arguably important values, also in the relationship between voters and political parties.

Behavioural and political micro-targeting use essentially the same technologies, sometimes even the same data. And yet we face in Europe the puzzling situation that the trust of consumers in the truthfulness and fairness of commercial behavioural advertising seems better protected than that of citizens who are the recipients of political micro-targeting. A critical challenge for the law, and for legal scholars in the years to come, will be to even out this inadequate and undemocratic imbalance in protection, and to devise principles and rules about Unfair Political Practices.

In this context, it can be useful to learn from the diversity of emerging (self)regulatory approaches and experimentations with new solutions in the European Union member states. For example, the Dutch "Social Media Advertising Code" (Reclamecode Social Media)2 does address surreptitious political advertising through social media and requires, among other things, disclosure of any payment social media users receive for the distribution of advertising messages. In the UK, the Electoral Commission has begun to develop guidance on the rules applicable to online media. In Ireland, the Data Protection Commissioner issued guidelines to candidates for election, and their representatives, on canvassing, data protection and electronic marketing for the General Election in 2016,3 as well as guidance on the use of electronic media for political direct marketing under the principles developed for the regulation of spam. All these are examples of initiatives that seek to develop guidance, either through self-regulation or issued by the regulator, and that have started to carve out principles of fairness in political micro-targeting vis-à-vis voters.

Theory vs practice

The advance of digital technologies and the subsequent uncertainties unleashed a period of moral panic among academics as well as social commentators. There is a strong market both for “TED-talk-sized utopian visions” and for the most dystopian doomsday scenarios of political deception, fragmentation and the ultimate collapse of democracies.

Comparing such speculations with the campaigns and elections up to the end of 2017 shows that speculation and actual practice on the ground were rarely in line. As outlined by the piece on the 2017 Dutch national elections by Tom Dobber et al., and by the German case study by Simon Kruschinski and André Haller, a number of practical limitations - such as the lack of expertise, funds or real demand, the particularities of the local jurisdiction, or the political and media systems - tend to moderate the theoretically expected (positive or negative) impact of PMT. On the other hand, few, if any, theoretical models managed to foresee what turned out to be the greatest threat of not only PMT, but of digital platforms and intermediaries in general: that bad-faith (state) actors possess the resources and the incentives to weaponise digital communication platforms - and, in times of elections, PMT - and turn them into highly disruptive tools of information warfare (Maréchal, 2016; Patrikarakos, 2017; Powers, 2010).

One might ponder whether this unexpected turn of events will actually be beneficial in mitigating the worst theoretically predicted threats of PMT and enable us to exploit the opportunities it offers in a more systematic and orderly manner. Threats such as the aforementioned Russian interference in the 2016 US presidential elections via social media suddenly turned PMT techniques into one of the most closely watched domains, in which everyone - from the media via congressional committees to national security services - seemed ready to monitor and regulate this space. One can only hope that under such intense scrutiny the more ordinary threats envisioned by theory will have less chance to turn into reality.

We also have to be aware that much of the literature on the opportunities, threats and actual use of PMT originates in the US. The reasons for that are obvious: US political campaigners have been developing PMT techniques since the early 2000s. Yet, as more than one contribution in this issue suggests, the PMT practices developed in the US do not enjoy universal appeal or success in other countries. As we discussed above, there are many factors that affect the applicability and effectiveness of particular PMT techniques, from the size of campaign budgets, via data protection rules, to the modalities of the political and media systems. As studies on actual PMT applications start to emerge, from Brazil to Germany, it is high time to consider which parts of the US-based theoretical discourse are applicable beyond the borders of the US political sphere. It would be a mistake to treat the US experience with PMT as a point each country will unavoidably pass in its teleological development of PMT-driven political communication. Instead, future research should carefully assess which of the opportunities and threats are the product of the particular, and rather exceptional, US political, social, economic and cultural conditions, and what barriers stand in the way of the global transfer of PMT practices, effects (positive or negative), actors, knowledge, and so on.

Moving beyond the state of play: the next generation of political micro-targeting research

Research on PMT is burgeoning. Put bluntly, PMT research suffers from an overdose of US perspectives, an underdose of European, South American and Asian perspectives, too heavy a reliance on interview data, and too strong an attachment to mono-theoretical paradigms.

As outlined above, there are good reasons why the US perspective is so dominant: many industry-based developments are spearheaded in the US, the US electoral advertising industry is large and advanced, and the US has databases, such as voter registration files, which are game changers in terms of PMT. However, these very characteristics also make it problematic that so much of our knowledge is based on this single case. Most other western democracies (to allow for some comparative benchmark) have much smaller political advertising industries (limited by generally more modest campaign budgets), different political, electoral and media systems, and less accessible voter databases. This means that non-US studies, in particular comparative ones, are needed as the field moves forward, so as to avoid basing our collective knowledge on a case which is exceptional (and thus abnormal) on many parameters.

Extant PMT research also suffers from a heavy reliance on one type of study. Whereas other areas of political communication research undoubtedly suffer from a strong reliance on content analyses, survey data or experimentation, the study of campaign techniques and the use of PMT relies too heavily on interview data. Such data are very informative and useful, but typically reveal less about the distribution of PMT and its usage, let alone its effects. This is important since much of the public debate centres on assumptions about usage and effects. The prevalence of qualitative approaches is especially limiting because PMT could easily lend itself to big-data-based quantitative research methods. Access to such datasets is an issue, and researchers should advocate transparency rules that enable such research (Bodo et al., 2017), but as a number of studies have proven, such quantitative approaches are not just feasible, but also shed light on phenomena invisible to qualitative methods (Puschmann, 2017).

Current PMT research also suffers from mono-theoretical blindness. Oftentimes the understanding and investigation of PMT provides only an overview of regulations, or focuses only on campaign practices or big data analytics. Too rarely are studies integrated such that different theoretical concepts, disciplines, and designs converge around answering PMT-related questions.

In this issue

This collection of papers comes out of the Personalised Communication project4 - an interdisciplinary project currently underway at the University of Amsterdam. It is the result of an international call for proposals and a subsequent international symposium held in Amsterdam on 22 September 2017. The symposium had the explicit aim of bringing together researchers from different disciplines and different political and media systems, all with an interest in PMT.

As we outlined above, we consider research on PMT of eminent importance. On the one hand, global players such as digital platforms, agencies and consultants have developed PMT-based products and services in the US market, which they are now offering across the globe. On the other hand, there are more and more willing buyers among local politicians and parties, who are glad to test whether PMT is able to deliver at least a fraction of its promised results. As the use of PMT is slowly becoming the new normal in political communication, it is also pertinent that we, as scholars, develop and refine the appropriate theories, research methods and frames with which we can study PMT across different territories, jurisdictions, media and political systems, and compare our findings, before making bold claims about the effects and normative implications of PMT.

This special issue does not alleviate all of these concerns about PMT and the research environment around PMT-driven practices. Nevertheless, this selection of studies does offer a greater diversity of cases and foci. Most empirical papers are cognizant of regulatory and normative implications, and vice versa. And while the prominence of interview-based findings is not combatted in this issue either, the papers rely on a wide variety of data.

The special issue includes four peer-reviewed articles. They touch on broader industry regulations, and offer case studies and insights from diverse jurisdictions such as Brazil, Germany and the Netherlands. At the end of the special issue we include an abbreviated version of a speech delivered by MEP Sophie in ‘t Veld at Data & Democracy, a conference on political micro-targeting held in Amsterdam in May 2017, as well as a reflection by renowned political communication scholar and expert on political micro-targeting Daniel Kreiss.

Jeff Chester and Kathryn Montgomery provide a much-needed, in-depth insight into the current state of the industry that services political micro-targeting efforts in the US. Even if in a few years none of the firms mentioned in this article remain, we believe we need a thorough overview of the state of play, the stakeholders and their claims as they stood in late 2017, after the double, and rather unexpected, shock of the surprise election of Donald Trump (November 2016) and the vote for the UK’s exit from the European Union (June 2016).

Mauricio Moura and Melissa Michelson’s empirical study on the effectiveness of different mobilisation efforts in Brazil serves as a useful reminder of why and how we should not take whatever we learn from the US experience for granted. Due to its unregulated nature, WhatsApp played a central role in the general elections in Brazil. Moura and Michelson’s study suggests that it has a strong ability to mobilise voters. Their study also reminds us that digital intermediaries can be used as a mass medium as much as a personalised medium.

Tom Dobber, Damian Trilling, Natali Helberger and Claes de Vreese’s account of the use of PMT techniques during the 2017 Dutch general elections highlights the fundamental differences between how PMT is regarded and used in the US and in another technologically and economically highly developed, but European, country. To explain the strikingly lower appeal of PMT in the Netherlands, Dobber et al. take the first steps towards outlining a theoretical framework in which all the factors that might affect local PMT applicability can be situated and assessed against each other.

Simon Kruschinski and André Haller look at the use of PMT in the canvassing efforts of German parties during the 2016 state parliament elections in Rhineland-Palatinate. The authors introduce a framework of macro-, meso- and micro-level constraints that define the use, and usefulness, of various PMT techniques in different countries. Using in-depth interviews, they then document German parties’ limited use of PMT. They suggest that besides system-level, contextual factors, such as budgetary restraints and strict legal rules on data protection and privacy, meso- and micro-level factors, such as party structures and campaign knowledge, also have a strong impact on the usefulness of micro-targeting.

Finally, we invited two contributions from two leading voices on political micro-targeting. Daniel Kreiss is the leading scholar on the use of PMT in the US political system. Sophie in 't Veld is a Dutch politician, Member of the European Parliament for Democrats 66, a Dutch liberal party. Their respective texts share a common concern for the future of democracy in an era dominated by unchecked commercial influence over the political communication process, foreign meddling, and unaccountable, opaque digital middlemen. But the differences between their perspectives and analyses are as important as their shared anxieties, as they reflect the particular social, economic and political conditions in which these texts were born.

Kreiss sees PMT as a technology that reflects, reinforces and amplifies the partisan conflicts at the heart of US politics: the political agonism, the clash of competing social groups, interests and values in the binary US political sphere. For him, the loss of the theoretical ideal of a rational debate between well-informed citizens due to PMT is of little relevance in the context of the ideological war between the two opposing US parties. In ‘t Veld, on the other hand, comes from the Netherlands, where 13 different parties make up the national parliament and another dozen are present in local legislatures, and she works in the European Parliament, where seven different party groups exist. For her, the deliberative aspects of democracy, and the negative effects PMT might have on them, are at the forefront.

The two closing opinions and the four peer-reviewed studies are all based on a shared concern for democracy. Their differences in methods, topics, coverage and angle highlight not just the diversity of the issues at hand, but also the much-needed diversity of voices that PMT research needs if we are to understand the role of political micro-targeting in our democracies.

References

Anstead, N., & Chadwick, A. J. (2009). Parties, election campaigning, and the Internet: toward a comparative institutional approach. In A. J. Chadwick & P. N. Howard (Eds.), Routledge handbook of internet politics (pp. 56–71). London: Routledge.

Apa, E., Bassini, M., Bruna, A., Blázquez, F. J. C., Cunningham, I., Etteldorf, C., … Rozendaal, M. (2017). Media coverage of elections: the legal framework in Europe (IRIS Special). (F. Courrèges, M. P. Sarl, N. Sturlèse, S. Pooth, E. Rohwer, & S. Schmidt, Trans.). Strasbourg: European Audiovisual Observatory. Retrieved from http://www.obs.coe.int/documents/205595/8714633/IRIS+Special+2017-1+Media+coverage+of+elections+-+the+legal+framework+in+Europe.pdf

Bennett, C. J. (2015). Trends in voter surveillance in western societies: privacy intrusions and democratic implications. Surveillance & Society, 13(3/4), 370-384. Retrieved from: https://ojs.library.queensu.ca/index.php/surveillance-and-society/article/view/voter_surv

Bennett, C. J. (2016). Voter databases, micro-targeting, and data protection law: can political parties campaign in Europe as they do in North America? International Data Privacy Law, 6(4), 261-275. doi:10.1093/idpl/ipw021

Balázs, B. (2014). Hacktivism 1-2-3: How Privacy Enhancing Technologies Change the Face of Anonymous Hacktivism. Internet Policy Review, 3(4). doi:10.14763/2014.4.340

Bodo, B., Helberger, N., Irion, K., Zuiderveen Borgesius, F., Möller, J., van de Velde, B., Bol, N., van Es, B., & de Vreese, C. (2017). Tackling the Algorithmic Control Crisis – The Technical, Legal, and Ethical Challenges of Research into Algorithmic Agents. Yale Journal of Law and Technology, 19, 133–180. Retrieved from: http://yjolt.org/tackling-algorithmic-control-crisis-technical-legal-and-ethical-challenges-research-algorithmic

Zuiderveen Borgesius, F. J., Möller, J., Kruikemeier, S., Fathaigh, R. Ó., Irion, K., Dobber, T., Bodo, B., & de Vreese, C. (forthcoming). Online Political Microtargeting: Promises and Threats for Democracy. Utrecht Law Review.

Caplan, R. (2016, November 22). Facebook Must Acknowledge and Change Its Financial Incentives. The New York Times. Retrieved from https://www.nytimes.com/roomfordebate/2016/11/22/how-to-stop-the-spread-of-fake-news/facebook-must-acknowledge-and-change-its-financial-incentives

Csigó, P. (2016). The neopopular bubble: speculating on “the people” in late modern democracy. Budapest; New York: Central European University Press.

Director of National Intelligence. (2017). Background to "Assessing Russian Activities and Intentions in Recent US Elections": The Analytic Process and Cyber Incident Attribution. Washington, DC: Office of the Director of National Intelligence.

Emmot, R. (2017, November 13). Spain sees Russian interference in Catalonia separatist vote. Reuters. Retrieved from https://www.reuters.com/article/us-spain-politics-catalonia-russia/spain-sees-russian-interference-in-catalonia-separatist-vote-idUSKBN1DD20Y

Greenberg, A. (2017, May 9). NSA Director Confirms That Russia Really Did Hack the French Election. Wired. Retrieved from https://www.wired.com/2017/05/nsa-director-confirms-russia-hacked-french-election-infrastructure/

Johnson-Cartee, K. S., & Copeland, G. (2013). Negative political advertising: Coming of age. Routledge.

Madrigal, A. C. (2017, October 24). When the Facebook Traffic Goes Away. The Atlantic. Retrieved from https://www.theatlantic.com/technology/archive/2017/10/when-the-facebook-traffic-goes-away/543828/

Maréchal, N. (2016). Automation, Algorithms, and Politics | When Bots Tweet: Toward a Normative Framework for Bots on Social Networking Sites (Feature). International Journal of Communication, 10, 10. Retrieved from: http://ijoc.org/index.php/ijoc/article/view/6180

McChesney, R. W. (1999). Rich media, poor democracy: communication politics in dubious times. Urbana: University of Illinois Press.

Parliamentary Assembly of the Council of Europe Resolution 2143. (2017). Online media and journalism: challenges and accountability. Retrieved from: http://assembly.coe.int/nw/xml/XRef/Xref-DocDetails-EN.asp?FileID=23455&lang=2

Patrikarakos, D. (2017). War in 140 characters: how social media is reshaping conflict in the twenty-first century. New York: Basic Books.

Powers, S. (2010). Weaponized Media, Legitimacy and the Fourth Estate: A Comment. Ethnopolitics, 9(2), 255-258. doi:10.1080/17449051003764855

Puschmann, C. (2017, August 2). How significant is algorithmic personalization in searches for political parties and candidates? Retrieved from https://aps.hans-bredow-institut.de/personalization-google/

Subramanian, S. (2017). Inside the Macedonian Fake-News Complex. Wired Magazine, 15. Retrieved from https://www.wired.com/2017/02/veles-macedonia-fake-news/

Wagner, K. (2017, October 23). Publishers might have to start paying Facebook if they want anyone to see their stories. Recode. Retrieved from https://www.recode.net/2017/10/23/16525192/facebook-explore-feed-news-media-audience-reach-traffic-test

Footnotes

1. Symposium on Political micro-targeting held in Amsterdam on 22 September 2017.

2. Stichting Reclame Code, Reclamecode Social Media (RSM) https://www.reclamecode.nl/nrc/pagina.asp?paginaID=289%20&deel=2

3. https://www.dataprotection.ie/docimages/documents/DPCanvasGuide.pdf

4. For more information on the research project, please visit www.personalised-communication.net


What kind of cyber security? Theorising cyber security and mapping approaches


Introduction

Cyber security has become a matter of increasing public prominence. This is evidenced by incidents broadly discussed in the media, such as Snowden’s 2013 leaks of secret and classified NSA surveillance programmes (Szoldra, 2016), the alleged Russian hacking of the 2016 US national elections (CNN Library, 2018), 2017’s Equifax breach, in which hackers gained access to sensitive, credit-relevant data on more than 100 million customers (Wattles & Larson, 2017), and the same year’s WannaCry attack, which held thousands of computers running Microsoft Windows to ransom (Fox-Brewster, 2017). However, the question of what cyber security is about and which kinds of actions cyber security concerns should lead to remains open. All of the above examples relate to cyber security, yet they are about different issues and concerns, relating to, for example, governmental surveillance, the economics of privacy, and cyber security and political decision-making. They also describe different kinds of incidents and breaches, involve different actors, ranging from corporations to intelligence agencies, citizens and nation states, and focus on different relationships between them.

Common definitions of cyber security often unite, or sit above, a range of issues, threats, activities and aspects. A German cyber security strategy, for example, states: “the availability of cyberspace and the integrity, authenticity and confidentiality of data in cyberspace have become vital questions of the 21st century. Ensuring cyber security has thus turned into a central challenge for the state, business and society both at national and international levels” (European Network and Information Security Agency, 2012, p. 4). In 2011, the Dutch Ministry of Security and Justice defined cyber security as a state of “being free from danger or harm caused by the malfunction or failure of ICT or its misuse” (Van Den Berg et al., 2014, p. 4). Others define cyber security as the “harmonisation of capabilities in people, processes, and technologies; to secure and control both authorised and/or unlawful access, disruption, or destruction of electronic computing systems (hardware, software, and networks), the data and information they hold”, or as the “effective cyber-secure operations that guarantee pre-set system objectives” (Ani, He, & Tiwari, 2016, p. 170).

This paper explores how such meanings of cyber security arise by identifying four structural components which approaches to cyber security include, and by examining four common approaches to cyber security. The analysis starts with the theoretical framework of Securitisation Studies and David Baldwin’s conceptual work on security (Baldwin, 1997; Buzan, Wæver, & de Wilde, 1998; Emmers, 2016). These works have provided answers to the questions of what exactly security is, how security issues are constructed and with what effects these issues are communicated. The two frameworks will be used in a complementary fashion in this paper, in order to build a constructivist account of cyber security. The following section then discusses how the insights this literature provides apply to cyber security in the context of internet governance. The section presents research from science and technology studies (STS) and computer ethics, which have sought to demonstrate the applicability of the Securitisation framework to information technologies and cyber security (Dunn Cavelty, 2013; Hansen & Nissenbaum, 2009; Nissenbaum, 2005; Wolff, 2016). Based on these conceptual clarifications, the paper describes the four structural components identified and proceeds to present and discuss four common approaches to cyber security: as data protection, as safeguarding financial interests, as the protection of public and political infrastructures, and as the control of information and communication flows.

The approaches each define the structural components differently, such as the threats they concern (e.g., those posed by corporations, hackers, citizens, other states), the objects they protect (e.g., public infrastructures, personal information, economic rules), the cyber security measures they utilise (e.g., technical measures, policies), and the responsibilities they assign to actors and stakeholders (e.g., corporate or governmental actors, citizens and individuals). Thus, each approach constructs a unique set of relationships between the actors involved. When actions are taken based on the approach chosen, these relationships are encoded into the technologies concerned. Interestingly, each approach is motivated and justified by its own set of values. This implies that, depending on the underlying approach, actions taken in the name of cyber security can promote the particular values that motivate and justify them. Conversely, cyber security approaches might be favoured depending on the values important to those who are making decisions.

The article postulates a close connection between cyber security governance, stakeholder relations and the promotion of values such as safety, privacy, fairness, free market competition and democracy. As any cyber security approach chosen shapes this connection, taking a closer look at the approaches behind cyber security initiatives or policies enables a better understanding of which values are prioritised and promoted, and of how relationships and responsibilities are constituted between the participating actors in the public sphere. This makes the question of how decisions about cyber security are made a question of public and political interest. The paper closes with a discussion of its findings and reflections on future research.

Section 2: Securitisation Studies and the concept of security

Securitisation and the Copenhagen School

A prominent approach to studying security, known as the Copenhagen School, has taken a constructivist approach to answering the question of what security entails. It offers a framework for studying the construction of security issues and its effects. The Copenhagen School proposes to widen the study of security beyond its traditional focus on military affairs and nation state actors to include a variety of threats posed in various sectors. These can be problematised and responded to by actors located on separate analytical levels (Buzan et al., 1998, pp. 5–10; Emmers, 2016, p. 132).

The Copenhagen School conceptualises security as a way of establishing relations and relationships. Responses to security issues establish relations between the entities and actors involved, for example between human collectives and groups, or between collectives and their environment (Buzan et al., 1998, p. 10). Further, different kinds of security as they pertain to different sectors (economic security, environmental security, social security) are about different kinds of relations. For example, “the political sector is about relationships of authority, governing status, and recognition; the economic sector is about relationships of trade, production, and finance; the societal sector is about relationships of collective identity” (Buzan et al., 1998, p. 7).

The school’s constructivist approach holds there is no security issue in itself or by virtue of its ‘essence’ (Buzan et al., 1998, p. 31; Emmers, 2016, p. 135). Issues are constructed and positioned as security issues within (public and political) discourses. Security is a speech act which moves an issue from the realm of normal politics to the realm of security, a move called securitisation (Buzan et al., 1998, pp. 23–26). When an issue is politicised, it becomes a matter of policy and governance to be debated and addressed by political procedures of decision-making. When an issue is securitised, it moves from the realm of standard political procedures to the realm of security and takes precedence over other issues, allowing for the employment of extraordinary measures outside what would normally be deemed acceptable (Buzan et al., 1998, pp. 21–22). Securitising an issue can thus help to justify certain activities, initiatives and policies and override other concerns as well as ethical or societal considerations.

In order for securitisation to be successful, an audience needs to be convinced of the existence of an existential and imminent threat to a cherished referent object (Buzan et al., 1998, pp. 21–24; Emmers, 2016, p. 132). A referent object is an entity seen as existential and fundamental to the survival of (human) life and the proper functioning of society. This threat and its potential for catastrophe justify precedence over other issues and the abrogation or breach of standard procedures and established rules and protocols (Buzan et al., 1998, pp. 24–25). As Buzan, Wæver and de Wilde put it, when an issue is successfully securitised, the very existence of human life and social order seems at stake: “If we do not tackle this problem, everything else will be irrelevant (because we will not be here or will not be free to deal with it in our own way)” (Buzan et al., 1998, p. 24). Securitisation studies “who securitizes [sic], on what issues (threats), for whom (referent objects), why, with what results, and […] under what conditions” (Buzan et al., 1998, p. 32).

The concept of security

Responding to the Copenhagen School of Securitisation, David A. Baldwin, in his article on ‘the concept of security’, criticises the way Securitisation scholars approach security, as it can lead to the view that security is an “essentially contested concept” (Gallie, 1955) “so value-laden that no amount of argument or evidence can ever lead to agreement” (Baldwin, 1997, p. 10). Rather than thinking of security as an essentially contested concept, he finds it to be a confused, insufficiently explicated and under-theorised concept which has received too little conceptual work (Baldwin, 1997, pp. 8–9, 24). There is something which distinguishes security issues from other issues, he argues, proposing to define security “in terms of two specifications: Security for whom? And security for which values?” (Baldwin, 1997, p. 13). Baldwin extends these two initial questions by formulating the further questions of ‘security from what threats?’ and ‘by what means?’ (Baldwin, 1997, pp. 15–16). The formulation of the structural components of cyber security in this paper departs from Baldwin’s questions, but excludes his further questions of ‘how much security’, ‘at what costs’ and ‘in what time period’, because the paper focuses on the values cyber security approaches promote and the relationships they create.

To summarise: Securitisation’s constructivist perspective on security states that nothing is a security issue in and of itself but rather issues are constructed as security issues. Constructing something as a security issue is a discursive move which equips the issue with a sense of urgency and priority and can be used to convince an audience of the need for taking action. However, as David Baldwin has argued, there are a number of structural questions which characterise security issues and distinguish them from others, such as ‘security for whom?’ and ‘security from which threats?’. The following section applies this constructivist approach to security in order to ask what exactly cyber security is and in order to analyse how securitisation applies to information technology and the internet.

Section 3: Cyber security and internet governance

The field of internet governance

Cyber security is a central part of internet governance, a field which is concerned with how to operate the internet on a structural and infrastructural level. The field addresses the technological, political and legal norms and rules of how we interact on and through the internet. Viewing internet governance as a multifaceted, “heterogeneous process of ordering without a clear beginning or endpoint” (Hofmann, Katzenbach, & Gollatz, 2016, p. 1412), scholars study diverse practices which have effects on the internet’s structure, infrastructure and operation, like institutional decisions and standardisation processes, governmental policies and the practices of service providers (Hofmann et al., 2016; van Eeten & Mueller, 2013). These practices are carried out by and include a broad range of actors, such as internet service providers (Marsden, 2013), nation states and their institutions (Deibert, 2009), international bodies like the EU (European Union, 2016), technical experts, and corporate and individual internet users (e.g., van Eeten & Mueller, 2013). A model often discussed with regard to internet governance is multistakeholder governance, which refers to the “joint management of Internet resources by governments, business and the civil society in their respective roles” (Cruz-Cunha & Portela, 2015, p. 397). Actors in multistakeholder governance can, amongst others, be states, formal intergovernmental organisations, firms, NGOs, civil society groups or individuals. There are different forms of multistakeholder involvement, depending on the actors involved and the relationships between them (DeNardis & Raymond, 2013).

Cyber security as a central area of internet governance similarly involves and relates different actors. According to Laura DeNardis, cyber security concerns “a variety of solutions and problems related to authentication, critical infrastructure protection, encryption, worms, viruses, denial of service attacks, and data interception and modification” (DeNardis, 2010, p. 10). Cyber security issues can be addressed by various internet governance mechanisms, for example by governance institutions which tackle issues via the design of technologies, protocols and policies or aim to secure infrastructures against breaches and attacks. At the same time, cyber security policies can also function as leverage points for effectuating broader structural effects and shaping relationships between actors (Fichtner, Pieters, & Teixeira, 2016). Together with Wolter Pieters and André Teixeira, I have already highlighted elsewhere the political dimension of cyber security and argued that ways of framing cyber security or making cyber security arguments shape how technological infrastructures are implemented and access and control rights are allocated (Fichtner et al., 2016).

Securitisation and information technology

Researchers from the fields of science and technology studies and computer ethics have identified cases in which alternative definitions of cyber security lead to policies with different, sometimes opposite, effects. For instance, in her essay on “where computer security meets national security”, Helen Nissenbaum contrasts two notions of security within the context of ICTs, ‘computer security’ and ‘cyber-security’. She explains how the notions imply different technical measures and protocols because they differ in their subjects and objects of threats (Nissenbaum, 2005). (Technical) computer security is concerned with “[a]ttacks that render systems, information, and networks unavailable to users, including for example, denial-of-service attacks and malware such as viruses, worms, etc. that disable systems or parts of them” or which “threaten the integrity of information or of systems and networks by corrupting data, destroying files or disrupting code, etc.” (Nissenbaum, 2005, p. 63). “Cyber-security”, in Nissenbaum’s terms, on the other hand concerns threats “posed by the use of networked computers as a medium or staging ground for antisocial, disruptive, or dangerous organizations and communications [...or t]hreats of attack on critical societal infrastructures, including utilities, banking, government administration, education, healthcare, manufacturing and communications media” (Nissenbaum, 2005, p. 64) where law enforcement and surveillance agencies are called upon. Together with Lene Hansen, Nissenbaum demonstrates that cyber security is a valid subject of investigation under the Securitisation framework which involves multiple discourses with their own unique constellations of referent objects, reaching across geographical and political boundaries (Hansen & Nissenbaum, 2009).

Taking up Securitisation’s focus on discourse, Myriam Dunn Cavelty identifies three dominant metaphors in the cyber security discourse: parasitic metaphors (worms, viruses), space metaphors (new frontier, cyberspace) and ecological metaphors (organism, ecosphere) (Dunn Cavelty, 2013). She demonstrates how these metaphors conceive of cyber security in ways which warrant different socio-technical responses and allocations of responsibility. When cyberspace is seen as a territory under threat of anarchism, cyber security is about physical infrastructures “subjected to the principles of territoriality and sovereignty” where state actors need to establish law and order, “control and borders” (Dunn Cavelty, 2013, p. 118). Ideas about cyberspace as its own, self-regulating organism conceptualise “the role of the state” less as that of a much-needed authority and rather as that “of a gardener and facilitator” (Dunn Cavelty, 2013, p. 119).

In her study on cyber security conflicts in internet governance forums, Josephine Wolff similarly presents interesting cases of dispute over the meaning and definition of cyber security, where “conflicting notions of security” sparked debates about which rules to implement (Wolff, 2016). She finds that definitions of security take place within a network of corporate and political interests. Accounts of what cyber security entails lead to the implementation of different infrastructural protocols and norms which play out to the advantage of some and to the disadvantage of other stakeholders (e.g., corporations, civil society organisations, governments). In order to sustain the cyber security approach they defended, the involved actors drew upon values and value conflicts, such as whether to prioritise protecting consumer safety and trust or the privacy of campaigners and fundraisers. These positions led to correspondingly different responses, such as permitting or prohibiting WHOIS privacy for websites engaged in commercial transactions (Wolff, 2016).

Scholars such as Nissenbaum, Dunn Cavelty and Wolff have found that, similarly to how Securitisation sees security, cyber security is a contested concept which can be constructed to be about different referent objects, threats and responses. The structural effects cyber security responses can have for internet governance depend on how cyber security is understood and realised and on who or what it ought to protect. Especially when security concerns override other concerns, it is important to carefully dissect what is being presented as a security issue and by whom.

Section 4: The structural components of cyber security

Building on Securitisation’s understanding of security as contingent and constructed within discourses, and on Baldwin’s conceptual work on security, this section outlines four structural components which together can build an approach to cyber security. The mapping of these components is based on the vocabulary provided by Securitisation Studies and the questions formulated by Baldwin.

Referent objects

Baldwin’s first question was “security for whom” (Baldwin, 1997, p. 13), which corresponds to what the Copenhagen School calls a referent object (Buzan et al., 1998, p. 36). This is the entity, object, system, unit or the like which is considered to be under threat or which ought to be protected. Referent objects are “seen to be existentially threatened and [as having] a legitimate claim to survival” (Buzan et al., 1998, p. 36). They can be concrete or abstract concepts like national sovereignty, political order and collective identities, but also human lives, values such as freedom and equality, the environment, cities, countries or technological infrastructures.

Securitising actors

Securitising actors, on the other hand, are the actors who securitise an issue, proposing the existence of a security issue and an existential security threat. While Securitisation’s notion of ‘securitising actors’ focuses on who securitises an issue within a discourse, i.e., who proposes or promotes a security issue, I expand the notion and utilise it to also describe actors who take on responsibilities and tasks for ensuring cyber security. This adaptation allows me to add a question of security by whom in order to look at who takes on responsibilities and attains certain rights. This question is interesting because the actors who take on responsibility are those who have to invest resources, but also those who can access and process information as well as control infrastructures.

Threats

Further, a security threat needs to be defined and conceptualised, corresponding to Baldwin’s question of “security from what threats”. Being defined as a threat implies being an unwanted participant in a technical infrastructure or system: an entity or actor deemed not to have certain rights to exert control or access data, for instance. When cyber security concerns the protection of personal communications from governmental surveillance, this implies the governmental institutions which engage in surveillance are understood as a threat and are not deemed to have the right to break into systems or devices or to access information.

Responses

Finally, either explicitly or implicitly, the proposition of certain responses or actions to be taken as a reaction to a proposed cyber security threat is another part of an approach to cyber security. Cyber security responses can be named or proposed directly, but they can also be implicit in the way the problem is framed. For instance, if cyber security is understood as data protection, the range of possible responses is limited to those which protect sensitive information. Responses can be realised on different infrastructural levels, take different (technological) forms and can be implemented by different means. They can be technical, for instance, when systems are technically secured against hacking attacks or when data is encrypted in order to be protected from unauthorised access. Other solutions can take place on a legal or policy level, such as when laws and agreements are implemented which define how data can be shared (e.g., the EU-US Privacy Shield) or, for instance, when organisations put password policies into place. Other possibilities are the establishment of best practices or cyber security education, for example when citizens learn about how phishing attacks operate. In practice, cyber security responses will often involve a combination of measures. Nevertheless, the kind of response put into place is shaped by the assumed causes of a security problem, for instance whether the problem is considered a technical issue or a regulatory loophole. Similarly, responses and their effects differ depending on who is made responsible for ensuring cyber security, whether, for example, technologists, engineers, lawmakers, politicians or citizens are held responsible.

Actors

The roles of referent object, securitising actor or security threat can be taken up by a variety of actors or by a combination of them. These actors can be, for instance, international institutions such as governance bodies like the EU, military partnerships like NATO, and international internet governance bodies like the Internet Engineering Task Force (IETF) or the Internet Corporation for Assigned Names and Numbers (ICANN). They can also be states, e.g., governments or heads of state, national institutions like ministries, law enforcement, secret services, political parties, research organisations, etc., or they can be non-governmental groups like activist and political groups, institutes or NGOs. In addition, corporate actors such as companies, production units or internet service providers can be involved in cyber security issues. And finally, there are individual internet users who can be concerned, either as hackers or as members of vulnerable groups threatened by certain risks. The way in which an approach to cyber security defines these roles creates relationships between those actors, stating who ought to be protected by whom and by what means, who can control infrastructures and access data, and who is deemed an unwanted participant in a technological infrastructure.

Next to human and institutional actors, technologies and technological infrastructures can function as referent objects or cyber security threats, or as the locus of cyber security responses. In many cases, technologies and technological infrastructures are related to human and institutional actors and are similarly located on different organisational levels within the internet infrastructure. Single devices belong to individuals and contain their personal data, corporations have their own networks or offer their services via the networks they sustain, and states and public institutions run critical and public infrastructures such as health care or public transport - infrastructures which in turn depend on others, such as the electricity network, for their smooth functioning. Individual hackers and organised criminal groups can use software to hack into corporate computer systems or employ botnets to break into people’s computers. National and international infrastructures can be targeted by hacker groups or governmental hackers, and individuals’ computers can be targeted for surveillance by governments and corporations.

The definitions of the four components (referent object, securitising actor, threat, security response) distinguish a security issue from other kinds of issues. However, each cyber security approach defines and interprets these components in its own way, distinguishing cyber security approaches and their consequences from each other. In cyber security practice, a similar mapping out and defining of such elements is called threat modelling, which includes describing the security threat that is being protected against as well as how it is expected to operate in order to penetrate a system or reach a protected asset (Shostack, 2014). The mapping of cyber security approaches in this paper goes beyond the technical notion of threat modelling. While threat modelling is mainly concerned with the decision trees along which threats can attempt to compromise a system, the approaches here are concerned with mapping out the kinds of threats posed to kinds of systems, the values which sustain them and the distributions of responsibility in resolving them.
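To make the narrower, technical notion concrete: threat modelling often represents the paths an adversary can take as an attack tree, in which the leaves are concrete attack steps and the internal nodes combine them with AND/OR logic. The following minimal Python sketch illustrates the idea; all node names and feasibility judgements are hypothetical examples rather than parts of any real threat model.

    # Minimal attack-tree sketch; all goals and feasibility values are
    # hypothetical illustrations, not a real threat model.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Node:
        goal: str               # what the attacker tries to achieve at this node
        gate: str = "OR"        # "OR": any child suffices; "AND": all children needed
        feasible: bool = False  # for leaves: is this step open to the modelled threat?
        children: List["Node"] = field(default_factory=list)

        def achievable(self) -> bool:
            """Can the modelled threat reach this goal, given the leaves?"""
            if not self.children:
                return self.feasible
            results = [child.achievable() for child in self.children]
            return all(results) if self.gate == "AND" else any(results)

    # Hypothetical question: can an outsider read a protected customer database?
    tree = Node("read customer database", gate="OR", children=[
        Node("steal admin credentials", gate="OR", children=[
            Node("phish an administrator", feasible=True),
            Node("brute-force the admin password", feasible=False),
        ]),
        Node("exploit the database server", gate="AND", children=[
            Node("find an unpatched vulnerability", feasible=False),
            Node("reach the server from the internet", feasible=True),
        ]),
    ])

    print(tree.achievable())  # True: the phishing branch makes the goal reachable

Such a tree evaluates whether a given threat can reach a given asset; the approaches mapped in this paper, by contrast, ask how ‘threat’ and ‘asset’ come to be defined in the first place, and which values sustain those definitions.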

Section 5: Four approaches to cyber security

This section outlines four common approaches to cyber security which differ in their referent objects, such as personal data, economic or political order, national infrastructures and public safety, the technological infrastructures they aim to secure, the threats they conceptualise and the actors they make responsible for ensuring security. The four approaches are separated for analytical purposes and in order to demonstrate how, by defining the structural components outlined above, divergent issues can be understood as cyber security issues. In practice, the approaches can also be entangled: for instance, a cyber security initiative can be oriented at protecting infrastructures against hacks that compromise the functionality of this infrastructure, while at the same time protecting (personal) data flowing through this infrastructure. Cyber security approaches can also be opposed to each other – this is where possibly hard choices have to be made. For instance, should cyber security responses protect personal communications against all kinds of intrusions or should they allow governmental or corporate agents to intercept and analyse communications in order to identify potential threats?

Cyber security as data protection

Where cyber security is concerned with the protection of sensitive and personal data and communications, or otherwise confidential information to be protected from interception and wiretapping, it is closely related to privacy concerns and data protection. Threats to data protection can be posed by criminal hackers who aim to break into systems in order to obtain information. They can also be posed by governments or corporations, where they, for instance, engage in surveillance of citizens and consumers, respectively. On a smaller scale, spouses and other individuals close to another person’s data can also pose a threat. Thus, there is a variety of actors who can act as threats. What unites approaches to cyber security as data protection is their aim to protect information against threats of unlawful or unwarranted access by other parties and against surveillance and wiretapping. Cyber security as data protection can be addressed by technical means, such as encrypted data storage and transmission and end-to-end encrypted messaging. But there are also non-technical kinds of responses, such as data protection legislation. Further, where individual users are seen as capable and responsible, educating them about safe data and internet practices and about privacy-friendly technologies can be another cyber security measure taken within this approach. Within corporations or organisations, this approach can also be pursued, for instance, by instituting password policies for employees.
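As an illustration of the technical end of this spectrum, the following minimal sketch uses the Fernet recipe from the widely used Python cryptography library to encrypt data at rest. It is meant only to illustrate the kind of measure referred to here, not to serve as a complete design: key management, arguably the hardest part in practice, is deliberately left out.

    # Minimal illustration of encrypted data storage using symmetric encryption.
    # Requires the third-party "cryptography" package (pip install cryptography).
    from cryptography.fernet import Fernet

    # In practice the key must be generated once and stored securely
    # (for example, in a dedicated key management system); here it only
    # lives in memory for the duration of the example.
    key = Fernet.generate_key()
    fernet = Fernet(key)

    record = b"hypothetical sensitive record: name, address, health data"
    token = fernet.encrypt(record)  # ciphertext safe to store or transmit

    # Without the key, the stored token is unintelligible to an intruder;
    # with the key, the data can be recovered intact.
    assert fernet.decrypt(token) == record

End-to-end encrypted messaging works on the same basic principle, except that keys are negotiated between the communicating devices, so that not even the service provider can read messages in transit.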

Which kind of solution is applied in any particular case of cyber security as data protection depends on where trust and responsibilities are placed. Where governments, law enforcement and public institutions are not trusted, or are even seen as potential threats, technical measures that can be independently developed, tested and implemented might appear the best solution. This is for instance the case where encrypted messaging apps are developed open source in order to help prevent private communications from being intercepted. Where governmental institutions and their ability to regulate are trusted and corporations are seen as adversarial, legal measures might be chosen. This is the case, for instance, with regulations such as the EU’s General Data Protection Regulation, which sets rules for how corporations may handle the data they collect on citizens. Where criminal activities such as identity theft are concerned, law enforcement might be called upon.

Where cyber security ought to protect data in order to ensure privacy, it often seeks to assert the rights of individuals and (vulnerable) groups and to protect them against more powerful agents, or against, for instance, exploitation and manipulation by companies or intrusive or authoritarian governments. This happens where citizens are protected against governmental overreach or where consumer protections are enforced in the form of responsible data policies. However, cyber security as data protection is also concerned in cases where companies aim to protect their confidential information from industrial espionage or where governments aim to protect their employees and institutions. This is where the approach can overlap with the one presented in the next section.

A case exemplifying the approach of cyber security as data protection was the case of Apple vs. the FBI in 2016 (Krüger, 2016; Spiegel Online, 2016). In this case, the US Federal Bureau of Investigation, a state institution and national law enforcement unit, requested that the technology giant Apple provide it with software that would enable the agency to break into a suspect’s phone in order to access the personal information it had stored. Apple declined this request, arguing that providing the state with software able to break its product’s privacy protection would compromise the company’s customers’ privacy and consequently their trust in the company. In this case, Apple ‘sided’ with citizens and civil rights activists aiming to protect privacy against possibilities of governmental surveillance. Here, the private information of customers was supposed to be protected from the threat of governmental surveillance, which could also create security loopholes that could be exploited. In this case, the company acted as the securitising actor: it had put strong encryption on its devices, it refused to crack this encryption, and it argued that doing so would compromise cyber security in the sense of data protection.

Cyber security as safeguarding financial interests

Another cyber security approach aims at protecting financial assets or securing commercial revenues. In this area, cyber security is perceived to be steered by the market – if information technologies ought to become more, say, privacy-friendly, this development would need to be enforced via consumer choices. Cyber security ensures compliance with existing economic rules and laws and ought to protect fair competition and market principles; states and governments have to uphold these principles by means of regulation and law enforcement. The exact response proposed by an approach to cyber security as safeguarding economic interests depends on the kinds of economic losses expected and the revenue models considered. Potential threats can be posed by cybercriminals or blackmailers, other companies and competitors, the governments of other states, and political groups and activists, amongst others.

Most companies use ICTs to organise business processes, relying on ICT systems and digitally stored business information. Protecting these systems and confidential information against potential intruders and eavesdroppers secures economic advantages and ensures what is understood as fair competition. ICT systems are also responsible for the smooth functioning of production and services: systems that malfunction as a consequence of intrusion, manipulation or shut-down can result in a loss of revenue. Where services relate to critical infrastructures such as public transportation or health care, or where sold products can potentially harm consumers (e.g., self-driving cars; see for example European Network and Information Security Agency, 2013), cyber security incidents could hurt consumers and lead to a loss of trust in the company, if not to legal consequences. In addition, companies hold much personal data; some even make their money off personal data. Protecting this data on behalf of their customers is necessary for complying with the law, but also for maintaining customers’ trust.

This last aspect is closely related to the Apple vs. FBI case mentioned in the previous section. When looking at this case from another perspective, it could also be used as an example for the approach presented in this section. While an approach to cyber security as data protection would argue that cyber security responses need to protect people’s privacy and personal data, an approach to cyber security as safeguarding economic interests would see the FBI’s request as a threat to the company and its revenue by compromising its products and alienating its customers. In this case, cyber security is an essential business asset. What is thus important from this perspective are economic and financial aspects and the values related to those.

Another case of cyber security as safeguarding economic interests is the enforcement of (digital) copyrights. The American “Digital Millennium Copyright Act” prohibits the owners of digital devices from tampering with or breaking any digital locks put on a device to protect against copyright infringements by users (Doctorow, 2016; Mullin, 2013). Such locks seek, for instance, to prevent the recording of streamed videos. Some have argued, however, that the prohibition on tampering with digital locks actually decreases cyber security, because it prevents researchers from testing the locks’ actual technical security and from disclosing discovered vulnerabilities (Doctorow, 2016). Hence, the locks can end up making devices more vulnerable, jeopardising the security of individuals’ devices and the personal information stored on them.

Cyber security as the protection of public and political infrastructures

Where politicians and public policy officials talk about cyber security, they often speak about the protection of public, sometimes vital, infrastructures such as communication systems, electric grids, hospitals and public transport. With the spread of advanced information technology, more and more public and vital infrastructures are connected to or operate on the internet. A compromise of these infrastructures can slow down a country’s development, upset social order or result in injuries and deaths. In addition, political parties or political systems such as e-voting can be attacked and manipulated (CNN Library, 2018). This can threaten due political process and the integrity of national elections.

Threats are posed by lone hackers and even experimenting teenagers (Computerwoche, 2008), but the most severe threats appear to be politically motivated, coming from political and (para)military groups, activists, and (hostile) states and their military and secret services. These actors aim at destabilising a country or demonstrating military strength, and such attacks are often considered military threats and linked to acts of cyber-warfare (e.g., Davis, 2007; Traynor, 2007). Consequently, securitising actors are often military units and international military alliances as well as national law enforcement and intelligence agencies. Where public and political infrastructures are run by private corporations or as public-private partnerships, responsibilities can also be assigned to companies. One example of such an attack on public infrastructure is the Stuxnet malware, which appears to have aimed at slowing down Iran’s nuclear programme (Stöcker, 2010). Another example is the series of attacks on US electricity and water infrastructures, which appear to have been ongoing since 2016 and are allegedly carried out by Russian actors (Perlroth & Sanger, 2018).

Within this approach, possible responses can take the form of systems and security engineering, which ought to make breaking into and manipulating systems more difficult and provide effective ways of mitigating breaches. Technical measures include network monitoring and data analysis; some even propose more offensive strategies that fight back and attack the attackers themselves (Roggeveen, 2017; Paganini, 2013). Developing cyber security standards and policies (e.g., NIST, 2014) and applying political diplomacy are additional responses.
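
As an illustration of the network monitoring mentioned above, the following is a minimal sketch that flags hosts whose connection volume deviates strongly from the fleet average; the log format, addresses and two-sigma threshold are assumptions made for the example, not a description of any deployed system.

```python
# Toy network monitoring: flag hosts with anomalously many connections.
# Log format, addresses and threshold are invented for illustration.
from collections import Counter
import statistics

# Hypothetical connection log: (source_host, destination_port) tuples.
log = [(f"10.0.0.{i}", 443) for i in range(20) for _ in range(10)]
log += [("203.0.113.7", 445)] * 500           # one unusually active host

counts = Counter(src for src, _ in log)
mean = statistics.mean(counts.values())
stdev = statistics.pstdev(counts.values())

for host, n in counts.items():
    if n > mean + 2 * stdev:                  # crude two-sigma anomaly rule
        print(f"alert: {host} made {n} connections (fleet mean {mean:.0f})")
```

Real intrusion detection systems are vastly more sophisticated, but the basic logic – establish a baseline, then alert on deviations – is the same.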

Of course, this approach can also have overlaps with other approaches, for instance where infrastructures are run by private companies or as public-private partnerships, involving corporate financial interests. What separates this approach from the others is its focus on cyber security as being about protecting public and political infrastructures in order to ensure their smooth functioning within our societies and the kinds of lives they enable for us. The values that motivate such an approach to cyber security are social and public values such as public safety, national integrity, peace and democracy. Protecting public infrastructures is essential for the functioning of society as a whole: it ensures things like the internet, electricity, health care and public transport and protects public safety and the functioning of political structures.

Cyber security as control of information and communication flows

The final approach to cyber security presented here can at times appear antagonistic to the other approaches. It is often more concerned with breaking into systems than with protecting against breaches. What holds together the different cyber security issues and responses within this approach is their shared aim of controlling information flows. The approach focuses on the human use of communication systems for a variety of purposes, including political activism and opposition, spreading political messages, (false) information and propaganda, or organising (politically motivated) acts of violence.

Approaches to cyber security as the control of information and communication flows focus on methods which involve extensive data surveillance. There are two separate aspects involved: one is surveillance of communications and collection of intelligence in order to identify potential threats, and the second is utilising surveillance in order to directly moderate and censor information shared online. Both aspects are concerned with the content of online communications and the transition between them is fluid. Surveillance can be used to identify undesired political activism or political violence, but also to regulate what is allowed to be communicated and to enforce censorship rules. The collection and analysis of information flows on the network can be used for identifying and countering potential threats and conspiracies, but also for regulating the content of information and opinions posted and shared.

In many cases, governments, state institutions and regulators act as securitising actors – security issues are often evoked where justifications for governmental surveillance are made (Owen & McCarthy, 2013). But corporations and internet service providers are also involved. They are called upon to combat hate speech and fake news and to provide governments, intelligence and law enforcement agencies with data about their users (Eddy & Scott, 2017; MacAskill & Rushe, 2013; Timm, 2014; Wong, 2016). They further apply measures according to their own terms and conditions, following values they deem fit and acceptable for the majority of their customers (Bhattacharya, 2016; Heath, 2017). An example is a law passed in Germany in 2017, the Netzwerkdurchsetzungsgesetz (NetzDG), which obliges social media companies to delete illegal posts flagged or reported by users, such as hate speech, within 24 hours (“Germany starts enforcing hate speech law,” 2018).

Approaches to cyber security as the control of information and communication flows are often motivated by values like national security, the rule of law, public safety and political stability. Here, the approach overlaps with approaches to cyber security as the protection of public and political infrastructures. Both can aim at ensuring public safety and political integrity, and technical surveillance and data analysis techniques can be used to identify threats on the network. However, while the former approach is concerned with ensuring the smooth functioning of infrastructures operated by ICTs, this approach is concerned with identifying threats via intelligence on human activity and then acting upon those threats, either outside the infrastructure or by controlling communications.

Where surveillance is used for identifying threats via the interception of communications, this is often justified by a need to protect against activities seen to threaten the state, its stability and integrity, and public order. For instance, the government’s stance in the Apple vs. FBI case was that breaking into the iPhone would provide important information for ensuring national security. Similarly, where online information and content is censored, this information is often seen as seditious, as threatening social order and societal and political norms and rules.

Whether or not activities of data-based surveillance and online content moderation should be considered cases of cyber security remains contested. One may argue that they are rather activities which use data analytics in order to identify and prevent threats for diverse security purposes, but that, in contrast to the other approaches, they are not necessarily concerned with securing technical systems. At the same time, identifying and controlling information flows in order to ensure predefined system functionality, and in order to control who can do what, seems like a prototypical cyber security activity, even though the aim is not to keep threats from breaking into technical systems. Further, many of the activities carried out under this approach require extensive cyber security expertise. Surveillance – collecting and analysing data streams – is a major activity under the umbrella of cyber security, and there is significant overlap and entanglement between issues of information control and cyber security issues. For these reasons, the paper includes this approach. In addition, it would have seemed reductive to bracket out of a conceptual discussion on cyber security the whole range of activities related to surveillance, censorship and online content moderation.

Section 6: Discussion and future research

Building on previous research in STS and computer ethics, the paper presents a constructivist approach to cyber security, describing how issues can be constructed as cyber security issues, sometimes with adverse effects. The framework the paper develops builds on the Copenhagen School of securitisation, which holds that issues are securitised within discourses – constructed as security issues in which an existential and imminent threat is posed to a cherished, invaluable referent object – in order to justify actions presented as necessary security responses. Combining the insights securitisation theory provides with David Baldwin’s conceptual work on security, the paper proposes four structural components which together build an approach to cyber security. Which approach to cyber security is then chosen or given priority determines how these structural components are filled and which roles are given to the actors involved, relating them in definite ways.

The paper’s distinction between four common approaches to cyber security is analytical in nature. Concrete instances of cyber security located within the four approaches can still vary – for instance, data can be protected against governmental or against corporate surveillance. Approaches to cyber security can overlap where threats, referent objects or security responses are identical – from the viewpoint of the companies which operate public infrastructures, the protection of public and political infrastructures can, for instance, coincide with safeguarding their economic interests. Approaches stand in opposition to each other where the threat of one is the securitising actor of the other, and where the one approach’s response jeopardises the other’s security, for instance when data protection threatens surveillance mechanisms.

The role of values for cyber security decisions

Each cyber security approach is motivated and justified by appealing to values which it aims to promote, such as freedom of speech, democracy, social order, economic freedoms, public safety and human rights. The set of values underlying an approach to cyber security shapes how the approach defines its structural components. Approaches aiming to protect privacy, freedom of speech, economic interests, human rights, public safety, political order, human integrity, national sovereignty, cultural norms, fair competition, and so on, will differ in the referent objects, technological infrastructures and threats they consider, the actors they trust, and the priorities they have. A debate about which cyber security issues we face and which cyber security responses to adopt is not just a debate about which responses are most effective or in least conflict with other values such as privacy or innovation. Rather, it is a debate about which values ought to be upheld and promoted, which values we, as a society, find most important or see most threatened.

The paper thus demonstrates how deeply intertwined seemingly technical matters of cyber security can be with societal, political and ethical issues. There are close connections and interactions between decisions on internet and cyber security governance, stakeholder relationships and social, ethical and political norms and values. Paying attention to which kind of approach underpins a debate on cyber security or motivates responses to cyber security issues can help us understand the values at stake and the relationships enforced. Thus, analysing adopted approaches to cyber security can tell us which values might be most important to those making the decisions as well as which kind of audience can potentially be convinced by the approach and the case it makes for why cyber security is important. Similarly, we have elsewhere argued that ways of framing cyber security can be a means of governing information infrastructures and mediating access and control rights (Fichtner et al., 2016).

How values, approaches to cyber security and audiences relate, and how securitisation works in the case of cyber security, are empirical questions, but they also raise normative ones. Questions which follow from a constructivist approach to cyber security are how the term should be defined and which responses it should entail. If it is true that presenting an issue as a cyber security issue is a convincing argument for taking action, this is a highly significant aspect for public and political debate. The question then remains of what we ought to do and what the ethics are of talking about cyber security. How resources for cyber security should be allocated or which approaches to cyber security should be prioritised are other related questions. Another important point is how to critically reflect on the respective approach taken, as there might be overlaps or conflicts with other approaches and the values they safeguard. For instance, a completely anonymous network might protect sensitive information about its participants but endanger network security. When only one approach to cyber security is considered, this might distract from other important issues or obscure that only certain kinds of risks are secured against, while others are not considered.

Scope of the framework

The paper presented one framework for conceptualising cyber security, distinguishing cyber security approaches based on the structural components presented above and the values which motivate and justify them. This perspective says little about how cyber security issues should be approached; the normative claim it makes is that, when devising cyber security policies, we should pay attention to the approach chosen and make explicit the underlying norms and assumptions. The proposed conceptualisation also says little about how to implement cyber security – about the process of doing cyber security, building cyber security capacities or developing cyber security incentives.

Other frameworks for conceptualising cyber security differ in their analytic focus and in the kind of analysis they enable. For instance, the Oxford Cybersecurity Capacity Maturity Model is concerned with cyber security capacity building (Global Cyber Security Capacity Centre, 2014). The model distinguishes five dimensions of cyber security capacity building, which are further divided into factors, which are in turn analysed based on categories classified according to levels of maturity. While the model draws other distinctions than the ones proposed in this paper, it makes reference to several of them. For example, it talks about involved actors as “strategy ‘owners’” (Global Cyber Security Capacity Centre, 2014, p. 8), which corresponds to the notion of securitising actors, and it refers to kinds of cyber security responses such as legal, technical and educative ones. It also distinguishes between a number of actors and sectors, such as civil society and the public and private sectors, and refers to the responsibilities and responses of corporate, governmental and military actors (Global Cyber Security Capacity Centre, 2014, pp. 17 & 28). The model does not systematically differentiate between various threats, but it includes, for instance, a subcategory of “privacy, data protection & other human rights” (Global Cyber Security Capacity Centre, 2014, p. 30), and it discusses the protection of critical infrastructure. Different frameworks for conceptualising cyber security do not necessarily compete, but can complement each other.

Directions for future research

The paper opens up new empirical, conceptual and normative questions for future research. An empirical analysis could aim at measuring the effects of securitisation with regard to cyber security and at refining and expanding the cyber security approaches presented. The challenge here would be to identify the discourses relevant for cyber security decisions. Central empirical questions are: who securitises issues with regard to information technologies, who is the relevant audience to be convinced, and who makes decisions with regard to cyber security? Is there a public or political discourse on these matters, and if so, on which questions? And which questions are perhaps left to technologists and internet governance forums? While there might be a quite obvious public discourse on questions of privacy and national security, are there other aspects of cyber security that are not debated, or only very implicitly? And in what way, if at all, does cyber security take on a special status that allows for the implementation of special responses? Or does cyber security turn out to be just one issue of many in internet governance? A conceptual question, on the other hand, is what counts as cyber security and what does not – that is, which activities does the label ‘cyber security’ describe? Here, conceptual clarification of how the meaning and usage of the term relates to other security terminology used with regard to information technology, such as information security, digital security and internet security, could shed further light on conceptual matters of cyber security. These questions are closely intertwined with normative questions: what should be labelled a cyber security issue and why? Which cyber security approaches should be prioritised, and with what effect? Which values should cyber security initiatives promote? Who should make decisions concerning cyber security issues, and how should they approach these issues? And which approaches to cyber security might be problematic because they conflict with other approaches and the values they aim to uphold?

Acknowledgements

I would like to thank Wolter Pieters, André Teixeira and Jan van den Berg for their supervision, support and feedback during my time at TU Delft. The discussions with them provided an important contribution to developing the ideas presented in this paper. I would also like to thank Michel van Eeten and his cyber security group for inspiring discussions on the issues discussed in this paper. Finally, I would like to thank Judith Simon for valuable feedback on an earlier draft.

References

Ani, U. P. D., He, H. M., & Tiwari, A. (2016). Human capability evaluation approach for cyber security in critical industrial infrastructure. In D. Nicholson (Ed.), Advances in Human Factors in Cybersecurity (Vol. 501, pp. 169–182). Cham, CH: Springer. doi:10.1007/978-3-319-41932-9_14

Baldwin, D. A. (1997). The concept of security. Review of International Studies, 23(1), 5–26. Retrieved from https://www.cambridge.org/core/journals/review-of-international-studies/article/the-concept-of-security/67188B6038200A97C0B0A370FDC9D6B8

Bhattacharya, A. (2016, October 12). Facebook is under fire for censorship again, this time for blocking an image of a mammogram. Quartz. Retrieved from https://qz.com/807427/facebook-fb-is-under-fire-for-censorship-again-this-time-for-blocking-an-image-of-a-mammogram/

Buzan, B., Wæver, O., & de Wilde, J. (1998). Security: A new framework for analysis. Boulder, CO: Lynne Rienner Publishers.

CNN Library. (2018, February 21). 2016 presidential campaign hacking fast facts. CNN. Retrieved from http://edition.cnn.com/2016/12/26/us/2016-presidential-campaign-hacking-fast-facts/index.html

Computerwoche. (2008, January 18). Infrarotes Licht als Steuersystem: Polen: Teenager hackt Straßenbahn mit Fernbedienung. Retrieved from https://www.tecchannel.de/a/polen-teenager-hackt-strassenbahn-mit-fernbedienung,1744101

Cruz-Cunha, M. M., & Portela, I. M. (Eds.). (2015). Handbook of research on digital crime, cyberspace security, and information assurance. Hershey, PA: IGI Global.

Davis, J. (2007, August 21). Hackers take down the most wired country in Europe. Wired. Retrieved from https://www.wired.com/2007/08/ff-estonia/

Deibert, R. J. (2009). The geopolitics of internet control: Censorship, sovereignty, and cyberspace. In A. Chadwick & P. N. Howard (Eds.), Routledge Handbook of Internet Politics (pp. 323–336). Oxon, UK: Routledge.

DeNardis, L. (2010). The emerging field of internet governance. Yale Information Society Project Working Paper Series. doi:10.2139/ssrn.1678343

DeNardis, L., & Raymond, M. (2013). Thinking clearly about multistakeholder internet governance. GigaNet: Global Internet Governance Academic Network, Annual Symposium 2013. Retrieved from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2354377

Doctorow, C. (2016, July 21). America’s broken digital copyright law is about to be challenged in court. The Guardian. Retrieved from https://www.theguardian.com/technology/2016/jul/21/digital-millennium-copyright-act-eff-supreme-court

Dunn Cavelty, M. (2013). From cyber-bombs to political fallout: Threat representations with an impact in the cyber-security discourse. International Studies Review, 15(1), 105–122. doi:10.1111/misr.12023

Eddy, M., & Scott, M. (2017, June 30). Delete hate speech or pay up, Germany tells social media companies. New York Times. Retrieved from https://www.nytimes.com/2017/06/30/business/germany-facebook-google-twitter.html

Emmers, R. (2016). Securitization. In A. Collins (Ed.), Contemporary Security Studies (pp. 168–181). Oxford, UK: Oxford University Press.

European Network and Information Security Agency. (2012). National cyber security strategies: Setting the course for national efforts to strengthen security in cyberspace.

European Union. European Union General Data Protection Regulation (2016). Retrieved from http://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32016R0679

European Union Agency for Network and Information Security. (2013). Smart grid threat landscape and good practice guide. doi:10.2824/34387

Fichtner, L., Pieters, W., & Teixeira, A. (2016). Cybersecurity as a Politikum: Implications of security discourses for infrastructures. In NSPW ’16 Proceedings of the 2016 New Security Paradigms Workshop (pp. 36–48). New York, New York, USA: ACM Press. doi:10.1145/3011883.3011887

Fox-Brewster, T. (2017, May 12). An NSA cyber weapon might be behind a massive global ransomware outbreak. Forbes. Retrieved from https://www.forbes.com/sites/thomasbrewster/2017/05/12/nsa-exploit-used-by-wannacry-ransomware-in-global-explosion/#5d074224e599

Gallie, W. B. (1955). Essentially Contested Concepts. Proceedings of the Aristotelian Society, 56, 167–198.

Germany starts enforcing hate speech law. (2018, January 1). BBC News. Retrieved from http://www.bbc.com/news/technology-42510868

Global Cyber Security Capacity Centre. (2014). Cyber security capability maturity model (CMM). Retrieved from http://www.sbs.ox.ac.uk/cybersecurity-capacity/system/files/CMM Version 1_2_0.pdf

Hansen, L., & Nissenbaum, H. (2009). Digital disaster, cyber security, and the Copenhagen school. International Studies Quarterly, 53(4), 1155–1175. doi:10.1111/j.1468-2478.2009.00572.x

Heath, A. (2017, June 28). Facebook’s rules on hate speech leaked in new investigation. Business Insider Deutschland. Retrieved from http://www.businessinsider.de/facebook-hate-speech-rules-leaked-2017-6?r=US&IR=T

Hofmann, J., Katzenbach, C., & Gollatz, K. (2016). Between coordination and regulation: Finding the governance in Internet governance. New Media & Society, 19(9), 1406–1423. doi:10.1177/1461444816639975

Krüger, P. S. (2016, March 17). Warum der Streit zwischen Apple und dem FBI so wichtig ist. Süddeutsche Zeitung. Retrieved from http://www.sueddeutsche.de/digital/verschluesseltes-iphone-warum-der-streit-zwischen-apple-und-dem-fbi-so-wichtig-ist-1.2908166

MacAskill, E., & Rushe, D. (2013, November 1). Snowden document reveals key role of companies in NSA data collection. The Guardian. Retrieved from https://www.theguardian.com/world/2013/nov/01/nsa-data-collection-tech-firms

Marsden, C. T. (2013). Network neutrality: A research guide. In I. Brown (Ed.), Research Handbook on Governance of the Internet (pp. 419–444). Cheltenham, UK: Edward Elgar Publishing.

Mullin, J. (2013, March 6). Copyright reformers launch attack on DMCA’s “digital locks” rule. Ars Technica. Retrieved from https://arstechnica.com/tech-policy/2013/03/copyright-reformers-launch-attack-on-dmcas-digital-locks-rule/

National Institute of Standards and Technology. (2014). Framework for improving critical infrastructure cybersecurity.

Nissenbaum, H. (2005). Where computer security meets national security. Ethics and Information Technology, 7(2), 61–73. doi:10.1007/s10676-005-4582-3

Owen, P., & McCarthy, T. (2013, October 29). Intelligence officials defend surveillance tactics in Congressional hearing. The Guardian. Retrieved from https://www.theguardian.com/world/2013/oct/29/nsa-files-us-intelligence-officials-testify-in-congress-live-coverage

Paganini, P. (2013, July 18). The offensive approach to cyber security in government and private industry. Infosec Institute. Retrieved from http://resources.infosecinstitute.com/the-offensive-approach-to-cyber-security-in-government-and-private-industry/#gref

Perlroth, N., & Sanger, D. E. (2018, March 15). Cyberattacks put Russian fingers on the switch at power plants, U.S. says. The New York Times. Retrieved from https://www.nytimes.com/2018/03/15/us/politics/russia-cyberattacks.html

Roggeveen, B. (2017, August 8). NATO needs an offensive cybersecurity policy. Atlantic Council. Retrieved from http://www.atlanticcouncil.org/blogs/new-atlanticist/nato-needs-an-offensive-cybersecurity-policy

Shostack, A. (2014). Threat modeling: Designing for security. Indianapolis, IN: John Wiley & Sons. Retrieved from https://news.asis.io/sites/default/files/Threat Modeling.pdf

Spiegel Online. (2016, March 2). Apple vs. FBI: Jetzt geht es auch offiziell um mehr als ein iPhone. Retrieved from http://www.spiegel.de/netzwelt/netzpolitik/apple-vs-fbi-nur-ein-iphone-jetzt-geht-es-auch-offiziell-um-mehr-a-1080209.html

Stöcker, C. (2010, December 26). Angriff auf Irans Atomprogramm: Stuxnet-Virus könnte tausend Uran-Zentrifugen zerstört haben. Spiegel Online. Retrieved from http://www.spiegel.de/netzwelt/netzpolitik/angriff-auf-irans-atomprogramm-stuxnet-virus-koennte-tausend-uran-zentrifugen-zerstoert-haben-a-736604.html

Szoldra, P. (2016, September 16). This is everything Edward Snowden revealed in one year of unprecedented top-secret leaks. Business Insider. Retrieved from http://www.businessinsider.de/snowden-leaks-timeline-2016-9?r=US&IR=T

Timm, T. (2014, October 17). The government wants tech companies to give them a backdoor to your electronic life. The Guardian. Retrieved from https://www.theguardian.com/commentisfree/2014/oct/17/government-internet-backdoor-surveillance-fbi

Traynor, I. (2007, May 17). Russia accused of unleashing cyberwar to disable Estonia. The Guardian. Retrieved from https://www.theguardian.com/world/2007/may/17/topstories3.russia

Van Den Berg, J., Van Zoggel, J., Snels, M., Van Leeuwen, M., Boeke, S., Van De Koppen, L., … De Bos, T. (2014). On (the emergence of) cyber security science and its challenges for cyber security education. Proceedings of the NATO IST-122 Cyber Security Science and Engineering Symposium, 1–12. Retrieved from https://www.csacademy.nl/images/MP-IST-122-12-paper-published.pdf

Van Eeten, M. J. G., & Mueller, M. (2013). Where is the governance in Internet governance? New Media & Society, 15(5), 1–17. doi:10.1177/1461444812462850

Wattles, J., & Larson, S. (2017, September 16). How the Equifax data breach happened: What we know now. CNN Tech. Retrieved from http://money.cnn.com/2017/09/16/technology/equifax-breach-security-hole/index.html

Wolff, J. (2016). What we talk about when we talk about cybersecurity: Security in internet governance debates. Internet Policy Review, 5(3). doi:10.14763/2016.3.430

Wong, J. I. (2016, February 19). Here’s how often Apple, Google, and others handed over data when the US government asked for it. Quartz. Retrieved from https://qz.com/620423/heres-how-often-apple-google-and-others-handed-over-data-when-the-us-government-asked-for-it/

Algorithmic governance and the need for consumer empowerment in data-driven markets

Introduction: personal data as currency

The Cambridge Analytica case, in which third-party app developers gained access to large amounts of Facebook users’ data and used it for political campaigning, has not only spurred much debate on the need for algorithmic governance of platforms in today’s networked publics, but also stresses the need for consumer empowerment in data-driven markets overall. The case highlights the lack of transparency that the ecology of actors collecting, handling and sharing personal data for various purposes ultimately means for consumers, and thereby also the difficulty of assessing the true value of the data collected. However, this lack of transparency and of informed consumers is not an anomaly confined to a few specific cases, but rather the norm of a data-driven economy. The main argument here therefore addresses the need for consumer empowerment in terms of transparency and ill-functioning notions of consent, in general, and the methodological capabilities of consumer protection agencies, in particular.

In other words, from a consumer protection perspective, the data-driven economy poses great challenges in terms of applying consumer regulations to information-asymmetric relations – where one party has more or better information than the other – and to personalised services that include transactions of personal data (Larsson, 2017a; 2017c; Rhoen, 2016). Part of this regulatory challenge is arguably of a conceptual nature; that is, the practice and supervision of consumer protection authorities is likely dependent on how the transactions, relations and conditions of the market are understood (cf. Larsson, 2017b). Given that the role, use and transactions of personal data are both opaque and part of an increasingly complex setting in what law professor Frank Pasquale has described as an “era of runaway data” (Pasquale, 2015), a clarified description and understanding of the transactional character of personal data in the digital economy is called for. The main reason is to be able to point out weaknesses in consumer protection, specifically with regard to imbalances or asymmetries in both information and power between consumers and digital service providers, how to deal with calls for transparency, and the lack of consumer awareness when taking part in consent-based data collection.

There is little doubt that personal data does indeed hold significant value in the digital economy, and can therefore be understood as a sort of currency for services that are free of charge at the consumer level (cf. Schwartz, 2004; Spiekermann & Korunovska, 2016; Larsson & Ledendal, 2017). The notion of personal data as currency, in a context of consumer protection, is used here as a means to stress that 1) it carries transactional value for services that may be free in a direct monetary sense; 2) it is hard for consumers to assess what this value is – given the travel, trade and repurposing of much personal data (cf. Christl, 2017) – and hence to what extent the bargain is fair; and 3) it could therefore be a way to reconceptualise “free”, data-collecting services, in order to trigger consumer protection for market practices otherwise not dealt with by consumer protection agencies.

For example, at the end of January 2012, the European Commission presented its proposal for a comprehensive reform of EU data protection provisions, which resulted in, among other things, the General Data Protection Regulation (GDPR), which comes into force in May 2018. In connection with this, EU Commissioner Viviane Reding described personal data as “the currency of today’s digital market”.1 She emphasised the importance of trust, and the confidence currently lacking, for digital markets to work satisfactorily, arguing that a strong, clear and uniform legal framework was needed to realise the potential of the digital single market. The Commissioner revisited the same argument two years later: “In an increasingly digital society, personal data has become a new form of currency. The biggest challenge for political and business leaders is to establish the trust that enables that currency to keep flowing”. In other words, this view is one of the stated motives underlying strengthened individual protection for personal data in a digital context, described as a prerequisite for market development and growth.

This approach – that personal data has a central value in the digital economy and in practice can function as a currency – has become a fairly common view among analysts of the data-driven economy. For example, the notion is developed in a 2012 report from the consulting firm Boston Consulting Group (BCG), The Value of Our Digital Identity. BCG’s perspective, like Reding’s reference to an internal digital market, is to strive for trust that keeps the data flowing and in use, which can be described as a transaction-promoting perspective. A conclusion of the BCG study, which included 3,000 European participants, is that consumers are willing to share their data if the benefits and the privacy management are appropriate (BCG, 2012). That is, consumers may be more willing to share data if companies can implement confidentiality tools that provide choice and control, establish user-friendliness, and provide a sufficient benefit in exchange. However, this rather optimistic account of free will and users’ abilities to make informed choices in relation to the multitude of agreements made in an everyday digital context is questioned in a number of studies – for example, what Turow et al. (2015) describe as “the tradeoff fallacy” with regard to consumers in stores, studies of the reading of privacy agreements (Cranor et al., 2014) and, specifically, of social networking sites (Bechmann, 2014) – which will be returned to below.

A problematic imbalance is pointed out by Sarah Spiekermann and Jana Korunovska (2016), who research social and ethical problems of computer systems: there is a big difference between how individuals perceive the value of their personal information, on the one hand, and how the industrial players that utilise personal data as a central source of value do, on the other:

Analysts, investors and entrepreneurs have recognized the value of personal data for Internet economics. Personal data is viewed as ‘the oil’ of the digital economy. Yet, ordinary people are barely aware of this. Marketers collect personal data at minimal cost in exchange for free services (Spiekermann & Korunovska, 2016, p. 1).

The value of personal data is further underlined by the fact that many online companies see their stock prices as a direct function of the data they have on their users (Spiekermann & Korunovska, 2016). As a sign of this, the five companies with the largest absolute increase in market capitalisation in 2009-2017 were – among other commonalities – all consumer-focused, data-driven tech companies: Apple, Alphabet (Google), Amazon, Microsoft, and Facebook (PwC, 2017). By March 2017, these five data-driven companies held five of the top six spots in global market capitalisation, overtaking the oil companies and, to some extent, the financial institutions that were at the top only a decade ago (Larsson, 2017a). In sum, personal data holds great value for those who can utilise it, which at the same time entails a challenge to consumer protection, since this value is not easily understood by consumers. This leads to questions of consumer empowerment in terms of transparency, the role of consent, and how consumer protection authorities could improve their methods when supervising data-driven business practices.

The purpose

The article develops the argument of consumer empowerment and algorithmic governance in data-driven markets, and asks what this means for consumer protection policies and for the role of consumer protection authorities in terms of their supervision:

  • What role does consumers’ consent play in data collection, and what are its main weaknesses?
  • How transparent can and should the collection of consumers’ data be, and how should transparency be understood in this context?
  • What do algorithmically mediated and personalised services – dependent on consumer profiling – entail for the methods of consumer protection supervision?

The article describes the current state of knowledge on digital consumer profiling from a forward-looking, if critical and consumer-based, perspective. This is then related to findings on consumer attitudes and sentiment: to what extent do consumers find the collection of personal data problematic or worrying? This means pointing out some of the more important operators and the more significant emerging markets, in order thereafter to analyse, from a policy-relevant perspective, the most important aspects of consumer protection – a key issue being the wide use of user agreements to regulate what has become a strong information asymmetry in some data-driven markets.

Consumer profiling, risks of misuse, and consumers’ sentiments on data collection

One of the reasons for collecting large amounts of consumer data is to improve consumer profiling, that is, the practice of obtaining an understanding of consumers to form an underlying data basis for strategic decisions and, for example, marketing or product design. It is part of a development that can be described as industries’ attempt to create a “seamless, personalised digital customer journey” (cf. Edelman & Singer, 2015). This means combining information linked to an individual using methods that match specific consumer behaviour, demographic or psychographic characteristics (Harrison & Ti Gray, 2012; Hildebrandt, 2008). Profiling has become important not least in the marketing industry where this “new” kind of advertising can be described as “consumer-centric”, meaning it focuses on individuals (cf. Brown et al., 2016). In order to accomplish this, it is data-driven, i.e., effectuated by monitoring consumers’ actual internet-mediated behaviour – possibly in real time – in combination with collected data of previous behaviour, with the purpose of predicting future behaviour. 

Profiles are used to categorise customers or customer segments in order to separate, for instance, the most profitable from the least profitable; this information then comprises the strategic, underlying data used for marketing and other decisions. Consumers are therefore routinely studied, registered, analysed and ranked, and may be offered both different prices and, to some extent, different services, depending on the individually associated information (“the data”) and their place of residence (Kitchin & Lauriault, 2014).
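
To illustrate the kind of categorisation described above, the following is a hedged sketch of a classic profiling technique: scoring customers on recency, frequency and monetary value (RFM) and sorting them into segments. The data, field names and cut-offs are all invented for the example; real profiling pipelines combine far richer behavioural and demographic data.

```python
# Toy RFM profiling: segmenting customers by recency, frequency and spend.
# All data, fields and thresholds are invented for illustration.
from dataclasses import dataclass

@dataclass
class Customer:
    cid: str
    days_since_last_purchase: int
    purchases_per_year: int
    yearly_spend: float

def rfm_score(c: Customer) -> int:
    recency = 3 if c.days_since_last_purchase < 30 else 1
    frequency = 3 if c.purchases_per_year > 12 else 1
    monetary = 3 if c.yearly_spend > 1000 else 1
    return recency + frequency + monetary

customers = [Customer("a", 12, 30, 2400.0), Customer("b", 200, 2, 60.0)]
for c in customers:
    segment = "high value" if rfm_score(c) >= 7 else "low value"
    print(c.cid, segment)
```

The same scoring logic, applied to weaknesses rather than value – as in the credit example below – is what turns profiling from a marketing tool into a consumer protection concern.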

The field involved in collecting individual consumer information and profiling has also been described, with somewhat more negative connotations, as a growing “surveillance economy” (cf. Singh & Lyon, 2013; Teknologirådet & Datatilsynet, 2016) that may also lead to the misuse of consumer data. In an Australian and American context, Harrison and Ti Gray (2012) demonstrate how credit companies and banks use individual consumer profiling not only to identify the needs of individuals but also their weaknesses. Among other things, this means that they can specifically target consumers who will not be able to manage their credit payments during the interest-free period. This type of credit card user is more profitable than users who do not incur credit-card-related interest costs. It entails, in other words, the identification of profitable customers whom other operators might rate as economically vulnerable (Stone, 2008). Others have shown a link between the increase in consumer credit and financial institutions’ access to consumer information (Sanchez, 2009), which emphasises the need for further research on digital consumption, credit and the risks of over-indebtedness (cf. Larsson et al., 2016).

Studies conducted in an American market context show that consumers may be resigned about their ability to influence traders’ use of their personal information, rather than satisfied with the discounts they receive in exchange (Turow et al., 2015). A number of studies show that users are concerned about not having control over their internet-generated data, and about the fact that their information could be used in situations quite different from those in which it was originally collected or shared (Lilley et al., 2012; Pew, 2014; cf. Halbert & Larsson, 2015). According to a Swedish study, 60% of the Swedish population is opposed to news companies collecting data to enhance the user experience (Appelgren & Leckner, 2016). Other studies conclude that consumers are concerned that third parties such as advertisers or other commercial operators may be able to access their personal information (e.g., Kshetri, 2014; Narayanaswamy & McGrath, 2014; Pew Research Center, 2014). Overall, this indicates that consumer data is a key issue in much of the current market change, and that this area and these relationships are complex and need further study.

Online user agreements and the consent dilemma

The main model utilised by data-driven services to regulate the collection and handling of consumers’ personal data is user agreements based on the notion of informed consent. Formally, the users agree to the collection of their data. Critics, however, argue that this kind of “privacy self-management” does not provide meaningful control and that there is a need to move beyond relying so heavily on it (Solove, 2013). At least three main critical aspects can be put forward here.

Firstly, part of the challenge – as this model has become so common in our everyday digital practices – lies in what can be described as an information overflow of consent agreements, and in what has been called “autonomy fatigue” (Greenstein, 2017, p. 404). For example, the Norwegian Consumer Council (Forbrukerrådet) recently conducted a reading of the terms and conditions of the apps commonly found on an average smartphone, broadcasting the reading in real time on the internet.2 It took 31 hours, 49 minutes and 11 seconds to read through the agreements, roughly 250,000 words in total, with the iTunes agreement taking the longest at over three hours. Combined with all the other services utilised by an average digital consumer, it is simply not feasible even to read the agreements, a point made already in a rather early study by McDonald & Cranor (2008).

Secondly, part of the challenge likely lies in the fact that data-collecting companies have incentives to be unclear about how much data is collected and how it is used. For example, Cranor et al. (2014) have studied 75 privacy policies from companies that store data on behaviour in digital contexts, and conclude that many of them fail to disclose consumer-relevant aspects of their data management, including the collection and use of sensitive information and of tracking data that can be used to identify individuals. Similarly, a study of privacy agreement texts and cookie consent information collected from 60 news sites in three countries (US, UK, and Sweden) shows that news sites “paternalistically” infer a wider consent from users than can reasonably be expected, utilising a form of “passive” consent. According to Appelgren, the reasons for collecting data can therefore be said to be paternalistic both in a positive sense (i.e., beneficial to users) and in a negative sense, as choices may be imposed on users who have not actively agreed, potentially resulting in undesired outcomes.

Thirdly, part of the challenge likely also lies in the fact that emerging personal data-driven markets are complex, automated and swift – and thereby opaque in practice. For example, the Norwegian data protection authority, Datatilsynet, conducted a study in 2015 on the amount of data collected when visiting the front pages of six Norwegian newspapers (Datatilsynet, 2015). On average, the study found, between 100 and 200 web cookies were placed on any computer used to visit these homepages, information about the visitor’s IP address was sent to 356 servers, and an average of 46 third parties were “present” during each visit. One of the reasons for the presence of so many parties was the programmatic ad exchange taking place behind the web page in so-called programmatic advertising (cf. Busch, 2016), which increasingly involves real-time bidding for advertisements, dependent on profiling and targeting of the individual visitor. However, none of the six newspapers provided their audience with any information about the presence of this large selection of third-party companies (Datatilsynet, 2015; Larsson, 2017c).
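
Measurements of this kind are, in principle, reproducible by anyone. As a hedged sketch: the script below counts the third-party hosts contacted when a page loads, assuming a HAR file exported from a browser’s developer tools; the file name and publisher domain are placeholders, not references to any of the newspapers studied.

```python
# Sketch: count third-party hosts contacted during a page load, from a HAR
# file exported via the browser's developer tools. File name and first-party
# domain are placeholder assumptions for the example.
import json
from urllib.parse import urlparse

FIRST_PARTY = "example-newspaper.no"          # hypothetical publisher domain

with open("frontpage.har", encoding="utf-8") as f:
    har = json.load(f)

hosts = {urlparse(entry["request"]["url"]).hostname
         for entry in har["log"]["entries"]}
third_parties = {h for h in hosts if h and not h.endswith(FIRST_PARTY)}

print(f"{len(third_parties)} third-party hosts contacted:")
for h in sorted(third_parties):
    print(" ", h)
```

What such a count cannot show is what happens after the data leaves the browser – to whom it is resold and for what purposes – which is exactly the opacity discussed in the next section.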

Each of these three examples points to the flawed notion that the individual consumer is able to make, in any meaningful way, informed choices with regard to the multitude of user agreements in play for an average digital consumer.

Media scholar and digital sociologist Anja Bechmann subsequently posits that “the consent culture of the internet has turned into a blind non-informed consent culture” (Bechmann, 2014, p. 21; cf. Joergensen, 2014). The fact remains that user agreements play a central role in regulating the handling of personal customer data between commercial parties and individuals, and that this striving for awareness is further emphasised by the GDPR. This leads to the question of how active consumer protection authorities should be in empowering the “non-informed” but formally consenting consumer (Larsson & Ledendal, 2017). The question relates to how these practices apply not only to a privacy discourse but also to a discourse of consumer rights and power imbalances in the markets (Larsson, 2017a).

Complexity, opacity and the brokering of data

A challenge from a consumer protection perspective concerns the increasing complexity of data-driven markets, fuelled both by a lack of transparency – often behind proprietary software – and by the fact that the data is traded and brokered. Media scholar Mark Andrejevic has commented in Infoglut on “the spreading of prediction markets” (2013, pp. 68–70), and Pasquale, too, stresses the need to become more knowledgeable about how personal data is collected, analysed and traded, and the “need to hold business and government to the same standard of openness that they impose upon us – and complement their scrutiny with new forms of accountability” (2015, p. 57). A recent report from an Austrian research institute on how companies collect, combine, analyse, trade, and use personal data on billions of consumers describes how pre-existing practices of commercial consumer data collection have rapidly evolved into pervasive networks of digital tracking and profiling, in which a “vast and complex landscape of corporate players continuously monitors the lives of billions” (Christl, 2017, p. 65). The data broker industry is of particular interest here (cf. Larsson, 2017a). For example, Acxiom reportedly manages 15,000 customer databases and 2.5 billion customer relationships for 7,000 clients, including 47 of the Fortune 100 companies (Christl, 2017). Oracle, another rising giant on the data broker horizon, has acquired companies like Datalogix, AddThis, Crosswise and BlueKai in order to be able to track billions of purchase transactions from grocery chains, users on millions of websites, a billion mobiles, the combination of PCs, phones, tablets, and TVs, as well as online message boards (Christl, 2017, p. 59; cf. Larsson, 2017a). The Federal Trade Commission in the US – to emphasise the opaque character of these practices – has stated that there is a “fundamental lack of transparency about data broker industry practices” (FTC, 2014, p. vii).

The complexity of how data travels thereby leads to a fundamental challenge for consumer and data protection. As “prediction markets” spread, more types of industries will develop a more refined, personalised relationship with consumers, which can work both to consumers’ benefit and to their detriment. Reliance on big data sets that can be complemented in real time to analyse a specific consumer’s conditions is increasingly being used for anything from purchase predictions by retail stores, to credit scoring by lenders, to death predictions by insurers (Siegel, 2016). Data brokers provide for profiling – as in the Acxiom example above – in partnerships with all kinds of companies, ranging from Facebook, Google and Twitter to banks, insurance and airline companies (Christl, 2017). One specific problem relates to erroneous data – which does occur. Legal scholars Mikella Hurley and Julius Adebayo (2017) have argued, in relation to credit scoring based on large amounts of collected and analysed data:

Consumers have limited ability to identify and contest unfair credit decisions, and little chance to understand what steps they should take to improve their credit. Recent studies have also questioned the accuracy of the data used by these tools, in some cases identifying serious flaws that have a substantial bearing on lending decisions.

The complexity of the market – the “ecosystem” of “runaway” data – in essence describes what Nancy King and Jay Forder point out in a study on data analytics and consumer profiling (2016): many of the companies dealing with consumers’ personal data gain access to it through secondary sources and use the information for purposes not known at the time of original collection (King & Forder, 2016). This further stresses how little possibility consumers have of being informed about the uses of their data. Consequently, as consumer services – including the credit scoring addressed by Hurley & Adebayo (2017) – become algorithmically mediated and automated, there is little chance for the individual consumer to assess whether the outcome is reasonable, to counter it if it is based on erroneous data, or even to clearly outline the inherent assumptions of the designed decision-making at hand. The black box of algorithmic decisions (cf. Pasquale, 2015), utilising secondary sources of data in consumer markets, is a clear challenge to consumer protection and the authorities representing it. How are they to detect whether individual targeting – be it for ads or services – is based on illegal discriminatory grounds or exploits particularly vulnerable groups?
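
One concrete answer, sketched below under loose assumptions, is an outcome audit: even without opening the black box, an authority that can observe decisions and group membership can compare favourable-outcome rates across groups, as in the well-known “80% rule” for disparate impact. The decision data here is invented, and a real audit would also control for legitimate explanatory factors.

```python
# Toy outcome audit: compare favourable-decision rates across groups and
# compute a disparate impact ratio (below 0.8 is a common warning sign).
# Decisions and group labels are invented for illustration.

def audit_disparate_impact(decisions, group_of):
    """decisions: {consumer_id: bool}; group_of: {consumer_id: group label}."""
    totals, favourable = {}, {}
    for cid, approved in decisions.items():
        g = group_of[cid]
        totals[g] = totals.get(g, 0) + 1
        favourable[g] = favourable.get(g, 0) + (1 if approved else 0)
    rates = {g: favourable[g] / totals[g] for g in totals}
    return rates, min(rates.values()) / max(rates.values())

decisions = {1: True, 2: True, 3: False, 4: False, 5: True, 6: False}
groups = {1: "A", 2: "A", 3: "A", 4: "B", 5: "B", 6: "B"}
rates, impact = audit_disparate_impact(decisions, groups)
print(rates, f"impact ratio: {impact:.2f}")   # here 0.50, well below 0.8
```

Such an audit requires access to decisions and protected attributes, which is exactly why supervisory powers of the kind discussed below matter.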

Rhoen (2016), mentioned above, presents a socio-legal analysis of how legal instruments can become more effective at improving consumer protection in relation to the collection and use of consumer data (cf. Helveston, 2016). Rhoen (2016, pp. 6-8) argues, in a review of consumer protection and data protection legislation at the EU level, that a broader application of consumer protection regulation to user agreements may increase accountability for operators who collect and manage personal data, and by extension lead to increased codetermination for consumers. This would reduce the institutionalised power of the data-managing parties in favour of the consumer. At the same time, however, Rhoen (2016, p. 8) points out that this can only be achieved if consumer protection legislation is applied pragmatically, which is partly the responsibility of the supervisory authorities concerned.

The European Data Protection Supervisor, EDPS, also points out the need for supervisory authorities – such as data protection and consumer protection authorities – to gain better insights into how data collection and covert profiling occur (EDPS, 2015, p. 10), i.e., to study “the black box” (Pasquale, 2015). The EDPS emphasises the lack of transparency involved and the challenges this entails also for governmental supervision; it is difficult to distinguish between advantages and intrusions when the data collection process and the uses thereof are not visible (cf. King & Forder, 2016).

Conclusions

As shown, when it comes to the widespread practice of using user agreements to regulate the collection, use and trade of personal data, the model seems flawed, particularly with regard to the notion of consumers making informed decisions. A wide array of studies show consumers’ concerns about the collection of their data, as well as their resignation or powerlessness when it comes to countering it or taking control over it. This relates to a widespread datafication (Larsson, 2017c) and quantification (Larsson, 2017d) of consumers, leading to a lack of transparency in data-driven markets, clouded by proprietary software and complex automated decision-making as the data travels, mediated by data brokers and others. This speaks to the need for an implementation of consumer policy that helps consumers recognise the perils of the new information landscape without being overwhelmed by information. Furthermore, and this is perhaps more important to point out, it speaks to the need to regulate consumer rights at a level that does not depend so strongly on consumers’ individual awareness. Pasquale, for example, bears witness to this in relation to data brokers, stating that it is “unrealistic to expect individuals to inquire, broker by broker, about their files. Instead, we need to require brokers to make targeted disclosures to consumers. Uncovering problems in Big Data (or decision models based on that data) should not be a burden we expect individuals to solve on their own” (Pasquale, 2017).

Thus, given the overlapping character of personal data in the digital economy, there are a number of reasons why data protection authorities and consumer-oriented authorities need to interact on a continuous and ongoing basis – not least the fact that personal data holds much of the value in a data-driven economy, combined with the fact that it is inherently hard for consumers to assess the bargain between data sharing and service access. This speaks for structural solutions rather than solutions that depend on consumers’ ability to make informed choices about their personal data.

A recommendation for consumer protection authorities is therefore to develop synergies with, in particular, data protection authorities, and to provide expertise on consumer protection. Transparency would likely have to include audits or scrutiny of how data-driven and targeting software operates, so that consumer protection authorities develop the ability to assess – in-house or perhaps through outsourced expertise – what the combination of algorithms and big data sources is leading to, and to discover the use of erroneous data (cf. King & Forder, 2016). This would be a way to institute a “qualified transparency” (Pasquale, 2015, pp. 160-165) that may work in line with the need to “equalize the surveillance that is now being aimed disproportionally at the vulnerable” (Pasquale, 2015, p. 57). It could be a way to keep proprietary software and the specific design of algorithms as the business secrets they may need to be, while at the same time providing a necessary protective mechanism against the worst cases detrimental to consumers.

In the context of fintech firms, Pasquale (2017) testified before the United States Senate on the need for regulators to be able to audit machine learning processes in order to understand, at a minimum, whether suspect sources of data are influencing decisions that affect consumers, such as credit scores. This would likely require data-driven and digital methods developed by the entities implementing consumer protection supervision. In order to study the outcomes of automated services based on pattern recognition, and to address accountability for these outcomes, a combination of legal and computer-science expertise would be required. Put more generally, in the European context, the methods operating in consumer markets have always called for scrutiny in order to secure the rights of weaker consumer parties. This was the case with traditional marketing and traditional credit scoring, and it needs to be the case also for increasingly complex data-driven practices that utilise increasingly sophisticated – and opaque – tools for the quantification of consumer preferences and automated responses to consumer interaction.
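
To make this concrete, the sketch below illustrates one simple auditing technique a supervisory authority might apply: a disparate-impact check on a toy credit-scoring model. Everything here – the synthetic data, the “postcode” proxy feature, the use of scikit-learn – is an illustrative assumption for the sake of the example, not a description of any authority’s actual method.

# A hedged, toy sketch of a disparate-impact audit of a credit-scoring
# model. All data and feature names are synthetic assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)                 # protected attribute, NOT a model input
postcode = group + rng.normal(0, 0.5, n)      # proxy feature correlated with group
income = rng.normal(50, 10, n) + 5 * group    # structurally unequal incomes
X = np.column_stack([postcode, income])
y = (income + rng.normal(0, 5, n) > 52).astype(int)  # 1 = loan repaid

model = LogisticRegression(max_iter=1000).fit(X, y)  # the "black box" under audit
approved = model.predict(X)

# Compare approval rates between groups; a ratio well below 1 flags
# possible proxy discrimination via features like `postcode`.
rates = [approved[group == g].mean() for g in (0, 1)]
print(f"approval rates by group: {rates}, ratio: {min(rates) / max(rates):.2f}")

Even such a minimal check requires access to the model’s inputs and outputs – precisely the kind of access that “qualified transparency” would need to guarantee.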

This article has focused on the collection and use of large sets of data in relation to consumers and their protection. It is based on the assumption that consumer-focused activities in data-driven markets contain just that – data – which in theory can be scrutinised with regard to its origin, its analysis, and its application, the latter often meaning algorithmically mediated automation. This is a field where contemporary consumer protection authorities need satisfactory supervisory methods.

In addition, as more and more consumer-related activities in the digital economy come to rely on artificial intelligence (AI) and machine learning, supervisory methodologies will increasingly face challenges relating to the lack of transparency and the autonomous agency of consumer-oriented products and services. They may even encounter computation involved in decision-making that amounts to a form of cognition hard to explain and understand even for those who design the processes. In response, perhaps future consumer protection authorities will find ways to utilise not only machine learning but also increasingly intelligent artificial agents to find and counteract inappropriate market behaviour from a consumer protection point of view.

References

Andrejevic, M. (2013). Infoglut. How too Much Information is Changing the Way We Think and Know. New York, NY: Routledge.

Appelgren, E. (2017). The Reasons Behind Tracing Audience Behavior: A Matter of Paternalism and Transparency. International Journal of Communication, 11, 2178–2197. Retrieved from http://ijoc.org/index.php/ijoc/article/view/6823

Appelgren, E., & Leckner, S. (2016). Att dela eller inte dela - användarnas inställning till insamling av personlig data [To share or not to share - users’ attitudes toward the collection of personal data]. In J. Ohlsson, H. Oscarsson, & M. Solevid (Eds.), Ekvilibrium: SOM-undersökningen 2015 (pp. 403-418). Göteborg: SOM-institutet. Available at https://som.gu.se/digitalAssets/1579/1579392_ekvilibrium-inlaga-f--rg.pdf

Bechmann, A. (2014). Non-informed consent cultures: Privacy policies and app contracts on Facebook. Journal of Media Business Studies, 11(1), 21-38. doi:10.1080/16522354.2014.11073574

Boston Consulting Group. (2012). The Value of Our Digital Identity (Policy Report). Denver: Liberty Global, Inc. Available at https://www.bcg.com/publications/2012/digital-economy-consumer-insight-value-of-our-digital-identity.aspx

Brown, R.E., Jones, V.K. & Wang, M. (2016). The New Advertising. Branding, Content and Consumer Relationships in the Data-Driven Social Media Era. Santa Barbara: Praeger. 

Busch, O. (2016). Programmatic Advertising: The Successful Transformation to Automated, Data-driven Marketing in Real-time. Cham, CH: Springer. doi:10.1007/978-3-319-25023-6

Cranor, L. F., Hoke, C., Leon, P. G., & Au, A. (2014). Are They Worth Reading? An In-Depth Analysis of Online Advertising Companies’ Privacy Policies. Presented at the 42nd Research Conference on Communication, Information and Internet Policy. Available at http://www.contrib.andrew.cmu.edu/~pgl/tprc2014.pdf

Christl, W. (2017). Corporate Surveillance in Everyday Life: How Companies Collect, Combine, Analyze, Trade, and Use Personal Data on Billions (Report). Vienna: Cracked Labs. Retrieved from http://crackedlabs.org/en/corporate-surveillance

Datatilsynet. (2015). The Great Data Race: How commercial utilisation of personal data challenges privacy (Report). Retrieved from https://www.datatilsynet.no/en/about-privacy/reports-on-specific-subjects/the-great-data-race/

Edelman, D. C., & Singer, M. (2015, November). Competing on Customer Journeys. Harvard Business Review. Retrieved from https://hbr.org/2015/11/competing-on-customer-journeys

European Data Protection Supervisor. (2015). Meeting the challenges of big data: A call for transparency, user control, data protection and accountability (Opinion No. 7/2015). Brussels: European Data Protection Supervisor. Available at https://edps.europa.eu/data-protection/our-work/publications/opinions/meeting-challenges-big-data_en

Federal Trade Commission. (2014). Data Brokers. A Call for Transparency and Accountability. Available at https://www.ftc.gov/system/files/documents/reports/data-brokers-call-transparency-accountability-report-federal-trade-commission-may-2014/140527databrokerreport.pdf

Greenstein, S. (2017, June 1). Our Humanity Exposed: Predictive Modelling in a Legal Context (Doctoral Dissertation). Stockholm University, Stockholm. Retrieved from http://www.diva-portal.org/smash/get/diva2:1088890/FULLTEXT01.pdf

Halbert, D. & Larsson, S. (2015). By Policy or Design? Privacy in the US in a Post-Snowden World. Journal of Law, Technology and Public Policy, 1(2), 1-17. Retrieved from https://journal-law-tech-public-policy.scholasticahq.com/article/12-by-policy-or-design-privacy-in-the-us-in-a-post-snowden-world

Harrison, P. & Ti Gray, C. (2012). Profiling for Profit. A Report on Target Marketing and Profiling Practices in the Credit Industry (Report). Deakin University and Consumer Action Law Centre. Available at http://dro.deakin.edu.au/eserv/DU:30064922/harrison-profilingfor-2012.pdf

Helveston, M.N. (2016). Consumer Protection in the Age of Big Data. Washington University Law Review, 93(4), 859-917. Available at https://openscholarship.wustl.edu/law_lawreview/vol93/iss4/5/

Hildebrandt, M. (2008). Defining Profiling: A New Type of Knowledge? In M. Hildebrandt & S. Gutwirth (Eds.), Profiling the European Citizen: Cross-Disciplinary Perspectives. Dordrecht: Springer. doi:10.1007/978-1-4020-6914-7_2

Hurley, M. & Adebayo, J. (2017). Credit Scoring in the Era of Big Data. Yale Journal of Law and Technology, 18(1), 148-216. Available at http://digitalcommons.law.yale.edu/yjolt/vol18/iss1/5/

Joergensen, R. (2014). The Unbearable Lightness of User Consent. Internet Policy Review, 3(4). doi:10.14763/2014.4.330

King, N.J. & Forder, J. (2016). Data analytics and consumer profiling: Finding appropriate privacy principles for discovered data, Computer Law & Security Review, 32(5), 696-714. doi:10.1016/j.clsr.2016.05.002

Kitchin, R. & Lauriault, T. P. (2014). Towards critical data studies: Charting and unpacking data assemblages and their work (Working Paper No. 2). Maynooth, IE: The Programmable City Project. Available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2474112

Kshetri, N. (2014). Big data’s impact on privacy, security and consumer welfare. Telecommunications Policy, 38(11), 1134-1145. doi:10.1016/j.telpol.2014.10.002

Larsson, S. (2017a). All-seeing giants and blindfolded dwarfs: On information-asymmetries on data-driven markets. In J. Lith (Ed.), New Economic Models: Tools for Political Decision Makers Dealing with the Changing European Economies. Brussels, Belgium: European Liberal Forum asbl. Available at http://lup.lub.lu.se/record/bada07c0-3a62-4e12-950d-779178eeccd4

Larsson, S. (2017b). Conceptions in the Code. How Metaphors Explain Legal Challenges in Digital Times. Oxford: Oxford University Press.

Larsson, S. (2017c). Sustaining Legitimacy and Trust in a Data-driven Society. Ericsson Technology Review, 94(1), 40-49. Available at https://lup.lub.lu.se/search/publication/75b9d975-1a58-4145-85c4-efde2e46aa14

Larsson, S. (2017d, July 2). The Quantified Consumer: blind, non-informed and manipulated? Internet Policy Review. Retrieved from https://policyreview.info/articles/news/quantified-consumer-blind-non-informed-and-manipulated/696

Larsson, S. & Ledendal, J. (2017). Personuppgifter som betalningsmedel [Personal data as a means of payment] (Report No. 2017:4). Karlstad: Konsumentverket. Available at https://www.konsumentverket.se/globalassets/publikationer/produkter-och-tjanster/gemensamt/rapport-2017-4-personuppgifter-som-betalmedel-konsumentverket.pdf

Larsson, S., Svensson, L., & Carlsson, H. (2016). Digital Consumption and Over-Indebtedness Among Young Adults in Sweden (LUii Report No. 3). Lund: Lund University Internet Institute. Available at http://portal.research.lu.se/portal/en/publications/digital-consumption-and-overindebtedness-among-young-adults-in-sweden(40a1d8bb-34cd-4540-9cef-205958989908).html

Lilley, S., Grodzinsky, F.S. & Gumbus, A. (2012). Revealing the commercialized and compliant Facebook user. Journal of Information, Communication and Ethics in Society, 10(2), 82–92. doi:10.1108/14779961211226994

McDonald, A.M. & Cranor, L.F. (2008). The Cost of Reading Privacy Policies. I/S: A Journal of Law and Policy for the Information Society, 4(3), 543-568. Available at https://kb.osu.edu/dspace/bitstream/handle/1811/72839/1/ISJLP_V4N3_543.pdf

Narayanaswamy, R. & McGrath, L. (2014). A Holistic Study of Privacy in Social Networking Sites. Academy of Information and Management Sciences Journal, 17(1), 71-85.

Pasquale, F. (2015). The Black Box Society. The Secret Algorithms That Control Money and Information. Cambridge, MA: Harvard University Press.

Pasquale, F. (2017, September 12). Exploring the Fintech Landscape. Written Testimony of Frank Pasquale Before the United States Senate Committee on the Banking, Housing, and Urban Affairs. Available at https://www.banking.senate.gov/imo/media/doc/Pasquale%20Testimony%209-12-17.pdf

Pew Research Center. (2014). Public Perceptions of Privacy and Security in the Post-Snowden Era. Washington DC: Pew Research Center. Available at http://www.pewinternet.org/2014/11/12/public-privacy-perceptions/

PwC. (2017). Global Top 100 Companies by market capitalisation: 31 March 2017 update. PwC.

Rhoen, M. (2016). Beyond consent: improving data protection through consumer protection law. Internet Policy Review, 5(1). doi:10.14763/2016.1.404

Sanchez, J.M. (2009). The Role of Information in Consumer Debt and Bankruptcies (Working Paper No. 09-04). Richmond: The Federal Reserve Bank of Richmond. 

Schwartz, P. (2004). Property, Privacy, and Personal Data. Harvard Law Review, 117(7), 2056-2128. doi:10.2307/4093335

Siegel, E. (2016). Predictive Analytics: The Power to Predict Who Will Click, Buy, Lie, or Die. Hoboken, NJ: Wiley.

Singh, S. & Lyon, D. (2013). Surveilling consumers: the social consequences of data processing on Amazon.com. In R.W. Belk & R. Llamas (Eds.), The Routledge Companion to Digital Consumption. London: Routledge. doi:10.4324/9780203105306

Solove, D.J. (2013). Privacy Self-Management and the Consent Dilemma. Harvard Law Review, 126(7), 1880-1903. Available at https://harvardlawreview.org/2013/05/introduction-privacy-self-management-and-the-consent-dilemma/

Spiekermann, S., & Korunovska, J. (2016). Towards a value theory for personal data. Journal of Information Technology, 23(1), 62-84. doi:10.1057/jit.2016.4

Stone, B. (2008, October 21). Banks Mine Data and Woo Troubled Borrowers. The New York Times.

Teknologirådet & Datatilsynet. (2016). Personvern 2016 – tilstand og trender [Privacy 2016 – status and trends]. Oslo: Teknologirådet & Datatilsynet. Available at https://www.datatilsynet.no/om-personvern/rapporter-og-utredninger/personvernundersokelser/personvern-2016/

Turow, J., Hennessy, M. & Draper, N. (2015) The Tradeoff Fallacy: How Marketers Are Misrepresenting American Consumers and Opening Them Up to Exploitation (ASC Departmental Paper No. 521). Philadelphia: Annenberg School of Communication, University of Pennsylvania. Available at https://repository.upenn.edu/asc_papers/521/

Cryptographic imaginaries and the networked public


Introduction

Scholars in communication and STS have long been concerned with the implications of connective technologies for society, exploring ICTs through the frameworks of the “network society” (Castells, 1996), the “culture of connectivity” (van Dijck, 2013), and “networked publics” (boyd, 2010), among others. In recent years, we have begun to grapple with the runaway effects of connectivity: how networked infrastructures can be used for control (Barzilai-Nahon, 2008; Benkler, 2016), enabling internet companies to accumulate vast amounts of digital data with little transparency (Zuboff, 2015; Pasquale, 2014; Angwin, 2014) and facilitating surveillance by state intelligence agencies (Schneier, 2015; Deibert, 2013) that can be used to manipulate elections (Kreiss & McGregor, 2017).

This article aims to contribute to this evolving body of work through the study of related policy debates over encryption technologies. In keeping with the theme of this special issue, ‘networked publics’, I explore the cultural value of cryptography as a potential counterbalance to connectivity. Cryptography enables the transformation of messages or data into code that is inscrutable to anyone save those with the key to unscramble it. It thus enables us to selectively reveal information to some and not to others, adding asymmetries to the process of communication that imbue messages with new kinds of power relations. Cryptographic systems exert control over access to information through the construction of their infrastructure and design: they push the limits of written communication, experiment with new forms of visual representation of an inscribed meaning, or transform it using mathematics.

But whether and to whom access to the hidden meaning in a text is selectively available is also a social and political question. Recent policy debates over encryption reflect a struggle over the information asymmetries that have arisen in an environment of surveillance capitalism (Zuboff, 2015). Over the last decade, we have undergone a process of deep mediatisation (Couldry & Hepp, 2016), recording the most intimate details of ourselves as we move through time and space. By incorporating these technologies into our daily habits, we have vastly expanded the amount of metadata we produce, leaving behind innumerable data traces.

As the Snowden revelations demonstrated, these data traces are not scattered to the wind, ephemeral and fleeting. Rather, they are commoditised, mined for their economic potential and harvested by intelligence agencies in the name of national security (Zuboff, 2015; West, 2017). The work of surveillance scholars situates these transitions in their political and economic context (Lauer, 2017; Schneier, 2014), observing how systems of surveillance lead to new forms of algorithmic control (Pasquale, 2014) and are interwoven with historical patterns of discrimination (Browne, 2015).

The policy debate over encryption centres on questions about whether and under what conditions digital information should be allowed to be obscured by making it indecipherable to anyone who does not have the key to decode it.1 For privacy advocates, encryption presents an important, if partial, solution to the harms posed by mass surveillance. In the face of growing incursions on our privacy by the state and market and insufficient accountability by regulators, encryption can serve to bolster the rights of individuals. By contrast, law enforcement agencies argue that encryption presents an existential challenge: investigators contend that they are reliant on the ability to collect and use this data in order to track down people engaged in violent extremism, using bulk collection and network analysis to map the communications networks of possible terrorists. They claim that the widespread adoption of encryption could lead to the data traces produced by suspects suddenly “going dark” (Homeland Security Committee, 2016).

These two contrasting perspectives illustrate two distinct conceptualisations of the cultural meaning of encryption. Authorities assert there must be ways of using encryption to protect secrets from adversary nations while granting law enforcement access. Advocates argue this is not mathematically possible without weakening encryption such that it could easily be broken by adversaries. This often resolves into a stalemate due to differing interpretations of what is both technically and mathematically possible and politically desirable.

At its furthest extremes, the encryption debate has displaced the underlying argument over how to reconcile the differing incentives of state agencies seeking to protect national security and individuals exercising their right to privacy.2 These arguments verge on treating encryption as a teleological goal in itself; what Gürses, Kundnani and van Hoboken (2016) refer to as “crypto as a defense mechanism”. By reducing the argument to technical solutions, this response fails to account for the political nature of the surveillance problem, downplaying its social consequences and ignoring issues of race, gender, and class.

Ultimately, these arguments over encryption are not about the technology itself, but about who has access to information and at what scale. The crypto debate centres on a question: what are the ‘right’ relationships between information and power, and how are these relationships defined? Understanding the politics of encryption requires teasing out these questions in a nuanced way, placing them in dialogue with the broader landscape of social and technological change.

This article contributes to our understanding by tracing several readings of the cultural value of encryption historically through archival research, illustrating how they have evolved over its centuries-long history and surface today in contemporary discourses. I see each of these readings as distinct cryptographic imaginaries - conceptualisations about what encryption is, what it does, and what it should do. Following Charles Taylor (2004), I see the cryptographic imaginary as something more than a set of ideas or discourses - it is embodied in both technological architecture and social practice, ways of thinking and ways of being in the world.

My analysis is grounded in a tradition in science and technology studies (STS) that sees technological infrastructures - “those systems without which contemporary societies cannot function” (Edwards, 2003) - as both having hard technical materiality and being shaped through social processes. Because these infrastructures are embedded in social arrangements, they can inscribe ethical principles into a system - signalling what is important or of value, whose voice is seen as representative or marginal, or what is seen as non-controversial or mainstream.

Surfacing and making visible the imaginaries we develop around encryption provides an entry point to understanding the implications of encryption technologies in a networked society: how ciphers are designed to obscure information to some and not to others, how decisions are made about who can be privy to the secrets they obscure, and who can gain access to the technologies of encryption in the first place. As cryptographer Phil Rogaway writes, “That cryptographic work is deeply tied to politics is a claim so obvious that only a cryptographer could fail to see it” (Rogaway, 2015, p. 3). Understanding how it is tied to politics has important normative and legal implications; shaping not only the policy debate, but legal and judicial interpretations of cryptography and the architecture of encryption technologies themselves.

Methods

The findings in this article are part of a larger multi-sited ethnographic study that traces evolutions in the cultural meaning of encryption in relation to the development of networked infrastructures between the 1960s and the present day (Marcus, 1995). The analysis I outline here is largely historical and interpretive in nature, drawing on two years of archival research across collections at Stanford Library, the Computer History Museum, the Smithsonian Museum of Natural History and IBM Research.

In order to make sense of shifts in the cultural meaning of encryption, I first sought to understand cryptography in the context of its broad, historical trajectory. I researched canonical histories of cryptography across a range of disciplines, drawing primarily on computer science, literature, and early modern history, as well as histories that were written for popular audiences. To select texts for analysis, I conducted general searches related to cryptography and encryption through my university’s library, Google Scholar, and at each of the archives listed above. In addition, at each archive I conducted targeted keyword searches of the names of companies active in this space (such as RSA, Public Key Partners, and Netscape) as well as prominent individuals who were engaged in the study of cryptography (such as Martin Hellman, Whitfield Diffie, Ron Rivest, Adi Shamir, Leonard Adleman, and David Chaum), generating further sources of material to study. I coded the archival materials thematically using in vivo coding to identify dominant themes and historical trajectories, then worked within each theme to form a linear narrative that traced the evolution of the thematic material over time.

Though the findings I present largely draw from this historical research, they are also informed by two years of ethnographic field work conducted at conferences where members of the contemporary crypto community gather to discuss their work: these included the Chaos Communication Congress, the Internet Freedom Festival, RightsCon, and the Crypto Summit, among others. In addition to collecting participant observation data, I conducted dozens of interviews with privacy advocates, policy officials, and technologists working on encryption projects. This data was not included in my analysis for the purposes of this project, but was useful for providing context.

Despite this, my findings will inevitably be fragmentary and partial, the product of several limitations: first, there are aspects of cryptography that are notably absent from my analysis, such as its relationship to copyright regimes and incorporation into digital rights management technologies, which I determined to be out of scope for this project. Second, because encryption has historically been seen as a critical national security resource it is subject to the classification regimes of both government and corporate institutions; I was able to access some declassified materials but suspect that there are others that remain classified. Lastly, but importantly, there are gaps in whose voices were represented in the archives: those who spoke were primarily men with high levels of technical expertise and education, even though women and people of colour were actively involved in cryptologic enterprises during World War II.3 I hope to explore these gaps further in future work.

Definitions

Most texts on cryptography – its mathematical principles as well as its history – begin with a brief glossary of terms. They generally start with a statement somewhat like the following, from the Oxford English Dictionary: encryption is a “Noun. The process of converting information or data into a code, especially to prevent unauthorised access” (Oxford, 2017). This definition captures a number of different aspects of the concept: encryption as both an object (Noun.) and a process (of converting information or data into code). It is often used, as the definition suggests, “to prevent unauthorised access” – rendering its contents unintelligible to anyone without the key, or the capacity to break the code.

Encryption is also often inscribed into technical artifacts. Here, two further distinctions are drawn around what kind of inscription is involved: ciphers, which operate on individual letters of an alphabet (by substitution or transposition), and codes, which replace entire plaintext words (Kahn, 1967). Similarly, to encrypt or encipher something refers to the process of translating a piece of plaintext into a ciphered text, while to encode means to translate the meaning of the plaintext into code. When it comes to the process of returning a code or cipher to its original plaintext, the actor’s intent comes into play, as well as the environment in which they are acting: if the person has legitimate possession of the key or the system needed to convert the cryptogram back to its original plaintext, they are deciphering or decoding the text. If they are a third-party adversary – someone without possession of the system or key – they are cryptanalysing, or codebreaking, the text.
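
These distinctions can be made concrete with a toy example. The sketch below - an illustrative assumption of mine, not drawn from any of the sources cited here - implements the Atbash substitution cipher discussed later in this article, which enciphers a text letter by letter by mapping each letter to its mirror in the alphabet:

# A minimal sketch of a substitution cipher: Atbash maps each letter
# to its mirror (a->z, b->y, ...). Illustrative only; real encryption
# relies on keyed, mathematically hardened algorithms, not a fixed alphabet.
import string

ALPHABET = string.ascii_lowercase
MIRROR = ALPHABET[::-1]  # the reversed alphabet acts as the fixed "key"
TABLE = str.maketrans(ALPHABET + ALPHABET.upper(),
                      MIRROR + MIRROR.upper())

def atbash(text: str) -> str:
    """Encipher (or decipher: Atbash is its own inverse) a text."""
    return text.translate(TABLE)

ciphertext = atbash("attack at dawn")   # -> "zggzxp zg wzdm"
plaintext = atbash(ciphertext)          # -> "attack at dawn"

Because the mapping operates on letters rather than words, this is a cipher in Kahn’s sense; replacing the whole word “attack” with an agreed codeword would instead be a code.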

Finally, encryption is increasingly implicated in infrastructure, and the term encryption is often used interchangeably with the systems it is built into. Encryption is a part of contemporary networked infrastructure, inscribed in the structures and technologies of the internet and working invisibly to support the things we do with it (Star & Ruhleder, 1996). Encryption technologies are behind every credit card transaction, Bluetooth connection, and mobile phone call made by billions of people worldwide. They are used during the authentication of connections, protecting the link between a browser and the servers of the websites we navigate to. They protect data at rest, ensuring that private information stored on servers is not easily accessed or changed by third parties. Each of these infrastructures is an application of encryption, constructed by technologists and deployed in particular ways. And thus, there are values and ethical principles inscribed in the depths of the systems that deploy encryption.
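
As a minimal illustration of the “data at rest” case - again a sketch under assumptions, here the third-party Python cryptography package rather than any specific system named above - an authenticated symmetric scheme renders stored data unreadable to anyone without the key:

# Illustrative sketch of encrypting data "at rest". Assumes the
# third-party `cryptography` package (pip install cryptography);
# any comparable authenticated symmetric scheme would do.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # the secret key; in practice held in a key store
f = Fernet(key)

token = f.encrypt(b"private information stored on a server")
# Without the key, the token is indecipherable; with it, the plaintext returns.
assert f.decrypt(token) == b"private information stored on a server"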

Cryptographic imaginaries

The remainder of this article is split into three sections, each describing and analysing a different cryptographic imaginary: the occult, the state, and democratic values. I define the cryptographic imaginary as a concept of what encryption is, what it does, and what it should do that is embodied in both technological architecture and social practice - ways of thinking about cryptography and putting it to use.

The idea of a cryptographic imaginary owes much to the work of Charles Taylor and his elaboration of the social imaginary. Drawing on his work, I understand a social imaginary to be something broader and more all-encompassing than discourse; it is, as Taylor describes it, “not a set of ideas; rather it is what enables, through making sense of, the practices of a society” (2002, p. 91). Social imaginaries bridge ideas and practices; they encompass both ways of thinking and ways of being in the world. This is a particularly powerful concept for understanding the ideas that we elaborate around technologies, because it affords a mode of analysis that can include both technical practice and discursive arguments (Kelty, 2005).

In each section that follows, I trace the history of cryptography in association with each imaginary, interrogate the values implicit in them, and explain how these values surface in contemporary policy debates about cryptography.

Encryption and the occult

The first and one of the oldest domains in which cryptography emerged associates the transformation of writing with secrecy, magic, and the occult. This association lives on today, as much in Dan Brown’s thrillers featuring the ‘symbologist’ Robert Langdon as in the claim by Google CEO Eric Schmidt that “If you have something that you don’t want anyone to know, maybe you shouldn’t be doing it in the first place” (CNBC, 2009).

Some of the earliest versions of cryptography sought to use encryption as a way of mystifying texts, using obfuscation not so much to mask meaning from adversaries as to add a layer of symbolic meaning to written words. Early practices include the use of hieroglyphics in Egyptian funerary formulas, rune-writing in Scandinavia and Anglo-Saxon Britain, secret writing among necromancers in the Roman empire, and the use of codes in religious texts, such as the Hebrew substitution cipher Atbash, used throughout the Bible and other Jewish mystic writings to encode important names. The use of codes and ciphers in mystic texts became a subject of fascination for the devoted, who developed a practice of decipherment and interpretation to unlock the deeper meanings embedded in religious texts.

The association of encryption with religious mysticism took on a darker tone by the 16th and 17th centuries, though not necessarily because cryptography was actually used as an occult practice – these associations are more likely tied to the stigmatisation of secrecy among ordinary individuals during this period. The early modern cryptography manual Steganographia is a good exemplar. Published in 1606 by the German cryptographer Trithemius, it was for years held up as an example that tied the emerging discipline of cryptography to the practices of early modern magic (Ellison, 2017; Kahn, 1967). This historical interpretation is understandable – the text of the manuscript claims to instruct the reader in the use of spirits to send messages over great distances. But in the 1990s, cryptographers finally deciphered the text of its final volume, revealing these interpretations to be misguided: Steganographia turned out to be a text centrally focused on cryptography, disguised as a book purely about magic (Reeds, 1998).

Other early modern cryptographers attempted to disassociate encryption from the occult by aligning it with the emerging disciplines of the liberal arts, repositioning practices once considered magic, such as alchemy and astrology, into experimental and scientific practices like chemistry and astronomy (Ellison, 2017, p. 72). Their work would seem, at first, to contrast with the efforts of contemporaries like Robert Boyle, who worked to make the production of knowledge public in order to differentiate matters of fact from matters of belief. Shapin and Schaffer (1985) write of Boyle’s efforts to cultivate practices of experimental witnessing, observing that “Matters of fact were the outcome of the process of having an empirical experience, warranting it to oneself, and assuring others that grounds for their belief were adequate” (Shapin & Schaffer, 1985, p. 25).

But cultural, political, and economic factors during the period may indeed have required some level of secret communication among participants in the scientific revolution: many of these early scientists faced political dangers from ecclesiastical and civil authorities (Hull, 1985), were incentivised to protect trade secrets (Macrakis, 2010), and retained the paraphernalia of secret political and religious orders as a form of bonding within the budding scientific community, such as the adoption of secret names, emblems, and oaths of brotherhood (Eamon, 1985). As such, the popularity of secret communication in the emerging scientific discipline is not necessarily in contradiction with the effort to establish new standards of empiricism grounded in experimental witnessing.

Just as important, the circulation of published texts – encrypted or otherwise – in England during the 17th century was in itself subversive. Manuscripts were often spread by clandestine means in order to evade the eyes of government censors. Secret writing is thus intertwined with the practices of reading and writing, and was made urgent by the widespread availability of printed matter following the invention of the printing press (Jagodzinski, 1999). As Ellison writes, cryptography “was as much a global communication system for knowledge sharing as it was also a system for hiding and concealing cultural secrets. It was as much an attempt to standardise communication across nations, ethnicities, and languages as it was a means of discriminating between audience members and preserving cultural difference” (Ellison, 2017, p. 17). It was only after the practice of reading and writing became widespread that the concepts of privacy and secrecy finally discarded their occult associations and developed a relatively neutral meaning (Jagodzinski, 1999, p. 24).

The idea that cryptography is an occult practice reflects the notion, as persistent then as it is today, that secrecy is a mark of poor moral character. The sociologist Georg Simmel rejected this notion, writing that “secrecy is a universal sociological form, which, as such, has nothing to do with the moral valuations of its contents” (Simmel, 1906, p. 462). But the notion never fully went away: Facebook CEO Mark Zuckerberg has made statements suggesting that hiding one’s identity is a sign of a lack of integrity, reasoning that inhibiting Facebook users’ capacity to obscure their identities will lead to more civil discourse.

These views are also reflected in how the use of encryption can itself become a trigger for surveillance: for example, the use of technologies like the Tor browser is one signal that leads to higher levels of targeting in US intelligence agencies’ surveillance systems (Cox, 2014). Such an approach builds the common argument that “if you aren’t doing anything wrong, you should have nothing to hide” into surveillance architecture, perpetuating the idea that individuals seeking privacy must be undeserving of its protections. It also neglects to account for the real discrepancies in power between citizens and a surveillance state (Solove, 2007).

These ideas are almost never explicitly contextualised historically or tied to the complex set of factors that related cryptography to occult practices in the early modern era. But the association between cryptography and the occult is powerful: despite the efforts of cryptographers over centuries to establish the practice as a science, it retains the residual mark of these dark associations.

Cryptography and the state

Another dominant reading of cryptography frames the art of secret writing as a tool of the state. In this domain, cryptography is used as a strategic advantage over adversaries by states waging war on a geopolitical battlefield. As Lois Potter puts it in her book Secret Rites and Secret Writing: Royalist Literature 1641-1660, “Mystery is an advantage for any party in power, and, since knowledge is power, any party out of power will naturally demand further access to it. At the same time, any party which is denied access to the open expression of its views will express them covertly if it can” (Potter, 1989, p. 209).

The assertion that cryptography has historically been monopolised by state authorities requires some unpacking, however. The contemporary debates over the legal status of encryption reveal contradictions between two overlapping perspectives on the proper role of cryptography within states: cryptography as a tool for national security, and cryptography as a tool for state secrecy. These differing perspectives are increasingly in conflict with one another: whichever of them dominates will have important implications for the configuration of power in the state’s orientation toward cryptography.

1. Cryptography and national security

Cryptography is a key part of the apparatus of state national security: whoever has access to cryptography has a strategic advantage over adversaries by opening up lines of communication that cannot be intercepted. Thus, many states seek to shore up cryptographic resources by investing in technologies and in the best minds the discipline has to offer.

Though it is not the only use, the most common way states have used cryptography is in the military: for example, Herodotus writes that the use of secret writing saved Greece from being conquered by the Persian king Xerxes, when an exiled Greek citizen sent a coded message to warn the Spartans of Xerxes’ invasion plan (Singh, 1999). Cryptography is directly implicated in American involvement in both world wars: the British decipherment of the Zimmermann telegram helped draw the United States into World War I, while the failure to piece together in time the deciphered intelligence indicating the attack on Pearl Harbor led directly to its entry into World War II (Kahn, 1967). The military use of cryptography reached a new pinnacle during the world wars, employed by nearly all belligerents and codified through the formation of new agencies devoted to cryptanalysis and cryptography. Modern histories of World War II count the cracking of the Enigma machine among the decisive victories that led to the end of the war, while Sweden used cryptography decisively to maintain its neutrality (Kahn, 1967).

But cryptography also has an important national security function during peacetime, and is part of the flowering of modern diplomacy between the 16th and 18th centuries: the principle of secrecy in diplomacy was well established among European states after the Renaissance (Roberts, 2008), and enacted through the encryption of diplomatic communications between ambassadors and their home states. These communications were sometimes intercepted, opened and cryptanalysed by other states along the way, a practice pioneered by the French cryptologist Antoine Rossignol and institutionalised through the formation of Black Chambers by numerous other states. The historian David Kahn writes that by the end of the 1500s, most European states kept full-time secretaries who worked to read the ciphered dispatches of foreign diplomats and to develop official codes of their own. The sophistication of a state’s cryptologic capabilities thus became a strategic advantage not only in war, but in peacetime as well (Kahn, 1967, pp. 106-109, 157-165).

Cryptography in national security is thus about a state’s capacity to protect its own communications and to infiltrate those of its adversaries. In this sense, it is zero-sum: whoever has the most advanced cryptographic systems has a strategic advantage over others, and can leverage this advantage for both military and diplomatic benefit.

2. Cryptography and state secrecy

Cryptography also plays an important domestic function within states, by enabling state secrecy. Historically, secrecy by the state was meant to symbolise and safeguard the dignity of rulers and integrity of their functions (Hoffman, 1981), canonised by Tacitus in his history of the Roman empire under the principle of arcana imperii, or secrecy for the state (Roberts, 2006). This orientation toward cryptography also seeks to maintain a state monopoly on the practice, but to different ends.

One of the earliest examples of the extensive use of encryption by a government can be observed in the pre-modern bureaucratic systems of the Abbasid caliphate. The Abbasids fostered a vibrant commercial economy through the administration of strict laws and low tax rates. In order to maintain this system, administrators relied on the secure communication afforded by encryption to protect their tax records and sensitive affairs of state (Singh, 1999).

More often, secrecy is used to mask corruption and impropriety among sovereigns. For example, King Charles I of England used encryption extensively in his letters, which became the subject of intrigue when they were leaked and published in 1645, revealing among other things his distaste for Queen Henrietta Maria prior to their marriage. The King made the mistake of keeping unciphered drafts of the letters in his papers, making the decipherment of the remaining texts all the easier once captured. This contributed both to the embarrassment of the already beleaguered royalist cause and, at the conclusion of the English Civil War, to his execution for treason (Potter, 1989).

The embrace of secrecy has harmed states’ interests in modern times as well: for decades, the United Kingdom was unable to claim the invention of the first programmable digital computer. Because of the secret nature of the country’s wartime advances in cryptography, the UK destroyed all records of the Colossus, the programmable digital computer used by codebreakers at Bletchley Park to decrypt messages in the days leading up to D-Day. For years, the US-made Electronic Numerical Integrator and Computer (ENIAC) was believed to be the first computer, even though Colossus had been operational years earlier. The machine itself and much of the documentation about it were dismantled or destroyed after the war and kept secret until the 1970s (Singh, 1999; Coombs, 1983).

A series of scandals relating to state secrecy in the 1970s led to an embrace of openness in the United States, though this proved short-lived. The Church Committee, formed by the United States Senate, found that secrecy in the Executive Branch had led to widespread abuses of power, including the surveillance of civil rights leaders, attempted assassinations of foreign leaders, and a thirty-year programme by the US National Security Agency (NSA) to obtain copies of telegrams departing from the United States (Schwarz, 2015).

A Task Force on Secrecy concluded in 1970 that “more might be gained than lost” if the US adopted “unilaterally, if necessary - a policy of complete openness in all areas of information” (Moynihan, p. 61). The findings of the Task Force align with the observation of the sociologist Georg Simmel that “Democracies are bound to regard publicity as the condition desirable in itself. This follows from the fundamental idea that each should be informed about all the relationships and occurrences with which he is concerned, since this is a condition of his doing his part” (Simmel, 1906, p. 469).

The spread of networked technologies has opened up unprecedented opportunities for intelligence agencies, giving them new and significantly expanded capacities to collect data not only on citizens within the country but on people around the globe. Unlike during the Cold War, however, this capacity is by no means monopolised by the United States. It has led to a fracturing of the discourse within and between government agencies around the usefulness of encryption: whether they see cryptography as friend or foe is closely tied to both their incentives and their views on the role of information in national security.

For example, over the past forty years, the NSA and its UK counterpart, the Government Communications Headquarters (GCHQ), have sought to limit the use of encryption worldwide: by inserting vulnerabilities into encryption standards (for example, by compromising the random number generator in an encryption standard adopted by the US National Institute of Standards and Technology - NIST), promoting the use of backdoored encryption devices (Levy, 2001), and engaging in legal battles to enable government agencies’ access to encryption keys (Harris, 2014).

Some former national security officials have expressed support for a stance that recognises the benefits of encryption, siding with those who see privacy as a necessary part of national security, not an adversary to it (Friedersdorf, 2015). This is a view that the FBI does not share – and neither do the governments of the UK, China, India, Senegal, Egypt, and Pakistan, all of which have laws that tightly control or criminalise the public use of encryption or otherwise enable law enforcement authorities to compel decryption (Abelson et al., 2015; Levy, 2001). To complicate matters, state secrecy made a forceful return in the years of the War on Terror, resulting in the expansion of systems of classification and the adoption of secret tribunals to make critical decisions about surveillance authorisations.

Though the narrative of encryption as a tool of the state continues to be a dominant force in encryption policy, it is increasingly complicated and fraught with inter-agency conflict. Despite these complications, it remains true that when viewed through the lens of state power, encryption becomes part of a battlefield of intelligence in which states seek to exploit the weaknesses of others to their advantage.

Encryption and democratic values

The third and final domain that emerged in my research is that of encryption and democratic values. The use of codes and ciphers has a longstanding tradition in the United States, reaching back to the Revolutionary War: cryptography and the pseudonymous publication of pamphlets enabled the ideas at the heart of the revolution to circulate and gain popularity on their merits without the risk of immediate suppression by Loyalists (Nagy, 2009).

It also has important roots in the experiences of marginalised communities: for example, individuals fleeing slavery in the American South through the Underground Railroad were assisted by coded messages sewn onto quilts, displayed openly by conductors at waypoints on the trip north. The quilts would indicate safe houses and hiding places, or what kinds of resources were available to passengers in their travels, and were legible only to those with the ability to read the codes hidden within them (Rosenberg, 2003). The use of encryption technologies by communities of colour is a subject particularly deserving of more attention, given the long history of the racialised application of surveillance and its deployment as a means to reify boundaries around communities of colour and enforce their marginality (Browne, 2015).

In his book Domination and the Arts of Resistance, James C. Scott writes of practices that enable resistance in the face of the powerful. Powerless groups, he argues, often use what he calls ‘hidden transcripts’ to enact their critiques, employing disguised forms of expression such as rumours, gossip, folk tales, songs, jokes, and gestures to “insinuate a critique of power while hiding behind anonymity or behind innocuous understandings of their conduct” (Scott, 1990, p. xiii). Here, encryption is a subversive force that balances out asymmetries of power resulting from the increased surveillance capacities of both state and market actors.

By the 1980s and 1990s, amateur cryptographers were experimenting with new ideas about encryption software as an enabler of freedom (Hellegren, 2017). Calling themselves “cypherpunks”, this community envisioned a new world in which individuals would gain agency through anonymity. They anticipated the dangers of a fully connected world, and put their hopes in encryption technologies as a means to resist the forces of surveillance. For decades, they worked to build tools compatible with innovations in networked technologies that would allow citizens to disconnect, to protect their privacy, and communicate anonymously. They imagined an internet that put privacy, not connectivity, at its centre, and in so doing sought to use encryption as a form of resistance against institutional power. Their work was not without flaws: many of the tools built by cypherpunks were difficult to use, and they spent relatively little time trying to encourage mainstream computer users to adopt them. However, the evolution of ideas about cryptography in response to the advancement of networked communications between the 1970s and early 2000s laid important ideological foundations for the work of privacy advocates in the present day.

For example, Chinese netizens have developed elaborate systems of coded internet slang known as e’gao that can be used in public on social media platforms to circumvent censorship by authorities. By reappropriating common terms and their homophones to distort or subvert their commonplace meaning, everyday citizens engage in resistance against government oversight. One well-known example is a meme in which netizens adopted the term “river crab” as a stand-in for its homophone “harmonious”, the signature ideology of then-Chinese president Hu Jintao. As the construction of a “harmonious” society by Hu Jintao came to be accompanied by ever-stricter levels of censorship, netizens began saying that they were “river-crabbed” in place of “harmonised” to signal to others that their words had been censored (Nordin & Richaud, 2014). The adoption of codes in this manner enabled activists to communicate outside the purview of increasingly invasive tactics by the state.

Encryption technologies have also proven useful to whistleblowers, journalists, and human rights defenders. The most famous of these cases is Edward Snowden, who used encrypted tools to protect his communications with the journalist Glenn Greenwald and filmmaker Laura Poitras while blowing the whistle on mass surveillance by the National Security Agency. Encryption enabled Snowden to mask his communications from the NSA long enough to escape to Hong Kong and publish the initial articles from the files he leaked. But, concerningly, the use of encryption by human rights advocates has increasingly served as a justification for oppression by the state: for example, the Zone 9 bloggers, a collective of journalists in Ethiopia who write about political issues and human rights abuses, were arrested and charged, among other things, with using encryption tools to protect their correspondence with sources.

In response to such actions, there has been a recent effort to associate encryption with international human rights law. Following the Snowden revelations, the United Nations adopted a resolution on the right to privacy in the digital age. In 2013, then-Special Rapporteur on freedom of expression Frank La Rue drew a connection between the resolution and the use of encryption, writing that “States must refrain from forcing the private sector to implement measures compromising the privacy, security and anonymity of communications services, including requiring the construction of interception capabilities for State surveillance purposes or prohibiting the use of encryption” (Human Rights Council, 2013).

His successor, David Kaye, went on to link encryption explicitly to core values of human rights, arguing that it helps to lower barriers to the free flow of information and creates a zone of privacy necessary to make free expression possible (United Nations, 2015; Kaye, personal communication, 2017). Amnesty International has taken this a step further, declaring that encryption is itself an ‘enabler’ of human rights: “Encryption is a basic prerequisite for privacy and free speech in the digital age. Banning encryption is like banning envelopes and curtains. It takes away a basic tool for keeping your life private,” said Sherif Elsayed-Ali, Amnesty’s Deputy Director for Global Issues.

In seeking to associate encryption with human rights, these advocates argue that encryption may be a precondition for democratic self-expression and association, fostering zones of privacy where communities of individuals can join together without fear of surveillance. Cryptography can thus play an important role in creating possibilities for the formation of networked publics. This use of encryption is especially important for marginalised communities that are disproportionately exposed to the gaze of surveillance by corporations and the state under the conditions of surveillance capitalism (Zuboff, 2015; Browne, 2015; Eubanks, 2017).

Conclusion

My analysis treats encryption as not just a technical but a sociocultural process. Though encryption is often treated instrumentally - as a set of technologies for the protection of privacy and security - I argue that cryptography has always been intertwined with the relationship between written language and culture. This has led to the development of cryptographic imaginaries: concepts about how encryption can be used to configure relationships between information and power, embodied in technological architectures and social practices.

As I have explored in depth, several different imaginaries centred on encryption have arisen, each of which develops a distinct understanding of its purpose and use. The existence of multiple co-existing cryptographic imaginaries is in part why encryption has become the subject of so much controversy: encryption debates not only centre on different ideas about policy, or about what is mathematically possible; they also invoke fundamentally different ideas about the value systems and power discrepancies encryption addresses.

For policymakers attuned to thinking of encryption as a tool for criminals and terrorists, its value as a tool for the protection of privacy may feel trivial. For military and intelligence professionals who see cryptography as a valuable national security resource, it makes sense that it would be regulated in a similar fashion to weaponry. For activists and human rights defenders who rely on cryptography to safely conduct their work, access to cryptography is an enabler of democratic freedoms and necessary precondition for free expression.

Each of these perspectives is informed by particular configurations of access to information, and thus particular ideas about the role of cryptography in a networked society. As I have outlined, cryptography can serve as a corrective for some of the harms networked communications infrastructures make possible - namely, that the technologies that connect and empower us can also be used to surveil and hurt us. Cryptography can create new spaces of possibility for communities to form in an environment of mass surveillance; it can enable those with marginalised identities or marginalised views to create spaces for expression and cultivate relationships with like-minded individuals.

Our ability to communicate with one another across time and space through writing is accompanied by an inevitable need to retain a zone of privacy and disconnection. As the historian of cryptography David Kahn writes, “as soon as a culture has reached a certain level, probably measured largely by its literacy, cryptography appears spontaneously – as its parents, language and writing, probably also did. The multiple human needs and desires that demand privacy among two or more people in the midst of social life must inevitably lead to cryptology wherever men thrive and wherever they write” (Kahn, 1967, p. 84).

The imaginaries we develop around the cultural meaning of cryptography will inevitably surface in what kinds of encryption technologies are built, adopted, and implemented in infrastructure. They shape the regulatory policies designed to govern them. Lastly, and perhaps most importantly, they emerge in our social imaginaries about the possibilities of our networked infrastructure.

References

Abelson, H., Anderson, R., Bellovin, S. M., Benaloh, J., Blaze, M., Diffie, W., Neumann, P. G. (2015). Keys under doormats: mandating insecurity by requiring government access to all data and communications. Journal of Cybersecurity, 1(1), 69–79. doi:10.1093/cybsec/tyv009

Agre, P. E. (1997). Computation and human experience. Cambridge, UK: Cambridge University Press.

Amnesty International. (2016). Encryption: A Matter of Human Rights. Amnesty International. Retrieved from https://www.amnestyusa.org/reports/encryption-a-matter-of-human-rights/2/

Angwin, J. (2014). Dragnet Nation: A quest for privacy, security and freedom in a world of relentless surveillance. New York, NY: Times Books.

Barzilai-Nahon, K. (2008). Toward a Theory of Network Gatekeeping: A Framework for Exploring Information Control. Journal of the American Society for Information Science and Technology, 59(9), 1493-1512. doi:10.1002/asi.20857

Benkler, Y. (2016). Degrees of Freedom, Dimensions of Power. Daedalus, 145(1), 18-32. doi:10.1162/DAED_a_00362. Available at http://www.benkler.org/Degrees_of_Freedom_Dimensions_of_Power_Final.pdf

boyd, d. (2010). Social Network Sites as Networked Publics: Affordances, Dynamics, and Implications. In Z. Papacharissi (Ed.), Networked self: Identity, community, and culture on social network sites. Abingdon, UK: Routledge.

Browne, S. (2015). Dark matters: On the surveillance of blackness. Durham, NC: Duke University Press.

Calaway, J. C. (2003). Benjamin Franklin’s Female and Male Pseudonyms: Sex, Gender, Culture, and Name Suppression from Boston to Philadelphia and Beyond (Honors Project). Illinois Wesleyan University. Retrieved from https://digitalcommons.iwu.edu/history_honproj/18/

Castells, M. (1996). The rise of the network society. Cambridge, MA: Blackwell.

Couldry, N. and Hepp, A. (2016). The Mediated Construction of Reality. Cambridge, UK: Polity Press.

Deibert, R. (2013). Black Code: Inside the Battle for Cyberspace. Toronto, CA: McClelland & Stewart.

DeSeriis, M. (2015). Improper Names: Collective Pseudonyms from the Luddites to Anonymous. Minneapolis, MN: University of Minnesota Press.

Eamon, W. (1985). From the Secrets of Nature to Public Knowledge: The Origins of the Concept of Openness in Science. Minerva, 23(3), 321-347. doi:10.1007/BF01096442

Edwards, P. (1996). The Closed World: Computers and the Politics of Discourse in Cold War America. Cambridge, MA: MIT Press.

Ellison, K. (2017). A cultural history of early modern English cryptography manuals. Abingdon, UK: Routledge, Taylor & Francis Group.

Eubanks, V. (2017). Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. New York, NY: St. Martin’s Press.

Eve, M. (2016). Password. New York, NY: Bloomsbury.

Fagone, J. (2017). The Woman Who Smashed Codes: A True Story of Love, Spies, and the Unlikely Heroine Who Outwitted America’s Enemies. New York, NY: Dey Street Books.

Friedersdorf, C. (2015, July 30). Former National-Security Officials Now See the Peril of Weakening Encryption. The Atlantic. Retrieved from https://www.theatlantic.com/politics/archive/2015/07/former-national-security-officials-see-the-peril-of-weakening-encryption/399848/

Gill, L. (2018, in press). Law, Metaphor, and the Encrypted Machine. Osgoode Hall Law Journal, 55(2). Working paper version retrieved from https://ssrn.com/abstract=2933269

Gillespie, T. (2006). Engineering a Principle: ‘End-to-End’ in the Design of the Internet. Social Studies of Science, 36(3), 427-457. doi:10.1177/0306312706056047

Gürses, S., Kundnani, A., & Van Hoboken, J. (2016). Crypto and empire: The contradictions of counter-surveillance advocacy. Media, Culture & Society, 38(4), 576-590. doi:10.1177/0163443716643006

Harris, S. (2014). @WAR: The rise of the military-Internet complex. Boston, MA: Houghton Mifflin Harcourt.

Hellegren, Z. I. (2017). A history of crypto-discourse: encryption as a site of struggles to define internet freedom. Internet Histories, 1(4), 285–311. doi:10.1080/24701475.2017.1387466

Homeland Security Committee. (2016, June) Going Dark, Going Forward: A Primer on the Encryption Debate. House Homeland Security Committee Majority Staff Report. Retrieved from https://homeland.house.gov/press/house-homeland-security-committee-releases-encryption-report-going-dark-going-forward-primer-encryption-debate/

Huffington Post. (2009). Google CEO On Privacy (VIDEO). Huffington Post. Retrieved from http://www.huffingtonpost.com/2009/12/07/google-ceo-on-privacy-if_n_383105.html

Hull, D. (1985). Openness and Secrecy in Science: Their Origins and Limitations. Science, Technology & Human Values, 10(2), 4-13. doi:10.1177/016224398501000202

Human Rights Council. (2013). Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression. United Nations General Assembly.

Jagodzinski, C. (1999). Privacy and print : Reading and writing in seventeenth-century England. Charlottesville, VA: University Press of Virginia.

Kahn, D. (1967). The codebreakers; the story of secret writing. New York, NY: Macmillan.

Kelty, C. (2005). Geeks, Social Imaginaries, and Recursive Publics. Cultural Anthropology, 20(2), 185-214. doi:10.1525/can.2005.20.2.185

Kreiss, D., & McGregor, S. (2017). Technology Firms Shape Political Communication: The Work of Microsoft, Facebook, Twitter, and Google With Campaigns During the 2016 U.S. Presidential Cycle. Political Communication, 35(2), 155-177. doi:10.1080/10584609.2017.1364814

Lauer, J. (2017). Creditworthy: A History of Consumer Surveillance and Financial Identity in America. New York, NY: Columbia University Press.

Levy, S. (2001). Crypto: How the code rebels beat the government – saving privacy in the digital age. New York, NY: Viking Books.

Macrakis, K. (2010). Confessing Secrets: Secret Communication and the Origins of Modern Science. Intelligence and National Security, 25(2). doi:10.1080/02684527.2010.489275

Marcus, G. (1995). Ethnography in/of the World System: The Emergence of Multi-Sited Ethnography. Annual Review of Anthropology, 24, 95-117. doi:10.1146/annurev.an.24.100195.000523

Mundy, L. (2017). Code Girls: The Untold Story of the American Women Code Breakers of World War II. New York, NY: Hachette Books.

Nagy, J.A. (2010). Invisible Ink: Spycraft of the American Revolution. Yardley, PA: Westholme.

Nordin, A. & Richaud, L. (2014). Subverting official language and discourse in China? Type river crab for harmony. China Information, 28(1). doi:10.1177/0920203X14524687

Oxford. (2017). Encryption. Oxford English Dictionary. Retrieved from https://en.oxforddictionaries.com/definition/encryption

Pasquale, F. (2014). The Black Box Society. Cambridge, MA: Harvard University Press.

Potter, L. (1989). Secret rites and secret writing: Royalist literature, 1641-1660. Cambridge, UK: Cambridge University Press.

Rogaway, P. (2015). The Moral Character of Cryptographic Work. IACR Cryptology ePrint Archive, 1162. Available at https://eprint.iacr.org/2015/1162.pdf

Rosenberg, A. (2003). Cryptologists: Life Making and Breaking Codes. New York, NY: Rosen Publishing Group.

Schneier, B. (2015). Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World. New York, NY: W.W. Norton and Co.

Schwarz Jr., F. (2015). Democracy in the Dark: The Seduction of Government Secrecy. New York, NY: The New Press.

Scott, J.C. (1990). Domination and the Arts of Resistance: Hidden Transcripts. New Haven, CT: Yale University Press.

Shapin, S. and Schaffer, S. (1985). Leviathan and the Air-Pump. Princeton, NJ: Princeton University Press.

Simmel, G. (1906). The Sociology of Secrecy and of Secret Societies. American Journal of Sociology, 11(4), 441-498. Available at http://www.jstor.org/stable/2762562

Singh, S. (1999). The code book: The evolution of secrecy from Mary Queen of Scots to quantum cryptography. New York, NY: Doubleday.

Solove, D. J. (2007). “I’ve got nothing to hide” and other misunderstandings of privacy. San Diego Law Review, 44(4), 745-772. Available at https://scholarship.law.gwu.edu/faculty_publications/158/

Star, S. L., & Ruhleder, K. (1996). Steps toward an ecology of infrastructure: Design and access for large information spaces. Information systems research, 7(1), 111-134. doi:10.1287/isre.7.1.111

Star, S. L. (1999). The Ethnography of Infrastructure. The American Behavioral Scientist, 43(3), 377-391. doi:10.1177/00027649921955326

Taylor, C. (2004). Modern Social Imaginaries. Durham, NC: Duke University Press.

United Nations. (2015). Report on encryption, anonymity, and the human rights framework. United Nations Human Rights Office of the High Commissioner. Retrieved from http://www.ohchr.org/EN/Issues/FreedomOpinion/Pages/CallForSubmission.aspx

Van Dijck, J. (2013). Facebook and the engineering of connectivity: A multi-layered approach to social media platforms. Convergence, 19(2), 141-155. doi:10.1177/1354856512457548

West, S. M. (2017). Data Capitalism: Redefining the Logics of Surveillance and Privacy. Business & Society. doi:10.1177/0007650317718185

Williams, J. (2001). The Invisible Cryptologists: African Americans, WWII to 1956. Center for Cryptologic History, National Security Agency. Retrieved from https://www.nsa.gov/about/cryptologic-heritage/historical-figures-publications/african-americans/

Zuboff, S. (2015). Big other: Surveillance capitalism and the prospects of an information civilization. Journal of Information Technology, 30(1), 75-89. doi:10.1057/jit.2015.5

Footnotes

1. Though this is a global debate, taking place in the US, EU, Australia, Brazil, China and elsewhere, my analysis, admittedly, will be most representative of American policy discourses. Additional study of these issues in non-US, and particularly non-Western, contexts is of great value.

2. The notion that there is a binary opposition between privacy and security is contested, see: Gill, 2018 (in press) and Abelson et al., 2015.

3. See, for example: Mundy, L. (2017). Code Girls: The Untold Story of the American Women Code Breakers of World War II. New York, NY: Hachette Books; Fagone, J. (2017). The Woman Who Smashed Codes: A True Story of Love, Spies, and the Unlikely Heroine Who Outwitted America’s Enemies. New York, NY: Dey Street Books; Williams, J. (2001). The Invisible Cryptologists: African Americans, WWII to 1956. Center for Cryptologic History, National Security Agency. Retrieved Mar. 31, 2018, from https://www.nsa.gov/about/cryptologic-heritage/historical-figures-publications/african-americans/.

Not just one, but many ‘Rights to be Forgotten’


Introduction

This paper describes the wide spectrum of interpretations of calls for the Right to be Forgotten (‘RTBF’) across countries and data protection authorities (‘DPAs’). The paper does not discuss the European judgment itself, which led to the RTBF,1 or its general relation with public and private international law: this has been done elsewhere.2 Rather, we compare the different ways the RTBF has been decided in different jurisdictions. This comparative analysis concludes that the RTBF appears to lack a clear conceptualisation, which then translates into multiple – and sometimes contradictory – approaches by domestic courts.

In our analysis of the relevant cases, we included the following nations: Belgium, The Netherlands, the United Kingdom, France, Germany, Poland, Argentina, Chile, Mexico, Colombia, Brazil, and Peru. Reviewing cases across these nations, we looked for four key variables in particular:

  1. Who is the applicant or plaintiff?
  2. Who is the defendant? In particular, which company is targeted by the case: a local subsidiary, the parent company, or both?
  3. If removal is ordered, which domain did the ruling target: the local domain and/or the parent company’s domain (typically a .com domain)?
  4. Finally, even when a court or authority orders removal, does it assert that it can do so ‘globally’ (meaning that the content is no longer consultable by anyone accessing the domain, whether located in the country of the court or elsewhere), or does it merely require the defendant to ensure removal from more than just the local domain name (the .com domain in particular), while leaving consultation of that domain untouched for users outside its jurisdiction? In the latter option, search results in other countries are not affected.

Comparing nations on the basis of these four variables, the review found that there was no unified approach to the RTBF in the national courts we researched, especially with regard to which defendants are asked to remove search results (or to de-link search results from particular URLs). On the contrary, there is wide variety. That said, however, cases almost always involve Google.

Therefore, from this point on, this research focused mainly on cases brought against, or otherwise involving, Google. A key element in these decisions was that the local subsidiary of Google was usually involved in the case. Often Google Inc. (as the parent company) was co-sued. Yet even when it was, final judgments rarely identified the exact party being asked to remove links, with the notable exception of cases in the Dutch courts. In one of the two cases where extension to the .com domain was specifically discussed (Cologne; further discussed below), it was rejected – an approach which in our view was also legally correct.3 In the other (Rotterdam, 2016; see below), the Dutch court stated that it did not see why the dispute should be limited to Google.nl and accompanied this view with a specific instruction to extend removal to the .com domain.

This analysis showcases the many ways the RTBF has been tackled by different domestic courts, allowing the many issues surrounding the RTBF to rise to the surface. The main contribution of this work consists of identifying those issues and the multiple solutions national courts have found to address the new challenges presented by the RTBF, as well as the need to reconcile privacy and freedom of expression in the age of the internet.

Europe

First, we focus on how the RTBF has been incorporated and analysed by multiple jurisdictions within the EU. At a superficial level, after the Google Spain decision in 2014, there was an explosion of cases involving individuals and corporations seeking to erase or delete negative information on the internet. Nonetheless, a closer review suggests that domestic courts have interpreted the RTBF in widely different manners over time. A key example observed in several cases in this research concerns the question of the domain from which the information should be restricted or deleted. Due to the very nature of the internet, and particularly of search engines such as Google, most information could still be found even after links were dropped from the local domain for a particular country. Over the years, there has been an increasing push to extend these RTBF applications to .com domains across the globe.

While some jurisdictions have delved into these global vs. domestic concerns regarding the RTBF, other courts have focused more on the jurisdictional issues arising from the fact that many of these giant search engine companies are not based or incorporated in their respective jurisdictions. Still other courts have tried to make the RTBF fit their pre-Google Spain notions of data protection, typically by expanding the protective regime.

RTBF in Belgium

In Belgium there has been very little case law on the removal of data and the interpretation of Google Spain. Only one case can be found through our search of available databases, and it does not involve Google as a party.

The case originated in Liège and went to the Supreme Court in April 2016. It concerned a claim against a Belgian newspaper, which had opened a new digital archive in which it stored old editions.4 An article providing full personal details could be found in this digital archive, relating to a doctor accused of drunk driving who had caused a car accident many years earlier. He requested that the newspaper anonymise the article. The newspaper refused. The Court of Appeal had listed many criteria which had to be fulfilled in order to give priority to the right to privacy over the freedom of the press. It found all of these criteria to have been fulfilled in the case at issue. The Supreme Court later confirmed this ruling, in favour of the plaintiff.

The criteria set out required that the content had to be a description of facts; that there was no specific reason to publish the article again; that the content of the article had no historic value; that a certain amount of time had passed between the first and second publication; that the person(s) involved was not a public figure; that all debts (i.e., the sentence) had been paid; and that the person involved had been rehabilitated.

In this case the Court of Appeal concluded that the publisher had to amend the article. The Supreme Court then confirmed that the right to be forgotten – the exact term used is somewhat of a literal translation into Dutch (‘recht op vergetelheid’) – can result in a restriction of the freedom of the press. This interpretation of the ‘RTBF’ had already been applied in earlier case law, and is therefore not new after Google Spain.

In March 2014 the Court of First Instance in Brussels had already confirmed that there are three criteria that trigger the RTBF: a) the facts have to relate to a judicial matter; b) they must already have been published (and now appear again); and c) it has to be shown that there is no legitimate interest in the redistribution of the facts.5

RTBF in The Netherlands

Two relevant Dutch cases have touched upon international jurisdiction. In the first (October 2015), the issue of jurisdiction was raised; in the end, however, jurisdiction was quite easily accepted and Dutch data protection law was applied without much debate.

In the other case (Rotterdam, 2016) the court extended its ruling to the google.com domain. It argued that even if the search engine automatically redirects to a local extension when google.com is used, this is no guarantee that a computer situated in the Netherlands will only see results provided via google.nl. This, the court held, depends on the IP settings of the computer, in that a user could quite easily change the virtual location of a device.

However, in other cases the Dutch courts did not discuss the jurisdictional issues at all. Rather, they proceeded immediately to balancing the right to privacy against the right to freedom of the press, or against the legitimate interest of society in accessing information. In some cases Google Spain was mentioned, while in others the Data Protection Directive, or national data protection law, was relied upon without reference to Google Spain.

In a recent case involving a request to delete certain search results, the court in The Hague argued that Google Spain did not imply that every subsidiary is responsible for content placed online by another company in the same corporate group. In this judgment it was held that Google Netherlands was present in the Netherlands for marketing purposes only and could therefore not be held accountable for (in)action by its parent company. The claim against Google Netherlands was not further entertained. Whether this particular judgment is in line with the Google Spain case is far from certain.

RTBF in the United Kingdom

In the United Kingdom the courts have seemed more concerned with the fact that Google Inc. is a party not established within their jurisdiction. Cases against such defendants can only go ahead after the Court gives permission to ‘serve outside the jurisdiction’.

In Vidal-Hall (2014) and on appeal (2015)6 the plaintiffs brought a case against Google Inc. because Google had collected data in the form of cookies, sent from the plaintiffs’ web browsers, for marketing purposes, without the consent of the users. The judge addressed the fact that the defendant was situated in California and held that the proceedings could only be served upon it there if the following conditions were met: (i) that there is a serious issue to be tried on the merits of the claims, i.e., that the claims raise substantial issues of fact or law or both; (ii) that there is a good arguable case that the claims come within one of the jurisdictional 'gateways' set out in the relevant English rules (CPR PD 6B); (iii) that in all the circumstances, England is clearly or distinctly the appropriate forum for the trial of the dispute; and (iv) that in all the circumstances, the court ought to exercise its discretion to permit service of the proceedings out of the jurisdiction.

As for those jurisdictional ‘gateways’, the court decided that the matter fell under ‘tort’ in section 3.1(9) of the CPR Practice Direction 6B. This Practice Direction is a piece of procedural legislation and is as such not related to data protection.

The remainder of the case was concerned with whether the plaintiffs fulfilled the other conditions. Google pleaded that none were fulfilled; the court, however, held that all were. Google’s practices were found to be a misuse of private information, which was and is considered a tort under the CPR. (Of note for understanding the judgment: traditional common law does not recognise a tort of invasion of ‘privacy’.)

In Mosley v Google Inc. & Anor (2015)7 the plaintiff sought to have Google Inc. break the link between certain searches and the search results which led to damaging images of him and a prostitute caught in a newspaper sting operation. The judgment applied Google Spain to rule that in Mosley, too, Google was the controller of data for the purposes of the Data Protection Directive. The only review of jurisdiction came at an earlier stage, when the High Court granted permission to serve the claim form on Google.

RTBF in France

In France, the national data protection agency (CNIL) held that the right to delisting could only be effective when carried out on all extensions of the search engine, not only local or EU extensions – or .com for that matter. The CNIL was of the opinion that removal should extend to any possible extension, even though Google already ensured that when it removed certain information accessible on a local extension, that information was no longer visible from any device located in the EU, including via .com.8 Judicial review of this decision is now pending at the Conseil d’Etat, which has referred the case to the European Court of Justice (case number C-136/17, pending at the time of writing).

RTBF in Germany

In Germany the same reasoning was used as in the Netherlands when it comes to applying national data protection law to a company situated outside of the EU. Even a controller without a server in Germany may be subject to German data protection law, as the data are processed on the device a person is using in Germany. This is apparently a purely academic debate, as there is no jurisprudence on it (yet).9

In a case against Facebook, the courts in Hamburg ruled that German data protection law did not apply to the data processing operation necessary to give individuals access to the social network page, as this is not done by the German establishment of Facebook but by the establishment in Dublin, Ireland. The court did not find this inconsistent with Google Spain, as it treated the two cases as distinct. The court held that ‘carried out in the context of the activities’ was only to be construed broadly when the controller was established outside of the EU, not, as was the case here, inside the EU, since in the latter situation the EU data protection rules would apply anyway and there would thus be no loss of protection for the individual.10

The courts in Cologne11 specifically upheld jurisdiction in a libel case against Google.de alone, for that was the website aimed at the German market. They rejected extension of the removal order to Google.com, in spite of the possibility for German residents to reach Google.com, because, the court argued, that service was not intended for the German-speaking area and anyone wanting to reach it had to do so intentionally.

RTBF in Poland

Individuals who wanted to invoke the RTBF in Poland usually started an administrative procedure with the national data protection authority (GIODO); not that many cases actually reached the courts. GIODO used Google Spain and generally followed its line of reasoning.12

One decision of the GIODO is of particular interest: it followed the lines of Google Spain against Facebook Poland. GIODO held that even though the Polish branch of Facebook was there for marketing purposes only, it could still be subject to an order to remove data from the US-controlled Facebook servers.13 This decision was therefore the exact opposite of the conclusions drawn by the Dutch courts.

In a case against a company which had made an individual’s private data public, Poland’s highest administrative court14 sent a judgment back to the regional court, which then invoked Google Spain and concluded that the company which made the data public could be considered a ‘controller’ of data within the meaning of the Data Protection Directive.

Latin America

Unlike the situation in the European Union, Latin American jurisdictions have lacked a single landmark decision – like Google Spain – that could serve as a reference point to help them decide cases dealing with data protection, the RTBF, and freedom of expression. Instead, some jurisdictions have made express references to the Google Spain decision in their judgments and have tried to build a domestic jurisprudence involving the RTBF. The main challenge the region faces is the complete lack of legislation enshrining the RTBF. This, however, has not stopped the development of this right within Latin American legal systems.

Many jurisdictions – like Argentina – attempted to address the RTBF by relying on their pre-Google Spain data protection and privacy laws, while others, such as Colombia, drew on the European experience in trying to address the increasing need to tackle issues related to data protection in the ‘age of the internet’.

A particular issue addressed by many Latin American jurisdictions concerned the risks posed by the RTBF, especially in light of the region’s terrible history of human rights violations. Other countries raised real concerns about the use of the RTBF as a tool to hinder the freedom of the press when investigating corruption or abuses of power.

The RTBF in Argentina

Currently there is no law or regulation in Argentina that expressly deals with the RTBF (draft bills are being discussed). Courts have employed the general civil liability regime to address cases related to internet intermediaries. Key cases are highlighted below. Not all of these cases involve an RTBF claim: some concern the removal of illicit material, while the RTBF strictly speaking applies to material that is not in and of itself illicit. These cases were nevertheless included to the extent that they highlight the overall context for the geographical scope of court rulings.

In Esteban Bluvol v. Google (2012)15 the court of first instance ruled against Google, finding that it bore objective civil liability under the Civil Code. A Court of Appeals reversed that ruling, determining that Google, as an intermediary, was not automatically liable for the defamatory conduct of third parties. Nevertheless, the appeals court ruled that Google was subjectively liable under the Civil Code, meaning that Google’s conduct was negligent. In this case, the Appeals Court ruled that search engines become liable once they have been notified of the existence of infringing content and fail to remove access to it. In this case Google Inc. and its Argentinian subsidiary were sued.

In Da Cunha v. Yahoo and Others (2010)16 the first instance court ruled against Yahoo and Google. However, this decision was reversed on appeal. The Appeals Court applied the subjective liability regime established in the Civil Code, determining that internet intermediaries were liable once they had been notified of the existence of the illicit content and failed to remove it. The case was finally decided by the Supreme Court, which confirmed the Appeals Court decision, following the case law set out in Rodriguez v. Google, explained below. In this case Yahoo Argentina and Google Inc. were sued as defendants; Google’s local subsidiary was not named in the lawsuit.

In Florencia Peña v. Google (2013)17 the court granted a provisional remedy ordering Google to block all search results depicting the plaintiff engaging in sexual acts, not limited to any particular URL. This case involved Google Inc. as the main defendant, while Google’s local subsidiary was not sued.

In Carrozo v. Yahoo de Argentina and Others (2013)18 the appeals court ordered Yahoo and Google to indemnify the plaintiff for the use of her image on pornographic websites. The court determined that the internet intermediaries were objectively liable, since their activities were inherently risky, which made them automatically liable for any damages caused. Moreover, the court reasoned that search engines locate matches for the words searched by the user, thereby creating a reference to the search result as well as a cache of the website’s content. The court therefore concluded that all content accessed through a search engine’s website, including while performing a search, was under the search engine provider’s control. In this case Yahoo Argentina and Google Inc. were sued as defendants; Google’s local subsidiary was not named in the lawsuit.

In 2014 the Supreme Court of Justice of Argentina (SCJ) stepped in and laid out the concrete requirements to establish the liability of internet intermediaries. The case involved Yahoo Argentina and Google Inc. as defendants, while Google’s local Argentinian subsidiary was not sued.

In Rodríguez, María Belén v. Google (2014)19 the SCJ ruled that internet intermediaries were not objectively liable for the content shown in their search results, since this would be contrary to their freedom of expression. Nonetheless, the SCJ ruled that these intermediaries did become liable once they had been properly notified of the existence of the illicit content and failed to remove it.

The SCJ established the mechanism for the ‘proper notification’ of the intermediaries, as well as what constitutes ‘manifestly illicit content’. The court ruled that ‘proper notification’ could only be a judicial order issued by a court. In this regard, the court determined that content involving child pornography, data that enables or facilitates the commission of a crime, and content that endangers the life or physical integrity of persons, amongst others, was to be considered ‘manifestly illicit content’.

The case law established in the Rodriguez case (2014) was upheld in two more Supreme Court cases, Da Cunha v. Yahoo SRL and Lorenzo, Barbara v. Google Inc.20 The latter case involved Google Inc. as the main defendant.

Currently, neither the case law nor the proposed bills have addressed whether the RTBF in Argentina would require internet intermediaries to also block content on their .com domains.

The RTBF in Chile

Currently there are no laws or regulations that deal with the RTBF in Chile. However, a bill was debated in the Chilean Congress which would have granted citizens the right to ask search engines or websites to block or take down content from the internet. The debate on that bill is as yet unresolved.

Given the lack of normative recognition and treatment, case-law on the matter is not entirely settled. The Chilean Supreme Court of Justice (SCJ) in one recent case ordered21 the removal of a news article published more than a decade ago on the website of El Mercurio, one of the biggest and oldest newspapers in Chile. The SCJ ruled that maintaining this news article for more than ten years on the newspaper’s website allowed it to be reached by search engines, which violated the plaintiff’s rights to honour and privacy.

The Supreme Court determined that news agencies’ right to freedom of the press allows them to investigate and publish news that is of public interest. However, the passage of time makes news less relevant – unless new events make it relevant once more – at which point the RTBF overrides the right to freedom of the press. The Court said that, as long as a news item has current relevance, the right to freedom of the press trumps the individual’s RTBF, but that this balance shifts in favour of the RTBF once the news ceases to be relevant. Nonetheless, the SCJ made clear that there are two exceptions to this rule: news that is historically important or that deals with matters of historical interest; and news related to public persons in the performance of a public act. The SCJ’s ruling not only ordered the removal of the content from the website that hosted it, but also its removal from the newspaper’s search engine. However, in an even more recent ruling at the very end of December 2017, the court decided in favour of the Chilean Center for Investigative Journalism and Information, CIPER, against a doctor’s request to remove a report about medical malpractice from CIPER’s site.

Currently there is no clarity as to whether the RTBF, when upheld, is applicable to .com domains or only to the local search engine domain (.cl). The bill under debate did not refer to its own scope of application; rather, it used the broad term ‘search engines’ without referring to the territorial effect that the RTBF may have.

The RTBF in Mexico

Currently there are no laws or regulations that deal with the RTBF in Mexico, and there has been one known case involving the RTBF. The case of Carlos Sanchez v. Google Mexico (2015)22 involved Google Mexico as the sole defendant. The case began after a powerful Mexican businessman applied to the Mexican data protection agency – the Instituto Federal de Acceso a la Información y Protección de Datos, now called INAI – for an order requiring Google Mexico to de-list news articles exposing alleged corruption between this businessman and government officials. Google Mexico argued that the management of the search engine ‘Google Search’ was in the hands of Google Inc., a US corporation. Moreover, the company maintained that the content in question was hosted and maintained by a third party outside of Google Mexico’s control. Nonetheless, the INAI, analysing the company’s statutes, determined that there was a sufficient link between Google Mexico and Google Inc. to compel the former to abide by the decision to remove access to the news articles.

The INAI ordered Google Mexico to remove access to the content and to remove any content related to the links provided by the applicant. The INAI’s decision was challenged in the courts by Revista Fortuna, the publication whose content was being blocked from access. Later, an appeals court annulled23 the decision against Google Mexico on procedural grounds, since the administrative procedure had not allowed Revista Fortuna to defend its legitimate rights as the owner of the content.

The INAI did not specify the scope of enforcement of the ruling; that is, it did not determine whether the order of removal was to be limited to the .mx domain or whether it had to include the .com domain. The fact that the order was addressed to Google’s local subsidiary could suggest that it should be limited to the .mx domain.

The RTBF in Peru

Currently there are no laws or regulations that specifically deal with the RTBF in Peru. However, the Personal Data Protection Act has recently been employed to grant Peruvian citizens the right to ask websites and search engine providers to remove or block access to content that violates Peruvian law.

In March 2016, the Peruvian Data Protection Agency (DGPDP, for its acronym in Spanish) ruled against Google Peru and Google Inc.,24 ordering them to pay fines and to remove access to certain content related to a Peruvian citizen from their search engine. The DGPDP ruled that the Peruvian Data Protection Act was applicable to both Google Peru and Google Inc., since Google’s search engine performed searches across the entire web globally, including websites and servers located on Peruvian territory; therefore, the agency concluded, it fell within the scope of the Peruvian Data Protection Act. Furthermore, the agency determined that it had to analyse the ‘nature of the matter’, which required it to consider the global reach of Google Inc. In that sense, the agency determined that Google Search, as a service, is accessible to Peruvian citizens and to devices located on Peruvian territory, which creates a jurisdictional link to Peruvian legislation. The jurisdictional reach expressed in this case was therefore extraordinarily large.

The DGPDP also ruled that it had jurisdiction to rule on the matter because Google Search had a specific search engine for Peru (the .com.pe domain), which showed content produced or hosted in Peru, gathered personal data from Peruvian citizens or residents, and even allowed users to choose between Spanish and Quechua (both official languages of Peru). Moreover, the agency determined that the fact that Google provided advertisements – specifically tailored for Peruvian residents and citizens, for services in the Peruvian market – also meant that it had jurisdiction and that the applicable law was the Peruvian Data Protection Act.

The DGPDP explicitly referred to the CJEU’s Google Spain case, though altering the concept into a ‘cancellation right’. The agency ordered ‘Google’, in the person of either Google Peru or Google Inc., to block access to the content in question from its Google Search services. Moreover, ‘Google’ (again, either Google Peru or Google Inc.) was ordered to pay fines for breach of the Data Protection Act.

The DGPDP did not specify the scope of enforcement of the ruling; that is, it did not determine whether the order of removal was limited to the .pe domain or whether it had to include the .com domain. Since the order was addressed to both Google Inc. and its local subsidiary, it is not possible to infer to which domain enforcement was limited.

The RTBF in Colombia

Currently there are no laws or regulations that specifically deal with the RTBF in Colombia. However, the Colombian Constitutional Court recently ruled on the issue of the RTBF within the Colombian legal system.25 Although the lawsuit was brought against the newspaper El Tiempo as sole defendant, the plaintiff asked the judge to order the defendant to block and erase from all available search engines – specifically from Google.com – any negative information related to the plaintiff.

Google Colombia participated in the proceedings as an interested third party, arguing that it did not have control over the search engine – either the .com or .com.co domains – nor could Google Colombia be found liable for any violation of the plaintiff’s rights, since it had a legal personality separate from Google Inc. Moreover, Google Colombia argued that the owner of the content alone was responsible for the content hosted on its website.

In Gloria v. Casa Editorial El Tiempo, the Constitutional Court performed a detailed balancing test between the right of free speech and information, the principle of net neutrality, and the right to honour and privacy. The court ruled that the principle of net neutrality was protected by the right of free speech and information. Moreover, the court determined that it could not order Google.com to block the search results from its search engine, considering that this would impose an undue restriction on the right of free speech and information. The court made references to Google Spain, but ultimately considered that such an order would constitute an unnecessary sacrifice of the right of free speech and information, and of the principle of net neutrality, thus failing the court’s proportionality test.

The court warned that, should search engines be made responsible for what third parties have created on the internet, this would transform them into censors or managers of content, which the court ruled to be against the very architecture of the internet itself.

The court found that Google was not responsible for the content. Moreover, the court expressly declined to order Google to de-index the information, because it felt that such an order would not protect the principle of net neutrality, which could only be restricted exceptionally.

The court ruled that it was not Google’s indexation of the information that violated the plaintiff’s rights, but the diffusion of an outdated news article by the defendant. Therefore, Google was found not to be responsible for the violation of the plaintiff’s rights, leading the court to refuse to issue orders to Google.

The court expressly limited this restriction of the right of freedom of speech and information to criminal cases, considering that these cases were of a more harmful nature to individuals’ rights to honour and privacy. Furthermore, the restriction was allowed in cases involving news that had remained ‘permanently’ on the internet. This suggests an approach similar to that taken by the Chilean Supreme Court regarding news no longer considered ‘newsworthy’ through the passage of time. Finally, the court decided that this restriction did not extend to public figures or public servants, or to events involving crimes against humanity or human rights violations, since these events form part of the building of the ‘national historical memory’, whose importance supersedes the individual’s interest.

Currently, this is the only case in which removal or blocking of content from the Google.com domain was expressly requested and ruled upon, although the resulting order was addressed to the publisher rather than to Google.

The RTBF in Brazil

Currently there is no law or regulation that expressly deals with the RTBF in Brazil. However, a bill was discussed in the Brazilian legislature that would have modified the Brazilian Marco Civil da Internet (the bill of rights for the internet) by including a very wide RTBF. The bill would have granted the courts the competence to order the removal of content, and not just mere de-listing.

The Superior Court of Justice (SCJ), Brazil’s highest court for non-constitutional issues, recently decided a landmark case dealing with the responsibility and liability of search engine providers. The decision26 came in response to Google Brazil’s appeal against a judgment which had ordered Google’s local subsidiary to remove certain content from its search engine’s database. The SCJ ruled that it was not the obligation of search engine providers to remove search results, but rather that the content owner was responsible for the content itself. Moreover, the court said that search engine providers, by the nature of their service, do not pre-screen the content returned for a user’s search criteria. The court also determined that search engine providers could not be ordered to filter their search results for particular terms, phrases, images, or text without the indication of a specific URL.

It is important to highlight that the SCJ did not expressly rule on the applicability of the RTBF in the case, but rather looked solely at the liability regime to determine whether Google Brazil was under an obligation to remove access to the content.

The SCJ has ruled in two cases on the existence and applicability of the RTBF within the Brazilian legal system. In both, Rede Globo, the largest commercial television network in Brazil, was the sole defendant. In both cases the SCJ had to rule on whether reporting on crimes that had occurred many decades ago could serve as grounds for the application of the RTBF.

The SCJ ruled in one case27 that the defendant had violated the plaintiff’s rights to honour and dignity by presenting him as a co-author of a crime of which he had previously been found not guilty. The court determined that the RTBF applied in cases where the person had been acquitted of a crime or, having been found guilty, had served their sentence. The SCJ affirmed the amount of damages to be paid by the defendant.

In the second case,28 the SCJ ruled that the historical importance of a crime or event may outweigh the RTBF and the rights to honour and dignity. The court ruled that the name of the victim was so inextricably linked with the crime itself that the portrayal of the events would be impossible without using the victim’s name. It thus determined that the right to freedom of the press should prevail. Moreover, the court also determined that the very passage of time had – in a way – made people forget about the crime and had therefore minimised the pain the victim’s family might feel when seeing the name and images of the victim portrayed and broadcast in the media.

More recently, the Superior Court of Justice decided a case involving Google Brazil. The court held that forcing search engine providers to remove access to content from their search engines would impose an intolerable burden on them, and that this responsibility would, in turn, make these companies into a form of digital censor.29 The court also ruled that, since the content would remain on the internet, the responsibility for that content lay with the content provider, not with the search engine provider.

Importantly, there is a case still pending before the Supreme Federal Court, Brazil’s highest court on constitutional issues, which may prove essential for the application of the RTBF in the Brazilian legal system, as well as fundamental for the balancing between the RTBF and other key constitutional rights and freedoms.

Conclusion

The present research has demonstrated how the RTBF, after its quite recent introduction in Google Spain, has expanded in a rather patchwork manner. Much like the internet itself, the RTBF has seen inconsistent application across jurisdictions. Different legal systems are trying to find a suitable application of the RTBF which also takes other important rights and freedoms into account. It is perhaps a testament to the importance of the issues covered by the RTBF that many countries have felt the need to attempt to strike a balance between the right to privacy and freedom of expression in the ‘age of the internet’.

The first two of the four key variables we identified in the introduction – the identification of the applicant and of the defendant – tend to have been addressed quite clearly by the courts. The third variable, however, is often dealt with without due specification, possibly as a result of a lack of technical insight on the part of the courts. The final, fourth variable (application of any order to users outside the territory) has only been addressed twice: once immediately rejected (Cologne), once implicitly suggested but not as such specified in the ruling (Rotterdam).

The relevance of the present research is that it identifies the key areas on which judges and authorities focus when ruling on issues involving the RTBF. The jurisdictional issues raised by several courts are particularly relevant to disputes involving the internet, and the question of the applicability of a de-listing order to a global vs. a domestic domain is a logical consequence of these jurisdictional concerns. Furthermore, a key issue particularly discussed in the Latin American cases deals with concerns that the RTBF could be used – or abused – to allow powerful people to hide cases of corruption. Another important concern involves the need of Latin American countries to preserve their ‘historical memory’ of past human rights violations.

The RTBF has presented national courts with a host of challenges, chief among them balancing the need for privacy against the importance of freedom of expression in a democratic society. National judges and authorities are increasingly faced with cases that not only deal with complex technological issues, but also force the courts to go beyond the boundaries of national jurisdictions in dealing with an increasingly globalised digital world. The present work has identified how these challenges are being judged and decided in multiple jurisdictions. However, many new issues are likely to continue to arise, and it appears that these changes will continue to outpace legislative work, thus forcing judges and authorities to continue to face new challenges in their attempts at striking an adequate balance between privacy rights and freedom of expression.

References

Scholarly works:

van Calster, G. (2015). Regulating the internet. Prescriptive and jurisdictional boundaries to the EU’s ‘Right to be forgotten’. Retrieved from https://ssrn.com/abstract=2686111

Kodde, C. (2016). Germany's ‘Right to be forgotten’ – between the freedom of expression and the right to informational self-determination. International Review of Law, Computers & Technology, 30(1-2). doi:10.1080/13600869.2015.1125154

Legal decisions:

7th Collegiate Circuit Tribunal of the Auxiliary Centre of the First Region, file No. N/A, August 2016.

Bluvol, Esteban Carlos c / Google Inc. y otros s/ daños y perjuicios, National Civil Appeals Chamber, 5 December 2012.

C.15.0052.F, Supreme Court Belgium, 29 April 2016, available at http://jure.juridat.just.fgov.be/view_decision.html?justel=N-20160429-2 last consulted 5 December 2017.

Case No. 045-2015-JUS/DGPDP, Personal Data Protection General Directorate, Directional Resolution, File No. 012-2015-PTT, December 30, 2015.

Carrozo, Evangelina c/ Yahoo de Argentina SRL y otro s/ daños y perjuicios, National Civil Appeals Chamber, 10 December 2013.

Court of First Instance, Brussels, 25 March 2014, nr. 2013/6156/A.

D. C. V. c/ Yahoo de Argentina SRL y otro s/ Daños y Perjuicios, National Civil Appeals Chamber, 10 August 2010.

Da Cunha, Virginia c/ Yahoo de Argentina S.R.L. y otro s/ daños y perjuicios, Nation’s Supreme Court of Justice, 30 December 2014.

Decision No. 22243-2015, Supreme Court of Justice, 21 January 2016.

Decision T-277/15, Constitutional Court of Colombia, 12 May 2015.

Decision No. Rcl. 18,685, Superior Court of Justice, 5 August 2014.

Decision No. REsp 1.334.097, Superior Court of Justice, 20 October 2013.

Decision No. REsp. 1.335.153, Superior Court of Justice, 20 October 2013.

Decision No. REsp. 1.593.873, Superior Court of Justice, 17 November 2016.

Federal Institute for the Access of Information and Data Protection, file No. PPD. 0094/14, 26 January 2015.

Google Spain SL, Google Inc. v Agencia Española de Protección de Datos (AEPD), Mario Costeja González, Case C-131/12, ECLI:EU:C:2014:317.

Google Inc. v Vidal-Hall & Ors [2015] EWCA Civ 311 (27 March 2015).

Lorenzo, Bárbara c/ Google Inc. s/ daños y perjuicios, Nation’s Supreme Court of Justice, 30 December 2014.

Mosley v Google Inc. & Anor [2015] EWHC 59 (QB) (15 January 2015).

Peña, María Florencia c/ Google s/ ART. 250 C.P.C. Incidente Civil, National First Instance Civil Court No. 72, file No. 35.613/2013.

Rodríguez, María Belén c/ Google Inc. s/ daños y perjuicios, Nation’s Supreme Court of Justice, 28 October 2014.

Vidal-Hall & Ors v Google Inc. [2014] EWHC 13 (QB) (16 January 2014).

X and Y v Google Inc. and Google Germany GmbH, Landgericht Köln, 16 September 2015, 28 O 14/14.

Appendix

Table of RTBF judgements in Europe and Latin America

Each entry below lists, per case: Country – Case/Authority; Parties; Domain; Territoriality of ruling; Recognition of RTBF - Legislation; Does the ruling impact users in other countries?30

Argentina – Esteban Bluvol v. Google (2012), Appeals Court
Parties: Argentinian citizen v. Google Inc. & Google Argentina
Domain: The court did not rule on the removal or de-listing of content, but solely on the liability issue.
Territoriality of ruling: Not applicable, as the court did not order removal or de-listing of content.
Recognition of RTBF - Legislation: No ruling on the RTBF, but subjective liability. No legislation, bills under debate.
Impact on users in other countries: Not applicable.

Argentina – Da Cunha v. Yahoo and Others, Appeals Court & Supreme Court of Justice
Parties: Argentinian citizen v. Google Inc.
Domain: Not specified.
Territoriality of ruling: General order to remove content.
Recognition of RTBF - Legislation: No ruling on the RTBF, but subjective liability. No legislation, bills under debate.
Impact on users in other countries: Not specified.

Argentina – Florencia Peña v. Google (2013), First Instance Court
Parties: Argentinian citizen v. Google Inc.
Domain: Not specified.
Territoriality of ruling: General order to remove content.
Recognition of RTBF - Legislation: No ruling on the RTBF; provisional measure against Google. No legislation, bills under debate.
Impact on users in other countries: Not specified.

Argentina – Carrozo v. Yahoo de Argentina and Others (2013), Appeals Court
Parties: Argentinian citizen v. Google Inc.
Domain: Not specified.
Territoriality of ruling: General order to remove content.
Recognition of RTBF - Legislation: No ruling on the RTBF, but objective liability. No legislation, bills under debate.
Impact on users in other countries: Not specified.

Argentina – Rodríguez, María Belén v. Google (2014), Supreme Court of Justice
Parties: Argentinian citizen v. Google Inc.
Domain: Not specified.
Territoriality of ruling: General order to remove content.
Recognition of RTBF - Legislation: No ruling on the RTBF; subjective liability. No legislation, bills under debate.
Impact on users in other countries: Not specified.

Brazil – Rcl. 18,685, Superior Court of Justice
Parties: Appeal filed by Google Brazil
Domain: Court ruled in favour of Google’s subsidiary; no order to remove content or de-list.
Territoriality of ruling: Court ruled in favour of Google’s subsidiary; no order to remove content or de-list.
Recognition of RTBF - Legislation: No ruling on the RTBF. No legislation, bills under debate.
Impact on users in other countries: Not applicable.

Brazil – REsp 1.334.097, Superior Court of Justice
Parties: Brazilian citizens v. Rede Globo (TV network) (Google not involved)
Domain: Limited to the .com domain of the defendant; not addressed to Google.
Territoriality of ruling: General order to remove content.
Recognition of RTBF - Legislation: Yes, the court ruled on the existence of the RTBF. No legislation, bills under debate.
Impact on users in other countries: No.

Brazil – REsp. 1.335.153, Superior Court of Justice
Parties: Brazilian citizens v. Rede Globo (TV network) (Google not involved)
Domain: Court ruled in favour of the defendant; no removal or de-listing order.
Territoriality of ruling: Court ruled in favour of the defendant; no removal or de-listing order.
Recognition of RTBF - Legislation: Yes, the court ruled on the existence of the RTBF. No legislation, bills under debate.
Impact on users in other countries: No.

Brazil – REsp 1.593.873, Superior Court of Justice
Parties: Recourse by Google Brazil
Domain: Court ruled in favour of the defendant; no removal or de-listing order.
Territoriality of ruling: Court ruled in favour of the defendant; no removal or de-listing order.
Recognition of RTBF - Legislation: No, the court explicitly rejected the applicability of the RTBF to search engines.
Impact on users in other countries: No.

Chile – No. 22243-2015, Supreme Court of Justice
Parties: Chilean citizen v. El Mercurio (newspaper) (Google not involved)
Domain: Limited to the .com domain of the defendant; not addressed to Google.
Territoriality of ruling: General order to remove or block access to content.
Recognition of RTBF - Legislation: Yes, the court ruled on the existence of the RTBF. No legislation, bills under debate.
Impact on users in other countries: No.

Colombia – Decision T-277/15 (Gloria v. Casa Editorial El Tiempo), Constitutional Court
Parties: Colombian citizen v. El Tiempo (newspaper) & Google Colombia
Domain: Limited to the .com domain of the defendant; not addressed to Google.
Territoriality of ruling: Google Colombia excluded from the removal and de-listing order.
Recognition of RTBF - Legislation: Yes, the court ruled on the existence of the RTBF. No legislation, bills under debate.
Impact on users in other countries: No.

Mexico – Carlos Sanchez v. Google Mexico (2015), Data Protection Agency
Parties: Mexican citizen v. Google Mexico
Domain: Not specified.
Territoriality of ruling: General order to remove or block access to content.
Recognition of RTBF - Legislation: Yes, the DPA ruled on the existence of the RTBF. No legislation, bills under debate.
Impact on users in other countries: Not specified.

Peru – File No. 012-2015-PTT, Data Protection Agency
Parties: Peruvian citizen v. Google Inc. & Google Peru
Domain: Not specified.
Territoriality of ruling: General order to remove or block access to content.
Recognition of RTBF - Legislation: Yes, the DPA ruled on the existence of the RTBF. No legislation exists, no bills under debate.
Impact on users in other countries: Not specified.

Belgium – Supreme Court, Nr. C.15.0052.F, 29 April 2016
Parties: Belgian individual v. Le Soir (newspaper) (Google not involved)
Domain: Not relevant, only a .be domain.
Territoriality of ruling: Not specified.
Recognition of RTBF - Legislation: Yes, specifically mentioning this right.
Impact on users in other countries: No.

Belgium – Court of First Instance, Brussels, nr. 2013/6156/A, 25 March 2014
Parties: Belgian individual v. a newspaper (Google not involved)
Domain: Not relevant, only a .be domain.
Territoriality of ruling: Not specified.
Recognition of RTBF - Legislation: Yes, specifically mentioning this right.
Impact on users in other countries: No.

The Netherlands – Court of Appeal, The Hague, 26 July 2016, ECLI:NL:GHDHA:2016:2161
Parties: Dutch individual v. Google NL BV, Google Inc.
Domain: Blogspot.nl; no request for, or broadening of, domain names.
Territoriality of ruling: Excluding Google NL from the claim.
Recognition of RTBF - Legislation: Request for removal denied by court; national data protection law applied.
Impact on users in other countries: No.

The Netherlands – Court of First Instance, Rotterdam, 29 March 2016, ECLI:NL:RBROT:2016:2395
Parties: Dutch individual v. Google NL BV, Google Inc.
Domain: .nl and .com URLs; the court specifically includes google.com, using the IP-address argument.
Territoriality of ruling: Excluding Google NL from the claim; ordering Google Inc. to remove the URLs.
Recognition of RTBF - Legislation: Claim to delist granted; national data protection law applied.
Impact on users in other countries: Not specified.

The Netherlands – Interim judge, Amsterdam, 29 February 2016, ECLI:NL:RBAMS:2016:987
Parties: Dutch individual v. Google Inc.
Domain: .com domain, as reviews were posted on google.com/maps.
Territoriality of ruling: Ordering Google Inc. to remove reviews from Google Maps, Google Search and Google+.
Recognition of RTBF - Legislation: Removal of false reviews ordered.
Impact on users in other countries: Not specified.

The Netherlands – Court of Appeal, Den Bosch, 6 October 2015, ECLI:NL:GHSHE:2015:3904
Parties: Dutch individual v. Google Inc.
Domain: .nl blog (Blogger, created under a .com extension and then copied one-on-one); the appellant during the appeal requested that other domain names be deleted as well.
Territoriality of ruling: Ordering Google Inc. to make the .nl blog inaccessible, but not the same blog under other extensions, as that was not as such ordered in first instance (‘remove from the internet’).
Recognition of RTBF - Legislation: Yes; Google has to remove a blog with a .nl extension.
Impact on users in other countries: No.

The Netherlands – Court of Appeal, Amsterdam, 31 March 2015, ECLI:NL:GHAMS:2015:1123
Parties: Dutch individual v. Google NL BV, Google Inc.
Domain: Delisting of both .nl and .com links from Google Search requested; no further specification of domains.
Territoriality of ruling: Not specified.
Recognition of RTBF - Legislation: Yes; however, the claimant was not successful. EU Directive used.
Impact on users in other countries: No.

United Kingdom – Court of Appeal, Vidal-Hall [2015] EWCA Civ 311, 27 March 2015
Parties: British individual v. Google Inc.
Domain: Not specified – case deals with establishment of jurisdiction.
Territoriality of ruling: Application of the procedure to serve a defendant not domiciled in the territory, before proceedings can take place.
Recognition of RTBF - Legislation: Claimant successful under the tort of misuse of private information.
Impact on users in other countries: Not specified.

United Kingdom – High Court, Mosley v Google, EWHC 59 (QB), 15 January 2015
Parties: British individual v. Google Inc. (claim against Google UK discontinued)
Domain: Not specified.
Territoriality of ruling: Not specified.
Recognition of RTBF - Legislation: Only held that the claimant has a viable claim.
Impact on users in other countries: Not specified.

France – DPA CNIL, 10 March 2015
Parties: CNIL and Google Inc. involved
Domain: CNIL specifically wants to broaden to all domain names.
Territoriality of ruling: CNIL orders Google to delist from all extensions.
Recognition of RTBF - Legislation: Pending at Conseil d’Etat – case CNIL v. Google Inc.
Impact on users in other countries: Yes.

Germany – Administrative Court Hamburg, 3 March 2015, 15 E 4482/15
Parties: Facebook Ireland Ltd. v. Hamburg
Domain: Not specified.
Territoriality of ruling: Facebook Germany not responsible for (in)action of Facebook Ireland.
Recognition of RTBF - Legislation: Not granted; German data protection law not applicable.
Impact on users in other countries: Not specified.

Germany – Court of First Instance, Cologne, 16 September 2015, 28 O 14/14
Parties: Google DE, Google Inc.
Domain: Request to also delist on the .com domain, next to the .de domain.
Territoriality of ruling: No extension to the .com domain granted.
Recognition of RTBF - Legislation: Request not granted.
Impact on users in other countries: No.

Poland – GIODO Data Protection Agency, 16 February 2016, DOLiS/DEC – 50
Parties: Facebook Poland
Domain: Not specified.
Territoriality of ruling: Not specified; Facebook Poland held responsible even though it is merely a subsidiary.
Recognition of RTBF - Legislation: Yes, ordering deletion of certain information from Facebook.
Impact on users in other countries: Not specified.

Footnotes

1. Case C-131/12 Google Spain SL, Google Inc. v Agencia Española de Protección de Datos (AEPD), Mario Costeja González, ECLI:EU:C:2014:317.

2. See in particular G. van Calster, ‘Regulating the internet. Prescriptive and jurisdictional boundaries to the EU’s “Right to be forgotten”’, https://ssrn.com/abstract=2686111, last consulted 5 December 2017.

3. See ibid.

4. Supreme Court Belgium, 29 April 2016, C.15.0052.F available at http://jure.juridat.just.fgov.be/view_decision.html?justel=N-20160429-2 last consulted 5 December 2017.

5. Court of First Instance, Brussels, 25 March 2014, nr. 2013/6156/A.

6. Vidal-Hall & Ors v Google Inc. [2014] EWHC 13 (QB) (16 January 2014); Google Inc. v Vidal-Hall & Ors [2015] EWCA Civ 311 (27 March 2015).

7. Mosley v Google Inc. & Anor [2015] EWHC 59 (QB) (15 January 2015).

8. https://www.cnil.fr/en/right-be-delisted-cnil-restricted-committee-imposes-eu100000-fine-google. An unofficial English translation of the CNIL’s decision can be found here: https://www.cnil.fr/sites/default/files/atoms/files/d2016-054_penalty_google.pdf. The CNIL also has made an amusing graphic chart of why it thinks the territorial reach of a request to delist should be global: https://www.cnil.fr/fr/infographie-portee-du-dereferencement-de-mplaignant-applique-par-google.

9. C. Kodde, Germany's ‘Right to be forgotten’ – between the freedom of expression and the right to informational self-determination, International Review of Law, Computers & Technology Vol. 30 , Iss. 1-2, 2016.

10. http://justiz.hamburg.de/contentblob/5359282/data/15e4482-15.pdf.

11. Landgericht Köln, 16 September 2015, 28 O 14/14, X and Y v Google Inc. and Google Germany GmbH, in which the court emphasised that Google.com is the search engine maintained by Google for the ‘region of the United States of America’ (p.16 of the judgment – our translation).

12. For example, http://www.giodo.gov.pl/280/id_art/9009/j/pl/ or http://www.giodo.gov.pl/280/id_art/9010/j/pl/.

13. This decision has not been published on the website of GIODO, but it is discussed on the internet and a ‘leaked’ version of it is published here: https://niebezpiecznik.pl/wp-content/uploads/2016/02/facebook-dec_5016.pdf

14. http://orzeczenia.nsa.gov.pl/doc/4214FE3165

15. National Civil Appeals Chamber, “Bluvol, Esteban Carlos c / Google Inc. y otros s/ daños y perjuicios”, 5 December 2012.

16. National Civil Appeals Chamber, “D. C. V. c/ Yahoo de Argentina SRL y otro s/ Daños y Perjuicios”, 10 August 2010.

17. National First Instance Civil Court No. 72, “Peña María Florencia c/ Google s/ ART. 250 C.P.C. Incidente Civil”, file No. 35.613/2013

18. National Civil Appeals Chamber, “Carrozo, Evangelina c/ Yahoo de Argentina SRL y otro s/ daños y perjuicios”, 10 December 2013.

19. Nation’s Supreme Court of Justice, “Rodríguez, María Belén c/ Google Inc. s/ daños y perjuicios”, 28 October 2014.

20. Nation’s Supreme Court of Justice, “Da Cunha, Virginia c/ Yahoo de Argentina S.R.L. Y otro s/ daños y perjuicios”, 30 December 2014. Nation’s Supreme Court of Justice, “Lorenzo, Bárbara cl Google Inc. si daños y perjuicios”, 30 December 2014.

21. Supreme Court of Justice, decision No. 22243-2015, 21 January 2016.

22. Federal Institute for Access to Information and Data Protection, file No. PPD. 0094/14, 26 January 2015.

23. 7th Collegial Circuit Tribunal of the Auxiliary Centre of the First Region, file No. N/A, August 2016.

24. Personal Data Protection General Directorate, Directional Resolution, case No. 045-2015-JUS/DGPDP, File No. 012-2015-PTT, 30 December 2015.

25. Supreme Court of Colombia, decision T-277/15, 12 May 2015.

26. Superior Court of Justice, decision No. Rcl. 18,685, 5 August 2014.

27. Superior Court of Justice, decision No. REsp 1.334.097, 20 October 2013.

28. Superior Court of Justice, decision No. REsp. 1.335.153, 20 October 2013.

29. Superior Court of Justice, decision No. REsp. 1.593.873, 17 November 2016.

30. Does the ruling ask the defendant to delist or remove content such that it impacts the ability of users or citizens from other countries, outside the legal jurisdiction of the court, to access information?

Networked publics: multi-disciplinary perspectives on big policy issues


Papers in this Special Issue

Editorial: Networked publics: multi-disciplinary perspectives on big policy issues
William H. Dutton, Michigan State University

Political topic-communities and their framing practices in the Dutch Twittersphere
Maranke Wieringa, Daniela van Geenen, Mirko Tobias Schäfer, & Ludo Gorzeman

Big crisis data: generality-singularity tensions
Karolin Eva Kappler

Cryptographic imaginaries and the networked public
Sarah Myers West

Not just one, but many ‘Rights to be Forgotten’
Geert Van Calster, Alejandro Gonzalez Arreaza, & Elsemiek Apers

What kind of cyber security? Theorising cyber security and mapping approaches
Laura Fichtner

Algorithmic governance and the need for consumer empowerment in data-driven markets
Stefan Larsson

Standard form contracts and a smart contract future
Kristin B. Cornelius

Introduction: networked publics shaped by changing policy and regulation

This special issue of Internet Policy Review is the first of a series organised in collaboration with the Association of Internet Researchers (AoIR), an academic association centred on the ‘advancement of the cross-disciplinary field of Internet studies’1. AoIR was inspired by the internet as a major technological innovation of the twenty-first century, holding its first conference in 2000 on the state of what was then a fledgling field focused on a new research topic. The first conference gathered academics together with those involved with the internet from technical, corporate and governmental communities, as well as many early internet enthusiasts from all sectors of society. Given this diversity within and beyond academia, early debate centred on whether and how internet studies should be viewed as a field. Some consensus emerged through the conferences that internet studies would be an interdisciplinary field (Wellman, 2004). No single discipline could address the internet and the many issues associated with it as objects of study (Consalvo and Ess, 2011).

Since those early days, its yearly conferences have focused on the use and impacts of continuous innovations in the internet, social media, mobile internet, the Internet of Things (IoT), and related information and communication technologies. While research on internet policy and governance has been developing since the technology’s inception, it was only in 2016 that the annual AoIR conference was organised around a theme of policy and governance – ‘Internet Rules!’. With the continuing emergence of major issues of policy, regulation and governance of the internet and related ICTs, most recently around the privacy and surveillance issues of big data, such issues have drawn increasing attention from the field – a shift reflected in their rising prominence on the agendas of AoIR conferences.

This trend is illustrated by the 2017 AoIR conference. Its focus on networked publics is not explicitly policy-oriented. The concept of networked public is broad and useful in capturing the idea that networking technologies like the internet and social media can create virtual spaces analogous to physical spaces. These permit communities to form around such activities as play, work, or political and social movements. For example, danah boyd (2008) used the term to discuss her findings on the ways American teenagers used networking for a variety of social activities. I find the term compatible with my discussion of how individuals have used networks to empower themselves vis-à-vis institutions to become a fifth estate, comparable to the fourth estate shaped by the role of an independent press of an earlier era (Dutton, 2009). However, whatever networked public is of interest, from teenagers finding a comfortable space for socialising to networked individuals feeling free to search for information and network with others to hold powerful institutions more accountable, the vitality - if not the very existence - of these networks will depend on their policy and regulatory contexts. Therefore, it is not surprising that a conference without an explicit policy focus has yielded a strong set of policy-oriented contributions. The future of networked publics depends on the ways in which policy and regulation facilitate or constrain individuals from accessing and producing information and connecting with other individuals in meaningful ways.

From the changing composition of contributions to AoIR conferences over the years, it became increasingly apparent to the editors of Internet Policy Review as well as the evolving leadership of AoIR that the annual conference would be a growing source of developing scholarship on emerging issues of policy and regulation surrounding the internet. In fact, changes in the composition of AoIR conferences reflect aspects of this shift and led to more interaction between the journal and AoIR. It was in that spirit that I was asked to be a guest editor of this special issue arising from papers presented at the 2017 AoIR conference in Tartu, Estonia, organised around the theme of networked publics.

I, along with the editors of Internet Policy Review, was encouraged by the response to our call for papers to be considered for this special issue. We are pleased to provide this special issue, which is composed of the best policy-related papers presented at AoIR 2017.

Remarkably, for what has been defined as an interdisciplinary field, the papers in this special issue are more disciplinary than might have been anticipated in those early years of the field. It is even more remarkable in that policy studies are also viewed as inherently interdisciplinary. For example, many top policy studies programmes describe themselves as ‘interdisciplinary’, such as the Moritz College of Law’s Center for Interdisciplinary Law and Policy Studies. For this reason, this special issue refers to ‘multidisciplinary’ rather than ‘interdisciplinary’ perspectives, as each paper arguably draws primarily from a core discipline, such as sociology, science and technology studies (STS), or law. However, it will be apparent from contributions to this special issue that disciplinary perspectives on major issues surrounding the internet and policy can offer new insights that constructively stimulate and inform debate over policy and regulation. The contributions to this issue also raise the question of whether the field as a whole is taking a more disciplinary turn.

The rise of new policy, regulation and governance issues

Before describing the contributions to this issue, it is useful to acknowledge and explain the relatively late emergence of policy issues both within the field and with respect to the larger public’s understanding of the internet. The shift of attention to the policy issues of the internet and related information and communication technologies (ICTs) is an inescapable observation based on mass media framing of internet-related stories – but it is also one of the most dramatic developments around the internet since its first decade of worldwide diffusion.

Early internet research was focused on issues driven primarily by technical innovations (Wellman, 2004; Dutton, 2013). Internet policy research initially arose in this field largely around limitations of access to the internet and related technologies, such as issues of building internet infrastructures (Kahin and Wilson, 1997), reducing digital divides and skill gaps (Norris, 2001; Hargittai, 2002) and responding to global internet filtering regimes (Deibert et al., 2008, 2010). However, over recent decades, there has arguably been a shift to a greater focus on a wider array of policy issues (Mueller, 2002; Cranor and Wildman, 2003; DeNardis, 2009, 2013; Braman, 2009; Dutton, 2015). This shift aligns with the internet moving from a promising innovation at the turn of the century to an essential part of the lives of most people in the world’s developed economies. Within the span of two decades, this promising innovation had connected over half of the world’s population, reaching over 4 billion users (54% of the world) by 2018 (World Internet Stats, 2018).

Beyond the growing centrality of the internet, there has also been a shift in public views of the internet. Instead of being seen as a technology that fosters democracy, the internet and related technologies are increasingly identified as posing threats to democratic structures and participation in politics and society (Rainie and Wellman, 2012; Howard, 2015). In this vein, the internet is increasingly portrayed as a privacy-invading surveillance technology, fueled by advances in social media, big data, the Internet of Things, and artificial intelligence (Howard, 2015). Far from the ‘technology of freedom’ of yesteryear (de Sola Pool, 1983), the internet and related social media and big data are feared to be eroding privacy and putting democracy at risk – as politicians, governments, businesses and industries succumb to the potential for these new tools to help them observe and manipulate public opinion and behaviour (Morozov, 2011; Greenwald, 2014; Keen, 2015; Sunstein, 2017). More people want government and internet service providers to ‘do something’!

New risks tied to the internet and social media have become popularised, including:

  • search algorithms trapping internet users in ‘filter bubbles’ (Pariser, 2011),
  • social media enabling internet users to cocoon themselves in ‘echo chambers’ that confirm their social and political viewpoints (Sunstein, 2017); and
  • advertising incentives combining with the power of social media to promote the spread of disinformation, such as so-called unprofessional, junk, or fake news (Keen, 2007).

These threats to privacy and the quality and reliability of information have found widespread acceptance by the educated public, mass media, and politicians and regulators alike, illustrated by the establishment of inquiries and study groups on such issues as privacy (Mendel et al., 2012; Hardie et al., 2014) and the disinformation fostered by junk or fake news, examined by the UK’s Digital, Culture, Media and Sport Committee (2017) and a high level study group for the European Commission (2018). Only recently has systematic empirical research been undertaken to address the validity of some of these expectations, as illustrated by the contributions to this special issue.

Of course, views of the internet as a technology of freedom or control are based on technologically deterministic assumptions that are not new and that have been challenged by empirical research over the years (Beniger, 1986). Well over a decade ago, I noted that:

Growing concerns over the lack of real information, the prevalence of misinformation, and increasing problems with information overload should ... not be viewed as aberrations within an information society. These failures are actually caused by inadequate regulation of access to information - the incorrect treatment of all information as being equal and benign. (Dutton, 1999, p. 11)

Utopian versus dystopian perspectives on the role of the internet and communication technologies have been a central issue for decades (Williams, 1982). Kenneth Laudon (1977) wrote about the potential for new interactive technologies to be used to manage democracy, manipulating public opinion rather than responding to democratic forces, long before the internet was taken seriously. Laudon was focused on interactive cable and telecommunications.

However, dystopian perspectives on the internet as a technology of control and manipulation rather than freedom and collective intelligence have gained increased currency in the aftermath of major events. These include the unraveling of what was thought to be an Arab Spring fostered by social media (Morozov, 2011); the disclosures by the whistleblower Edward Snowden of classified National Security Agency (NSA) documents that provided evidence of mass surveillance (Greenwald, 2014); the rise of the Internet of Things that will put tens of billions of devices online (Howard, 2015); and the Facebook fiasco over Cambridge Analytica, in which personal data of Facebook users were obtained by a political consulting firm via an academic researcher (Dutton, 2018; Schotz, 2018).

Equally significant developments contributing to this shift of perspective have been the increasing concentration of the internet industry, such as in the so-called FANG firms of Facebook, Amazon, Netflix, and Google. As I was writing this introduction, I received an online notification from a news feed that claimed to reveal: “Why Amazon is obsessed with getting inside of our homes”. Worry over the consequences of concentration within the internet industry has been one motivation behind calls for new policy initiatives around such aims as increasing competition, privacy and data protection, and efforts to prevent the blocking of legitimate content, such as through network neutrality initiatives (Wu, 2003).

It is against this backdrop of rising concerns over threats to the very values that once almost personified the internet as a technology of freedom that all the articles within this special issue can be seen. As a group, they address three big policy and regulatory issue areas that have risen around the internet. Simply put, these are research papers on the role of the internet in reshaping:

  1. access to (dis)information in ways that could clarify or distort our views of local and worldwide developments - from the news to environmental crises;
  2. privacy, data protection, and the security of the internet - each of which are threatened in new ways by new technologies, such as big data, computational analytics, and increasingly essential services being provided online; and
  3. legal and contractual relationships between users and providers - such as through new forms of notice and consent to the use of personal information.

These are only three of many more areas of key policy issues. Concerns over freedom of expression, digital divides, sociality, and many more remain equally important. But these three capture major areas of concern and arise from the actual composition of the best policy-related papers at AoIR 2017. The following sections provide a broad outline of the articles in this issue grouped around these three areas. This will be followed by a short overview of several cross-cutting themes of this special issue.

Reshaping access to information: who knows what?

All major innovations in communication technologies have a potential to reshape access to information – what we know, who we know, what services we obtain, and what know-how we require (McLuhan, 1964; Dutton, 1999). Mark Graham (2014, p. 100) has called this ‘augmented reality’ in that the internet not only reshapes what we know, but also what we ‘are able to know and do’. This has been viewed positively with respect to the internet creating the potential for more open and global access to information, providing access to a heretofore unimaginable range of information from anywhere at any time (Dutton, 1999). Therefore, most concern in the early period of internet diffusion was focused on efforts to block access to information online, such as through internet filtering (Deibert et al., 2010).

However, it has long been argued that just as new media open up new channels of access, they can also exacerbate existing inequalities in the production and consumption of information around the world. This led the MacBride Commission to call for a new world information order (ICCP, 1980), and contemporary internet scholars to call attention to continuing inequalities in access to the production and consumption of information in a networked world (Castells, 1996; Graham, 2014).

As noted above, in the early years of the internet, the focus was on access to the technologies and skills to be online in a networked world, giving rise to issues over digital divides (Dutton, 1999; Norris, 2001). As increasing proportions of the world have gained access to the internet and social media, the focus has shifted to the quality and bias of information served up and consumed on these networks.

One of the most compelling arguments has been that the rise of search, and the algorithms that underpin the personalisation of its results, could be limiting access to information by diminishing the diversity of information, such as by creating a ‘filter bubble’ in which ‘what you’ve clicked on in the past determines what you see next …’ (Pariser, 2011, p. 16). A similar but complementary thesis is that social media not only personalise information, but also enable individuals to more easily and almost unwittingly cocoon themselves in what Cass Sunstein (2017, p. 6) coined as ‘echo chambers’ – built by ‘people’s growing power to filter what they see’, which adds to the power of providers to filter ‘based on what they know about us’. Many – from scientists to casual news readers – wish to confirm their beliefs through what they read and hear. This ‘confirmatory bias’ is greatly enabled in principle by the new social media at our fingertips (Sunstein, 2017). Therefore, rather than simply opening up new information vistas, the new media could narrow and distort our views of reality.
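The mechanism Pariser describes is easy to state in computational terms. The toy sketch below – a hypothetical illustration, not any platform’s actual ranking code – scores candidate items by their similarity to a user’s click history, so that what was clicked in the past increasingly determines what is shown next.

```python
from collections import Counter

def rank_by_click_history(items, click_history):
    """Toy personalised ranking: score each candidate by how often its
    topic appears in the user's click history. Items resembling past
    clicks rise to the top - the feedback loop behind a 'filter bubble'."""
    topic_counts = Counter(item["topic"] for item in click_history)
    return sorted(items, key=lambda item: topic_counts[item["topic"]], reverse=True)

history = [{"topic": "sports"}, {"topic": "sports"}, {"topic": "politics"}]
candidates = [{"id": 1, "topic": "science"}, {"id": 2, "topic": "sports"},
              {"id": 3, "topic": "politics"}]
print(rank_by_click_history(candidates, history))  # sports first, science last
```

Run repeatedly, with each shown-and-clicked item appended to the history, the loop narrows the topics a user ever sees – the dynamic at issue in both the filter bubble and echo chamber theses.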

In many fundamental respects, this is not a new concern. A key issue with the mass media has long been the quality of news and the degree to which propaganda, or even documentary and entertainment media coverage, might distort our views of the real world and key events, ranging from the reporting of car accidents in local news to the reporting of war correspondents in remote areas. For instance, continuing debates centre on the degree to which mass media coverage might well ‘cultivate’ misperceptions of the real world (Gerbner et al., 1986), such as portraying the world as more violent than it in fact is, since coverage tends to focus on stories that attract readers – the rule of thumb in many newsrooms that ‘if it bleeds, it leads’. But as the internet has become more central to the consumption of news, new concerns have been raised, such as around the disinformation sown by junk or fake news, and the biases introduced by the filter bubbles and echo chambers described above.

The first article in this issue addresses concerns over filter bubbles and echo chambers by focusing on what the authors call ideological ‘topic-communities’ forming in the Dutch Twittersphere that are focused on politics. To what degree are they diverse, and can the levels of homophily observed on Twitter be explained by either the notion of a filter bubble or an echo chamber? Maranke Wieringa, Daniela van Geenen, Mirko Tobias Schäfer, and Ludo Gorzeman’s article, ‘Political topic-communities and their framing practices in the Dutch Twittersphere’, questions the explanatory value of a filter bubble as overly deterministic in light of their findings, but they lend some support to the significance of an echo chamber among one of their observed ideological communities. Their research is focused on two weeks of normal politics – the research was not conducted during a major campaign or election – and draws on a creative and rigorous use of multiple methods to provide a strong case for their findings. Nevertheless, their work raises further questions: Are their findings a reflection of Twitter users seeking to convey, rather than consume, partisan or ideological political perspectives? Are they retweeting and framing media coverage to influence others, rather than being naïve, cocooned readers, trapped in an echo chamber?

The next article by Karolin Eva Kappler, entitled ‘Big crisis data: generality-singularity tensions’, is far removed from discussions of filter bubbles and echo chambers in political discourse. Nevertheless, Kappler forces us to consider how the use of big data in the identification and monitoring of emergencies, disasters, and crises is changing the way we see these real-world events, and even whether they can sustain attention when the crisis has passed. For example, when social scientists collect data through any means, whether a survey or by direct observation, their method of observation shapes what they can see as well as what might be less visible through their particular methodological lens. Kappler explores the potential of a big data bias in perception, drawing on sociological perspectives to critically compare three platforms designed to capture big data about crisis events. She identifies a variety of implications common and distinct to these different platforms’ approaches to capturing crisis data, such as the idea that they make each crisis unique – a singular event – rather than a more general crisis or just another emergency. How does what she calls the ‘platformization’ of emergencies shape what we know about them? This article is refreshing in the way it moves away from the hype about big data capturing reality to critically assessing what realities these platforms see, observe, valorise, produce, and appropriate. They are, according to Kappler, all about ‘doing singularity’ – making the event a unique rather than general phenomenon.

Competing perspectives on privacy and security

The next set of three articles provides different disciplinary perspectives on the issues of privacy and security. The first, by Sarah Myers West, entitled ‘Cryptographic imaginaries and the networked public’, provides a fascinating historical and comparative perspective on what she calls ‘cryptographic imaginaries’ – how people think about encryption, whether through cyphers (that transpose letters of an alphabet) or codes (that replace words), in different social, cultural, and political contexts. Specifically, she looks at encryption in three different cultures: the occult, affairs of state (national security and secrecy), and democratic systems, where it provides a means to enable private communication essential to some movements by avoiding surveillance and potential social or political sanctions. Anchored in an STS approach, this comparison illustrates how similar technologies take on quite different meanings and roles in different cultural settings. Such insights support policy-making in this area by demonstrating how the technologies of encryption need to be understood not only in a technical sense, and not only cross-nationally, but also in the more specific social, cultural, and political contexts in which they are used. Technologies do not determine universal solutions, as the role and impact of encryption, for example, are also shaped by the socio-cultural contexts of its use.
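The cypher/code distinction West draws can be made concrete with a minimal sketch. The two functions below are generic textbook illustrations (not drawn from her article): the first rearranges the letters themselves, the second replaces whole words using a shared codebook.

```python
def transposition_cipher(text: str, key: int = 3) -> str:
    """Cypher: operates on letters. A simple columnar transposition that
    reads the text off in `key` interleaved columns."""
    return "".join(text[i::key] for i in range(key))

def codebook_encode(message: str, codebook: dict) -> str:
    """Code: operates on words, replacing each according to the codebook."""
    return " ".join(codebook.get(word, word) for word in message.split())

print(transposition_cipher("attack at dawn"))  # same letters, reordered
print(codebook_encode("attack at dawn",
                      {"attack": "PICNIC", "dawn": "TEATIME"}))  # words swapped
```

The technical triviality of both operations underlines West’s point: what makes encryption occult practice, statecraft, or a democratic safeguard lies in the social context of its use, not in the mechanics.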

The next article, by Geert van Calster, Alejandro Gonzalez Arreaza, and Elsemiek Apers, entitled ‘Not just one, but many ‘Rights to be Forgotten’’, is based on a comparative analysis of national law and policy anchored in what has become known as the ‘right to be forgotten’ (Mayer-Schönberger, 2009). While general support for such a right emerged in Europe initially through the courts and later through the European Commission, initiatives to legally define and implement this right have diffused widely across the world. This article conducts a comparative survey of over two dozen cases of concrete legal implementations of this right to be forgotten. The research team finds far more case law variation, such as in the territory over which the right would be enforced, than commentary on this universal right would lead us to expect. The article demonstrates the value of close and comparative legal analysis of how general legal principles are implemented in case law across different national jurisdictions. Their study is reminiscent of early American research on implementation, which tracked how a policy spawned in Washington DC changed dramatically by the time it was implemented in local communities (Pressman and Wildavsky, 1973). One clear implication of their findings is the degree to which even widespread acceptance of a general legal principle can still lead to cross-national differences. As various evolving principles of policy and regulation for the digital age move into national courts and legislatures, will the resulting patchwork of national case law be another force underpinning an increasing fragmentation of a global, open internet, one that frustrates efforts at harmonisation?

Closely aligned with the right to privacy is an associated right to security. Computer scientists have long approached this issue in the information age through a focus on cyber security, defined to include the ‘technologies, processes, and policies that help to prevent and/or reduce the negative impact of events in cyberspace that can happen as the result of deliberate actions against information technology by a hostile or malevolent actor’ (National Research Council, 2014, p. 2). If privacy is in part defined by unauthorised access to personal information, then a lack of cyber security, such as the inability to prevent unauthorised access to internet devices or infrastructures, is one critical route to infringing privacy. Take, for instance, the US government’s efforts to unlock a smartphone to gain access to personal information in an investigation of terrorism (Benner and Lichtblau, 2016).

The next article in this issue moves the discussion of cyber security from a general aim to a more concrete set of goals in more specific domains. By focusing on concrete domains or institutional contexts of cyber security, it is clear that cyber security takes on somewhat different meanings across each domain. Laura Fichtner’s article, entitled ‘What kind of cyber security? Theorising cyber security and mapping approaches’, provides a critical, social scientific perspective on the concept of security and distinguishes between four domains of cyber security, largely defined by the major values and purposes they prioritise in their particular contexts. These are: 1) data protection, such as protecting data files from unauthorised access; 2) safeguarding financial interests, such as preventing credit card fraud; 3) protecting public and political infrastructures, like securing electronic voting machines; and 4) information and communication flows, as in failing to prevent the exposure of diplomatic cables of the US State Department by WikiLeaks (Leigh and Harding, 2011). Anchored in an STS approach to her study and a focus on computer ethics, Fichtner builds a strong case that each of these arenas of cyber security involves not only different priorities, but also different ecologies of actors and prototypical responses. For example, compare the tolerance of the actors involved in credit card fraud (banks), where some losses are expected, to those guarding against voting fraud (governments), where electronic voting is not allowed in most jurisdictions for fear of undetectable fraudulent voting (Jones and Simons, 2012). Here again, a closer look at the implementation of a global concept illuminates differences across domains that are important to address in policy and practice.

Social and legal insights on issues of consent

The final set of articles in this special issue addresses one of the most concrete yet intractable issues of consumer protection in the digital age: how to notify internet users and obtain their informed consent regarding the ways the personal and trace data they create can be used. This principle of a notice and consent process is simple to understand, but almost impossible to implement in ways that satisfy such important and obvious values as informed consent. I have witnessed many sessions at privacy and security conferences and panels devote disproportionate amounts of time to critiquing the problems with contemporary approaches to notice and consent. Most notice and consent forms are long, technical, and not read. There, agreement stops, as it has been more difficult to provide a clear and compelling alternative.

The first article in this section, by Stefan Larsson, is entitled ‘Algorithmic governance and the need for consumer empowerment in data-driven markets’. Larsson provides an insightful critique of contemporary policy and practice on notice and consent that brings this discussion into the big data age of consumer profiling. He highlights the lack of transparency in user agreements, which are exceedingly complex, and the need for policy to strengthen consumer protection in this area. In the end, his analysis leads him to question the ability of internet users to ever be able to protect themselves in the age of big data analytics. He then makes a case for the necessity of structural reform that moves responsibility from internet users to consumer protection authorities. In many respects, this is a more specific example of the case for data protection authorities in other areas. However, his article should stimulate debate on alternative remedies. It also raises questions about the need for all users to understand all aspects of such user agreements. If only a few users discover a problem with a notice and consent process, then their objections can become a means for holding providers more accountable to users in general. Also, will consumer protection authorities themselves be adequately resourced to hold global internet service providers to account? Will consumer protection authorities have the staff and skills to understand how data are used by a complex ecology of actors in ways that truly protect users?

The final article is by Kristin B. Cornelius, entitled ‘Standard form contracts and a smart contract future’. Her legal perspective on contract law and practice adds an extremely useful background to the debate over how to regulate notice and consent, terms of service and other online contracts. Her historical points remind readers that standard form contracts (SFCs) are not new. They have had a very positive role in making some legal issues manageable by the lay public and consumers, a role that expert systems could augment (Susskind, 2008). However, her review argues that these SFCs have been too slow to adapt to the digital context, such as in being too anchored to legacy paper-based forms. Moreover, she argues that the shift in medium has implications for the procedural process, which can pit the needs of consumers against the ideologies of business and industry. This need not be the case. She argues that smart contracts can be used to actually enhance the freedom of individuals to complete transactions online. In such ways, Cornelius provides insights about smart contracting in the digital context, such as in permitting more decentralised control, which might provide new approaches to such intractable issues as notice and consent.
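One way to picture what Cornelius is pointing at: a ‘smart’ standard form contract represents its terms as structured, machine-readable data rather than a wall of legacy text, so that either party can verify the terms independently. The sketch below is a speculative toy under my own assumptions – a content hash for tamper evidence – not a design proposed in her article.

```python
import hashlib
import json

def make_contract(terms: dict) -> dict:
    """Store contract terms as structured data plus a fingerprint, so both
    parties can later verify that the terms were not altered."""
    digest = hashlib.sha256(json.dumps(terms, sort_keys=True).encode()).hexdigest()
    return {"terms": terms, "digest": digest}

def verify(contract: dict) -> bool:
    recomputed = hashlib.sha256(
        json.dumps(contract["terms"], sort_keys=True).encode()).hexdigest()
    return recomputed == contract["digest"]

contract = make_contract({"data_retention_days": 30, "third_party_sharing": False})
assert verify(contract)                        # unmodified terms verify
contract["terms"]["third_party_sharing"] = True
assert not verify(contract)                    # any change is detectable
```

Machine-readable terms of this kind could in principle be checked automatically against a user’s stated preferences – one route toward the decentralised control over notice and consent that the article gestures at.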

Points of summary and conclusion

This brief editorial has sought to put the contributions to this special issue in a broader context and illuminate some of the relationships between the articles. While I have noted basic points of each contribution, I have avoided detailed summaries of their evidence and arguments. I therefore encourage you to read these contributions on their own terms, as each is succinct and useful in advancing the study of policy and regulation in the field of internet studies. That said, I found several themes relevant across these contributions, which I will note as personal observations. They remain relatively anecdotal, as they are tied simply to this sample of articles from a single, albeit important, conference for the field of internet studies. Hopefully they will generate questions about whether they are more generally applicable.

Disciplinary perspectives

First, it is arguable that each article is anchored in more or less of a disciplinary perspective, such as sociology, science and technology studies (STS), computer ethics or law. It is remarkable in that internet studies and policy studies are purportedly more ‘interdisciplinary’ fields, and yet these contributions are more grounded in disciplinary than interdisciplinary perspectives. And, from my point of view, each article makes an original contribution to internet and policy studies by virtue of bringing a disciplinary approach to bear on its topic. Rather than an interdisciplinary treatment of a topic, which might surface commonalities across disciplinary divides, these contributions tend to foreground the details and differences that might be overlooked in more general treatments. For example, we see comparisons across platforms for tracking big crisis data (Kappler, this issue), multiple implementations of the right to be forgotten (van Calster et al., this issue), and four distinct approaches to cyber security (Fichtner, this issue).

Another consequence of these disciplinary approaches might have been the avoidance of a degree of advocacy that invades and undermines many policy-oriented pieces. The objective of each article is more tied to theorising or refining their theoretical or empirical approach than advocating a particular policy or practice. In many ways, this leads to analyses that can be useful to the design of policy and practice by those from multiple positions on any given issue. For example, whether you support or oppose initiatives on the right to be forgotten, it is extremely useful to know that this right differs across legal jurisdictions in ways not well recognised in general debates.

A greenfield for historical, legal, social and cultural theorising

A greenfield in urban planning and development is ideal in that the developer does not need to grapple with all the constraints imposed by an existing built environment. In some respects, internet policy studies are theoretical greenfields for which theoretical ideas from many disciplines might prove valuable to explore. The contributions to this special issue, for example, underscore the degree to which many theoretical approaches from cultural studies and the social sciences could be valuable to relatively under-theorised areas of internet policy studies. Work in this area is so new, under-researched and under-theorised that prevalent perspectives, such as STS, have much to add to the literature. For instance, histories of the internet and internet policy and regulation have only become foci for serious historical research in the last decade, as the internet has become recognised as central to information societies in the digital age (Haigh et al., 2015). Perhaps this issue can be a call for historians, legal scholars, critical cultural theorists and social scientists across a variety of disciplines to bring their theoretical perspectives to bear on this new empirical terrain.

Need for interdisciplinary problem-solving

Multidisciplinary research is used here to refer to bringing together research anchored in specific disciplines. In contrast, interdisciplinary research refers to research that is at the intersections of disciplines or which is a synthesis of disciplinary perspectives. It does not mean a lack of discipline or an ‘indiscipline’ (Shrum, 2005). That said, at the end of the day, internet policy is inherently a problem-oriented field (Dutton, 2013). Informing and stimulating debate on policy and regulation appropriate to mitigating problems with such issues as junk news, big data, encryption, the right to be forgotten, cyber security, and notice and consent is likely to require interdisciplinary thinking. But that does not require every study or every paper to be anchored in interdisciplinary research. As just noted above, disciplinary enquiries can prove to be very useful.

Instead, it suggests that disciplinary research needs to be brought together within more interdisciplinary projects, teams and centres that can understand, work with, and appreciate the contributions across the disciplines. In fact, that may well be a role that special issues on policy can play for the field of internet studies. The contributions to this special issue certainly demonstrate the value of systematic and critical disciplinary research to address the validity of key issues and concerns over the policy implications of the internet and related media, information and communication technologies.

References

Beniger, J. R. (1986). The Control Revolution. Cambridge, MA: Harvard University Press.

Benner, K., and Lichtblau, E. (2016, March 28). U.S. says it has unlocked iPhone without Apple. New York Times. Retrieved from https://www.nytimes.com/2016/03/29/technology/apple-iphone-fbi-justice-department-case.html

boyd, d. m. (2008). Taken Out of Context: American Teen Sociality in Networked Publics (Phd Dissertation). University of California, Berkeley. Retrieved from https://www.danah.org/papers/TakenOutOfContext.pdf

Braman, S. (2009). Change of state: information, policy, and power. Cambridge, MA: MIT Press.

Castells, M. (1996). The Rise of the Network Society: The Information Age. Oxford: Blackwell Publishers.

National Research Council. (2014). At the Nexus of Cybersecurity and Public Policy: Some Basic Concepts and Issues. (D. Clark, T. Berson, & H. S. Lin, Eds.). Washington, DC: National Academies Press. doi:10.17226/18749

Consalvo, M., & Ess, C. (Eds.). (2011). The Handbook of Internet Studies. Oxford: Wiley-Blackwell.

Cranor, L. F., & Wildman, S. S. (Eds.). (2003). Rethinking Rights and Regulations. Cambridge, MA: MIT Press.

de Sola Pool, I. (1983). Technologies of Freedom. Cambridge, MA: Harvard University Press.

Deibert, R., Palfrey, J., Rohozinski, R., Zittrain, J. (Eds.). (2008). Access Denied: The practice and policy of global internet filtering. Cambridge, MA: MIT Press. Available at https://mitpress.mit.edu/books/access-denied

Deibert, R., Palfrey, J., Rohozinski, R., Zittrain, J. (Eds.). (2010). Access Controlled: The Shaping of Power, Rights, and Rule in Cyberspace. Cambridge, MA: MIT Press. Available at https://mitpress.mit.edu/books/access-controlled

DeNardis, L. (2009). Protocol politics: the globalization of internet governance. Cambridge, MA: MIT Press.

DeNardis, L. (2013). The emerging field of internet governance. In W. H. Dutton (Ed.), The Oxford Handbook of Internet Governance (pp. 555-575). Oxford: Oxford University Press.

Dutton, W. H. (1999). Society on the Line. Oxford: Oxford University Press.

Dutton, W. H. (2009). The fifth estate emerging through the network of networks, Prometheus, 27(1), 1-15. doi:10.1080/08109020802657453

Dutton, W. H. (2013). Internet Studies: The Foundations of a Transformative Field. In Dutton, W. H. (Ed.), The Oxford Handbook of Internet Studies (pp. 1-23). Oxford: Oxford University Press. Available at: http://www.oxfordhandbooks.com/view/10.1093/oxfordhb/9780199589074.001.0001/oxfordhb-9780199589074-e-1

Dutton, W. H. (2015). Putting Policy in its Place: The Challenge for Research on Internet Policy and Regulation. I/S: A Journal of Law and Policy for the Information Society, 12(1), 157-84. Retrieved from http://moritzlaw.osu.edu/students/groups/is/files/2016/09/10-Dutton.pdf

Dutton, W. H. (2018, March 21). Regulating Facebook Won’t Prevent Data Breaches, The Conversation. Retrieved from https://theconversation.com/regulating-facebook-wont-prevent-data-breaches-93697

European Commission, Directorate-General for Communication Networks, Content and Technology. (2018). A multi-dimensional approach to disinformation: report of the independent high level group on fake news and online disinformation. Brussels: European Union. doi:10.2759/739290

Gerbner, G., Gross, L., Morgan, M., & Signorielli, N. (1986). Living with Television: The Dynamics of the Cultivation Process. In J. Bryant & D. Zillman (Eds.), Perspectives on Media Effects. Hillsdale, NJ: Lawrence Erlbaum Associates.

Graham, M. (2014). Internet Geographies: Data Shadows and Digital Divisions of Labor. In M. Graham & W. H. Dutton (Eds), Society and the Internet (pp. 99-116). Oxford: Oxford University Press.

Greenwald, G. (2014). No Place to Hide. New York: Metropolitan Books.

Haigh, T., Russell, A. L., & Dutton, W. H. (2015). Histories of the Internet: Introducing a Special Issue of Information & Culture. Information & Culture, 50(2), 143–159. doi:10.7560/IC50201

Hardie, T., Cooper, A., Chen, L., O’Hanlon, P., & Zuniga, J. C. (2014). Pervasive surveillance of the internet: Designing privacy into internet protocols. IEEE 802 Tutorial. Retrieved from https://mentor.ieee.org/802-ec/dcn/14/ec-14-0043-01-00EC-internet-privacy-tutorial.pdf

Hargittai, E. (2002). Beyond logs and surveys: In-depth measures of people’s web use skills. Journal of the American Society for Information Science and Technology, 53(14), 1239-1244. doi:10.1002/asi.10166

Howard, P. N. (2015). Pax Technica: How the Internet of Things May Set Us Free or Lock Us Up. New Haven, CT: Yale University Press.

International Commission for the Study of Communication Problems (Ed.). (1980). Many voices, one world: communication and society, today and tomorrow: towards a new more just and more efficient world information and communication order. Paris; London; New York: UNESCO; Kogan Page; Unipub.

Jones, D., & Simons, B. (2012). Broken ballots: Will your vote count? Stanford, CA: CSLI Publications.

Kahin, B., & Wilson, E. (Eds.). (1997). National information infrastructure initiatives: Vision and policy design. Cambridge, MA: MIT Press.

Keen, A. (2007). The Cult of the Amateur. New York: Doubleday.

Keen, A. (2015). The Internet is Not the Answer. London: Atlantic.

Laudon, K. (1977). Communications Technology and Democratic Participation. New York: Praeger.

Leigh, D., & Harding, L. (2011). WikiLeaks. London: Guardian Books.

Mayer-Schönberger, V. (2009). Delete: The Virtue of Forgetting in the Digital Age. Princeton and Oxford: Princeton University Press.

McLuhan, M. (1964). Understanding Media: The Extensions of Man. London: Routledge.

Mendel, T., Puddephatt, A., Wagner, B., Hawtin, D., & Torres, N. (2012). Global survey on internet privacy and freedom of expression. Paris: UNESCO.

Morozov, E. (2011). The Net Delusion: How Not to Liberate The World. New York: Allen Lane.

Mueller, M. L. (2002). Ruling the root: Internet governance and the taming of cyberspace. Cambridge, MA: MIT Press.

Norris, P. (2001). Digital divide: Civic engagement, information poverty, and the internet worldwide. Cambridge, UK: Cambridge University Press.

Pariser, E. (2011). The Filter Bubble. New York: Penguin.

Pressman, J. L., & Wildavsky, A. (1973). Implementation. Berkeley, CA: University of California Press.

Rainie, L. & Wellman, B. (2012). Networked: The New Social Operating System. Cambridge, MA: MIT Press.

Schotz, M. (2018, March 17). Cambridge Analytica Took 50M Facebook Users’ Data – And Both Companies Owe Answers, Wired. Retrieved from: https://www.wired.com/story/cambridge-analytica-50m-facebook-users-data/

Shrum, W. (2005). Internet indiscipline: Two approaches to making a field. The Information Society, 21(4), 273-275. doi:10.1080/01972240591007599

Sunstein, C. R. (2017). #republic: Divided Democracy in the Age of Social Media. Princeton, NJ: Princeton University Press.

Susskind, R. (2008). The End of Lawyers? Oxford: Oxford University Press.

United Kingdom Digital, Culture, Media and Sport Committee. (2017). Fake news inquiry - publications. Retrieved from: https://www.parliament.uk/business/committees/committees-a-z/commons-select/digital-culture-media-and-sport-committee/inquiries/parliament-2017/fake-news-17-19/publications/

Wellman, B. (2004). The Three Ages of Internet Studies. New Media & Society, 6(1), 123-129. doi:10.1177/1461444804040633

Williams, F. (1982). The Communications Revolution. Beverly Hills, CA: Sage.

World Internet Stats. (2018). World internet user statistics. Retrieved from https://www.internetworldstats.com/stats.htm

Wu, T. (2003). Network neutrality, broadband discrimination. Journal of Telecommunications and High Technology Law, 2, 141–179.

Footnotes

1. See AoIR website for more information: https://aoir.org/about/

Collectively exercising the right of access: individual effort, societal effect


1. Introduction

Personal data is one of the main assets in the new data economy. As a by-product of the growth of internet-enabled communication, computing power and storage capabilities, the amount of personal data that is being collected, processed and stored is growing fast. The increase in the use of personal data provides potential economic and academic benefits, but also entails risks with regard to power and privacy (Zuboff, 2015). This raises new questions as to how this new data economy should be governed (Bennett & Raab, 2017; Economist, 2017).

The European Union (EU) and the United States (US) have different approaches toward the question of how to govern personal data, though many elements seem similar and there is a partially shared genealogy (Hustinx, 2013; Schwartz, 2013). Recent events, such as the fall of the Safe Harbor agreement and the continued questioning of the EU-US Privacy Shield agreement, show that the differences are not just theoretical. While the US overall has a regime founded in consumer protection law, starting from the principle that data practices are allowed as long as they have a legal ground, the EU is taking a more cautionary approach with more focus on protecting citizens’ rights, by approaching privacy and data protection as a fundamental right. As part of this fundamental rights approach, Europe is focusing more on safeguarding citizens’ rights through principles of transparency and individual control. According to the Article 29 Data Protection Working Party (2018) - a cross-European panel of data protection agencies - transparency is especially important, as it is one of the preconditions for the ability to exert control with respect to the processing of personal data.

The European Union has had a unified data protection framework since 1995. In light of the developments sketched above, and with the aim of providing better protection of its citizens, a new data protection regulation is going into force in the EU in 2018. While there are some important additions to the data regulation framework, the central core of the framework remains essentially unchanged. This happens while we do not even know if the elements of this core function, and while some elements, like informed consent, have been shown to be largely dysfunctional (e.g., Zuiderveen Borgesius, 2015). 1

The right of access is one of the key legal provisions in this framework, which should provide transparency to citizens. It puts an obligation on organisations to, upon request, provide citizens with the personal data held on them, the source of this data, the purpose of this data, and who this data is shared with (we discuss these provisions in more detail in Section 2). The right of access is intended to enable citizens to verify the lawfulness of the data practices of an organisation, after this processing has already started. So, in theory, this right should enable citizens to protect their rights related to the use of their personal data.

This paper addresses the following key questions: To what extent does the exercise of the right of access meet its objective in practice? Does it provide meaningful actual transparency to citizens?

We answer these questions by recruiting participants who send data access requests and share the replies with us. We first analyse the replies to the access requests from the point of view of their compliance with the law. Next, we collect the views of the study participants, the citizens for whom the law is written: we ask them to rate the replies that they receive, to state what they expect from the law, and to evaluate the right of access after having used it. Lastly, we reflect on these findings and explore under what conditions the right of access might contribute to transparency and to ensuring the lawfulness of data processing. We conclude that a much deeper story emerges through perceiving the requests as a collective endeavour.

Our paper contributes to a considerable body of scholarly work on the different data protection regulations by legal scholars (e.g., Galetta & De Hert, 2015) and governance scholars (e.g., Bennett & Raab, 2017), by providing empirical evidence for analyses that often deal with abstract principles. There have been a few small-scale studies in the Netherlands of exercising access requests in practice, such as the studies by Van Breda and Kronenburg (2016) and Hoepman (2011). We extend these works by sending requests to a larger set of organisations, sending multiple requests to the same organisation, sending follow-ups, and sending requests for specific types of data. The most similar study to ours was performed by Norris, De Hert, L’Hoiry, and Galetta (2017), who conducted the first major multi-country empirical study of the right of access, sending and monitoring 184 access requests. To some extent, our work corroborates their findings, albeit in another country, as their study did not include the Netherlands. Our main methodological contribution is the inclusion of non-researcher citizen-participants in gathering the data, as well as in the interpretation of and reflection on the replies.

2. Right of access

In order to empower its citizens, European lawmakers have created the so-called right of access in the Data Protection Directive (DPD). This gives citizens the right to obtain information about personal data that is processed pertaining to them. In the Netherlands, the DPD has been codified into law via the “Wet Bescherming Persoonsgegevens” (Dutch Personal Data Protection Act). Article 35 of that act defines the right of access as follows:

1. The data subject may request the controller without constraint and at reasonable intervals to notify him about whether personal data relating to him are being processed. The controller will notify the data subject about whether or not his personal data are being processed in writing within four weeks.

2. Where such data are being processed, the notification will contain a full summary thereof in an intelligible form, a description of the purpose(s) of the processing, the categories of data concerned and the recipients or categories of recipients, as well as the available information on the source of the data.

3. Before a controller provides the notification referred to in subsection 1, against which a third party is likely to object, he will give that third party the opportunity to express his views where the notification contains data relating to him, unless this proves impossible or involves a disproportionate effort.

4. Upon request, the controller will provide knowledge of the logic involved in any automatic processing of data concerning him.

In this research, almost all data access requests fall under the scope of this Dutch law. If the organisation is located in another European country, the national implementation of the DPD applies. In most important aspects, these implementations are very similar. Differences can be found in attributes such as the maximum time allowed for the response (e.g., four weeks in Dutch law and 40 days in UK law). In May 2018, when the General Data Protection Regulation (GDPR) came into effect, these differences became a thing of the past. 2

The data protection regulation consists of a set of obligations for data controllers and rights for data subjects. The goals of the different provisions overlap. With regards to transparency, there are rules that require a priori information provision directly to the data subject (art. 13 GDPR and art. 14 GDPR) and to the data protection authority (DPA) (art. 30 GDPR). There are also rules that require information provision a posteriori (art. 15 GDPR), via the right of access. A key difference between the two types of transparency is that a priori transparency can only describe data practices in abstract terms: it describes, more or less precisely, the categories of data that will be processed. A privacy statement may, for example, say that an organisation collects names, but only after processing has started can the organisation report that it recorded the name Adam. Therefore, a posteriori transparency can be used to check the accuracy of the processing, while a priori information provision cannot. We think this specificity is also needed to verify the lawfulness of the processing: people have a better understanding of processes when they can observe them in concrete terms.

The text of the law is rather unclear in this respect, saying that “a full summary of the data” and “the recipients or categories of recipients” have to be provided. 3 This seems to leave room for an interpretation that allows for stating categories as an acceptable reply (Van Breda & Kronenburg, 2016). However, the Dutch DPA (2007) has taken the position that the reply should include a full reproduction of the data, and this position has been accepted by the courts. 4

The transparency-related rights and obligations should help the data subject: the right of access enables data subjects to check the quality of their personal data and the lawfulness of the processing. Recital 41 of the DPD defines it as follows: “… any person must be able to exercise the right of access to data relating to him which are being processed, in order to verify in particular the accuracy of the data and the lawfulness of the processing”5. De Hert and Gutwirth (2006) explain that the rationale for the data protection regulation is “to promote meaningful public accountability, and provide data subjects with an opportunity to contest inaccurate or abusive record holding practices”.

Notwithstanding these legal provisions, recent surveys show that European citizens, like citizens elsewhere, do not feel that they have transparency and control over the use of their personal data. And while the regulatory framework for dealing with the rapid increase in the collection and use of personal data relies heavily on citizen empowerment, very little is known about the practical effectiveness of the legal provisions, such as the right of access, that should guarantee this empowerment (OECD, 2013, p. 34).

3. Research method

To find out how the right of access functions in practice, we need to observe how organisations answer data access requests, compare this to the criteria formulated in the law, and furthermore evaluate the experience of the citizens making use of the right. To this end, we recruited participants to send data access requests, and interviewed them about their experience during the process, as we explain in this section.

3.1. Data collection

The data used in this study all derive from actual replies from organisations to right of access request letters sent by seven individuals—two of the authors and five participants. Initially, to gain a basic understanding of the process involved, the authors sent approximately 35 access requests. At a later stage, eight people connected to the authors, but who were not data governance researchers, were invited to participate in the study, five of whom completed it. 6

Potential participants received documentation explaining the basics of the legal right under investigation, the purpose of the study, a template of the access request letter, a list for choosing the organisations to send a data access request to, and a consent form. Participants took part in a semi-structured intake interview, and were asked about their expectations of access requests, their attitudes towards the use of personal data in society, and their motivation for participation. These interviews served as a reference for the subjective judgment of the effectiveness of the access requests later on.

Participants were next asked to choose at least ten organisations to send data access requests to—with a suggestion of five that deliver public services (e.g., public transport and education), three dominantly online companies (e.g., online shops and internet service providers), and two miscellaneous. The suggestions were meant to ensure we collected multiple data points on similar organisations, while giving participants the freedom to engage actively and with personal interest.

Subsequently, we helped the participants draft the data access requests, based on a fixed template. The standard template was a slightly adapted version of the template that the Dutch Data Protection Authority (DPA) offers on its website. 7 One participant used the standard letters provided by the Dutch digital rights organisation Bits of Freedom, which, while worded slightly differently, contain the same elements. These include a request for (i) an overview of the data being processed, if any, (ii) an explanation of the purposes of collection, (iii) with whom the data has been shared, and (iv) the origin of the data. In 14 cases an English letter was sent, based on a similar template provided by the British DPA, the Information Commissioner’s Office (ICO). In 16 cases the letter was individualised, requesting specific types of data the participant wished to receive (e.g., internet traffic, or data related to a specific flight). The postal address of the target organisation was added to the template. This address was taken from the organisation’s privacy policy; if it was not provided there, the address in Bits of Freedom’s online database or the general address of the organisation was used. As a means of identification for the receiving organisation, a copy of an ID document was added.

Overall, the seven individuals sent a total of 106 access requests to organisations in different sectors, as shown in Table 1. Of these, 65 requests were sent to public organisations and 41 to private organisations. Most requests were sent by letter (85), but e-mail (15) and web forms (6) were also used. The majority of the target organisations (92) were located in the Netherlands.

In order to check the progress of the data access requests, and to find out if there were any problems, we had regular (often weekly) contact with the participants. If after four weeks—the maximum time allowed by the law—a reply had not been received, participants were asked to send a reminder to the organisation, indicating that they expected a swift answer and referring to the legal deadline. If, two weeks later, there was still no reply, participants sent a second reminder noting the possibility of seeking recourse via the DPA. In total, 47 first reminders and 21 second reminders were sent, while none of the participants filed a complaint with the DPA for non-response.
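
As a sketch of this schedule (Python; the dates are illustrative, while the four-week legal deadline and the two-week gap to the second reminder are as described above):

    from datetime import date, timedelta

    LEGAL_DEADLINE = timedelta(weeks=4)  # maximum response time under Dutch law
    REMINDER_GAP = timedelta(weeks=2)    # second reminder two weeks after the first

    def reminder_schedule(sent: date) -> tuple:
        """Return the dates for the first and second reminder."""
        first = sent + LEGAL_DEADLINE    # send if no reply by the legal deadline
        second = first + REMINDER_GAP    # escalate, mentioning recourse via the DPA
        return first, second

    print(reminder_schedule(date(2017, 3, 1)))
    # (datetime.date(2017, 3, 29), datetime.date(2017, 4, 12))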

When participants received a reply, they were asked to share it with us. From these responses we recorded basic process information, such as response time, the number of reminders sent, and how the response was received (regular post, registered post or e-mail). We noted whether the responses contained answers to the different sub-questions asked—where the data comes from, with whom it is shared, and why it has been collected—and whether these answers were generic or specific. We also asked the participants to evaluate the responses on completeness, communication style, and accuracy of the data received (to the extent that data was provided). They could also write down general remarks.

Finally, after all data access requests were processed, participants were interviewed again, and asked to reflect on the effectiveness of the right of access and their participation in the research.

Table 1: Number of data access requests sent to different sectors

Sector | Example organisations | Access requests sent | Target organisations
Education | Delft University of Technology, Design Academy Eindhoven, Gymnasium Haganum (high school) | 7 | 5
Finance | ABN, Mastercard, OHRA | 6 | 5
Government | Tax authority, municipalities, UWV | 30 | 19
Platforms | Mi, Skype, Spotify | 10 | 9
Retail | Happy Socks, Ikea, Bol.com | 8 | 6
Telecom | KPN, T-Mobile, Ziggo | 8 | 6
Transport | Car2Go, NS, Amsterdam Airport Schiphol | 20 | 7
Utilities | Eneco, Energiedirect, PostNL | 7 | 7
Other | NGOs, art institutions, general practitioners | 10 | 10

3.2. Data analysis

As we have discussed, the right of access aims to bring transparency to citizens about the way in which organisations use their personal data. The transparency to be achieved is, however, not defined precisely or uniformly in the law, case law, or scientific literature.

We operationalised transparency in two ways. The first way was to compare the access responses to the formal legal criteria. The law and related case law specify several mandatory elements in a response to an access request (see Section 2, “Right of access”). There needs to be a reply to an access request within a number of weeks (four in Dutch law), and the reply needs to include the categories of data that an organisation processes, the actual data that is processed, an explanation of the purposes of the processing, an explanation of how the organisation obtained the data, and whether, and with whom, the data was shared. We checked the replies for these elements, and whether they were given in general or specific form.
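
To illustrate this first operationalisation, the sketch below shows one way such coding of replies could be captured. This is a minimal sketch in Python; the class, field, and function names are our own illustrative labels, not an artefact of the study.

    from dataclasses import dataclass
    from enum import Enum
    from typing import Optional

    class Detail(Enum):
        ABSENT = 0    # element missing from the reply
        GENERAL = 1   # e.g., "recipients include healthcare providers"
        SPECIFIC = 2  # e.g., the actual data, or named recipients

    @dataclass
    class CodedReply:
        organisation: str
        response_days: Optional[int]  # None if no reply was received
        data: Detail                  # the personal data itself
        purpose: Detail               # why the data is processed
        source: Detail                # how the organisation obtained the data
        sharing: Detail               # with whom the data was shared

    def within_deadline(reply: CodedReply, weeks: int = 4) -> bool:
        """Dutch law required a reply within four weeks."""
        return reply.response_days is not None and reply.response_days <= weeks * 7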

The second way was to let citizens, for whom the law is intended, judge whether the responses gave sufficient insight into the lawfulness and accuracy of the data processing, as the law intends. As described in Section 3.1, this was done by asking the participants to grade each access request and response, complemented by the intake and final interviews.

3.3. Ethical considerations

Before involving participants, we sought and received approval from the Delft University of Technology’s Human Research Ethics Committee.

Our research requires the participants to share the replies to their access requests with us, which by their nature might contain highly sensitive personal information. One principle of ethical data sharing is informed consent. We therefore informed participants in detail about the experimental design, and strived for open communication and an atmosphere that makes it easy for participants to decide to share or not share their data, to share only part of their data, or to revert any previously taken decision on this matter. Participants could withdraw from the research at any point and for any reason. Moreover, keeping the data safe is a key concern. The original response letters were held by the participants themselves, and we stored a digital copy on an encrypted university server, accessible only to two of the researchers and the individual participants.

Another consideration is that replying to a data access request, if taken seriously, may take an organisation considerable time. Some organisations we talked to reported that they had previously been targeted by public data access request campaigns and experienced this almost as a ‘distributed denial of service attack’. While acknowledging this concern, given that organisations have a legal obligation to reply to access requests, and given the importance of investigating access rights by actually exercising the right (versus investigation by proxy), we deem our method acceptable. For larger-scale research with participants, however, some form of load balancing across the queried organisations is needed in the research design. Finally, since our research is not intended as an attack on any organisation, and especially not on any individual within an organisation, we protect the privacy of the individuals within organisations who respond to access requests, and never mention their names.

4. Legal compliance

We now present our findings on the extent to which the replies to the access requests complied with the law. In 4.1 we look at the most basic questions: was there a reply at all, and how long did it take organisations to reply? In 4.2 and 4.3, we describe the extent to which the replies were complete. In 4.4 we discuss how responses to specific requests and follow-up questions were handled, and the patterns that can be observed by matching replies to similar requests.

4.1. Is anybody listening?

Approximately 80 percent of the data access requests were eventually answered. 8 About half were answered within the four weeks stipulated under Dutch law and, as the response time histogram in Figure 1 shows, a relatively large proportion of the replies arrived in the fourth and fifth week after the request, around the legal deadline. Coolblue, a web shop, responded to a request by letter, with data, within two days. A small proportion (7) of organisations replied within a week, but most of these responses did not contain the requested data. 9 At the other end of the spectrum, 34 organisations answered late, 21 answered after one reminder, and 9 after two reminders were sent.

Figure 1: Histogram of response time (in days)

Figure 2 provides an overview of the replies received, and the departments that sent them. Approximately 33% of the responses included user data, and an additional 15% included categories of data but not the data itself, while 26% stated the organisation did not have any data, and 5% referred the participant to another organisation. Most replies were signed by the customer service department (25%), followed by privacy (13%), legal (12%), and others.

Figure 2: Response classification (left) & responding department (right) for total sample

Finally, Table 2 shows the response classification and response time by sector. We see quite some diversity across sectors. For instance, all educational organisations in our sample replied, while 35% of the requests to companies in the transport sector remained unanswered.

Table 2: Access request response classification and time (by sector)

Sector | N | Data (specific or categories) | No data or referral | No reply (excluding cancelled) | Response time (mean number of days)
Education | 7 | 57% | 43% | 0% | 29.3
Finance | 6 | 67% | 33% | 0% | 40.8
Government | 30 | 40% | 43% | 10% | 34.2
Platforms | 10 | 60% | 10% | 20% | 33.1
Retail | 8 | 50% | 25% | 25% | 30.8
Telecom | 8 | 38% | 38% | 25% | 21.8
Transport | 20 | 30% | 35% | 35% | 26.5
Utilities | 7 | 57% | 29% | 14% | 20.7
Other | 10 | 80% | 0% | 10% | 28.8
Total | 106 | 48% | 31% | 17% | 30.5

4.2. Diversity of responses

To give a feel for the diversity among the replies, which exists in many different regards, we start with a detailed description of two answers: one compliant and one non-compliant.

Stroom

A data access request was sent by letter to Stroom The Hague, a publicly funded art centre in The Hague, by a participant who collaborates with them. Nineteen days after the request was sent, a response was received in the form of a letter from the director of the organisation.

In the two-page reply, seven categories of data, including name and contact details, artist details, nationality, and correspondence, are discussed. For each category, the letter describes how the organisation received the data (for example, whether it was given to them by the participant). The data is either provided in the letter, or a reference is given to an online platform where the participant can access the data, and the letter briefly explains why the data has been (or is) processed. Furthermore, the letter indicates which of these data are publicly available, and even includes a section about data they do not currently have, but might have under different conditions, for instance if the participant had a financial relationship with the organisation.

Ziggo

A data access request was sent to Ziggo—a large Dutch cable company owned 50% by Vodafone and 50% by Liberty Global—by a participant. A customer service representative called within two days, asking whether the participant was facing any problems, for example with her password, and admitting that they did not really know what to do with the request. The participant explained that she would like to know how Ziggo deals with personal data, and whether, for example, they record which television programmes have been watched or which internet pages have been visited. The customer service representative responded that they would figure this out and get back to the participant in writing.

Four days later, the participant was called again, this time by a representative of complaints management, who again expressed that it was not “really clear” to them what they had to do with the letter. The participant explained the same story again and requested access to her data. The complaints manager suggested that the participant read the information on the website, offering no additional information. The same day, the participant received Ziggo’s privacy policy by email, which in layman’s terms explains the right of access: “You have the right to know which of your personal data we store. We can request a small fee for the administrative costs that are connected to offering this type of data”. But still no data was offered, nor were the specific questions regarding specific types of data answered.

A few weeks later, the participant sent a more specific data access request, asking explicitly for an overview of all the data related to her internet use in the past three months, and noting that, in her view, the previous data access request had not been sufficiently addressed. Nine months and a reminder letter later, no response had been received.

4.3. How complete are the replies?

The data protection law stipulates that organisations should, upon request, provide a full overview of the personal data held, plus the purpose and method of collection, and who the data was shared with. Just as with the diversity in response time, there is quite some diversity in the content of the replies across sectors, as the breakdown by sector in Table 3 shows.

Table 3: Completeness of access responses, based on the elements specified in the law and reiterated in the requests, grouped by sector

Sector | N | Contains data: specific | Contains data: general | Purpose of collection | Method of collection | Data sharing: specific | Data sharing: general
Education | 7 | 14% | 43% | 57% | 43% | 29% | 29%
Finance | 6 | 50% | 17% | 67% | 50% | 17% | 67%
Government | 25 | 24% | 24% | 36% | 24% | 24% | 20%
Platforms | 7 | 71% | 12% | 43% | 29% | 0% | 71%
Retail | 6 | 50% | 17% | 33% | 33% | 33% | 17%
Telecom | 6 | 50% | 7% | 33% | 0% | 0% | 33%
Transport | 13 | 46% | 0% | 54% | 15% | 8% | 46%
Utilities | 6 | 50% | 17% | 50% | 67% | 0% | 50%
Other | 8 | 62% | 38% | 50% | 62% | 62% | 12%
Total | 84 | 61% 10 | 45% | 32% | 55%

(In the Total row, the specific and general sub-columns are combined.)

Overall, even among the organisations that did respond to the data access request, only very rarely did the response seem to be a complete overview. Many organisations replied with lists of labels of data or categories of data, instead of sharing the specific data. As an example, Happy Socks sent a participant an email in which they said that they have data like his name and home address, but they did not give the actual name and address that they have on file. 11 OHRA, a health insurance company, after 69 days and two reminders, sent a letter containing a list of categories of data they collect, including, amongst other things, “medical data”, and a list of the categories of potential recipients of the personal data, including, amongst others, “healthcare providers”.

When data is given, it can be challenging for the data subject to know whether it is complete. For example, The Hague Library sent a reply that contained a screenshot from what seems to be their Customer Relationship Management (CRM) system. This screenshot shows a tab called “borrower registration”, which includes fields like name, date of birth, home address, contact details, and bank account number. Is this all the information the library system holds? Or are there other tabs in the system—with, for instance, payment history, a history of the books that have been borrowed, or a profile of the borrower’s interests—which are not included because of a narrow interpretation of “personal data”? 12

Access requests sent to several municipalities—which all received the same request, and probably hold similar personal data—shed light on another aspect. Large organisations often find it hard to give a complete overview of all the personal data they have, and choose different ways to handle this complexity. The Municipality of The Hague sent a 16-page list of labels of data they share with other organisations through two databases, “BRP Verstrekkingsvoorziening” (Personal Records Database Distribution Facility) and “Beheervoorziening BSN” (Social Security Number Distribution Facility), but did not offer any further explanation (see Appendix 1 for the first page of the reply). The Municipality of Amsterdam, on the other hand, responded with a letter explaining that they have a multitude of public tasks and responsibilities, and therefore register personal data in multiple systems. They invited the participant to visit in person to see whether the access request could be more narrowly specified. The Municipality of Amstelveen took the middle ground: they sent an overview of some registrations, and invited the participant to visit in person to learn about the ways the municipality deals with personal data.

Indeed, the text of the law is rather unclear, stating that “a full summary of the data” and “the recipients or categories of recipients” have to be provided. This seems to leave room for various interpretations, for instance that stating categories only is an acceptable reply (Van Breda & Kronenburg, 2016). However, as previously mentioned, the Dutch DPA (2007) has taken the position that the reply should include a full reproduction of the data if the data subject asks for it, not just the categories, and this position has been accepted by the courts. 13 The GDPR addresses the ambiguity with regards to returning the actual data. 14

Another aspect of incompleteness is that many organisations do not answer the sub-questions about purpose of processing and data sharing (Table 3). In fact, while 83% of organisations answered the access requests, only 22% answered all the sub-questions asked, and only 10% were specific about both the data collected and the organisations the data was shared with. Bol.com, a large Dutch online web shop, was unique in the sample for sharing the specific third-party partners that receive data for processing payments and product delivery.

4.4. Do more specific requests and follow-ups help?

One might expect that the likelihood of receiving full and specific data increases when a more specific request is sent. The empirical data shows a mixed picture in this respect. Participants sent 16 modified access requests asking for specific forms of data. Of these 16 cases, only three received a response that directly addressed the specific question posed. Participants also sent 13 follow-up requests. These were almost invariably answered with an individualised response directly addressing the question posed.

For example, participants sent five access requests to Amsterdam Airport Schiphol, two of which were modified. Schiphol replied to four participants, all with the same answer: that the airport does not have any personal data relating to them in its databases. 15 This was despite one participant requesting all personal data related to one specific recent flight, and another requesting data related to the Wi-Fi-tracking system while including the MAC address of the phone carried. These specific elements were simply ignored. We also sent one follow-up letter to Schiphol, asking how it is possible that the airport has no personal data while handling luggage and boarding passes and engaging in Wi-Fi-tracking. Schiphol answered that they do keep luggage and boarding pass data, but delete these a few days after a flight, and that the Wi-Fi-tracking data they hold cannot be traced back to an individual. 16

This example follows a pattern we regularly observed: a request for information about specific data in an initial access request is usually ignored, while follow-up requests more often receive an individualised reply.

Sometimes a follow-up request does receive an answer with data that was previously withheld. The UWV (Employee Insurance Agency), the autonomous administrative authority commissioned by the Ministry of Social Affairs and Employment to implement employee insurance and provide labour market and data services, is an example. At first, a participant sent a standard access request to the agency, to which it replied that it did not use any of the participant’s personal data. 17 The participant then sent a follow-up letter, in which she pointed out that, according to information on its own website, UWV processes data about the work and income history of all employees in the Netherlands, and that she therefore did not understand how it was possible that UWV did not process any of her personal data. In response to this letter, UWV sent a reply including many pages from a system in which various personal details, including detailed income data, were recorded.

Through the examples of Schiphol, UWV, and the Dutch municipalities (Section 4.3), we learn that matching responses from the same (or related) organisations increases the ability to judge the quality, completeness, and veracity of an access response. To demonstrate this point, consider how Van Breda and Kronenburg (2016) judged Schiphol’s access response, in isolation, to be of rather high quality. They found the response, despite providing no data, to be transparent and helpful, as it provides information on other organisations that may process information about the data subject in the airport, and they commended the fact that the response was sent by registered post. But by sending five requests and comparing the answers, we found that Schiphol sends exactly the same letter irrespective of the precise question posed in the request. In other words, matching responses allows for a better judgement of the completeness of the individual answers.
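
As a minimal sketch of this matching idea (Python; the reply texts below are invented for illustration), grouping reply texts per organisation exposes boilerplate answers that ignore the question asked:

    from collections import defaultdict

    def boilerplate_responders(replies):
        """replies: list of (organisation, reply_text) pairs.
        Returns organisations that sent more than one reply, all identical."""
        by_org = defaultdict(list)
        for org, text in replies:
            by_org[org].append(text.strip().lower())
        return {org for org, texts in by_org.items()
                if len(texts) > 1 and len(set(texts)) == 1}

    replies = [
        ("Schiphol", "We hold no personal data relating to you."),
        ("Schiphol", "We hold no personal data relating to you."),
        ("UWV", "We do not process your personal data."),
        ("UWV", "Enclosed are pages from our system with your income data."),
    ]
    print(boilerplate_responders(replies))  # {'Schiphol'}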

5. Participant perceptions

Our analysis so far suggests a rather mixed conclusion with regards to compliance. There clearly are organisations that put effort into being transparent about the way they process personal data, while others, whether out of inability or unwillingness, do not comply with the basics of the law. More importantly, however, the right of access is a data subject right intended to empower the citizen. Thus, we have to go beyond a formal legal judgment and take into account the citizens’ perspective to assess the extent to which the right of access functions. We do so in this section.

5.1. Best and worst responses

When participants were asked in the interview which of the responses they thought were best, two criteria emerged throughout: the completeness of the data and, in different forms, the feeling of being taken seriously. Completeness was appreciated in terms of sheer quantity, coverage in time, and precision in describing the origin of the data. But the more striking aspect participants judged was the tone of the interactions and the implied willingness to provide transparency: “Amstelveen Municipality did best because they invited me and were clearly putting an effort to get you the insight you wanted, even though you did not even know exactly what you wanted”, or “TU Delft explained a lot and although I did not get the data I felt that I could have gotten it”.

When asked which were the worst responses, the mirror image emerged. While participants disliked responses without data, they were more vehemently critical of responses that did not treat them respectfully. Participants made remarks such as: “You get the feeling that they try to keep you at a distance and make it complicated”, “The way they are responding is almost like I am an idiot and they are making stuff up”, “the way in which they address you is kind of aggressive to start with”, or “Their answer seems like a Jedi/Sith mind trick”.

5.2. Completeness and communication style

We asked participants to grade all individual access request replies on a Likert scale (very bad – bad – neutral – good – very good) on the aspects of perceived completeness and communication style satisfaction. If we map these grades to numbers (very bad = 1, very good = 5), the average grade participants gave for perceived completeness was 2.1 (bad), and for communication satisfaction 2.6 (between bad and neutral).
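
As a minimal sketch of this calculation (Python; the mapping is from the text above, while the individual grades are invented for illustration):

    LIKERT = {"very bad": 1, "bad": 2, "neutral": 3, "good": 4, "very good": 5}

    grades = ["bad", "very bad", "neutral", "bad", "good"]  # hypothetical grades
    mean = sum(LIKERT[g] for g in grades) / len(grades)
    print(round(mean, 1))  # 2.4; the study's actual averages were 2.1 and 2.6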

While the number of requests is too low to make statistically significant claims about sectors as a whole, there seem to be quite marked differences between sectors in the sample, as the boxplots in Figure 3 show. The high grades for the educational organisations and the low grades for the telecommunications sector stand out in particular.

Figure 3: Perceived completeness (left) & Communication satisfaction (right) grades given by participants to access responses (Boxplots are ordered by the median grade per sector, indicated by the orange line in the rectangles. The rectangles show the 25 and 75 percentiles of grades, which are on a Likert scale)

Low grades in the telecommunications sector (which includes mobile operators and ISPs) can be traced to a number of specific behaviours. For example, three of the four organisations (Car2go, Tele2, Telfort, Ziggo) that told participants to check their privacy policy were in the telecommunications sector. This made participants feel that “[they] let you walk in circles, [and] you get nowhere”, as these privacy policies explicitly mention the very right of access the participant was trying to make use of. With regards to completeness, none of the companies in the telecom sector provided internet traffic or location data (gathered through connections with cell-phone towers), even when specifically requested. Participants felt very uncomfortable about this, because they believed that these companies have much more data than they share in the access response. Additionally, participants expect more from technologically capable companies: “I tend to be a bit more lenient with companies or organisations that are not really IT based. [But] for example if a whole business is set up around databases and providing a website and giving you services, I would expect that they also have the expertise to very easily create a database dump and just give it to me”. Our finding about the negative perception of the telecommunications sector is in line with Norris et al. (2017), who find that seven out of ten organisations in the mobile telephony branch apply restrictive practices when answering data access requests.

5.3. What do citizens expect from the right of access?

Before sending the data access requests, we interviewed participants and asked them what they expected from exercising the right of access, and why they were participating in our research. After having sent data access requests and receiving the replies, we interviewed participants again and asked them to reflect on the right of access based on their experience within this research.

Before sending the access requests, most participants expressed that they did not expect to actually get access to their data by using the right. Instead, participants expressed that exercising the right of access could still be valuable for other reasons. They expressed that, when confronted with an access request, an organisation might start to critically assess its data practices, or as one participant put it, “I want to participate in this research because I want it to initiate a discussion. This to me is even more important than getting to see my own data. These two things are not even comparable. It is extremely important that we will make sure that in society, in politics and within organisations, the awareness is built”.

Most participants reported that the replies were, by and large, in line with their expectations, i.e., the right does not work that well with respect to getting the data that they expected organisations to have: “No, it is not effective”, “In reality it is worse than I expected”, and “It feels you are still ending up in some kind of black box”. However, they also expressed that the right works with respect to getting a deeper understanding of data practices: “It has made me more aware” and “The experience of sending out these access requests was really eye opening”.

Most importantly, a feeling of gaining strength through collectivisation was expressed. Participants said things such as “I think it has contributed to organisations building some kind of process for dealing with access requests, especially because I know we were in a group” and “it gives me a feeling of the potentiality of this [right] helping society in order to be more in control of our data or to be at least informed [...] about our data being hosted by third parties”.

6. Discussion

6.1. A failing instrument

The goal of the right of access as a juridical tool is to enable citizens to verify the lawfulness of the processing.18 In this, it mostly fails. A substantial proportion of the queried organisations, whether out of inability or out of unwillingness, do not comply with the law. And while many replies are quite elaborate, even these frequently provide inadequate information for the individual to make an informed judgment about the lawfulness of the processing. Most participants reported the process to be a poor experience in terms of transparency and empowerment.

We also found that, even though this law has been in place for over fifteen years, certain organisations reported that they had never before received an access request, indicating that the right of access is rarely exercised by citizens. This is especially intriguing in the case of large organisations that process personal data, such as Delft University of Technology, with over 20,000 students, or Stedin, an electricity and gas network company with around 2,000,000 clients. Participants did not systematically ask organisations to report whether their request was the first ever received, but the same probably holds in other cases as well. This is quite remarkable, especially considering that the right of access has been present in Dutch law since 2001.

That the right of access has so far been used so rarely is another sign that it does not function well at present. Of our participants, only one had ever used the right of access before. If we ask why this may be the case, a possible answer is that people simply do not care much about the particular data practices of individual organisations. But given the reflections by the participants in the interviews (Section 5.3), an alternative cause may be that the expectation of success is very low.

6.2. A way to salvage it

Based on our experience, we see some ways forward. First, the exercise of the right of access can be part of an effort to create awareness and spark dialogue among citizens as well as organisations. And second, it could be used collectively as a way to increase empowerment.

The underlying problem that could be addressed through collectivisation is twofold: in the relationship between the citizen and the data controller, the starting point is one of a deficit of both power and knowledge on the part of the citizen (as argued by De Hert and Gutwirth, 2006).

With regards to the question of knowledge, there are a few connected issues. Once a reply to an access request is received, it is very hard to know to what extent the reply is complete, or to judge the quality of the reply and the lawfulness of the data practices it reveals. To be able to judge completeness, one needs exactly the knowledge that one does not have and is trying to obtain through the access request. This judgement can therefore only take place within a network of knowledge. The contextual knowledge needed to judge the quality of a reply can come from matching replies to other access requests, and from others with specialised knowledge.

That matching can help was demonstrated in the cases of Amsterdam Airport Schiphol (Section 4.4) and the Dutch municipalities (Section 4.3). We were only able to see that Schiphol always sent the same answer because we had different answers to compare with each other. And by comparing the reply of one municipality, which sent information regarding only one database, with those of other municipalities, which showed they had personal data in a variety of databases, it appears likely that the former municipality processes more data than it sent to some participants.

The ability to judge the quality of a reply also depends on specialised knowledge from the legal and technical realms. Such is the case, for example, when the question is whether a Media Access Control (MAC) address, a unique identifier for a communication device that is collected during Wi-Fi tracking, should be considered personal data. According to the Dutch DPA (2015), a MAC address is personal data, even when hashed by the organisation. A citizen who has neither the technical knowledge to understand how this works, nor the legal knowledge that the DPA has voiced in this opinion, stands very weak against an organisation that takes an opposing position. Similarly, when the Dutch unemployment agency UWV, one of the largest governmental institutions of the Netherlands, claims not to process any personal data, a citizen needs to (1) know that this cannot be true, and (2) have the audacity to oppose the claim of a large government organisation. In such situations, making access requests in a community of people, some of whom possess specialised knowledge, strengthens the position of citizens.
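
To illustrate the technical point about hashed MAC addresses, consider the short sketch below (Python; the MAC address is made up). A cryptographic hash is deterministic, so a hashed MAC address is a stable pseudonym that still singles out one device; moreover, the MAC address space (2^48 values) is small enough to enumerate, so hashing offers little protection in practice.

    import hashlib

    mac = "a4:5e:60:c2:19:7f"  # made-up device identifier
    h1 = hashlib.sha256(mac.encode()).hexdigest()
    h2 = hashlib.sha256(mac.encode()).hexdigest()
    assert h1 == h2  # the same device always yields the same value,
                     # so visits can be linked over time and across locations
    print(h1[:16])   # a stable pseudonym, not anonymous data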

Viewing the right of access as a legal tool to empower citizens vis-à-vis more powerful and knowledgeable organisations has parallels with freedom of information act (FOIA) rights. The ideal behind FOIA rights is that the citizenry has the right to gain knowledge about the functioning and decision making of governmental bodies (Kreimer, 2008), and similar arguments have been made with regards to private companies (Pasquale, 2015). Only an informed citizenry can make informed political judgments about a government that, in a democratic society, should be under its control. The rationale for the right of access is very similar. Moreover, like the right of access, FOIA rights are individual rights, while the benefit is meant to accrue to society as a whole.

Similarly, the difficult conditions of unequal information and power experienced by citizens who exercise their right of access resemble the conditions experienced with FOIA rights. Kreimer (2008) notes, for example, that “to press a recalcitrant administration for disclosure under FOIA requires time, money and expertise”. And while the right of access has been codified in such a way that it ought to be relatively easy for the citizen to exercise, for example by imposing few formal requirements on the request and capping the cost that organisations can charge for fulfilling one, getting a clear picture of data practices through the exercise of the right of access is still very difficult, as organisations limit access to the information in many different ways. Given the parallels, the conditions under which the right of access bears full fruit will also be very similar to those for FOIA rights. As Kreimer (2008) phrases it, FOIA regulation is effective when it is part of a broader “ecology of transparency” that includes “tenacious requesters” like well-financed NGOs and an active media.

6.3. The ecology of access requests and future work

If indeed, as we argue, the access right works best when used collectively and aimed at empowerment and transparency at a societal level, the next question is: what are the best-fitting forms of collective organisation for this right?

Several forms have been tested so far. A number of online projects, including Bits of Freedom’s Privacy Inzage Machine and Citizen Lab and Open Effect’s Access My Info (see Hilts & Parsons, 2015), help citizens generate access request letters. These projects create awareness among citizens about the right, and lower the barrier to exercising it by simplifying the process. They may also encourage organisations to be better stewards of personal information, as receiving access requests in high numbers signals to an organisation that citizens are concerned about how their personal data is used, and can “spur institutions to improve their privacy practices”19. Activists such as Rejo Zenger (in the Netherlands) and Max Schrems (in Austria and Ireland) have exercised their right of access, used blogs and websites to share their findings with a broader public, and entered into litigation in order to force organisations into increased transparency about their personal data practices (e.g., Zenger, 2011). Others, like Dehaye (2017), have combined the creation of an online access request tool with academic work and investigative journalism.

We plan to extend the current research in two ways. First, we are building a digital platform to recruit a larger group of participants in various EU countries, to send and track access requests in line with the method explored in this paper (see Asghari et al., 2017). This will allow a more elaborate empirical assessment of the right of access in action, and in particular a comparison of sectoral and country-level differences. Second, we plan to include the point of view of the target organisations, by interviewing their data protection officers (DPOs) in the future.

7. Conclusion

Just as the proverbial proof of the pudding is in the eating, rather than in a careful assessment of its recipe, the right of access should be assessed by how effective it is in practice. And since the right is meant to empower citizens, citizens should be the ones to judge whether it empowers them. In our study, we asked participants to send access requests, collected the responses to their requests, and interviewed the participants along the way. The resulting picture is not pretty: while there are some positive exceptions, overall compliance with the right of access is a mess. Non-compliance with the formal requirements of the law is widespread, with some organisations failing to answer at all, and others obstructing transparency in their answers. This mess did not surprise our participants, though.

This sobering picture, however, does not mean that the right is useless. When the right is used in a collective manner, it creates a context in which to judge the quality of replies and the lawfulness of data practices, by comparing replies to similar access requests. Participants also perceived a societal value much more than an individual one in exercising this right, not least because through collective use the power imbalance between individual citizens and organisations shifts in favour of the citizen.

Acknowledgements

We thank the participants for their effort in sending all the access requests, for thinking through the replies they received, and for their positive support of this research endeavour. We thank Nele Brökelmann, Kathalijne Eendhuizen, Stefanie Hänold, Joris van Hoboken and Mirna Sodré de Oliveira, as well as the reviewers, for their constructive critical comments on the text. This work was supported by the Princeton University Center for Information Technology Policy (CITP) Interdisciplinary Seed Grant Program.

References

Article 29 Data Protection Working Party. (2018). Guidelines on transparency under Regulation 2016/679 (No. WP260 rev.01). Brussels: European Union. Retrieved from http://ec.europa.eu/newsroom/article29/item-detail.cfm?item_id=622227

Asghari, H., Greenstadt, R., Mahieu, R. L. P., & Mittal, P. (2017). The Right of Access as a tool for Privacy Governance. Presented at HotPETs during The 17th Privacy Enhancing Technologies Symposium. Retrieved from https://petsymposium.org/2017/papers/hotpets/rights-of-access.pdf

Bennett, C., & Raab, C. D. (2017). Revisiting the Governance of Privacy. (SSRN Scholarly Paper No. ID 2972086). Rochester, NY: Social Science Research Network. Retrieved from https://papers.ssrn.com/abstract=2972086

Bruening, P. J., & Culnan, M. J. (2015). Through a Glass Darkly: From Privacy Notices to Effective Transparency. North Carolina Journal of Law & Technology, 17(4), 515-579. Retrieved from http://heinonline.org/hol-cgi-bin/get_pdf.cgi?handle=hein.journals/ncjl17&section=20

Dehaye, P.-O. (2017). Cambridge Analytica and Facebook data. Medium. Retrieved May 23, 2017, from https://medium.com/personaldata-io/cambridge-analytica-and-facebook-data-299c54cb23fa

De Hert, P., & Gutwirth, S. (2006). Privacy, data protection and law enforcement. Opacity of the individual and transparency of power. In E. Claes, A. Duff, & S. Gutwirth (Eds.), Privacy and the criminal law (pp. 61–104). Antwerp/Oxford: Intersentia.

De Hert, P., & Papakonstantinou, V. (2016). The new General Data Protection Regulation: Still a sound system for the protection of individuals? Computer Law & Security Review, 32(2), 179–194. doi:10.1016/j.clsr.2016.02.006

Dutch DPA. (2015). Wifi-tracking rond winkels in strijd met de wet [Wi-Fi tracking around shops violates the law]. Retrieved from https://autoriteitpersoonsgegevens.nl/nl/nieuws/cbp-wifi-tracking-rond-winkels-strijd-met-de-wet

Dutch DPA. (2007). Publication of personal data on the internet. Retrieved from https://autoriteitpersoonsgegevens.nl/sites/default/files/downloads/mijn_privacy/en_20071108_richtsnoeren_internet.pdf

Economist. (2017). Regulating the internet giants: The world’s most valuable resource is no longer oil, but data. The Economist. Retrieved from http://www.economist.com/news/leaders/21721656-data-economy-demands-new-approach-antitrust-rules-worlds-most-valuable-resource

Galetta, A., & De Hert, P. (2015). The proceduralisation of data protection remedies under EU data protection law: towards a more effective and data subject-oriented remedial system? Review of European Administrative Law, 8(1), 125–151. Retrieved from https://www.researchgate.net/publication/280034195_The_Proceduralisation_of_Data_Protection_Remedies_under_EU_Data_Protection_Law_Towards_a_More_Effective_and_Data_Subject-oriented_Remedial_System

Hilts, A., & Parsons, C. (2015). Access My Info: An application that helps people create legal requests for their personal information. In The 15th Privacy Enhancing Technologies Symposium, Philadelphia, PA. Retrieved from https://www.petsymposium.org/2015/papers/hilts-ami-hotpets2015.pdf

Hoepman, J. H. (2011). Het recht op inzage is een wassen neus. Wat nu? [The right of access is a sham. What now?]. Informatiebeveiliging, 2011(6), 16–17. Retrieved from https://repository.tudelft.nl/view/tno/uuid:6be95e4c-a836-4d64-8ad2-eeb1b987bfa7/

Hustinx, P. (2013). EU data protection law: The review of directive 95/46/EC and the proposed general data protection regulation. Collected Courses of the European University Institute’s Academy of European Law, 24th Session on European Union Law, 1–12. Retrieved from https://pdfs.semanticscholar.org/f1e3/333fcc1344d28134e0ab418817d5f7aa270d.pdf

Kreimer, S. F. (2008). The freedom of information act and the ecology of transparency. University of Pennsylvania Journal of Constitutional Law, 10(5), 1011-1080. Retrieved from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1088413

Norris, C., De Hert, P., L’Hoiry, X., & Galetta, A. (Eds.). (2017). The Unaccountable State of Surveillance - Exercising Access Rights in Europe. Cham: Springer International Publishing. Retrieved from http://www.springer.com/us/book/9783319475714

Pasquale, F. (2015). The Black Box Society: The Secret Algorithms that Control Money and Information. Harvard University Press.

The Organisation for Economic Co-operation and Development (OECD). (2013). The OECD Privacy Framework. OECD Publishing. Retrieved from https://www.oecd.org/sti/ieconomy/privacy-guidelines.htm

Schnackenberg, A. K., & Tomlinson, E. C. (2016). Organizational transparency: A new perspective on managing trust in organization-stakeholder relationships. Journal of Management, 42(7), 1784–1810. doi:10.1177/0149206314525202

Schwartz, P. M. (2013). The EU-U.S. Privacy Collision: A Turn to Institutions and Procedures. Harvard Law Review, 126(7), 1966–2009. Retrieved from http://papers.ssrn.com/abstract=2290261

Van Breda, B. C., & Kronenburg, C. C. M. (2016). Inzage in de praktijk van het inzageverzoek [Insight into the practice of the access request]. Privacy & Informatie, 2016(50), 60–65. Retrieved from http://old.ivir.nl/syscontent/pdfs/232.pdf

Zenger, R. (2011). Winst bij de rechter, Telfort geeft inzage in álle persoonsgegevens [Win in court: Telfort provides access to all personal data]. Retrieved November 17, 2017, from https://rejo.zenger.nl/focus/winst-bij-de-rechter-telfort-geeft-inzage-alle-persoonsgegevens/

Zuboff, S. (2015). Big Other: Surveillance Capitalism and the Prospects of an Information Civilization. Journal of Information Technology, 2015(30), 75–89. doi:10.1057/jit.2015.5. Retrieved from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2594754

Zuiderveen Borgesius, F. (2015). Informed Consent: We Can Do Better to Defend Privacy. IEEE Security & Privacy, 13(2), 103–107. doi:10.1109/MSP.2015.34. Retrieved from http://papers.ssrn.com/abstract=2793769

Appendix

Below is the first page of a reply to an access request to the Municipality of The Hague. It contains a list of labels of data they share with other organisations through two databases “BRP Verstrekkingsvoorziening” (Personal Records Database distribution facility) and “Beheervoorziening BSN” (Social Security Number distribution facility). No further information, background or invitation for further questions is given.

Reply to an access request to the Municipality of The Hague
 

Footnotes

1. For a critical analysis on the functioning of notice regulation in the US, see Bruening and Culnan, 2015.

2. The DPD was replaced by the GDPR effective May 2018. According to De Hert and Papakonstantinou (2016), the GDPR is not substantially different from past law with regards to the right of access. Two major motivations for the introduction of the GDPR were harmonisation and increased protection for citizens in an environment of intense technological change. Harmonisation is achieved by the fact that the regulation is directly applicable in all member states, whereas the DPD applied only through its implementations into respective national laws. Stronger data protection for citizens is pursued by, among other things, increased fines, which may increase the relevance of our work.

3. In the GDPR the first ambiguity seems to be resolved, as the law says: “The data subject shall have … access to the personal data … and the following information: … (b) the categories of personal data concerned”. The ambiguity with regards to the recipients, however, remains, as the law still states “… (c) the recipients or categories of recipient …”.

4. Dutch DPA (2007) p. 39 “Pursuant to Article 35 of the Wbp, a report must be a complete and clear overview of the data that are being processed in relation to a data subject. This must not be a description or summary of the data, but a complete reproduction. If the report were incomplete, the data subject would of course be insufficiently able to exercise his or her rights under the terms of the Wbp. This interpretation was confirmed in mid-2007 by the Supreme Court in the judgments on the Dexia case, Supreme Court, 29 June 2007, LJN: AZ4663 and Supreme Court, 29 June 2007, LJN: AZ4664.”

5. Similarly, recital 63 of the GDPR does the same. It reads: “A data subject should have the right of access to personal data which have been collected concerning him or her, and to exercise that right easily and at reasonable intervals, in order to be aware of, and verify, the lawfulness of the processing.”

6. The small number of participants and selection method impose limitations on the generalisability of our findings, for instance about how citizens as a whole perceive access rights, or how all organisations handle access requests. Our design, however, offers insights into sentiments and data practices that are present in society.

7. We added a subject line, a paragraph explaining why the citizen requested the data, and a paragraph explaining that a copy of a passport was included so that the organisation could verify the requester’s identity.

8. We only count a request as unanswered after at least 60 days have passed. It is of course possible that some organisations will still reply at some later point.

9. In two of these cases the replies referred back to the privacy policy of the organisation, and in two cases they referred to another organisation.

10. Participants agreed with seven of the responses without data. If we count the lack of data in these replies as the correct data, the percentage within this cell increases to 69%.

11. This only happened after the participant first sent a reminder, then received a copy of the privacy policy, and then again asked Happy Socks to act upon the access request as detailed in their own privacy policy.

12. We in fact had a follow up conversation with The Hague Library, and they stated that the former is true; in particular they stated that they do not keep borrowing history nor any borrower profile.

13. Dutch DPA (2007) p.39 “Pursuant to Article 35 of the Wbp, a report must be a complete and clear overview of the data that are being processed in relation to a data subject. This must not be a description or summary of the data, but a complete reproduction. If the report were incomplete, the data subject would of course be insufficiently able to exercise his or her rights under the terms of the Wbp. This interpretation was confirmed in mid-2007 by the Supreme Court in the judgments on the Dexia case, Supreme Court, 29 June 2007, LJN: AZ4663 and Supreme Court, 29 June 2007, LJN: AZ4664”

14. The GDPR states: “the data subject shall have … access to the personal data … and the following information: … (b) the categories of personal data concerned”. The ambiguity with regards to the recipients, however, remains, as the law still states “… (c) the recipients or categories of recipient …”.

15. The fifth participant received a confirmation one month after their request, stating Schiphol expects to answer with a delay because of the holiday period.

16. To clarify, we are not proposing that organisations should retain the data longer in order to respond to an access request; rather that the initial response could have pointed out the collection-deletion practice to improve transparency.

17. Of the five others who sent data access requests to two different branches of UWV, three never received a reply, even after sending reminders, and two received a reply that UWV did not process their personal data.

18. As both the explanatory memorandum of the Dutch Personal Data Protection Act and the recitals of the GDPR state; see the discussion in Section 2.

19. See https://accessmyinfo.org/

On democracy

Disclaimer: This guest essay in the Special issue on political micro-targeting has not been peer reviewed. It is an abbreviated version of a speech delivered by the Member of the European Parliament (MEP) Sophie in ‘t Veld in Amsterdam in May 2017 at Data & Democracy, a conference on political micro-targeting.

Democracy

Democracy is valuable and vulnerable, which is reason enough to remain alert to new developments that can undermine her. In recent months, we have seen enough examples of the growing impact of personal data in campaigns and elections. It is important and urgent that we publicly debate this development. It is easy to see why we should take action against the extremist propaganda of hatemongers aiming to recruit young people for violent acts. But we euphemistically speak of 'fake news' when lies, 'half-truths', conspiracy theories, and sedition insidiously poison public opinion.

The literal meaning of democracy is 'the power of the people'. 'Power' presupposes freedom. Freedom to choose and to decide. Freedom from coercion and pressure. Freedom from manipulation. 'Power' also presupposes knowledge. Knowledge of all facts, aspects, and options. And knowing how to balance them against each other. When freedom and knowledge are restricted, there can be no power.

In a democracy, every individual choice influences society as a whole. Therefore, the common interest is served with everyone's ability to make their choices in complete freedom, and with complete knowledge.

The interests of parties and political candidates who compete for citizens’ votes may differ from that higher interest. They want citizens to see their political advertising, and only theirs, not that of their competitors. Parties and candidates compete not only for the voter's favour, but for his exclusive time and attention as well.

Political targeting

No laws dictate what kind of information a voter should rely on to make a well-considered choice. For lamb chops, toothpaste, mortgages or cars, by contrast, producers are required to state the origin and properties of their products. This enables consumers to make a responsible decision. Providing false information is illegal. All ingredients, properties, and risks have to be mentioned on the label.

Political communication, however, is protected by freedom of speech. Political parties are allowed to use all kinds of sales tricks.

And, of course, campaigns do their utmost and continuously test the limits of the socially acceptable.

Nothing new, so far. There is no holding back in getting voters to cast their vote for your party or your candidate: from temptation with attractive promises to outright bribery, from applying pressure to straightforward intimidation.

Important therein is how and where you can reach the voter. In the old days it was easy: Catholics were told on Sundays in church that they had no other choice in the voting booth than the Catholic choice. And no righteous Catholic dared to think about voting differently. At home, the father told the mother how to vote. The children received their political preference from home and from school. Catholics learned about current affairs via a Catholic newspaper, and through the Catholic radio broadcaster. In Dutch society, which consisted of a handful of such pillars, one was only offered the opinions of one's own pillar.1 A kind of filter bubble avant la lettre.

Political micro-targeting

Nowadays, political parties have a different approach. With new technologies, the sky is the limit.

Increasingly advanced techniques allow the mapping of voter preferences, activities, and connections. Using endless amounts of personal data, any individual on earth can be reconstructed in detail. Not only can personal beliefs be distilled from large troves of data; it is even possible to predict a person's beliefs before they have formed them themselves. And, subsequently, it is possible to subtly steer those beliefs, while leaving the person thinking they made their decision all by themselves.

As is often the case, the Americans lead in the use of new techniques. While we Europeans, touchingly old-fashioned, knock on doors and hand out flyers at the Saturday market, the Americans employ the latest technology to identify, approach, and influence voters.

Of course, trying to find out where voters can be reached and how they can be influenced is no novelty. Political parties map which neighbourhoods predominantly vote for them, which neighbourhoods have potential, and in which neighbourhoods campaigning would be a wasted effort. Parties work with detailed profiles and target audiences, for which they can tailor their messages.

But the usage of personal data on a large scale has a lot more to offer. Obviously, this is a big opportunity for political parties, and for anyone else, who runs campaigns or aims to influence the elections.

However, the influencing techniques become increasingly opaque. As a result of the alleged filter bubble, voters are being reaffirmed in their own beliefs, and they hardly receive information anymore about the beliefs and arguments of other groups. This new kind of segmentation may stifle critical thinking. There may not be enough incentive to test one's own ideas, to find new arguments, or to critically reflect on the truthfulness of information.

I am a social and economic liberal D66 politician, and I get suggestions for news articles from websites like The Guardian or Le Monde. My colleague from the right-wing nationalist PVV may well receive URLs from Breitbart.

Pluralism is essential for a healthy, robust democracy. In a polarised society, people live in tightly knit groups, which hardly communicate with each other. In a pluralist society people engage in the free exchange, confrontation, and fusion of ideas.

The concept of pluralism is under pressure. Populist parties declare themselves the representative of The People. In their vision, The People is uniform and homogeneous. There is a dominant cultural norm, dictated from the top down, to which everyone must conform. Whoever refuses gets chewed out. Often, it is about one-dimensional symbolism such as Easter eggs and Christmas trees. There is no place for pluralism in the world of the populists. But when there is no pluralism, there is no democracy. Without pluralism, democracy is nothing more than a simple tribal dispute, instead of the expression of the will of all citizens together.

Voter data

European privacy legislation limits the use of personal data. In the world of ‘big data’, one of the explicit goals of regulation is to prevent restriction of the consumer's choice. Oddly enough, lawmakers do not explicitly aspire to guarantee voters as broad a choice as possible. But in politics, individual choices have consequences for society as a whole.

In 2018, the General Data Protection Regulation (GDPR) comes into effect. We worked on the GDPR for five years. At this moment, we are working on the modernisation of the e-Privacy Directive, which is mainly about the protection of communication. As was the case with the GDPR, companies from certain sectors scream bloody murder. European privacy protection would mean certain death for European industry. According to some corporate Cassandras, entire European industries will move to other continents. That very same death of corporate Europe is also predicted for any measure concerning, say, environmental norms, procurement rules, or employee rights. All those measures are in place, but, as far as I know, the nightmare scenario has never occurred...

There are some corporate sectors, such as publishing and marketing, which have a huge impact on the information supply to citizens. They are the ones who now cry wolf. It is understandable that they are unhappy with stricter rules concerning their activities, but as the potential impact of the use of personal data and ‘big data’ increases, so does their social responsibility.

At the moment, there is not much public debate about the new techniques. Peculiar. Thirty years ago, 'subliminal advertising', as we called it then, was prohibited because people found it unethical to influence people without their knowledge. We need to have a similar debate. What do we think of opaque influencing? Do we need ethical norms? Should such norms apply only to political campaigns, or should we look at this from a broader perspective? In the ‘big data’ debate, we tend to speak in technical or legal terms, while actually the issue is fundamentally ethical, holding far-reaching consequences for the vitality of our democracy.

Such a public debate demands more clarity on the impact of ‘big data’, profiling, targeting, and similar techniques on the individual, her behaviour, and her choices, which determine in what direction society progresses. Which voters are being reached? How sensitive are they to subtle influencing, and what makes them resilient? How do people who are hardly reached compare to the others? How do voters and non-voters compare? Is the voter truly predictable? Can we identify or influence the floating voter? Do voters actually float between different parties? Or do they especially float within their own party, their own bubble, their own segment? How important are other factors, such as the social context? If the new influencing techniques are indeed as potent as we think, how can polls get it so wrong? What can we learn from advertisers who return to contextual advertising, because targeting turns out to be less effective than they thought?

We need to stay cool-headed. New technologies have a huge impact, but human nature will not suddenly change because of ‘big data’ and its use. Our natural instincts and reflexes will certainly not evolve in a few years; that would take many thousands of years, as even in the 21st century we display more than a few caveman traits, and internalised behaviour is not shed easily. Humans are resilient, but democracy is vulnerable. In the short term, the societal impact is large. This gives us every reason to reflect on how to deal with the new reality, and how to uphold our values within it.

The use of personal data, clearly, is not reserved solely for decent political parties. Other persons and organisations, from the Kremlin to Breitbart, can bombard European voters with information and misinformation. But European governments, controlling endless amounts of personal data about their citizens, can also manipulate information, or circulate utter nonsense, to advance their own interests. A random example: the Hungarian government influencing its voters with lies and manipulation about the so-called consultation on asylum seekers.

Beyond voter data

This issue is not only about the personal data of voters, but also about the personal data of political competitors, opponents, and critics, which are increasingly being exploited. Recently, we saw efforts by external parties to influence the results of the 2017 French elections: a large-scale hack of the Emmanuel Macron campaign, and the spread of false information, evidently originating from the Kremlin and the American alt-right, meant to discredit Macron's candidacy.

The American elections, too, showed the shady game of hacking, leaking, and manipulating. The issue of the Hillary Clinton emails will undoubtedly occupy our minds for years. Who knows how the elections would have turned out without this affair?

Other democratic pillars can get corrupted as well by the misuse of data. Critical voices, opposition, and checks and balances are democracy's oxygen. Democracy is in acute jeopardy when data are employed to attack, undermine, discredit, blackmail, or persecute journalists, judges, lawyers, NGOs, whistleblowers, and opposition parties.

In Europe, we tend to shrug our shoulders at these dangers. "Oh well, we'll see, such things occur only in banana republics, not right here". Of course, this trust in our democratic rule of law is wonderful. But if we treat our rule of law this neglectfully, we will lose it eventually.

Within the European Union, we currently see this happening in Poland and Hungary. The governments of both nations ruthlessly attack independent judges, critical media, and inconvenient NGOs. They do so with quasi-lawful means: under the banner of transparency, they force NGOs to register, misusing laws against money laundering and terrorism financing. Or they release compromising information about judges or politicians at strategic moments.

But critical voices struggle in other member states as well. Lawyers are being monitored, even without a legal basis. In the years after 9/11, we created endless new powers for intelligence services, police, and justice departments to spy on citizens, even without suspicion and without the signature of a judge. The companies to which we unwittingly surrender our personal data, in exchange for services, are forced to hand over all information to the government, or forced to build in backdoors. Governments hack computers in other countries. It usually starts with unlawful practices, but soon enough laws are put in place to legalise those practices. The magic word 'terrorism' silences any critique of such legislation.

But when politicians, journalists, NGOs, whistleblowers, lawyers, and many others cannot perform their tasks freely and without worry, our democracy withers. Not only must they be able to operate without someone keeping an eye on them; they have to know that nobody is in fact watching them. The mere possibility of being watched results in a chilling effect.

For this principled reason, I have contested a French mass surveillance law before the French Conseil d'État. Since, as a Member of the European Parliament, I spend four days a month on French soil (in Strasbourg), I could potentially be a target of the French eavesdropping programme. This is not entirely imaginary, as I am not only a politician but also a vocal critic of certain French anti-terror measures. It is not that I actually worry about being spied on; it is the fact that I might be spied on. Luckily, I am not easily startled, but I can imagine that many politicians are more vulnerable. That is a risk for democracy.

I do not rule out the possibility of a ruling by the European Court of Human Rights on my case. In that event, it will create jurisprudence valid in the entire EU (and in the geographical area covered by the Council of Europe).

But, of course, whether politicians, NGOs, journalists, and others can do their jobs fearlessly, and fulfil their watchdog role, should not depend on the actions of one obstinate individual.

It is my personal, deep conviction that the biggest threat to our democracy is the fact that we have enabled the powerful to access, with almost no limitations, the personal data of those who should control those very same powerful entities.

What can we do?

Some propose new forms of democracy in which universal suffrage is weakened or even abolished. In his book ‘Against elections: the case for democracy’, David Van Reybrouck proposes appointing representatives by lot, and in his book ‘Against democracy’, Jason Brennan wants to give the elite more votes than the lower classes, on the presumption that people with more education or development make better choices. Others want to replace representative democracy with direct democracy.

I oppose those ideas. Universal suffrage and the representative democracy are great achievements, which have led to enormous progress in society.

First of all, we have to make sure our children grow up to be critical, independent thinkers. Think differently, deviate, provoke: this must be encouraged instead of condemned. A democracy needs non-conformists.

We must teach our children to contextualise information and to compare sources.

The counterpart of ‘big data’ must be ‘big transparency’. We need not just open administration, but also insight into the techniques of influencing.

The regulation and limitation of the use of personal data, as I hope to have argued effectively, is not a game played by out-of-touch privacy activists. It is essential for democracy. We need safeguards, not only to ensure that people really are free in their choices, but also to protect the necessary checks and balances. As such, I plead for a rigorous application of the GDPR, and in the European Parliament, I will work for a firm e-Privacy Directive.

And yes, perhaps we should examine whether the rules for political campaigning are still up-to-date. In most countries, those rules cover a cap on campaign expenditures, a prohibition of campaigning or polling on the day before election day, or a ban on publishing information that may influence the election results, such as the leaked e-mails in France. But these rules have little impact on the use of personal data to subtly influence elections.

Last year, the European Parliament supported my proposal for a mechanism to guard democracy, the rule of law, and fundamental rights in Europe.2

On this day (editor’s note: 9 May, Europe Day) of European democracy, I plead for equal, high norms in Europe. Recent years have shown that national elections are European elections. It is crucial that we can trust that all elections in EU member states are open, free, and honest, free of improper influencing.

Over the last sixty years, the European Union has developed into a world leader in democracy and freedom. If we start a public debate, Europe can remain a world leader.

Footnotes

1. ‘Pillars’ here refers to societal cleavages along ideological or religious lines.

2. The report I refer to is a legislative initiative of the European Parliament. I was the initiator and the rapporteur. This is a proposal to guard democracy, the rule of law, and fundamental rights in the EU. The Commission, at first, did not want to proceed with the initiative. Recently, however, the Commission has announced a legislative proposal for such a mechanism. I suspect this proposal will look quite different from Parliament’s. But the fact that there will be a mechanism is what matters most. The realisation that the EU is a community of values, and not just on paper, is spreading quickly. The URL to the proposal’s text is added below. It was approved in the EP in October 2016, with 404 votes in favour and 171 against. Source (last accessed 15 January 2018): http://www.europarl.europa.eu/sides/getDoc.do?pubRef=-%2f%2fEP%2f%2fNONSGML%2bREPORT%2bA8-2016-0283%2b0%2bDOC%2bWORD%2bV0%2f%2fEN


The role of digital marketing in political campaigns

This paper is part of ‘A Manchurian candidate or just a dark horse? Towards the next generation of political micro-targeting research’, a Special issue of the Internet Policy Review.

Introduction

Political campaigns in the United States have employed digital technologies for more than a decade, developing increasingly sophisticated tools and techniques during each election cycle, as “computational politics” have become standard operating procedure (Tufecki, 2014; Kreiss, 2016). However, the most recent election marked a critical turning point, as candidates, political action committees, and other interest groups were able to take advantage of significant breakthroughs in data-driven marketing techniques, such as cross-device targeting, developed since the previous presidential election (“Bernie Sanders”, 2016; Edelman Digital, 2016). Electoral politics has now become fully integrated into a growing, global commercial digital media and marketing ecosystem that has already transformed how corporations market their products and influence consumers (Chahal, 2013; LiveRamp, 2015; Rubinstein, 2014; Schuster, 2015). The strategies, technologies, and tools of digital political marketing are more complex and far-reaching than anything we have seen before, with further innovations already underway (WARC, 2017). But because most commercial and political digital operations take place below the radar, they are not fully understood by the public.1

In the following pages, we briefly describe the growth and maturity of digital marketing, highlighting its basic features, key players, and major practices. We then document how data-driven digital marketing has moved into the centre of American political operations, along with a growing infrastructure of specialised firms, services, technologies and software systems. We identify the prevailing digital strategies, tactics, and techniques of today’s political operations, explaining how they were employed during the most recent US election cycle. Finally, we explore the implications of their use for democratic discourse and governance, discussing several recent policy developments aimed at increasing transparency and accountability in digital politics.

Our research for this paper draws from our extensive experience tracking the growth of digital marketing over the past two decades in the United States and abroad, monitoring and analysing key technological developments, major trends, practices and players, and assessing the impact of these systems in areas such as health, financial services, retail, and youth (Chester, 2007; Montgomery, 2007; Montgomery, Chester, & Kopp, 2017). During the 2016 US presidential election, we monitored commercial digital advertising and data use by candidates, parties and special interest groups across the political spectrum. We collected examples of these ads, along with technical and market impact information from the developers of the applications. We also reviewed trade journals, research reports, and other industry documents, and attended conferences that were focused on digital technologies and politics. In the process, we identified all of the major providers of political digital data targeting applications (e.g., Google, Facebook, data clouds, ad agencies) and analysed all their key materials and case studies related to their 2016 operations. The source for much of this work was our ongoing gathering and analysis of cross-sectional commercial digital marketing practices worldwide.

Marriage of politics and commerce

Since the mid-20th century, advertising has been an increasingly powerful and pervasive presence in US political campaigns, as a growing cadre of ad agencies, public relations firms, and consultants perfected the use of opinion polls, focus groups, and psychographics to reach and influence voters through radio, television, direct mail, and other media outlets (A. Jamieson, 2016; K. H. Jamieson, 1996; Sabato, 1981). With the rise of the internet, campaign operatives began to harness digital technologies and tools to mobilize voter turnout, engage young people, raise money, and support grassroots ground operations (Karpf, 2016; Kreiss, 2016; Tufecki, 2014). Both major political parties in the United States developed large, sophisticated data and digital operations (Kreiss, 2016).

Many of the digital strategies, tools, and techniques employed in the 2016 election were initially developed, deployed, tested, and refined by the commercial sector (Tufecki, 2014). Since its origins in the mid-1990s, digital marketing has operated with a core business model that relies on continuous data collection and monitoring of individual online behaviour patterns (Montgomery, 2011). This system emerged in the United States amid a political culture of minimal government interference, and within a prevailing laissez-faire ethos regarding the internet and new technologies (Barlow, 1996). In the earliest days of the “dot-com boom”, a strong political alliance was forged between the digital media companies and their partners in the advertising and media business, enabling the nascent industry to effectively ward off any attempts to restrain its business operations through privacy regulation or other public policies (Solon & Siddiqui, 2017). As a consequence, the advertising industry played a central role in shaping the operations of platforms and applications in the digital media ecosystem. Digital marketing is now well established and thriving, with expenditures reaching nearly $72.5bn in 2016 for the US alone, and worldwide spending predicted to reach more than $223bn this year (eMarketer, 2017; IAB, n.d.-d).

Ongoing innovations over the years have increased the capacity of data and digital marketing applications. Data collection, analysis, and targeting were further woven into the daily lives of consumers with the rise of social media platforms and mobile devices. Because of the unique role that they play in users’ lives, these platforms are able to sweep up enormous amounts of information, including not only what users post about themselves, but also what is collected from them throughout their daily activities (Smith, 2014). A growing arsenal of software and analytic tools has enhanced the ability of digital media companies and their advertisers to glean valuable insights from the oceans of data they generate (Smith, 2014). Predictive analytics introduced an expanded set of tools for scoring, rating, and categorising individuals, based on an increasingly granular set of behavioural, demographic, and psychographic data (“What is Predictive Intelligence”, 2017). US digital marketers have helped popularise and spur the successful adoption of digital advertising platforms and applications in nearly every geographical location with an internet connection or a link to a mobile device (IAB, n.d.-c). Google, Facebook, and other major players in the digital marketing industry have also developed a global research infrastructure to allow them, and especially their major advertising clients, to make continuous improvements in reaching and influencing the public, and to measure with increasing accuracy the success of their efforts (Facebook IQ, n.d.-a). These developments have created what some observers have called the “surveillance economy” (Singer, 2012).

The growth of data-driven political marketing

Though political campaigns have employed micro-targeting techniques—which use an array of personalised and other data sets and marketing applications to influence the actions of individuals—during the last several election cycles, recent technological innovations and industry advances have created a much more robust system than what was in place in 2012 (IAB, n.d.-b; Rubinstein, 2014). For years, political campaigns have been able to combine public voter files with commercial information from data brokers, to develop detailed and comprehensive dossiers on American voters (Rubinstein, 2014). With recent advances in the advertising technology and data industries, they can now take advantage of a growing infrastructure of specialty firms offering more extensive resources for data mining and targeting voters. Among the new entities are data marketing clouds. Developed by well-known companies such as Adobe, Oracle, Salesforce, Nielsen, and IBM, these clouds sell political data along with an exhaustive amount of detailed consumer information for each potential target, including, for example, credit card use, personal interests, consumption patterns, and TV viewing patterns (Salesforce DMP, 2017).

Some of these massive cloud services also operate what has become a new and essential component for contemporary digital targeting—the data management platform (DMP) (Chavez, 2017). DMPs provide marketers with “centralized control of all of their audience and campaign data” (BlueKai, 2011). They do this by collecting and analysing data about individuals from a wide variety of online and offline sources, including first-party data from a customer’s own record, such as the use of a supermarket loyalty card, or their activities captured on a website, mobile phone, or wearable device; second-party data, information collected about a person by another company, such as an online publisher, and sold to others; and third-party data drawn from thousands of sources, comprising demographic, financial, and other data-broker information, including race, ethnicity, and presence of children (O’Hara, 2016). All of this information can be matched to create highly granular “target audience segments” and to identify and “activate” individuals “across third party ad networks and exchanges”. DMPs are quickly becoming a critical tool for political campaigns (Bennett, 2016; Kaye, 2016, July; Regan, J., 2016).
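
To make these mechanics concrete, the following minimal Python sketch (our illustration, not any vendor's actual code; the identifiers, field names, and match key are all hypothetical) shows how a DMP-style pipeline might join first-, second-, and third-party records on a shared identifier and derive an audience segment for activation:

# Hypothetical sketch of DMP-style audience building; identifiers,
# field names, and the match key are invented for illustration.

first_party = {   # e.g., campaign website activity and sign-ups
    "u1": {"visited_donate_page": True, "zip": "20002"},
    "u2": {"visited_donate_page": False, "zip": "20003"},
}
second_party = {  # e.g., data bought from an online publisher
    "u1": {"reads_politics_section": True},
}
third_party = {   # e.g., data-broker demographics
    "u1": {"household_income": "75-100k", "has_children": True},
    "u2": {"household_income": "25-50k", "has_children": False},
}

def merged_profile(uid):
    """Combine every source available for one shared identifier."""
    profile = {}
    for source in (first_party, second_party, third_party):
        profile.update(source.get(uid, {}))
    return profile

def build_segment(uids, rule):
    """Return identifiers whose merged profile satisfies a targeting rule."""
    return [u for u in uids if rule(merged_profile(u))]

# "Activate" a segment: donation-page visitors who are also parents.
segment = build_segment(
    first_party,
    lambda p: p.get("visited_donate_page") and p.get("has_children"),
)
print(segment)  # ['u1']

Real platforms perform this matching deterministically and probabilistically across hundreds of millions of profiles, but the underlying join-and-filter logic is the same.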

Facebook and Google now play a central role in political operations, offering a full spectrum of commercial digital marketing tools and techniques, along with specialised ad “products” designed for political use (Bond, 2017). Not surprisingly, these companies have also made generating revenues from political campaigns an important “vertical” category within their ad business (Facebook, n.d.-d; Facebook IQ, n.d.-b; Stanford, 2016). Facebook’s role in the 2016 election was particularly important. With users required to give their real names when they sign up as members, Facebook has created a powerful “identity-based” targeting paradigm, enabling political campaigns to access its more than 162 million US users and to target them individually by age, gender, congressional district, and interests (Facebook, n.d.-b). Its online guide for political campaign marketing urges political campaigns to use all the social media platform tools it makes available to advertisers—including through Instagram and other properties—in order to track individuals, capture their data through various “lead-generation” tactics, and target them by uploading voter files and other data (Facebook, n.d.-a-c-f). The company also employs teams of internal staff aligned with each of the major political parties to provide technical assistance and other services to candidates and their campaigns (Chester, 2017; Kreiss & Mcgregor, 2017). Google heavily promoted the use of YouTube, as well as its other digital marketing assets, during the 2016 US election, reaching out to both major political parties (YouTube, 2017).

The growth and increasing sophistication of the digital marketplace has enhanced the capacities of political campaigns to identify, reach, and interact with individual voters. Below we identify seven key techniques that are emblematic of this new digital political marketing system, providing brief illustrations of how they were employed during the 2016 election.

Cross-device targeting

Getting a complete picture of a person’s persistent “identity” through an “identity-graph” has become a key strategy for successfully reaching consumers across their “omnichannel” experience (use of mobile, TV, streaming devices, etc.) (Winterberry Group, 2016). “Cross-device recognition” allows marketers to determine if the same person who is on a social network is also using a personal computer and later watching video on a mobile phone. Through data “onboarding,” a customer record that may contain a physical and email address is linked through various matching processes, associating it with what is believed to be that individual’s online identification—cookies, IP addresses, and other persistent identifiers (Levine, 2016). Cross-device targeting is now a standard procedure for political initiatives and other campaigns. Voter files are uploaded into the onboarding process, enabling the campaigns to find their targets on mobile devices and at specific times when they may be more receptive to a message (Kaye, 2016, April; L2, n.d.-b). Such granularity of information also enables a more tailored advertisement—so-called “dynamic creative”—which can be changed over time to “deliver very specific messaging” to individuals (Schuster, 2015). Leading cross-device marketing company Drawbridge offered a suite of election services in 2016 that provided campaigns a number of ways to impact voters, including through “Voter-Centric Cross Device Storytelling”, “Political Influencer Identification”, and via “Real-Time Voter Attribution Measurement” (Drawbridge, n.d.).
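
A simplified sketch may help illustrate the onboarding step described above. In this hypothetical Python fragment (ours; the hashing scheme and all field names are illustrative assumptions, not any onboarder's actual interface), an offline voter-file record is matched to online identifiers through a hashed email address:

import hashlib

# Hypothetical sketch of data "onboarding": an offline record is tied,
# via a hashed email address, to online identifiers in an identity graph.

def match_key(email):
    """Normalise and hash an email address to form a join key."""
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

# Offline voter-file record (invented).
voter_record = {"name": "J. Doe", "email": "jdoe@example.com"}

# Identity graph: join key -> identifiers observed across devices.
identity_graph = {
    match_key("jdoe@example.com"): {
        "cookie_ids": ["c-81f3"],
        "mobile_ad_ids": ["a-44d0"],
        "ip_addresses": ["203.0.113.7"],
    },
}

# After onboarding, the voter can be reached on every matched device.
online_ids = identity_graph.get(match_key(voter_record["email"]), {})
print(online_ids)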

Programmatic advertising

Programmatic advertising refers to new automated forms of ad buying and placement on digital media, using computer programmes and algorithmic processes to find and target a customer wherever she goes. The process can also involve real-time “auctions” that occur in milliseconds in order to “show an ad to a specific customer, in a specific context” (Allen, 2016). The use of programmatic advertising was one of the major changes in political campaign digital operations between 2012 and 2016—“the first time in American History”, according to one ad company, “that such precise targeting has ever been made available at such great scale” (Briscoe, 2017; Kaye, 2015). Programmatic advertising has itself grown in its capabilities to reach individuals, taking advantage of new sources of data to reach them on all of their devices (Regan, T., 2016). In 2016, for example, global ad giant WPP’s Xaxis system—“the world’s largest programmatic and technology platform”—launched “Xaxis Politics”. Capable of “reaching US voters across all digital channels”, the system is said to “segment audiences by hundreds of hot button issues as well as by party affiliation”, including via “real-time campaigns tied to specific real-world events” (Xaxis, 2015). Candidates were able to use the services of a growing list of companies, including Google, Rubicon, AOL, PubMatic, Appnexus and Criteo, that offered programmatic advertising platforms (“Political Campaigns”, 2016; Yatrakis, 2016).
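
The auction mechanics can be conveyed with a short Python sketch (a simplified model under our own assumptions, not any exchange's actual protocol); here two hypothetical campaign bidders compete for a single impression in a second-price auction:

# Hypothetical sketch of a real-time bidding auction; real exchanges run
# variants of this, at enormous scale, in milliseconds per impression.

def run_auction(bid_request, bidders):
    """Collect bids for one impression and apply a second-price rule."""
    bids = sorted(
        ((b["name"], b["bid"](bid_request)) for b in bidders),
        key=lambda item: item[1],
        reverse=True,
    )
    bids = [b for b in bids if b[1] > 0]
    if not bids:
        return None  # no ad served
    winner, top = bids[0]
    # Winner pays just above the runner-up's bid.
    price = bids[1][1] + 0.01 if len(bids) > 1 else top
    return winner, round(price, 2)

# One impression: a specific user segment, in a specific context.
request = {"user_segment": "persuadable_suburban", "site": "local_news"}

bidders = [
    {"name": "campaign_a",
     "bid": lambda r: 4.50 if r["user_segment"] == "persuadable_suburban" else 0.0},
    {"name": "campaign_b", "bid": lambda r: 2.00},
]
print(run_auction(request, bidders))  # ('campaign_a', 2.01)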

Lookalike modelling

The use of big data analytics enables marketers to acquire information about an individual without directly observing behaviour or obtaining consent. They do this by “cloning” their “most valuable customers” in order to identify and target other prospective individuals for marketing purposes (LiveRamp, 2015). For example, Stirista (n.d.), a digital marketing firm that also serves the political world, offers lookalike modelling to identify people who are potential supporters and voters. The company claims it has matched 155 million voters to their “email addresses, online cookies, and social handles”, as well as “culture, religion, interests, political positions and hundreds of other data points to create rich, detailed voter profiles”. Facebook offers a range of lookalike modelling tools through its “Lookalike Audiences” ad platform. For example, Brad Parscale, the Trump campaign’s digital director, used the Lookalike Audiences ad tool to “expand” the number of people the campaign could target (Green & Issenberg, 2016). Facebook’s “Custom Audiences” product, similarly, enables marketers to upload their own data files so they can be matched and then targeted to Facebook users (Facebook, n.d.-e).
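
The statistical intuition behind "cloning" a seed audience can be shown in a small Python sketch (ours; the attribute vectors and the choice of cosine similarity are illustrative assumptions, as production systems use far richer features and models). Prospects are scored by their similarity to the centroid of a seed audience of known supporters:

import math

# Hypothetical sketch of lookalike modelling; vectors are invented.

def cosine(a, b):
    """Cosine similarity between two attribute vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Each vector encodes behavioural/demographic attributes of one person.
seed_supporters = [[1.0, 0.8, 0.1], [0.9, 0.9, 0.2]]  # known supporters
centroid = [sum(col) / len(col) for col in zip(*seed_supporters)]

prospects = {"p1": [0.95, 0.85, 0.15], "p2": [0.10, 0.20, 0.90]}
scores = {uid: round(cosine(vec, centroid), 3) for uid, vec in prospects.items()}
print(scores)  # p1 scores near 1.0 (targeted); p2 scores low (skipped)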

Geolocation targeting

Mobile devices continually send signals that enable advertisers (and others) to take advantage of an individual’s location—through the phone’s GPS (global positioning system), Wi-Fi, and Bluetooth communications. All of this can be done with increasing speed and efficiency. Through a host of new location-targeting technologies, consumers can now be identified and targeted wherever they go, while driving a car, pulling into a mall, or shopping in a store (Son, Kim, & Shmatikov, 2016). A complex and growing infrastructure of geolocation-based data-marketing services has emerged, with specialised mobile data firms, machine-learning technologies, measurement companies, and new technical standards to facilitate on-the-go targeting (Warrington, 2015). The use of mobile geo-targeting techniques played a central role in the 2016 election cycle, with a growing number of specialists offering their services to campaign operatives. For example, L2 (n.d.-a) made its voter file, along with HaystaqDNA modelling data, available for mobile device targeting, offering granular profile data on voters based on their interest in such contested topics as gun laws, gay marriage, voter fraud, and school choice, among others. Advance Publications (the parent company of Condé Nast) worked with campaigns through its election advertising unit to append geolocation, profile data, and buying behaviour “to sculpt a very specific voter profile and target down to [a] few hundred readers in a given geo location” (Ellwanger, 2016).
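
At its core, geofence targeting reduces to a distance test between a device's reported coordinates and a target location. The following Python sketch (our illustration; the coordinates and radius are hypothetical, and production systems add dwell time, frequency caps, and more) makes that decision with the haversine formula:

import math

# Hypothetical sketch of a geofence test; coordinates are invented.

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def in_geofence(device, fence):
    """True if the device's reported position falls inside the fence."""
    d = haversine_km(device["lat"], device["lon"], fence["lat"], fence["lon"])
    return d <= fence["radius_km"]

fence = {"lat": 38.8895, "lon": -77.0353, "radius_km": 1.0}  # target area
device = {"lat": 38.8893, "lon": -77.0340}                   # reported position
print(in_geofence(device, fence))  # True -> serve the location-tied ad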

Online video advertising

Digital video, via mobile and other devices, is perceived as a highly effective way of delivering emotional content on behalf of brands and marketing campaigns (IAB, n.d.-a). There are a variety of online video ad formats that provide both short- and long-form content, and that work well for political and other marketing efforts. Progressive political campaign company Revolution Messaging, which worked for the Sanders campaign, developed what it calls “smart cookies” that it says take video and other ad placement “to the next level, delivering precision and accuracy” (Revolution Messaging, n.d.). Google’s YouTube has become a key platform for political ads, with the company claiming that today, voters make their political decisions not in “living rooms” in front of a television but in what it calls “micromoments” as people watch mobile video (DoubleClick, n.d.). According to the company’s political ad services research, mobile devices were used in nearly 60 percent of election-related searches during 2016. Content producers (which it calls “Creators”) on YouTube were able to seize on these election micro-moments to influence the political opinions of potential voters aged 18-49 (“Letter from the Guest Editors,” 2016).

Targeted TV advertising

Television advertising, which remains a linchpin of political campaign strategy, is undergoing a major transformation, as digital technologies and “addressable” set-top boxes have changed cable and broadcast TV into powerful micro-targeting machines capable of delivering the same kinds of granular, personalised advertising messages to individual voters that have become the hallmark of online marketing. Political campaigns are at the forefront of using set-top box “second-to-second viewing data”, amplified with other data sources, such as “demographic and cross-platform data from a multitude of sources” via information brokers, to deliver more precise ads (Fourthwall Media, n.d.; Leahey, 2016; NCC Media, n.d.). NCC Media, the US cable TV ad platform owned by Comcast, Cox, and Spectrum, provided campaigns the ability to target potential voters via the integration of its set-top box viewing information with voter and other data from Experian and others (Miller, 2017). Deals between TV viewing-data companies and organisations representing both Republican- and Democratic-leaning groups brought the “targeting capabilities of online advertising to TV ad buys…bringing what was once accessible only to large state-wide or national campaigns to smaller, down-ballot candidates”, explained Advertising Age (Delgado, 2016).

Psychographic, neuromarketing, and emotion-based targeting

Psychographics, mood measurement, and emotional testing have been used by advertisers for many decades, and have also been a core strategy in political campaign advertising (Key, 1974; Packard, 2007; Schiller, 1975). The digital advertising industry has developed these tools even further, taking advantage of advances in neuroscience, cognitive computing, data analytics, behavioural tracking, and other recent developments (Crupi, 2015). Granular-based messages that trigger a range of emotional and subconscious responses, to better “engage” with individuals and deepen relationships with commercial brands, have become part of the DNA of digital advertising (McEleny, 2016). Facebook (2015), Nielsen, and most leading brands use “neuromarketing” services worldwide, which utilise neuroscience tools to determine the emotional impact of advertising messages. There is a growing field, recently promoted by Google, of “Emotion Analytics” that takes advantage of “new types of data and new tracking methods” to help advertisers “understand the impact of campaigns—and their individual assets—on an emotional level…” (Kelshaw, 2017). Scholars have shown that “psychological targeting” in advertising makes it possible to influence large groups of people by “tailoring persuasive appeals to the psychological needs” of specific audiences (Matz et al., 2017). Data company Experian Marketing Services offered political campaigns data that wove together “demographic, psychographic and attitudinal attributes” to target voters digitally. Experian claims its data enables campaigns to examine a target’s “heart and mind” via attributes related to their “political persona” as well as “attitudes, expectations, behaviours, lifestyles, purchase habits and media preferences” (Experian, 2011, 2015). One of the most well publicised and controversial players in the 2016 election was Cambridge Analytica (CA), a prominent data analytics and behavioural communications firm that claimed to be a key component in Donald Trump’s victorious campaign. The company used a “five-factor personality model” aimed at determining “the personality of every single adult in the United States of America” (Albright, 2016; Kranish, 2016). Known as OCEAN, the model rated individuals based on five key traits: openness, conscientiousness, extroversion, agreeableness, and neuroticism. Drawing from digital data, voter history, and marketing resources supplied by leading companies, including Acxiom, Experian, Nielsen, GOP firm Data Trust, Aristotle, L2, Infogroup, and Facebook, CA was able to develop an “internal database with thousands of data points per person”. Its research also identified key segments that were considered “persuadable”, and shaped the advertising content placed “across multiple digital channels” (with the most effective ads also appearing on television) (Advertising Research Foundation, 2017; Nix, 2016). The strategy was based on developing messages that were tailored to the vulnerabilities of individual voters (Nix, 2016; Schwartz, 2017). CA has become the subject of much scrutiny and debate, and has itself made conflicting claims, with critics raising concerns over its techniques and expressing scepticism about the extent of its impact (Confessore & Hakim, 2017; Karpf, 2017). However, the company’s work was sufficiently convincing to the leading advertising industry research organisation, the Advertising Research Foundation (2017, March), that it honoured the firm with a “Gold” award in 2017 under its “Big Data” category.
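
The targeting logic attributed to OCEAN-style psychographics can be illustrated schematically. In the Python sketch below (entirely our own construction; the trait scores and message framings are invented for illustration and do not reproduce CA's actual models), an ad variant is selected to match an individual's dominant personality trait:

# Hypothetical sketch of OCEAN-style message matching; trait scores and
# message framings are invented and do not reproduce any firm's model.

TRAITS = ["openness", "conscientiousness", "extroversion",
          "agreeableness", "neuroticism"]

MESSAGE_BY_TRAIT = {
    "openness": "novelty-framed appeal",
    "conscientiousness": "duty-framed appeal",
    "extroversion": "social-proof appeal",
    "agreeableness": "community-framed appeal",
    "neuroticism": "fear-framed appeal",
}

def pick_message(scores):
    """Select the ad variant matching a person's dominant trait."""
    dominant = max(TRAITS, key=lambda t: scores.get(t, 0.0))
    return dominant, MESSAGE_BY_TRAIT[dominant]

voter = {"openness": 0.3, "conscientiousness": 0.5, "extroversion": 0.2,
         "agreeableness": 0.4, "neuroticism": 0.8}
print(pick_message(voter))  # ('neuroticism', 'fear-framed appeal')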

Discussion

The above description provides only a brief overview of the data-driven marketing system that is already widely in use by candidate and issue campaigns in the United States. The increasingly central role of commercial digital marketing in contemporary political campaigns is reshaping modern-day politics in fundamental ways, altering relationships among candidates, parties, voters, and the media. We acknowledge that digital technologies have made important positive contributions to the vibrancy of the political sphere, including greatly expanding sources of news and information, significantly increasing opportunities for citizen participation, and empowering people from diverse backgrounds to form coalitions and influence policy. The same tools developed for digital marketing have also helped political campaigns substantially improve voter engagement, enhance their capacities for “small-donor” fundraising, and more efficiently generate turnout (Moonshadow Mobile, n.d.; Owen, 2017). However, many of the techniques we address in this paper raise serious concerns—over privacy, discrimination, manipulation, and lack of transparency.

Several recent controversies over the 2016 election have triggered greater public scrutiny over some of the practices that have become standard operating procedure in the digital media and marketing ecosystem. For example, “fake news” has a direct relationship to programmatic advertising, the automated system of “intelligent” buying and selling of individuals and groups (Weissbrot, 2016). These impersonal algorithmic machines are focused primarily on finding and targeting individual consumers wherever they are, often with little regard for the content where the ads may appear (Maheshwari & Isaac, 2016). As a consequence, in the middle of the 2016 election, many companies found themselves with ads placed on “sites featuring pornography, pirated content, fake news, videos supporting terrorists, or outlets whose traffic is artificially generated by computer programs”, noted the Wall Street Journal (Nicas, 2016; Vranica, 2017). As a major US publisher explained in the trade publication Advertising Age,

Programmatic’s golden promise was allowing advertisers to efficiently buy targeted, quality, ad placements at the best price, and publishers to sell available space to the highest bidders…. What was supposed to be a tech-driven quality guarantee became, in some instances, a “race to the bottom” to make as much money as possible across a complex daisy chain of partners. With billions of impressions bought and sold every month, it is impossible to keep track of where ads appear, so “fake news” sites proliferated. Shady publishers can put up new sites every day, so even if an exchange or bidding platform identifies one site as suspect, another can spring up (Clark, 2017).

Criticism from news organisations and civil society groups, along with a major backlash by leading global advertisers, led to several initiatives to place safeguards on these practices (McDermott, 2017; Minsker, 2017). For example, in an effort to ensure “brand safety”, leading global advertisers and trade associations demanded changes in how Google, Facebook and others conduct their data and advertising technology operations. As a consequence, new measures have been introduced to enable companies to more closely monitor and control where their ads are placed (Association of National Advertisers, 2017; Benes, 2017; IPA, 2017; Johnson, 2017; Liyakasa, 2017; Marshall, 2017; Timmers, 2015).

The Trump campaign relied heavily on Facebook’s digital marketing system to identify specific voters who were not supporters of Trump in the first place, and to target them with psychographic messaging designed to discourage them from voting (Green & Issenberg, 2016). Campaign operatives openly referred to such efforts as “voter suppression” aimed at three targeted groups: “idealistic white liberals, young women and African Americans”. The operations used standard Facebook advertising tools, including “custom audiences” and so-called “dark posts”—“nonpublic paid posts shown only to the Facebook users that Trump chose” with personalised negative messages (Green & Issenberg, 2016). Such tactics also took advantage of commonplace digital practices that target individual consumers based on factors such as race, ethnicity, and socio-economic status (Google, 2017; Martinez, 2016; Nielsen, 2016). Civil rights groups have had some success in getting companies to change their practices. However, for the most part, the digital marketing industry has not been held sufficiently accountable for its use of race and ethnicity in data-marketing products, and there is a need for much broader, industry-wide policies.

Conclusion

Contemporary digital marketing practices have raised serious issues about consumer privacy over the years (Schwartz & Solove, 2011; Solove & Hartzog, 2014). When applied to the political arena, where political information about individuals is only one of thousands of highly sensitive data points collected and analysed by the modern machinery of data analytics and targeting, the risks are even greater. Yet, in the United States, very little has been done in terms of public policy to provide any significant protections. In contrast to the European Union, where privacy is encoded in law as a fundamental right, privacy regulation in the US is much weaker (Bennett, 1997; Solove & Hartzog, 2014; U.S. Senate Committee on Commerce, Science, and Transportation, 2013). The US is one of the only developed countries without a general privacy law. As a consequence, except in specific areas, such as children’s privacy, consumers in the US enjoy no significant data protection in the commercial marketplace. In the political arena, there is even less protection for US citizens. As legal scholar Ira S. Rubinstein (2014) explains, “the collection, use and transfer of voter data face almost no regulation”. The First Amendment plays a crucial role in this regard, allowing the use of political data as a protected form of speech (Persily, 2016).

The political fallout over how Russian operatives used Facebook, Twitter, and other sites in the 2016 presidential campaign has triggered unprecedented focus on the data and marketing operations of these and other powerful digital media companies. Lawmakers, civil society, and many in the press are calling for new laws and regulations to ensure transparency and accountability for online political ads (“McCain, Klobuchar & Warner Introduce Legislation”, 2017). The U.S. Federal Election Commission, which regulates political advertising, has asked for public comments on whether it should develop new disclosure rules for online ads (Glaser, 2017). In an effort to head off regulation, both Facebook and Twitter have announced their own internal policy initiatives designed to provide the public with more information, including what organisations or individuals paid for political ads and who the intended targets were. These companies have also promised to establish archives for political advertising, which would be accessible to the public (Falck, 2017; Goldman, 2017; Koltun, 2017). The US online advertising industry trade association is urging Congress not to legislate in this area, but to allow the industry to develop new self-regulatory regimes in order to police itself (IAB, 2017). However, relying on self-regulation is not likely to address the problems raised by these practices and may, in fact, compound them. Industry self-regulatory guidelines are typically written in ways that do not challenge many of the prevailing (and problematic) business practices employed by their own members. Nor do they provide meaningful or effective accountability mechanisms (Center for Digital Democracy, 2013; Gellman & Dixon, 2011; Hoofnagle, 2005). It remains to be seen what the outcome of the current policy debate over digital politics will be, and whether any meaningful safeguards emerge from it.

While any regulation of political speech must meet the legal challenges posed by the First Amendment, limiting how the mining of commercial data can be used in the first place can serve as a critically important new electoral safeguard. Advocacy groups should call for consumer privacy legislation in the US that would place limits on what data can be gathered by the commercial online advertising industry, and how that information can be used. Americans currently have no way to decide for themselves (such as via an opt-in) whether data collected on their finances, health, geo-location, as well as race or ethnicity can be used for digital ad profiling. Certain online advertising practices, such as the use of psychographics and lookalike modelling, also call for rules to ensure they are used fairly.

Without effective interventions, the campaign strategies and practices we have documented in this paper will become increasingly sophisticated in coming elections, most likely with little oversight, transparency, or public accountability. The digital media and marketing industry will continue its research and development efforts, with an intense focus on harnessing the capabilities of new technologies, such as artificial intelligence, virtual reality, and cognitive computing, for advertising purposes. Advertising agencies are already applying some of these advances to the political field (Facebook, 2016; Google, n.d.-a; Havas Cognitive, n.d.). Academic scholars and civil society organisations will need to keep a close watch on all these developments, in order to understand fully how these digital practices operate as a system, and how they are influencing the political process. Only through effective public policies and enforceable best practices can we ensure that digital technology enhances democratic institutions, without undermining their fundamental goals.

References

Advertising Research Foundation. (2017, March 21). Cambridge Analytica receives top honor in the 2017 ARF David Ogilvy Awards. Retrieved from http://www.prnewswire.com/news-releases/cambridge-analytica-receives-top-honor-in-the-2017-arf-david-ogilvy-awards-300426997.html

Advertising Research Foundation. (2017). Cambridge Analytica: Make America number one. Case study. Retrieved from https://thearf.org/2017-arf-david-ogilvy-awards/winners/

Albright, J. (2016, November 11). What’s missing from the Trump election equation? Let’s start with military-grade psyops. Medium. Retrieved from https://medium.com/@d1gi/whats-missing-from-the-trump-election-equation-let-s-start-with-military-grade-psyops-fa22090c8c17

Allen, R. (2016, February 8). What is programmatic marketing? Smart Insights. Retrieved from http://www.smartinsights.com/internet-advertising/internet-advertising-targeting/what-is-programmatic-marketing/

Association of National Advertisers. (2017, March 24). Statement from ANA CEO on Suspending Advertising on YouTube. Retrieved from http://www.ana.net/blogs/show/id/mm-blog-2017-03-statement-from-ana-ceo

Barlow, J. P. (1996, February 8). A declaration of the independence of cyberspace. Electronic Frontier Foundation. Retrieved from https://www.eff.org/cyberspace-independence

Benes, R. (2017, August 29). Ad buyers blast Facebook Audience Network for placing ads on Breitbart. Digiday. Retrieved from https://digiday.com/marketing/ad-buyers-blast-facebook-audience-network-placing-ads-breitbart/

Bennett, C. J. (1997). Convergence revisited: Toward a global policy for the protection of personal data? In P. Agre & M. Rotenberg (Eds.), Technology and privacy: the new landscape (pp. 99–124). Cambridge, MA: MIT Press.

Bennett, C. J. (2016). Voter databases, micro-targeting, and data protection law: Can political parties campaign in Europe as they do in North America? International Data Privacy Law, 6(4), 261-275. doi:10.1093/idpl/ipw021 Retrieved from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2776299

Bernie Sanders: 2016 presidential campaign. (2016). Facebook Business. Retrieved from https://www.facebook.com/business/success/bernie-sanders

BlueKai. (2011). Whitepaper: Data management platforms demystified. Retrieved from http://www.bluekai.com/files/DMP_Demystified_Whitepaper_BlueKai.pdf

Bond, S. (2017, March 14). Google and Facebook build digital ad duopoly. Financial Times. Retrieved from https://www.ft.com/content/30c81d12-08c8-11e7-97d1-5e720a26771b

Briscoe, G. (2017, March 7). How political digital advertising lessons of 2016 applies to 2017. Centro. Retrieved from https://www.centro.net/blog/political-digital-advertising-lessons-2016-applies-2017/

Center for Digital Democracy. (2013, May 29). U.S. online data trade groups spin digital fairy tale to USTR about US consumer privacy prowess—CDD says privacy out of bounds in TTIP. Retrieved from http://www.democraticmedia.org/us-online-data-trade-groups-spin-digital-fairy-tale-ustr-about-us-consumer-privacy-prowess-cdd-say-0

Chahal, G. (2013, May). Election 2016: Marriage of big data, social data will determine the next president. Wired. Retrieved from https://www.wired.com/insights/2013/05/election-2016-marriage-of-big-data-social-data-will-determine-the-next-president/

Chavez, T. (2017, May 17). Krux is now Salesforce DMP. Salesforce Blog. Retrieved from https://www.salesforce.com/blog/2017/05/krux-is-now-salesforce-dmp.html

Chester, J. (2007). Digital destiny: New media and the future of democracy. New York: The New Press.

Chester, J. (2017, January 6). Our next president: Also brought to you by big data and digital advertising. Moyers and Company. Retrieved from http://billmoyers.com/story/our-next-president-also-brought-to-you-by-big-data-and-digital-advertising/

Clark, J. (2017, April 25). Fake news: New name, old problem. Can premium programmatic help? Advertising Age. Retrieved from http://adage.com/article/digitalnext/fake-news-problem-premium-programmatic/308774/

Confessore, N., & Hakim, D. (2017, March 6). Data firm says “secret sauce” aided Trump; many scoff. New York Times. Retrieved from https://www.nytimes.com/2017/03/06/us/politics/cambridge-analytica.html?_r=0

Crupi, A. (2015, May 27). Nielsen buys neuromarketing research company Innerscope. Advertising Age. Retrieved from http://adage.com/article/media/nielsen-buys/298771/

Delgado, M. (2016, April 28). Experian launches audience management platform to make programmatic TV a reality across advertising industry. Experian. Retrieved from http://www.experian.com/blogs/news/2016/04/28/experian-launches-audience-management-platform/

DoubleClick. (n.d.). DoubleClick campaign manager. Retrieved from https://www.doubleclickbygoogle.com/solutions/digital-marketing/campaign-manager/

Drawbridge. (n.d.). Cross-device election playbook. Retrieved from https://drawbridge.com/c/vote

Edelman Digital (2016, April 1). How digital is shaking up presidential campaigns. Retrieved from https://www.edelman.com/post/how-digital-is-shaking-up-presidential-campaigns/

Ellwanger, S. (2016, September 15). Advance Local’s Sutton sees big demand for digital advertising in politics. Retrieved from http://www.beet.tv/2016/09/jeff-sutton.html

eMarketer. (2017, April 12). Worldwide ad spending: The eMarketer forecast for 2017.

Experian. (2015, March). Audience guide. Retrieved from https://www.experian.com/assets/marketing-services/product-sheets/attitudinal-and-psychographic-audiences.pdf

Experian. (2011, December). Political affiliation and beyond. Retrieved from https://www.experian.com/assets/marketing-services/product-sheets/das-political-data-sheet.pdf

Facebook. (2016, June 16). Inside marketing science at Facebook. Retrieved from https://www.facebook.com/notes/facebook-careers/inside-marketing-science-at-facebook/936165389815348/

Facebook. (n.d.-a). Activate. Facebook Elections. Retrieved from https://politics.fb.com/ad-campaigns/activate/

Facebook. (n.d.-b). Advanced strategies for performance marketers. Facebook Business. Retrieved from https://www.facebook.com/business/a/performance-marketing-strategies; https://www.facebook.com/business/help/202297959811696

Facebook. (n.d.-c). Impact. Facebook Elections. Retrieved from https://politics.fb.com/ad-campaigns/impact/

Facebook. (n.d.-d). Mobilize your voters. Facebook Business. Retrieved from https://www.facebook.com/business/a/mobilizevoters

Facebook. (n.d.-e). Toomey for Senate. Facebook Business. Retrieved from https://www.facebook.com/business/success/toomey-for-senate

Facebook. (n.d.-f). Turnout. Facebook Elections. Retrieved from https://politics.fb.com/ad-campaigns/turnout/

Facebook IQ. (n.d.-a). Unlock the insights that matter. Retrieved from https://www.facebook.com/iq

Facebook IQ. (n.d.-b). Vertical insights. Retrieved from https://www.facebook.com/iq/vertical-insights

Falck, B. (2017, October 24). New transparency for ads on Twitter. Twitter Blog. Retrieved from https://blog.twitter.com/official/en_us/topics/product/2017/New-Transparency-For-Ads-on-Twitter.html

Fourthwall Media. (n.d.). Solutions: Analytics firms. Retrieved from http://www.fourthwallmedia.tv/analytics-firms

Gellman, R., & Dixon, P. (2011, October 14). Many failures: A brief history of privacy self-regulation in the United States. World Privacy Forum. Retrieved from http://www.worldprivacyforum.org/wp-content/uploads/2011/10/WPFselfregulationhistory.pdf

Glaser, A. (2017, October 17). Should political ads on Facebook include disclaimers? Slate. Retrieved from http://www.slate.com/articles/technology/future_tense/2017/10/the_fec_wants_your_opinion_on_transparency_for_online_political_ads.html

Goldman, R. (2017, October 27). Update on our advertising transparency and authenticity efforts. Facebook Newsroom. Retrieved from https://newsroom.fb.com/news/2017/10/update-on-our-advertising-transparency-and-authenticity-efforts/

Google. (2017, May 1). Marketing in a multicultural world: 2017 Google's marketing forum. Google Agency Blog. Retrieved from https://agency.googleblog.com/2017/05/marketing-in-multicultural-world-2017.html

Google. (n.d.-a). Google NYC algorithms and optimization. Research at Google. Retrieved from https://research.google.com/teams/nycalg/

Google. (n.d.-b). Insights you want. Data you need. Think with Google. Retrieved from https://www.thinkwithgoogle.com

Green, J., & Issenberg, S. (2016, October 27). Inside the Trump bunker, with days to go. Bloomberg Businessweek. Retrieved from https://www.bloomberg.com/news/articles/2016-10-27/inside-the-trump-bunker-with-12-days-to-go

Havas Cognitive. (n.d.). EagleAi has landed. Retrieved from http://cognitive.havas.com/case-studies/eagle-ai

Hoofnagle, C. (2005, March 4). Privacy self-regulation: A decade of disappointment. Electronic Privacy Information Center. Retrieved from http://epic.org/reports/decadedisappoint.html

IPA. (2017, August). IPA issues direct call to action to Google YouTube and Facebook to clean up safety, measurement and viewability of their online video. Retrieved from http://www.ipa.co.uk/news/ipa-issues-direct-call-to-action-to-google-youtube-and-facebook-to-clean-up-safety,-measurement-and-viewability-of-their-online-video-#.Wa126YqQzQj

IAB. (2017, October 24). IAB President & CEO, Randall Rothenberg testifies before Congress on digital political advertising. Retrieved from https://www.iab.com/news/read-the-testimony-from-randall-rothenberg-president-and-ceo-iab/

IAB. (n.d.-a). The digital video advertising landscape. Retrieved from https://video-guide.iab.com/digital-video-advertising-landscape

IAB. (n.d.-b). Global digital advertising revenue reports. Retrieved from https://www.iab.com/global/

IAB. (n.d.-c). Glossary: Digital media planning & buying. Retrieved from https://www.iab.com/wp-content/uploads/2016/04/Glossary-Formatted.pdf

IAB. (n.d.-d). IAB internet advertising revenue report conducted by PricewaterhouseCoopers (PWC). Retrieved from https://www.iab.com/insights/iab-internet-advertising-revenue-report-conducted-by-pricewaterhousecoopers-pwc-2

Jamieson, A. (2016, April 5). The first Snapchat election: How Bernie and Hillary are targeting the youth vote. The Guardian. Retrieved from https://www.theguardian.com/technology/2016/apr/05/snapchat-election-2016-sanders-clinton-youth-millennial-vote

Jamieson, K. H. (1996). Packaging the presidency: A history and criticism of presidential campaign advertising. New York: Oxford University Press.

Johnson, L. (2017, April 16). How brands and agencies are fighting back against Facebook and Google’s measurement snafus. Adweek. Retrieved from http://www.adweek.com/digital/how-brands-and-agencies-are-fighting-back-against-facebooks-and-googles-measurement-snafus/

Karpf, D. (2016, October 31). Preparing for the campaign tech bullshit season. Civicist. Retrieved from https://civichall.org/civicist/preparing-campaign-tech-bullshit-season/

Karpf, D. (2017, February 1). Will the real psychometric targeters please stand up? Civicist. Retrieved from https://civichall.org/civicist/will-the-real-psychometric-targeters-please-stand-up/

Kaye, K. (2015, June 3). Programmatic buying coming to the political arena in 2016. Advertising Age. Retrieved from http://adage.com/article/digital/programmatic-buying-political-arena-2016/298810/

Kaye, K. (2016, April 15). RNC'S voter data provider teams up with Google, Facebook and other ad firms. Advertising Age. Retrieved from http://adage.com/article/campaign-trail/rnc-voter-data-provider-joins-ad-firms-including-facebook/303534/

Kaye, K. (2016, July 13). Democrats' data platform opens access to smaller campaigns. Advertising Age. Retrieved from http://adage.com/article/campaign-trail/democratic-data-platform-opens-access-smaller-campaigns/304935/

Kelshaw, T. (2017, August). Emotion analytics: A powerful tool to augment gut instinct. Think with Google. Retrieved from https://www.thinkwithgoogle.com/nordics/article/emotion-analytics-a-powerful-tool-to-augment-gut-instinct/

Key, W. B. (1974). Subliminal Seduction. New York: Berkeley Press.

Koltun, N. (2017, October 27). Facebook significantly ramps up transparency efforts to cover all ads. Mobile Marketer. Retrieved from https://www.mobilemarketer.com/news/facebook-significantly-ramps-up-transparency-efforts-to-cover-all-ads/508380/

Kranish, M. (2016, October 27). Trump’s plan for a comeback includes building a “psychographic” profile of every voter. The Washington Post. Retrieved from https://www.washingtonpost.com/politics/trumps-plan-for-a-comeback-includes-building-a-psychographic-profile-of-every-voter/2016/10/27/9064a706-9611-11e6-9b7c-57290af48a49_story.html?utm_term=.28322875475d

Kreiss, D. (2016). Prototype politics: Technology-intensive campaigning and the data of democracy. New York: Oxford University Press.

Kreiss, D., & McGregor, S. C. (2017). Technology firms shape political communication: The work of Microsoft, Facebook, Twitter, and Google with campaigns during the 2016 U.S. presidential cycle. Political Communication, 1-23. doi:10.1080/10584609.2017.1364814

L2. (n.d.-a). Digital advertising device targeting. Retrieved from http://www.l2political.com/products/data/digital-advertising/device-targeting/

L2. (n.d.-b). L2 voter file enhancements. Retrieved from http://www.l2political.com/products/data/voter-file-enhancements/

Leahey, L. (2016, July 15). (Ad) campaign season: How political advertisers are using data and digital to move the needle in 2016. Cynopsis Media. Retrieved from http://www.cynopsis.com/cyncity/ad-campaign-season-how-political-advertisers-are-using-data-and-digital-to-move-the-needle-in-2016/

Letter from the guest editors: Julie Hootkin and Frank Luntz. (2016, June). Think with Google. Retrieved from https://www.thinkwithgoogle.com/marketing-resources/guest-editors-political-consultants-julie-hootkin-frank-luntz

Levine, B. (2016, December 2). Report: What is data onboarding, and why is it important to marketers? Martech Today. Retrieved from https://martechtoday.com/report-data-onboarding-important-marketers-192924

LiveRamp (2015, August 5). Look-alike modeling: The what, why, and how. LiveRamp Blog. Retrieved from http://liveramp.com/blog/look-alike-modeling-the-what-why-and-how/

Liyakasa, K. (2017, August 24). Standard media index: YouTube’s direct ad spend down 26% in Q2 amid brand safety crackdown. Ad Exchanger. Retrieved from https://adexchanger.com/ad-exchange-news/standard-media-index-youtubes-direct-ad-spend-26-q2-amid-brand-safety-crackdown/

Maheshwari, S., & Isaac, M. (2016, November 6). Facebook will stop some ads from targeting users by race. New York Times. Retrieved from https://www.nytimes.com/2016/11/12/business/media/facebook-will-stop-some-ads-from-targeting-users-by-race.html?mcubz=0

Marshall, J. (2017, January 30). IAB chief calls on online ad industry to fight fake news. Wall Street Journal. Retrieved from https://www.wsj.com/articles/iab-chief-calls-on-online-ad-industry-to-fight-fake-news-1485812139

Martinez, C. (2016, October 28). Driving relevance and inclusion with multicultural marketing. Facebook Business. Retrieved from https://www.facebook.com/business/news/driving-relevance-and-inclusion-with-multicultural-marketing

Matz, S. C., Kosinski, M., Nave, G., & Stillwell, D. J. (2017, October 17). Psychological targeting as an effective approach to digital mass persuasion. Proceedings of the National Academy of Sciences (PNAS Early Edition). Retrieved from http://www.michalkosinski.com/home/publications

McCain, Klobuchar & Warner introduce legislation to protect integrity of U.S. elections & provide transparency of political ads on digital platforms. (2017, October 19). Retrieved from https://www.mccain.senate.gov/public/index.cfm/2017/10/mccain-klobuchar-warner-introduce-legislation-to-protect-integrity-of-u-s-elections-provide-transparency-of-political-ads-on-digital-platforms

McDermott, M. J. (2017, May 12). Brand safety issue vexes marketers. ANA Magazine. Retrieved from http://www.ana.net/magazines/show/id/ana-2017-05-brand-safety-issue-vexes-marketers

McEleny, C. (2016, October 16). Ford and Xaxis score in Vietnam using emotional triggers around the UEFA Champions League. The Drum. Retrieved from http://www.thedrum.com/news/2016/10/18/ford-and-xaxis-score-vietnam-using-emotional-triggers-around-the-uefa-champions

Miller, S. J. (2017, March 24). Local cable and the future of campaign media strategy. Campaigns & Elections. Retrieved from https://www.campaignsandelections.com/campaign-insider/local-cable-and-the-future-of-campaign-media-strategy

Minsker, M. (2017, August 30). Advertisers want programmatic tech players to fight fake news. eMarketer. Retrieved from https://www.emarketer.com/Article/Advertisers-Want-Programmatic-Tech-Players-Fight-Fake-News/1016406

Montgomery, K. C. (2007). Generation digital: Politics, commerce, and childhood in the age of the internet. Cambridge, MA: MIT Press.

Montgomery, K. C. (2011). Safeguards for youth in the digital marketing ecosystem. In D. G. Singer & J. L. Singer (Eds.), Handbook of children and the media (2nd ed., pp. 631-648). Thousand Oaks, CA: Sage Publications.

Montgomery, K. C., Chester, J., & Kopp, K. (2017). Health wearable devices in the big data era: Ensuring privacy, security, and consumer protection. Center for Digital Democracy. Retrieved from https://www.democraticmedia.org/sites/default/files/field/public/2016/aucdd_wearablesreport_final121516.pdf

Moonshadow Mobile. (n.d.). Ground Game is a groundbreaking battle-tested mobile canvassing app. Retrieved from http://www.moonshadowmobile.com/products/ground-game-mobile-canvassing/

NCC Media. (n.d.). The essential guide to political advertising. Retrieved from https://nccmedia.com/PoliticalEssentialGuide/html5/index.html?page=1&noflash

Nicas, J. (2016, December 8). Fake-news sites inadvertently funded by big brands. Wall Street Journal. Retrieved from https://www.wsj.com/articles/fake-news-sites-inadvertently-funded-by-big-brands-1481193004

Nielsen. (2016, October 4). Nielsen and Ethnifacts introduce intercultural affinity segmentation to drive deeper understanding of total U.S. cultural landscape for brand marketers. Retrieved from http://www.nielsen.com/us/en/press-room/2016/nielsen-and-ethnifacts-introduce-intercultural-affinity-segmentation.html

Nix, A. (2016, September). The Power of Big Data and Psychographics in the Electoral Process. Presented at the Concordia Annual Summit, New York. Retrieved from https://www.youtube.com/watch?v=n8Dd5aVXLCc

O’Hara, C. (2016, January 25). Data triangulation: How second-party data will eat the digital world. Ad Exchanger. Retrieved from http://adexchanger.com/data-driven-thinking/data-triangulation-how-second-party-data-will-eat-the-digital-world/

Owen, D. (2017). New Media and Political Campaigns. New York: Oxford University Press.

Packard, V. (2007). The Hidden Persuaders (reissue ed.). New York: Ig Publishing.

Persily, N. (2016, August 10). Facebook may soon have more power over elections than the FEC. Are we ready? Washington Post. Retrieved from https://www.washingtonpost.com/news/in-theory/wp/2016/08/10/facebook-may-soon-have-more-power-over-elections-than-the-fec-are-we-ready/?utm_term=.ed10eef711a1

Political campaigns in 2016: The climax of digital advertising. (2016, May 10). Media Radar. Retrieved from https://www.slideshare.net/JesseSherb/mediaradarwhitepaperdigitalpoliticalfinpdf

Regan, J. (2016, July 29). Donkeys, elephants, and DMPs. Merkle. Retrieved from https://www.merkleinc.com/blog/donkeys-elephants-and-dmps

Regan, T. (2016, January). Media planning toolkit: Programmatic planning. WARC. Retrieved from https://www.warc.com/content/article/bestprac/media_planning_toolkit_programmatic_planning/106391

Revolution Messaging. (n.d.). Smart cookies. Retrieved from https://revolutionmessaging.com/marketing/smart-cookies

Rubinstein, I. S. (2014). Voter privacy in the age of big data. Wisconsin Law Review. Retrieved from http://wisconsinlawreview.org/wp-content/uploads/2015/02/1-Rubinstein-Final-Online.pdf

Sabato, L. J. (1981). The rise of political consultants: New ways of winning elections. New York: Basic Books.

Salesforce DMP. (2017, October 20). Third-party data marketplace. Retrieved from https://konsole.zendesk.com/hc/en-us/articles/217592967-Third-Party-Data-Marketplace

Schiller, H. I. (2007). The Mind Managers. New York: Beacon Press.

Schuster, J. (2015, October 7). Political campaigns: The art and science of reaching voters. LiveRamp. Retrieved from https://liveramp.com/blog/political-campaigns-the-art-and-science-of-reaching-voters/

Schwartz, M. (2017, March 30). Facebook failed to protect 30 million users from having their data harvested by Trump campaign affiliate. The Intercept. Retrieved from https://theintercept.com/2017/03/30/facebook-failed-to-protect-30-million-users-from-having-their-data-harvested-by-trump-campaign-affiliate/

Schwartz, P. M., & Solove, D. J. (2011). The PII problem: Privacy and a new concept of personally identifiable information. New York University Law Review, 86, 1814-1895. Retrieved from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1909366

Singer, N. (2012, October 13). Do not track? Advertisers say “don’t tread on us.” New York Times. Retrieved from http://www.nytimes.com/2012/10/14/technology/do-not-track-movement-is-drawing-advertisers-fire.html?_r=0

Smith, C. (2014, March 20). Reinventing social media: Deep learning, predictive marketing, and image recognition will change everything. Business Insider. Retrieved from http://www.businessinsider.com/social-medias-big-data-future-2014-3

Solon, O., & Siddiqui, S. (2017, September 3). Forget Wall Street—Silicon Valley is the new political power in Washington. The Guardian. Retrieved from https://www.theguardian.com/technology/2017/sep/03/silicon-valley-politics-lobbying-washington

Solove, D. J., & Hartzog, W. (2014). The FTC and the new common law of privacy. Columbia Law Review, 114, 583-677. Retrieved from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2312913

Son, S., Kim, D., & Shmatikov, V. (2016). What mobile ads know about mobile users. NDSS ’16. Retrieved from http://www.cs.cornell.edu/~shmat/shmat_ndss16.pdf

Stanford, K. (2016, March). How political ads and video content influence voter opinion. Think with Google. Retrieved from https://www.thinkwithgoogle.com/marketing-resources/content-marketing/political-ads-video-content-influence-voter-opinion/

Stirista. (n.d.). Political data. Retrieved from https://www.stirista.com/what-we-do/data/political-data

Timmers, B. (2015, December). Everything you wanted to know about fake news. IAS Insider. Retrieved from https://insider.integralads.com/everything-wanted-know-fake-news/

Tufecki, Z. (2014, July 7). Engineering the public: Big data, surveillance and computational politics. First Monday, 19(7). Retrieved from http://firstmonday.org/article/view/4901/4097

U.S. Senate Committee on Commerce, Science, and Transportation. (2013, December 18). A review of the data broker industry: Collection, use, and sale of consumer data for marketing purposes. Staff report for Chairman Rockefeller. Retrieved from https://www.commerce.senate.gov/public/_cache/files/0d2b3642-6221-4888-a631-08f2f255b577/AE5D72CBE7F44F5BFC846BECE22C875B.12.18.13-senate-commerce-committee-report-on-data-broker-industry.pdf

Vranica, S. (2017, June 18). Advertisers try to avoid the web’s dark side, from fake news to extremist videos. Wall Street Journal. Retrieved from https://www.wsj.com/articles/advertisers-try-to-avoid-the-webs-dark-side-from-fake-news-to-extremist-videos-1497778201

WARC. (2017, December). Toolkit 2018: How brands can respond to the year's biggest challenges. Retrieved from https://www.warc.com/content/article/Toolkit_2018_How_brands_can_respond_to_the_yearamp;39;s_biggest_challenges/117399

Warrington, G. (2015, November 18). Tiles, proxies and exact places: Building location audience profiles. LinkedIn. Retrieved from https://www.linkedin.com/pulse/tiles-proxies-exact-places-building-location-audience-warrington

Weissbrot, A. (2016, June 20). MAGNA and Zenith: Digital growth fueled by programmatic, mobile and video. Ad Exchanger. Retrieved from https://adexchanger.com/agencies/magna-zenith-digital-growth-fueled-programmatic-mobile-video/

What is predictive intelligence and how it’s set to change marketing in 2016. (2016, February 11). Smart Insights. Retrieved from http://www.smartinsights.com/digital-marketing-strategy/predictive-intelligence-set-change-marketing-2016/

Winterberry Group. (2016, November). The state of consumer data onboarding: Identity resolution in an omnichannel environment. Retrieved from http://www.winterberrygroup.com/our-insights/state-consumer-data-onboarding-identity-resolution-omnichannel-environment

Xaxis. (2015, November 9). Xaxis brings programmatic to political advertising with Xaxis politics, first ad targeting solution to leverage offline voter data for reaching U.S. voters across all digital channels. Retrieved from https://www.businesswire.com/news/home/20151109006051/en/Xaxis-Brings-Programmatic-Political-Advertising-Xaxis-Politics

Yatrakis, C. (2016, June 28). The Trade Desk partner spotlight: Q&A with Factual. Factual. Retrieved from https://www.factual.com/blog/partner-spotlight

YouTube. (2017). The presidential elections on YouTube. Retrieved from https://think.storage.googleapis.com/docs/The_Presidential_Elections_On_YouTube.pdf

Footnotes

1. The research for this paper is based on industry reports, trade publications, and policy documents, as well as review of relevant scholarly and legal literature. The authors thank Gary O. Larson and Arthur Soto-Vasquez for their research and editorial assistance.

On democracy


Disclaimer: This guest essay in the Special issue on political micro-targeting has not been peer reviewed. It is an abbreviated version of a speech delivered by the Member of the European Parliament (MEP) Sophie in ‘t Veld in Amsterdam in May 2017 to Data & Democracy, a conference on political micro-targeting.

Democracy

Democracy is valuable and vulnerable, which is reason enough to remain alert for new developments that can undermine her. In recent months, we have seen enough examples of the growing impact of personal data in campaigns and elections. It is important and urgent for us to publicly debate this development. It is easy to see why we should take action against the extremist propaganda of hatemongers aiming to recruit young people for violent acts. But we euphemistically speak of 'fake news' when lies, 'half-truths', conspiracy theories, and sedition insidiously poison public opinion.

The literal meaning of democracy is 'the power of the people'. 'Power' presupposes freedom. Freedom to choose and to decide. Freedom from coercion and pressure. Freedom from manipulation. 'Power' also presupposes knowledge. Knowledge of all facts, aspects, and options. And knowing how to balance them against each other. When freedom and knowledge are restricted, there can be no power.

In a democracy, every individual choice influences society as a whole. Therefore, the common interest is served with everyone's ability to make their choices in complete freedom, and with complete knowledge.

The interests of parties and political candidates who compete for citizens' votes may differ from that higher interest. They want citizens to see their political advertising, and only theirs, not that of their competitors. Not only do parties and candidates compete for the voter's favour; they contend for the voter's exclusive time and attention as well.

Political targeting

No laws dictate what kind of information a voter should be able to rely on to make a well-considered choice. For lamb chops, toothpaste, mortgages or cars, by contrast, producers are required to state the origin and properties of their products. This enables consumers to make a responsible decision. Providing false information is illegal. All ingredients, properties, and risks have to be mentioned on the label.

Political communication, however, is protected by freedom of speech. Political parties are allowed to use all kinds of sales tricks.

And, of course, campaigns do their utmost and continuously test the limits of the socially acceptable.

Nothing new, so far. There is no holding back in getting the voters to cast their vote for your party or your candidate. From temptation with attractive promises, to outright bribery. From applying pressure to straightforward intimidation.

Important therein is how and where you can reach the voter. In the old days it was easy: Catholics were told on Sundays in church that they had no other choice in the voting booth than the Catholic choice. And no righteous Catholic dared to think about voting differently. At home, the father told the mother how to vote. The children received their political preference from home and from school. Catholics learned about current affairs via a Catholic newspaper, and through the Catholic radio broadcaster. In Dutch society, which consisted of a few such pillars, one was only offered the opinions of one's own pillar.1 A kind of filter bubble avant la lettre.

Political micro-targeting

Nowadays, political parties have a different approach. With new technologies, the sky is the limit.

Increasingly advanced techniques allow the mapping of voter preferences, activities, and connections. Using endless amounts of personal data, any individual on earth can be reconstructed in detail. Not only can personal beliefs be distilled from large troves of data; it is even possible to predict a person's beliefs before they have formed them themselves. And, subsequently, it is possible to subtly steer those beliefs, while leaving the person thinking they made their decision all by themselves.

As is often the case, the Americans lead in the use of new techniques. While we Europeans, touchingly old-fashioned, knock on doors and hand out flyers at the Saturday market, the Americans employ the latest technology to identify, approach, and influence voters.

Of course, trying to find out where voters can be reached and how they can be influenced is no novelty. Political parties map which neighbourhoods predominantly vote for them, which neighbourhoods have potential, and in which neighbourhoods campaigning would be a wasted effort. Parties work with detailed profiles and target audiences, for which they can tailor their messages.

But the usage of personal data on a large scale has a lot more to offer. Obviously, this is a big opportunity for political parties, and for anyone else who runs campaigns or aims to influence elections.

However, the influencing techniques become increasingly opaque. As a result of the alleged filter bubble, voters are being reaffirmed in their own beliefs, and they hardly receive information anymore about the beliefs and arguments of other groups. This new kind of segmentation may stifle critical thinking. There may not be enough incentive to test one's own ideas, to find new arguments, or to critically reflect on the truthfulness of information.

I am a social and economic liberal D66 politician, and I get suggestions for news articles from websites like The Guardian or Le Monde. My colleague from the right-wing nationalist PVV may well receive URLs from Breitbart.

Pluralism is essential for a healthy, robust democracy. In a polarised society, people live in tightly knit groups, which hardly communicate with each other. In a pluralist society people engage in the free exchange, confrontation, and fusion of ideas.

The concept of pluralism is under pressure. Populist parties declare themselves representative of The People. In their vision, The People is uniform and homogeneous. There is a dominant cultural norm, dictated from the top down, to which everyone must conform. Whoever refuses gets chewed out. Often, it is about one-dimensional symbolism such as Easter eggs and Christmas trees. There is no place for pluralism in the world of the populists. But when there is no pluralism, there is no democracy. Without pluralism, democracy is nothing more than a simple tribal dispute, instead of the expression of the will of all citizens together.

Voter data

European privacy legislation limits the use of personal data. In the world of ‘big data’, one of the explicit goals of regulation is to prevent restriction of the consumer's choice. Oddly enough, lawmakers do not explicitly aspire to guarantee voters as broad a choice as possible. But in politics, individual choices have consequences for society as a whole.

In 2018, the General Data Protection Regulation (GDPR) comes into effect. We worked on the GDPR for five years. At this moment, we are working on the modernisation of the e-Privacy Directive, which is mainly about the protection of communication. As was the case with the GDPR, companies from certain sectors scream bloody murder. European privacy protection would mean certain death for the European industry. According to some corporate Cassandras, entire European industries will move to other continents. That very same death of corporate Europe is also predicted for any measure concerning, say, environmental norms, procurement rules, or employee rights. All those measures are in place, but, as far as I know, the nightmare scenario has never occurred...

There are some corporate sectors, such as publishing and marketing, which have a huge impact on the information supply to citizens. They are the ones who now cry wolf. It is understandable that they are unhappy with stricter rules concerning their activities, but as the potential impact of the use of personal data and ‘big data’ increases, so does their social responsibility.

At the moment, there is not much public debate about the new techniques. Peculiar. Thirty years ago, 'subliminal advertising', as we called it then, was prohibited because people found it unethical to influence people without their knowledge. We need to have a similar debate. What do we think of opaque influencing? Do we need ethical norms? Should such norms apply only to political campaigns, or should we look at this from a broader perspective? In the ‘big data’ debate, we tend to speak in technical or legal terms, while actually the issue is fundamentally ethical, holding far-reaching consequences for the vitality of our democracy.

Such a public debate demands more clarity on the impact of ‘big data’, profiling, targeting, and similar techniques on the individual, her behaviour, and her choices, which determine in what direction society progresses. Which voters are being reached? How susceptible are they to subtle influencing, and what makes them resilient? How do people who are hardly reached compare to the others? How do voters and non-voters compare? Is the voter truly predictable? Can we identify or influence the floating voter? Do voters actually float between different parties? Or do they especially float within their own party, their own bubble, their own segment? How important are other factors, such as the social context? If the new influencing techniques are indeed as potent as we think, how can polls get it so wrong? What can we learn from advertisers who are returning to contextual advertising, because targeting turned out to be less effective than they thought?

We need to stay cool-headed. New technologies have a huge impact, but human nature will not suddenly change due to ‘big data’ and its use. Our natural instincts and reflexes will not evolve in a few years; that would take many thousands of years. Even in the 21st century we display more than a few caveman traits, and internalised behaviour is not easily shed. Humans are resilient, but democracy is vulnerable. In the short term, the societal impact is large. This gives us all the reason to reflect on how to deal with the new reality, and how we can uphold our values within it.

The use of personal data, clearly, is not solely reserved for decent political parties. Other persons and organisations, from the Kremlin to Breitbart, can bombard European voters with information and misinformation. But European governments, which control endless amounts of their citizens' personal data, can also manipulate information or circulate utter nonsense to advance their own interests. A random example: the Hungarian government influencing its voters with lies and manipulation around the so-called consultation on asylum seekers.

Beyond voter data

This issue is not only about the personal data of voters, but also about the personal data of political competitors, opponents, and critics, which are increasingly being exploited. Recently, we have seen efforts by external parties to influence the results of the 2017 French elections: a large-scale hack of the Emmanuel Macron campaign, and the spread of false information, evidently originating from the Kremlin and the American alt-right, meant to discredit Macron's candidacy.

The American elections, too, showed the shady game of hacking, leaking, and manipulating. The affair of the Hillary Clinton emails will undoubtedly occupy our minds for years. Who knows how the elections would have turned out without it?

Other democratic pillars can likewise be corrupted by the misuse of data. Critical voices, opposition, and checks and balances are democracy's oxygen. Democracy is in acute jeopardy when data are employed to attack, undermine, discredit, blackmail, or persecute journalists, judges, lawyers, NGOs, whistleblowers, and opposition parties.

In Europe, we tend to shrug our shoulders at these dangers. "Oh well, such things occur only in banana republics, not right here". Of course, this trust in our democratic rule of law is wonderful. But if we treat our rule of law this carelessly, we will eventually lose it.

Within the European Union, we currently see this happening in Poland and Hungary. The governments of both nations ruthlessly attack independent judges, critical media, and inconvenient NGOs. They do so with quasi-lawful means. Under the banner of transparency, they force NGOs to register, misusing laws against money laundering and terrorism financing. Or they release compromising information about judges or politicians at strategic moments.

But critical voices struggle in other member states as well. Lawyers are being monitored, even without a legal basis. In the years after 9/11, we have created endless new powers for intelligence services, police, and justice departments to spy on citizens, even without suspicion and without the signature of a judge. The companies to which we unwittingly surrender our personal data, in exchange for services, are forced to hand over all information to the government, or to build in backdoors. Governments hack computers in other countries. Usually, it starts out with unlawful practices, but soon enough laws are put in place to legalise those practices. The magic word 'terrorism' silences any critique of such legislation.

But when politicians, journalists, NGOs, whistleblowers, lawyers, and many others cannot perform their tasks freely and without worry, our democracy withers. Not only must they be able to operate without someone keeping an eye on them; they must know that nobody is in fact watching them. The mere possibility of being watched results in a chilling effect.

For this principal reason, I have contested a French mass surveillance law before the French Conseil d'Etat. Since, as a member of the European Parliament, I spend four days a month on French soil (in Strasbourg), I could potentially be the target of the French eavesdropping programme. This is not totally imaginary, as I am not only a politician, but also a vocal critic of certain French anti-terror measures. It is not about me actually worrying about being spied on, but about the fact that I might be spied on. Luckily, I am not easily startled, but I can imagine that many politicians are vulnerable. That is a risk for democracy.

I do not rule out the possibility of a ruling by the European Court of Human Rights on my case. In that event, it would create jurisprudence valid across the entire EU (and the geographical area covered by the Council of Europe).

But, of course, whether politicians, NGOs, journalists, and others can do their jobs fearlessly and fulfil their watchdog role should not depend on the actions of one obstinate individual.

It is my deep personal conviction that the biggest threat to our democracy is the fact that we have enabled the powerful to access, with almost no limitations, the personal data of those who should control those very same powerful entities.

What can we do?

Some propose new forms of democracy in which universal suffrage is weakened or even abolished. In his book ‘Against elections: the case for democracy’, David Van Reybrouck proposes appointing representatives by lot, and in his book ‘Against democracy’, Jason Brennan wants to give the elite more votes than the lower classes, presuming that people with more education make better choices. Others want to replace representative democracy with direct democracy.

I oppose those ideas. Universal suffrage and the representative democracy are great achievements, which have led to enormous progress in society.

First of all, we have to make sure our children grow up to be critical, independent thinkers. Think differently, deviate, provoke: this must be encouraged instead of condemned. A democracy needs non-conformists.

We must teach our children to contextualise information and to compare sources.

The counterpart of ‘big data’ must be ‘big transparency’. We need not just open government, but also insight into the techniques of influencing.

The regulation and limitation of the use of personal data, as I hope to have argued effectively, is not a pastime of out-of-touch privacy activists. It is essential for democracy. We need safeguards, not only to be sure people really are free in their choices, but also to protect the necessary checks and balances. As such, I plead for a rigorous application of the GDPR, and in the European Parliament, I will work for a firm e-Privacy Directive.

And yes, perhaps we should examine whether the rules for political campaigning are still up-to-date. In most countries, those rules cover a cap on campaign expenditures, a prohibition of campaigning or polling on the day before election day, or a ban on publishing information that may influence the election results, such as the leaked e-mails in France. But these rules have little impact on the use of personal data to subtly influence elections.

Last year, the European Parliament supported my proposal for a mechanism to guard democracy, the rule of law, and fundamental rights in Europe.2

On this day (editor’s note: 9 May, Europe Day) of European democracy, I plead for equal, high norms in Europe. The last years have shown that national elections are European elections. It is crucial for us to trust that all elections in EU member states are open, free, and honest elections, free of improper influencing.

Over the last sixty years, the European Union has developed into a world leader in democracy and freedom. If we start a public debate, Europe can remain a world leader.

Footnotes

1. ‘Pillars’ here refers to societal segments divided along ideological or religious lines.

2. The report I refer to is a legislative initiative of the European Parliament. I was the initiator and the rapporteur. It is a proposal to guard democracy, the rule of law, and fundamental rights in the EU. The Commission, at first, did not want to proceed with the initiative. Recently, however, the Commission has announced a legislative proposal for such a mechanism. I suspect this proposal will look quite different from Parliament’s, but the fact that there will be a mechanism is what matters most. The realisation that the EU is a community of values, and not just on paper, is spreading quickly. The proposal was approved in the European Parliament in October 2016, with 404 votes in favour and 171 against. Source (last accessed 15 January 2018): http://www.europarl.europa.eu/sides/getDoc.do?pubRef=-%2f%2fEP%2f%2fNONSGML%2bREPORT%2bA8-2016-0283%2b0%2bDOC%2bWORD%2bV0%2f%2fEN

The role of digital marketing in political campaigns


This paper is part of ‘A Manchurian candidate or just a dark horse? Towards the next generation of political micro-targeting research’, a Special issue of the Internet Policy Review.

Introduction

Political campaigns in the United States have employed digital technologies for more than a decade, developing increasingly sophisticated tools and techniques during each election cycle, as “computational politics” have become standard operating procedure (Tufecki, 2014; Kreiss, 2016). However, the most recent election marked a critical turning point, as candidates, political action committees, and other interest groups were able to take advantage of significant breakthroughs in data-driven marketing techniques, such as cross-device targeting, developed since the previous presidential election (“Bernie Sanders”, 2016; Edelman Digital, 2016). Electoral politics has now become fully integrated into a growing, global commercial digital media and marketing ecosystem that has already transformed how corporations market their products and influence consumers (Chahal, 2013; LiveRamp, 2015; Rubinstein, 2014; Schuster, 2015). The strategies, technologies, and tools of digital political marketing are more complex and far-reaching than anything we have seen before, with further innovations already underway (WARC, 2017). But because most commercial and political digital operations take place below the radar, they are not fully understood by the public.1

In the following pages, we briefly describe the growth and maturity of digital marketing, highlighting its basic features, key players, and major practices. We then document how data-driven digital marketing has moved into the centre of American political operations, along with a growing infrastructure of specialised firms, services, technologies and software systems. We identify the prevailing digital strategies, tactics, and techniques of today’s political operations, explaining how they were employed during the most recent US election cycle. Finally, we explore the implications of their use for democratic discourse and governance, discussing several recent policy developments aimed at increasing transparency and accountability in digital politics.

Our research for this paper draws from our extensive experience tracking the growth of digital marketing over the past two decades in the United States and abroad, monitoring and analysing key technological developments, major trends, practices and players, and assessing the impact of these systems in areas such as health, financial services, retail, and youth (Chester, 2007; Montgomery, 2007; Montgomery, Chester, & Kopp, 2017). During the 2016 US presidential election, we monitored commercial digital advertising and data use by candidates, parties and special interest groups across the political spectrum. We collected examples of these ads, along with technical and market impact information from the developers of the applications. We also reviewed trade journals, research reports, and other industry documents, and attended conferences that were focused on digital technologies and politics. In the process, we identified all of the major providers of political digital data targeting applications (e.g., Google, Facebook, data clouds, ad agencies) and analysed all their key materials and case studies related to their 2016 operations. The source for much of this work was our ongoing gathering and analysis of cross-sectional commercial digital marketing practices worldwide.

Marriage of politics and commerce

Since the mid-20th century, advertising has been an increasingly powerful and pervasive presence in US political campaigns, as a growing cadre of ad agencies, public relations firms, and consultants perfected the use of opinion polls, focus groups, and psychographics to reach and influence voters through radio, television, direct mail, and other media outlets (A. Jamieson, 2016; K. H. Jamieson, 1996; Sabato, 1981). With the rise of the internet, campaign operatives began to harness digital technologies and tools to mobilize voter turnout, engage young people, raise money, and support grassroots ground operations (Karpf, 2016; Kreiss, 2016; Tufecki, 2014). Both major political parties in the United States developed large, sophisticated data and digital operations (Kreiss, 2016).

Many of the digital strategies, tools, and techniques employed in the 2016 election were initially developed, deployed, tested, and refined by the commercial sector (Tufecki, 2014). Since its origins in the mid-1990s, digital marketing has operated with a core business model that relies on continuous data collection and monitoring of individual online behaviour patterns (Montgomery, 2011). This system emerged in the United States amid a political culture of minimal government interference, and within a prevailing laissez-faire ethos regarding the internet and new technologies (Barlow, 1996). In the earliest days of the “dot-com boom”, a strong political alliance was forged between the digital media companies and their partners in the advertising and media business, enabling the nascent industry to effectively ward off any attempts to restrain its business operations through privacy regulation or other public policies (Solon & Siddiqui, 2017). As a consequence, the advertising industry played a central role in shaping the operations of platforms and applications in the digital media ecosystem. Digital marketing is now well established and thriving, with expenditures reaching nearly $72.5bn in 2016 for the US alone, and worldwide spending predicted to reach more than $223bn this year (eMarketer, 2017; IAB, n.d.-d).

Ongoing innovations over the years have increased the capacity of data and digital marketing applications. Data collection, analysis, and targeting were further woven into the daily lives of consumers with the rise of social media platforms and mobile devices. Because of the unique role that they play in users’ lives, these platforms are able to sweep up enormous amounts of information, including not only what users post about themselves, but also what is collected from them throughout their daily activities (Smith, 2014). A growing arsenal of software and analytic tools has enhanced the ability of digital media companies and their advertisers to glean valuable insights from the oceans of data they generate (Smith, 2014). Predictive analytics introduced an expanded set of tools for scoring, rating, and categorising individuals, based on an increasingly granular set of behavioural, demographic, and psychographic data (“What is Predictive Intelligence”, 2016). US digital marketers have helped popularise and spur the successful adoption of digital advertising platforms and applications in nearly every geographical location with an internet connection or a link to a mobile device (IAB, n.d.-c). Google, Facebook, and other major players in the digital marketing industry have also developed a global research infrastructure to allow them, and especially their major advertising clients, to make continuous improvements in reaching and influencing the public, and to measure with increasing accuracy the success of their efforts (Facebook IQ, n.d.-a). These developments have created what some observers have called the “surveillance economy” (Singer, 2012).
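To make the scoring logic concrete, the minimal Python sketch below shows how a predictive model of this kind might map behavioural attributes to a single propensity score. The feature names, weights, and example values are invented assumptions for illustration; production systems learn thousands of such weights from actual behavioural data.

```python
import math

# Toy logistic model scoring an individual's "political engagement".
# Feature names and weights are hypothetical, chosen only for illustration.
WEIGHTS = {
    "news_pages_per_week": 0.08,
    "political_video_views": 0.15,
    "past_donor": 1.2,
}
BIAS = -2.0

def engagement_score(profile: dict) -> float:
    """Squash a weighted sum of behavioural features into a 0-1 score."""
    z = BIAS + sum(w * profile.get(name, 0.0) for name, w in WEIGHTS.items())
    return 1.0 / (1.0 + math.exp(-z))

profile = {"news_pages_per_week": 12, "political_video_views": 5, "past_donor": 1}
print(f"{engagement_score(profile):.2f}")  # prints 0.71 for this profile
```

A marketer can then rank or segment an entire audience by such scores, which is all the downstream targeting machinery needs.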

The growth of data-driven political marketing

Though political campaigns have employed micro-targeting techniques—which use an array of personalised and other data sets and marketing applications to influence the actions of individuals—during the last several election cycles, recent technological innovations and industry advances have created a much more robust system than what was in place in 2012 (IAB, n.d.-b; Rubinstein, 2014). For years, political campaigns have been able to combine public voter files with commercial information from data brokers, to develop detailed and comprehensive dossiers on American voters (Rubinstein, 2014). With recent advances in the advertising technology and data industries, they can now take advantage of a growing infrastructure of specialty firms offering more extensive resources for data mining and targeting voters. Among the new entities are data marketing clouds. Developed by well-known companies such as Adobe, Oracle, Salesforce, Nielsen, and IBM, these clouds sell political data along with an exhaustive amount of detailed consumer information for each potential target, including, for example, credit card use, personal interests, consumption patterns, and TV viewing patterns (Salesforce DMP, 2017).

Some of these massive cloud services also operate what has become a new and essential component for contemporary digital targeting—the data management platform (DMP) (Chavez, 2017). DMPs provide marketers with “centralized control of all of their audience and campaign data” (BlueKai, 2011). They do this by collecting and analysing data about individuals from a wide variety of online and offline sources, including first-party data from a customer’s own record, such as the use of a supermarket loyalty card, or their activities captured on a website, mobile phone, or wearable device; second-party data, information collected about a person by another company, such as an online publisher, and sold to others; and third-party data drawn from thousands of sources, comprising demographic, financial, and other data-broker information, including race, ethnicity, and presence of children (O’Hara, 2016). All of this information can be matched to create highly granular “target audience segments” and to identify and “activate” individuals “across third party ad networks and exchanges”. DMPs are quickly becoming a critical tool for political campaigns (Bennett, 2016; Kaye, 2016, July; Regan, J., 2016).
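A rough sense of what a DMP does can be conveyed in a few lines of Python: merge records about the same person from first-, second-, and third-party sources, then assign the merged profile to named audience segments. The identifier, field names, and segment rules below are invented for the sketch; real platforms match at far larger scale, often probabilistically.

```python
# Hypothetical single-user records from three data tiers, keyed on a shared ID.
first_party = {"u123": {"loyalty_purchases": ["organic food"], "site_visits": 14}}
second_party = {"u123": {"publisher_topics": ["campaign finance", "local news"]}}
third_party = {"u123": {"demo": "35-44", "party_registration": "independent"}}

def build_profile(uid: str) -> dict:
    """Merge all sources into one dossier-style profile for the user."""
    profile = {}
    for source in (first_party, second_party, third_party):
        profile.update(source.get(uid, {}))
    return profile

def assign_segments(profile: dict) -> list:
    """Apply simple segment rules of the kind a campaign might configure."""
    segments = []
    if "campaign finance" in profile.get("publisher_topics", []):
        segments.append("politically_engaged")
    if profile.get("party_registration") == "independent":
        segments.append("persuadable_independent")
    return segments

print(assign_segments(build_profile("u123")))
# ['politically_engaged', 'persuadable_independent']
```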

Facebook and Google now play a central role in political operations, offering a full spectrum of commercial digital marketing tools and techniques, along with specialised ad “products” designed for political use (Bond, 2017). Not surprisingly, these companies have also made generating revenues from political campaigns an important “vertical” category within their ad business (Facebook, n.d.-d; Facebook IQ, n.d.-b; Stanford, 2016). Facebook’s role in the 2016 election was particularly important. With users required to give their real names when they sign up as members, Facebook has created a powerful “identity-based” targeting paradigm, enabling political campaigns to access its more than 162 million US users and to target them individually by age, gender, congressional district, and interests (Facebook, n.d.-b). Its online guide for political campaign marketing urges political campaigns to use all the social media platform tools it makes available to advertisers—including through Instagram and other properties—in order to track individuals, capture their data through various “lead-generation” tactics, and target them by uploading voter files and other data (Facebook, n.d.-a, n.d.-c, n.d.-f). The company also employs teams of internal staff aligned with each of the major political parties to provide technical assistance and other services to candidates and their campaigns (Chester, 2017; Kreiss & McGregor, 2017). Google heavily promoted the use of YouTube, as well as its other digital marketing assets, during the 2016 US election, reaching out to both major political parties (YouTube, 2017).

The growth and increasing sophistication of the digital marketplace has enhanced the capacities of political campaigns to identify, reach, and interact with individual voters. Below we identify seven key techniques that are emblematic of this new digital political marketing system, providing brief illustrations of how they were employed during the 2016 election.

Cross-device targeting

Getting a complete picture of a person’s persistent “identity” through an “identity-graph” has become a key strategy for successfully reaching consumers across their “omnichannel” experience (use of mobile, TV, streaming devices, etc.) (Winterberry Group, 2016). “Cross-device recognition” allows marketers to determine if the same person who is on a social network is also using a personal computer and later watching video on a mobile phone. Through data “onboarding,” a customer record that may contain a physical and email address is linked through various matching processes, associating it with what is believed to be that individual’s online identification—cookies, IP addresses, and other persistent identifiers (Levine, 2016). Cross-device targeting is now a standard procedure for political initiatives and other campaigns. Voter files are uploaded into the onboarding process, enabling the campaigns to find their targets on mobile devices and at specific times when they may be more receptive to a message (Kaye, 2016, April; L2, n.d.-b). Such granularity of information also enables a more tailored advertisement—so-called “dynamic creative”—which can be changed over time to “deliver very specific messaging” to individuals (Schuster, 2015). Leading cross-device marketing company Drawbridge offered a suite of election services in 2016 that provided campaigns a number of ways to impact voters, including through “Voter-Centric Cross Device Storytelling”, “Political Influencer Identification”, and via “Real-Time Voter Attribution Measurement” (Drawbridge, n.d.).
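The matching step at the heart of onboarding is conceptually simple. The Python sketch below hashes a normalised email address (a common join key) and looks it up in a toy identity graph to recover the online identifiers tied to an offline voter record; every identifier and field name here is invented for illustration.

```python
import hashlib

def hash_email(email: str) -> str:
    """Normalise and hash an email address, a typical onboarding join key."""
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

# Invented offline record, e.g. one row of an uploaded voter file.
offline_record = {"voter_id": "VA-0042", "email": "Jane.Doe@example.com"}

# Invented identity graph: hashed email -> online identifiers seen with it.
identity_graph = {
    hash_email("jane.doe@example.com"): {
        "cookie_id": "c-9f81",
        "mobile_ad_id": "ad-77e3",
        "ctv_device_id": "tv-1024",
    },
}

match = identity_graph.get(hash_email(offline_record["email"]))
if match:
    print(f"voter {offline_record['voter_id']} reachable via: {sorted(match)}")
# voter VA-0042 reachable via: ['cookie_id', 'ctv_device_id', 'mobile_ad_id']
```

Once the match is made, the same person can be addressed on every device in the graph, which is what makes cross-device “storytelling” possible.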

Programmatic advertising

Programmatic advertising refers to new automated forms of ad buying and placement on digital media, using computer programmes and algorithmic processes to find and target a customer wherever she goes. The process can also involve real-time “auctions” that occur in milliseconds in order to “show an ad to a specific customer, in a specific context” (Allen, 2016). The use of programmatic advertising was one of the major changes in political campaign digital operations between 2012 and 2016—“the first time in American History”, according to one ad company, “that such precise targeting has ever been made available at such great scale” (Briscoe, 2017; Kaye, 2015). Programmatic advertising has itself grown in its capabilities to reach individuals, taking advantage of new sources of data to reach them on all of their devices (Regan, T., 2016). In 2016, for example, global ad giant WPP’s Xaxis system—“the world’s largest programmatic and technology platform”—launched “Xaxis Politics”. Capable of “reaching US voters across all digital channels”, the system is said to “segment audiences by hundreds of hot button issues as well as by party affiliation”, including via “real-time campaigns tied to specific real-world events” (Xaxis, 2015). Candidates were able to use the services of a growing list of companies, including Google, Rubicon, AOL, PubMatic, AppNexus and Criteo, that offered programmatic advertising platforms (“Political Campaigns”, 2016; Yatrakis, 2016).
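The millisecond “auction” can be sketched in a few lines of Python. Below, a hypothetical campaign bidder raises its bid for priority segments and battleground geographies, competes against other bids, and the impression clears at roughly the second-highest price, one common exchange design. All bid values, segment names, and rules are invented.

```python
# Invented bid request, as an exchange might describe an available impression.
bid_request = {"user_segments": ["persuadable_independent"], "geo": "PA-07", "device": "mobile"}

def campaign_bid(request: dict) -> float:
    """Toy bidding logic: pay more for priority segments and geographies."""
    bid_cpm = 1.00
    if "persuadable_independent" in request["user_segments"]:
        bid_cpm += 2.50
    if request["geo"].startswith("PA"):  # hypothetical battleground priority
        bid_cpm += 1.00
    return bid_cpm

bids = {"campaign_a": campaign_bid(bid_request), "campaign_b": 2.75, "retailer": 1.60}
winner = max(bids, key=bids.get)
clearing_price = sorted(bids.values())[-2]  # second-price: just above runner-up
print(winner, "wins; clears at", clearing_price)  # campaign_a wins; clears at 2.75
```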

Lookalike modelling

The use of big data analytics enables marketers to acquire information about an individual without directly observing behaviour or obtaining consent. They do this by “cloning” their “most valuable customers” in order to identify and target other prospective individuals for marketing purposes (LiveRamp, 2015). For example, Stirista (n.d.), a digital marketing firm that also serves the political world, offers lookalike modelling to identify people who are potential supporters and voters. The company claims it has matched 155 million voters to their “email addresses, online cookies, and social handles”, as well as “culture, religion, interests, political positions and hundreds of other data points to create rich, detailed voter profiles”. Facebook offers a range of lookalike modelling tools through its “Lookalike Audiences” ad platform. For example, Brad Parscale, the Trump campaign’s digital director, used the Lookalike Audiences ad tool to “expand” the number of people the campaign could target (Green & Issenberg, 2016). Facebook’s “Custom Audiences” product, similarly, enables marketers to upload their own data files so they can be matched and then targeted to Facebook users (Facebook, n.d.-e).
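At its core, lookalike modelling is a similarity search: summarise the seed audience, then rank prospects by how closely they resemble it. The Python sketch below uses a centroid and cosine similarity, one simple approach among many; the three invented features might stand for donation propensity, political-page affinity, and video completion rate.

```python
import math

# Invented feature vectors for three seed supporters and three prospects.
seeds = [[1.0, 0.8, 0.9], [0.9, 0.7, 1.0], [1.0, 0.9, 0.8]]
prospects = {"p1": [0.95, 0.75, 0.9], "p2": [0.1, 0.9, 0.05], "p3": [0.8, 0.6, 0.85]}

# Average the seed audience into a single centroid vector.
centroid = [sum(col) / len(seeds) for col in zip(*seeds)]

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

ranked = sorted(prospects, key=lambda p: cosine(centroid, prospects[p]), reverse=True)
print("expansion order:", ranked)  # p1 and p3 resemble the seed audience most
```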

Geolocation targeting

Mobile devices continually send signals that enable advertisers (and others) to take advantage of an individual’s location—through the phone’s GPS (global positioning system), Wi-Fi, and Bluetooth communications. All of this can be done with increasing speed and efficiency. Through a host of new location-targeting technologies, consumers can now be identified and targeted wherever they go, while driving a car, pulling into a mall, or shopping in a store (Son, Kim, & Shmatikov, 2016). A complex and growing infrastructure of geolocation-based data-marketing services has emerged, with specialised mobile data firms, machine-learning technologies, measurement companies, and new technical standards to facilitate on-the-go targeting (Warrington, 2015). The use of mobile geo-targeting techniques played a central role in the 2016 election cycle, with a growing number of specialists offering their services to campaign operatives. For example, L2 (n.d.-a) made its voter file, along with HaystaqDNA modelling data, available for mobile device targeting, offering granular profile data on voters based on their interest in such contested topics as gun laws, gay marriage, voter fraud, and school choice, among others. Advance Publications, the parent company of Condé Nast, worked with campaigns to append geolocation, profile data, and buying behaviour “to sculpt a very specific voter profile and target down to a few hundred readers in a given geo location” (Ellwanger, 2016).
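The basic geofencing test behind much location targeting reduces to a distance check. The Python sketch below flags device location “pings” that fall within a radius of a point of interest, using the standard haversine great-circle formula; the coordinates, device IDs, and radius are invented for illustration.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    earth_radius_km = 6371.0
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * earth_radius_km * math.asin(math.sqrt(a))

# Invented geofence (a 1 km radius around a downtown rally site) and pings.
fence_center, fence_radius_km = (40.4406, -79.9959), 1.0
pings = {"device-1": (40.4450, -79.9900), "device-2": (40.6782, -73.9442)}

inside = [d for d, (lat, lon) in pings.items()
          if haversine_km(lat, lon, *fence_center) <= fence_radius_km]
print("devices inside geofence:", inside)  # ['device-1']
```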

Online video advertising

Digital video, via mobile and other devices, is perceived as a highly effective way of delivering emotional content on behalf of brands and marketing campaigns (IAB, n.d.-a). There are a variety of online video ad formats that provide both short- and long-form content, and that work well for political and other marketing efforts. Progressive political campaign company Revolution Messaging, which worked for the Sanders campaign, developed what it calls “smart cookies”, which it says take video and other ad placement “to the next level, delivering precision and accuracy” (Revolution Messaging, n.d.). Google’s YouTube has become a key platform for political ads, with the company claiming that voters today make their political decisions not in “living rooms” in front of a television but in what it calls “micro-moments” as people watch mobile video (DoubleClick, n.d.). According to the company’s political ad services research, mobile devices were used in nearly 60 percent of election-related searches during 2016. Content producers (which it calls “Creators”) on YouTube were able to seize on these election micro-moments to influence the political opinions of potential voters aged 18-49 (“Letter from the Guest Editors,” 2016).

Targeted TV advertising

Television advertising, which remains a linchpin of political campaign strategy, is undergoing a major transformation, as digital technologies and “addressable” set-top boxes have turned cable and broadcast TV into powerful micro-targeting machines capable of delivering the same kinds of granular, personalised advertising messages to individual voters that have become the hallmark of online marketing. Political campaigns are at the forefront of using set-top box “second-to-second viewing data”, amplified with other data sources, such as “demographic and cross-platform data from a multitude of sources” via information brokers, to deliver more precise ads (Fourthwall Media, n.d.; Leahey, 2016; NCC Media, n.d.). NCC Media, the US cable TV ad platform owned by Comcast, Cox, and Spectrum, provided campaigns with the ability to target potential voters by integrating its set-top box viewing information with voter and other data from Experian and others (Miller, 2017). Deals between TV viewing-data companies and organisations representing both Republican- and Democratic-leaning groups brought the “targeting capabilities of online advertising to TV ad buys…bringing what was once accessible only to large state-wide or national campaigns to smaller, down-ballot candidates”, explained Advertising Age (Delgado, 2016).
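
A toy sketch of the household-level join this enables appears below; the match keys, fields, and thresholds are invented, and in practice the matching is performed by third parties such as data brokers, using far more elaborate identity-resolution pipelines.

# Minimal sketch of addressable TV targeting: join set-top viewing
# records to a voter file by a household match key, then select
# households for an addressable ad. All keys and scores are invented.

set_top_viewing = {  # household key -> minutes of local news watched per week
    "hh_0001": 310,
    "hh_0002": 15,
    "hh_0003": 220,
}

voter_file = {  # household key -> modelled attributes from broker data
    "hh_0001": {"party_score": 0.48, "turnout_score": 0.91},  # swing, reliable voter
    "hh_0002": {"party_score": 0.95, "turnout_score": 0.85},  # safe supporter
    "hh_0003": {"party_score": 0.52, "turnout_score": 0.40},  # swing, unlikely voter
}

def addressable_audience(viewing, voters, min_minutes=120):
    """Yield swing households that watch enough TV to be reachable."""
    for key, minutes in viewing.items():
        profile = voters.get(key)
        if (profile and minutes >= min_minutes
                and 0.4 <= profile["party_score"] <= 0.6):
            yield key

print(list(addressable_audience(set_top_viewing, voter_file)))
# -> ['hh_0001', 'hh_0003']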

Psychographic, neuromarketing, and emotion-based targeting

Psychographics, mood measurement, and emotional testing have been used by advertisers for many decades, and have also been a core strategy in political campaign advertising (Key, 1974; Packard, 2007; Schiller, 1975). The digital advertising industry has developed these tools even further, taking advantage of advances in neuroscience, cognitive computing, data analytics, behavioural tracking, and other recent developments (Crupi, 2015). Granular messages that trigger a range of emotional and subconscious responses, to better “engage” with individuals and deepen relationships with commercial brands, have become part of the DNA of digital advertising (McEleny, 2016). Facebook (2015), Nielsen, and most leading brands use “neuromarketing” services worldwide, which utilise neuroscience tools to determine the emotional impact of advertising messages. There is a growing field, recently promoted by Google, of “Emotion Analytics” that takes advantage of “new types of data and new tracking methods” to help advertisers “understand the impact of campaigns—and their individual assets—on an emotional level…” (Kelshaw, 2017). Scholars have found that the use of “psychological targeting” in advertising makes it possible to influence large groups of people by “tailoring persuasive appeals to the psychological needs” of specific audiences (Matz et al., 2017). Experian Marketing Services offered political campaigns data that weaved together “demographic, psychographic and attitudinal attributes” to target voters digitally. Experian claims its data enables campaigns to examine a target’s “heart and mind” via attributes related to their “political persona” as well as “attitudes, expectations, behaviours, lifestyles, purchase habits and media preferences” (Experian, 2011, 2015).

One of the most well publicised and controversial players in the 2016 election was Cambridge Analytica (CA), a prominent data analytics and behavioural communications firm that claimed to be a key component in Donald Trump’s victorious campaign. The company used a “five-factor personality model” aimed at determining “the personality of every single adult in the United States of America” (Albright, 2016; Kranish, 2016). Known as OCEAN, the model rated individuals on five key traits: openness, conscientiousness, extroversion, agreeableness, and neuroticism. Drawing from digital data, voter history, and marketing resources supplied by leading companies, including Acxiom, Experian, Nielsen, GOP firm Data Trust, Aristotle, L2, Infogroup, and Facebook, CA was able to develop an “internal database with thousands of data points per person”. Its research also identified key segments that were considered “persuadable”, and shaped the advertising content placed “across multiple digital channels” (with the most effective ads also appearing on television) (Advertising Research Foundation, 2017; Nix, 2016). The strategy was based on developing messages that were tailored to the vulnerabilities of individual voters (Nix, 2016; Schwartz, 2017). CA has become the subject of much scrutiny and debate, and has itself made conflicting claims, with critics raising concerns over its techniques and expressing scepticism about the extent of its impact (Confessore & Hakim, 2017; Karpf, 2017). However, the company’s work was sufficiently convincing to the leading advertising industry research organisation, the Advertising Research Foundation (2017, March), that it honoured the firm with a “Gold” award in 2017 in its “Big Data” category.
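
The general pattern described in accounts of CA’s approach, rating individuals on the five traits and keying ad copy to the dominant one, can be sketched as follows. The scores, thresholds, and message framings are invented for illustration; this is not the firm’s actual model.

# Minimal sketch of OCEAN-style psychographic message matching.
# All trait scores and ad framings below are hypothetical.

OCEAN = ("openness", "conscientiousness", "extroversion",
         "agreeableness", "neuroticism")

# Invented per-trait framings of a single campaign message.
FRAMINGS = {
    "openness": "Imagine a freer future...",
    "conscientiousness": "Protect what you have worked for...",
    "extroversion": "Join your neighbours this Saturday...",
    "agreeableness": "Keep your family and community safe...",
    "neuroticism": "Don't wait until it is too late...",
}

voters = {
    "voter_a": dict(zip(OCEAN, (0.2, 0.9, 0.3, 0.5, 0.4))),
    "voter_b": dict(zip(OCEAN, (0.3, 0.4, 0.2, 0.3, 0.8))),
}

def pick_framing(profile):
    """Key the ad copy to the individual's highest-scoring trait."""
    dominant = max(profile, key=profile.get)
    return dominant, FRAMINGS[dominant]

for voter, profile in voters.items():
    trait, message = pick_framing(profile)
    print(voter, "->", trait, "->", message)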

Discussion

The above description provides only a brief overview of the data-driven marketing system that is already widely in use by candidate and issue campaigns in the United States. The increasingly central role of commercial digital marketing in contemporary political campaigns is reshaping modern-day politics in fundamental ways, altering relationships among candidates, parties, voters, and the media. We acknowledge that digital technologies have made important positive contributions to the vibrancy of the political sphere, including greatly expanding sources of news and information, significantly increasing opportunities for citizen participation, and empowering people from diverse backgrounds to form coalitions and influence policy. The same tools developed for digital marketing have also helped political campaigns substantially improve voter engagement, enhance their capacities for “small-donor” fundraising, and more efficiently generate turnout (Moonshadow Mobile, n.d.; Owen, 2017). However, many of the techniques we address in this paper raise serious concerns—over privacy, discrimination, manipulation, and lack of transparency.

Several recent controversies over the 2016 election have triggered greater public scrutiny of some of the practices that have become standard operating procedure in the digital media and marketing ecosystem. For example, “fake news” has a direct relationship to programmatic advertising, the automated system of “intelligent” buying and selling of ad impressions aimed at individuals and groups (Weissbrot, 2016). These impersonal algorithmic machines are focused primarily on finding and targeting individual consumers wherever they are, often with little regard for the content alongside which the ads appear (Maheshwari & Isaac, 2016). As a consequence, in the middle of the 2016 election, many companies found themselves with ads placed on “sites featuring pornography, pirated content, fake news, videos supporting terrorists, or outlets whose traffic is artificially generated by computer programs”, noted the Wall Street Journal (Nicas, 2016; Vranica, 2017). As a major US publisher explained in the trade publication Advertising Age,

Programmatic’s golden promise was allowing advertisers to efficiently buy targeted, quality, ad placements at the best price, and publishers to sell available space to the highest bidders…. What was supposed to be a tech-driven quality guarantee became, in some instances, a “race to the bottom” to make as much money as possible across a complex daisy chain of partners. With billions of impressions bought and sold every month, it is impossible to keep track of where ads appear, so “fake news” sites proliferated. Shady publishers can put up new sites every day, so even if an exchange or bidding platform identifies one site as suspect, another can spring up (Clark, 2017).

Criticism from news organisations and civil society groups, along with a major backlash by leading global advertisers, led to several initiatives to place safeguards on these practices (McDermott, 2017; Minsker, 2017). For example, in an effort to ensure “brand safety”, leading global advertisers and trade associations demanded changes in how Google, Facebook and others conduct their data and advertising technology operations. As a consequence, new measures have been introduced to enable companies to more closely monitor and control where their ads are placed (Association of National Advertisers, 2017; Benes, 2017; IPA, 2017; Johnson, 2017; Liyakasa, 2017; Marshall, 2017; Timmers, 2015).

The Trump campaign relied heavily on Facebook’s digital marketing system to identify specific voters who were not supporters of Trump in the first place, and to target them with psychographic messaging designed to discourage them from voting (Green & Issenberg, 2016). Campaign operatives openly referred to such efforts as “voter suppression” aimed at three targeted groups: “idealistic white liberals, young women and African Americans”. The operations used standard Facebook advertising tools, including “custom audiences” and so-called “dark posts”—“nonpublic paid posts shown only to the Facebook users that Trump chose” with personalised negative messages (Green & Issenberg, 2016). Such tactics also took advantage of commonplace digital practices that target individual consumers based on factors such as race, ethnicity, and socio-economic status (Google, 2017; Martinez, 2016; Nielsen, 2016). Civil rights groups have had some success in getting companies to change their practices. However, for the most part, the digital marketing industry has not been held sufficiently accountable for its use of race and ethnicity in data-marketing products, and there is a need for much broader, industry-wide policies.

Conclusion

Contemporary digital marketing practices have raised serious issues about consumer privacy over the years (Schwartz & Solove, 2011; Solove & Hartzog, 2014). When applied to the political arena, where political information about individuals is only one of thousands of highly sensitive data points collected and analysed by the modern machinery of data analytics and targeting, the risks are even greater. Yet, in the United States, very little has been done in terms of public policy to provide any significant protections. In contrast to the European Union, where privacy is encoded in law as a fundamental right, privacy regulation in the US is much weaker (Bennett, 1997; Solove & Hartzog, 2014; U.S. Senate Committee on Commerce, Science, and Transportation, 2013). The US is one of the only developed countries without a general privacy law. As a consequence, except in specific areas, such as children’s privacy, consumers in the US enjoy no significant data protection in the commercial marketplace. In the political arena, there is even less protection for US citizens. As legal scholar Ira S. Rubinstein (2014) explains, “the collection, use and transfer of voter data face almost no regulation”. The First Amendment plays a crucial role in this regard, allowing the use of political data as a protected form of speech (Persily, 2016).

The political fallout over how Russian operatives used Facebook, Twitter, and other sites in the 2016 presidential campaign has triggered unprecedented focus on the data and marketing operations of these and other powerful digital media companies. Lawmakers, civil society, and many in the press are calling for new laws and regulations to ensure transparency and accountability for online political ads (“McCain, Klobuchar & Warner Introduce Legislation”, 2017). The U.S. Federal Election Commission, which regulates political advertising, has asked for public comments on whether it should develop new disclosure rules for online ads (Glaser, 2017). In an effort to head off regulation, both Facebook and Twitter have announced their own internal policy initiatives designed to provide the public with more information, including what organisations or individuals paid for political ads and who the intended targets were. These companies have also promised to establish archives for political advertising, which would be accessible to the public (Falck, 2017; Goldman, 2017; Koltun, 2017). The US online advertising industry trade association is urging Congress not to legislate in this area, but to allow the industry to develop new self-regulatory regimes in order to police itself (IAB, 2017). However, relying on self-regulation is not likely to address the problems raised by these practices and may, in fact, compound them. Industry self-regulatory guidelines are typically written in ways that do not challenge many of the prevailing (and problematic) business practices employed by their own members. Nor do they provide meaningful or effective accountability mechanisms (Center for Digital Democracy, 2013; Gellman & Dixon, 2011; Hoofnagle, 2005). It remains to be seen what the outcome of the current policy debate over digital politics will be, and whether any meaningful safeguards emerge from it.

While any regulation of political speech must meet the legal challenges posed by the First Amendment, limiting how mined commercial data can be used in the first place could serve as a critically important new electoral safeguard. Advocacy groups should call for consumer privacy legislation in the US that would place limits on what data the commercial online advertising industry can gather, and on how that information can be used. Americans currently have no way to decide for themselves (such as via an opt-in) whether data collected on their finances, health, geolocation, race, or ethnicity can be used for digital ad profiling. Certain online advertising practices, such as the use of psychographics and lookalike modelling, also call for rules to ensure they are used fairly.

Without effective interventions, the campaign strategies and practices we have documented in this paper will become increasingly sophisticated in coming elections, most likely with little oversight, transparency, or public accountability. The digital media and marketing industry will continue its research and development efforts, with an intense focus on harnessing the capabilities of new technologies, such as artificial intelligence, virtual reality, and cognitive computing, for advertising purposes. Advertising agencies are already applying some of these advances to the political field (Facebook, 2016; Google, n.d.-a; Havas Cognitive, n.d.). Academic scholars and civil society organisations will need to keep a close watch on all these developments, in order to understand fully how these digital practices operate as a system, and how they are influencing the political process. Only through effective public policies and enforceable best practices can we ensure that digital technology enhances democratic institutions, without undermining their fundamental goals.

References

Advertising Research Foundation. (2017, March 21). Cambridge Analytica receives top honor in the 2017 ARF David Ogilvy Awards. Retrieved from http://www.prnewswire.com/news-releases/cambridge-analytica-receives-top-honor-in-the-2017-arf-david-ogilvy-awards-300426997.html

Advertising Research Foundation. (2017). Cambridge Analytica: Make America number one. Case study. Retrieved from https://thearf.org/2017-arf-david-ogilvy-awards/winners/

Albright, J. (2016, November 11). What’s missing from the Trump election equation? Let’s start with military-grade psyops. Medium. Retrieved from https://medium.com/@d1gi/whats-missing-from-the-trump-election-equation-let-s-start-with-military-grade-psyops-fa22090c8c17

Allen, R. (2016, February 8). What is programmatic marketing? Smart Insights. Retrieved from http://www.smartinsights.com/internet-advertising/internet-advertising-targeting/what-is-programmatic-marketing/

Association of National Advertisers. (2017, March 24). Statement from ANA CEO on Suspending Advertising on YouTube. Retrieved from http://www.ana.net/blogs/show/id/mm-blog-2017-03-statement-from-ana-ceo

Barlow, J. P. (1996, February 8). A declaration of the independence of cyberspace. Electronic Frontier Foundation. Retrieved from https://www.eff.org/cyberspace-independence

Benes, R. (2017, August 29). Ad buyers blast Facebook Audience Network for placing ads on Breitbart. Digiday. Retrieved from https://digiday.com/marketing/ad-buyers-blast-facebook-audience-network-placing-ads-breitbart/

Bennett, C. J. (1997). Convergence revisited: Toward a global policy for the protection of personal data? In P. Agre & M. Rotenberg (Eds.), Technology and privacy: the new landscape (pp. 99–124). Cambridge, MA: MIT Press.

Bennett, C. J. (2016). Voter databases, micro-targeting, and data protection law: Can political parties campaign in Europe as they do in North America? International Data Privacy Law, 6(4), 261-275. doi:10.1093/idpl/ipw021 Retrieved from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2776299

Bernie Sanders: 2016 presidential campaign. (2016). Facebook Business. Retrieved from https://www.facebook.com/business/success/bernie-sanders

BlueKai. (2011). Whitepaper: Data management platforms demystified. Retrieved from http://www.bluekai.com/files/DMP_Demystified_Whitepaper_BlueKai.pdf

Bond, S. (2017, March 14). Google and Facebook build digital ad duopoly. Financial Times. Retrieved from https://www.ft.com/content/30c81d12-08c8-11e7-97d1-5e720a26771b

Briscoe, G. (2017, March 7). How political digital advertising lessons of 2016 applies to 2017. Centro. Retrieved from https://www.centro.net/blog/political-digital-advertising-lessons-2016-applies-2017/

Center for Digital Democracy. (2013, May 29). U.S. online data trade groups spin digital fairy tale to USTR about US consumer privacy prowess—CDD says privacy out of bounds in TTIP. Retrieved from http://www.democraticmedia.org/us-online-data-trade-groups-spin-digital-fairy-tale-ustr-about-us-consumer-privacy-prowess-cdd-say-0

Chahal, G. (2013, May). Election 2016: Marriage of big data, social data will determine the next president. Wired. Retrieved from https://www.wired.com/insights/2013/05/election-2016-marriage-of-big-data-social-data-will-determine-the-next-president/

Chavez, T. (2017, May 17). Krux is now Salesforce DMP. Salesforce Blog. Retrieved from https://www.salesforce.com/blog/2017/05/krux-is-now-salesforce-dmp.html

Chester, J. (2007). Digital destiny: New media and the future of democracy. New York: The New Press.

Chester, J. (2017, January 6). Our next president: Also brought to you by big data and digital advertising. Moyers and Company. Retrieved from http://billmoyers.com/story/our-next-president-also-brought-to-you-by-big-data-and-digital-advertising/

Clark, J. (2017, April 25). Fake news: New name, old problem. Can premium programmatic help? Advertising Age. Retrieved from http://adage.com/article/digitalnext/fake-news-problem-premium-programmatic/308774/

Confessore, N., & Hakim, D. (2017, March 6). Data firm says “secret sauce” aided Trump; many scoff. New York Times. Retrieved from https://www.nytimes.com/2017/03/06/us/politics/cambridge-analytica.html?_r=0

Crupi, A. (2015, May 27). Nielsen buys neuromarketing research company Innerscope. Advertising Age. Retrieved from http://adage.com/article/media/nielsen-buys/298771/

Delgado, M. (2016, April 28). Experian launches audience management platform to make programmatic TV a reality across advertising industry. Experian. Retrieved from http://www.experian.com/blogs/news/2016/04/28/experian-launches-audience-management-platform/

DoubleClick. (n.d.). DoubleClick campaign manager. Retrieved from https://www.doubleclickbygoogle.com/solutions/digital-marketing/campaign-manager/

Drawbridge. (n.d.). Cross-device election playbook. Retrieved from https://drawbridge.com/c/vote

Edelman Digital (2016, April 1). How digital is shaking up presidential campaigns. Retrieved from https://www.edelman.com/post/how-digital-is-shaking-up-presidential-campaigns/

Ellwanger, S. (2016, September 15). Advance Local’s Sutton sees big demand for digital advertising in politics. Beet.TV. Retrieved from http://www.beet.tv/2016/09/jeff-sutton.html

eMarketer. (2017, April 12). Worldwide ad spending: The eMarketer forecast for 2017.

Experian. (2015, March). Audience guide. Retrieved from https://www.experian.com/assets/marketing-services/product-sheets/attitudinal-and-psychographic-audiences.pdf

Experian. (2011, December). Political affiliation and beyond. Retrieved from https://www.experian.com/assets/marketing-services/product-sheets/das-political-data-sheet.pdf

Facebook. (2016, June 16). Inside marketing science at Facebook. Retrieved from https://www.facebook.com/notes/facebook-careers/inside-marketing-science-at-facebook/936165389815348/

Facebook. (n.d.-a). Activate. Facebook Elections. Retrieved from https://politics.fb.com/ad-campaigns/activate/

Facebook. (n.d.-b). Advanced strategies for performance marketers. Facebook Business. Retrieved from https://www.facebook.com/business/a/performance-marketing-strategies; https://www.facebook.com/business/help/202297959811696

Facebook. (n.d.-c). Impact. Facebook Elections. Retrieved from https://politics.fb.com/ad-campaigns/impact/

Facebook. (n.d.-d). Mobilize your voters. Facebook Business. Retrieved from https://www.facebook.com/business/a/mobilizevoters

Facebook. (n.d.-e). Toomey for Senate. Facebook Business. Retrieved from https://www.facebook.com/business/success/toomey-for-senate

Facebook. (n.d.-f). Turnout. Facebook Elections. Retrieved from https://politics.fb.com/ad-campaigns/turnout/

Facebook IQ. (n.d.-a). Unlock the insights that matter. Retrieved from https://www.facebook.com/iq

Facebook IQ. (n.d.-b). Vertical insights. Retrieved from https://www.facebook.com/iq/vertical-insights

Falck, B. (2017, October 24). New transparency for ads on Twitter. Twitter Blog. Retrieved from https://blog.twitter.com/official/en_us/topics/product/2017/New-Transparency-For-Ads-on-Twitter.html

Fourthwall Media. (n.d.). Solutions: Analytics firms. Retrieved from http://www.fourthwallmedia.tv/analytics-firms

Gellman, R., & Dixon, P. (2011, October 14). Many failures: A brief history of privacy self-regulation in the United States. World Privacy Forum. Retrieved from http://www.worldprivacyforum.org/wp-content/uploads/2011/10/WPFselfregulationhistory.pdf

Glaser, A. (2017, October 17). Should political ads on Facebook include disclaimers? Slate. Retrieved from http://www.slate.com/articles/technology/future_tense/2017/10/the_fec_wants_your_opinion_on_transparency_for_online_political_ads.html

Goldman, R. (2017, October 27). Update on our advertising transparency and authenticity efforts. Facebook Newsroom. Retrieved from https://newsroom.fb.com/news/2017/10/update-on-our-advertising-transparency-and-authenticity-efforts/

Google. (2017, May 1). Marketing in a multicultural world: 2017 Google's marketing forum. Google Agency Blog. Retrieved from https://agency.googleblog.com/2017/05/marketing-in-multicultural-world-2017.html

Google. (n.d.-a). Google NYC algorithms and optimization. Research at Google. Retrieved from https://research.google.com/teams/nycalg/

Google. (n.d.-b). Insights you want. Data you need. Think with Google. Retrieved from https://www.thinkwithgoogle.com

Green, J., & Issenberg, S. (2016, October 27). Inside the Trump bunker, with days to go. Bloomberg Businessweek. Retrieved from https://www.bloomberg.com/news/articles/2016-10-27/inside-the-trump-bunker-with-12-days-to-go

Havas Cognitive. (n.d.). EagleAi has landed. Retrieved from http://cognitive.havas.com/case-studies/eagle-ai

Hoofnagle, C. (2005, March 4). Privacy self-regulation: A decade of disappointment. Electronic Privacy Information Center. Retrieved from http://epic.org/reports/decadedisappoint.html

IPA. (2017, August). IPA issues direct call to action to Google YouTube and Facebook to clean up safety, measurement and viewability of their online video. Retrieved from http://www.ipa.co.uk/news/ipa-issues-direct-call-to-action-to-google-youtube-and-facebook-to-clean-up-safety,-measurement-and-viewability-of-their-online-video-#.Wa126YqQzQj

IAB. (2017, October 24). IAB President & CEO, Randall Rothenberg testifies before Congress on digital political advertising. Retrieved from https://www.iab.com/news/read-the-testimony-from-randall-rothenberg-president-and-ceo-iab/

IAB. (n.d.-a). The digital video advertising landscape. Retrieved from https://video-guide.iab.com/digital-video-advertising-landscape

IAB. (n.d.-b). Global digital advertising revenue reports. Retrieved from https://www.iab.com/global/

IAB. (n.d.-c). Glossary: Digital media planning & buying. Retrieved from https://www.iab.com/wp-content/uploads/2016/04/Glossary-Formatted.pdf

IAB. (n.d.-d). IAB internet advertising revenue report conducted by PricewaterhouseCoopers (PWC). Retrieved from https://www.iab.com/insights/iab-internet-advertising-revenue-report-conducted-by-pricewaterhousecoopers-pwc-2

Jamieson, A. (2016, April 5). The first Snapchat election: How Bernie and Hillary are targeting the youth vote. The Guardian. Retrieved from https://www.theguardian.com/technology/2016/apr/05/snapchat-election-2016-sanders-clinton-youth-millennial-vote

Jamieson, K. H. (1996). Packaging the presidency: A history and criticism of presidential campaign advertising. New York: Oxford University Press.

Johnson, L. (2017, April 16). How brands and agencies are fighting back against Facebook and Google’s measurement snafus. Adweek. Retrieved from http://www.adweek.com/digital/how-brands-and-agencies-are-fighting-back-against-facebooks-and-googles-measurement-snafus/

Karpf, D. (2017, February 1). Will the real psychometric targeters please stand up? Civicist. Retrieved from https://civichall.org/civicist/will-the-real-psychometric-targeters-please-stand-up/

Karpf, D. (2016, October 31). Preparing for the campaign tech bullshit season. Civicist. Retrieved from https://civichall.org/civicist/preparing-campaign-tech-bullshit-season/

Kaye, K. (2015, June 3). Programmatic buying coming to the political arena in 2016. Advertising Age. Retrieved from http://adage.com/article/digital/programmatic-buying-political-arena-2016/298810/

Kaye, K. (2016, April 15). RNC'S voter data provider teams up with Google, Facebook and other ad firms. Advertising Age. Retrieved from http://adage.com/article/campaign-trail/rnc-voter-data-provider-joins-ad-firms-including-facebook/303534/

Kaye, K. (2016, July 13). Democrats' data platform opens access to smaller campaigns. Advertising Age. Retrieved from http://adage.com/article/campaign-trail/democratic-data-platform-opens-access-smaller-campaigns/304935/

Kelshaw, T. (2017, August). Emotion analytics: A powerful tool to augment gut instinct. Think with Google. Retrieved from https://www.thinkwithgoogle.com/nordics/article/emotion-analytics-a-powerful-tool-to-augment-gut-instinct/

Key, W. B. (1974). Subliminal Seduction. New York: Berkeley Press.

Koltun, N. (2017, October 27). Facebook significantly ramps up transparency efforts to cover all ads. Mobile Marketer. Retrieved from https://www.mobilemarketer.com/news/facebook-significantly-ramps-up-transparency-efforts-to-cover-all-ads/508380/

Kranish, M. (2016, October 27). Trump’s plan for a comeback includes building a “psychographic” profile of every voter. The Washington Post. Retrieved from https://www.washingtonpost.com/politics/trumps-plan-for-a-comeback-includes-building-a-psychographic-profile-of-every-voter/2016/10/27/9064a706-9611-11e6-9b7c-57290af48a49_story.html?utm_term=.28322875475d

Kreiss, D. (2016). Prototype politics: Technology-intensive campaigning and the data of democracy. New York: Oxford University Press.

Kreiss, D., & McGregor, S. C. (2017). Technology firms shape political communication: The work of Microsoft, Facebook, Twitter, and Google with campaigns during the 2016 U.S. presidential cycle. Political Communication, 1-23. doi:10.1080/10584609.2017.1364814

L2. (n.d.-a). Digital advertising device targeting. Retrieved from http://www.l2political.com/products/data/digital-advertising/device-targeting/

L2. (n.d.-b). L2 voter file enhancements. Retrieved from http://www.l2political.com/products/data/voter-file-enhancements/

Leahey, L. (2016, July 15). (Ad) campaign season: How political advertisers are using data and digital to move the needle in 2016. Cynopsis Media. Retrieved from http://www.cynopsis.com/cyncity/ad-campaign-season-how-political-advertisers-are-using-data-and-digital-to-move-the-needle-in-2016/

Letter from the guest editors: Julie Hootkin and Frank Luntz. (2016, June). Think with Google. Retrieved from https://www.thinkwithgoogle.com/marketing-resources/guest-editors-political-consultants-julie-hootkin-frank-luntz

Levine, B. (2016, December 2). Report: What is data onboarding, and why is it important to marketers? Martech Today. Retrieved from https://martechtoday.com/report-data-onboarding-important-marketers-192924

LiveRamp (2015, August 5). Look-alike modeling: The what, why, and how. LiveRamp Blog. Retrieved from http://liveramp.com/blog/look-alike-modeling-the-what-why-and-how/

Liyakasa, K. (2017, August 24). Standard media index: YouTube’s direct ad spend down 26% in Q2 amid brand safety crackdown. Ad Exchanger. Retrieved from https://adexchanger.com/ad-exchange-news/standard-media-index-youtubes-direct-ad-spend-26-q2-amid-brand-safety-crackdown/

Maheshwari, S., & Isaac, M. (2016, November 6). Facebook will stop some ads from targeting users by race. New York Times. Retrieved from https://www.nytimes.com/2016/11/12/business/media/facebook-will-stop-some-ads-from-targeting-users-by-race.html?mcubz=0

Marshall, J. (2017, January 30). IAB chief calls on online ad industry to fight fake news. Wall Street Journal. Retrieved from https://www.wsj.com/articles/iab-chief-calls-on-online-ad-industry-to-fight-fake-news-1485812139

Martinez, C. (2016, October 28). Driving relevance and inclusion with multicultural marketing. Facebook Business. Retrieved from https://www.facebook.com/business/news/driving-relevance-and-inclusion-with-multicultural-marketing

Matz, S. C., Kosinski, M., Nave, G., & Stillwell, D. J. (2017). Psychological targeting as an effective approach to digital mass persuasion. Proceedings of the National Academy of Sciences (PNAS Early Edition). Retrieved from http://www.michalkosinski.com/home/publications

McCain, Klobuchar & Warner introduce legislation to protect integrity of U.S. elections & provide transparency of political ads on digital platforms. (2017, October 19). Retrieved from https://www.mccain.senate.gov/public/index.cfm/2017/10/mccain-klobuchar-warner-introduce-legislation-to-protect-integrity-of-u-s-elections-provide-transparency-of-political-ads-on-digital-platforms

McDermott, M. J. (2017, May 12). Brand safety issue vexes marketers. ANA Magazine. Retrieved from http://www.ana.net/magazines/show/id/ana-2017-05-brand-safety-issue-vexes-marketers

McEleny, C. (2016, October 16). Ford and Xaxis score in Vietnam using emotional triggers around the UEFA Champions League. The Drum. Retrieved from http://www.thedrum.com/news/2016/10/18/ford-and-xaxis-score-vietnam-using-emotional-triggers-around-the-uefa-champions

Miller, S. J. (2017, March 24). Local cable and the future of campaign media strategy. Campaigns & Elections. Retrieved from https://www.campaignsandelections.com/campaign-insider/local-cable-and-the-future-of-campaign-media-strategy

Minsker, M. (2017, August 30). Advertisers want programmatic tech players to fight fake news. eMarketer. Retrieved from https://www.emarketer.com/Article/Advertisers-Want-Programmatic-Tech-Players-Fight-Fake-News/1016406

Montgomery, K. C. (2007). Generation digital: Politics, commerce, and childhood in the age of the internet. Cambridge, MA: MIT Press.

Montgomery, K. C. (2011) Safeguards for youth in the digital marketing ecosystem. In D. G. Singer and J. L. Singer (Eds.), Handbook of children and the media (2nd ed.), pp. 631-648. Thousand Oaks, CA: Sage Publications.

Montgomery, K. C., Chester, J., & Kopp, K. (2017). Health wearable devices in the big data era: Ensuring privacy, security, and consumer protection. Center for Digital Democracy. Retrieved from https://www.democraticmedia.org/sites/default/files/field/public/2016/aucdd_wearablesreport_final121516.pdf

Moonshadow Mobile. (n.d.). Ground Game is a groundbreaking battle-tested mobile canvassing app. Retrieved from http://www.moonshadowmobile.com/products/ground-game-mobile-canvassing/

NCC Media. (n.d.). The essential guide to political advertising. Retrieved from https://nccmedia.com/PoliticalEssentialGuide/html5/index.html?page=1&noflash

Nicas, J. (2016, December 8). Fake-news sites inadvertently funded by big brands. Wall Street Journal. Retrieved from https://www.wsj.com/articles/fake-news-sites-inadvertently-funded-by-big-brands-1481193004

Nielsen. (2016, October 4). Nielsen and Ethnifacts introduce intercultural affinity segmentation to drive deeper understanding of total U.S. cultural landscape for brand marketers. Retrieved from http://www.nielsen.com/us/en/press-room/2016/nielsen-and-ethnifacts-introduce-intercultural-affinity-segmentation.html

Nix, A. (2016, September). The Power of Big Data and Psychographics in the Electoral Process. Presented at the Concordia Annual Summit, New York. Retrieved from https://www.youtube.com/watch?v=n8Dd5aVXLCc

O’Hara, C. (2016, January 25). Data triangulation: How second-party data will eat the digital world. Ad Exchanger. Retrieved from http://adexchanger.com/data-driven-thinking/data-triangulation-how-second-party-data-will-eat-the-digital-world/

Owen, D. (2017). New Media and Political Campaigns. New York: Oxford University Press.

Packard, V. (2007). The Hidden Persuaders (reissue ed.). New York: Ig Publishing.

Persily, N. (2016, August 10). Facebook may soon have more power over elections than the FEC. Are we ready? Washington Post. Retrieved from https://www.washingtonpost.com/news/in-theory/wp/2016/08/10/facebook-may-soon-have-more-power-over-elections-than-the-fec-are-we-ready/?utm_term=.ed10eef711a1

Political campaigns in 2016: The climax of digital advertising. (2016, May 10). Media Radar. Retrieved from https://www.slideshare.net/JesseSherb/mediaradarwhitepaperdigitalpoliticalfinpdf

Regan, J. (2016, July 29). Donkeys, elephants, and DMPs. Merkle. Retrieved from https://www.merkleinc.com/blog/donkeys-elephants-and-dmps

Regan, T. (2016, January). Media planning toolkit: Programmatic planning. WARC. Retrieved from https://www.warc.com/content/article/bestprac/media_planning_toolkit_programmatic_planning/106391

Revolution Messaging. (n.d.). Smart cookies. Retrieved from https://revolutionmessaging.com/marketing/smart-cookies

Rubinstein, I. S. (2014) Voter privacy in the age of big data. Wisconsin Law Review. Retrieved from http://wisconsinlawreview.org/wp-content/uploads/2015/02/1-Rubinstein-Final-Online.pdf

Sabato, L. J. (1981). The rise of political consultants: New ways of winning elections. New York: Basic Books.

Salesforce DMP. (2017, October 20). Third-party data marketplace. Retrieved from https://konsole.zendesk.com/hc/en-us/articles/217592967-Third-Party-Data-Marketplace

Schiller, H. I. (2007). The Mind Managers. New York: Beacon Press.

Schuster, J. (2015, October 7) Political campaigns: The art and science of reaching voters. LiveRamp. Retrieved from https://liveramp.com/blog/political-campaigns-the-art-and-science-of-reaching-voters/

Schwartz, M. (2017, March 30). Facebook failed to protect 30 million users from having their data harvested by Trump campaign affiliate. The Intercept. Retrieved from https://theintercept.com/2017/03/30/facebook-failed-to-protect-30-million-users-from-having-their-data-harvested-by-trump-campaign-affiliate/

Schwartz, P. M., & Solove, D. J. (2011). The PII problem: Privacy and a new concept of personally identifiable information. New York University Law Review, 86,1814-1895. Retrieved from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1909366

Singer, N. (2012, October 13). Do not track? Advertisers say “don’t tread on us.” New York Times. Retrieved from http://www.nytimes.com/2012/10/14/technology/do-not-track-movement-is-drawing-advertisers-fire.html?_r=0

Smith, C. (2014, March 20). Reinventing social media: Deep learning, predictive marketing, and image recognition will change everything. Business Insider. Retrieved from http://www.businessinsider.com/social-medias-big-data-future-2014-3

Solon, O., & Siddiqui, S. (2017, September 3). Forget Wall Street—Silicon Valley is the new political power in Washington. The Guardian. Retrieved from https://www.theguardian.com/technology/2017/sep/03/silicon-valley-politics-lobbying-washington

Solove, D. J., & Hartzog, W. (2014). The FTC and the new common law of privacy. Columbia Law Review, 114, 583-677. Retrieved from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2312913

Son, S., Kim, D., & Shmatikov, V. (2016). What mobile ads know about mobile users. NDSS ’16. Retrieved from http://www.cs.cornell.edu/~shmat/shmat_ndss16.pdf

Stanford, K. (2016, March). How political ads and video content influence voter opinion. Think with Google. Retrieved from https://www.thinkwithgoogle.com/marketing-resources/content-marketing/political-ads-video-content-influence-voter-opinion/

Stirista. (n.d.). Political data. Retrieved from https://www.stirista.com/what-we-do/data/political-data

Timmers, B. (2015, December). Everything you wanted to know about fake news. IAS Insider. Retrieved from https://insider.integralads.com/everything-wanted-know-fake-news/

Tufekci, Z. (2014, July 7). Engineering the public: Big data, surveillance and computational politics. First Monday, 19(7). Retrieved from http://firstmonday.org/article/view/4901/4097

U.S. Senate Committee on Commerce, Science, and Transportation. (2013, December 18). A review of the data broker industry: Collection, use, and sale of consumer data for marketing purposes. Staff report for Chairman Rockefeller. Retrieved from https://www.commerce.senate.gov/public/_cache/files/0d2b3642-6221-4888-a631-08f2f255b577/AE5D72CBE7F44F5BFC846BECE22C875B.12.18.13-senate-commerce-committee-report-on-data-broker-industry.pdf

Vranica, S. (2017, June 18). Advertisers try to avoid the web’s dark side, from fake news to extremist videos. Wall Street Journal. Retrieved from https://www.wsj.com/articles/advertisers-try-to-avoid-the-webs-dark-side-from-fake-news-to-extremist-videos-1497778201

WARC. (2017, December). Toolkit 2018: How brands can respond to the year's biggest challenges. Retrieved from https://www.warc.com/content/article/Toolkit_2018_How_brands_can_respond_to_the_yearamp;39;s_biggest_challenges/117399

Warrington, G. (2015, November 18). Tiles, proxies and exact places: Building location audience profiles. LinkedIn. Retrieved from https://www.linkedin.com/pulse/tiles-proxies-exact-places-building-location-audience-warrington

Weissbrot, A. (2016, June 20). MAGNA and Zenith: Digital growth fueled by programmatic, mobile and video. Ad Exchanger. Retrieved from https://adexchanger.com/agencies/magna-zenith-digital-growth-fueled-programmatic-mobile-video/

What is predictive intelligence and how it’s set to change marketing in 2016. (2016, February 11). Smart Insights. Retrieved from http://www.smartinsights.com/digital-marketing-strategy/predictive-intelligence-set-change-marketing-2016/

Winterberry Group. (2016, November). The state of consumer data onboarding: Identity resolution in an omnichannel environment. Retrieved from http://www.winterberrygroup.com/our-insights/state-consumer-data-onboarding-identity-resolution-omnichannel-environment

Xaxis. (2015, November 9). Xaxis brings programmatic to political advertising with Xaxis Politics, first ad targeting solution to leverage offline voter data for reaching U.S. voters across all digital channels. Retrieved from https://www.businesswire.com/news/home/20151109006051/en/Xaxis-Brings-Programmatic-Political-Advertising-Xaxis-Politics

Yatrakis, C. (2016, June 28). The Trade Desk partner spotlight: Q&A with Factual. Factual. Retrieved from https://www.factual.com/blog/partner-spotlight

YouTube. (2017). The presidential elections on YouTube. Retrieved from https://think.storage.googleapis.com/docs/The_Presidential_Elections_On_YouTube.pdf

Footnotes

1. The research for this paper is based on industry reports, trade publications, and policy documents, as well as review of relevant scholarly and legal literature. The authors thank Gary O. Larson and Arthur Soto-Vasquez for their research and editorial assistance.

Micro-targeting, the quantified persuasion


Disclaimer: This guest essay in the Special issue on political micro-targeting has not been peer reviewed. It is treated here as a reflection.

During the past three decades there has been a persistent, and dark, narrative about political micro-targeting. Phil Howard (2006) vividly described a present and future where politicians would use data to “redline” the citizens who received political information, manufacturing attitudes and beliefs and leading to “managed citizenship”. In the years since Howard wrote his monumental book, the concerns over micro-targeting have only grown. The explosion of data about the electorate in Western democracies such as Australia, Canada, the UK, and the United States (Howard & Kreiss, 2010) has triggered deep unease among scholars and privacy advocates alike. Sophisticated voter databases now contain everything from political party data gleaned through millions of interactions with the electorate and public data obtained from state agencies to commercial marketing information that is bought and sold on international open markets. The 2016 US presidential election revealed the new ways that individuals can be profiled, identified, found, tracked, and messaged on social media platforms such as Facebook and YouTube, processes that these companies themselves help facilitate (Kreiss & McGregor, 2017).

While it might seem that the micro-targeting practices of campaigns have massive, and un-democratic, electoral effects, decades of work in political communication should give us pause. Although we lack the first-hand data from political campaigns, consultancies, and technology firms such as Facebook to know for sure, previous research tells us that people are seldom the unwitting dupes of strategic political communication. Partisanship shapes much of how people vote, and decades of research reveal that it is very hard to change people’s minds through campaigns (Kalla & Broockman, 2017; Henderson & Theodoridis, 2017). This has large implications for the effectiveness of micro-targeting. For example, Eitan Hersh’s (2015) deeply and carefully researched, ground-breaking study using data from a major vendor to the US Democratic Party found that campaign practitioners find it very hard to persuade voters. This is because political practitioners lack reliable and identifiable data on cross-pressured and low-information voters. Given this, campaigns often focus on known voters rather than risk targeting and messaging the wrong people. Indeed, Hersh reveals that despite hundreds of data points on members of the electorate, it is a small cluster of publicly available data – such as turnout history, party identification, and demographic data – that matters far more for predicting vote choice.

The lesson is that micro-targeted campaign ads are likely most effective in the short run when campaigns use them to mobilise identified supporters or partisans, spurring volunteerism, donations, and ultimately turnout – hardly the image of a managed, manipulated, or duped public (Baldwin-Philippi, 2017). Ironically, campaigns often use micro-targeting to further these forms of democratic participation, making appeals to targeted subsets of voters on the basis of the parties and issues they already care about. Campaigns also use micro-targeting in the attempt to decrease voter turnout on the opposing side, sending negative messages to the opposition’s likely voters in the hope that this will make them less excited to turn out for their candidate. But two decades of social science suggest that this can be a risky strategy, given that partisans can rally behind a candidate who is being attacked (Dunn & Tedesco, 2017).

What explains the outsized concerns about micro-targeting in the face of the generally thin evidence of its widespread and pernicious effects? This essay argues that we have anxieties about micro-targeting because we have anxieties about democracy itself. Or, to put it differently, scholars often hold up an idealised vision of democracy as the standard against which to judge all political communication. In a world where many scholars and journalists both hope and ardently believe, in the face of all available evidence, that members of the public are fundamentally rational, seek to be informed, and consider the general interest, micro-targeting appears to be manipulative, perverting the capacity of citizens to reason about politics. Meanwhile, for many scholars and journalists, political elites are fundamentally opposed to members of the public, seeking domination or control as opposed to representing their interests. In this world, much of the concern over micro-targeting reads as a classic “third-person effect”, where scholars and journalists presume that members of the public are more affected by campaign advertising than they themselves are.

And yet, this idealised version is not how democracy really is, nor necessarily how it should be. The argument of this brief essay is that, as a quantifiable practice premised on strategically identifying targeted groups of voters and crafting messages designed to appeal to them, micro-targeting is broadly reflective of the fact that democracy is often partisan, identity-based, and agonistic – in short, political. Following communication scholar Michael Schudson’s (1986) study of commercial advertising three decades ago, this essay asks the following questions in the US context: what is the work that micro-targeting does, where does it fit into the political culture, and what kind of political culture has given rise to it? I argue that micro-targeting is only imaginable, and efficacious, in a polity that prizes partisan mobilisation, group solidarity, agonism, and the clash of opposing moral views in its politics. Following from this, I suggest a different set of democratic concerns about micro-targeting: that it has the cultural power, over time, to create a set of representations of democracy that undermines the legitimacy of political representation, pluralism, and political leadership.

The cultural work of micro-targeting

To analyse the role that micro-targeting plays in politics, first we need to understand how and why citizens vote. In their recent book Democracy for Realists, political scientists Christopher Achen and Larry Bartels (2016) offer a sustained critique of what they call the “folk theory” of American democracy. According to this “folk theory” that underlies conceptions of popular sovereignty, Americans have identifiable and consistent policy preferences. During the course of an election, they inform themselves about the policy positions of candidates and make rational decisions as to which best represents their preferences, which in turn leads parties to be responsive to the wishes of the public.

As Achen and Bartels (ibid.) argue, this is a fiction. They outline a “group theory of democracy”, in which social attachments and group identification largely determine both partisanship and vote choice. Achen and Bartels argue that people see themselves in relation to the groups that they belong to and those that they do not. Identity is so strong, in this account, that it conditions both what partisans believe parties stand for and their interpretation of facts (ibid., 267; see also Prasad et al., 2009). As Achen and Bartels demonstrate, this identity and group theory of politics draws expansive empirical support from seventy years of research showing, time and again, that people have little knowledge about politics and yet detailed understandings of the social groups that the Democratic and Republican parties are perceived to represent. It is in this context that candidate performances of partisan and social identity become more important for electoral outcomes than the informational content of journalism. Events and candidates make identity more or less salient and strengthen group attachments. During campaigns, parties and candidates work to remind voters of their partisan and social attachments and strengthen them so that voters are mobilised to participate in the election. As Achen and Bartels (ibid., 311) argue:

Political campaigns consist in large part of reminding voters of their partisan identities – “mobilizing” them to support their group at the polls. Formal communications by the groups and informal communication networks among group members also help citizens understand how their identity groups connect to the candidates and parties.

In this context, what is important about political campaigns is this work of communicating the partisan and social identities of candidates to voters. Candidates and their campaigns use micro-targeting, along with other strategic communications, to accomplish this. Micro-targeting is both a campaign practice of using data to craft and deliver strategic messages to subsets of the electorate (historically across many different media), and a genre of campaign communications that, much like political advertising more broadly, reinforces and amplifies the partisan, group, and identity conflicts at the heart of US politics. There has been extensive research on how micro-targeting works as a data-driven and quantifiable practice (see, for instance, Karpf, 2016). What these messages do as a genre of campaign communications, however, has received considerably less scrutiny. Drawing on my own previous work in the US context (Kreiss, 2016), the first argument that I develop here is that micro-targeting furthers the mobilisation that Achen and Bartels (2016) identify, primarily by reminding citizens of, and shoring up, their partisan and group identities. I then discuss the potential democratic consequences of this in a more expansive, cultural sense.

Micro-targeted ads have an aesthetic of what I call “political realism”, building on Michael Schudson’s work on commercial advertising. In Advertising, The Uneasy Persuasion, Schudson (1986) compared commercial advertising with Soviet realist art (the official state-sanctioned art of the former Soviet Union), arguing that it offers a form of “commercial realism”. As commercial realism, commercial advertising “simplifies and typifies” (215); advertising is abstracted, presenting the world as it should be, not as it is, and it exemplifies individuals as members of larger social groups. As it does so, “the aesthetic of capitalist realism — without a masterplan of purposes — glorifies the pleasures and freedoms of consumer choice in defense of the virtues of private life and material ambitions.” (ibid., 218) 

We can see micro-targeted digital advertising as a cultural form of “political realism” that reflects, reinforces, and celebrates a political culture, at least in the United States, premised on identity, moral certainty, and mobilisation – not weighty considerations of the general interest or deliberation. Micro-targeted digital content shares a few central characteristics, which I adapt here for politics from Schudson’s (1986) work on commercial realism:

  • It presents social and political life in simplified and typified ways;
  • It presents life as it should become, or for negative ads, as it must not become;
  • It presents reality in its larger social significance, not in its actual workings;
  • It presents progress towards the future and positive social struggle, or for negative ads, the ideas of the other party as negative steps back into the past. It carries a message of optimism for one partisan side, and takes a stance of pessimism towards political opponents; and,
  • It tells us that political conflict is necessary, a clash of different groups and worldviews; moral certainty is assured, political identity is certain, and political agonism is reality.

For example, micro-targeted ads present social life in simplified ways, not presenting actual lives but abstract, stylised ones designed to be rife with larger meaning. A depiction of a farmer’s daily work in a campaign ad, for instance, is not about actual events or daily labours, but is meant to be an abstract, simplified symbol of the American values of hard work and cultivation of the earth, and a celebration of ordinary people in a democratic society. The farmer here is typified; the campaign ad is not about a real person who farms. The farmer is a representation of the larger social categories, values, and ideas the ad presents as desirable or worthy of emulation for all Americans. At the same time, the two dominant US political parties often stress different themes in their ads, a recognition that they have different visions of what life should become, what progress is, and what worldviews and moral claims the public should embrace. While doing so, political micro-targeting is inherently pluralist. It reflects a basic claim that “everyone has interests to defend and opinions to advance about his or her own good, or the group’s good, or the public good, and every interest was at least potentially a political interest group.” (Rosenblum, 2010, 259)

While it is impossible to know the full range of micro-targeted ads run during the course of an election cycle, consider some examples culled from the non-profit, non-partisan Democracy in Action website, which chronicles US campaigns, and the Hillary for America Design 2016 website, which compiles the campaign’s creative design work. To start, much of political micro-targeting is about building campaign databases: finding supporters online, signing them up for the cause through email, and repeatedly messaging them to enlist them as volunteers or donors.

Take, for instance, the declarative “I am a Hillary Voter” digital ad (see Figure 1), presumably, and logically, directed at the candidate’s supporters (although we can never know for sure). What separates micro-targeted political ads from their mass broadcast counterparts is the data that lies behind them: campaigns can explicitly try to find and send messages to their partisan audiences or intra-party supporters, linking the names in their databases to identities online or on social media platforms such as Facebook. Campaigns can also try to find additional partisans and supporters by starting with the online behaviours, lifestyles, or likes or dislikes of known audiences and then seeking out “look-alike audiences”, to use industry parlance. And what people do when they see these ads is quantified in terms of their performance, measured through things such as engagement and click-throughs. Micro-targeting is about mobilisation through conveying and building social solidarity. While there is much concern over candidates speaking out of both sides of their mouths to the electorate through hyper-targeted digital ads, far more often campaigns likely use micro-targeting to provide occasions for social identification and group belonging, conveying and constructing the sense of shared identity and group membership at the heart of politics. The “Wish Hillary a Happy Mother’s Day” ad captures this (see Figure 2). Not only is this appeal directed at supporters (what Republican will want to wish Hillary a happy Mother’s Day, after all), it constructs a sense of what social identification with Hillary Clinton means: motherhood, family, warmth, care, and nurturing.

"I'm a Hillary Voter"
Figure 1: Hillary Clinton digital campaign advertisements
"Wish Hillary a Happy Mother's Day! – Sign the card"
Figure 2: Hillary Clinton digital campaign advertisement

Source: Hillary for America Design 2016
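
Since the mechanics behind ‘look-alike audiences’ are proprietary to the advertising platforms, the following sketch is only a toy illustration of the underlying logic: score prospective audience members by their similarity to a seed list of known supporters, then target the closest matches. All identifiers, feature vectors, and the simple centroid-plus-cosine-similarity approach below are hypothetical stand-ins for the far richer behavioural models real platforms use.

    # A minimal, illustrative sketch of 'look-alike audience' scoring.
    # Everything here is hypothetical: real platforms use proprietary models
    # and thousands of behavioural signals, not three-feature vectors.
    from math import sqrt

    def cosine(a, b):
        """Cosine similarity between two feature vectors."""
        dot = sum(x * y for x, y in zip(a, b))
        norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
        return dot / norm if norm else 0.0

    def seed_centroid(seed_vectors):
        """Average known supporters' vectors into a single seed profile."""
        n = len(seed_vectors)
        return [sum(col) / n for col in zip(*seed_vectors)]

    def rank_lookalikes(seed_vectors, prospects, top_k=3):
        """Rank prospective audience members by similarity to the seed."""
        centroid = seed_centroid(seed_vectors)
        scored = [(pid, cosine(vec, centroid)) for pid, vec in prospects.items()]
        return sorted(scored, key=lambda t: t[1], reverse=True)[:top_k]

    # Hypothetical feature vectors (e.g., interest and engagement scores).
    supporters = [[1.0, 0.8, 0.1], [0.9, 0.7, 0.2], [1.0, 0.9, 0.0]]
    prospects = {
        "user_a": [0.95, 0.75, 0.15],
        "user_b": [0.10, 0.20, 0.90],
        "user_c": [0.80, 0.90, 0.05],
        "user_d": [0.40, 0.40, 0.50],
    }
    print(rank_lookalikes(supporters, prospects))

The design point the sketch makes is the one at stake in this essay: the technique starts from group membership (“people like our supporters”) rather than from persuading undecided individuals.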

Micro-targeting is also about the marking of difference. This is perhaps the most common trope in micro-targeted digital campaign ads. Campaigns seek to establish not only the cultural meaning of their candidates and supporters, but also that of their opposition (Alexander, 2010). Donald Trump’s ads during the 2016 election reflected his rhetoric from the campaign trail in stressing themes of safety and security, in addition to the need to draw boundaries around civic incorporation (i.e., who should be allowed to be a citizen). For Hillary Clinton, micro-targeted ads were celebrations of diversity and multi-culturalism, especially the empowerment of women and racial and ethnic minorities. Political advertisements attempt to connect the candidates they promote with the demographic and social groups they seek to represent (in the United States this is at times drawn along racial and ethnic lines: whites for Republicans and a more diverse coalition for Democrats; see the discussion in Grossmann & Hopkins, 2016, 43-45).

In this, micro-targeting reflects and reinforces political agonism, the clash of competing social groups, interests, and values. Through micro-targeting, candidates stake out their claim to be on the civil side of the moral binary of the political sphere and strive to paint their opponents as anti-civil (Alexander, 2010). More colloquially, micro-targeted advertisements offer the beautiful affirmation of our values and the sharp critique of those of our opponents. Hillary Clinton’s campaign, for instance, clearly sought to portray Trump in terms of anti-civil racism, xenophobia, and sexism. And the campaign used issues, such as abortion rights, and values, such as autonomy and choice, to build group identity and social solidarity around opposition to Trump: “Let’s stand together, join millions of women” (see Figure 3). This Facebook ad pits Clinton and her supporters against Trump and his supporters. Trump, in turn, combined nationalist and security appeals with an implicit construction of the American body politic in white identity terms (Figure 4). These ads capture the reality that political conflict is not only inevitable but necessary: there are opposing views in politics on fundamental questions such as life, autonomy, and country. The audiences for these ads are not being presented with information to help them make up their own minds; they are being invited into a political struggle with clear opposing worldviews and moral values (see Figure 5). This is why mobilisation ads are directed towards identity-congruent audiences.

"Join Women for Hillary"
Figure 3: Hillary Clinton Facebook advertisement
"Immigration Reform – Build a Wall"
Figure 4: Donald Trump digital advertisement

Source: Democracy in Action

"Nope" / "Stop Trump"
Figure 5: Anti-Trump Hillary Clinton digital advertisements

Source: Hillary for America Design 2016

In these advertisements, it is also clear that micro-targeted ads present life as it should become, or as it must not become, linking the preferred candidate and political party with a civil vision of the future and the opposition with an anti-civil one, to use Alexander’s (2010) framework. For Ted Cruz, as an example (see Figure 6), the opposing side wants to infringe on the Bill of Rights and the fundamental liberty of Americans to defend their lives, liberties, families, and properties. Candidates run these issue ads to stake out their stance on the conflicting values, visions of the good life, plans for the future, and ends that are desirable in politics – whether it is embracing the freedom and security of gun rights for American Republicans or autonomy and choice in the context of reproductive rights for Democrats. These appeals are designed to mobilise the committed around the candidate’s vision of America’s past and future – they are designed for a world where we are sure of who we are and committed to our values and the ends we pursue.

"Obama wants your guns!"
Figure 6: Ted Cruz digital campaign advertisement

Source: Democracy in Action

Conclusion: democratic anxieties

I believe that there is such democratic anxiety about micro-targeting because citizens are supposed to be independent, autonomous, and rational. Micro-targeted advertising works to reinforce group identities and solidarity, mobilise partisans, and further the clash of political values. These things are all suspect from the perspective of the powerful and potent “folk theory” of democracy, as Achen and Bartels phrase it. As these realists argue, however, it is far better to grapple with the reality of group-based democracy, with its attendant ingrained social allegiances and conflicts over values and power, than to wish for a transcendent and pure form of democracy without politics. These authors argue that we need to make peace with conflictual and competitive forms of group-based and pluralistic democracy premised on institutionally organised opposition. As Achen and Bartels (2016, 318) conclude:

Freedom is to faction what air is to fire, Madison said. But ordinary citizens often dislike the conflict and bickering that comes with freedom. They wish their elected officials would just do the people’s work without so much squabbling amongst themselves. They dislike the compromises that result when many different groups are free to propose alternative policies, leaving politicians to adjust their differences. Voters want “a real leader, not a politician,” by which they generally mean that their own ideas should be adopted and other people’s opinions disregarded, because views different from their own are obviously self-interested and erroneous. To the contrary, politicians with vision who are also skilled at creative compromise are the soul of successful democracy, and they exemplify real leadership.

My own view is that micro-targeting comes in the necessary service of this “conflict and bickering”. At its normative best, micro-targeting strengthens the hands of opposing factions, enabling them to identify and mobilise partisans to their cause, providing them with resources in terms of boots on the ground and money in the coffers. When opposing politicians and parties square off, they carry these resources into battle trying to advance their agendas or win concessions for their side. Compromise may be harder in a world of stronger factions, their hands strengthened by the resources that micro-targeting can deliver, but that does not make compromise any less necessary or essential.

On the other hand, there are reasons for democratic concern about micro-targeting, but they look a bit different from narratives about public manipulation. Schudson (1986, 232) concludes that “advertising does not make people believe in capitalist institutions or even in consumer values, but so long as alternative articulations of values are relatively hard to locate in the culture, capitalist realist art will have some power.” I suspect that the same is true of political micro-targeting. The cultural power of political micro-targeting, and of political advertising more generally, lies in its creation of a set of ready-to-hand representations of democracy that citizens can easily express and fall back on. Taken to its extreme in a polarised political climate, micro-targeting can work to undermine the legitimacy of conflicts over opposing values and claims in democratic life. For example, in an undemocratic political culture micro-targeting can portray the other side as crooked and dangerous to the polity, political compromise as selling out, political expertise and representation as not to be trusted, and partisans’ own beliefs and identities as the only legitimate ones, not simply some among many in a pluralistic democracy. Micro-targeting also melds symbolic and social power in new ways, culturally legitimating and furthering the fortunes of autonomous and independent candidates, divorced from their parties and taking their appeals directly to voters (see Hersh, 2017).

References

Achen, C. H., & Bartels, L. M. (2016). Democracy for realists: Why elections do not produce responsive government. Princeton University Press.

Alexander, J. C. (2010). The performance of politics: Obama's victory and the democratic struggle for power. Oxford University Press.

Baldwin-Philippi, J. (2017). The myths of data-driven campaigning. Political Communication, 34(4), 627-633. doi:10.1080/10584609.2017.1372999

Dunn, S., & Tedesco, J. C. (2017). Political advertising in the 2016 presidential election. In The 2016 US presidential campaign (pp. 99-120). Cham: Palgrave Macmillan.

Grossmann, M., & Hopkins, D. A. (2016). Asymmetric politics: Ideological Republicans and group interest Democrats. Oxford University Press.

Henderson, J. A., & Theodoridis, A. G. (2017). Seeing spots: Partisanship, negativity and the conditional receipt of campaign advertisements. Political Behavior, 1-23. doi:10.1007/s11109-017-9432-6

Hersh, E. D. (2015). Hacking the electorate: How campaigns perceive voters. Cambridge University Press.

Hersh, E. D. (2017). Political hobbyism: A theory of mass behavior.

Howard, P. N. (2006). New media campaigns and the managed citizen. Cambridge University Press.

Howard, P. N., & Kreiss, D. (2010). Political parties and voter privacy: Australia, Canada, the United Kingdom, and United States in comparative perspective. First Monday, 15(12).

Kalla, J. L., & Broockman, D. E. (2017). The minimal persuasive effects of campaign contact in general elections: Evidence from 49 field experiments. American Political Science Review, 1-19. doi:10.1017/S0003055417000363

Karpf, D. (2016). Analytic activism: Digital listening and the new political strategy. Oxford University Press.

Kreiss, D. (2016). Prototype politics: Technology-intensive campaigning and the data of democracy. Oxford University Press.

Kreiss, D., & McGregor, S. C. (2017). Technology firms shape political communication: The work of Microsoft, Facebook, Twitter, and Google with campaigns during the 2016 US presidential cycle. Political Communication, 1-23. doi:10.1080/10584609.2017.1364814

Prasad, M., Perrin, A. J., Bezila, K., Hoffman, S. G., Kindleberger, K., Manturuk, K., … Payton, A. R. (2009). The undeserving rich: “Moral values” and the white working class. Sociological Forum, 24(2), 225-253. doi:10.1111/j.1573-7861.2009.01098.x

Rosenblum, N. L. (2010). On the side of the angels: An appreciation of parties and partisanship. Princeton University Press.

Schudson, M. (1986). Advertising, the uneasy persuasion: Its dubious impact on American society. New York: Routledge.

On democracy


Disclaimer: This guest essay in the Special issue on political micro-targeting has not been peer reviewed. It is an abbreviated version of a speech delivered by Member of the European Parliament (MEP) Sophie in ‘t Veld in Amsterdam in May 2017 at Data & Democracy, a conference on political micro-targeting.

Democracy

Democracy is valuable and vulnerable, which is reason enough to remain alert to new developments that can undermine it. In recent months, we have seen enough examples of the growing impact of personal data on campaigns and elections. It is important and urgent that we publicly debate this development. It is easy to see why we should take action against the extremist propaganda of hatemongers aiming to recruit young people for violent acts. But we euphemistically speak of 'fake news' when lies, 'half-truths’, conspiracy theories, and sedition insidiously poison public opinion.

The literal meaning of democracy is 'the power of the people'. 'Power' presupposes freedom. Freedom to choose and to decide. Freedom from coercion and pressure. Freedom from manipulation. 'Power' also presupposes knowledge. Knowledge of all facts, aspects, and options. And knowing how to balance them against each other. When freedom and knowledge are restricted, there can be no power.

In a democracy, every individual choice influences society as a whole. Therefore, the common interest is served with everyone's ability to make their choices in complete freedom, and with complete knowledge.

The interests of parties and political candidates who compete for citizens’ votes may differ from that higher interest. They want citizens to see their political advertising, and only theirs, not that of their competitors. Parties and candidates compete not only for the voter's favour, but for the voter's exclusive time and attention as well.

Political targeting

No laws dictate what kind of information a voter should rely on to reach a sound judgement. For lamb chops, toothpaste, mortgages or cars, by contrast, producers are required to state the origin and properties of their products. This enables consumers to make a responsible decision. Providing false information is illegal. All ingredients, properties, and risks have to be mentioned on the label.

Political communication, however, is protected by freedom of speech. Political parties are allowed to use all kinds of sales tricks.

And, of course, campaigns do their utmost and continuously test the limits of the socially acceptable.

Nothing new, so far. There is no holding back in getting voters to cast their vote for your party or your candidate: from temptation with attractive promises to outright bribery, from applying pressure to straightforward intimidation.

What matters, then, is how and where you can reach the voter. In the old days it was easy: Catholics were told on Sundays in church that they had no choice in the voting booth other than the Catholic one. And no righteous Catholic dared to think about voting differently. At home, the father told the mother how to vote. The children received their political preference from home and from school. Catholics learned about current affairs via a Catholic newspaper and through the Catholic radio broadcaster. In Dutch society, which consisted of a few such pillars, one was only offered the opinions of one's own pillar.1 A kind of filter bubble avant la lettre.

Political micro-targeting

Nowadays, political parties have a different approach. With new technologies, the sky is the limit.

Increasingly advanced techniques allow the mapping of voter preferences, activities, and connections. Using endless amounts of personal data, any individual on earth can be reconstructed in detail. Not only can personal beliefs be distilled from large troves of data; it is even possible to predict a person's beliefs before they have formed them themselves. And, subsequently, it is possible to subtly steer those beliefs, while leaving people thinking they made their decisions all by themselves.

As is often the case, the Americans lead in the use of new techniques. While we Europeans, touchingly old-fashioned, knock on doors and hand out flyers at the Saturday market, the Americans employ the latest technology to identify, approach, and influence voters.

Of course, trying to find out where voters can be reached and how they can be influenced is no novelty. Political parties map which neighbourhoods predominantly vote for them, which neighbourhoods have potential, and in which neighbourhoods campaigning would be a wasted effort. Parties work with detailed profiles and target audiences, to which they tailor their messages.

But the use of personal data on a large scale has a lot more to offer. Obviously, this is a big opportunity for political parties, and for anyone else who runs campaigns or aims to influence elections.

However, these influencing techniques are becoming increasingly opaque. As a result of the alleged filter bubble, voters are reaffirmed in their own beliefs and hardly receive information anymore about the beliefs and arguments of other groups. This new kind of segmentation may stifle critical thinking. There may not be enough incentive to test one's own ideas, to find new arguments, or to critically reflect on the truthfulness of information.

I am a social and economic liberal D66 politician, and I get suggestions for news articles from websites like The Guardian or Le Monde. My colleague from the right-wing nationalist PVV may well receive URLs from Breitbart.

Pluralism is essential for a healthy, robust democracy. In a polarised society, people live in tightly knit groups, which hardly communicate with each other. In a pluralist society people engage in the free exchange, confrontation, and fusion of ideas.

The concept of pluralism is under pressure. Populist parties declare themselves the representatives of The People. In their vision, The People is uniform and homogeneous. There is a dominant cultural norm, dictated from the top down, to which everyone must conform. Whoever refuses gets chewed out. Often, it is about one-dimensional symbolism such as Easter eggs and Christmas trees. There is no place for pluralism in the world of the populists. But where there is no pluralism, there is no democracy. Without pluralism, democracy is nothing more than a simple tribal dispute, instead of the expression of the will of all citizens together.

Voter data

European privacy legislation limits the use of personal data. In the world of ‘big data’, one of the explicit goals of regulation is to prevent restriction of the consumer's choice. Oddly enough, lawmakers do not explicitly aspire to guarantee voters as broad a choice as possible. But in politics, individual choices have consequences for society as a whole.

In 2018, the General Data Protection Regulation (GDPR) comes into effect. We worked on the GDPR for five years. At this moment, we are working on the modernisation of the e-Privacy Directive, which is mainly about the protection of communication. As was the case with the GDPR, companies from certain sectors scream bloody murder: European privacy protection would mean certain death for European industry. According to some corporate Cassandras, entire European industries will move to other continents. That very same death of corporate Europe is also predicted for any measure concerning, say, environmental norms, procurement rules, or employee rights. All those measures are in place, but, as far as I know, the nightmare scenario has never occurred...

There are some corporate sectors, such as publishing and marketing, which have a huge impact on the information supply to citizens. They are the ones who now cry wolf. It is understandable that they are unhappy with stricter rules concerning their activities, but as the potential impact of the use of personal data and ‘big data’ increases, so does their social responsibility.

At the moment, there is not much public debate about these new techniques. Peculiar. Thirty years ago, 'subliminal advertising', as we called it then, was prohibited because people found it unethical to influence people without their knowledge. We need to have a similar debate now. What do we think of opaque influencing? Do we need ethical norms? Should such norms apply only to political campaigns, or should we look at this from a broader perspective? In the ‘big data’ debate, we tend to speak in technical or legal terms, while the issue is actually a fundamentally ethical one, with far-reaching consequences for the vitality of our democracy.

Such a public debate demands more clarity on the impact of ‘big data’, profiling, targeting, and similar techniques on the individual, her behaviour, and her choices, which determine in what direction society progresses. Which voters are being reached? How susceptible are they to subtle influencing, and what makes them resilient? How do people who are hardly reached compare to the others? How do voters and non-voters compare? Is the voter truly predictable? Can we identify or influence the floating voter? Do voters actually float between different parties? Or do they float mostly within their own party, their own bubble, their own segment? How important are other factors, such as social context? If the new influencing techniques are indeed as potent as we think, how can polls get it so wrong? What can we learn from advertisers who return to contextual advertising because targeting turns out to be less effective than they thought?

We need to stay cool-headed. New technologies have a huge impact, but human nature will not suddenly change because of ‘big data’ and its use. Our natural instincts and reflexes will certainly not evolve in a few years; that would take many thousands of years. Even in the 21st century we display more than a few caveman traits, so shedding internalised behaviour is not as easy as 1-2-3. Humans are resilient, but democracy is vulnerable, and in the short term the societal impact is large. This gives us every reason to reflect on how to deal with the new reality, and how we can uphold our values within it.

The use of personal data, clearly, is not reserved solely for decent political parties. Other persons and organisations, from the Kremlin to Breitbart, can bombard European voters with information and misinformation. But European governments, which control endless amounts of their citizens' personal data, can also manipulate information, or circulate utter nonsense to advance their own interests. A random example: the Hungarian government influencing its voters with lies and manipulation about the so-called consultation on asylum seekers.

Beyond voter data

This issue is not only about the personal data of voters; the personal data of political competitors, opponents, and critics are increasingly being exploited as well. Recently, we saw efforts by external parties to influence the results of the 2017 French elections: a large-scale hack of the Emmanuel Macron campaign, and the spread of false information, evidently coming from the Kremlin and the American alt-right, meant to discredit Macron's candidacy.

The American elections, too, showed the shady game of hacking, leaking, and manipulating. The issue of the Hillary Clinton e-mails will undoubtedly occupy our minds for years. Who knows how the elections would have turned out without this affair?

Other democratic pillars can be corrupted by the misuse of data as well. Critical voices, opposition, and checks and balances are democracy's oxygen. Democracy is in acute jeopardy when data are employed to attack, undermine, discredit, blackmail, or persecute journalists, judges, lawyers, NGOs, whistleblowers, and opposition parties.

In Europe, we tend to shrug our shoulders at these dangers. "Oh well, we'll see; such things occur only in banana republics, not right here." Of course, this trust in our democratic rule of law is wonderful. But if we treat our rule of law this carelessly, we will eventually lose it.

Within the European Union, we currently see this happening in Poland and Hungary. The governments of both nations ruthlessly attack independent judges, critical media, and inconvenient NGOs. They do so with quasi-lawful means. Under the banner of transparency, they force NGOs to register, misusing laws against money laundering and terrorism financing. Or they release compromising information about judges or politicians at strategic moments.

But critical voices struggle in other member states as well. Lawyers are being monitored, even without a legal basis. In the years after 9/11, we created endless new powers for intelligence services, police, and justice departments to spy on citizens, even without suspicion and without a judge's signature. The companies to which we unwittingly surrender our personal data in exchange for services are forced to hand over all information to the government, or to build in backdoors. Governments hack computers in other countries. Usually it starts with unlawful practices, but soon enough laws are put in place to legalise those practices. The magic word 'terrorism' silences any critique of such legislation.

But when politicians, journalists, NGOs, whistleblowers, lawyers, and many others cannot perform their tasks freely and without worry, our democracy withers. Not only must they be able to operate without someone keeping an eye on them; they have to know that nobody is in fact watching them. The mere possibility of being watched results in a chilling effect.

For this reason of principle, I have contested a French mass surveillance law before the French Conseil d'Etat. Since, as a member of the European Parliament, I spend four days a month on French soil (in Strasbourg), I could potentially be a target of the French eavesdropping programme. This is not entirely imaginary, as I am not only a politician but also a vocal critic of certain French anti-terror measures. The point is not that I actually worry about being spied on, but that I might be spied on. Luckily, I am not easily intimidated, but I can imagine that many politicians are vulnerable. That is a risk for democracy.

I do not rule out the possibility that the European Court of Human Rights will eventually rule on my case. If it does, the ruling will create jurisprudence valid in the entire EU (and in the wider geographical area covered by the Council of Europe).

But, of course, whether politicians, NGOs, journalists, and others can do their jobs fearlessly and fulfil their watchdog role should not depend on the actions of one obstinate individual.

It is my personal, deep conviction that the biggest threat to our democracy is the fact that we have given the powerful almost unlimited access to the personal data of the very people who are supposed to hold them to account.

What can we do?

Some propose new forms of democracy in which universal suffrage is weakened or even abolished. In his book ‘Against elections: The case for democracy’, David Van Reybrouck proposes appointing representatives by lot, and in his book ‘Against democracy’, Jason Brennan wants to give the elite more votes than the lower classes, presuming that people with more education make better choices. Others want to replace representative democracy with direct democracy.

I oppose those ideas. Universal suffrage and representative democracy are great achievements, which have led to enormous progress in society.

First of all, we have to make sure our children grow up to be critical, independent thinkers. Think differently, deviate, provoke: this must be encouraged instead of condemned. A democracy needs non-conformists.

We must teach our children to contextualise information and to compare sources.

The counterpart of ‘big data’ must be ‘big transparency’. That means not just open government, but also insight into the techniques of influencing.

The regulation and limitation of the use of personal data, as I hope to have argued effectively, is not a game for out-of-touch privacy activists. It is essential for democracy. We need safeguards, not only to ensure that people really are free in their choices, but also to protect the necessary checks and balances. As such, I plead for a rigorous application of the GDPR, and in the European Parliament I will work for a firm e-Privacy Directive.

And yes, perhaps we should examine whether the rules for political campaigning are still up to date. In most countries, those rules cover a cap on campaign expenditures, a prohibition on campaigning or polling on the day before election day, or a ban on publishing information that may influence the election results, such as the leaked e-mails in France. But these rules have little bearing on the use of personal data to subtly influence elections.

Last year, the European Parliament supported my proposal for a mechanism to guard democracy, the rule of law, and fundamental rights in Europe.2

On this day (editor’s note: 9 May, Europe Day) of European democracy, I plead for common, high standards in Europe. The last years have shown that national elections are European elections. It is crucial that we can trust all elections in EU member states to be open, free, and honest, free of improper influence.

Over the last sixty years, the European Union has developed into a world leader in democracy and freedom. If we start this public debate, Europe can remain that world leader.

Footnotes

1. ‘Pillars’ here refers to the segmentation of society along ideological or religious lines.

2. The report I refer to is a legislative initiative of the European Parliament, of which I was the initiator and the rapporteur. It is a proposal for a mechanism to guard democracy, the rule of law, and fundamental rights in the EU. The Commission, at first, did not want to proceed with the initiative. Recently, however, the Commission has announced a legislative proposal for such a mechanism. I suspect this proposal will look quite different from Parliament’s, but the fact that there will be a mechanism is what matters most. The realisation that the EU is a community of values, and not just on paper, is spreading quickly. The proposal was approved in the EP in October 2016, with 404 votes in favour and 171 against. Source (last accessed 15 January 2018): http://www.europarl.europa.eu/sides/getDoc.do?pubRef=-%2f%2fEP%2f%2fNONSGML%2bREPORT%2bA8-2016-0283%2b0%2bDOC%2bWORD%2bV0%2f%2fEN
