
Micro-targeting, the quantified persuasion

Disclaimer: This guest essay in the Special issue on political micro-targeting has not been peer reviewed. It is treated here as a reflection.

During the past three decades there has been a persistent, and dark, narrative about political micro-targeting. Phil Howard (2006) vividly described a present and future where politicians would use data to “redline” the citizens who received political information, manufacturing attitudes and beliefs and leading to “managed citizenship”. In the years since Howard wrote his monumental book, the concerns over micro-targeting have only grown. The explosion of data about the electorate in Western democracies such as Australia, Canada, the UK, and the United States (Howard & Kreiss, 2010) has triggered deep unease among scholars and privacy advocates alike. Sophisticated voter databases now contain everything from political party data gleaned through millions of interactions with the electorate to public data obtained from state agencies to commercial marketing information that is bought and sold on international open markets. The 2016 US presidential election revealed the new ways that individuals can be profiled, identified, found, tracked, and messaged on social media platforms such as Facebook and YouTube, processes that these companies themselves help facilitate (Kreiss & McGregor, 2017).

While it might seem that the micro-targeting practices of campaigns have massive, and un-democratic, electoral effects, decades of work in political communication should give us pause. Although we lack the first-hand data from political campaigns, consultancies, and technology firms such as Facebook to know for sure, previous research tells us that people are seldom the unwitting dupes of strategic political communication. Partisanship shapes much of how people vote, and decades of research reveal that it is very hard to change people’s minds through campaigns (Kalla & Broockman, 2017; Henderson & Theodoridis, 2017). This has large implications for the effectiveness of micro-targeting. For example, Eitan Hersh’s (2015) deeply researched, ground-breaking study using data from a major vendor to the US Democratic Party shows that campaign practitioners find it very hard to persuade voters, because they lack reliable and identifiable data on cross-pressured and low-information voters. Given this, campaigns often focus on known voters rather than risk targeting and messaging the wrong people. Indeed, Hersh reveals that despite hundreds of data points on members of the electorate, it is a small cluster of publicly available data – such as turnout history, party identification, and demographic data – that matters far more for predicting vote choice.

The lesson is that micro-targeted campaign ads are likely most effective in the short run when campaigns use them to mobilise identified supporters or partisans, spurring volunteerism, donations, and ultimately turnout – hardly the image of a managed, manipulated, or duped public (Baldwin-Philippi, 2017). Ironically, campaigns often use micro-targeting to further these forms of democratic participation, making appeals to targeted subsets of voters on the basis of the parties and issues they already care about. Campaigns also use micro-targeting in the attempt to decrease voter turnout on the opposing side, sending negative messages to the opposition’s likely voters in the hope that this will make them less excited to turn out for their candidate. But two decades of social science suggest that this can be a risky strategy, given that partisans can rally behind a candidate who is under attack (Dunn & Tedesco, 2017).

What explains the outsized concerns about micro-targeting in the face of the generally thin evidence of its widespread and pernicious effects? This essay argues that we have anxieties about micro-targeting because we have anxieties about democracy itself. Or, to put it differently, scholars often hold up an idealised vision of democracy as the standard against which to judge all political communication. In a world where many scholars and journalists both hope and ardently believe, in the face of all available evidence, that members of the public are fundamentally rational, seek to be informed, and consider the general interest, micro-targeting appears to be manipulative, perverting the capacity of citizens to reason about politics. Meanwhile, for many scholars and journalists, political elites are fundamentally opposed to members of the public, seeking domination or control as opposed to representing their interests. In this world, much of the concern over micro-targeting reads as a classic “third-person effect”, where scholars and journalists presume that members of the public are more affected by campaign advertising than they themselves are.

And yet, this idealised version is not how democracy really is, nor necessarily how it should be. The argument of this brief essay is that, as a quantifiable practice premised on strategically identifying targeted groups of voters and crafting messages designed to appeal to them, micro-targeting is broadly reflective of the fact that democracy is often partisan, identity-based, and agonistic – in short, political. Following communication scholar Michael Schudson’s (1986) study of commercial advertising nearly three decades ago, this essay asks the following questions in the US context: what is the work that micro-targeting does, where does it fit into the political culture, and what kind of political culture has given rise to it? I argue that micro-targeting is only imaginable, and efficacious, in a polity that prizes partisan mobilisation, group solidarity, agonism, and the clash of opposing moral views in its politics. Following from this, I suggest different democratic concerns about micro-targeting, which relate to its cultural power, over time, to create a set of representations of democracy that undermines the legitimacy of political representation, pluralism, and political leadership.

The cultural work of micro-targeting

To analyse the role that micro-targeting plays in politics, first we need to understand how and why citizens vote. In their recent book Democracy for Realists, political scientists Christopher Achen and Larry Bartels (2016) offer a sustained critique of what they call the “folk theory” of American democracy. According to this “folk theory” that underlies conceptions of popular sovereignty, Americans have identifiable and consistent policy preferences. During the course of an election, they inform themselves about the policy positions of candidates and make rational decisions as to which best represents their preferences, which in turn leads parties to be responsive to the wishes of the public.

As Achen and Bartels (ibid.) argue, this is a fiction. They outline a “group theory of democracy”, where it is social attachments and group identification that largely determine both partisanship and vote choice. Achen and Bartels argue that people see themselves in relation to the groups that they belong to and those that they do not. Identity is so strong, in this account, that it conditions not only what partisans believe parties stand for but also their interpretation of facts (ibid., 267; see also Prasad et al., 2009). As Achen and Bartels demonstrate, this identity and group theory of politics has expansive empirical support: over seventy years of research demonstrates, time and again, that people have little knowledge about politics and yet detailed understandings of the social groups that the Democratic and Republican parties are perceived to represent. It is in this context that candidate performances of partisan and social identity become more important for electoral outcomes than the informational content of journalism. Events and candidates make identity more or less salient and strengthen group attachments. During campaigns, parties and candidates work to remind voters of their partisan and social attachments and to strengthen them so they are mobilised to participate in the election. As Achen and Bartels (ibid., 311) argue:

Political campaigns consist in large part of reminding voters of their partisan identities – “mobilizing” them to support their group at the polls. Formal communications by the groups and informal communication networks among group members also help citizens understand how their identity groups connect to the candidates and parties.

In this context, what is important about political campaigns is this work of communicating the partisan and social identities of candidates to voters. Candidates and their campaigns use micro-targeting, along with other strategic communications, to accomplish this. Micro-targeting is both a campaign practice of using data to craft and deliver strategic messages to subsets of the electorate (historically across many different media), and a genre of campaign communications that, much like political advertising more broadly, reinforces and amplifies the partisan, group, and identity conflicts at the heart of US politics. There has been extensive research on how micro-targeting works as a data-driven and quantifiable practice (see, for instance, Karpf, 2016). What these messages do as a genre of campaign communications, however, has received considerably less scrutiny. Drawing on my own previous work in the US context (Kreiss, 2016), the first argument that I develop here is that micro-targeting furthers the mobilisation that Achen and Bartels (2016) identify, primarily by reminding citizens of and shoring up their partisan and group identities. I then discuss the potential democratic consequences of this in a more expansive, cultural sense.

Micro-targeted ads have an aesthetic of what I call “political realism”, building on Michael Schudson’s work on commercial advertising. In Advertising, The Uneasy Persuasion, Schudson (1986) compared commercial advertising with Soviet realist art (the official state-sanctioned art of the former Soviet Union), arguing that it offers a form of “commercial realism”. As commercial realism, commercial advertising “simplifies and typifies” (215); advertising is abstracted, presenting the world as it should be, not as it is, and it exemplifies individuals as members of larger social groups. As it does so, “the aesthetic of capitalist realism — without a masterplan of purposes — glorifies the pleasures and freedoms of consumer choice in defense of the virtues of private life and material ambitions.” (ibid., 218) 

We can see micro-targeted digital advertising as a cultural form of ‘political realism’ that reflects, reinforces, and celebrates a political culture, at least in the United States, premised on identity, moral certainty, and mobilisation - not weighty considerations of the general interest or deliberation. Micro-targeted digital content shares a few central characteristics, which I adapt here for politics from Schudson’s (1986) work on commercial realism:

  • It presents social and political life in simplified and typified ways;
  • It presents life as it should become, or for negative ads, as it must not become;
  • It presents reality in its larger social significance, not in its actual workings;
  • It presents progress towards the future and positive social struggle, or for negative ads, the ideas of the other party as negative steps back into the past. It carries a message of optimism for one partisan side, and takes a stance of pessimism towards political opponents; and,
  • It tells us that political conflict is necessary, a clash of different groups and worldviews; moral certainty is assured, political identity is certain, and political agonism is reality.

For example, micro-targeted ads present social life in simplified ways, not presenting actual lives but abstract, stylised ones designed to be rife with larger meaning. A depiction of a farmer’s daily work in a campaign ad, for instance, is not about actual events or daily labours, but is meant to be an abstract, simplified symbol of the American values of hard work, cultivation of the earth, and the celebration of ordinary people in a democratic society. The farmer here is typified; the campaign ad is not about a real person who farms. The farmer is a representation of the larger social categories, values, and ideas the ad presents as desirable or worthy of emulation for all Americans. At the same time, the two dominant US political parties often stress different themes in their ads, a recognition that they have different visions of what life should become, what progress is, and what worldviews and moral claims the public should embrace. In doing so, political micro-targeting is inherently pluralist. It reflects a basic claim that “everyone has interests to defend and opinions to advance about his or her own good, or the group’s good, or the public good, and every interest was at least potentially a political interest group.” (Rosenblum, 2010, 259)

While it is impossible to know the full range of micro-targeted ads run during the course of an election cycle, consider some examples culled from the non-profit and non-partisan Democracy in Action website, which chronicles US campaigns, and the Hillary for America Design 2016 website, which compiles the campaign’s creative design work. To start, much of political micro-targeting is about building campaign databases by finding supporters online, signing them up for the cause through email, and repeatedly messaging them to enlist them as volunteers or donors.

Take, for instance, the declarative “I am a Hillary Voter” digital ad (see Figure 1), presumably and logically directed at the candidate’s supporters, although we can never know for sure. What separates micro-targeted political ads from their mass broadcast counterparts is the data that lies behind them: campaigns can explicitly try to find and send messages to their partisan audiences or intra-party supporters, linking the names in their databases to identities online or on social media platforms such as Facebook. Campaigns can also try to find additional partisans and supporters by starting with the online behaviours, lifestyles, or likes or dislikes of known audiences and then seeking out ‘look-alike audiences’, to use industry parlance. And what people do when they see these ads is quantified in terms of their performance, measured through metrics such as engagement and click-throughs. Micro-targeting is about mobilisation through conveying and building social solidarity. While there is much concern over candidates speaking out of both sides of their mouths to the electorate through hyper-targeted digital ads, far more often campaigns likely use micro-targeting to provide occasions for social identification and group belonging, conveying and constructing the sense of shared identity and group membership at the heart of politics. The “Wish Hillary a Happy Birthday” ad captures this (see Figure 2). Not only is this appeal directed at supporters (what Republican will want to wish Hillary a happy birthday, after all), it constructs a sense of what social identification with Hillary Clinton means: motherhood, family, warmth, care, and nurturing.

"I'm a Hillary Voter"
Figure 1: Hillary Clinton digital campaign advertisements
"Wish Hillary a Happy Mother's Day! – Sign the card"
Figure 2: Hillary Clinton digital campaign advertisement

Source: Hillary for America Design 2016
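To make the ‘look-alike audience’ logic described above more concrete, the following toy sketch (in Python) ranks prospective targets by their similarity to a seed of known supporters. It is purely illustrative: the feature vectors, the cosine-similarity scoring rule, and the function names are assumptions introduced for exposition, not how Facebook or any campaign vendor actually builds such audiences.

    # Illustrative sketch only: a toy version of "look-alike audience" matching,
    # not any platform's or vendor's actual implementation. Each person is
    # described by a small, hypothetical vector of behavioural features.
    import numpy as np

    def look_alike_scores(known_supporters: np.ndarray, prospects: np.ndarray) -> np.ndarray:
        """Score prospects by cosine similarity to the average known supporter."""
        seed = known_supporters.mean(axis=0)
        seed_norm = np.linalg.norm(seed)
        prospect_norms = np.linalg.norm(prospects, axis=1)
        return prospects @ seed / (prospect_norms * seed_norm + 1e-12)

    # Toy data: rows are people, columns are hypothetical engagement features
    # (e.g., likes of issue pages, event RSVPs, newsletter clicks).
    rng = np.random.default_rng(0)
    supporters = rng.random((50, 4))     # people the campaign already knows
    prospects = rng.random((1000, 4))    # people it does not

    scores = look_alike_scores(supporters, prospects)
    top_100 = np.argsort(scores)[::-1][:100]   # the 100 closest "look-alikes" to target first

In practice, platforms use proprietary models over far richer behavioural data, but the underlying idea is the same: score unknown users by their resemblance to a seed audience and target the closest matches.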

Micro-targeting is also about the marking of difference. This is, perhaps, the most common trope in micro-targeted digital campaign ads. Campaigns look to establish not only the cultural meaning of their candidates and supporters, but also that of their opposition (Alexander, 2010). Donald Trump’s ads during the 2016 election reflected his rhetoric from the campaign trail in stressing themes of safety and security, in addition to the need to draw boundaries around civic incorporation (i.e., who should be allowed to be a citizen). For Hillary Clinton, micro-targeted ads were celebrations of diversity and multi-culturalism, especially the empowerment of women and racial and ethnic minorities. Political advertisements attempt to connect the candidates they promote with the demographic and social groups they seek to represent (in the United States this is at times drawn along racial and ethnic lines: whites for Republicans and a more diverse coalition for Democrats; see the discussion in Grossmann & Hopkins, 2016, 43-45).

In this, micro-targeting reflects and reinforces political agonism, the clash of competing social groups, interests, and values. Through micro-targeting, candidates stake out their claim to be on the civil side of the moral binary of the political sphere and strive to paint their opponents as anti-civil (Alexander, 2010). More colloquially, micro-targeted advertisements offer the beautiful affirmation of our values and the sharp critique of those of our opponents. Hillary Clinton’s campaign, for instance, clearly sought to portray Trump in terms of anti-civil racism, xenophobia, and sexism. And, the campaign used issues, such as abortion rights, and values, such as autonomy and choice, to build group identity and social solidarity around opposition to Trump: “Let’s stand together, join millions of women” (see Figure 3). This Facebook ad pits Clinton and her supporters against Trump and his supporters. Trump, in turn, combined nationalist and security appeals with an implicit construction of the American body politic in white identity terms (Figure 4). These ads capture the reality that political conflict is not only inevitable, but necessary: there are opposing views in politics on fundamental questions such as life, autonomy, and country. The audiences for these ads are not being presented with information to help them make up their own minds, they are being invited into a political struggle with clear opposing worldviews and moral values (see Figure 5). This is why mobilisation ads are directed towards identity-congruent audiences.

"Join Women for Hillary"
Figure 3: Hillary Clinton Facebook advertisement
"Immigration Reform – Build a Wall"
Figure 4: Donald Trump digital advertisement

Source: Democracy in Action

"Nope" / "Stop Trump"
Figure 5: Anti-Trump Hillary Clinton digital advertisements

Source: Hillary for America Design 2016

In these advertisements, it is also clear that micro-targeted ads present life as it should become, or as it must not become, linking the preferred candidate and political party with a civil vision of the future and the opposition with an anti-civil vision of the future, to use Alexander’s (2010) framework. As an example, for Ted Cruz (see Figure 6), the opposing side wants to infringe on the Bill of Rights, the fundamental liberty of Americans to defend their lives, liberties, families, and properties. Candidates run these issue ads to stake out their stance on the conflicting values, visions of the good life, plans for the future, and ends that are desirable in politics – whether it is embracing the freedom and security of gun rights for American Republicans or autonomy and choice in the context of reproductive rights for Democrats. These appeals are designed to mobilise the committed around the candidate’s vision of America’s past and future – they are designed for a world where we are sure of who we are and committed to our values and the ends we pursue.

"Obama wants your guns!"
Figure 6: Ted Cruz digital campaign advertisement

Source: Democracy in Action

Conclusion: democratic anxieties

I believe that there is such democratic anxiety about micro-targeting because citizens are supposed to be independent, autonomous, and rational. Micro-targeted advertising works to reinforce group identities and solidarity, mobilise partisans, and further the clash of political values. These things are all suspect from the perspective of the powerful and potent “folk theory” of democracy, as Achen and Bartels phrase it. As these realists argue, however, it is far better to grapple with the reality of group-based democracy, with its attendant ingrained social allegiances and conflicts over values and power, than to wish for a transcendent and pure form of democracy without politics. These authors argue that we need to make peace with conflictual and competitive forms of group-based and pluralistic democracy premised on institutionally organised opposition. As Achen and Bartels (2016, 318) conclude:

Freedom is to faction what air is to fire, Madison said. But ordinary citizens often dislike the conflict and bickering that comes with freedom. They wish their elected officials would just do the people’s work without so much squabbling amongst themselves. They dislike the compromises that result when many different groups are free to propose alternative policies, leaving politicians to adjust their differences. Voters want “a real leader, not a politician,” by which they generally mean that their own ideas should be adopted and other people’s opinions disregarded, because views different from their own are obviously self-interested and erroneous. To the contrary, politicians with vision who are also skilled at creative compromise are the soul of successful democracy, and they exemplify real leadership.

My own view is that micro-targeting comes in the necessary service of this “conflict and bickering”. At its normative best, micro-targeting strengthens the hands of opposing factions, enabling them to identify and mobilise partisans to their cause, providing them with resources in terms of boots on the ground and money in the coffers. When opposing politicians and parties square off, they carry these resources into battle trying to advance their agendas or win concessions for their side. Compromise may be harder in a world of stronger factions, their hands steadied by the resources that micro-targeting can deliver, but that does not make compromise any less necessary or essential.

On the other hand, there are reasons for democratic concern about micro-targeting, but they look a bit different from narratives about public manipulation. Schudson (1986, 232) concludes that “advertising does not make people believe in capitalist institutions or even in consumer values, but so long as alternative articulations of values are relatively hard to locate in the culture, capitalist realist art will have some power.” I suspect that the same is true of political micro-targeting. The cultural power of political micro-targeting, but also political advertising more generally, lies in its creation of a set of ready-to-hand representations of democracy that citizens can express easily and fall back on. Taken to its extreme in a polarized political climate, micro-targeting can work to undermine the legitimacy of conflicts over opposing values and claims in democratic life. For example, in an undemocratic political culture micro-targeting can portray the other side as crooked and dangerous to the polity, political compromise as selling out, political expertise and representation as not to be trusted, and partisans’ own beliefs and identities as the only legitimate ones, not simply those among many in a pluralistic democracy. Micro-targeting also melds symbolic and social power in new ways, culturally legitimating and furthering the fortunes of autonomous and independent candidates, divorced from their parties and taking their appeals directly to voters (see Hersh, 2017).

References

Achen, C. H., & Bartels, L. M. (2016). Democracy for realists: Why elections do not produce responsive government. Princeton University Press.

Alexander, J. C. (2010). The performance of politics: Obama's victory and the democratic struggle for power. Oxford University Press.

Baldwin-Philippi, J. (2017). The myths of data-driven campaigning. Political Communication, 34(4), 627-633. doi:10.1080/10584609.2017.1372999

Dunn, S., & Tedesco, J. C. (2017). Political Advertising in the 2016 Presidential Election. In The 2016 US Presidential Campaign (pp. 99-120). Palgrave Macmillan, Cham.

Grossmann, M., & Hopkins, D. A. (2016). Asymmetric politics: Ideological Republicans and group interest Democrats. Oxford University Press.

Hersh, E. D. (2015). Hacking the electorate: How campaigns perceive voters. Cambridge University Press.

Hersh, E. D. (2017). Political Hobbyism: A Theory of Mass Behavior.

Howard, P. N., & Kreiss, D. (2010). Political Parties and Voter Privacy: Australia, Canada, the United Kingdom, and United States in Comparative Perspective. First Monday, 15(12).

Howard, P. N. (2006). New Media Campaigns and the Managed Citizen. Cambridge University Press.

Kalla, J. L., & Broockman, D. E. (2017). The Minimal Persuasive Effects of Campaign Contact in General Elections: Evidence from 49 Field Experiments. American Political Science Review, 1-19. doi:10.1017/S0003055417000363

Karpf, D. (2016). Analytic activism: Digital listening and the new political strategy. Oxford University Press.

Kreiss, D., & McGregor, S.C. (2017). Technology Firms Shape Political Communication: The Work of Microsoft, Facebook, Twitter, and Google With Campaigns During the 2016 US Presidential Cycle. Political Communication, 1-23. doi:10.1080/10584609.2017.1364814

Kreiss, D. (2016). Prototype politics: Technology-intensive campaigning and the data of democracy. Oxford University Press.

Henderson, J. A., & Theodoridis, A. G. (2017). Seeing Spots: Partisanship, Negativity and the Conditional Receipt of Campaign Advertisements. Political Behavior, 1-23. doi:10.1007/s11109-017-9432-6

Prasad, M., Perrin, A. J., Bezila, K., Hoffman, S. G., Kindleberger, K., Manturuk, K., … Payton, A. R. (2009). The Undeserving Rich: “Moral Values” and the White Working Class. Sociological Forum, 24(2), 225–253. doi:10.1111/j.1573-7861.2009.01098.x

Rosenblum, N. L. (2010). On the side of the angels: an appreciation of parties and partisanship. Princeton University Press.

Schudson, M. (1986). Advertising, the uneasy persuasion: its dubious impact in American Society. New York: Routledge.


Personalisation algorithms and elections: breaking free of the filter bubble

Personalisation algorithms allow platforms to carefully target web content to the tastes and interests of their users. They are at the core of social media platforms, dating apps, shopping and news sites. They make us see the world as we want to see it. By forging a specific reality for each user, they silently and subtly shape customised “information diets”, including around our voting preferences. We still remember Facebook’s CEO Mark Zuckerberg testifying before the US Congress (in April 2018) about the many vulnerabilities of his platform during election campaigns. With the elections for the European Parliament scheduled for May 2019, it is about time to look at our information diets and take seriously the role of platforms in shaping our worldviews. But how? Personalisation algorithms are kept a closely guarded secret by social media platform companies. The few experiments auditing these algorithms rely on data provided by the platform companies themselves. Researchers are sometimes subject to legal challenges by social media companies who accuse them of violating the terms of service of their platforms. As we speak, technological fencing-off is emerging as the newest challenge to third-party accountability. And algorithm-auditing efforts generally fail to involve ordinary users, missing out on a crucial opportunity for awareness raising and behavioural change.

The Algorithms Exposed (ALEX) project1, funded by a Proof of Concept grant of the European Research Council, intervenes in this space by promoting an approach to algorithms auditing that empowers and educates users. ALEX stabilises and expands the functionalities of a browser extension - fbtrex - an original idea of lead developer Claudio Agosti. Analysing the outcomes of Facebook's news feed algorithm, our software enables users to monitor their own social media consumption, and to volunteer their data for scientific or advocacy projects of their choosing. It also empowers advanced users, including researchers and journalists, to produce sophisticated investigations of algorithmic biases. Taking Facebook and the forthcoming EU elections as a test case, ALEX unmasks the functioning of personalisation algorithms on social media platforms.

A screenshot of Facebook open in a browser with the fbtrex extension installed, showing how the Facebook bar is shaded green to indicate that the extension is running.
Figure 1: When active, fbtrex changes the colour of your Facebook bar.

Our evidence: Facebook and the Italian elections

In December 2018, the Guardian published a report on “how Italy's populists used Facebook to win power”. Using data from the MediaLab at the University of Pisa, the journalists showed how the two front-runners, Matteo Salvini (Lega) and Luigi Di Maio (Five Star Movement), managed to bypass mainstream media coverage: together, they totalled 7.8 million Facebook likes and shares during the two-month electoral campaign. Most of the content consisted of “viral videos and personal, off-the-cuff live broadcasts”, reaching users in the intimate space of a personal Facebook page.

However, the analysis published by the Guardian is based on the measured engagement of users with a given set of content. While this might be helpful for describing a political outcome, in our opinion it provides only a partial picture. In fact, user engagement is a metric produced by the interplay of three distinct factors.

The first factor is how much a page (or a political candidate) produces. Candidates like Matteo Salvini, who can count on a communication campaign and a team managing their social media accounts, often produce more content than an individual alone normally could. The second factor is Facebook’s prioritisation algorithm, which for reasons unknown to us prefers certain content over other content, recursively proposing it to an individual or, on the contrary, hiding it from a certain user. The functioning of this algorithm is unclear, but we have reasons to believe it has a strong influence on the outcome. The third factor is the actual engagement metric, namely what we can observe within the segment of the electorate that interacts with its preferred candidate. These factors have distinct political implications – respectively, a candidate’s accountability, the intervention of opaque algorithms, and legitimate political interest – and cannot be reduced to engagement metrics alone.
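One way to see why the three factors cannot be read off a single engagement total is a toy multiplicative decomposition, sketched below. It is purely illustrative and all numbers in it are invented for exposition; it is not our model, nor Facebook's.

    # Toy decomposition of an engagement total into the three factors discussed
    # above. All numbers are invented; this is not a model of the real campaign.
    followers = 3_500_000        # size of the audience segment
    posts_published = 450        # factor 1: how much the page produces over the period
    share_surfaced = 0.25        # factor 2: fraction of those posts the news feed shows a follower
    interaction_rate = 0.02      # factor 3: likes/shares per post a follower actually sees

    engagement = followers * posts_published * share_surfaced * interaction_rate
    print(f"{engagement:,.0f} likes and shares")   # ~7.9 million

    # The same headline figure could come from fewer posts with higher algorithmic
    # exposure, or from more posts with a lower interaction rate: the aggregate
    # metric alone cannot tell these situations apart.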

We, too, conducted an experiment in the wake of the Italian general elections (March 2018). We used fbtrex and bots created ad hoc to collect evidence on social media manipulation. How did we proceed?

We selected 30 Facebook pages across the political spectrum, identifying the six most active pages in each of five distinct political orientations, and recorded their content by means of the now-discontinued Facebook API. This allowed us to collect the posts selected by Facebook, as evidence of the ongoing algorithmic curation. At the same time, we created six profiles (our “bots”) that followed the pages, accessed the platform 13 times per day at set moments, automatically scrolled through their timelines, and distributed likes differently across content from the distinct political orientations. All public posts on the “wall” of these profiles were collected to show users what Facebook selects from each of them. Comparing data from the different profiles, we got a glimpse of the algorithmic selection, as sketched below. We could thus generate dedicated metrics to compare content types and post repetition, also in relation to content sources.
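The comparison step can be illustrated with a minimal sketch. The data shape, names, and toy records below are assumptions made for exposition; this is not the fbtrex code itself.

    # Minimal sketch of comparing what two bot profiles were shown, not the fbtrex
    # pipeline itself. Assumes each timeline snapshot has been flattened into
    # (profile, post_id, source_page) records; all values below are toy data.
    from collections import defaultdict

    records = [
        ("bot_left", "p1", "page_A"), ("bot_left", "p2", "page_B"),
        ("bot_right", "p1", "page_A"), ("bot_right", "p3", "page_C"),
    ]

    seen_by = defaultdict(set)
    for profile, post_id, _source in records:
        seen_by[profile].add(post_id)

    def jaccard(a: set, b: set) -> float:
        """Overlap between two profiles' timelines (1.0 = identical selections)."""
        return len(a & b) / len(a | b) if (a | b) else 0.0

    overlap = jaccard(seen_by["bot_left"], seen_by["bot_right"])
    print(f"timeline overlap: {overlap:.2f}")
    # Low overlap between profiles that follow the same pages is a first hint
    # of algorithmic selection at work.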

In this controlled environment, we could isolate emerging patterns among the timelines. For example, the percentage of content shown is quite stable for every profile: different users have a recurring and unique ratio. The profile that was shown more pictures than text, for example, kept being fed more pictures. The percentage of posts displayed more than once in a given timeline varies by profile, and appears to be influenced by the “informative variety” a profile is exposed to.
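As a hedged illustration of how such metrics might be derived, the toy computation below calculates two of them under assumed data shapes: the share of the followed pages' published posts that each profile was actually shown, and how often a shown post reappears in the same timeline. The variable names and numbers are invented; this is not the project's actual code.

    # Toy computation of the two metrics described above, under assumed data shapes
    # (not the project's actual code).
    from collections import Counter

    published_posts = {"p1", "p2", "p3", "p4", "p5"}   # posts the followed pages actually published
    impressions = {                                    # post ids as they appeared in each bot's timeline
        "bot_images": ["p1", "p1", "p2", "p1", "p4"],
        "bot_text":   ["p2", "p3", "p5", "p2"],
    }

    for profile, shown in impressions.items():
        unique_shown = set(shown)
        share_shown = len(unique_shown & published_posts) / len(published_posts)
        repeated = sum(1 for count in Counter(shown).values() if count > 1)
        repetition_rate = repeated / len(unique_shown)
        print(f"{profile}: {share_shown:.0%} of published posts shown, "
              f"{repetition_rate:.0%} of shown posts repeated")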

For a more detailed visual account of how we did it, watch Claudio Agosti's presentation at the most recent Chaos Communication Congress and/or check out the slides here. The presentation was designed with a hacker audience in mind.

Reclaiming algorithmic sovereignty

Don't #deletefacebook. Give your profile to science!
Figure 2: fbtrex slogan

In times of elections more than ever, it is of paramount importance to empower users to break free of their filter bubble. But reclaiming our own algorithmic sovereignty is no easy task. Here we suggest some key features algorithmic sovereignty projects like fbtrex should implement.

  • First of all, algorithmic accountability efforts should move out of the realm of the experts - academics and the industry above all - to involve users. Self-determination should be a community endeavour, if we are to counteract the current algorithm hegemony, centralised in a handful of corporations.
  • Algorithm literacy is key: users should be empowered to independently test the contours of their own filter bubble, to find out for themselves how algorithmic personalisation affects their digital experience. Putting easy-to-use algorithmic accountability tools in the hands of users means they can move from the hypothetical into real life and judge for themselves. By enabling self-awareness, algorithm literacy promotes a healthy information diet and fosters a responsible use of social media.
  • Any algorithmic accountability tool should analyse the algorithm and not track individual behaviour. In addition, because algorithmic personalisation becomes visible only by comparing individual data, such tools should be able to aggregate data from various users. It is paramount that data collection and data reuse protocols protect users and their data, while at the same time supporting data analysis in the public interest (one possible approach is sketched after this list).
  • Users should have full control of data extraction patterns, and be able to decide at any time whether they intend to volunteer their data. They should also be able to withdraw their participation whenever they want.
  • Finally, any algorithmic sovereignty tool should be open source, in order to promote transparency and to enable others to check how it works, evolve its functions, and modify or customise it.
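As one possible approach to the aggregation point above, the sketch below pseudonymises the contributor identifier with a salted hash before any cross-user analysis. This is an assumption about how such a tool could be designed, not a description of how fbtrex actually handles contributors' data.

    # Hedged illustration of aggregating contributions without tracking individuals:
    # replace the contributor ID with a salted pseudonym before cross-user analysis.
    # This is a possible design, not a description of fbtrex's implementation.
    import hashlib
    import secrets

    SALT = secrets.token_bytes(16)   # kept by the tool operator, never published with the dataset

    def pseudonym(user_id: str) -> str:
        """Stable pseudonym for aggregation; unlinkable to the account without the salt."""
        return hashlib.sha256(SALT + user_id.encode("utf-8")).hexdigest()[:16]

    contribution = {
        "contributor": pseudonym("facebook_user_123"),   # hypothetical identifier
        "post_source": "page_A",
        "seen_at": "2019-03-01T12:00:00Z",
    }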

You can read how we implemented this within fbtrex here. We are currently looking for test groups in various EU countries. If you want to join the study, please write to info@algorithms.exposed. To find out more, visit our website and https://facebook.tracking.exposed.

Acknowledgement: This project is funded by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 825974-ALEX, awarded to Stefania Milan as Principal Investigator). See algorithms.exposed. For additional information, contact info@algorithms.exposed.

Footnotes

1. ALEX is a spin-off of the DATACTIVE project, investigating the evolution of citizen participation in the age of datafication.

Protecting the global digital information ecosystem: a practical initiative

Draft G7 Charlevoix statement

Introduction

The digitisation of our societies comes with a number of challenges and opportunities, the dimensions of which are far from being assessed, let alone understood. While the internet, by allowing everybody easy access to the general political discourse, was for some time understood as a great opportunity for strengthening democracy, more recent developments captured by buzzwords like “fake news”, “disinformation operations” and “psychographically microtargeted advertising”, as practised with the support of Cambridge Analytica, are observed with great concern as fundamental threats to the functioning of democracy. 1 No less than cyber attacks on industries, infrastructures and governments, such practices are difficult to control. In particular, their origin is difficult to localise and there is no technical instrument available for clear attribution. Yet protecting our democratic systems seems to amount to a serious common concern of the United States, the European Union and all democracies around the globe.

While some countries have already taken legislative measures or – at least – drafted action plans aiming at protecting the internet, the public discourse and democracy from criminal and terrorist content as well as from hate speech, fake news and disinformation operations, 2 valid concerns are raised with regard to the respect for freedom of speech as the foundation of democracy. 3 The question, thus, of how to adequately protect the deliberative process of forming political will, and thereby ensure the legitimacy of the democratic process against all sorts of IT-driven attempts at manipulation, remains open. The recent call of a high-level civil servant of the European Commission “for a new culture of incorporating the principles of democracy, rule of law and human rights by design in AI and a three-level technological impact assessment for new technologies like AI as a practical way forward for this purpose“ 4 indicates only one of the directions in which the political discussion may move. But it also shows that the problem is not limited to one country or continent but has a global dimension. It is a challenge to constitutionalism at a global level.

This is why all democracies have an interest in finding common approaches for tackling the new challenges to their own survival. Forums like the G7 and the G20 are as important for stimulating the discussion on concrete solutions and measures as are the Internet Governance Forum (IGF) and other multi-stakeholder initiatives. Academic research and conferences can provide material, analysis and ideas to feed this process of discovery.

With the aim of identifying shared G7 interests in digital technology and democracy, Eileen Donahoe, Fen Hampson, Gordon Smith (all CIGI – Centre for International Governance Innovation, Waterloo, Canada) and I (HIIG – Humboldt Institute for Internet and Society, Berlin, Germany) decided back in 2017 to submit some thoughts and proposals to those preparing the G7 summit in Charlevoix, Québec, Canada, on 8-9 June 2018. On the basis of our discussions, my contribution to this collaborative initiative was the following draft statement, and I would like to particularly thank Eileen Donahoe for her substantial input and revision of this work.

While the attempt to introduce the draft statement formally into the preparatory work for the summit at an early stage was unsuccessful, it is interesting to see, nonetheless, that the Charlevoix G7 Summit Communiqué, in its point 15, raises – as a matter of “building a more peaceful and secure world” – some of the issues addressed by our draft, as follows:

  1. We commit to take concerted action in responding to foreign actors who seek to undermine our democratic societies and institutions, our electoral processes, our sovereignty and our security as outlined in the Charlevoix Commitment on Defending Democracy from Foreign Threats. We recognize that such threats, particularly those originating from state actors, are not just threats to G7 nations, but to international peace and security and the rules-based international order. We call on others to join us in addressing these growing threats by increasing the resilience and security of our institutions, economies and societies, and by taking concerted action to identify and hold to account those who would do us harm. 5

The “Charlevoix Commitment on Defending Democracy from Foreign Threats“ referred to in this Communiqué 6 identifies in more detail the steps the leaders of the G7 intend to take. The formulation remains less concrete than our proposal. In particular, it could have been clearer on the global character of the problem and on the close relationship between defending democracy against foreign attacks and other issues of cybersecurity and the due diligence obligations of states, industries and individuals. 7 It is to be understood as a challenge of global (internet) governance and international peace.

More work is to be done, therefore, and besides the many upcoming conferences and projects, the IGF in Berlin (25-29 November 2019) presents an excellent multi-stakeholder forum where the issues could be discussed further, with the aim of finding a consensus among all stakeholders on a declaration on the protection of the global digital information ecosystem along the lines of the following draft.

Draft G7 Charlevoix statement on the protection of the global digital information ecosystem

  1. Threat of misuse of digital technology and information: We, the Leaders of the G7, note with concern the increased misuse of the internet and digital information, both by states and private actors, aimed at disturbing political processes in our democracies and in political systems throughout the world. We strongly condemn malicious cyber activities such as the manipulation of national elections, digital disinformation campaigns and psychographic targeting in election campaigns, and commit ourselves fully to abstain from such practices.
  2. Protection of the global digital information ecosystem: The effective protection of the digital information ecosystem is a condition for the full exercise of political freedoms and self-determination of peoples in modern democracies. We will take all necessary measures and call upon all stakeholders, to defend our globalised digital society against any threat or attempt to hamper further development of the benefits offered by digitisation of societies.
  3. Cyber threats against other states equivalent to violation of international law: We understand malicious cyber-activities against other states and their infrastructures, including digital offences by governments against the integrity of political processes and the public sphere of political discourse in foreign countries, as equivalent to an intervention into their internal affairs contrary to the principle of equal sovereignty embodied in Article 2 (1) of the Charter of the United Nations. Those activities constitute a breach of international law giving rise to countermeasures.
  4. Global cyber-security compact: With a view to avoiding distrust among states and a risk of escalation and conflict worldwide, we commit ourselves and strive to bring all countries together to agree upon a ‘global cyber-security compact’ compelling all governments to abstain from cyber-offences against other states or private parties and, in particular, from information operations and other intentional intervention into the democratic processes of other countries.
  5. Due diligence against cyber attacks: The international law principle of due diligence, requiring each country to make every effort possible to prevent attacks by private actors from its territory against foreign countries, industries or people, applies equally to malicious cyber activities. This includes prohibiting private parties from conducting such action. We commit ourselves, and call on all other governments, to fully respect this principle and to agree upon concrete terms of its application in cyberspace as part of the ‘global cyber-security compact’.
  6. Private sector responsibilities to develop resilient technology: We call upon IT companies such as communication service providers and platforms to develop resilient IT systems and share technologies to combat malicious activities in the cyber-sphere. Social media and search platforms should also apply algorithms and be prepared to detect and take down illegal hate speech and content that supports extremist, racist and terrorist propaganda. Illegal expression that can be identified as originating from unlawful bots or other automatic devices that distort the free and independent formation of political views should also be restricted, in full compliance with the human right to freedom of expression and international human rights law.
  7. Private sector global governance responsibilities: Corporate Social Responsibility (CSR) and the United Nations Guiding Principles on Business and Human Rights applicable to private sector companies are important elements of the global governance framework for business relations in the cyber realm and function as a corollary to state regulation and international law. This includes the duty to be responsive to the concerns of individuals who feel their rights have been infringed by illegal conduct of IT companies or by illegal content made publicly available through social media and platforms. We urge companies to establish easily available, rapid and efficient procedures to fairly answer private complaints against such conduct and content, with due regard to the freedom of expression, and likewise to consider complaints against any unjustified take-down of lawful content.
  8. Multi-stakeholder collaboration to protect the global digital ecosystem: The protection of the digital information ecosystem can only be accomplished through common and coordinated efforts of states, businesses, the civil society and individual users. We commit to new and additional investment in education systems including universities to develop curricula, train teachers and media, and to undertake further research to ensure highest degrees of digital literacy, critical thinking, awareness for cyber-risks and diligence in the production and use of IT products throughout our societies. We consider these new efforts as being essential for ensuring a safe digital infrastructure and sustainable democratic resilience.
  9. Global information culture consistent with international human rights: We call for the establishment of a new global information culture, based upon the protection of international human rights standards and in particular, the fundamental freedom of expression, free access to information, to education and to culture, full respect of privacy and the protection of personal data. We understand these as guiding principles of our policies related to digitisation and security and as a condition for a prosperous development of our democracies. We commit ourselves to support civil society initiatives and other stakeholders in their endeavor to give effect to these principles, rights and values as part of the ongoing process of internet governance.

Footnotes

1. With some proposals for solution: Yochai Benkler, Robert Faris, and Hal Roberts, Network Propaganda. Manipulation, Disinformation, and Radicalization in American Politics (Oxford University Press, 2018). For operations in Europe and with regard to the Brexit-referendum see in particular the alarming account of Carole Cadwalladr, The great British Brexit robbery: how our democracy was hijacked, The Guardian, 7 May 2017, at: https://www.theguardian.com/technology/2017/may/07/the-great-british-brexit-robbery-hijacked-democracy (accessed 10 December 2018).

2. See the German Network Enforcement Law (“Gesetz zur Verbesserung der Rechtsdurchsetzung in sozialen Netzwerken, Netzwerkdurchsetzungsgesetz - NetzDG“) of 1 September 2017, available at: https://www.gesetze-im-internet.de/netzdg/BJNR335210017.html (accessed 8 December 2018), English translation: https://www.bmjv.de/SharedDocs/Gesetzgebungsverfahren/Dokumente/NetzDG_engl.pdf?__blob=publicationFile&v=2. For the initiatives of the European Union see, in particular, the “Code of Practice against Disinformation“, where for the first time worldwide industry agreed, on a voluntary basis, to self-regulatory standards to fight disinformation, at: https://ec.europa.eu/digital-single-market/en/news/code-practice-disinformation (accessed 28 February 2019). See also the revised EU Cybersecurity Strategy: European Commission/High Representative of the Union For Foreign Affairs and Security Policy, Joint Communication to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions “Action Plan Against Disinformation“, 5.12.2018, JOIN(2018) 36 final, p. 1, (Introduction) and pp. 5-11, mentioning four pillars and 10 actions, at: https://eeas.europa.eu/headquarters/headquarters-homepage_en/54866/Action%20Plan%20against%20Disinformation (accessed 13 February 2019). See also the Introduction to: European Commission, Joint Communication of the European Parliament and the Council, Resilience, Deterrence and Defence: Building strong cybersecurity for the EU, JOIN(2017) 450 final of 13 September 2017, at: https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:52017JC0450&from=en (accessed 10 February 2019). For the United States see, in particular, The White House, National Cyber Strategy of the United States of America, September 2018, Introduction, pp. 1-2, and p. 9: “Protect our democracy”, at: https://www.whitehouse.gov/wp-content/uploads/2018/09/National-Cyber-Strategy.pdf, (accessed 10 February 2019).

3. Eileen Donahoe, Don’t Undermine Democratic Values in the Name of Democracy. How not to regulate social media, in The American Interest, 2017, at: https://www.the-american-interest.com/2017/12/12/179079/ (accessed 8 December 2018); see also: idem: Protecting Democracy from Online Disinformation Requires Better Algorithms, Not Censorship. In: Council on Foreign Relations, 21 August 2017, at: https://www.cfr.org/blog/protecting-democracy-online-disinformation-requires-better-algorithms-not-censorship (accessed 8 December 2018); for other critical comments see Mathias Hong, The German Network Enforcement Act and the Presumption in Favour of Freedom of Speech, in: Verfassungsblog 22 January 2018, at: https://verfassungsblog.de/the-german-network-enforcement-act-and-the-presumption-in-favour-of-freedom-of-speech/ (accessed 8 December 2018).

4. Paul Nemitz, Constitutional democracy and technology in the age of artificial intelligence, in: Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 15 October 2018, at: https://royalsocietypublishing.org/doi/full/10.1098/rsta.2018.0089 (accessed 10 December 2018).

5. See: https://g7.gc.ca/wp-content/uploads/2018/06/G7SummitCommunique.pdf (accessed 28 February 2019).

6. Available at: https://g7.gc.ca/wp-content/uploads/2018/06/DefendingDemocracyFromForeignThreats.pdf (accessed 28 February 2019).

7. More details: Ingolf Pernice, Global Cybersecurity Governance. A Constitutional Analysis, in: 7 Global Constitutionalism (2018), pp. 112-141.

Operationalising communication rights: the case of a “digital welfare state”

This paper is part of Practicing rights and values in internet policy around the world, a special issue of Internet Policy Review guest-edited by Aphra Kerr, Francesca Musiani, and Julia Pohle.

Introduction

The rampant spread of disinformation and hate speech online, the so-called surveillance capitalism of the internet giants and related violations of privacy (Zuboff, 2019), persisting digital divides (International Telecommunication Union, 2018), and inequalities created by algorithms (Eubanks, 2018): these issues and many other current internet-related phenomena challenge us as individuals and as members of society. These challenges have sparked renewed discussion about the idea and ideal of citizens’ communication rights.

Either as a legal approach or as a moral discursive strategy, the rights-based approach is typically presented in a general sense as a counterforce that protects individuals against illegitimate forms of power, including both state and corporate domination (Horten, 2016). The notion of communication rights can not only refer to existing legally binding norms, but also more broadly to normative principles against which real-world developments are assessed. However, there is no consensus on what kinds of institutions are needed to uphold and enforce communication rights in the non-territorial, regulation-averse and rapidly changing media environment. Besides the actions of states, the realisation of communication rights is now increasingly impacted by the actions of global multinational corporations, activists, and users themselves.

While much of the academic debate has focused on transnational attempts to codify and promote communication rights at the global level, in this article, we examined a national approach to communication rights. Despite the obvious transnational nature of the challenges, we argued for the continued relevance of analysing communication rights in the context of national media systems and policy traditions. We provided a model to analyse communication rights in a framework that has its foundation in a specific normative, but also empirically grounded understanding of the role of communication in a democracy. In addition, we discussed the relevance of single country analyses to global or regional considerations of rights-based governance.

Communication rights and the case of Finland

The concept of communication rights has a varied history, starting with the attempts of the Global South in the 1970s to counter the Westernisation of communication (Hamelink, 1994; McIver et al., 2003). The connections between human rights and media policy have also been addressed, especially in international contexts and in the United Nations (Jørgensen, 2013; Mansell & Nordenstreng, 2006). Communication rights have also been invoked in more specific contexts to promote, for instance, the rights of disabled persons and of cultural and sexual minorities in today’s communication environment (Padovani & Calabrese, 2014; McLeod, 2018). Currently, these rights are most often invoked in civil society manifestos and international declarations focused on digital or internet-related rights (Karppinen, 2017; Redeker, Gill, & Gasser, 2018).

Today, heated policy debates have surrounded the role of global platforms in realising or violating principles, such as freedom of expression or privacy, which are already stipulated in the United Nations Universal Declaration of Human Rights (MacKinnon, 2013; Zuboff, 2019). Various groups have made efforts to monitor and influence the global policy landscape, including the United Nations, its Special Rapporteurs, and the Internet Governance Forum; voluntary multi-stakeholder coalitions, such as the Global Network Initiative; and civil society actors, such as the Electronic Frontier Foundation, Freedom House, or Ranking Digital Rights (MacKinnon et al., 2016). At the same time, nation states are still powerful actors whose choices can make a difference in the realisation of rights (Flew, Iosifides, & Steemers, 2016). This influence is made evident through monitoring efforts that track internet freedom and the increased efforts by national governments to control citizens’ data and internet access (Shahbaz, 2018).

Communication rights in Finland are particularly worth exploring and analysing. Although Finnish communication policy solutions are now intertwined with broader European Union initiatives, the country has an idiosyncratic historical legacy in communication policy. Year after year, it remains one of the top countries in press freedom rankings (Reporters without Borders, 2018). In the 1990s, Finland was a frontrunner in shaping information society policies, gaining notice for technological development and global competitiveness, especially in the mobile communications sector (Castells & Himanen, 2002). Finland was also among the first nations to make affordable broadband access a legal right (Nieminen, 2013). On the EU Digital Economy and Society Index, Finland scores high in almost all categories, partly due to its forward-looking strategies for artificial intelligence and extensive, highly developed digital public services (Ministry of Finance, 2018). According to the think tank Center for Data Innovation, Finland’s availability of official information is the best in the EU (Wallace & Castro, 2017). Not only are Finns among the most frequent users of the internet in the European Union, they also report feeling well-informed about the risks of cybercrime and trust public authorities with their online data more than citizens of any other EU country (European Union, 2017, pp. 58-60).

While national competitiveness in the global marketplace has informed many of Finland’s policy approaches (Halme et al., 2014), they also reflect the Nordic tradition of the so-called “epistemic commons”, that is the ideals of knowledge and culture as a joint and shared domain, free of restrictions (Nieminen, 2014 1). Aspects such as civic education, universal literacy, and mass media are at the heart of this ideal (Nieminen, 2014). This ideal has been central to what Syvertsen, Enli, Mjøs, and Moe (2014) called the “Nordic Media Welfare State”: Nordic countries are characterised by universal media and communications services, strong and institutionalised editorial freedom, a cultural policy for the media, and policy solutions that are consensual and durable, based on consultation with both public and private stakeholders.

Operationalising rights

How does Finland, a country with such unique policy traditions, fare as a “Digital Welfare State”? In this article, we employed a basic model that divides the notion of communication rights into four distinct operational categories (Nieminen, 2010; 2016; 2019; Horowitz & Nieminen, 2016). These divisions differ from other recent categorisations (Couldry et al., 2016; Goggin et al., 2017) in that they specifically reflect the ideal of the epistemic commons of shared knowledge and culture. Communication rights, then, should preserve the epistemic commons and remove restrictions on it. We understand the following rights as central to those tasks:

  1. Access: citizens’ equal access to information, orientation, entertainment, and other contents serving their rights.
  2. Availability: equal availability of various types of content (information, orientation, entertainment, or other) for citizens.
  3. Dialogical rights: the existence of public spaces that allow citizens to publicly share information, experiences, views, and opinions on common matters.
  4. Privacy: protection of every citizen’s private life from unwanted publicity, unless such exposure is clearly in the public interest or the person decides to expose it to the public, as well as protection of personal data (processing, whether by authorities or businesses, must have legal grounds and abide by principles such as data minimisation and purpose limitation, while individuals’ rights must be safeguarded).

To discuss each category of rights, we examined it at three levels: the Finnish regulatory-normative framework; the implementation of rights by the public sector as well as by commercial media and communications technology providers; and the activities of citizen-consumers. This multi-level analysis aims to depict the complex nature of these rights and their often contested and contradictory realisations at different levels. For each category, we also highlighted one example: for access, telecommunications; for availability, extended collective licensing in the context of online video recording services; for dialogical rights, e-participation; and for privacy, monitoring communications metadata within organisations.
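To make the structure of this operationalisation easier to follow, the minimal Python sketch below (ours, not part of the original study) encodes the four rights categories, the three levels of analysis, and the highlighted case for each category; the class and variable names are purely illustrative.

```python
from enum import Enum


class Right(Enum):
    """The four operational categories of communication rights used in this article."""
    ACCESS = "access"
    AVAILABILITY = "availability"
    DIALOGICAL = "dialogical rights"
    PRIVACY = "privacy"


class Level(Enum):
    """The three levels at which each category is examined."""
    REGULATORY_NORMATIVE = "regulatory-normative framework"
    IMPLEMENTATION = "implementation by the public sector and commercial providers"
    CITIZEN_CONSUMER = "activity by citizen-consumers"


# The illustrative case highlighted for each rights category in the article.
HIGHLIGHTED_CASES = {
    Right.ACCESS: "telecommunications",
    Right.AVAILABILITY: "extended collective licensing for online video recording services",
    Right.DIALOGICAL: "e-participation",
    Right.PRIVACY: "monitoring of communications metadata within organisations",
}


def analysis_grid():
    """Enumerate every (right, level) cell of the analytic matrix."""
    return [(right, level) for right in Right for level in Level]


if __name__ == "__main__":
    for right, level in analysis_grid():
        print(f"{right.value:20s} | {level.value}")
```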

Access

Access as a communication right well illustrates the development of media forms, the expansion of the Finnish media ecosystem, and the increasing complexity of rights as realised in regulatory decisions by the public sector, commercial media, and communications technology providers. After 100 years of independence, Finland is still short of domestic capital and heavily dependent on exports, which makes it vulnerable to economic downturns (OECD, 2018). Interestingly, despite changes to national borders, policies, and technologies over time, it is these geopolitical, demographic, and socioeconomic conditions that have remained relatively unchanged and, in turn, have shaped most of the current challenges to securing access to information and media.

While the right to access in Finland also relates to institutions such as libraries and schools, the operationalisation here focuses on telecommunications, perhaps the most illustrative case of access. Telecommunications were originally introduced in Finland under the Russian Empire; however, the Finnish Senate managed to obtain an imperial mandate for licensing private telephone operations. As a result, the Finnish telephone system developed into a competitive market based on several regional private companies. There was no direct state involvement in the telecommunications business before Finland became independent (Kuusela, 2007).

The licenses of the private telephone operators required them to arrange telephone services in their area that met customers’ needs at reasonable and equal prices. In practice, every company had a universal service obligation (USO) in its licensing area. However, as the recession of the 1930s stopped the development of private telephone companies in the most sparsely inhabited areas, the Finnish state had to step in. The national Post and Telecommunication service eventually played a pivotal role in providing telephone services to the most northern and eastern parts of Finland (Moisala, Rahko, & Turpeinen, 1977).

Access to the fixed telephone network improved gradually until the early 1990s, when about 95% of households had at least one telephone in use. However, the number of mobile phone subscriptions surpassed the number of fixed line telephone subscriptions as early as 1999, and an increasing share of households gave up the traditional telephone completely. As a substitute for the fixed telephone, mobile phones were seen in Finland in the late 1990s as the best way to bring communication “into every pocket” (Silberman, 1999). Contrary to the ideal of the epistemic commons, the official government broadband strategy was based much more on market-led development and mobile networks than, for example, in Sweden, where the government made more public investments in building fixed fibre-optic connections (Eskelinen, Frank, & Hirvonen, 2008). Finland also gave indirect public subsidies to mobile broadband networks (Haaparanta & Puhakka, 2002). While the rest of Europe had started to auction mobile spectrum (Sims, Youell, & Womersley, 2015), in Finland the operators received all mobile frequencies for free until 2013.

The European regulation of USOs in telecommunications has been designed to set a relatively modest minimum level of telephone services at an affordable price, which could be implemented in a traditional fixed telephone network. Any extensions for mobile or broadband services have been deliberately omitted (Wavre, 2018). However, the Universal Service Directive (2002/22/EC) lets the member states use both fixed and wireless mobile network solutions for USO provision. In addition, while the directive suggests that users should be able to access the internet via the USO connection, it does not set any minimum bitrate for connections in the common market.

Finland amended its national legislation in 2007 to let the telecom operators meet their universal service obligations using mobile networks. The results were dramatic, as operators quickly replaced large parts of the fixed telephone network with a mobile network, especially in the eastern and northern parts of Finland. Today, less than 10% of households have fixed telephones. At the same time, there are almost 10 million mobile subscriptions in use in a country with 5.5 million inhabitants. Less than 1% of households do not have any mobile phones at all (Statistics Finland, 2017). Thanks to the 3G networks using frequencies the operators had obtained for free, Finland became a pioneer in making affordable broadband a legal right. Reasonably priced access to broadband internet from home has been part of the universal service obligation in Finland since 2010. However, the USO broadband speed requirement (2 Mbps) is rather modest by contemporary standards.

It is obvious that since the 1990s, Finland has not systematically addressed access as a basic right, but rather as a tool to reach political and economic goals. Although about 90% of households already have internet access, only 51% of them have access to ultra-fast fixed connections. Almost one-third of Finnish households are totally dependent on mobile broadband, which is the highest share in the EU. To guarantee access to 4G mobile broadband throughout the country, the Finnish government licensed two operators, Finnish DNA and Swedish Telia, to build and operate a new, shared mobile (broadband) network in the northern and eastern half of Finland. Despite recent government efforts to also develop ultra-fast fixed broadband, Finland is currently lagging behind other EU countries. A report monitoring the EU initiative “A Digital Agenda for Europe” (European Court of Auditors, 2018) found that Finland ranks only 22nd in terms of progress towards universal coverage with fast broadband (> 30 Mbps) by 2020. In contrast, another Nordic Media Welfare State, Sweden, with its ongoing investments in citizens’ access to fast broadband, expects all households to have access to at least 100 Mbps by 2020 (European Court of Auditors, 2018).

Availability

As a communication right, availability is the counterpart not only to access but also to dialogical rights and privacy. Availability refers to the abundance, plurality, and diversity of factual and cultural content to which citizens may equally expose themselves. Importantly, despite an apparent abundance of available content in the current media landscape, digitalisation does not translate into limitless availability, but rather implies new restrictions and conditions as well as challenges stemming from disinformation. Availability both overcomes many traditional boundaries and faces new ones, many pertaining to ownership and control over content. For instance, public service broadcasting no longer self-evidently caters for availability, and media concentration may also affect it. In Finland, one specific question of availability and communication pertains to linguistic rights. Finland has two official languages, which implies additional demands for availability both in Finnish and in Swedish, alongside Sami and other minority languages. These rights are guaranteed in a special Language Act, but are also included in several other laws, including the law on public service broadcasting.

Here, availability is examined primarily through overall trends in free speech and access to information in Finland, as well as from the perspective of copyright and paywalls in particular. Availability is framed and regulated from the international and supranational level (e.g., the European Union) to the national level. Availability at the national level relies on the constitutionally safeguarded freedom of expression and access to information as well as fundamental cultural and educational rights. Freedom of the press and the principle of publicity date back to 18th-century Sweden-Finland. After periods of censorship and “Finlandization”, the basic tenet has been a ban on prior restraint, notwithstanding measures required to protect children in the audio-visual field (Neuvonen, 2005; 2018). Finland later became a contracting party to the European Convention on Human Rights (ECHR) in 1989, linking the country closely to the European tradition. However, in Finland, privacy and freedom of expression were long balanced in favour of the former, departing somewhat from ECHR standards and affecting media output (Tiilikka, 2007).

Regarding transparency and publicity in the public sector, research has shown that Finnish municipalities, in general, are not truly active in catering to citizens’ access to information requests, and that there is inequality across the country (Koski & Kuutti, 2016). This is in contrast to the ideals of the Nordic Welfare State (Syvertsen et al., 2014). In response, the civil society group Open Knowledge Finland has created a website that publishes information requests and guides people in submitting their own.

The digital environment is conducive to restrictions and requirements stemming from copyright and personal data protection, both of which have an effect on availability. The “right to be forgotten”, for example, enables individual requests to remove links from search results, thus affecting searchability (Alén-Savikko, 2015). To overcome a particular copyright challenge, new provisions were tailored in Finland to enable online video recording services, thereby allowing people to access TV broadcasts at more convenient times in a manner that transcends traditional private copying practices. The Finnish solution rests partly on the Nordic approach to so-called extended collective licensing (ECL), which was originally developed to serve the public interest in the field of broadcasting. Collective management organisations are able to license such use not only on behalf of their members but with an extended effect (i.e., they are regarded as representative of non-members as well), while TV companies license their own rights (Alén-Savikko & Knapstad, 2019; Alén-Savikko, 2016).

Alongside legal norms, different business models frame and construct the way availability presents itself to citizens. Currently, pay-per-use models and paywalls feature in the digital media sector, although pay TV development in particular has long been moderate in Finland (Ministry of Transport and Communications, 2014a). With new business models, availability transforms into conditional access, while equal opportunity turns into inequality based on financial means. From the perspective of individual members of the public, the one-sided emphasis on consumer status is in direct opposition to the ideals of the epistemic commons and the Nordic Media Welfare State.

Dialogical rights

Access and availability are prerequisites for dialogical rights. These rights can be operationalised as citizens’ possibilities and realised activities to engage in dialogue that fosters democratic decision-making. Digital technology offers new opportunities for participation: in dialogues between citizens and the government; in dialogues with and via legacy media; and in direct, mediated peer-to-peer communication that can amount to civic engagement.

Finland has a long legacy of providing equal opportunities for participation, for instance as the first country in Europe to establish universal suffrage in 1906, when still under the Russian Empire. After reaching independence in 1917, Finland implemented its constitution in 1919. The constitution secures freedom of expression, while also stipulating that public authorities shall promote opportunities for the individual to participate in societal activity and to influence the decisions that concern him or her.

Currently, a dozen laws support dialogical rights, ranging from the Election Act and the Non-Discrimination Act to the Act on Libraries. Several of them address media organisations, including the Finnish Freedom of Expression Act (FEA), which safeguards individuals’ right to report and make a complaint about media content, and the Act on Yleisradio (public broadcasting), which stipulates the organisation’s role in supporting democratic participation.

Finland seems to do particularly well in providing internet-based opportunities for direct dialogue between citizens and their government. These efforts began, as elsewhere in Europe, in the 1990s (Pelkonen, 2004). The government launched a public engagement programme, followed in the subsequent decade by two other participation-focused programmes (Wilhelmsson, 2017). While Estonia is the forerunner in all types of electronic public services, Finland excels in the Nordic model of combining e-governance and e-participation initiatives: it currently features a number of online portals for gathering both citizens’ opinions and initiatives, at both the national and municipal levels (Wilhelmsson, 2017).

Still, increasing inequality in the capability for political participation is one of the main concerns in the National Action Plan 2017–2019 (Ministry of Justice, 2017). The country report on the Sustainable Governance Indicators notes that the weak spot for Finland is the public’s evaluative and participatory competencies (Anckar et al., 2018). Some analyses posit that Finnish civil society is simply not very open to diverse debates, in contrast to the culture of public dialogue in Sweden (Pulkkinen, 1996). While Finns are avid news followers who trust the news and are more likely to pay for online news than news consumers in most countries (Reunanen, 2018), participatory possibilities do not entice them very much. Social media are not widely used for political participation, even by young people (Statistics Finland, 2017), and, for example, Twitter remains a forum for dialogues between the political and media elite (Eloranta & Isotalus, 2016).

The most successful Finnish e-participation initiative is based on a 2012 amendment to the constitution that has made it possible for citizens to submit initiatives to Parliament. One option to do so is via a designated open source online portal. An initiative will proceed to Parliament if it has collected at least 50,000 statements of support within six months. By 2019, the portal had accrued almost 1,000 proposals, 24 of which had proceeded to be discussed in Parliament, and two related laws had been passed. Research shows, however, that many other digital public service portals remain unknown to Finns (Wilhelmsson, 2017).
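As a compact restatement of the threshold rule just described, the hypothetical sketch below checks whether an initiative would proceed to Parliament; the function name, the 183-day approximation of six months, and the omission of signature verification are our simplifications.

```python
from datetime import date, timedelta

# Thresholds as described above for the Finnish citizens' initiative portal:
# at least 50,000 statements of support collected within six months.
SUPPORT_THRESHOLD = 50_000
COLLECTION_PERIOD = timedelta(days=183)  # rough approximation of six months


def proceeds_to_parliament(opened: date, statements: list[date]) -> bool:
    """Return True if an initiative gathered enough support in time.

    `opened` is the date the initiative was opened for signing; `statements`
    holds the dates of individual statements of support. This simplification
    ignores the verification of signatures that the real process involves.
    """
    deadline = opened + COLLECTION_PERIOD
    in_time = sum(1 for d in statements if opened <= d <= deadline)
    return in_time >= SUPPORT_THRESHOLD


# Example: an initiative opened on 1 March whose support arrived only after the deadline.
print(proceeds_to_parliament(date(2019, 3, 1), [date(2019, 10, 1)] * 60_000))  # False
```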

As Karlsson (2015) has posited in the case of Sweden, public and political dialogues online can be assessed by their intensity, quality, and inclusiveness. The Finnish case shows that digital solutions do not guarantee participation if they are not actively marketed to citizens, and if they do not entail a direct link to decision-making (Wilhelmsson, 2017). While the Finnish portal for citizen initiatives has mobilized some marginalized groups, the case suggests that e-participation can also alienate others, for example older citizens (Christensen et al., 2017). Valuing each and every voice as well as prioritising ways to do so over economic or political priorities (Couldry, 2010) or the need to govern effectively (Nousiainen, 2016) could be seen as central to dialogical rights between the citizen and those in the government and public administration.

Privacy

Privacy brings together all the main strands of changes caused by digitalisation: changes in media systems from mass to multimedia; technological advancements; regulatory challenges of converging sectors; and shifting sociocultural norms and practices. It also highlights a shrinking, rather than expanding, space for the right to privacy.

Recent technical developments and the increased surveillance capacities of both corporations and nation states have raised concerns regarding the fundamental right to privacy. While the trends are arguably global, there is a distinctly national logic to privacy rights. This logic coexists with international legal instruments. In the Nordic case, the strong privacy rules exist alongside access to information laws that require the public disclosure of data that would be regarded as intimate in many parts of the world, such as tax records. Curiously, a few years ago, the majority of Finns did not even consider their name, home address, fingerprints, or mobile phone numbers to be personal information (European Union, 2011), and they are still among the most trusting citizens in the EU when it comes to the use of their digital data by authorities (European Union, 2017).

In Finland, the right to privacy is a fundamental constitutional right and includes the right to be left alone, a person’s honour and dignity, the physical integrity of a person, the confidentiality of communications, the protection of personal data, and the right to be secure in one’s home (Neuvonen, 2014). The present slander and defamation laws date back to Finland’s first criminal code from 1889, when Finland was still a Grand Duchy of the Russian Empire. In 1919, the Finnish constitution provided for the confidentiality of communications by mail, telegraph, and telephone, as well as the right to be secure in one’s home—important rights for citizens in a country that had lived under the watchful eye of the Russian security services.

In the sphere of privacy protection, new laws are usually preceded by the threat of new technology (Tene & Polonetsky, 2013); however, in Finland, this was not the case. Rather, the need for new laws reflected a change in Finland’s journalistic culture, which had previously respected the private lives of politicians, business leaders, and celebrities. The amendments were dubbed “Lex Hymy” (Act 908/1974) after one of Finland’s most popular monthlies, which had evolved into a magazine increasingly focused on scandals.

Many of the more recent rules on electronic communications and personal data are a result of international policies being codified into national legislation, perhaps most importantly the transposition of EU legislation into national law. What is fairly clear, however, is that the state has been seen as the guarantor of the right to privacy since even before Finland was a sovereign nation. The strong role of the state is consistent with the European social model and an increased focus on public service regulation (cf. Venturelli, 2002, p. 80). Nevertheless, the potential weakness of this model is that privacy rights seldom trump the public interest, and public uses of personal data are not as strictly regulated as private ones.

Finland has also introduced legislation that weakens the relatively strong right to privacy. After transposing the ePrivacy Directive guaranteeing the confidentiality of electronic communications into national law, the Finnish Government proposed an amending act that granted businesses and organisations the right to monitor communications metadata within their networks. The act was dubbed “Lex Nokia” after Finland’s leading newspaper published an article that alleged that the Finnish mobile giant had pressured politicians and officials to introduce the new law (Sajari, 2009). While it is difficult to assess to what degree Nokia influenced the contents of the legislation, it is clear that Nokia took the initiative and was officially involved in the legislative process (Jääsaari, 2012).

The Lex Nokia act demonstrates how the state’s public interest considerations might coincide with the economic interests of large corporations to the detriment of the right to privacy. Regardless, Finnish citizens remain more trusting of public authorities, health institutions, banks, and telecommunications companies than most other Europeans (European Union, 2015). It remains to be seen whether this trust in authority will erode as more public and private actors aim to capitalise on the promises of big data. Nothing in recent Eurobarometer surveys (European Union, 2018a, pp. 38–56; European Union, 2018b) indicates that trust in public authorities is in crisis or in steep decline; the same cannot be said for trust in political institutions, which seems to decline by a few percentage points each year in various studies.

Discussion

The promotion of communication rights based on the ideal of the epistemic commons is institutionalised in a variety of ways in Finnish communication policy-making, ranging from traditional public service media arrangements to more recent broadband and open data initiatives. However, understood as equal and effective capabilities, communication rights and the related policy principles of the Nordic Media Welfare State have never been completely or uniformly followed in the Nordic countries.

The analysis of the Finnish case highlights how the ideal of a “Digital Welfare State” falls short in several ways. Policies on access or privacy may focus on economic goals rather than rights. E-participation initiatives promoting dialogical rights do not automatically translate into a capacity or a desire to participate in decision-making. Arguably, the model employed in this article has been built on a specific understanding of which rights and stakeholders are needed to support the ideals of the epistemic commons and the Nordic Media Welfare State. That is why it focuses more on national specificities and less on the impact of supranational and international influences on the national situation. It is obvious that in the current media landscape, national features are challenged by a number of emergent forces, including not only technological transformations but also general trends of globalisation and the declining capacities of nation states to enforce public interest or rights-based policies (Horten, 2016).

Still, more subtle and local manifestations of global and market-driven trends are worth examining to understand different policy options and interpretations. Measurement tools and indicators have been developed and employed for national mapping and monitoring of the state of communication rights, targeting their various components, such as linguistic issues or accessibility. In Finland, this type of approach has been adopted in the field of media and communications policy (Ala-Fossi et al., 2018; Artemjeff & Lunabba, 2018; Ministry of Transport and Communications, 2014b). Recent academic efforts aiming at comparative outlooks (Couldry et al., 2016; Goggin et al., 2017) are indications that communication rights urgently call for a variety of conceptualisations and operationalisations to uncover similarities and differences between countries and regions. As Eubanks (2017) argued, we seem to be at a crossroads: despite our unparalleled capacities for communication, we are witnessing new forms of digitally enabled inequality, and we need to curb these inequalities now, if we want to counter them at all. We may need global policy efforts, but we also need to understand their specific national and supranational reiterations to counter these and other inequalities regarding citizens’ communication rights.

References

Ala-Fossi, M., Alén-Savikko, A., Grönlund, M., Haara, P., Hellman, H., Herkman, J.,…Mykkanen, M. (2018). Media- ja viestintäpolitiikan nykytila ja sen mittaaminen [Current state of media and communication policy and measurement]. Helsinki: Ministry of Transport and Communications. Retrieved February 21, 2019, from http://urn.fi/URN:ISBN:978-952-243-548-4

Alén-Savikko, A. (2015). Pois hakutuloksista, pois mielestä? [Out of the search results, out of mind?]. Lakimies, 113(3-4), 410–433. Retrieved from http://www.doria.fi/handle/10024/126796

Alén-Savikko, A. (2016). Copyright-proof network-based video recording services? An analysis of the Finnish solution. Javnost – The Public, 23(2), 204–219. doi:10.1080/13183222.2016.1162979

Alén-Savikko, A., & Knapstad, T. (2019). Extended collective licensing and online distribution – prospects for extending the Nordic solution to the digital realm. In T. Pihlajarinne, J. Vesala & O. Honkkila (Eds.), Online distribution of content in the EU (pp. 79–96). Cheltenham, UK & Northampton, MA: Edward Elgar. doi:10.4337/9781788119900.00012

Anckar, D., Kuitto, K., Oberst, C., & Jahn, D. (2018). Finland report – Sustainable Governance Indicators 2018. Retrieved March 14, 2018, from https://www.researchgate.net/publication/328214890_Finland_Report_-_Sustainable_Governance_Indicators_2018

Artemjeff, P., & Lunabba, V. (2018). Kielellisten oikeuksien seurantaindikaattorit [Indicators for monitoring linguistic rights] (No. 42/2018). Helsinki: Ministry of Justice Finland. Retrieved from http://julkaisut.valtioneuvosto.fi/bitstream/handle/10024/161087/OMSO_42_2018_Kielellisten_oikeuksien_seurantaindikaattorit.pdf

Castells, M., & Himanen, P. (2002). The information society and the welfare state: The Finnish model. Oxford: Oxford University Press.

Christensen, H., Jäske, M., Setälä, M. & Laitinen, E. (2017). The Finnish Citizens’ Initiative: Towards Inclusive Agenda-setting? Scandinavian Political Studies, 40(4), 411–433. doi:10.1111/1467-9477.12096

Couldry, N. (2010). Why voice matters: Culture and politics after neoliberalism. London: Sage.

Couldry, N., Rodriguez, C., Bolin, G., Cohen, J., Goggin, G., Kraidy, M., … Zhao, Y. (2016). Chapter 13 – Media and communications. Retrieved November 14, 2018, from https://comment.ipsp.org/sites/default/files/pdf/chapter_13_-_media_and_communications_ipsp_commenting_platform.pdf

Eloranta, A., & Isotalus, P. (2016). Vaalikeskustelun aikainen livetwiittaaminen – kansalaiskeskustelun uusi muoto? [Live-tweeting during the election debate – a new form of civic discussion?]. In K. Grönlund & H. Wass (Eds.), Poliittisen osallistumisen eriytyminen: Eduskuntavaalitutkimus 2015 [Differentiation of political participation: Parliamentary election study 2015] (pp. 435–455). Helsinki: Oikeusministeriö.

Eskelinen, H., Frank, L., & Hirvonen, T. (2008). Does strategy matter? A comparison of broadband rollout policies in Finland and Sweden. Telecommunications Policy, 32(6), 412–421. doi:10.1016/j.telpol.2008.04.001

Eubanks, V. (2017). Automating inequality: How high-tech tools profile, police, and punish the poor. New York: St. Martin’s Press.

European Court of Auditors (2018). Broadband in the EU Member States: despite progress, not all the Europe 2020 targets will be met (Special report No. 12). Luxembourg: European Court of Auditors. Retrieved February 22, 2018, from http://publications.europa.eu/webpub/eca/special-reports/broadband-12-2018/en/

European Commission (2011). Attitudes on data protection and electronic identity in the European Union (Special Eurobarometer No. 359). Luxembourg: Publications Office of the European Union. Retrieved November 14, 2018, from http://ec.europa.eu/public_opinion/archives/ebs/ebs_359_en.pdf

European Commission (2015). Data protection (Special Eurobarometer No. 431). Luxembourg: Publications Office of the European Union. Retrieved November 14, 2018, from https://data.europa.eu/euodp/data/dataset/S2075_83_1_431_ENG

European Commission (2017). Europeans’ attitudes towards cyber security (Special Eurobarometer No. 464a). Luxembourg: Publications Office of the European Union. doi:10.2837/82418

European Commission (2018a). Public opinion in the European Union (Standard Eurobarometer No. 89). Luxembourg: Publications Office of the European Union. doi:10.2775/172445

European Commission (2018b). Kansallinen raportti. KansalaismielipideEuroopan unionissa: Suomi [National Report. Citizenship in the European Union: Finland] (Standard Eurobarometer, National Report No. 90). Luxembourg: Publications Office of the European Union. Retrieved from https://ec.europa.eu/finland/sites/finland/files/eb90_nat_fi_fi.pdf

Flew, T., Iosifides, P., & Steemers, J. (Eds.). (2016). Global media and national policies: The return of the state. Basingstoke: Palgrave. doi:10.1057/9781137493958

Goggin, G., Vromen, A., Weatherall, K. G., Martin, F., Webb, A., Sunman, L., & Bailo, F. (2017). Digital rights in Australia (Sydney Law School Research Paper No. 18/23). Sydney: University of Sydney. https://ses.library.usyd.edu.au/bitstream/2123/17587/7/USYDDigitalRightsAustraliareport.pdf

Haaparanta, P., & Puhakka, M. (2002). Johtolangatonta keskustelua: Tunne ja järki huutokauppakeskustelussa [Clueless discussion: Emotion and reason in the auction debate]. Kansantaloudellinen Aikakauskirja, 98(3), 267–274.

Habermas, J. (2006). Political communication in media society: Does democracy still enjoy an epistemic dimension? The impact of normative theory on empirical research. Communication Theory, 16(4), 411–426. doi:10.1111/j.1468-2885.2006.00280.x

Halme, K., Lindy, I., Piirainen, K., Salminen, V., & White, J. (Eds.). (2014). Finland as a knowledge economy 2.0: Lessons on policies and governance (Report No. 86943). Washington, DC: World Bank Group. Retrieved from http://documents.worldbank.org/curated/en/418511468029361131/Finland-as-a-knowledge-economy-2-0-lessons-on-policies-and-governance

Hamelink, C. J. (1994). The politics of world communication. London: Sage.

Horowitz, M., & Nieminen, H. (2016). European public service media and communication rights. In G. F. Lowe & N. Yamamoto (Eds.), Crossing borders and boundaries in public service media: RIPE@2015 (pp. 95–106). Gothenburg: Nordicom. Available at https://gupea.ub.gu.se/bitstream/2077/44888/1/gupea_2077_44888_1.pdf#page=97

Horten, M. (2016). The closing of the net. Cambridge: Polity Press.

International Telecommunication Union. (2018). Measuring the information society report 2018 - Volume 1. Geneva: International Telecommunication Union. Retrieved from: https://www.itu.int/en/ITU-D/Statistics/Pages/publications/misr2018.aspx

Jääsaari, J. (2012). Suomalaisen viestintäpolitiikan normatiivinen kriisi: Esimerkkinä Lex Nokia [The normative crisis of Finnish communications policy: For example, Lex Nokia]. In K. Karppinen & J. Matikainen (Eds.), Julkisuus ja Demokratia [Publicity and Democracy] (pp. 265–291). Tampere: Vastapaino.

Jørgensen, R. F. (2013). Framing the net: The internet and human rights. Cheltenham, UK & Northhampton, MA: Edward Elgar.

Karlsson, M. (2015). Interactive, qualitative, and inclusive? Assessing the deliberative capacity of the political blogosphere. In K. Jezierska & L. Koczanowicz (Eds.), Democracy in dialogue, dialogue in democracy: The politics of dialogue in theory and practice (pp. 253–272). London & New York: Routledge.

Karppinen, K. (2017). Human rights and the digital. In H. Tumber & S. Waisbord (Eds.), Routledge companion to media and human rights (pp. 95–103). London & New York: Routledge. doi:10.4324/9781315619835-9

Koski, A., & Kuutti, H. (2016). Läpinäkyvyys kunnan toiminnassa – tietopyyntöihin Vastaaminen [Transparency in municipal action - responding to requests for information]. Helsinki: Kunnallisalan kehittämissäätiö [Municipal Development Foundation]. Retrieved November 14, 2018, from http://kaks.fi/wp-content/uploads/2016/11/Tutkimusjulkaisu-98_nettiin.pdf

Kuusela, V. (2007). Sentraalisantroista kännykkäkansaan – televiestinnän historia Suomessa tilastojen valossa [From switchboard operators to the mobile phone nation – the history of telecommunications in Finland in the light of statistics]. Helsinki: Tilastokeskus. Retrieved November 14, 2018, from http://www.stat.fi/tup/suomi90/syyskuu.html

MacKinnon, R. (2013). Consent of the networked: The struggle for internet freedom. New York: Basic Books.

MacKinnon, R., Maréchal, N., & Kumar, P. (2016). Global Commission on Internet Governance – Corporate accountability for a free and open internet (Paper No. 45). Ontario; London: Centre for International Governance Innovation; Chatham House. Retrieved from https://www.cigionline.org/sites/default/files/documents/GCIG%20no.45.pdf

Mansell, R. & Nordenstreng, K. (2006). Great Media and Communication Debates: WSIS and the MacBride Report. Information Technologies and International Development, 3(4), 15–36. Available at http://tampub.uta.fi/handle/10024/98193

McIver, W. J., Jr., Birdsall, W. F., & Rasmussen, M. (2003). The internet and the right to communicate. First Monday, 8(12). doi:10.5210/fm.v8i12.1102

McLeod, S. (2018). Communication rights: Fundamental human rights for all. International Journal of Speech-Language Pathology, 20(1), 3–11. doi:10.1080/17549507.2018.1428687

Ministry of Finance, Finland. (2018, May 23). Digital Economy and Society Index: Finland has EU's best digital public services. Helsinki: Ministry of Finance. Retrieved February 28, 2019, from https://vm.fi/en/article/-/asset_publisher/digitaalitalouden-ja-yhteiskunnan-indeksi-suomessa-eu-n-parhaat-julkiset-digitaaliset-palvelut

Ministry of Justice, Finland. (2017). Action plan on democracy policy. Retrieved February 28, 2019, from https://julkaisut.valtioneuvosto.fi/bitstream/handle/10024/79279/07_17_demokratiapol_FI_final.pdf?sequence=1

Ministry of Transport and Communications, Finland. (2014a). Televisioala Suomessa: Toimintaedellytykset internetaikakaudella [Television industry in Finland: Operating conditions in the Internet era] (Publication No. 13/2014). Helsinki: Ministry of Transport and Communications. Retrieved from http://urn.fi/URN:ISBN:978-952-243-398-5

Ministry of Transport and Communications, Finland. (2014b). Viestintäpalveluiden esteettömyysindikaattorit [Accessibility indicators for communication services] (Publication No. 36/2014). Helsinki: Ministry of Transport and Communications. Retrieved from http://urn.fi/URN:ISBN:978-952-243-437-1

Moisala, U. E., Rahko, K., & Turpeinen, O. (1977). Puhelin ja puhelinlaitokset Suomessa 1877–1977 [Telephone and telephone companies in Finland 1877–1977]. Turku: Puhelinlaitosten Liitto ry.

Neuvonen, R. (2005). Sananvapaus, joukkoviestintä ja sääntely [Freedom of expression, media and regulation]. Helsinki: Talentum.

Neuvonen, R. (2014). Yksityisyyden suoja Suomessa [Privacy in Finland]. Helsinki: Lakimiesliiton kustannus.

Neuvonen, R. (2018). Sananvapauden historia Suomessa [The History of Freedom of Expression in Finland]. Helsinki: Gaudeamus

Nieminen, H. (2019). Inequality, social trust and the media: Towards citizens’ communication and information rights. In J. Trappel (Ed.), Digital media inequalities: Policies against divides, distrust and discrimination (pp. 43–66). Gothenburg: Nordicom. Available at https://norden.diva-portal.org/smash/get/diva2:1299036/FULLTEXT01.pdf#page=45

Nieminen, H. (2016). Communication and information rights in European media policy. In L. Kramp, N. Carpentier, A. Hepp, R. Kilborn, R. Kunelius, H. Nieminen, T. Olsson, T. Pruulmann-Vengerfeldt, I. Tomanić Trivundža, & S. Tosoni (Eds.), Politics, civil society and participation: media and communications in a transforming environment (pp. 41–52). Bremen: Edition lumière. Available at: http://www.researchingcommunication.eu/book11chapters/C03_NIEMINEN201516.pdf

Nieminen, H. (2014). A short history of the epistemic commons: Critical intellectuals, Europe and the small nations. Javnost – The Public, 21(3), 55–76. doi:10.1080/13183222.2014.11073413

Nieminen, H. (2013). European broadband regulation: The “broadband for all 2015” strategy in Finland. In M. Löblich & S. Pfaff-Rüdiger (Eds.), Communication and media policy in the era of the internet: Theories and processes (pp. 119–133). Munich: Nomos. doi:10.5771/9783845243214-119

Nieminen, H. (2010). The European public sphere and citizens’ communication rights. In I. Garcia-Blanco, S. Van Bauwel, & B. Cammaerts (Eds.), Media agoras: Democracy, diversity, and communication (pp. 16–44). Newcastle upon Tyne, UK: Cambridge Scholars Publishing.

Nousiainen, M. (2016). Osallistavan käänteen lyhyt historia [A brief history of a participatory turn]. In M. Nousiainen & K. Kulovaara (Eds.), Hallinnan ja osallistamisen politiikat [Governance and Inclusion Policies] (pp. 158-189). Jyväskylä: Jyväskylä University Press. Available at https://jyx.jyu.fi/bitstream/handle/123456789/50502/978-951-39-6613-3.pdf?sequence=1#page=159

OECD. (2018). OECD economic surveys: Finland 2018. Paris: OECD Publishing. doi:10.1787/eco_surveys-fin-2018-en

Padovani, C., & Calabrese, A. (Eds.) (2014). Communication Rights and Social Justice. Historical Accounts of Transnational Mobilizations. Cham: Springer / Palgrave Macmillan. doi:10.1057/9781137378309

Pelkonen, A. (2004). Questioning the Finnish model – Forms of public engagement in building the Finnish information society (Discussion Paper No. 5). London: STAGE. Retrieved November 14, 2018, from http://lincompany.kz/pdf/Finland/5_ICTFinlandcase_final2004.pdf

Pulkkinen, T. (1996). Snellmanin perintö suomalaisessa sananvapaudessa [Snellman's legacy in Finnish freedom of speech]. In K. Nordenstreng (Ed.), Sananvapaus [Freedom of Expression] (pp. 194–208). Helsinki: WSOY.

Redeker, D., Gill, L., & Gasser, U. (2018). Towards digital constitutionalism? Mapping attempts to craft an Internet Bill of Rights. International Communication Gazette, 80(4), 302–319. doi:10.1177/1748048518757121

Reporters without Borders (2018). 2018 World Press Freedom Index. Retrieved February 28, 2019, from: https://rsf.org/en/ranking

Reunanen, E. (2018). Finland. In N. Newman, R. Fletcher, A. Kalogeropoulos, D. A. L. Levy, & R. K. Nielsen (Eds.), Reuters Institute digital news report 2018 (pp. 77–78). Oxford: Reuters Institute for the Study of Journalism.

Sajari, P. (2009). Lakia vahvempi Nokia [Nokia, stronger than the law]. Helsingin Sanomat.

Shahbaz, A. (2018). Freedom on the net 2018: The rise of digital authoritarianism. Washington, DC: Freedom House. Retrieved February 28, 2019, from https://freedomhouse.org/sites/default/files/FOTN_2018_Final%20Booklet_11_1_2018.pdf

Silberman, S. (1999, September). Just say Nokia. Wired Magazine.

Sims, M., Youell, T., & Womersley, R. (2015). Understanding spectrum liberalisation. Boca Raton, FL: CRC Press.

Statistics Finland (2017). Väestön tieto- ja viestintätekniikan käyttö 2017 [Use of information and communications technology by the population 2017]. Helsinki: Official Statistics of Finland. Retrieved February 28, 2019, from https://www.stat.fi/til/sutivi/2017/13/sutivi_2017_13_2017-11-22_fi.pdf

Syvertsen, T., Enli, G., Mjøs, O., & Moe, H. (2014). The media welfare state: Nordic media in the digital era. Ann Arbor: University of Michigan Press.

Tene, O., & Polonetsky, J. (2013). A theory of creepy: Technology, privacy and shifting social norms. Yale Journal of Law & Technology, 16, 59–102. Available at: https://yjolt.org/theory-creepy-technology-privacy-and-shifting-social-norms

Tiilikka, P. (2007). Sananvapaus ja yksilön suoja: lehtiartikkelin aiheuttaman kärsimyksen korvaaminen [Freedom of speech and protection of the individual: compensation for suffering caused by a newspaper article]. Helsinki: WSOYpro.

Venturelli, S. (2002). Inventing e-regulation in the US, EU and East Asia: Conflicting social visions of the information society. Telematics and Informatics, 19(2), 69–90. doi:10.1016/S0736-5853(01)00007-7

Wallace, N., & Castro, D. (2017). The state of data innovation in the EU. Brussels & Washington, DC: Center for Data Innovation. Retrieved February 28, 2019, from http://www2.datainnovation.org/2017-data-innovation-eu.pdf

Wavre, V. (2018). Policy diffusion and telecommunications regulation. Cham: Springer / Palgrave Macmillan.

Wilhelmsson, N. (2017). Finland: eDemocracy adding value and venues for democracy. In eDemocracy and eParticipation: The precious first steps and the way forward (pp. 25–33). Retrieved February 28, 2019, from http://www.fnf-southeasteurope.org/wp-content/uploads/2017/11/eDemocracy_Final_new.pdf

Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. New York: Public Affairs.

Footnotes

1. The quest for more openness and publicity is a continuation of a long historical development. European modernity is fundamentally based on the assumption that knowledge and culture belong to the common domain and that the process of democratisation necessarily means removing restrictions on the epistemic commons. Aspects such as civic education, universal literacy, and mass media (newspapers; public service broadcasting as a tool for the daily interpretation of the world) are at the heart of this ideal. The epistemic commons reflects the core ideas and ideals of deliberative democracy: at the centre of this view is democratic will formation that is public and transparent, includes everyone and provides equal opportunities for participation, and results in rational consensus (Habermas, 2006). The epistemic commons is thought to facilitate such will formation.

Data and digital rights: recent Australian developments

This paper is part of Practicing rights and values in internet policy around the world, a special issue of Internet Policy Review guest-edited by Aphra Kerr, Francesca Musiani, and Julia Pohle.

Introduction

Digital rights have become a much debated set of issues in a world in which digital communications, cultures, platforms, and technologies are key to social life (Couldry et al., 2018; Hintz, Dencik, & Wahl-Jorgenson, 2019; Isin & Ruppert, 2015).

We see this, for example, in public debates about the widespread application of biometric systems, facial recognition, or mandatory retention of telecommunications data: strategies nominally mobilised by nation states in their pursuit of information about terrorist threats, but also used in controlling political dissidence. Similarly, the widespread involvement of non-state actors in the capture, analysis, and trade of personal information has heightened public fears about how corporate use of their data might affect their access to information, goods, and services, and has also prompted questions about discriminatory applications of automated decision-making (Eubanks, 2018). Increasingly, too, the linkage and use of data by governments in decision-making, and the links between state and non-state actors in the collection, use, and sharing of data, elicit concerns relating to power and inequality. Governments are using data beyond the security context, and are also intimately connected with the collection and use of data by private actors (including the sharing of data with third parties).

Globally and locally, it has proven difficult for citizens to propel their governments to take action, especially given the increasingly complex interplay among national (and sub-national), regional, and global laws, policies, and innovation systems when it comes to the internet and associated technologies. Outcomes for consumers, citizens, civil society, business, and institutions should, at least in theory, be highly influenced by the kinds of fundamental human rights set out in longstanding international frameworks, policed (or not policed) by institutions such as the United Nations, and set out in national charters of rights and rights-promoting national legislation. But both national and international institutions have been slow to grapple with and enact aspects of digital rights, even as governments and non-state actors take actions that restrict or undermine those rights. The technologies themselves have, however, facilitated some counterbalance to this effect through the growth of new rights advocacy organisations and models enabled by digital platforms, such as the US-based international group Access Now (Solomon, 2018).

Adding to the challenges are the decisive roles played in communications and media by non-state governance and regulation arrangements, such as the community standards and terms of service of digital platforms, which now shape global content regulation on social media channels. There are risks that these arrangements will protect existing power relations and deflect, or make more difficult, the activation of digital rights in the context of data tracking, collection, and trading that is pervasive, embedded, and automated in everyday life by digital systems.

All in all, digital rights have long and entangled genealogies, as well as challenges (Liberty, 1999). Little surprise, then, that the turn to digital rights has been roundly critiqued for its incoherent and partial nature. In his notable paper, for instance, Kari Karppinen argues that the umbrella concept of “digital rights” falls short of being a coherent framework. Rather, Karppinen suggests, digital rights amount to a diverse set of debates, visions, and perspectives on the process of contemporary media transformations (Karppinen, 2017). He proposes that we approach digital rights as “emerging normative principles for the governance of digital communication environment[s]” (Karppinen, 2017, p. 96).

Reflecting on this suggestion, we imagine that such normative principles are likely to come from existing human rights frameworks, as well as emergent conceptions and practices of rights. Some especially important issues in this regard, which theorists, activists, policymakers, and platform providers alike have sought to explore via notions of digital rights, are evolving citizen uses of platforms like Facebook, personal health tracking apps, and state e-health registers and databases, and the associated rights and responsibilities of platform users.

To explore these issues, in 2017, we conducted an Australian study of citizen uses and attitudes in relation to emerging digital technology and rights (Goggin et al., 2017), as part of a larger project on digital rights and governance in Australia and Asia (Goggin et al., 2019). Our study drew on three sources of data: a national survey of the attitudes and opinions of 1,600 Australians on key rights issues; focus group discussions of related rights scenarios; and analysis of legal, policy, and governance issues (Goggin et al., 2017).

In summary, our study showed that the majority of respondents are concerned about their online privacy, including in the relatively new area of digital privacy at work. A central issue for a very high proportion of the respondents we surveyed is control. Their concerns regarding control are not sufficiently addressed by the privacy settings and options available to them. An underlying issue appears to be a lack of knowledge about what platforms and other core actors (such as corporations and governments) do with internet users’ information, and a consequent absence of any sense of control. Our findings showed considerable concern about individual privacy and data protection, and about the adequacy of responses by technology corporations and governments (cf. the key report by Digital Rights Watch, 2018).

Like other studies nationally and internationally (OAIC, 2017; Ofcom, 2018; Pew 2016; Center for the Digital Future, 2017), these findings lend firm support to the need for better policy and design frameworks and practices to respond to such concerns.

Following hard on the heels of our research, 2018–2019 brought successive waves of revelations and debates about data privacy breaches. Key among these was the Cambridge Analytica/Facebook exposé (Cadwalladr & Graham-Harrison, 2018; Isaak & Hanna, 2018), but many other well-publicised and controversial issues have also been raised by the data collection and sharing practices of corporations and governments, by surveillance practices, and by the lack of effective safeguards or accountability mechanisms for citizens.

In mid-2018, expectations were raised around the world by the implementation of the European General Data Protection Regulation (GDPR), with many hoping that this law would have a decisive influence on corporate policies and practices internationally, including in jurisdictions outside the direct orbit of European polity, law, and governance.

Against this backdrop, in this paper, we reflect upon subsequent developments in Australia in data privacy rights.

In the first part of the paper, we discuss Australian policy in comparison to the European and international developments. In the second part, we discuss two contemporaneous and novel Australian policy developments initiated by the national government: a Digital Platforms Inquiry; and the development of a consumer data right.

Both policy initiatives seek to grapple with the widening pressure to provide better public domain information, fair and effective options for users to exercise choice over how they configure technologies, strengthened legal frameworks, enhanced rights, and better avenues for redress. Both also illustrate the uniquely challenging environment for digital rights in Australia.

Australian digital rights, privacy, and data protection in international context

The concept of rights has a long, complex, and rich set of histories across politics, law, philosophy, and ethics, to mention just a few key domains. Shortly after the 70th anniversary of the United Nations Universal Declaration of Human Rights in 2018, it is evident that the very idea of rights remains strongly contested from a wide range of perspectives (Blouin-Genest, Doran, & Paquerot, 2019; Moyn, 2018). The recognition of certain rights is shaped by cultural, social, political, and linguistic dynamics, as well as particular contexts and events (Erni, 2019; Gregg, 2012; Hunt, 2007; Moyn, 2010).

The way that we acknowledge, defend or pursue rights — our contemporary rights “setting” — has also been shaped by the heritage of this concept in international relations as well as local contexts, and the pivotal role that rights instruments, language and discourses, practices, and struggles play in our economic, political, and social arrangements (Gregg, 2016; López, 2018). In each country, there are particular histories, arrangements, and challenges concerning rights. In relation to our Australian setting, there is a fundamental threshold issue about the constitutional and legal status of rights (Chappell, Chesterman, & Hill, 2009; Gerber & Castan, 2013). As often observed, Australia lacks an explicit, overarching constitutional or legal framework enunciating and safeguarding rights — a gap that has led many over recent years to propose a national bill of rights (Byrnes, 2009; Erdos, 2010), and led three intermediate governments, the Victorian, Australian Capital Territory, and most recently (in 2019) Queensland governments, to develop their own human rights charters.

The Australian setting is interesting to data privacy scholarship for a range of reasons, including its status as an ambiguously placed nation across global North and South (Gibson, 1992; Mann & Daly, 2018), and between West and East (Goggin, 2008; Keating, 1996). It stands as proof that protection for human rights is not inevitable, even in a Western liberal democracy. The absence of a bill of rights or equivalent to the European Convention on Human Rights in Australia has significant implications in this context. Not least, it arguably diminishes the quality of the discussion about rights, because, for instance, it means that Australia lacks opportunities for measured judicial consideration of acts that may breach human rights, or questions regarding the proportionality or trade-off to be drawn between, for example, national security and privacy (Mann et al., 2018). That leaves researchers, institutions, and the wider society –– including the public –– with a relatively impoverished rights discussion that is skewed by the political considerations of the day and the views of advocacy groups on all sides.

The Australian case is of particular relevance to the UK going forward, and to understanding the data privacy rights evolution of kindred ‘Westminster’ democracies (Erdos, 2010). The UK has a Human Rights Act, and it has some teeth; however, it lacks any constitutional bill of rights (Hunt, 2015; Kang-Riou, Milner, & Nayak, 2012), although such a bill has been a longstanding proposal from some actors (Blackburn, 1999), including the Conservative Party during the 2015 UK General Election. Up to now, however, it has been possible to challenge actions in the UK via EU institutions, and the UK has been bound by specific instantiations of rights in detailed EU instruments. Owing to Brexit, the existing UK human rights arrangements look likely to become unmoored from at least some judicial systems of Europe (subject to the shape of the final arrangements) (Gearty, 2016; Young, 2017, pp. 211-254).

Notably, in recent times, it is the proactive European Union (EU) response that has gained widespread attention (Daly, 2016b). Data protection is enshrined in the Treaty on the Functioning of the EU (Article 16). The fundamental right to the protection of personal data is also explicitly recognised in Article 8 of the 2000 Charter of Fundamental Rights of the European Union, alongside the general right to respect for ‘private and family life, home and communications’ (Article 7). The EU’s new GDPR (European Union, 2016) took effect in May 2018. The GDPR ‘seeks to harmonise the protection of fundamental rights and freedoms of natural persons in respect of processing activities and to ensure the free flow of personal data between [EU] Member States’ (Recital 3). In part, the GDPR represents an important early effort to address the implications of large-scale data analytics and automated processing and decision-making. The implications of the GDPR for citizen rights are so far untested in the courts, but the implementation of the law has provided a focal point for a sustained academic, policy, and industry discussion of automated data processing in Europe.

In the area of privacy and data protection, Europe has played an important normative role (Voloshin, 2014) in Australian debates (Stats, 2015), because of its leadership in this area and the involvement of many Australian researchers, jurists, parliamentarians, policy-makers, and industry figures in engagement with European actors and trends (Calzada, 2018; Poullet, 2018; Stalla-Bourdillon, Pearce, & Tsakalakis, 2018; Vestoso, 2018). Recently, the EU’s expanded emphasis on its external policy portfolio, and its capacity to serve as a more “joined-up global actor”, has been theorised as a kind of “new sector diplomacy” (Damro, Gstöhl & Schunz, 2017). Already the GDPR has some global effect –– introducing compliance obligations for international organisations or businesses based outside the EU that have an establishment in the EU, that offer goods and services in the EU, or that monitor or process data about the behaviour of individuals in the EU.
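To make the reach of these criteria concrete, the following hypothetical sketch turns the summary above of the GDPR's extraterritorial triggers into a simple check; the `Organisation` fields and function name are ours, and the real territorial-scope rules in Article 3 are considerably more nuanced than a boolean test.

```python
from dataclasses import dataclass


@dataclass
class Organisation:
    """A hypothetical, simplified profile of an organisation's EU-related activities."""
    has_eu_establishment: bool
    offers_goods_or_services_in_eu: bool
    monitors_behaviour_of_eu_individuals: bool


def gdpr_obligations_likely_apply(org: Organisation) -> bool:
    """First-pass check mirroring the extraterritorial criteria summarised above.

    Illustrative only, not legal advice: the GDPR's territorial-scope rules
    (Article 3) contain qualifications that this check cannot capture.
    """
    return (
        org.has_eu_establishment
        or org.offers_goods_or_services_in_eu
        or org.monitors_behaviour_of_eu_individuals
    )


# Example: a retailer with no EU office that ships goods to EU customers.
print(gdpr_obligations_likely_apply(Organisation(False, True, False)))  # True
```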

Another route for this strong influence has been via joint efforts under the auspices of the OECD. A watershed here was the creation and adoption of the OECD Guidelines on the Protection of Privacy and Transborder Flows of Personal Data (OECD, 1980/2013). The distinguished Australian High Court judge and law reformer Justice Michael Kirby was the Chairman of the OECD Expert Groups on Privacy (1978–80) and Data Security (1991–2). He notes that the OECD Guidelines, and the privacy principles they contain, “profoundly influenced” the foundational 1988 Australian Privacy Act that remains in force today (Kirby, 1999). More recently, there is abundant evidence of the influence of European law and policy reform on the wider region, as documented in the work of the Australian privacy scholar Graham Greenleaf, notably his comparative study of Asian data privacy laws (Greenleaf, 2014).

Europe is thus a lodestone and an internationally respected point of reference for privacy and data protection, including in Australia. Yet European developments also offer a stark contrast to the situation in Australia, where, in recent years, law-makers have been slow to respond to expressions of citizen and user data privacy concerns (Daly, 2016a; Daly, 2018).

State of play of data privacy in Australia: a snapshot

Australian privacy law is the result of both legislation and the common law. There is no right to privacy enshrined in the Constitution. Information collection and processing by government and by larger private sector players is governed by the Privacy Act 1988 (Cth) and a range of state and territory legislation. These instruments do not, however, provide an enforceable right to privacy. The Privacy Act includes 13 Australian Privacy Principles (APPs) that impose obligations on government and private sector organisations (with some important exclusions) when collecting, handling, storing, using, and disclosing personal information, and certain rights for individuals to access and correct personal information. The Privacy Principles place more stringent obligations on entities that handle “sensitive information” about an individual, including information about their health and biometric data, racial or ethnic origin, political opinions and memberships, religious beliefs or affiliation, sexual orientation, and criminal record. Both the current Australian legal framework and the terms and conditions applied by online platforms are based on a model of notice and consent: notification that personal information is being collected, and consent from users to that collection and use. Yet as our 2017 study indicated, even where citizens may have assented to the collection of their data, and may be taking active steps to protect their privacy, they remain worried that they lack knowledge of its potential uses and control over the acquisition of their personal information.

Australians, however, have no direct right to sue for a breach of the principles, only rights to complain: first to the organisation involved or, if there is no satisfactory response, to the Office of the Australian Information Commissioner (OAIC). For its part, the OAIC’s powers include “investigating individuals' complaints [in the second instance] and commencing Commissioner initiated investigations, making a determination about breaches of privacy, and applying to the Federal Court for a civil penalty order for serious or repeated interferences with privacy” (OAIC, 2018). The role, powers, and resourcing of the OAIC, and its failure to take enforcement actions, have been the subject of considerable criticism (Australian Privacy Foundation, 2018; Daly, 2017).

Australians’ rights against unwanted intrusions on seclusion, or the unwanted revelation of private information, are also limited. The appellate courts in Australia do not currently recognise any civil cause of action for invasion of privacy, although the High Court has left open the possibility of developing one (Daly, 2016a). There is some potential to seek remedies for serious invasions of privacy through other legal mechanisms, such as legal rights to prevent physical invasion or surveillance of one’s home, rights against defamation or the disclosure of confidential information, or even copyright law (ALRC, 2014). Proposals from the Australian Law Reform Commission to recognise a statutory cause of action have not been acted on (Daly, 2016a).

None of these various Australian legal regimes has responded to broader shifts in the capacity to gather data at a larger scale, to link datasets, to analyse and find patterns in data, and to use such capacities to draw inferences about people or to tailor, at an ever more fine-grained level, what people see or the decisions that are made about them (despite relatively recent 2014 reforms, cf. von Dietze & Allgrove, 2014). For now, Australians’ hope of some data protection may be indirect, via the rising tide of the GDPR and European-influenced international frameworks.

As charted by Australian privacy law expert and advocate Professor Graham Greenleaf, an effective global standard is also emerging, due to the widespread adoption of standards in accordance with Data Protection Convention 108/108+ (COE, 2018; Greenleaf, 2018a, 2018b), the Council of Europe data protection convention that includes many of the GDPR requirements, what Greenleaf terms “GDPR-lite” (Greenleaf, 2018a; Kemp, 2018). Article 27 makes the Convention open to “any State around the globe complying with its provisions” (COE, 2018, clause 172, p. 32).

In October 2018, Joseph A. Cannataci, the UN Special Rapporteur on the right to privacy, recommended that member states of the United Nations “be encouraged to ratify data protection Convention 108+ ….[and] implement the principles contained there through domestic law without undue delay, paying particular attention to immediately implementing those provisions requiring safeguards for personal data collected for surveillance and other national security purposes” (Cannataci, 2018, recommendation 117.e). This recommendation chimes with the positions taken by Australian privacy and civil society groups, and it will be interesting to see if it is picked up by the Australian Human Rights Commission inquiry currently underway on technology and human rights, expected to report in 2019 (AHRC, 2018).

The policy and legal lacunae in Australia become evident when governments and corporations are in a tense dance to reconcile their interests, in order to make the market in consumer data sharing and collection work smoothly and to promote innovation agendas in IT development. At the heart of the contemporary power, technology, and policy struggles over data collection and use are citizen and user disquiet and lack of trust in the systems that would provide protection and safeguards, and secure privacy and data rights.

In Australia, there have been a range of specific incidents and controversies that have attracted significant criticism and dissent from activist groups. Concerns have been raised, in particular, by policy initiatives such as moves to facilitate broader government data sharing among agencies, as well as wider security-oriented reforms centring on facial recognition (Mann et al., 2018). One of the most controversial initiatives was the botched 2017 introduction of a national scheme called “My Health Record” to collect patient data and make it available to health practitioners (Smee, 2018). Such was the widespread opposition that, by February 2019, approximately 2.5 million Australians (of a population of 25 million) had chosen to opt out (Knaus, 2019).

Two approaches to digital rights

As we have indicated, while there is a groundswell of concern and continuing activism on digital rights issues in Australia, there is no real reform of general privacy and data protection laws afoot. Instead, privacy is being addressed at a legislative level in a piecemeal way, with tailored rules being included in legislation for specific, data-related policy initiatives. Two interesting and significant initiatives are underway that could, if implemented properly, make important contributions to better defining and strengthening privacy and data rights.

Digital Platforms Inquiry

One important force for change is the Digital Platforms Inquiry being undertaken by the general market regulator, the Australian Competition and Consumer Commission (ACCC).

Established in December 2017, when the then Treasurer (and later Prime Minister), the Hon Scott Morrison MP, directed the ACCC to conduct it, the Digital Platforms Inquiry was first and foremost focused on the implications for news and journalistic content of the emergence of online search engines, social media, and digital content aggregators.

In its preliminary report, released in December 2018, the ACCC gave particular attention to Google and Facebook, noting their reliance on consumer attention and consumer data for advertising revenues as well as the “substantial market power” both companies hold in the Australian market (ACCC, 2018b, p. 4).

What is especially interesting in the ACCC’s preliminary report and its public discussion is the salience given to issues of consumer data collection and consumers’ awareness of these practices and their implications (note the framing of Australians as consumers, rather than citizens, a point to which we return below). The ACCC found that consumers were troubled by the scale and scope of platform data collection. It noted that they “are generally not aware of the extent of data that is collected nor how it is collected, used and shared by digital platforms” (p. 8), due to the length, complexity, and ambiguity of platform terms of service and privacy policies, and that they had little bargaining power compared to platforms, which largely set the terms of information collection, use, and disclosure on a bundled or ‘take it or leave it’ basis (p. 8). Reflecting on this, the ACCC argued that this information asymmetry and power imbalance had negative implications for people’s capacity to give meaningful consent and exercise choice (ACCC, 2018b, p. 8). The ACCC also noted the absence of effective mechanisms for enforcing privacy laws, and cautioned that:

The lack of both consumer protection and effective deterrence under laws governing data collection have enabled digital platforms’ data practices to undermine consumers’ ability to select a product that best meets their privacy preferences. (ACCC, 2018b, p. 8)

The ACCC’s preliminary report proposes various recommendations for legislative and policy change to address issues of market power and to safeguard competition, as well as a set of amendments to the Privacy Act “to better enable consumers to make informed decisions in relation to, and have greater control over, privacy and the collection of personal information” (ACCC, 2018b, p. 13).

Among other things, these recommendations include: strengthening notification requirements for the collection of consumers’ personal information by a platform or third party; requiring that consent be express (and opt-in), adequately informed, voluntarily given, current, and specific; enabling erasure of personal information; increasing penalties for breach; and expanding the resources of the OAIC to scale up its enforcement activities (ACCC, 2018b, pp. 13-14). In addition, the ACCC recommends a new enforceable code of practice, to be developed by key digital platforms and the OAIC, to “provide Australians with greater transparency and control over how their personal information is collected, used and disclosed by digital platforms” (ACCC, 2018b, p. 14). Also notable is a recommendation for the introduction of a statutory cause of action enabling individuals to take action over serious invasions of privacy, “to increase the accountability of businesses for their data practices and give consumers greater control over their personal information” (ACCC, 2018b, p. 14).

With the full report due in mid-2019, and a formal government response to follow, a wide range of actors, including civil society, academia, and industry, have been debating potential regulation of digital platforms. For their part, the affected platform operators Google and Facebook were notably united in their opposition to a new regulator that could ensure greater transparency and oversight in the operation of algorithms that “determine search results and rank news articles in user feeds” (Duke & McDuling, 2019; cf. Ananny & Crawford, 2016; Google, 2018).

The international stakes are also high, as illustrated in the ACCC’s call for its international counterparts to follow its lead in this “world first” inquiry in applying tougher safeguards (Duke & McDuling, 2018; Simons, 2018). Pitted against the digital platform giants are the older media companies, still with significant interests in press, broadcasting, and radio, which support the call for tighter regulation of the ‘digital behemoths’ (Swan, Vitorovich, & Samios, 2019). Clearly, protectionism of existing media market dispensations is to the fore here, rather than protection of citizen rights: these traditional corporate players are very happy to see emergent internet and digital platform companies regulated as if they were media companies, or indeed subjected to extensions of other regulation, such as privacy and data law.

Consumer Data Right: “Data as an Asset”

There has been something of a long-term, bipartisan consensus between the two major political parties, the conservative Liberal/National Party Coalition government and the typically more social democratic Australian Labor Party (ALP, currently in opposition), that, especially when it comes to the internet, telecommunications, social media, and associated digital technologies, “light touch” market-oriented regulation is to be favoured. The dominant position of the ALP is to style itself as pro-market, with an admixture of government intervention and responsive regulation as needed. Hence it has been generally more responsive to calls for privacy and data rights improvements when it comes to abuses by digital platform companies. However, it is extremely reluctant to be seen as “weak” or “soft” on issues of national security, cybersecurity, and fighting terrorism, so it has rarely challenged contentious Coalition laws on metadata and data retention (Suzor, Pappalardo, & McIntosh, 2017). Most recently, in December 2018, the ALP backed down in parliament, withdrawing its proposed amendments to legislation allowing security agencies greater access to encrypted communications (creating “backdoors” in WhatsApp, iMessage, and other “over-the-top” messaging apps) (Worthington & Bogle, 2018). Internationally, this new law was received as an “encryption-busting law that could impact global privacy”, as a Wired magazine report put it (Newman, 2018).

At the direction of the government, the ACCC is also a key player in a second, related yet distinct initiative to better conceptualise and enact one very particular kind of digital right, in the form of a consumer data right. Data generated by consumers in using particular technologies, and their associated products and services, often resides with, and is controlled or even owned by, the company providing them. If consumers cannot access and transfer their data from one provider to another, and especially if they cannot trust a provider to use their data in agreed ways, it is difficult for a competitive market to be effectively established and sustained.

Following an Open Banking Review (Australian Government, 2017) and a Productivity Commission report on Data Availability and Use (Productivity Commission, 2017), the Australian government decided to legislate a Consumer Data Right. The idea of this Consumer Data Right is to “give Australians greater control over their data, empowering customers to choose to share their data with trusted recipients only for the purposes that they have authorised” (Australian Government, 2018):

… [W]e see the future treatment of data as joint property as a healthier foundation for future policy development ... [W]hat is happening today in Australia to treat data as an asset in regulatory terms is a first step in a better foundation for managing both the threat and the benefit [of data collection]. (Harris, 2018)

The Australian consumer data right has its parallels in European developments, such as the data portability right under the GDPR (Esayas & Daly, 2018), although its foundation lies in consumer rights rather than broader digital or human rights. Such a concept of a data right, as something that an individual has ownership of, is clearly bound up with the controversial debates on “data as commodity” (e.g., Nimmer & Krauthaus, 1992; Fuchs, 2012), and indeed with the wide-ranging debate underway about what “good data” concepts and practices might look like (Daly, 2019). The Productivity Commission report, which provides the theoretical basis for the data right, summarises it as follows:

Rights to use data will give better outcomes for consumers than ownership: the concept of your data always being your data suggests a more inalienable right than one of ownership (which can be contracted away or sold). And in any event, consumers do not own their data in Australia. (Productivity Commission, 2017, p. 191)

The consumer would have the “right to obtain a machine-readable copy of their own digital data” (p. 191); however, the “asset” would be joint property:

Consumer data would be a joint asset between the individual consumer and the entity holding the data. Exercise of the Right by a consumer would not alter the ability of the initial data holder to retain and keep using the data. (Productivity Commission, 2017, p. 191)

The government’s plan is to implement the consumer data right initially in the banking, energy, and telecommunications sectors, and then to roll it out economy-wide, sector by sector (Australian Government, 2018). The ACCC was charged with developing the rules for the consumer data right framework (ACCC, 2018a), of which it has released a preliminary version. The consumer data right framework would be nested inside the general privacy protection framework existing in Australia, especially the Privacy Act. This has led to criticism, even from industry participants such as the energy company AGL, that the government should take the opportunity to update and strengthen the existing Privacy Act (for instance, in relation to the Australian Privacy Principles), rather than creating a separate set of privacy safeguards, in effect leading to “twin privacy regimes” that would “complicate compliance as well as the collection of consents for data sharing from consumers” (Crozier, 2019; Department of Prime Minister and Cabinet, 2018).

What is especially interesting in this process is the role that standards play. In the long term, the government has promised the establishment of a Data Standards Body, with an Advisory Committee including representatives of data holders (such as banks, telecommunications, and energy companies), data “recipients” (such as fintech firms), and consumer and privacy advocates. The Data Standards Body would be led by an independent Chair responsible for the selection of the Advisory Committee, as well as for “ensuring appropriate government, process, and stakeholder engagement” (Australian Government, 2018). In the short term, for the first three years, Data61, the digital innovation arm of Australia’s national science agency (https://www.data61.csiro.au/), has been appointed to lead the development of Consumer Data Standards. Some consumer-sensitive work has been conducted in this process. For instance, Data61 conducted research with approximately 80 consumers, releasing a consumer experience report (Data61, 2019, p. 4).
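To make the notion of a machine-readable, consent-scoped data transfer concrete, the following is a minimal sketch in Python using purely hypothetical field names; it is not the actual Consumer Data Standards, whose JSON schemas and API endpoints are defined through the Data61-led process described above. The point is simply that a consumer's record can be serialised in a machine-readable form, and that only the fields covered by an explicit consent are passed to a recipient.

```python
import json

# Hypothetical sketch of a machine-readable consumer data export.
# All field names and values below are illustrative only; the actual
# Consumer Data Standards define their own JSON schemas and API endpoints.
consumer_record = {
    "customerId": "hypothetical-123",
    "accounts": [
        {"accountId": "acc-001", "productType": "transaction", "balance": 1520.75},
        {"accountId": "acc-002", "productType": "savings", "balance": 8800.00},
    ],
    "contact": {"email": "jane@example.com", "phone": "+61 400 000 000"},
}

def build_authorised_payload(record: dict, consented_fields: set) -> str:
    """Return a machine-readable (JSON) copy containing only the fields
    the consumer has consented to share for an authorised purpose."""
    payload = {k: v for k, v in record.items() if k in consented_fields}
    return json.dumps(payload, indent=2)

if __name__ == "__main__":
    # The consumer consents to sharing account data with a new provider,
    # but not their contact details.
    print(build_authorised_payload(consumer_record, {"customerId", "accounts"}))
```

How such consent is obtained, recorded, and revoked in practice is precisely what the draft rules and the standards process are working out.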

As the Consumer Policy Research Centre notes in its Consumer Data and Digital Economy report (Nguyen & Solomon, 2018), how the framework strikes a balance will be crucial: “For consumers to benefit, policy settings need to drive innovation, enhance competition, protect human rights and the right to privacy and, ultimately, enable genuine consumer choice” (CPRC, 2018). So far, however, the framework, draft rules, and policy process have been heavily criticised by the CPRC, other consumer advocacy, privacy, and digital rights groups, industry participants, and parliamentarians (Eyers, 2019).

Conclusion

Citizen uses of and attitudes to privacy and data are at the heart of the contemporary internet and emerging technologies. Much more work needs to be done to fill out this picture internationally. In particular, it will be important to ensure that the full range of citizens and societies is represented in research and theory. It is also key that such work is translated into the kinds of insights and evidence that can shape, and be woven into, the often messy processes of policy and law making, discourses, and institutional arrangements. We would hope to see serious efforts to engage with citizens regarding their understandings, expectations, and experience of digital rights and developing technologies, with a view to informing strong, responsive, citizen-centred frameworks in law, policy, technology design, and product and service offerings.

Globally, there are legislative and regulatory efforts underway to respond to people’s concerns about developments in data collection and use, and to the feeling, documented in our research and the research of others, of an absence of effective control. European efforts such as Convention 108+ and the GDPR have been vital on the wider international scene, providing resources and norms that can help influence, guide, or, better still, structure government and corporate frameworks and behaviour.

This paper makes a case for the importance of local context. Australia is an interesting case for examining government responses to concerns about data collection and use, as a technologically advanced, Western developed nation without an effective human (or digital) rights framework. In Australia, it is notable that efforts to respond to concern have come not in the context of an overhaul of privacy laws or digital rights generally, but via efforts by market-oriented policy bodies (the ACCC and Productivity Commission) to make markets work better and meet the needs and expectations of consumers.

In the case of the Digital Platforms Inquiry, the proposed reforms to frameworks on data, algorithms, and privacy rights are internationally leading and would betoken a major step forward for citizens’ digital rights. Yet in play is a political and policy process in which citizen concerns and activism are allied with some actors (even, potentially, old media companies), while pitted against others (digital platform companies, including those such as Google which often argue for some elements of digital rights). Ultimately it will be up to the government concerned to take action, and then for the regulators and key industry interests to be prepared to lead necessary change, ensuring citizens have a fair and strong role in shaping co-regulatory frameworks and practices.

Like the premise of the Digital Platforms Inquiry, the Consumer Data Right initiative involves designing the architecture (legal, economic, and technical) to ensure the effective and fair operation of markets in consumer data. In both initiatives undertaken by the ACCC there is a common thread: they are aligned with consumer protection, rather than with citizen concerns and rights. Here, it could be suggested, the consent, labour, and legitimation of consumers is in tension, rather than in harmony, with the interests of citizens (Lunt & Livingstone, 2012). At the same time, individuals’ privacy rights as citizens seem to be missing from the debate, subsumed under an overwhelming security imperative that consistently frames individual privacy as a lower priority than broad law enforcement and national security goals.

Thus, Australia offers a fascinating and instructive instance where an internet policy experiment in compartmentalised data privacy rights is being attempted. Given the story so far, we would say that it is further evidence of the imperative for strong regulatory frameworks that capture and pin together transnational, regional, national, and sub-national levels and modes of regulation to address citizens’ mounting privacy and data concerns; at the same time, it offers yet more evidence that, at best, this remains, in Australia as elsewhere, a work in progress.

Acknowledgements

We are grateful to the three reviewers of this paper as well as the editors of the journal and special issue for their very helpful feedback on earlier versions of this paper.

References

Ananny, M., & Crawford, K. (2016). Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society, 20(3), 973–989. doi:10.1177/1461444816676645

Australian Competition and Consumer Commission. (2018a, September, 12). ACCC seeks views on consumer data rights rules framework [Media release MR179/18]. Retrieved from https://www.accc.gov.au/media-release/accc-seeks-views-on-consumer-data-right-rules-framework

Australian Competition and Consumer Commission. (2018b). Digital Platforms Inquiry: Preliminary report. Canberra: Australian Competition and Consumer Commission. Retrieved from https://www.accc.gov.au/focus-areas/inquiries/digital-platforms-inquiry/preliminary-report

Australian Government (2018, May 9). Consumer data right. Canberra: The Treasury. Retrieved from https://treasury.gov.au/consumer-data-right/

Australian Government (2018). Review into Open Banking: Giving consumers choice, convenience, and confidence. Canberra: The Treasury. Retrieved from https://static.treasury.gov.au/uploads/sites/1/2018/02/Review-into-Open-Banking-_For-web-1.pdf

Australian Human Rights Commission. (2018). Human rights and technology issues paper. Sydney: Australian Human Rights Commission. Retrieved from https://tech.humanrights.gov.au/sites/default/files/2018-07/Human%20Rights%20and%20Technology%20Issues%20Paper%20FINAL.pdf

Australian Privacy Foundation. (2018, August 15). Privacy in Australia: Brief to UN Special Rapporteur on Right to Privacy. Retrieved from https://privacy.org.au/wp-content/uploads/2018/08/Privacy-in-Australia-Brief.pdf

Blackburn, R. (1999). Towards a constitutional Bill of Rights for the United Kingdom: Commentary and documents. London and New York: Pinter.

Blouin-Genest, G., Doran, M.-C., & Paquerot, S. (Eds.). (2019). Human rights as battlefields: Changing practices and contestations. Cham, Switzerland: Palgrave Macmillan.

Cadwalladr, C., & Graham-Harrison, E. (2018, March 18). Revealed: 50 million Facebook profiles harvested for Cambridge Analytica in major data breach. The Guardian. Retrieved from https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election

Calzada, I. (2018). (Smart) citizen from data providers to decision-makers? The case study of Barcelona. Sustainability, 10(9). doi:10.3390/su10093252

Cannataci, J. A. (2018). Report of the Special Rapporteur on the right to privacy (Report No. A/73/45712). General Assembly of the United Nations. Retrieved from https://www.ohchr.org/Documents/Issues/Privacy/SR_Privacy/A_73_45712.docx

Center for the Digital Future. (2017). The 2017 Digital Future Report: Surveying the Digital Future. Year Fifteen. Los Angeles: Center for the Digital Future at USC Annenberg. Retrieved from https://www.digitalcenter.org/wp-content/uploads/2018/04/2017-Digital-Future-Report-2.pdf

Consumer Policy Research Centre (CPRC). (2018, July 17). Report: Consumer data & the digital economy [Media release]. Retrieved from http://cprc.org.au/2018/07/15/report-consumer-data-digital-economy/

Couldry, N., Rodriguez, C., Bolin, G., Cohen, J., Volkmer, I., Goggin, G.,…Lee, K. (2018). Media and communications. In International Panel on Social Progress (IPSP) (Ed.), Rethinking Society for the 21st Century: Report of the International Panel on Social Progress (Vol. 2, pp. 523–562). Cambridge: Cambridge University Press. doi:10.1017/9781108399647.006

Council of Europe (COE). (2018). Convention 108+: Convention for the protection of individuals with regard to the processing of personal data. Strasbourg: Council of Europe. Retrieved from https://rm.coe.int/convention-108-convention-for-the-protection-of-individuals-with-regar/16808b36f1

Crozier, R. (2019, March 5). AGL warns consumer data right being “rushed”. IT News. Retrieved from https://www.itnews.com.au/news/agl-warns-consumer-data-right-being-rushed-520097

Daly, A. (2016a). Digital rights in Australia’s Asian century: A good neighbour? in Digital Asia Hub (Ed.), The good life in Asia’s digital 21st century (pp. 128–136). Hong Kong: Digital Asia Hub. Retrieved from https://www.digitalasiahub.org/thegoodlife

Daly, A. (2019). Good data is (and as) peer production. Journal of Peer Production, 13. Retrieved from http://peerproduction.net/issues/issue-13-open/news-from-nowhere/good-data-is-and-as-peer-production/

Daly, A. (2018). The introduction of data breach notification legislation in Australia: A comparative view. Computer Law & Security Review, 34(3), 477–495. doi:10.1016/j.clsr.2018.01.005

Daly, A. (2016b). Private power, online information flows and EU law. Oxford: Hart Publishing.

Daly, A. (2017). Privacy in automation: An appraisal of the emerging Australian approach. Computer Law & Security Review, 33(6), 836–846. doi:10.1016/j.clsr.2017.05.009

Damro, C., Gstöhl, S., & Schunz, S. (Eds.). (2017). The European Union’s evolving external engagement: Towards new sectoral diplomacies? London: Routledge.

Data61. (2019, February 20). Consumer data standards: Phase 1: CX report. Retrieved from https://consumerdatastandards.org.au/wp-content/uploads/2019/02/Consumer-Data-Standards-Phase-1_-CX-Report.pdf

Department of Prime Minister and Cabinet. (2018, July 4). New Australian Government data sharing and release legislation: Issues paper for consultation. Retrieved from https://www.pmc.gov.au/resource-centre/public-data/issues-paper-data-sharing-release-legislation

Digital Rights Watch. (2018). State of digital rights. Sydney: Digital Rights Watch. Retrieved from https://digitalrightswatch.org.au/wp-content/uploads/2018/05/State-of-Digital-Rights-Media.pdf

Duke, J., & McDuling, J. (2019, March 4). Australian regulators prepare for Facebook, Google turf war. The Age. Retrieved from https://www.theage.com.au/business/companies/australian-regulators-prepare-for-facebook-google-turf-war-20190304-p511kg.html

Duke, J., & McDuling, J. (2018, December 10). Facebook, Google scramble to contain global fallout from ACCC plan. Sydney Morning Herald. Retrieved from https://www.smh.com.au/business/companies/competition-watchdog-suggests-new-ombudsman-to-handle-google-and-facebook-20181210-p50l80.html

Erdos, D. (2010). Delegating rights protections: The rise of Bills of Rights in the Westminster World. Oxford: Oxford University Press.

Erni, J. (2019). Law and cultural studies: A critical rearticulation of human rights. London and New York: Routledge.

Esayas, S. Y., & Daly, A. (2018). The proposed Australia consumer data right: A European comparison. European Competition and Regulatory Law Review, 2(3), 187–202. doi:10.21552/core/2018/3/6

Eubanks, V. (2017). Automating inequality: How high-tech tools profile, police, and punish the poor. New York: St. Martin’s Press

Eyers, J. (2019, February 18). Labor warns consumer data right could become second “My Health” debacle. Australian Financial Review. Retrieved from https://www.afr.com/business/banking-and-finance/labor-warns-consumer-data-right-could-become-second-my-health-debacle-20190218-h1be3u

Fuchs, C. (2012). Dallas Smythe today: The audience commodity, the digital labour debate, Marxist political economy and critical theory. Prolegomena to a digital labour theory of value. tripleC: Open Access Journal for a Global Sustainable Information Society, 10(2), 692–740. doi:10.31269/triplec.v10i2.443

Gearty, C. (2016). On fantasy island: Britain, Europe, and human rights. Oxford; New York: Oxford University Press.

Gibson, R. (1992). South of the West: Postcolonialism and the narrative construction of Australia. Bloomington, IN: Indiana University Press.

Goggin, G. (2008). Reorienting the mobile: Australasian imaginaries. The Information Society, 24(3), 171–181. doi:10.1080/01972240802020077

Goggin, G., Vromen, A., Weatherall, K., Martin, F., Webb, A., Sunman, L., & Bailo, F. (2017). Digital rights in Australia. Sydney: Department of Media and Communications. Retrieved from http://hdl.handle.net/2123/17587

Goggin, G., Ford, M., Webb, A., Martin, F., Vromen, A., & Weatherall, K. (2019). Digital rights in Asia: Rethinking regional and international agenda. In A. Athique & E. Baulch (Eds.), Digital transactions in Asia: Economic, informational, and social exchanges. London and New York: Routledge.

Google. (2019, October 19). Second submission to the ACCC Digital Platforms Inquiry. Retrieved from https://www.accc.gov.au/focus-areas/inquiries/digital-platforms-inquiry/submissions

Greenleaf, G. (2014). Asian data privacy laws: Trade and human rights perspectives. Oxford: Oxford University Press.

Greenleaf, G. (2012). The influence of European data privacy standards outside Europe: Implications for globalization of Convention 108. International Data Privacy Law, 2(2), 68–92. doi:10.1093/idpl/ips006

Greenleaf, G. (2018a, May 24). Global convergence of Data Privacy standards and laws: Speaking notes for the European Commission Events on the launch of the General Data Protection Regulation (GDPR), Brussels & New Delhi, May 25 (Research Paper No. 18–56). Sydney: University of New South Wales. doi:10.2139/ssrn.3184548

Greenleaf, G. (2018b, April 8). The UN should adopt Data Protection Convention 108 as a global treaty. Submission on ‘the right to privacy in the digital age’ to the UN High Commission for Human Rights, to the Human Rights Council, and to the Special Rapporteur on the Right to Privacy. Retrieved from https://www.ohchr.org/Documents/Issues/DigitalAge/ReportPrivacyinDigitalAge/GrahamGreenleafAMProfessorLawUNSWAustralia.pdf

Gregg, B. (2012). Human rights as social construction. Cambridge, UK: Cambridge University Press.

Gregg, B. (2016). The human rights state: Justice within and beyond sovereign nations. Philadelphia, PA: University of Pennsylvania Press.

Harris, P. (2018, July 4). Data, the European Union General Data Protection Regulation (GDPR) and Australia’s new consumer right. Speech to the International Institute of Communications (IIC) Telecommunication and Media Forum (TMF), Sydney. Retrieved from https://www.pc.gov.au/news-media/speeches/data-protection

Hintz, A., Dencik, L., & Wahl-Jorgensen, K. (2018). Digital citizenship in a datafied society. Cambridge: Polity Press.

Hunt, M. (2015). Parliaments and human rights: Redressing the democratic deficit. London: Bloomsbury.

Isaak, J., & Hanna, M. J. (2018). User data privacy: Facebook, Cambridge Analytica, and privacy protection. Computer, 51(8), 56–59. doi:10.1109/MC.2018.3191268

Isin, E. F., & Ruppert, E. S. (2015). Being digital citizens. Lanham, MA: Rowman & Littlefield.

Kang-Riou, N., Milner, J., & Nayak, S. (Eds.). (2012). Confronting the Human Rights Act: Contemporary themes and perspectives. London; New York: Routledge.

Karppinen, K. (2017). Human rights and the digital. In H. Tumber & S. Waisbord (Eds.), Routledge Companion to Media and Human Rights (pp. 95-103). London; New York: Routledge. doi:10.4324/9781315619835-9

Keating, P. (1996). Australia, Asia, and the new regionalism. Singapore: Institute of Southeast Asian Studies.

Kemp, K. (2018, September 27). Getting data right. Retrieved from https://www.centerforfinancialinclusion.org/getting-data-right

Kirby, M. (1999). Privacy protection, a new beginning: OECD principles 20 years on. Privacy Law & Policy Reporter, 6(3). Retrieved from http://www5.austlii.edu.au/au/journals/PrivLawPRpr/1999/41.html

Knaus, C. (2019, February 20). More than 2.5 million people have opted out of My Health Record. The Guardian. Retrieved from https://www.theguardian.com/australia-news/2019/feb/20/more-than-25-million-people-have-opted-out-of-my-health-record

Liberty (Ed.). (1999). Liberating cyberspace: Civil liberties, Human Rights, and the Internet. London: Pluto Press.

López, J. J. (2018). Human rights as political imaginary. Cham, Switzerland: Palgrave Macmillan. doi:10.1007/978-3-319-74274-8

Lunt, P., & Livingstone, S. (2012). Media regulation: Governance and the interests of citizens and consumers. London: Sage.

Mann, M., & Daly, A. (2018). (Big) data and the north-in-South: Australia’s informational imperialism and digital colonialism. Television & New Media, 20(4). doi:10.1177/1527476418806091

Mann, M., Daly, A., Wilson, M., & Suzor, N. (2018). The limits of (digital) constitutionalism: Exploring the privacy-security (im)balance in Australia. International Communication Gazette, 80(4), 369–384. doi:10.1177/1748048518757141

Mendelson, D. (2018). The European Union General Data Protection Regulation (EU 2016/679) and the Australian My Health Record Scheme: A comparative study of consent to data processing provisions. Journal of Law and Medicine, 26(1), 23–38.

Moyn, S. (2018). Not enough: Human rights in an unequal world. Cambridge, MA: Harvard University Press.

Murphy, K. (2018, July 31). My Health Record: Greg Hunt promises to redraft legislation after public outcry. The Guardian. Retrieved from https://www.theguardian.com/australia-news/2018/jul/31/my-health-record-greg-hunt-promises-to-redraft-legislation-after-public-outcry

Newman, L. H. (2018, December 7). Australia’s encryption-busting law could impact global privacy. Wired. Retrieved from https://www.wired.com/story/australia-encryption-law-global-impact/

Nguyen, P., & Solomon, L. (2018). Consumer data and the digital economy. Melbourne: Consumer Policy Research Centre. Retrieved from http://cprc.org.au/wp-content/uploads/Full_Data_Report_A4_FIN.pdf

Nimmer, R. T., & Krauthaus, P. A. (1992). Information as a commodity: New imperatives of commercial law. Law and Contemporary Problems, 55(3), 103–130. doi:10.2307/1191865

Office of the Australian Information Commissioner (OAIC). (2017). Australian community attitudes to privacy survey, 2017. Sydney: Office of the Australian Information Commissioner. Retrieved from https://www.oaic.gov.au/resources/engage-with-us/community-attitudes/acaps-2017/acaps-2017-report.pdf

Office of the Australian Information Commissioner (OAIC). (2019). History of the Privacy Act. Retrieved from https://www.oaic.gov.au/about-us/who-we-are/history-of-the-privacy-act

Office of the Australian Information Commissioner (OAIC). (2018, April 17). Submission on Issues Paper –– Digital Platforms Inquiry. Retrieved from https://www.oaic.gov.au/engage-with-us/submissions/digital-platforms-inquiry-submission-to-the-australian-competition-and-consumer-commission

OECD. (1980/2013). OECD guidelines on the protection of privacy and transborder flows of personal data. Paris: OECD. Retrieved from http://www.oecd.org/internet/ieconomy/oecdguidelinesontheprotectionofprivacyandtransborderflowsofpersonaldata.htm

Ofcom. (2018). Adults’ media use and attitudes report 2018. London: Ofcom. Retrieved from https://www.ofcom.org.uk/research-and-data/media-literacy-research/adults/adults-media-use-and-attitudes

Pew Research Center. (2016). Privacy and information sharing. Washington, DC: Pew Research Center. Retrieved from http://www.pewinternet.org/2016/01/14/privacy-and-information-sharing/

Poullet, Y. (2018). Is the general data protection regulation the solution? Computer Law & Security Review, 34(4), 773–778. doi:10.1016/j.clsr.2018.05.021

Productivity Commission. (2017). Data availability and use (Report No. 82). Canberra: Productivity Commission. Retrieved from https://www.pc.gov.au/inquiries/completed/data-access/report

Simons, M. (2018, December 11). The ACCC’s plan to reshape the media landscape. Inside Story. Retrieved from https://insidestory.org.au/the-acccs-plan-to-reshape-the-media-landscape/

Smee, B. (2018, September 18). My Health Record: Big pharma can apply to access data. The Guardian. Retrieved from https://www.theguardian.com/australia-news/2018/sep/18/my-health-record-big-pharma-can-apply-to-access-data

Solomon, B. (2018, August 23). Open letter to Michelle Bachelet, new High Commissioner for Human Rights. Access Now. Retrieved from https://www.accessnow.org/cms/assets/uploads/2018/09/Open-Letter-Bachelet.pdf

Stalla-Bourdillon, S., Pearce, H., & Tsakalakis, N. (2018). The GDPR: A game changer for electronic identification schemes. Computer Law & Security Review, 34(4), 784–805. doi:10.1016/j.clsr.2018.05.012

Stats, K. (2015). Antipodean antipathy: Australia’s relations with the European Union. In N. Witzleb, A. M. Arranz & P. Winand (Eds.), The European Union and Global Engagement: Institutions, Policies, and Challenges (pp. 279–304). Cheltenham, UK: Edward Elgar.

Suzor, N. P., Pappalardo, K. M., & McIntosh, N. (2017). The passage of Australia’s data retention regime: National security, human rights, and media scrutiny. Internet Policy Review, 6(1). doi:10.14763/2017.1.454

Swan, D., Vitorovich, L., & Samios, Z. (2019, March 5). Media companies back ACCC on need to patrol digital behemoths. The Australian. Retrieved from https://www.theaustralian.com.au/business/media/media-companies-back-accc-on-need-to-patrol-digital-behemoths-google-and-facebook/

Vestoso, M. (2018). The GDPR beyond privacy: Data-driven challenges for social scientists, legislators and policy-makers. Future Internet, 10(7). doi:10.3390/fi10070062

Voloshin, G. (2014). The European Union’s normative power in Central Asia: Promoting values and defending interests. Houndmills, UK: Palgrave Macmillan. doi:10.1057/9781137443946

von Dietze, A., & Allgrove, A.-M. (2014). Australian privacy reforms—an overhauled data protection regime for Australia. International Data Privacy Law, 4(4), 326–341. doi:10.1093/idpl/ipu016

Worthington, B., & Bogle, A. (2018, December 6). Labor backdown allows Federal government to pass encryption laws. ABC News. Retrieved from https://www.abc.net.au/news/2018-12-06/labor-backdown-federal-government-to-pass-greater-surveillance/10591944

Young, A. L. (2017). Democratic dialogue and the constitution. Oxford: Oxford University Press.

Will Serbia adjust its data protection framework to GDPR in practice?


After a process that took more than five years, Serbia finally received a new Law on Personal Data Protection [in Serbian], adopted by the National Assembly last November. The law closely follows the EU’s General Data Protection Regulation (GDPR), almost to the point of literal translation into Serbian. That was expected, given Serbia’s EU membership candidacy. However, it seems it will be very difficult to implement the new legislation in practice, and thereby actually make a difference, as numerous flaws were overlooked when the law was drafted and enacted.

There is not a high level of privacy culture in Serbia, and therefore the majority of people are not very sensitive about the way the state and the private sector collect and handle their personal data. The recent affair with new high-tech surveillance cameras in Serbia’s capital city Belgrade, which were supplied by Huawei and have facial and vehicle license plate recognition capabilities, shows that little thought is invested in how intrusive technologies might impact citizens’ privacy and everyday lives. The highest-ranking state officials for internal affairs, the Minister of Interior and the Director of Police, have announced in the media that these cameras are yet to be installed in Belgrade, while a use case study on Huawei’s official website claimed that the cameras were already operational. Soon after the SHARE Foundation, a non-profit organisation from Serbia dedicated to protecting and improving human rights in the digital environment, and of which I’m part, published an article with information found in Huawei’s “Safeguard Serbia” use case, the study miraculously disappeared from the company website, but an archived version of the page is still available.

Considering that the adaptation period provided in the law is only nine months after its coming into force (compared to two years under the GDPR), the general feeling is that both the public and the private sector will have many difficulties in adjusting their practices to the provisions of the new law.

In the past several years, we have witnessed many cases of personal data breaches and abuse, the largest one undoubtedly being the case of the now defunct Privatization Agency, when more than five million people, almost the entire adult population of Serbia, had their personal data, such as names and unique master citizen numbers, exposed on the internet. The agency was ultimately shut down by the government, and no one was held accountable as the legal proceedings were not completed in time (see PDF of the Commissioner’s report, 2017, p. 59).

Although the Serbian law contains key elements of the GDPR, such as principles relating to the processing of personal data and data subjects’ rights, its text is very complicated to understand and interpret, even for lawyers. One of the main reasons for this is that the law contains provisions related to matters within the scope of EU Directive 2016/680, the so-called “Police Directive”, which deals with the processing of personal data by competent authorities for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, and with the free movement of such data. The law also fails to cover video surveillance, a particularly important aspect of personal data processing. The Commissioner for Information of Public Importance and Personal Data Protection, Serbia’s Data Protection Authority, and civil society organisations have pointed out these and other flaws on several occasions (see, among others, the former Commissioner’s comments), but the Ministry of Justice ignored these comments.

In addition to filing a complaint with the Commissioner, citizens are also allowed under the law to seek court protection of their rights, creating a “parallel system” of protection which can lead to legal uncertainty and uneven practice in the protection of citizens’ rights. Regarding data subjects’ rights, the final text of the law includes an article with limitations to these rights, but it omits the requirement that such restrictions may only be imposed by law. In practice, this means that state institutions or private companies processing citizens’ personal data may arbitrarily restrict their rights as data subjects.

To make matters even more complicated, the National Assembly still hasn’t appointed the new Commissioner, the head of the key institution for personal data protection reform. The term of the previous Commissioner ended in December last year, and the public is still in the dark as to who will be appointed and when. There are also fears, including among civil society and experts on the topic, that the new Commissioner might not be up to the task in terms of expertise and political independence.

New and improved data protection legislation, adapted for a world of mass data collection and processing via artificial intelligence technologies, is a key component of a successful digital transformation of society, whereas in Serbia it is usually seen as something needed “for joining the EU”, another box to be ticked. A personal data protection framework which meets, in practice, the high standards set in the GDPR is of great importance for the digital economy, particularly for Serbia’s growing IT sector. If all entities processing personal data can demonstrate that they are indeed GDPR-compliant in their everyday practices, and not just “on paper”, there will be more opportunities for investment in Serbia’s digital economy and for Serbian companies to compete in the European digital market.

It will take a lot of effort to improve the standards of data protection in Serbia, especially with a data protection law which is largely flawed and which will be difficult to implement in practice. Therefore, it is of utmost importance that the National Assembly appoints a person with sufficient expertise and professional integrity as the new Commissioner, so that the process of preparing both the private and the public sector for the new regulations can be expedited. As the application of the new Law on Personal Data Protection starts in August, it should be regarded as just the beginning of a new relationship to citizens’ data, one that will require a lot of hard work. Otherwise, the law will remain just a piece of paper with no practical effect.

Data subjects as data controllers: a Fashion(able) concept?


Introduction

Recent case-law of the European Court of Justice has substantially widened the notion of “data controller” in unclear and potentially onerous ways for a range of actors involved in personal data processing. While this approach may be positive for data protection compliance generally (generating a ‘ripple effect’, in the words of the late Advocate General Bot), it also has worrying implications for data subjects who may be characterised as controllers, and for emergent decentralised and privacy-protective technologies; we hope the Court will address these issues in its forthcoming judgment in Case C‑40/17, Fashion ID.

The expanding notion of data controller

The European Union’s General Data Protection Regulation (‘GDPR’) recognises two main categories of actors: data subjects and data controllers. Pursuant to Article 4(1) GDPR, the data subject is the identified or identifiable natural person that personal data relates to. The data controller is the entity that ‘alone or jointly with others, determines the purposes and means of the processing of personal data’ (Article 4(7) GDPR). Whereas the Regulation implicitly assumes that data controllers and data subjects are different actors, recent technical and legal developments have made the dividing line between the two sets of actors less clear-cut. Increasingly, users may find themselves deemed to be acting as joint controllers with service providers, or even as sole controllers. This qualification matters, because it effectively places the principal, onerous duties in the data protection regime on the users themselves, which may be inappropriate for legal and technical reasons, as well as prejudicing the rights of (other) data subjects. After a period of turmoil in the case law, the European Court of Justice (ECJ) will helpfully have a chance to review this area of law again in the upcoming Fashion ID case, which has already generated a controversial Opinion by AG Bobek,1 with a final judgment expected before the Court’s 2019 summer break in mid-July.

When the foundations of EU data protection law were being laid, typically one entity controlled both the means (the ‘how’) and the purposes (the ‘why’) of processing (think of census data processed by public authorities, or payroll records kept in a company). Data crunching itself was often outsourced to a third party, but its subordinate role as data processor was usually clearly delineated in the contract made with the controller company. Nowadays, however, data ecosystems are often much more complex, with the consequence that there is no unitary control over the means and the purposes of processing. Furthermore, systems are increasingly distributed in terms of infrastructure and organisation. Consider the example of cloud computing, where arguably providers determine the means but their clients determine the purposes of processing. Traditionally, cloud providers have been seen as mere data processors, but this no longer captures the diversity of business models, the ways in which providers can shape data controllers’ processing operations, and the intermingling of their own purposes with those of their clients. Similarly, where blockchain technology is used, oftentimes the purposes are determined by the users, but they have no influence over the means of processing, which are instead determined by the actor(s) that control the infrastructure (usually not the users). This led the French Data Protection Authority to argue, in its guidance on blockchains and the GDPR, that a data subject could indeed be a data controller in relation to personal data relating to themselves.

In a series of recent judgments, the ECJ has added to the confusion between data subjects and data controllers by adopting a very broad definition of the notion of controllership, striving to ensure the ‘effective and complete protection of data subjects’. In Wirtschaftsakademie Schleswig-Holstein, the Grand Chamber decided that the operators of a Facebook fan page were joint controllers together with Facebook, merely because they exerted influence over Facebook’s collection of data from visitors to that page.2 In the Jehovah’s Witnesses case, the court found that the Jehovah’s Witnesses community was a joint controller (together with the individuals doing door-to-door preaching) of collected data, as it organised, coordinated and encouraged such collection despite never gaining access to this data.3 As Advocate General Bobek aptly noted in his opinion in Fashion ID, taken to extremes this means that anyone in a “personal data chain” who makes data processing “possible” becomes a joint controller.4

Implications: limitations of ‘everyone is a controller’ approach

This tendency towards the widening of responsibility via joint controllership may be particularly perilous for consumers or domestic users seeking greater control over data through emerging privacy-protective architectures known as personal data stores (PDSs). Here, instead of data being held and processed in a centralised manner on the cloud, it is retained in a decentralised manner by data subjects themselves. Privacy-preserving computations can then be used to generate inferences from this data and thus provide users with services like price comparison or search without their data ever leaking to an external platform. In times of concern about the monetisation of user privacy and the rise of “surveillance capitalism”,5 such experiments are important. Using cryptographic systems built on tools like secure multi-party computation and homomorphic encryption, even centralised machine learning models can still be trained from this decentralised data.6
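To give a rough intuition for how such privacy-preserving computation can work, below is a minimal, self-contained Python sketch of additive secret sharing, one simple building block used in secure multi-party computation. It is a toy under simplifying assumptions (honest parties, a secure way to exchange shares), not the specific systems cited above: each data subject splits a private value into random shares, and the parties can jointly recover only the sum, never any individual input.

```python
import random

PRIME = 2 ** 61 - 1  # all arithmetic is done modulo a large prime

def make_shares(value, n_parties):
    """Split a private value into additive shares that sum to it mod PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def aggregate(per_subject_shares):
    """Each party sums the shares it received; combining those partial sums
    reveals only the total of all private inputs, not any individual value."""
    partial_sums = [sum(column) % PRIME for column in zip(*per_subject_shares)]
    return sum(partial_sums) % PRIME

if __name__ == "__main__":
    # e.g., three readings held in three separate personal data stores
    private_inputs = [23, 5, 42]
    shared = [make_shares(v, n_parties=3) for v in private_inputs]
    print(aggregate(shared))  # prints 70: the aggregate, with no single input exposed
```

The legal question raised in the text is untouched by such machinery: even where no single actor ever sees the raw data, the case law discussed above suggests that orchestrating or enabling the computation may still amount to (joint) controllership.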

This raises a number of key problems for data protection regimes. First, data subjects using PDSs – especially perhaps in “smart homes” – are likely to be seen as joint controllers, but may also find no succour in the so-called “household exemption”, which was designed to protect domestic users, such as those running club mailing lists, from the full rigours of controllership. Article 2 GDPR exempts from its scope data processing “by a natural person in the course of a purely personal or household activity”. This has been interpreted narrowly, as in Lindqvist,7 and two additional criteria of judicial origin, namely that data must not be shared with an indefinite number of people and that processing must not be ‘directed outwards from the private setting of the person processing the data’, mean that the household exemption is very unlikely to protect smart home users who seek external services or, perhaps unintentionally, process the data of visitors to their home.8 Secondly, data stores, similarly to distributed ledger technologies, may place data controllers in the contradictory position of orchestrating or coordinating processing without actually seeing the data themselves. The ECJ has held that this does not prevent them acting as joint controllers: both Wirtschaftsakademie and the overarching Jehovah’s Witnesses organisation did not have copies of the data but were nonetheless seen as controllers.

In these decentralised set-ups, how effective is data protection law? Where there are joint centralised controllers and data subject controllers, how does responsibility fall? Will this lead to more cases where central data controllers bind their own hands so as not to be able to exercise full data controller responsibilities, such as access or erasure?9 What about cases with no discernible central, orchestrating body at all, as on public and permissionless blockchains? One way forward might be to look more closely at the GDPR’s provisions around data protection by design (Article 25), ensuring that decentralised systems have safeguards baked in at their heart. However, it seems unlikely that even careful, concerted design could fully support a model where already over-burdened data subjects are expected to undertake controller obligations too.

Another approach, championed by Bobek in Fashion ID, would be to consider more carefully in law how joint controller responsibilities should be allocated, and for which stages of processing. Bobek’s own solution, however, which seeks to limit the spread of joint controllership by deeming two actors joint controllers only for the stages of processing where they determine common purposes of processing, may be hard to apply with any specificity and predictability. It may thus deprive data subjects of effective protection, in particular where they lack knowledge of the specific purposes of the third-party processing that they enable. Further, the responsibility and potential liability of an ‘enabling controller’ in the absence of knowledge or awareness of illegality in the activity of its joint controller(s) appears to be in striking tension with the safe harbour established by Article 14 of the e-Commerce Directive for content hosts.

Conclusion

Given the above, we submit that the apparent widening of data subjects’ responsibility for the processing of their own data is a worrying trend, one which may impede both the development and uptake of badly needed privacy protective technologies and the decentralised data ecosystems being promoted as innovative by many EU member states. To avert these consequences, it is hoped that the Court will address some of the uncertainties and shortcomings in the existing doctrine of joint controllership. For these reasons, the Fashion ID judgement is one to watch with anticipation.

Funding declaration

Michael Veale was supported by The Alan Turing Institute under the EPSRC grant EP/N510129/1. Lilian Edwards was supported by EPSRC grant EP/S035362/1.

Footnotes

1. Case C-40/17 Fashion ID (Opinion of Advocate General Bobek) ECLI:EU:C:2018:1039

2. Case C-210/16 Unabhängiges Landeszentrum für Datenschutz Schleswig-Holstein v Wirtschaftsakademie Schleswig-Holstein GmbH ECLI:EU:C:2018:388.

3. Case C-25/17 Tietosuojavaltuutettu ECLI:EU:C:2018:551.

4. Supra n 3 at para 74.

5. Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (Public Affairs, 2019).

6. Royal Society (2019) Protecting privacy in practice: The current use, development and limits of Privacy Enhancing Technologies in data analysis. https://royalsociety.org/topics-policy/projects/privacy-enhancing-technologies/ .

7. Case C-101/01 Lindqvist ECLI:EU:C:2003:596.

8. Case C-12/13 František Ryneš v Úřad pro ochranu osobních údajů EU:C:2014:2428, para 33.

9. Veale, M., Binns, R., & Ausloos, J. (2018). When data protection by design and data subject rights clash. International Data Privacy Law, 8(2), 105-123.

Making sense of data ethics. The powers behind the data ethics debate in European policymaking

Introduction

January 2018: The tweet hovered over my head: “Where are the ethicists?” I was on a panel in Brussels about data ethics, and this wasn’t the first time such a panel or initiative had been questioned. The proper foundation wasn’t there, the right expertise was not included - the ethicists were missing, the humanists were missing, the legal experts were missing. The results, outcomes and requirements of these initiatives were unclear. Would they water down the law? I understood the critiques, though. How could we talk about data ethics when a law had just been passed following a lengthy negotiation process on this very topic? What was the function of these discussions? If we were not there to acknowledge a consensus, that is, the legal solution, what then was the point?

In the slipstream of sweeping data protection law reform in Europe, discussions regarding data ethics have gained traction in European public policy-making. Numerous data ethics public policy initiatives have been created, moving beyond issues of mere compliance with data protection law to focus increasingly on the ethics of big data, especially concerning private companies’ and public institutions’ handling of personal data in digital forms. Reception in public discourse has been mixed. Although they have gained significant public attention and interest, these data ethics policy initiatives have also been depicted as governmental “toothless wonders” (e.g., Hill, 24 November 2017) and a waste of resources, and have been criticised for drawing attention away from public institutions’ mishandling of citizens’ data (e.g., Ingeniøren’s managing panel, op-ed, 16 March 2018) and for potential “ethics washing” (Wagner, 2018), with critics questioning the expertise and interests involved in the initiatives, as well as their normative ethics frameworks.

This article constitutes an analytical investigation of the various dimensions and actors that shape definitions of data ethics in European policy-making. Specifically, I explore the role and function of European data ethics policy initiatives and present an argument regarding how and why they took shape in the context of a European data protection regulatory reform. The explicit use of the term “ethics” calls for a philosophical framework; the term “data” for a contemporary perspective on the critical role of information in a digitalised society; and the policy context for consensus-making and problem solving. Together, these views on the role of the data ethics policy initiatives are highly pertinent. However, taken separately they each provide only a one-sided, kaleidoscopic insight into their role and function. For example, a moral philosophical view of data ethics initiatives (in public policy-making as well as in private industry) might not be attentive to the embedded interests and power relations; the pursuit of actionable policy results may overlook their function as spaces of negotiation and positioning; and viewing data ethics initiatives as something radically new in the age of big data can lose sight of their place in, and relation to, history and governance in general.

In my analysis, I therefore adopt an interdisciplinary approach that draws on methods and theories from different subfields within applied ethics, political science, sociology, culture and infrastructure/STS studies. A central thesis of this article is that we should perceive data ethics policy initiatives as open-ended spaces of negotiation embedded in complex socio-technical dynamics, which respond to multifaceted governance challenges extended over time. Thus, we should not view data ethics policy initiatives as solutions in their own right. They do not replace legal frameworks such as the European General Data Protection Regulation (GDPR). Rather, they complement existing law and may inspire, guide and even set in motion political, economic and educational processes that could foster an ethical “design” of the big data age, covering everything from the introduction of new laws, the implementation of policies and practices in organisations and companies, and the development of new engineering standards, to awareness campaigns among citizens and educational initiatives.

In the following, I first outline a cross-disciplinary conceptualisation of data ethics, presenting what I define as an analytical framework for a data ethics of power. I then describe the data ethics public policy focus in the context of the GDPR. I recognise that ethics discussions are implicit in legislative processes. Nevertheless, in this article I do not specifically focus on the regulation’s negotiation process as such, but rather on policymakers’ explicit use of the term “data ethics”, and especially on the emergence of formal data ethics policy initiatives (for instance, committees, working groups, stated objectives and results), many of which followed the adoption of the GDPR. I subsequently move on to an analysis of data ethics as described in public policy reports, statements, interviews and events in the period 2015–2018. In conclusion, I take a step back and review the definition of data ethics. Today, data ethics is an idea, concept and method that is used in policy-making, but which has no shared definition. While more aligned conceptualisations of data ethics might provide a guiding step towards a collective vision for action in law, business and society in general, an argument that runs through this article is that there is no definition of data ethics in this space that is neutral of values and politics. Therefore, we must position ourselves within a context-specific type of ethical action.

This article is informed by a study that I am conducting on data ethics in governance and technology development in the period 2017-2020. In that study and this article, I use an ethnographically informed approach based on active and embedded participation in various data protection/internet governance policy events, working groups and initiatives. Qualitative embedded research entails an immersion of the researcher in the field of study as an active and engaged member to achieve thorough knowledge and understanding (Bourdieu, 1997; Bourdieu & Wacquant, 1992; Goffman, 1974; Ingold, 2000; Wong, 2009). Thus, essential to my understanding of the underlying dimensions of the topic of this article is my active participation in the internet governance policy community. I was part of the Danish government’s data ethics expert committee (2018) and am part of the European Commission’s Artificial Intelligence High Level Expert Group (2018-2020). I am also the founder of the non-profit organisation DataEthics.eu, which is active in the field.

In this article, I specifically draw on ideas, concepts and opinions generated in interaction with nine active players (decision-makers, policy advisors and civil servants) who contributed to my understanding of the policy-making dynamics by sharing their experiences with data ethics in European 1 policy-making (see further in references). The interviewees were informed about the study and that they would not be represented by name and institution in any publications, as I wanted them to be minimally influenced by institutional interests and requirements in their accounts.2

Section 1: What is data ethics? A data ethics of power

In this section I introduce the emerging field of data ethics as the cross-disciplinary study of the distribution of societal powers in the socio-technical systems that form the fabric of the “Big Data Society”. Based on theories, practices and methods within applied ethics, legal studies and cultural studies, social and political sciences, as well as a movement within policy and business, I present an analytical framework for a “data ethics of power”.

As a point of departure, I define a data ethics of power as an action-oriented analytical framework concerned with making visible the power relations embedded in the “Big Data Society” and the conditions of their negotiation and distribution, in order to point to design, business, policy, social and cultural processes that support a human-centric distribution of power. In a previous book (Hasselbalch & Tranberg, 2016) we described data ethics as a social movement of change and action: “Across the globe, we’re seeing a data ethics paradigm shift take the shape of a social movement, a cultural shift and a technological and legal development that increasingly places the human at the centre” (p. 10). Thus, data ethics can be viewed as a proactive agenda concerned with shifting societal power relations and with the aim of balancing the powers embedded in the Big Data Society. This shift is evidenced in legal developments (such as the GDPR negotiation process) and in new citizen privacy concerns and practices, such as the rise in the use of ad blockers and privacy-enhancing services. In particular, new types of businesses are emerging that go beyond mere compliance with data protection legislation, incorporating data ethical values in their collection and processing of data, as well as in their general innovation practices, technology development, branding and business policies.

Here, I use the notion of “Big Data Society” to reflectively position data ethics in the context of a recent data (re)evolution of the “Information Society”, enabled by computer technologies and dictated by a transformation of all things (and people) into data formats (“datafication”) in order to “quantify the world” (Mayer-Schonberger & Cukier, 2013, p. 79), to organise society and to predict risks. I suggest that this is not an arbitrary evolution, but that it can also be viewed as an expression of negotiations between different ontological views on the status of the human being and the role of science and technology. As the realisation of a prevailing ideology of modernist scientific practices to command nature and living things, the critical infrastructures of the Big Data Society may therefore very well be described as modernity embodied in a “lived reality” (Edwards, 2002, p. 191) of control and order. From this viewpoint, a data ethics of power can be described as a type of post-modernist, or in essence vitalist, call for a specific kind of “ethical action” (Frohmann, 2007, p. 63) to free the living/human being from the constraints of the practices of control embedded in the technological infrastructures of modernity, which at the same time reduce the value of the human being. It is valuable here to understand current calls for data ethical action as an extension of the philosopher Henri Bergson’s vitalist arguments, at the turn of the last century, against a scientific rational intellect that provides no room for, or special status to, the living (1988, 1998). In a similar ethical framework, Gilles Deleuze, who was also greatly inspired by Bergson (Deleuze, 1988), later described over-coded “Societies of Control” (Deleuze, 1992), which reduce people (“dividuals”) to a code marking their access and locking their bodies in specific positions (p. 5). More recently, Spiekermann et al. (2017), in their anti-transhumanist manifesto, directly oppose a vision of the human as merely an information object, no different from other information objects (that is, non-human informational things), a vision which they describe as “an expression of the desire to control through calculation. Their approach is limited to reducing the world to data-based patterns suited for mechanical manipulation” (p. 2).

However, a data ethics of power should also be viewed as a direct response to the power dynamics embedded in and distributed via our very present and immediate experiences of a “Liquid Surveillance Society” (Lyon, 2010). Surveillance studies scholar David Lyon (2014, p. 10) envisions an “ethics of Big Data practices” to renegotiate what is increasingly exposed as an unequal distribution of power in the technological big data infrastructures. Within this framework we not only pay conventional attention to the state as the primary power actor (of surveillance), but also include new stakeholders that gain power through the accumulation of, and access to, big data. For example, in the analytical framework of a data ethics of power, changing power dynamics are increasingly addressed in light of the information asymmetry between individuals and the big data companies that collect and process data in digital networks (Pasquale, 2015; Powles, 2015–2018; Zuboff, 5 March 2016, 9 September 2014, 2019).

Beyond this fundamental theoretical framing, a data ethics of power can be explored in an interdisciplinary field addressing the distribution of power in the Big Data Society in diverse ways.

For instance, from a computer ethics perspective, power distributions are approached as ethical dilemmas or as implications of the very design and practical application of computer technologies. Indeed, technologies are never neutral; they embody moral values and norms (Flanagan, Howe, & Nissenbaum, 2008), and hence power relations can be identified by analysing how technologies are designed in ethical or ethically problematic ways. Information science scholars Batya Friedman and Helen Nissenbaum (1996) have illustrated different types of bias embedded in existing computer systems used for tasks such as flight reservations and the assignment of medical graduates to their first job, and have presented a framework for addressing such issues in the design of computer systems. From this perspective, we can also describe data ethics as what the philosophy and technology scholar Philip Brey terms a “Disclosive Computer Ethics”, identifying moral issues such as “privacy, democracy, distributive justice, and autonomy” (Brey, 2000, p. 12) in opaque information technologies. Phrased differently, a data ethics of power presupposes that technology has “politics”, or embedded “arrangements of power and authority” (Winner, 1980, p. 123). Case studies of specific data processing software and its use can be read as data ethics case studies of power, notably the “Machine Bias” study (Angwin et al., 2016), which exposed discrimination embedded in data processing software used in the United States criminal justice system, and Cathy O’Neil’s (2016) analysis of the social implications of the math behind big data decision-making in everything from getting insurance and credit to getting and holding a job.
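
As a concrete illustration of the kind of audit the “Machine Bias” study performed, the short sketch below compares false positive rates of a risk prediction across two groups. The data and numbers are entirely synthetic and the code is only meant to show the shape of such an analysis; it reproduces nothing from the cited study.

```python
# Illustrative only: auditing a binary risk prediction for group-level bias by
# comparing false positive rates, in the spirit of the "Machine Bias" analysis.
# All records below are synthetic; none of this comes from the cited study.
from collections import defaultdict

# Each record: (group, predicted_high_risk, actually_reoffended)
records = [
    ("group_a", True, False), ("group_a", True, True), ("group_a", False, False),
    ("group_a", True, False), ("group_b", False, False), ("group_b", True, True),
    ("group_b", False, False), ("group_b", False, True),
]

false_positives = defaultdict(int)  # flagged as high risk but did not reoffend
negatives = defaultdict(int)        # everyone who did not reoffend

for group, predicted_high_risk, reoffended in records:
    if not reoffended:
        negatives[group] += 1
        if predicted_high_risk:
            false_positives[group] += 1

for group in sorted(negatives):
    rate = false_positives[group] / negatives[group]
    print(f"{group}: false positive rate = {rate:.2f}")
```

A systematic gap in such error rates between groups is one of the signals the ProPublica journalists highlighted; a real audit would of course work with the actual predictions and a wider set of metrics.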

Nevertheless, data systems are increasingly ingrained in society in multiple forms (from apps to robotics) and have limitless and wide-ranging ethical implications (from price differentiation to social scoring), necessitating that we look beyond design and computer technology as such. Data ethics as a recent designation represents what the philosophers Luciano Floridi and Mariarosaria Taddeo (2016, p. 3) describe as a primarily semantic shift within a computer and information ethics philosophical tradition, from a concern with the ethical implications of the “hardware” to one with data and data science practices. However, looking beyond applied ethics in the field of philosophy to a data ethics of power, our theorisation of the Big Data Society is more than just semantic. The conceptualisation of a data ethics of power can also be explored in a legal framework, as an aspect of the rule of law and the protection of citizens’ rights in an evolving Big Data Society. Here, redefining the concept of privacy (Cohen, 2013; Solove, 2008) in a legal studies framework addresses the ethical implications of new data practices and configurations that challenge existing laws, and thereby the balancing of powers in a democratic society. As legal scholars Neil M. Richards and Jonathan King (2014) argue: “Existing privacy protections focused on managing personally identifying information are not enough when secondary uses of big data sets can reverse engineer past, present, and even future breaches of privacy, confidentiality, and identity” (p. 393). Importantly, these authors define big data “socially, rather than technically, in terms of the broader societal impact they will have” (Richards & King, 2014, p. 394), providing a more inclusive analysis of a “big data ethics” (p. 393) and thus pointing to the ethical implications of the empowerment of institutions that possess big data capabilities at the expense of “individual identity” (p. 395).

Looking to the policy, business and technology fields, the ethical implications of the power of data and data technologies are framed as an issue of a growing data asymmetry between big data institutions and citizens, built into the very design of data technologies. For example, the conceptual framework of the “Personal Data Store Movement” (Hasselbalch & Tranberg, 27 September 2016) is described by the non-profit association MyData Global Movement as one in which “[i]ndividuals are empowered actors, not passive targets, in the management of their personal lives both online and offline – they have the right and practical means to manage their data and privacy” (Poikola, Kuikkaniemi, & Honko, 2018). In this evolving business and technology field, the emphasis is on moving beyond mere legal data protection compliance, implementing values and ethical principles such as transparency, accountability and privacy by design (Hasselbalch & Tranberg, 2016), and ethical implications are mitigated by values-based approaches to the design of technology. Examples include engineering standards such as the IEEE P7000 series of ethics and AI standards,3 which seek to develop ethics-by-design standards and guiding principles for the development of artificial intelligence (AI). A values-based design approach is also revisited in recent policy documents, such as section 5.2, “Embedded values in technology – ethical-by-design”, of the European Parliament’s “Resolution on Artificial Intelligence and Robotics”, adopted in February 2019.

A key framework for data ethics is the human-centric approach that we increasingly see included within ethics guidelines and policy documents. For example, the European Parliament’s (2019, V.) resolution states that “whereas AI and robotics should be developed and deployed in a human-centred approach with the aim of supporting humans at work and at home…”. The EC High Level Expert Group on Artificial Intelligence’s draft ethics guidelines also stress how the human-centric approach to AI is one that “strives to ensure that human values are always the primary consideration” (working document, 18 December 2018, p. iv), and directly associate it with the balance of power in democratic societies: “political power is human centric and bounded. AI systems must not interfere with democratic processes” (p. 7). The human-centric approach in European policy-making is framed in a European fundamental rights framework (as for example extensively described in the European Commission’s AI High Level Expert Group’s draft ethics guidelines) and/or with an emphasis on the human being’s interests prevailing over “the sole interests of society or science” (article 2, “Oviedo Convention”). Practical examples of the human-centric approach can also be found in technology and business developments that aim to preserve the specific qualities of humans in the development of information processing technologies. Examples include the Human-in-the-Loop (HITL) approach to the design of AI, the International Organization for Standardization (ISO) standards on human-centred design (HCD), and the Personal Data Store Movement, which is defined as “A Nordic Model for human-centered personal data management and processing” (Poikola et al., 2018).

Section 2: European data ethics policy initiatives in context

Policy debates that specifically address ethics in the context of technological developments have been ongoing in Europe since the 1990s. The debate has increasingly sought to harmonise national laws and approaches in order to preserve a European value framework in the context of rapid technological progress. For instance, the Council of Europe’s “Oviedo Convention” was motivated by what Wachter (1997, p. 14) describes as “[t]he feeling that the traditional values of Europe were threatened by rapid and revolutionary developments in biology and medicine”. Data ethics per se gained momentum in pan-European politics in the final years of the negotiation of the GDPR, through the establishment of a number of initiatives directly referring to data and/or digital ethics. Thus, the European Data Protection Supervisor (EDPS) Digital Ethics Advisory Group (2018, p. 5) describes its work as being carried out against the background of “a growing interest in ethical issues, both in the public and in the private spheres and the imminent entry into force of the General Data Protection Regulation (GDPR) in May 2018”.

Examination of the differences in scope and in the stakeholders involved in, respectively, the development of the 1995 Data Protection Directive and the negotiation of the GDPR, which began with the European Commission’s proposal in 2012, provides some insight into the evolution of the focus of data ethics. The 1995 Directive was developed by a European working party of privacy experts and national data protection commissioners in a process that excluded business stakeholders (Heisenberg, 2005). By contrast, the group of actors influencing and participating in the GDPR process progressively expanded, with new stakeholders comprising consumer and civil liberties organisations as well as American industry representatives and policymakers. The GDPR was generally described as one of the most lobbied EU regulations (Warman, 8 February 2012). At the same time, the public increasingly scrutinised the ethical implications of a big data era, with numerous news stories published on data leaks and hacks, algorithmic discrimination and data-based voter manipulation.

Several specific provisions of the GDPR were discussed inside and outside the walls of the European institutions. For example, the “right to erasure” proposed in 2012 was heavily debated by industry and civil society organisations, especially in Europe and the USA, and was frequently described in the media as a value choice between privacy and freedom of expression. In 2013, the transfer of data to third countries (including those covered by the EU-US Safe Harbour agreement) engendered a wider public debate between certain EU parliamentarians and US politicians regarding mass surveillance and the role of large US technology companies. Another example was the discussion of an age limit of 16 for children’s consent to online services. This called civil society advocates into action (Carr, Should I laugh, cry or emigrate?, 13 December 2015) and led to new alliances with US technology companies regarding young people’s right to “educational and social opportunities” (Richardson, “European General Data Protection Regulation draft: the debate”, 10 December 2015). A last-minute decision made it possible for member states to lower the age limit to 13.

These intertwined debates and negotiations illustrate how the data protection field was transformed within a global information technology infrastructure. It took shape as a negotiation of competing interests and values between economic entities, EU institutions, civil society organisations, businesses and third country national interests. We can also see a causal link between these spaces of negotiation of rights, values and responsibilities, the creation of new alliances, and the emergence of data ethics policy initiatives in European policy-making. In the years following the first communication of the reform, data protection debates were extended, with the concept of data ethics increasingly included in meeting agendas, debates in public policy settings, and reports and guidelines. Following the adoption of the GDPR, the list of European member states or institutions with established data or digital ethics initiatives and objectives rapidly grew. Examples included the UK government’s announcement of a £9 million Centre for Data Ethics and Innovation with the stated aim to “advise government and regulators on the implications of new data-driven technologies, including AI” (Digital Charter, 2018). The Danish government appointed a data ethics expert committee 4 in March 2018 with a directly economic mandate: to create data ethics recommendations for Danish industry and to turn responsible data sharing into a competitive advantage for the country (Danish Business Authority, 12 March 2018). Several member states’ existing and newly established expert and advisory groups and committees began to include ethics objectives in their work. For example, the Italian government established an AI Task Force in April 2017, publishing its first white paper in 2018 (AI Task Force/Italy, 2018) with an explicit section on ethics. The European Commission’s communication on an AI strategy, published in April 2018, also included the establishment of an AI High Level Expert Group 5, whose responsibility it was, among other things, to publish ethics guidelines for AI in Europe the following year.

Section 3: Data ethics - policy vacuums

“I’m pretty convinced that the ethical dimension of data protection and privacy protection is going to become a lot more important in the years to come” (in ‘t Veld, 2017). These words of a European parliamentarian in a public debate in 2017 referred to the evolution of policy debates regarding data protection and privacy. You can discuss legal data protection provisions, she claimed, but then there is “a kind of narrow grey area where you have to make an ethical consideration and you say what is more important” (in ‘t Veld, 2017). What did she mean by her use of the term “ethics” in this context?

In an essay entitled “What is computer ethics?” (1985), the moral philosophy scholar James H. Moor described the branch of applied ethics that studies the ethical implications of computer technologies. Writing only a few years after Acorn, the first IBM personal computer, was introduced to the mass market, Moor was interested in computer technologies per se (what is special about computers), as well as in the policies required in specific situations where computers alter the state of affairs and create something new. But he also predicted a more general societal revolution (Moor, 1985, p. 268) due to the introduction of computers, one that would “leave us with policy and conceptual vacuums” (p. 272). Policy vacuums, he argued, would present core problems and challenges, revealing “conceptual muddles” (p. 266), uncertainties and the emergence of new values and alternative policies (p. 267).

If we view data ethics policy initiatives according to Moor’s framework, they can be described as moments of sense-making and negotiation created in response to the policy vacuums that arise when situations and settings are amended by computerised systems. In an interview conducted at the Internet Governance Forum (IGF) in 2017, a Dutch parliamentarian described how, in 2013, policy-makers in her country rushed to tackle the transformations instigated by digital technologies that were going “very wrong” (Interview, IGF 2017). In response, she proposed the establishment of a national commission to consider the ethical challenges of the digital society: “it’s very hard to get the debate out of the trenches, you know, so that people stop saying, ‘well this is my position and this is my position’, but to just sit back and look at what is happening at the moment, which is going to be so huge, so incredible, we have no idea what is going to happen with our society and we need people to think about what to do about all of this, not in the sense you know, ‘I don’t want it’, but more in the sense, ‘are there boundaries?’ ‘Do we have to set limits to all of these possibilities that will occur in the coming years?’” Similarly, in another interview conducted at the same event, a representative of a European country involved in the information policy of the Committee of Ministers of the Council of Europe discussed how the results of the evolution of the Information Society included “violations”, “abuses” and recognition of the internet’s influence on the economy. Concluding, she stated: “We need to slow down a little bit and to think about where we are going”.

In reviewing descriptions of data ethics initiatives, we can note an implicit acknowledgement of the limits of data protection law in harnessing all of the ethical implications of a rapidly evolving information and data infrastructure. Data ethics thus becomes a means to make sense of emerging problems and challenges and to evaluate various policies and solutions. For example, a report from the EDPS from 2015 states: “In today’s digital environment, adherence to the law is not enough; we have to consider the ethical dimension of data processing” (p. 4). It continues by describing how different EU law principles (such as data minimisation and the concepts of sensitive personal data and consent) are challenged by big data business models and methods.

The policy vacuums described in such reports and statements highlight the uncertainties and questions that exist regarding the governance of a socio-technical information infrastructure that increasingly shapes not only personal, but also social, cultural and economic activities.

In the same year as Moor’s essay was published, communications scholar Joshua Meyrowitz’s No Sense of Place (1985) portrayed the emergence of “information systems” that modify our physical settings via new types of access to information, thereby restructuring our social relations by transforming situations. As Meyrowitz (1985, p. 37) argued, “[w]e need to look at the larger, more inclusive notion of “patterns of information””, illustrating how our information realities have real qualities that shape our social and physical realities. Accordingly, European policymakers emphasise the real qualities of information and data. They see digital data processes as meaningful components of social power dynamics. Information society policy-making thus becomes an issue of the distribution of resources and of social and economic power, as an EU Competition Commissioner stated at a DataEthics.eu event on data as power in Copenhagen in 2016: “I’m very glad to have the chance to talk with you about how we can deal with the power that data can give” (Vestager, 9 September 2016). Thus, data ethics policy debates have moved beyond the negotiation of a legal data protection framework, increasingly involving a general focus on information society policy-making, in which different sectional policy areas are intertwined. As the European Commissioner for Competition elaborated at the DataEthics.eu event: “So competition is important. It keeps the pressure on companies to give people what they want. And that includes security and privacy. But we can’t expect competition enforcement to solve all our privacy problems. Our first line of defence will always be rules that are designed specifically to guarantee our privacy”.

Section 4: Data ethics - culture and values

According to Moor, the policy vacuums that emerge when existing policies clash with technological evolution force us to “discover and make explicit what our value preferences are” (1985, p. 267). He proposes that the computer-induced societal revolution will occur in two stages, marked by the questions that we ask. In the first “Introduction Stage”, we ask functional questions: how well does a given technology serve its purpose? In the second “Permeation Stage”, when institutions and activities are transformed, Moor argues that we will begin to ask questions regarding the nature and value of things (p. 271). Such second-stage questions are echoed in the European policy debate of 2017, as one Member of the European Parliament (MEP) who was heavily involved in the GDPR negotiation process argued in a public debate: “[this is] not any more a technical issue, it’s a real life long important learning experience” (Albrecht, 2017), or as another MEP claimed in the same debate: “The GDPR is not only a legislative piece, it’s like a textbook, which is teaching us how to understand ourselves in this data world and how to understand what are the responsibilities of others and what are the rules which is governing in this world” (Lauristin, 2017).

Consequently, the technicalities of new data protection legislation are transformed into a general discussion about the norms and values of a big data age. Philip Brey describes values as “idealized qualities or conditions in the world that people find good”, ideals that we can work towards realising (2010, p. 46). However, values are not just personal ideals; they are also culturally situated. The cultural theorist Raymond Williams (1958, p. 6) famously defined culture as a “shape”, a set of purposes and common meanings expressed “in institutions, and in arts and learning”, which emerge in a social space of “active debate and amendment under the pressures of experience, contact and discovery”. Culture is thus traditional as well as creative, consisting of prescribed dominant meanings and their negotiation (Williams, 1958). Similarly, the anthropologist James Clifford (1997) replaced the metaphor of “roots” (an image of the original, authentic and fixed cultural entity) with “routes”: intervals of negotiation and translation between the fixed cultural points of accepted meaning. Values are advanced in groups with shared interests and culture, but they exist in spaces of constant negotiation. In an interview conducted at the IGF 2017, one policy advisor to an MEP, asked about the role of values in the GDPR negotiations, described privacy as a value shared by a group of individuals involved in the reform process: “I think a group of core players shared that value (…) all the way from people who wrote the proposal at the Commission, to the Commissioner in charge to the rapporteur from the EU Parliament, they all (…) to some extent shared this value, and I think that they managed to create a compromise closer to their value than to others”. He also explained how discussions about values emerged in processes of negotiation between diverse and at times contradictory interests: “the moment you see a conflict of interest, that is when you start looking at the values (…) normally it would be a discussion about different values (…) an assessment of how much one value should go before another value (…) so some people might say that freedom of information might be a bigger value or the right to privacy might be a bigger value”.

Accordingly, ethics in practice, or what Brey refers to as “the act of valuing something, or finding it valuable (…) to find it good in some way” (2010, p. 46), is in essence never merely a subjective practice, but neither is it a purely objective construct. If we investigate the meaning of data ethics and ethical action in European data protection policy-making, we can see the points of negotiation. That is, if we look at what happens in the “intervals” between established value systems and the renegotiation of these in new contexts, we discover clashes of values and negotiation as well as the contours of cultural positioning.

Section 5: Data ethics - power and positioning

Philosophy and media studies scholar Charles Ess (2014) has illustrated how culture plays a central role in shaping our ethical thinking about digital technologies. For instance, he argues that people in Western societies place ethical emphasis on “the individual as the primary agent of ethical reflection and action, especially as reinforced by Western notions of individual rights” (p. 196). Such cultural positioning in a global landscape can also be identified in the European data ethics policy debate. An example is the way in which one participant in the 2017 MEP debate discussed above described the GDPR with reference to the direct lived experiences of specific European historical events: “It is all about human dignity and privacy. It is all about the conception of personality which is really embedded in our culture, the European culture (...) it came from the general declaration of human rights. But there is a very very tragic history behind war, fascism, communism and totalitarian societies and that is a lesson we have learned in order to understand why privacy is important” (Lauristin, 2017).

Values such as human dignity and privacy are formally recognised in frameworks of European fundamental rights and data protection law, and, conscious of their institutionalised roots in the European legal framework, European decision-makers reference them when asked about the values of “their” data ethics. Awareness of data ethics thus becomes a cultural endeavour, transferring European cultural values into technological development. As stated in an EDPS report from 2015: “The EU in particular now has a ‘critical window’ before mass adoption of these technologies to build the values into digital structures which will define our society” (p. 13).

When we explore European data ethics policy initiatives as spaces of value negotiation, a specific cultural arrangement emerges. In this context, policy and decision-makers position themselves against a perceived threat to a specifically European set of values and ethics, a threat that is pervasive, opaque and embedded in technology. In particular, a concern with a new opponent to state power emerges. In an interview conducted in 2018 at an institution in Europe, a project officer reflected on her previous work in a European country’s parliament and government, where concerns with the alternative form of power that the internet represents had surfaced. The internet is the place where discussions are held and decisions are made, she said, before recalling the policy debates concerning “GAFA” (the acronym for the four giant technology companies Google, Apple, Facebook and Amazon). Such a clash in values has been directly addressed by European policymakers in public speeches and debates, who increasingly name the technology industry stakeholders they deem responsible. The embedded values of technology innovation are a “wrecking ball”, aiming not simply to “play with the way society is organised but instead to demolish the existing order and build something new in its place”, argued the President of the European Parliament in a speech in 2016 (Schultz, 2016). Values and ethics are hence directly connected with a type of cultural power that is built into technological systems. As one Director for Fundamental Rights and Union Citizenship at the European Commission’s DG Justice claimed in a 2017 public debate: “the challenge of ethics is not in the first place with the individual, the data subject; the challenge is with the controllers, which have power, they have power over people, they have power over data, and what are their ethics? What are the ethics they instil in their staff? In house compliance ethics? Ethics of engineers?” (Nemitz, 2017).

Section 6: Data ethics - spaces of negotiation

When dealing with the development of technical systems, we are inclined towards points of closure and stabilisation (Bijker et al., 1987) that will guide the governance, control and risk mitigation of those systems. Relatedly, we can understand data ethics policy initiatives as end results with the objective “to formulate and support morally good solutions (e.g., right conducts or right values)” (Floridi & Taddeo, 2016, p. 1), emphasising algorithms (or technologies) that may not be “ethically neutral” (Mittelstadt et al., 2016, p. 4). That is to say, we can understand them as solutions to the ethical problems raised by the very design of technologies, the data processing activities of algorithms, or the collection and dissemination of data. However, I would like to address data ethics policy initiatives in their contexts of interest and value negotiation. For instance, where does morality begin and end in a socio-technical infrastructure that extends across jurisdictions and continents, cultural value systems and societal sectors?

The technical does indeed, in its very design, represent forms of order, as the political theorist Langdon Winner reminded us (1980, p. 123). That is, it is “political” and thus has ethical implications when it creates by design “wonderful breakthroughs by some social interests and crushing setbacks by others” (Winner, 1980, p. 125). To provide an example, the Facebook APIs that facilitated the mass collection of user data, before these were reused and processed by Cambridge Analytica, were specifically designed to track users and share data en masse with third parties, hence directly enabling the mass collection, storage and processing of data. However, these design issues of the technical are also “inextricably” “bound up” into an “organic whole” with economic, social, political and cultural problems (Callon, 1987, p. 84). An analysis of data ethics as it is evolving in the European policy sphere demonstrates the complexity of governance challenges arising from the infrastructure of the information age being “shaped by multiple agents with competing interests and capacities, engaged in an indefinite set of distributed interactions over extended periods of time” (Harvey et al., 2017, p. 26). Governance in this era is, as highlighted by internet governance scholars Jeanette Hofmann et al., a “heterogeneous process of ordering without a clear beginning or endpoint” (2016, p. 1412). It consists of actors engaged in “fields of struggle” (Pohle et al., 2016) of meaning-making and competing interpretations of policy issues that are “continuously produced, negotiated and reshaped by the interaction between the field and its actors” (p. 4). I propose that we also explore, as essential components of our data ethics endeavours, the complex dynamics of the ways in which powers are distributed and how interests are met in spaces of negotiation.

Evidently, we must also recognise data ethics policy initiatives as components of a general infrastructural development’s rhythm rather than as definitive ethical solutions and isolated events. We should understand them as the kind of negotiation posts that repeatedly occur throughout the course of a technological system’s development (Bijker et al., 1987), and as segments of a process of standardisation and consensus-building within a complex general technological evolution of our societies that “contain messy, complex, problem-solving components” (Hughes, 1987, p. 51). The technological systems of modernity are like the architecture of mundane buildings. They reside, as Edwards (2002, p. 185) claims, in a “naturalised background”, as ordinary as “trees, daylight, and dirt”. Silently they represent, constitute and are constituted by both our material and imagined modern societies and the distribution of power within them. They remain unnoticed until they fail (Edwards, 2002). But when they do fail, we see them in all their complexity. An example is the US intelligence officers’ PowerPoint presentations detailing the “PRISM program”, leaked by Edward Snowden in 2013 (The Guardian, 2013), which provide a detailed map of an information and data infrastructure characterised by intricate interconnections between a state organisation of mass surveillance, laws, jurisdictions and budgets, and the technical design of the world wide web and social media platforms. The technological infrastructures are indeed like communal buildings, with doors that we never give a second thought until the day we find one of them locked.

Conclusion

October 2018: “These are just tools!” one person exclaimed. We were at a working group meeting where an issue with using Google Docs for the practical work of the group was raised and discussed at length. While some were arguing for an official position on the use of the online service, mainly with reference to what they described as Google’s insufficient compliance with European data protection law, others saw the discussion as a waste of time. Why spend valuable work time on this issue?

What is data ethics? Currently, the reply is shrill, formally framed in countless statements, documents and mission statements from a multitude of sources, including governments, intergovernmental organisations, consultancy firms, companies, non-governmental organisations, independent experts and academics. But it also emerges when least expected, in “non-allocated” moments of discussion. Information technologies that permeate every aspect of our lives today, from micro work settings to macro economics and politics, are increasingly discussed as “ethical problems” (Introna, 2005, p. 76) that must be solved. Their pervasiveness sparks moments of ethical thinking, negotiated in terms of moral principles, values and ideal conditions (Brey, 2010). In allocated or unallocated spaces of negotiation, moments of pause and sense-making (Moor, 1985), we discuss the values (Flanagan et al., 2008) and politics (Winner, 1980) of the business practices, cultures and legal jurisdictions that shape them. These spaces of negotiation encompass very concrete discussions regarding specific information technology tools, but increasingly they also evolve into reflections concerning general challenges to established legal frameworks, individuals’ agency and human rights, as well as questions regarding the general evolution of society. As one Danish minister said at the launch of a national data ethics expert group: “This is about what society we want” (Holst, 11 March 2018).

In this article, I have explored data ethics in the context of a European data protection legal reform. In particular, I have sought to answer the question: “What is data ethics?” with the assumption that the answer will shape how we perceive the role and function of data ethics policy initiatives. Based on a review of policy documents, reports and press material, alongside analysis of the ways in which policymakers and civil servants make sense of data ethics, I propose that we recognise these initiatives as open-ended spaces of negotiation and cultural positioning.

This approach to ethics might be criticised as futile in the context of policy and action. However, I propose that understanding data ethics policy initiatives as spaces of negotiation does not prevent action. Rather, it forces us to make apparent our point of departure: the social and cultural values and interests that shape our ethical action. We can thus create the potential for a more transparent negotiation of ethical action in the “Big Data Era”, enabling us to acknowledge the macro-level data ethics spaces of negotiation that are currently emerging not only in Europe but globally.

This article’s analytical investigation of European data ethics policy initiatives as spaces of cultural value negotiations has revealed a set of actionable thematic areas. It has illustrated a clash of values and an emerging concern with the alternative forms of power and control embedded in our technological environment, which exert pressure on people and individuals in particular. Here, a data ethics of power that takes its point of departure in Gilles Deleuze’s description of computerised Societies of Control (1992) enables us to think about the ethical action that is necessary today. Ethical action could for example concern the empowerment of individuals to challenge the laws and norms of opaque algorithmic computer networks, as we have noted in debates on the right to explanation and the accountability and interpretability of algorithms. Ethical action may also strive towards ideals of freedom in order to break away from coding, to become indiscernible to “Weapons of Math Destruction” (O’Neil, 2016) that increasingly define, shape and limit us as individuals, as seen for instance in the digital self-defence movement (Heuer & Tranberg, 2013). Data ethics missions such as these are rooted in deeply personal experiences of living in coded networks, but they are also based on growing social and political movements and sentiments (Hasselbalch & Tranberg, 2016).

Much remains to be explored and developed regarding the power dynamics embedded in the evolving data ethics debate, not only in policy-making, but also in business, technology and public discourse in general. This article seeks to open up a more inclusive and holistic discussion of data ethics in order to advance investigation and understanding of the ways in which values are negotiated, rights and authority are distributed, and conflicts are resolved.

Acknowledgements

  • Clara
  • Francesco Lapenta for the many informed discussions regarding the sociology of data ethics.
  • Jens-Erik Mai for insightful comments on the drafts of this article.
  • The team at DataEthics.eu for inspiration.

References

Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016, May 23). Machine bias - There’s software used across the country to predict future criminals. And it’s biased against blacks. ProPublica. Retrieved from https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

Albrecht, J. P. (2017, January 26) MEP debate: The regulation is here! What now? [video file] Retrieved from https://www.youtube.com/watch?v=28EtlacwsdE

Bergson, H. (1988). Matter and Memory (N. M. Paul & W. S. Palmer, Trans.) New York: Zone Books.

Bergson, H. (1998). Creative Evolution (A. Mitchell, Trans.). Mineola, NY: Dover Publications.

Bijker, W. E., Hughes, T. P., & Pinch, T. (1987). General introduction. In W. E. Bijker, T. P. Hughes, & T. Pinch. (Eds.), The Social Construction of Technological Systems (pp. 1-7). Cambridge, MA: MIT Press.

Brey, P. (2000). Disclosive computer ethics. Computer and Society, 30(4), 10-16. doi:10.1145/572260.572264

Brey, P. (2010). Values in technology and disclosive ethics. In L. Floridi (Ed.), The Cambridge Handbook of Information and Computer Ethics (pp. 41-58). Cambridge: Cambridge University Press.

Bourdieu, P. (1997). Outline of a Theory of Practice. Cambridge: Cambridge University Press.

Bourdieu, P., & Wacquant, L. (1992). An Invitation to Reflexive Sociology. Cambridge: Polity Press.

Callon, M. (1987). Society in the making: the study of technology as a tool for sociological analysis. In Wiebe E. Bijker, Thomas P. Hughes, & Trevor Pinch (Eds.), The Social Construction of Technological Systems (pp. 83-103). Cambridge, MA: MIT Press.

Carr, J. (2015, December 13). Should I laugh, cry or emigrate? [Blog post]. Retrieved from Desiderata https://johnc1912.wordpress.com/2015/12/13/should-i-laugh-cry-or-emigrate/

Clifford, J. (1997). Routes: Travel and Translation in the Late Twentieth Century. Cambridge: Harvard University Press.

Cohen, J. E. (2013). What privacy is for. Harvard Law Review, 126(7). Retrieved from https://harvardlawreview.org/2013/05/what-privacy-is-for/

Danish Business Authority. (2018, March 12). The Danish government appoints new expert group on data ethics [Press release]. Retrieved from https://eng.em.dk/news/2018/marts/the-danish-government-appoints-new-expert-group-on-data-ethics

Deleuze, G. (1992). Postscript on the societies of control. October, 59, 3-7. Retrieved from http://www.jstor.org/stable/778828

Deleuze, G. (1988). Bergsonism (H. Tomlinson, Trans.). New York: Urzone Inc. (Original work published 1966)

Edwards, P. (2002). Infrastructure and modernity: scales of force, time, and social organization in the history of sociotechnical systems. In Misa, T. J., Brey, P., & A. Feenberg (Eds.), Modernity and Technology (pp. 185-225). Cambridge, MA: MIT Press.

Ess, C. M. (2014). Digital Media Ethics. Cambridge, UK: Polity Press

Flanagan, M., Howe, D. C., & Nissenbaum, H. (2008). Embodying values in technology – theory and practice. In J. van den Hoven, & J. Weckert (Eds.), Information Technology and Moral Philosophy (pp. 322-353). Cambridge, UK: Cambridge University Press.

Floridi, L., & Taddeo, M. (2016). What is data ethics?. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 374(2083). doi:10.1098/rsta.2016.0360

Friedman, B., & Nissenbaum, H. (1996). Bias in computer systems. ACM Transactions on Information Systems, 14(3), 330-347. doi:10.1145/230538.230561

Frohmann, B. (2007). Foucault, Deleuze, and the ethics of digital networks. In R. Capurro, J. Frühbauer, & T. Hausmanninger (Eds.), Localizing the Internet. Ethical Aspects in Intercultural Perspective (pp. 57-68). Munich: Fink.

Goffman, E. (1974). Frame Analysis: An Essay on the Organization of Experience. Boston, MA: Northeastern University Press

Harvey, P., Jensen, C. B., & Morita, A. (2017). Introduction: infrastructural complications. In P. Harvey, C. B. Jensen, & A. Morita (Eds.), Infrastructures and Social Complexity: A Companion (pp. 1-22). London: Routledge.

Hasselbalch, G., & Tranberg, P. (2016, December 1). The free space for data monopolies in Europe is shrinking [Blog post]. Retrieved from Opendemocracy.net https://www.opendemocracy.net/gry-hasselbalch-pernille-tranberg/free-space-for-data-monopolies-in-europe-is-shrinking

Hasselbalch, G., & Tranberg, P. (2016, September 27). Personal data stores want to give individuals power over their data [Blog post]. Retrieved from DataEthics.eu https://dataethics.eu/personal-data-stores-will-give-individual-power-their-data/

Hasselbalch, G., & Tranberg, P. (2016). Data Ethics. The New Competitive Advantage. Copenhagen: Publishare.

Heisenberg, D. (2005). Negotiating Privacy: The European Union, The United States and Personal Data Protection. Boulder, CA: Lynne Reinner Publishers.

Heuer, S., & Tranberg, P. (2013). Fake It! Your Guide to Digital Self-Defense. Copenhagen: Berlingske Media Forlag.

Hill, R. (2017, November 24). Another toothless wonder? Why the UK.gov’s data ethics centre needs clout. The Register. Retrieved from https://www.theregister.co.uk/2017/11/24/another_toothless_wonder_why_the_ukgovs_data_ethics_centre_needs_some_clout/

Hofmann, J., Katzenbach, C., & Gollatz, K. (2016). Between coordination and regulation: finding the governance in Internet governance. New Media & Society, 19(9), 1406-1423. doi:10.1177/1461444816639975

Holst, H. K. (2018, March 11). Regeringen vil lovgive om dataetik: det handler om, hvilket samfund vi ønsker [The government will legislate on data: it is about what we want to do in society]. Berlingske. Retrieved from https://www.berlingske.dk/politik/regeringen-vil-lovgive-om-dataetik-det-handler-om-hvilket-samfund-vi-oensker

Hughes, T. P. (1987). The evolution of large technological systems. In W. E. Bijker, T. P. Hughes, & T. Pinch (Eds.), The Social Construction of Technological Systems (pp. 51-82). Cambridge, MA: MIT Press.

Ingold, T. (2000) The Perception of the Environment: Essays in Livelihood, Dwelling and Skill. London: Routledge.

Introna, L. D. (2005). Disclosive ethics and information technology: disclosing facial recognition systems. Ethics and Information Technology, 7(2), 75-86. doi:10.1007/s10676-005-4583-2

Ingeniøren. (2018, March 16). Start nu med at overholde loven Brian Mikkelsen [Now start complying with the law, Brian Mikkelsen]. Version 2. Retrieved from https://www.version2.dk/artikel/leder-start-nu-med-at-overholde-loven-brian-mikkelsen-1084631

in ‘t Veld, S. (2017, January 26). European Privacy Platform [video file]. Retrieved from https://www.youtube.com/watch?v=8_5cdvGMM-U

Lauristin, M. (2017, January 26). MEP debate: The regulation is here! What now? [video file] Retrieved from: https://www.youtube.com/watch?v=28EtlacwsdE

Lyon, D. (2014). Surveillance, Snowden, and big data: capacities, consequences, critique. Big Data & Society, 1(2). doi:10.1177/2053951714541861

Lyon, D. (2010). Liquid surveillance: the contribution of Zygmunt Bauman to surveillance studies. International Political Sociology, 4(4), 325-338. doi:10.1111/j.1749-5687.2010.00109.x

Mayer-Schönberger, V., & Cukier, K. (2013). Big Data: A Revolution That Will Transform How We Live, Work and Think. London: John Murray.

Meyrowitz, J. (1985). No Sense of Place: The Impact of the Electronic Media on Social Behavior. Oxford: Oxford University Press.

Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: mapping the debate. Big Data & Society, 3(2), 1-21. doi:10.1177/2053951716679679

Moor, J. H. (1985). What is computer ethics? Metaphilosophy, 16(4), 266-275. doi:10.1111/j.1467-9973.1985.tb00173.x

Nemitz, P. (2017, January 26) European Privacy Platform [video file]. Retrieved from: https://www.youtube.com/watch?v=8_5cdvGMM-U

O’Neil, C. (2016). Weapons of Math Destruction. New York: Penguin Books.

Pasquale, F. (2015). The Black Box Society – The Secret Algorithms That Control Money and Information. Cambridge, MA: Harvard University Press

Poikola, A., Kuikkaniemi, K., & Honko, H. (2018). Mydata – A Nordic Model for human-centered personal data management and processing [White paper]. Helsinki: Open Knowledge Finland. Retrieved from https://www.lvm.fi/documents/20181/859937/MyData-nordic-model/2e9b4eb0-68d7-463b-9460-821493449a63?version=1.0

Pohle, J., Hösl, M., & Kniep, R. (2016). Analysing internet policy as a field of struggle. Internet Policy Review, 5(3). doi:10.14763/2016.3.412

Powles, J. (2015–2018). Julia Powles [Profile]. The Guardian. Retrieved from https://www.theguardian.com/profile/julia-powles

Richards, N. M., & King J. H. (2014). Big data ethics. Wake Forest Law Review, 49, 393- 432.

Richardson, J. (2015, December 10). European General Data Protection Regulation draft: the debate. Retrieved from Medium https://medium.com/@janicerichardson/european-general-data-protection-regulation-draft-the-debate-8360e9ef5c1

Schultz, M. (2016, March 3) Technological totalitarianism, politics and democracy [video file] Retrieved from: https://www.youtube.com/watch?v=We5DylG4szM

Solove, D. J. (2008). Understanding Privacy. Cambridge: Harvard University Press.

Spiekermann, S., Hampson, P., Ess, C. M., Hoff, J., Coeckelbergh, M., & Franckis, G. (2017). The Ghost of Transhumanism & the Sentience of Existence. Retrieved from The Privacy Surgeon http://privacysurgeon.org/blog/wp-content/uploads/2017/07/Human-manifesto_26_short-1.pdf

The Guardian. (2013, November 1). NSA Prism Programme Slides. The Guardian. Retrieved from https://www.theguardian.com/world/interactive/2013/nov/01/prism-slides-nsa-document

Vestager, M. (2016, September 9). Making Data Work for Us. Retrieved from European Commission https://ec.europa.eu/commission/commissioners/2014-2019/vestager/announcements/making-data-work-us_en Video available at https://vimeo.com/183481796

de Wachter, M. A. M. (1997). The European Convention on Bioethics. Hastings Center Report, 27(1), 13-23. Retrieved from https://onlinelibrary.wiley.com/doi/full/10.1002/j.1552-146X.1997.tb00015.x

Wagner, B. (2018). Ethics as an escape from regulation: from ethics-washing to ethics-shopping? In M. Hildebrandt (Ed.), Being Profiled: Cogitas Ergo Sum. Amsterdam: Amsterdam University Press. Retrieved from https://www.privacylab.at/wp-content/uploads/2018/07/Ben_Wagner_Ethics-as-an-Escape-from-Regulation_2018_BW9.pdf

Warman, M. (2012, February 8). EU Privacy regulations subject to ‘unprecedented lobbying’. The Telegraph. Retrieved from https://www.telegraph.co.uk/technology/news/9070019/EU-Privacy-regulations-subject-to-unprecedented-lobbying.html

Williams, R. (1993). Culture is ordinary. In A. Gray, & J. McGuigan (Eds.), Studying Culture: An Introductory Reader (pp. 5-14). London: Edward Arnold.

Winner, L. (1980). Do artifacts have politics? Daedalus, 109(1), 121-136. Retrieved from https://www.jstor.org/stable/20024652

Wong, S. (2009) Tales from the frontline: The experiences of early childhood practitioners working with an ‘embedded’ research team. Evaluation and Program Planning, 32(2), 99–108. doi:10.1016/j.evalprogplan.2008.10.003

Zuboff, S. (2016, March 5). The secrets of surveillance capitalism. Frankfurter Allgemeine. Retrieved from http://www.faz.net/aktuell/feuilleton/debatten/the-digital-debate/shoshana-zuboff-secrets-of-surveillance-capitalism-14103616.html

Zuboff, S. (2014, September 9). A digital declaration. Frankfurter Allgemeine. Retrieved from http://www.faz.net/aktuell/feuilleton/debatten/the-digital-debate/shoshan-zuboff-on-big-data-as-surveillance-capitalism-13152525.html

Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. London; New York: Profile Books; Public Affairs.

Policy documents and reports

AI Task Force & Agency for Digital Italy. (2018). Artificial Intelligence at the service of the citizen [White paper]. Retrieved from https://libro-bianco-ia.readthedocs.io/en/latest/

Council of Europe. (1997). Convention for the Protection of Human Rights and Dignity of the Human Being with Regard to the Application of Biology and Medicine: Convention on Human Rights and Biomedicine (The “Oviedo Convention”), Treaty No. 164. Retrieved from https://www.coe.int/en/web/conventions/full-list/-/conventions/treaty/164

Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the Protection of Individuals with Regard to the Processing of Personal Data and on the Free Movement of such Data. Retrieved from https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A31995L0046

EC High-Level Expert Group. (2018). Draft Ethics Guidelines for Trustworthy AI. Working document, 18 December 2018 (final document was not published when this article was written). Retrieved from https://ec.europa.eu/digital-single-market/en/news/draft-ethics-guidelines-trustworthy-ai

European Commission. (2012, January 25). Proposal for a Regulation of the European Parliament and of the Council on the Protection of Individuals with Regard to the Processing of Personal Data and on the Free Movement of such Data (General Data Protection Regulation). Retrieved from http://www.europarl.europa.eu/registre/docs_autres_institutions/commission_europeenne/com/2012/0011/COM_COM(2012)0011_EN.pdf

European Commission. (2018). Communication from the Commission to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions - Coordinated Plan on Artificial Intelligence (COM(2018) 795 final). Retrieved from https://ec.europa.eu/digital-single-market/en/news/coordinated-plan-artificial-intelligence

European Parliament. (2019, February 12). European Parliament Resolution of 12 February 2019 on a Comprehensive European Industrial Policy on Artificial Intelligence and Robotics (2018/2088(INI)). Retrieved from http://www.europarl.europa.eu/sides/getDoc.do?pubRef=-//EP//NONSGML+TA+P8-TA-2019-0081+0+DOC+PDF+V0//EN

European Union Regulation 2016/679 of the European Parliament and of the Council of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of such Data, and Repealing Directive 95/46/EC (General Data Protection Regulation). Retrieved from https://eur-lex.europa.eu/legal-content/EN/TXT/?qid=1528874672298&uri=CELEX%3A32016R0679

Gov.uk. (2018, January 25). Digital Strategy. Retrieved from https://www.gov.uk/government/publications/digital-charter/digital-charter

European Data Protection Supervisor (EDPS). (2015). Towards a New Digital Ethics: Data, Dignity and Technology. Retrieved from https://edps.europa.eu/sites/edp/files/publication/15-09-11_data_ethics_en.pdf

European Data Protection Supervisor (EDPS). Ethics Advisory Group. (2018). Towards a Digital Ethics. Retrieved from https://edps.europa.eu/sites/edp/files/publication/18-01-25_eag_report_en.pdf

Footnotes

1. By “European” I am not only focusing on the European Union (EU), but on a constantly negotiated cultural context, and thus for example I do not exclude organisations like the Council of Europe or instruments such as the European Convention on Human Rights.

2. Interviews informing the article (anonymous, all audio recorded, except from one based on written notes): four directly quoted in the article; two policy advisors; four European institution officers; one data protection commissioner; one representative of a European country to the Committee of Ministers of the Council of Europe; one European parliamentarian.

3. I am the vice chair of the IEEE P7006 standard on personal data AI agents.

4. I was one of the 12 appointed members of this committee (2018).

5. I was one of the 52 appointed members of this group (2018-2020).


Citizen or consumer? Contrasting Australia and Europe’s data protection policies


This paper is part of Transnational materialities, a special issue of Internet Policy Review guest-edited by José van Dijck and Bernhard Rieder.

Introduction

Governments are becoming increasingly concerned about widespread corporate data collection and the information asymmetries produced through these practices. People provide their personal data in exchange for various free services, from social media platforms to fitness apps (see Andrejevic, 2014), allowing companies to gather detailed information across their customer base. However, individuals have little to no knowledge about how their data is collected, used, stored, managed or handled. In addition to concerns around the collection of personal information, the growing importance of data as an economic good has also made legislators uneasy, with many citing ‘competition’ as a reason for regulation (Esayas & Daly, 2018). There is a fear that consumers could be ‘locked in’ to particular commercial arrangements if they are unable to transfer their valuable data to a competitor (Frieden, 2017; also see Esayas & Daly, 2018). In response, numerous jurisdictions have attempted to intervene in this state of affairs by engaging in legislative reform, with the European Union’s General Data Protection Regulation (GDPR) standing as the most prominent example.

The initial interest of this paper is to investigate how this reform moment has increased the visibility of data access and portability provisions, through a comparative study of recent reform agendas in the European Union (EU) and Australia. This analysis compares the Australian Consumer Data Right (CDR) reform process to the GDPR, which features a right to access data (art. 15, GDPR) and a right to data portability (art. 20, GDPR), and to data access and portability rights found in other European legislative instruments. The most prominent of these is the revised payment services directive (PSD 2), which allows individuals and third parties to access certain banking data. At the outset, it is important to note that all of these legislative instruments have different aims. The GDPR is a regulatory framework that covers the entire European Union. It grants new rights to individuals, represents a substantial strengthening of the Data Protection Directive (1995), which it replaces (in terms of scope and enforcement, for example), and also purports to regulate algorithms (or, in the regulation’s terms, “automated decision making”, art. 22). In contrast, PSD 2 requires banks to “provide access and […] communicate, to authorized third parties, customer and payment account information” (Omarini, 2018, p. 28), providing a framework for Open Banking in Europe. Similar directives in other sectors also empower data transfers in certain situations (see Esayas and Daly, 2018). The CDR is similar to these sector-specific directives but operates on a broader scale. It introduces a general framework that gives Australians the power to ask companies that hold data to transfer all or some of that data to a third party, which can be another company in the same sector or an adjacent business. It will be introduced on a sector-by-sector basis (see Explanatory Memorandum, 2018).

However, this paper also extends this initial analysis and argues that the CDR is indicative of a broader conceptual divergence that places Australia at odds with Europe (despite the fact that the CDR introduces some European ‘elements’ into Australian law). We pursue this argument by exploring the rhetoric around the CDR, showing how politicians and policymakers have presented the right as a solution to the problem of information asymmetry. We compare this stance to the introduction of the GDPR, which saw Europe transition from a market-oriented data protection framework to one that embraced fundamental rights and freedoms (Hijmans, 2010; 2016). We argue that these different reform moments have resulted in two separate conceptual approaches to data, with Europe increasingly focused on fundamental rights and citizenship and Australia focused on the consumer and the market. We go on to suggest that Australia needs to develop a broader conceptual foundation for its data policies and move beyond questions of economic value and efficiency to meaningfully engage with fundamental rights and embrace stronger enforcement regimes in line with existing European policy.

We also note that while existing research has already contrasted the policy proposal for the CDR with European law (Esayas and Daly, 2018), this paper analyses what is likely to be the final legislated version of the right. The Australian Coalition (centre-right) government has been a strong advocate for the CDR and supported policy development around the right throughout the 45th Parliament (2016–2019). Legislation was tabled in late 2018 and a subsequent Senate (upper house) Committee recommended that the bill be passed unamended. The bill did not pass parliament before it was dissolved in preparation for a May 2019 election. However, the Coalition returned to power and, as a result, while minor amendments may still be made, the substance of the legislation examined in this article (in the form of an exposure draft) is likely to be passed. The launch of the Consumer Data Right is likely to go ahead as planned on 1 February 2020. In addition to legislation, we have also consulted public documentation and commentary to analyse the scope and purpose of the CDR.

The paper proceeds as follows. We begin by briefly discussing the different legal philosophies that influence each jurisdiction’s approach to data protection. Then we introduce the CDR and compare its operation to European data access and transfer regimes. Following this, we critique the rhetoric around the right that either promotes an equivalence to the ‘European’ approach to data or holds up the Australian approach as superior. Finally, we compare the separate reform trajectories in both jurisdictions and suggest that the CDR is an example of Australia’s broader economic approach to data and the issue of information asymmetry, which stands in stark contrast to Europe’s growing commitment to fundamental rights as part of its overarching data protection framework.

Europe and Australia: data rights versus data bureaucracy

A central piece of legislation regulates privacy and data protection in each jurisdiction: the GDPR in Europe and the Privacy Act in Australia. There are some similarities between these legal frameworks to the extent that Australia’s ‘Privacy Act is based on a similar model to the EU Data Protection Directive’ (Esayas and Daly, 2018, p. 188). However, the two differ in how they approach privacy conceptually. The European Union treats data protection ‘as a fundamental right anchored in interests of dignity, personality, and self-determination’ (Schwartz and Peifer, 2017, p. 123). These rights emerge constitutionally from the Charter of Fundamental Rights (art. 8 CFR), through a specific article focused on data protection (see also Schwartz and Peifer, 2017).

In contrast, Australia does not have a constitutional foundation for data protection. Instead, it is bound up with a suite of broader protections around privacy. Protections are available at common law through the tort of breach of confidence, which is ‘centred on the management and protection of private information’ (Meese and Wilken, 2014, p. 320). If a confidence between parties is breached, people can make use of the tort to protect their privacy interests, which may include their data (Richardson, 2002). However, this tort is rarely used and the Privacy Act stands as the central legislative (and regulatory) instrument.

Indeed, its introduction in 1988 gave Australians additional protections, a new set of privacy standards for government bodies and a complaints mechanism. A newly appointed Australian Privacy Commissioner was made responsible for ensuring that government bodies complied with relevant legislation and administering complaints from individuals. In 2000, these standards and compliance requirements were extended to private and not-for-profit organisations that had an annual turnover of AUS$ 3 million or more (Australian Law Reform Commission, 2008). While government agencies had separate compliance requirements to private and not-for-profit organisations, all of the above bodies have had to adhere to a series of Australian Privacy Principles (APP) since 2014.1

Following the introduction of the Act in 1988, complaints were heard by the Privacy Commissioner (latterly called the Information and Privacy Commissioner). Today, if Australians have a complaint about data protection or privacy, they must first complain directly to the offending organisation. They can only turn to the Office of the Australian Information Commissioner (OAIC)2 if there has been no response or they feel the reply is unsatisfactory (Meese and Wilken, 2014). Once this has occurred, the Commissioner can ask parties to undertake a specific action, seek an injunction to limit particular forms of conduct or pursue a civil penalty (Office of the Australian Information Commissioner, 2018a). While suing for breach of confidence is still an option available to people, this avenue is rarely taken and it does not address all potential data protection or privacy harms an individual might face (Lindsay, 2005; Meese and Wilken, 2014). The prominence of this statutory body has caused Australian privacy law to operate within a certain bureaucratic context, standing in contrast to the European rights-based approach.

Another important conceptual distinction is that the jurisdictions have different approaches to defining data (or information). Since the enactment of the Data Protection Directive (1995), the European Union has had a continuing interest in regulating ‘personal data’. This is defined as ‘any information relating to an identified or identifiable natural person (‘data subject’)’ (art. 2 (a) DPD or art. 4.1 GDPR, our emphasis), with ‘an identifiable natural person’ defined as ‘one who can be identified, directly or indirectly’ (art. 2 (a) DPD or art. 4.1 GDPR). In contrast, Australian law has only focused on protecting ‘personal information’, which is ‘information or an opinion about an identified individual, or an individual who is reasonably identifiable’ (see Section 6 of the Act, our emphasis). Moreover, the definition of personal information is currently unsettled, following a legal case in which a journalist tried to get access to his metadata (Privacy Commissioner v Telstra Corporation Limited, 2017). Due to the nature of the appeal, the Full Federal Court found that personal information must be ‘about an individual’ but made no determination as to whether or not that included metadata. The critical issue here is that because information only needs to relate to an individual in Europe, a broader suite of data can fall under the auspices of regulation, such as data generated through an individual’s use of a service. Conversely, Australian law has only regulated data that is expressly about an individual, such as ‘a person’s name, address, contact details, signature, place of employment, work role’ (Productivity Commission, 2017, p. 56) and so on.

Recent reform trajectories in each jurisdiction have further entrenched some conceptual distinctions. The GDPR strengthened Europe’s commitment to a rights-based approach. A series of new rights was introduced, such as the right to erasure (art. 17, GDPR) and the right to data portability (art. 20, GDPR). Some existing rights such as the right to access data (art. 15, GDPR) and the definition of ‘personal data’ (art. 4, GDPR) were carried over from the Data Protection Directive. These rights apply to any organisation that processes data: from airlines and pizza shops to cloud services and social media platforms (Diker Vanberg and Ünver, 2017). The GDPR also expanded the territorial scope of the regulation (art. 3, GDPR) and introduced stronger penalties for not abiding by the regulations (art. 83.2, GDPR). Much of the popular press has framed the GDPR as a counterweight against large tech monopolies (Satariano, 2018) and the European Union has not shied away from this characterisation, with parliamentarians threatening severe action for data breaches or misuse (Powles, 2018). In contrast, Australian legislators have ignored calls to introduce stronger privacy rights, maintaining a strong aversion towards individual rights or any constitutional protection of privacy. Their only improvement has been to streamline the regulatory framework in 2014 (see Meese and Wilken, 2014).

Introducing the Consumer Data Right

However, over the last two years the Australian government has started to take a more reformist approach towards data, calling for major reforms rather than incremental improvements. The CDR is a salutary example of this change. The right aims to give Australians more control over their data and a greater capacity to intervene in the growing data economy. While Australians have had the right to access their data under the Privacy Act for some time, they were only able to access their ‘personal information’, which, as discussed above, only accounts for a small amount of the data that individuals produce every day (see Australian Privacy Principle 12). Under this existing right, while Australians can ask to receive their ‘personal information’ in a specific format, government bodies and companies can refuse the request (or ask to provide data in a different format) if the original request is not ‘reasonable and practicable’ to fulfil (Office of the Australian Information Commissioner, 2018b, 12.68). They can also charge for access in some cases. This means there is no standardised access process across Australia, making it arduous for people to effectively transfer data between providers.

The CDR proposes to change this. In addition to an individual’s existing right to access their own personal information, it gives people (and businesses) the right to access and transfer data that ‘relates’ to them: that is, their personal data as well as data relating to the products they use. While the type of data that people can request will change depending on the sector, it is broadly expected to consist of data generated through the normal use of services, such as transaction histories (banking) or usage data (in energy or telecommunications). They can also ask a company to transfer their data to an approved third party, which can be based in Australia or overseas.

These data access and data transfer mechanisms are also expected to be provided for free in many circumstances (see Explanatory Memorandum, 2018, 1.55). Another important dimension is that data will also be standardised within sectors. Rather than the format of data provision being negotiated between an individual and a company (as per the existing Privacy Act), a Data Standards Body will ‘prescribe the format of data, method of transmission and security requirements for data’ (Explanatory Memorandum, 2018, 1.270), which data holders and accredited data recipients have to abide by. The right will gradually roll out across specific sectors, beginning in banking (and launching Open Banking in Australia as a result), before moving to the energy and telecommunications sectors.
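To make the transfer mechanics described above more concrete, the short Python sketch below models a consumer-directed transfer under the CDR as we have summarised it: the consumer nominates the datasets and the recipient, the data holder checks that the recipient is accredited, and the data is released free of charge in a standardised structure. It is purely illustrative; the names (TransferRequest, ACCREDITED_RECIPIENTS, execute_transfer) are hypothetical and do not correspond to the actual Consumer Data Standards or to any real API.

    from dataclasses import dataclass
    from typing import Dict, List

    # Hypothetical register of accredited data recipients (in practice this is
    # maintained under the CDR accreditation scheme, not by each data holder).
    ACCREDITED_RECIPIENTS = {"energy-comparator-123", "budgeting-app-456"}


    @dataclass
    class TransferRequest:
        """A consumer-directed request to move CDR data to a third party."""
        consumer_id: str
        recipient_id: str
        datasets: List[str]  # e.g., ["transaction_history", "usage_data"]


    def execute_transfer(request: TransferRequest,
                         held_data: Dict[str, list]) -> Dict[str, list]:
        """Release the nominated datasets to an accredited recipient.

        The data holder may only release data to an accredited data recipient
        and must supply it in the sector-standardised structure (modelled here
        simply as a dictionary keyed by dataset name).
        """
        if request.recipient_id not in ACCREDITED_RECIPIENTS:
            raise PermissionError("recipient is not an accredited data recipient")
        # Only the datasets the consumer nominated are released, free of charge.
        return {name: held_data.get(name, []) for name in request.datasets}


    # Example: a consumer directs their bank to send transaction history to a
    # hypothetical budgeting app.
    bank_records = {"transaction_history": [{"date": "2019-07-01", "amount": -42.50}]}
    payload = execute_transfer(
        TransferRequest("consumer-001", "budgeting-app-456", ["transaction_history"]),
        bank_records,
    )
    print(payload)

The point of the sketch is simply that, under the CDR, the format of the released data and the accreditation check sit with a central standards and accreditation regime rather than being negotiated case by case between the consumer and the data holder.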

The CDR provides Australians with better data protection by enhancing existing privacy protections and providing meaningful redress for individuals. The CDR introduces thirteen ‘Privacy Safeguards’ (Explanatory Memorandum, 2018, 1.6), which are variously applied to entities that hold and receive data and which largely align with the existing Australian Privacy Principles. The fact that these safeguards apply to a broader range of data significantly enhances existing protections, at least with respect to access and portability. The safeguards are largely regulated as part of the existing OAIC enforcement regime (discussed earlier), with the ACCC enforcing non-privacy related compliance. However, the bill also introduces a direct right of action for ‘a person who suffers damage or loss […] as a result of a breach of the Privacy Safeguards or consumer data rules about the privacy or confidentiality of CDR data’ (Explanatory Memorandum, 2018, 1.461). This provision stands as a notable and uncommon embrace of individual rights by Australian legislators and, in addition to the broader definition of data, sees Australia taking more of a European approach to data protection.

However, the right also differs from European law in important ways. The CDR provides a comprehensive framework for the entire economy whereas the European Union has taken a gradual sector by sector approach to supporting data transfers (see Esayas and Daly, 2018). It is unclear whether this broad approach will actually work effectively across multiple sectors, and indeed some sectors (like telecommunications) are unconvinced (Communications Alliance, 2019). There is also a heavy presumption that consumers will actively use the right, which may not actually be the case. Research on data access in Europe conducted prior to the introduction of the GDPR has found that ‘certain organisations reported that they never received an access request, indicating that the right of access is rarely exercised by citizens’ (Mahieu et al., 2018; Ausloos and Dewitte, 2018). A similar situation is present in Australia. An Australian Community Attitudes to Privacy Survey (Office of the Australian Information Commissioner, 2017, p. 15) found that ‘just over a third (37%) of Australians are aware that they can request to access their personal information from government agencies and businesses which hold the information’. This lack of engagement with existing rights casts some doubt on the planned take up expected by policymakers and politicians.

The other interesting point of comparison is that the right also treats businesses as (rights bearing) consumers, which stands in stark contrast to the GDPR’s focus on individual rights. While the original goal of the reform was to empower consumers and small businesses, the final bill expands the scope of the right dramatically. The explanatory memorandum states that a consumer can be “an identifiable or reasonably identifiable person, including a business enterprise” (Explanatory Memorandum, 2018, 1.100). This is a controversial further expansion of a supposedly consumer-facing right (see Communications Alliance, 2019), which on its face grants significant data rights to major companies. The ease with which this expansion occurred highlights the continuing inability of Australian law to grant individual citizens substantive data protection rights and undermines the goal of the original policy, which was to mitigate information asymmetry.

Indeed, this tendency to ignore (or at least, conflate) the rights of businesses and individuals underlines our broader concerns with the CDR. As noted in our introduction, what is particularly interesting for the purposes of this comparative paper is the extent to which Australian policymakers and politicians either promote an equivalence to the ‘European’ approach to data or hold up the Australian approach as superior. The following section shows how the CDR has been sold as a world-leading reform that essentially solves the ‘data problem’ for Australians. This framing is based on an unsupported belief in the power of big data (see Tene and Polonetsky, 2012), a limited understanding of the associated risks and an inaccurate framing of the Australian legislative environment (see Nissenbaum, 2017). We discuss these developments below by outlining this rhetoric and the broader political context surrounding the right.

Rights and rhetoric: a data access revolution?

While various government reviews throughout the 2010s suggested introducing a data right for consumers, a workable concept only emerged through an inquiry run by Australia’s peak economic advisory agency, the Productivity Commission. The Commission was tasked with investigating ‘the benefits and costs of options for improving availability and use of data’ (Productivity Commission, 2017, p. vi) and they undertook a study titled Data Availability and Use. The study approached the issue of data and information asymmetry from a largely economic perspective. Following this process, they presented two reform options: the consumer data right and a structure for sharing and releasing public and private data.

The Australian government welcomed the proposals and has committed to introducing both reforms. This is a problem, as the CDR will only be introduced alongside the more controversial Data Sharing and Release bill. The latter bill seeks to allow government to compile data sets from public sector data and share this anonymised data with industry and researchers. While a detailed critique of the proposed sharing and release model and associated legislation is beyond the scope of this paper (criticisms are readily available, see Williams, 2018), it is enough to note that it aims to ‘streamline the process for sharing public sector data’ with the goal of providing more efficient government services, greater government transparency and better research data. However, the government wants to remove ‘500 existing data secrecy and confidentiality provisions across more than 175 different pieces of Australian Government legislation’ (Department of the Prime Minister and Cabinet, 2018a, p. 10) to make this possible. The bill removes substantive protections for the benefit of researchers and government, offering only the promise of vague positive outcomes in the future, and empowers the government to ‘authorise data sharing and release’ for broad purposes like ‘supporting the efficient delivery of government services or government operations’ (Department of the Prime Minister and Cabinet, 2018a, p. 6). As it stands, while Australians would get improved data access and transfer rights, this would be in exchange for allowing the sharing of public data between public and private organisations under a liberal risk assessment model.

The fact that the CDR is linked to a controversial open data framework that plans to remove a range of protections for public data has not influenced the rhetoric around the CDR. The Productivity Commission and the Australian government have both pushed positive messages that present the reform as part of a world-leading data framework. Around the launch of the initial report, the Commission’s comments were incredibly optimistic, simply noting with reference to the CDR that the right ‘would provide greater insight and control for individuals over how data that is collected on them is used’ (Productivity Commission, 2017, p. 191). In comparison, the government’s initial response was relatively restrained. They restricted their comments to the field of competition policy, where the right was anticipated to carry the greatest impact, noting that the right could ‘drive greater competition between businesses to attract new customers and encourage new business models to unlock the value of consumer data’ (Department of the Prime Minister and Cabinet, 2018b, p. 6).

However, as the CDR moved from idea to implementation the policy was imbued with greater significance. In 2018, the Chair of the Productivity Commission, Peter Harris, wrote an article discussing the report and arguing that “the new consumer right will put Australia in the forefront of countries attempting to claw back community and individual control over their data” (Harris, 2018). Harris (2018) noted that the CDR was not the same as the GDPR and said that ‘the GDPR may expand people’s thinking’. However, he was bullish about his proposed reforms, arguing that the economic approach taken by the Commission (which we discuss later on) was more effective ‘as a first step in a better foundation for managing both the threat and the benefit [of data]’, when compared to the GDPR, which only held a ‘limited interest in this asset-driven focus of ours’ (Harris, 2018).

The Australian government has been more cautious in their public statements and have not directly compared the CDR to developments in Europe. However, they have made strong statements about the capacity of the right to reduce information asymmetry. Public documentation from the Treasury, which has carriage of this reform, states that the CDR will improve the ‘control, choice, convenience and confidence of consumers’ (Treasury, 2018, p. 2). A media release from the then Treasurer (and now Prime Minister) Scott Morrison was equally positive, stating that the right will ‘empower customers to use their data for their own benefit’ and ‘determine which data is shared, on what terms and with whom’ (Morrison, 2018). These statements present implicit (and inaccurate) promises that Australians will be able to have some control over their data within the broader environment of surveillance capitalism (Zuboff, 2019) that they are forced to contend with daily (as opposed to the ability to access a limited subset of data from specific providers). In the above cases, policymakers and politicians respectively promote the CDR as either superior to the GDPR or as a panacea for the ongoing concerns consumers hold about information asymmetry.

However, as multiple consumer advocacy groups have noted, the CDR is not a foundational reform like the GDPR, nor does it structurally intervene in data collection. In fact, both the Consumer Policy Research Centre (2018) and the Australian Communications and Consumer Action Network (2018) have argued for the introduction of a GDPR equivalent instead. The CPRC also noted that consumers may misunderstand the scope and purpose of the right:

Considering the establishment of the GDPR in the EU, consumers may be misled in the naming of the CDR, in that it will provide the same data rights and level of protection as the GDPR when it does not (Consumer Policy Research Centre, 2018, p. 5).

These responses from civil society groups (noted elsewhere, see Goggin et al., 2019) highlight both the obvious limitations of the right and the broader discursive power ascribed to the right by those introducing it.

Indeed, our argument rests on the premise that the introduction of the CDR and the Data Sharing and Release Bill represents an important decision in Australia’s general approach to data reform. As the consumer stakeholders above have signalled, this period of reform was a critical one, in which Australia could have chosen to import various features from the recent European reform process. As it stands, they have only introduced a data portability right and in doing so have implied that the reform solves more problems than it actually does. It is true that Australia has a questionable history when it comes to introducing strong privacy and data protection laws (Mann and Daly, 2018); however, the Productivity Commission (and subsequently, the Australian government) were charged with grappling with the real social and economic issues associated with information asymmetry (even as the government is facilitating elements of it, see Mann and Daly, 2018) and deciding on a suitable reform agenda.

Their choices in this regard are notable. Australia has embraced what Helen Nissenbaum (2017, p. 4) calls Big Data Exceptionalism, where policymakers simply accept large-scale data collection and focus their ‘regulatory effort’ on ‘data use rather than data collection’. Both the CDR and the Data Sharing and Release Act have been justified on a normative basis around ‘the potential of big data to deliver benefits to individuals and societies’ (Nissenbaum, 2017, p. 17). As both bills make clear, the Australian government has taken the position that data is a resource to drive economic activity and create efficiencies rather than a fundamentally political object that relates to the rights and obligations of both individuals and government. The ultimate outcome of this is that politicians and policymakers have presented this economic approach as potentially superior to the GDPR and as the foundational philosophical framework that will drive future reforms in this area.

There are potential reform options that could be seen as a counter to this general trend. The Australian Competition and Consumer Commission (ACCC) is currently holding an inquiry into digital platforms like Facebook and Google (see also Goggin et al., 2018). Their preliminary report has proposed to strengthen the definition of consent and introduce notification and erasure rights for consumers’ personal information (Australian Competition and Consumer Commission, 2018). The Australian Human Rights Commission (2018) is also reviewing how human rights and technology intersect and both inquiries will present their findings shortly. However, regardless of what final reform options are proposed, we suggest that the Australian government will continue to take a market-oriented approach to data and embrace Big Data Exceptionalism. As noted earlier, Australia has a long history of ignoring rights-based reform proposals in the area of privacy and data protection (Meese and Wilken, 2014). Moreover, even if minor changes are made (say by introducing a right to erasure), these will not form part of a uniform reform agenda based on foundational rights. Instead, they will feature as an isolated selection of rights amongst a broader data framework oriented towards the market.

The rights of a citizen or consumer agency?

The Australian government has established a clear position on data and information asymmetry through the above reform proposals. It is clear that they will approach data through a largely economic lens. In the following section, we examine the implications of this decision by comparing this philosophy with Europe’s data framework. Through this analysis, we argue that these competing approaches to data affect how the rights-bearing subject is configured in each jurisdiction.

As we have already outlined Australia’s approach in detail, we will begin by exploring its conceptualisation of the rights-bearing subject. As might be expected, Australia takes a neoliberal approach to citizenship that only grants individuals substantive rights as consumers, as seen through the CDR. While consumer advocacy groups petitioned for a broader spectrum of rights equivalent to the GDPR (Australian Communications and Consumer Action Network, 2018; Consumer Policy Research Centre, 2018), the Australian government has chosen to proceed with a data portability right that is oriented around the consumer and positioned in the context of the market. As the public documentation and commentary from politicians and policymakers cited above makes clear, while they are all drawing on the language of rights, these rights are only applicable to the marketplace and unable to be seriously used outside of that context. Indeed, it is notable that despite ongoing petitions over many years for an actionable right to privacy in Australia (see Australian Law Reform Commission, 2008; 2013), the first direct right of action is being introduced within the context of this wholly consumer-oriented framework. While this is a welcome development, it is a long way from individuals being granted fundamental rights that they can call upon irrespective of the context.

Indeed, the focus on the market over broader political concerns was evident in how the broader data framework was conceived. Both the CDR and the Data Sharing and Release Bill emerged out of a policy debate focused on economics and competition law. The Productivity Commission’s original report viewed the ubiquitous availability of data and broader infrastructures of personal data collection as a natural feature of contemporary life (Couldry and Yu, 2018) and simply aimed to better embed consumers within the existing market processes surrounding data. This economic orientation is particularly clear if we consider the fact that these reform options were also linked. This choice ultimately sets up an inequitable trade-off, with the entire reform agenda implying that protections over data held by government can be traded away for more agency in the market.

While various actors have aimed to give these reforms more import, their rhetorical efforts actually do harm by tying foundational questions around the collection, use, and spread of data to an economic base and distorting the broader political context around personal data. We suggest that such an approach aligns with ‘a neoliberal philosophy of government in which citizens are defined through their autonomous choices as consumers of goods, services, and information’ (Cohen, 2012, p. 145), with Australians being encouraged to exercise data rights only in relation to the market and to limit their engagement with rights to that sphere of activity.

This consumer-focused policy-making process can be usefully compared to the introduction of the GDPR, which has presented ‘the most radical challenge so far to datafication’ (Couldry and Yu, 2018, p. 4474). Nick Couldry and Jun Yu argue that grounding the regulation in a recognition of fundamental rights gives ‘the GDPR a different character, as a discourse, from those market-driven business discourses’ (Couldry and Yu, 2018, p. 4486) and (we would add) from market-oriented policy reforms like the CDR. What is perhaps more telling is that the European Union moved away from a market-oriented structure in the lead-up to the introduction of the GDPR.

Its predecessor, the Data Protection Directive, was introduced in 1995 with the goal of harmonising data protection and supporting a data market across the European Union (see Hijmans, 2010; 2016). The Directive sought to balance the rights of the market and the sovereign individual rather than subsuming the individual into a broader concept of ‘informational capitalism’ (Castells, 2010 [1996]; see also Cohen, 2012). This belief was “confirmed” in the 2010 Commission v Germany case heard by the European Court of Justice (see Hijmans, 2016, p. 56). On its face, the GDPR appears to support these historical goals. It is interested in maintaining economic markets and facilitating the ‘free flow of personal data within the Union and the transfer to third countries and international organisations’ (Recital 6, GDPR).

However, as Hielke Hijmans (2016, p. 57) points out, developments in European law have changed this balance, with the GDPR now granting more weight to data protection and fundamental rights than the ‘the free movement of data’. This stems from the Treaty of Lisbon in 2009, which radically changed how data protection was approached at law. The treaty gave ‘binding force to the Charter of the Fundamental Rights of the European Union’ (Hijmans, 2010, p. 220), which includes a ‘right to the protection of personal data’. It also included a specific article in the Treaty on the Functioning of the European Union (the TFEU) that:

not only contains an individual right of the data subject to the protection of his or her personal data, but it also obliges the European Parliament and Council to provide for data protection in all areas of European Union law (Hijmans, 2010, p. 220).

This new rebalancing of data protection has been confirmed in ‘recent case law’, which has ultimately ‘given a more authoritative foundation to data protection as a fundamental right, rather than as an off-shoot of the internal market’ (Hijmans, 2016, p. 57). Subsequently, while the GDPR makes reference to facilitating data flows across markets, these recent reforms have ensured that the regulation’s primary objective is to ensure “a high level of the protection of personal data” (Recital 6, GDPR). This emphasis on protection is further evidenced later on in the regulation, where it specifically notes that ‘human dignity’ (art. 88, GDPR) is a consideration alongside fundamental rights (see Floridi, 2016).

The European story of data protection (both in terms of protections and rights) is drastically different to the Australian one discussed earlier. The jurisdiction has moved from a balanced market-oriented framework to one where fundamental rights are of central importance. What is immediately of interest is that even when Europe viewed their data protection framework as a market-enhancing policy, fundamental rights were always part of that ‘balance’. This stands in stark contrast to the CDR. While there are privacy safeguards in place, the ultimate value of the reform is presumed to be generated through a consumer’s greater purchasing power and ability to better choose between commercial competitors. Conversely, the European data protection framework has always considered the needs of the citizen. Historically, this has been placed in balance with the needs of the market (and as a result of that, the consumer). However, following the Treaty of Lisbon and the introduction of the GDPR, the rights of the citizen have become paramount. As a result of this we see a stronger vision of the rights-bearing subject, with individuals being granted a set of rights associated with the political realm well beyond the confines of the market.

These differences also emerge at a practical and structural level around how legislation and regulations are conceptualised. In Australia, privacy is protected by a ‘patchwork of specific legislation’ (Greenleaf, 2010, p. 148). Regulation and enforcement are disjointed and confusing as a result (Meese and Wilken, 2014). Rather than solve this problem, the CDR adds to the confusion by introducing a different set of privacy standards and presenting a new cause of action. Whatever ‘European-style’ rights and protections are present, these only occur within the context of data access and transfer and, as a result, set up a multi-tiered system of privacy protection, which has the potential to confuse businesses and consumers alike. In such a context, it is difficult to articulate an appropriate vision of a citizen with respect to data protection, where clearly demarcated rights and responsibilities are evident. Australia has simply increased the complexity of its existing privacy laws and failed to provide a clear sense of what rights people have as citizens, beyond the auspices of the market.

Conversely, while there was some concern from business when the GDPR was enacted (Powles, 2018), the scope of the reform meant that it was able to provide a baseline orientation of the European Union in relation to widespread data collection processes. The GDPR positions the Union as an actor who can intervene significantly if there is evidence of data misuse (art. 83.2, GDPR) and provides European citizens with a clearer sense of their rights and obligations in the context of growing ‘datafication’ (see Couldry and Yu, 2018).

The obvious explanation for this separation is that successive Australian governments have shown little interest in introducing a constitutional or statutory Bill of Rights. This has meant that Australia’s ‘courts do not have a convenient platform in domestic law from which to develop privacy law [incorporating data protection] as an aspect of human rights’ (see Greenleaf, 2001, p. 262). However, as we argue above, whatever Australia’s legal history, the recent reform moment gave the country a chance to establish its position with respect to the ongoing problem of information asymmetry. Despite embracing some European tendencies, as a whole Australia has missed an opportunity to reshape the conversation around data protection. Instead, it has presented a limited data policy that locates the vast majority of substantive rights within the context of the market.

Conclusion: contrasting data futures

As we conclude this article, it is important to state that we still believe that the CDR is an interesting and innovative policy. We do not agree with the legislation as it currently stands, for the reasons outlined in the paper, but there is scope to amend it and establish a data transfer framework that is more comprehensive than the European sector-based approach. Indeed, this reform could be of interest to a jurisdiction like Europe, which already bases its approach to data protection and privacy on a human rights framework, offers a more comprehensive vision of the digital citizen and carries stronger enforcement powers. However, its introduction in Australia only complicates and confuses an already weak privacy framework.

More critically, it promises to limit the policy discussion around data. Our key concern is that the CDR has been presented as a reform that solves the ‘data problem’ in Australia, when it is only a data access and portability right. While the right purports to transform Australians’ relationship with data, it ultimately restricts this freedom to the marketplace. New and potentially useful legal tools like the new cause of action are similarly restricted, only becoming relevant when individuals engage with the CDR. As a result, it does not provide Australians with a set of foundational policies that can respond effectively to increasingly powerful data collection processes. The open question is whether a future government will reorient Australia’s data policy and present a reform agenda that is grounded in a recognition of human rights and offers a vision for the rights-bearing subject beyond the market.

References

Australian Law Reform Commission. (2008). For your information: Australian privacy law and practice [Report No. 108]. Canberra: Commonwealth of Australia.

Australian Law Reform Commission. (2013). Serious Invasions of Privacy in the Digital Era [Report No. 123]. Canberra: Commonwealth of Australia.

Andrejevic, M. (2014). Big data, big questions: The big data divide. International Journal of Communication, 8, 1673–1689. Retrieved from https://ijoc.org/index.php/ijoc/article/view/2161

Ausloos, J., & Dewitte, P. (2018). Shattering One-Way Mirrors – Data Subject Access Rights in Practice. International Data Privacy Law, 8(1), 4–28. doi:10.1093/idpl/ipy001

Australian Communications and Consumer Action Network. (2018). Submission to The Treasury (first round). Treasury Laws Amendment (Consumer Data Right) Bill 2018. Retrieved from https://treasury.gov.au/consultation/c2018-t316972

Australian Competition and Consumer Commission. (2018). Digital Platforms Inquiry: Preliminary report. Canberra: Australian Competition and Consumer Commission. Retrieved from https://www.accc.gov.au/focus-areas/inquiries/digital-platforms-inquiry/preliminary-report

Australian Human Rights Commission. (2018). Human rights and technology issues paper. Sydney: Australian Human Rights Commission. Retrieved from https://tech.humanrights.gov.au/sites/default/files/2018-07/Human%20Rights%20and%20Technology%20Issues%20Paper%20FINAL.pdf

Castells, M. (2010). The rise of the network society: The information age: Economy, society, and culture (2nd edition). Chichester; Malden, MA: Wiley-Blackwell.

Cohen, J. E. (2012). What privacy is for. Harvard Law Review, 126(7), 1904–1933. Retrieved from https://harvardlawreview.org/2013/05/what-privacy-is-for/

Commission v Germany, Case C-518/07, EU:C:2010:125.

Communications Alliance. (2019). Submission to the Senate Standing Committee on Economics. Treasury Laws Amendment (Consumer Data Right) Bill 2019 [Provisions]. Retrieved from https://www.aph.gov.au/Parliamentary_Business/Committees/Senate/Economics/TLABConsumerDataRight

Consumer Policy Research Centre. (2018). Submission to The Treasury (first round). Treasury Laws Amendment (Consumer Data Right) Bill 2018. Retrieved from https://treasury.gov.au/consultation/c2018-t316972

Couldry, N., & Yu, J. (2018). Deconstructing datafication’s brave new world. New Media & Society, 20(12), 4473–4491. doi:10.1177/1461444818775968

Department of the Prime Minister and Cabinet. (2018a). New Australian Government Data Sharing and Release Legislation: Issues paper for consultation. Canberra, Australia: Commonwealth of Australia.

Department of the Prime Minister and Cabinet. (2018b). The Australian Government’s response to the Productivity Commission Data Availability and Use Inquiry. Canberra, Australia: Commonwealth of Australia.

Diker Vanberg, A., & Ünver, M. B. (2017). The right to data portability in the GDPR and EU competition law: odd couple or dynamic duo? European Journal of Law and Technology, 8(1), Retrieved from http://ejlt.org/article/view/546/726

Explanatory Memorandum to the Treasury Laws amendment (Consumer Data Right) Bill 2018. Canberra: Commonwealth of Australia. Retrieved from https://www.aph.gov.au/Parliamentary_Business/Bills_Legislation/Bills_Search_Results/Result?bId=r6281

Esayas, S.Y. & Daly, A. (2018). The Proposed Australian Consumer Data Right: A European Comparison. European Competition and Regulatory Law Review, 2(3), 187–202. doi:10.21552/core/2018/3/6

Floridi, L. (2016). On human dignity as a foundation for the right to privacy. Philosophy & Technology, 29(4), 307–312. doi:10.1007/s13347-016-0220-8

Frieden, R. (2017). The Internet of Platforms and Two-Sided Markets: Legal and Regulatory Implications for Competition and Consumers. doi:10.2139/ssrn.3051766

Goggin, G., Vromen, A., Weatherall, K., Martin, F., & Sunman, L. (2019). Data and digital rights: recent Australian developments. Internet Policy Review, 8(1). doi:10.14763/2019.1.1390

Greenleaf, G. W. (2001). Tabula rasa: Ten reasons why Australian privacy law does not exist. University of New South Wales Law Journal, 24(1), 262–269.

Greenleaf, G. W. (2010). Privacy in Australia. In J. B. Rule & G. W. Greenleaf (Eds.), Global privacy protection: The first generation (pp. 141 – 173). Cheltenham: Edward Elgar Publishing. doi:10.4337/9781848445123.00009

Harris, P. (2018, July 5). Data, the GDPR and Australia’s new consumer right. The Mandarin. Retrieved from https://www.themandarin.com.au/95351-data-the-gdpr-and-australias-new-consumer-right/

Hijmans, H. (2010). Recent developments in data protection at European Union level. ERA Forum, 11(2), 219–231. doi:10.1007/s12027-010-0166-8

Hijmans, H. (2016). The European Union as a constitutional guardian of internet privacy and data protection (PhD Thesis, University of Amsterdam). Retrieved from https://hdl.handle.net/11245/1.511969

Lindsay, D. (2005). An exploration of the conceptual basis of privacy and the implications for the future of Australian privacy law. Melbourne University Law Review, 29(1), 131–178. Available at https://law.unimelb.edu.au/__data/assets/pdf_file/0006/1708017/29_1_4.pdf

Mahieu, R. L. P., Asghari, H., & van Eeten, M. (2018). Collectively exercising the right of access: individual effort, societal effect. Internet Policy Review, 7(3). doi:10.14763/2018.3.927

Mann, M., & Daly, A. (2018). (Big) Data and the North-in-South: Australia’s Informational Imperialism and Digital Colonialism. Television & New Media, 20(4). doi:10.1177/1527476418806091

Meese, J., & Wilken, R. (2014). Google Street View in Australia: Privacy implications and regulatory solutions. Media Arts Law Review, 19(4), 305-324.

Morrison, S. (2018). More power in the hands of consumers. Canberra: Australian Government, The Treasury. Retrieved from http://sjm.ministers.treasury.gov.au/media-release/087-2018/

Nissenbaum, H. (2017). Deregulating collection: must privacy give way to use regulation? Retrieved from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3092282

Office of the Australian Information Commissioner. (2017). Australian Community Attitudes to Privacy Survey. Canberra, Australia: Commonwealth of Australia. Retrieved from https://www.oaic.gov.au/resources/engage-with-us/community-attitudes/acaps-2017/acaps-2017-report.pdf

Office of the Australian Information Commissioner. (2018a). Guide to privacy regulatory action. Canberra, Australia: Commonwealth of Australia. Retrieved from https://www.oaic.gov.au/resources/about-us/our-regulatory-approach/guide-to-oaic-s-privacy-regulatory-action/oaic-regulatory-action-guide.pdf

Office of the Australian Information Commissioner. (2018b). Australian Privacy Principle Guidelines. Canberra, Australia: Commonwealth of Australia. Retrieved from https://www.oaic.gov.au/agencies-and-organisations/app-guidelines/

Omarini, A. (2018). Banks and Fintechs: How to Develop a Digital Open Banking Approach for the Bank’s Future. International Business Research, 11(9), 23–36. doi:10.5539/ibr.v11n9p23

Powles, J. (2018, May 25). The G.D.P.R., Europe’s New Privacy Law, and the Future of the Global Data Economy. The New Yorker. Retrieved from https://www.newyorker.com/tech/annals-of-technology/the-gdpr-europes-new-privacy-law-and-the-future-of-the-global-data-economy

Privacy Commissioner v Telstra Corporation Limited [2017] FCAFC 4

Productivity Commission. (2017). Data Availability and Use. Inquiry Report. Canberra: Commonwealth of Australia.

Richardson, M. (2002). Whither breach of confidence: A right of privacy for Australia. Melbourne University Law Review, 26(2), 381–395.

Satariano, A. (2018, May 24). G.D.P.R., a New Privacy Law, Makes Europe World’s Leading Tech Watchdog. The New York Times. Retrieved from https://www.nytimes.com/2018/05/24/technology/europe-gdpr-privacy.html

Schwartz, P. M., & Peifer, K. N. (2017). Transatlantic Data Privacy Law. The Georgetown Law Journal, 106(1), 115–179. Retrieved from https://georgetownlawjournal.org/articles/249/transatlantic-data-privacy-law

Tene, O., & Polonetsky, J. (2012). Big data for all: Privacy and user control in the age of analytics. Northwestern Journal of Technology and Intellectual Property, 11(5), 239–273. Retrieved from https://scholarlycommons.law.northwestern.edu/njtip/vol11/iss5/1

Treasury, Australian Government. (2018). Consumer Data Right. Canberra: Commonwealth of Australia. Retrieved from https://static.treasury.gov.au/uploads/sites/1/2018/05/t286983_consumer-data-right-booklet.pdf

Williams, R. (2018, August 7). The 'Data Sharing and Release Act' is coming for your data. Independent Australia. Retrieved from https://independentaustralia.net/life/life-display/-the-data-sharing-and-release-act-is-coming-for-your-data,11761

Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. London: Profile Books.

Footnotes

1. Some smaller businesses such as health service providers also have to comply with this legislative framework.

2. The OAIC is the current statutory body and is headed by the Australian Information and Privacy Commissioner.

Technology, autonomy, and manipulation

This paper is part of Transnational materialities, a special issue of Internet Policy Review guest-edited by José van Dijck and Bernhard Rieder.

Public concern is growing around an issue previously discussed predominantly amongst privacy and surveillance scholars—namely, the ability of data collectors to use information about individuals to manipulate them (e.g., Abramowitz, 2017; Doubek, 2017; Vayena, 2018). Knowing (or inferring) a person’s preferences, interests, and habits, their friends and acquaintances, education and employment, bodily health and financial standing, puts the knower in a position to exercise considerable influence over the known (Richards, 2013).1 It enables them to better understand what motivates their targets, what their weaknesses and vulnerabilities are, when they are most susceptible to influence, and how most effectively to frame pitches and appeals.2 Because information technology makes generating, collecting, analysing, and leveraging such data about us cheap and easy, and at a scarcely comprehensible scale, the worry is that such technologies render us deeply vulnerable to the whims of those who build, control, and deploy these systems.

Initially, for academics studying this problem, that meant the whims of advertisers, as these technologies were largely developed by firms like Google and Facebook, who identified advertising as a means of monetising the troves of personal information they collect about internet users (Zuboff, 2015). Accordingly, for some time, scholarly worries centred (rightly) on commercial advertising practices, and policy solutions focused on modernising privacy and consumer protection regulations to account for the new capabilities of data-driven advertising technologies (e.g., Calo, 2014; Nadler & McGuigan, 2018; Turow, 2012).3 As Ryan Calo put it, “the digitization of commerce dramatically alters the capacity of firms to influence consumers at a personal level. A specific set of emerging technologies and techniques will empower corporations to discover and exploit the limits of each individual consumer’s ability to pursue his or her own self-interest” (2014, p. 999).

More recently, however, the scope of these worries has expanded. After concerns were raised in 2016 and 2017 about the use of information technology to influence elections around the world, many began to reckon with the fact that the threat of targeted advertising is not limited to the commercial sphere.4 By harnessing ad targeting platforms, like those offered by Facebook, YouTube, and other social media services, political campaigns can exert meaningful influence over the decision-making and behaviour of voters (Vaidhyanathan, 2018; Yeung, 2017; Zuiderveen Borgesius et al., 2018). Global outrage over the Cambridge Analytica scandal—in which the data analytics firm was accused of profiling voters in the United States, United Kingdom, France, Germany, and elsewhere, and targeting them with advertisements designed to exploit their “inner demons”—brought such worries to the forefront of public consciousness (“Cambridge Analytica and Facebook: The Scandal so Far”, 2018; see also, Abramowitz, 2017; Doubek, 2017; Vayena, 2018).

Indeed, there is evidence that the pendulum is swinging well to the other side. Rather than condemning the particular harms wrought in particular contexts by strategies of online influence, scholars are beginning to turn their attention to the big picture. In their recent book Re-Engineering Humanity, Brett Frischmann and Evan Selinger describe a vast array of related phenomena, which they collectively term “techno-social engineering”—i.e., “processes where technologies and social forces align and impact how we think, perceive, and act” (2018, p. 4). Operating at a grand scale reminiscent of mid-20th century technology critique (like that of Lewis Mumford or Jacques Ellul), Frischmann and Selinger point to cases of technologies transforming the way we carry out and understand our lives—from “micro-level” to the “meso-level” and “macro-level”— capturing everything from fitness tracking to self-driving cars to viral media (2018, p. 270). Similarly, in her book The Age of Surveillance Capitalism (2019), Shoshana Zuboff raises the alarm about the use of information technology to effectuate what she calls “behavior modification”, arguing that it has become so pervasive, so central to the functioning of the modern information economy, that we have entered a new epoch in the history of political economy.

These efforts help to highlight the fact that there is something much deeper at stake here than unfair commerce. When information about us is used to influence our decision-making, it does more than diminish our interests—it threatens our autonomy.5 At the same time, there is value in limiting the scope of the analysis. The notions of “techno-social engineering” and “surveillance capitalism” are too big to wield surgically—the former is intended to reveal a basic truth about the nature of our human relationship with technology, and the latter identifies a broad set of economic imperatives currently structuring technology development and the technology industry.6 Complementing this work, our intervention aims smaller. For the last several years, public outcry has coalesced against a particular set of abuses effectuated through information technology—what many refer to as “online manipulation” (e.g., Abramowitz, 2017; Doubek, 2017; Vayena, 2018). In what follows, we theorise and vindicate this grievance.7

In the first section, we define manipulation, distinguishing it from neighbouring concepts like persuasion, coercion, deception, and nudging, and we explain why information technology is so well-suited to facilitating manipulation. In the second section, we describe the harms of online manipulation—the use of information technology to manipulate—focusing primarily on its threat to individual autonomy. Finally, we suggest directions for future policy efforts aimed at curbing online manipulation and strengthening autonomy in human-technology relations.

1. What is online manipulation?

The term “manipulation” is used, colloquially, to designate a wide variety of activities, so before jumping in it is worth narrowing the scope of our intervention further. In the broadest sense, manipulating something simply means steering or controlling it. We talk about doctors manipulating fine instruments during surgery and pilots manipulating cockpit controls during flight. “Manipulation” is also used to describe attempts at steering or controlling institutions and systems. For example, much has been written of late about allegations made (and evidence presented) that internet trolls under the authority of the Russian government attempted to manipulate the US media during the 2016 presidential election.8 Further, many suspect that the goal of those efforts was, in turn, to manipulate the election itself (by influencing voters). However, at the centre of this story, and at the centre of stories like it, is the worry that people are being manipulated, that individual decision-making is being steered or controlled, and that the capacity of individuals to make independent choices is therefore being compromised. It is manipulation in this sense—the attempt to influence individual decision-making and behaviour—that we focus on in what follows.

Philosophers and political theorists have long struggled to define manipulation. According to Robert Noggle, there are three main proposals (Noggle, 2018b). Some argue that manipulation is non-rational influence (Wood, 2014). On that account, manipulating someone means influencing them by circumventing their rational, deliberative decision-making faculties. A classic example of manipulation understood in this way is subliminal messaging, and depending on one’s conception of rationality we might also imagine certain kinds of emotional appeals, such as guilt trips, as fitting into this picture. The second approach defines manipulation as a form of pressure, as in cases of blackmail (Kligman & Culver, 1992, qtd. in Noggle, 2018b). Here the idea is that manipulation involves some amount of force—a cost is extracted for non-compliance—but not so much force as to rise to the level of coercion. Finally, a third proposal defines manipulation as trickery. Although a variety of subtly distinct accounts fall under this umbrella, the main idea is that manipulation, at bottom, means leading someone along, inducing them to behave as the manipulator wants, like Iago in Shakespeare’s Othello, by tempting them, insinuating, stoking jealousy, and so on.9

Each of these theories of manipulation has strengths and weaknesses, and our account shares certain features in common with all of them. It hews especially close to the trickery view, but operationalises the notion of trickery more concretely, thus offering more specific tools for diagnosing cases of manipulation. In our view, manipulation is hidden influence. Or more fully, manipulating someone means intentionally and covertly influencing their decision-making, by targeting and exploiting their decision-making vulnerabilities. Covertly influencing someone—imposing a hidden influence—means influencing them in a way they aren’t consciously aware of, and in a way they couldn’t easily become aware of were they to try and understand what was impacting their decision-making process.

Understanding manipulation as hidden influence helps to distinguish it from other forms of influence. In what follows, we distinguish it first from persuasion and coercion, and then from deception and nudging. Persuasion—in the sense of rational persuasion—means attempting to influence someone by offering reasons they can think about and evaluate.10 Coercion means influencing someone by constraining their options, such that their only rational course of action is the one the coercer intends (Wood, 2014). Persuasion and coercion carry very different, indeed nearly opposite, normative connotations: persuading someone to do something is almost always acceptable, while coercing them almost always isn’t. Yet persuasion and coercion are alike in that they are both forthright forms of influence. When someone is trying to persuade us or trying to coerce us we usually know it. Manipulation, by contrast, is hidden—we only learn that someone was trying to steer our decision-making after the fact, if we ever find out at all.

What makes manipulation distinctive, then, is the fact that when we learn we have been manipulated we feel played.11 Reflecting back on why we behaved the way we did, we realise that at the time of decision we didn’t understand our own motivations. We were like puppets, strung along by a puppet master. Manipulation thus disrupts our capacity for self-authorship—it presumes to decide for us how and why we ought to live. As we discuss in what follows, this gives rise to a specific set of harms. For now, what is important to see is the kind of influence at issue here. Unlike persuasion and coercion, which address their targets openly, manipulation is covert. When we are coerced we are usually rightly upset about it, but the object of our indignation is the set of constraints placed upon us. When we are manipulated, by contrast, we are not constrained. Rather, we are directed, outside our conscious awareness, to act for reasons we can’t recognise, and toward ends we may wish to avoid.

Given this picture, one can detect a hint of deception. On our view, deception is a special case of manipulation—one way to covertly influence someone is to plant false beliefs. If, for example, a manipulator wanted their partner to clean the house, they could lie and tell them that their mother was coming for a visit, thereby tricking them into doing what they wanted by prompting them to make a rational decision premised on false beliefs. But deception is not the only species of manipulation; there are other ways to exert hidden influence. First, manipulators need not focus on beliefs at all. Instead, they can covertly influence by subtly tempting, guilting, seducing, or otherwise playing upon desires and emotions. As long as the target of manipulation is not conscious of the manipulator’s strategy while they are deploying it, it is “hidden” in the relevant sense.

Some argue that even overt temptation, guilting, and so on are manipulative (these arguments are often made by proponents of the “non-rational influence” view of manipulation, described above), though they almost always concede that such strategies are more effective when concealed.12 We suspect that what is usually happening in such cases is a manipulator attempting to covertly tempt, guilt, etc., but failing to successfully hide their strategy. On our account, it is the attempted covertness that is central to manipulation, rather than the particular strategy, because once one learns that they are the target of another person’s influence that knowledge becomes a regular part of their decision-making process. We are all constantly subject to myriad influences; the reason we do not feel constantly manipulated is that we can usually reflect on, understand, and account for those influences in the process of reaching our own decisions about how to act (Raz, 1986, p. 204). The influences become part of how we explain to ourselves why we make the decisions we do. When the influence is hidden, however, that process is undermined. Thus, while we might naturally call a person who frequently engages in overt temptation or seduction manipulative—meaning, they frequently attempt to manipulate—strictly speaking we would only say that they have succeeded in manipulating when their target is unaware of their machinations.

Second, behavioural economists have catalogued a long list of “cognitive biases”—unreliable mental shortcuts we use in everyday decision-making—which can be leveraged by would-be manipulators to influence the trajectory of our decision-making by shaping our beliefs, without the need for outright deception.13 Manipulators can frame information in a way that disposes us to a certain interpretation of the facts; they can strategically “anchor” our frame of reference when evaluating the costs or benefits of some decision; they can indicate to us that others have decided a certain way, in order to cue our intrinsic disposition to social conformity (the so-called “bandwagon effect”); and so on. Indeed, though deception and playing on people’s desires and emotions have likely been the most common forms of manipulation in the past—which is to say, the most common strategies for covertly influencing people—as we explain in what follows, there is reason to believe that exploiting cognitive biases and vulnerabilities is the most alarming problem confronting us today.14

Talk of exploiting cognitive vulnerabilities inevitably gives rise to questions about nudging, so, finally, we briefly distinguish between nudging and manipulation. The idea of “nudging”, as is well known, comes from the work of Richard Thaler and Cass Sunstein, and points to any intentional alteration of another person’s decision-making context (their “choice architecture”) made in order to influence their decision-making outcome (Thaler & Sunstein, 2008, p. 6). For Thaler and Sunstein, the fact that we suffer from so many decision-making vulnerabilities, that our decision-making processes are inalterably and unavoidably susceptible to even the subtlest cues from the contexts in which they are situated, suggests that when we design other people’s choice-making environments—from the apps they use to find a restaurant to the menus they order from after they arrive—we can’t help but influence their decisions. As such, on their account, we might as well use that power for good, by steering people’s decisions in ways that benefit them individually and all of us collectively. For these reasons, Thaler and Sunstein recommend a variety of nudges, from setting defaults that encourage people to save for retirement to arranging options in a cafeteria in a way that encourages people to eat healthier foods.15

Given our definition of manipulation as intentionally hidden influence, and our suggestion that influences are frequently hidden precisely by leveraging decision-making vulnerabilities like the cognitive biases nudge advocates reference, the question naturally arises as to whether or not nudges are manipulative. Much has been written on this topic and no consensus has been reached (see, e.g., Bovens, 2009; Hausman & Welch, 2010; Noggle, 2018a; Nys & Engelen, 2017; Reach, 2016; Selinger & Whyte, 2011; Sunstein, 2016). In part, this likely has to do with the fact that a wide and disparate variety of changes to choice architectures are described as nudges. In our view, some are manipulative and some are not—the distinction hinging on whether or not the nudge is hidden, and whether it exploits vulnerabilities or attempts to rectify them. Many of the nudges Thaler and Sunstein, and others, recommend are not hidden and work to correct cognitive bias. For example, purely informational nudges, such as nutrition labels, do not seem to us to be manipulative. They encourage individuals to slow down, reflect on, and make more informed decisions. By contrast, Thaler and Sunstein’s famous cafeteria nudge—placing healthier foods at eye-level and less healthy foods below or above—seems plausibly manipulative, since it attempts to operate outside the individual’s conscious awareness, and to leverage a decision-making bias. Of course, just because it’s manipulative does not mean it isn’t justified. To say that a strategy is manipulative is to draw attention to the fact that it carries a harm, which we discuss in detail below. It is possible, however, that the harm is justified by some greater benefit it brings with it.
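
To make this distinction concrete, consider a minimal sketch of our own (all names and numbers below are invented for illustration). The first function is an overt, bias-correcting nudge in the spirit of a nutrition label; the second quietly reorders options to exploit salience, in the spirit of the cafeteria example, and so operates outside the chooser’s awareness.

    # Toy illustration (hypothetical): an overt informational nudge versus a
    # covert ordering nudge.
    MENU = [
        {"item": "fries",  "calories": 420},
        {"item": "salad",  "calories": 180},
        {"item": "burger", "calories": 650},
    ]

    def informational_nudge(menu):
        # Overt: disclose calorie counts so diners can deliberate for themselves.
        return [f"{m['item']} ({m['calories']} kcal)" for m in menu]

    def placement_nudge(menu):
        # Covert: quietly list low-calorie items first ("at eye level"),
        # exploiting the tendency to pick whatever is most salient.
        return [m["item"] for m in sorted(menu, key=lambda m: m["calories"])]

    print(informational_nudge(MENU))
    print(placement_nudge(MENU))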

Having defined manipulation as hidden or covert influence, and having distinguished manipulation from persuasion, coercion, deception, and nudging, it is possible to define “online manipulation” as the use of information technology to covertly influence another person’s decision-making, by targeting and exploiting decision-making vulnerabilities. Importantly, we have adopted the term “online manipulation” from public discourse and interpret the word “online” expansively, recognising that there is no longer any hard boundary between online and offline life (if there ever was). “Online manipulation”, as we understand it, designates manipulation facilitated by information technology, and could just as easily be termed “digital manipulation” or “automated manipulation”. Since traditionally “offline” spaces are increasingly digitally mediated (because the people occupying them carry smartphones, the spaces themselves are embedded with internet-connected sensors, and so on), we should expect to encounter online manipulation beyond our computer screens.

Given this definition, it is not difficult to see why information technology is uniquely suited to facilitating manipulative influences. First, pervasive digital surveillance puts our decision-making vulnerabilities on permanent display. As privacy scholars have long pointed out, nearly everything we do today leaves a digital trace, and data collectors compile those traces into enormously detailed profiles (Solove, 2004). Such profiles comprise information about our demographics, finances, employment, purchasing behaviour, engagement with public services and institutions, and so on—in total, they often involve thousands of data points about each individual. By analysing patterns latent in this data, advertisers and others engaging in behavioural targeting are able to detect when and how to intervene in order to most effectively influence us (Kaptein & Eckles, 2010).
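
As a rough illustration of the kind of “persuasion profiling” Kaptein and Eckles describe, the following sketch of our own (the profiles, strategies, and response counts are all hypothetical) picks, for each person, the influence strategy they have historically responded to most strongly.

    # Hypothetical sketch of persuasion profiling: choose, per person, the
    # influence strategy with the strongest observed response.
    HISTORY = {
        "alice": {"scarcity": 9, "social_proof": 2, "authority": 1},
        "bob":   {"scarcity": 1, "social_proof": 7, "authority": 3},
    }

    def best_strategy(user):
        responses = HISTORY.get(user, {})
        return max(responses, key=responses.get) if responses else "generic"

    def render_ad(user, product):
        templates = {
            "scarcity":     f"Only 2 left in stock! Buy {product} now.",
            "social_proof": f"1,000 people near you bought {product} this week.",
            "authority":    f"Experts recommend {product}.",
            "generic":      f"Check out {product}.",
        }
        return templates[best_strategy(user)]

    print(render_ad("alice", "headphones"))  # scarcity framing
    print(render_ad("bob", "headphones"))    # social-proof framing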

Moreover, digital surveillance enables detection of increasingly individual- or person-specific vulnerabilities.16 Beyond the well-known cognitive biases discussed above (e.g., anchoring and framing effects), which condition most people’s decision-making to some degree, we are also each subject to particular circumstances that can impact how we choose.17 We are each prone to specific fears, anxieties, hopes, and desires, as well as physical, material, and economic realities, which—if known—can be used to steer our decision-making. In 2016, the voter micro-targeting firm Cambridge Analytica claimed to construct advertisements appealing to particular voter “psychometric” traits (such as openness, extraversion, etc.) by combining information about social media use with personality profiles culled from online quizzes.18 And in 2017, an Australian newspaper exposed internal Facebook strategy documents detailing the company’s alleged ability to detect when teenage users are feeling insecure. According to the report, “By monitoring posts, pictures, interactions and internet activity in real-time, Facebook can work out when young people feel ‘stressed’, ‘defeated’, ‘overwhelmed’, ‘anxious’, ‘nervous’, ‘stupid’, ‘silly’, ‘useless’, and a ‘failure’” (Davidson, 2017). Though Facebook claims it never used that information to target advertisements at teenagers, it did not deny that it could. Extrapolating from this example it is easy to imagine others, such as banks targeting advertisements for high-interest loans at the financially desperate or pharmaceutical companies targeting advertisements for drugs at those suspected to be in health crisis.19

Second, digital platforms, such as websites and smartphone applications, are the ideal medium for leveraging these insights into our decision-making vulnerabilities. They are dynamic, interactive, intrusive, and adaptive choice architectures (Lanzing, 2018; Susser, 2019b; Yeung, 2017). Which is to say, the digital interfaces we interact with are configured in real time using the information about us described above, and they continue to learn about us as we interact with them. Unlike advertisements of old, they do not wait, passively, for viewers to drive past them on roads or browse over them in magazines; rather, they send text messages and push notifications, demanding our attention, and appear in our social media feeds at the precise moment they are most likely to tempt us. And because all of this is automated, digital platforms are able to adapt to each individual user, creating what Karen Yeung calls “highly personalised choice environment[s]”—decision-making contexts in which the vulnerabilities catalogued through pervasive digital surveillance are put to work in an effort to influence our choices (2017, p. 122).20
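
A complementary sketch, again entirely invented, illustrates the timing dimension of such personalised choice environments: the notification is scheduled for the hour at which a given user has historically been most responsive, a crude stand-in for predicted susceptibility.

    # Hypothetical sketch of adaptive notification timing.
    ENGAGEMENT_BY_HOUR = {               # user -> {hour of day: past tap rate}
        "alice": {8: 0.02, 13: 0.05, 22: 0.31},
    }

    def schedule_notification(user, message):
        hours = ENGAGEMENT_BY_HOUR.get(user, {12: 0.0})
        best_hour = max(hours, key=hours.get)   # hour with highest past tap rate
        return best_hour, message

    hour, msg = schedule_notification("alice", "Flash sale ends tonight!")
    print(f"send at {hour:02d}:00 -> {msg}")    # send at 22:00 -> Flash sale ends tonight!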

Third, if manipulation is hidden influence, then digital technologies are ideal vehicles for manipulation because they are already in a real sense hidden. We often think of technologies as objects we attend to and use with focus and attention. The language of technology design reflects this: we talk about “users” and “end users,” “user interfaces,” and “human-computer interaction”. In fact, as philosophers (especially phenomenologists) and science and technology studies (STS) scholars have long shown, once we become habituated to a particular technology, the device or interface itself recedes from conscious attention, allowing us to focus on the tasks we are using it to accomplish.21 Think of a smartphone or computer: we pay little attention to the devices themselves, or even to the way familiar websites or app interfaces are arranged. Instead, after becoming acclimated to them, we attend to the information, entertainment, or conveniences they offer (Rosenberger, 2009). Philosophers refer to this as “technological transparency”—the fact that we see, hear, or otherwise perceive through technologies—as though they were clear, transparent—onto the perceptual objects they convey to us (Ihde, 1990; Van Den Eede, 2011; Verbeek, 2005). Because this language of transparency can be confused with the concept of transparency familiar from technology policy discussions, we might more helpfully describe it as “invisibility” (Susser, 2019b). In addition to pervasive digital surveillance making our decision-making vulnerabilities easy to detect, and digital platforms making them easy to exploit, the ease with which our technologies become invisible to us—simply through frequent use and habituation—means the influences they facilitate are often hidden, and thus potentially manipulative.

Finally, although we focus primarily on the example of behavioural advertising to illustrate these dynamics, it is worth emphasising that advertisers are not the only ones engaging in manipulative practices. In the realm of user interface/experience (UI/UX) design, increasing attention is being paid to so-called “dark patterns”—design strategies that exploit users’ decision-making vulnerabilities to nudge them into acting against their interests (or, at least, acting in the interests of the website or app), such as paid subscriptions that renew automatically once an initial free trial period ends (Brignull, 2013; Gray, Kou, Battles, Hoggatt, & Toombs, 2018; Murgia, 2019; Singer, 2016). Though many of these strategies are as old as the internet and not all rise to the level of manipulation—some overtly inconvenience users rather than hiding their intentions—their growing prevalence has led some to call for legislation banning them (Bartz, 2019).
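
A minimal sketch, with hypothetical names and defaults, contrasts a dark-pattern signup flow of this kind (a free trial that silently converts into an auto-renewing subscription) with a forthright alternative that asks for an explicit, unticked choice.

    # Hypothetical sketch: a dark-pattern trial signup versus a forthright one.
    from dataclasses import dataclass
    from datetime import date, timedelta

    @dataclass
    class Signup:
        email: str
        trial_ends: date
        auto_renew: bool

    def dark_pattern_signup(email):
        # Auto-renewal is pre-selected and the conversion date is never surfaced.
        return Signup(email, date.today() + timedelta(days=30), auto_renew=True)

    def forthright_signup(email, wants_auto_renew):
        # Renewal happens only if the user explicitly opts in.
        return Signup(email, date.today() + timedelta(days=30), auto_renew=wants_auto_renew)

    print(dark_pattern_signup("user@example.com"))
    print(forthright_signup("user@example.com", wants_auto_renew=False))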

Worries about online manipulation have also been raised in the context of gig economy services, such as Uber and Lyft (Veen, Goods, Josserand, & Kaine, 2017). While these platforms market themselves as freer, more flexible alternatives to traditional jobs, providing reliable and consistent service to customers requires maintaining some amount of control over workers. However, without access to the traditional managerial controls of the office or factory floor, gig economy firms turn to “algorithmic management” strategies, such as notifications, customer satisfaction ratings, and other forms of soft control enabled through their apps (Rosenblat & Stark, 2016). Uber, for example, rather than requesting (or demanding) that workers put in longer hours, prompts drivers trying to exit the app with a reminder of how close they are to some earnings goal, exploiting the pull of an almost-reached target; Lyft issues game-like “challenges” to drivers, awarding stars and badges for completing them (Mason, 2018; Scheiber, 2017).
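
A toy version of the earnings-goal prompt described above might look as follows; the $200 goal and the “nearly there” window are invented for illustration.

    # Hypothetical sketch of an earnings-goal prompt shown at log-off.
    def logoff_prompt(earnings_today, goal=200.0):
        remaining = goal - earnings_today
        if 0 < remaining <= 25:              # invented "nearly there" window
            return (f"You're only ${remaining:.0f} away from ${goal:.0f} today. "
                    "Keep driving?")
        return "You are now offline."

    print(logoff_prompt(182.0))   # nudges the driver to stay on
    print(logoff_prompt(120.0))   # logs the driver off without comment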

In their current form, not all such practices necessarily manipulate—people are savvy, and many likely understand what they are facing. These examples are important, however, because they illustrate our present trajectory. Growing reliance on digital tools in all parts of our lives—tools that constantly record, aggregate, and analyse information about us—means we are revealing more and more about our individual and shared vulnerabilities. The digital platforms we interact with are increasingly capable of exploiting those insights to nudge and shape our choices, at home, in the workplace, and in the public sphere. And the more we become habituated to these systems, the less attention we pay to them.

2. The harm(s) of online manipulation

With this picture in hand, the question becomes: what exactly is the harm that results from influencing people in this way? Why should we be worried about technological mediation rendering us so susceptible to manipulative influence? In our view, there are several harms, but each flows from the same place—manipulation violates its target’s autonomy.

The notion of autonomy points to an individual’s capacity to make meaningfully independent decisions. As Joseph Raz puts it: “(t)he ruling idea behind the ideal of personal autonomy is that people should make their own lives” (Raz, 1986, p. 369). Making one’s own life means freely facing both existential choices, like whom to spend one’s life with or whether to have children, and pedestrian, everyday ones. And facing them freely means having the opportunity to think about and deliberate over one’s options, considering them against the backdrop of one’s beliefs, desires, and commitments, and ultimately deciding for reasons one recognises and endorses as one’s own, absent unwelcome influence (J. P. Christman, 2009; Oshana, 2015; Veltman & Piper, 2014). Autonomy is in many ways the guiding normative principle of liberal democratic societies. It is because we think individuals can and should govern themselves that we value our capacity to collectively and democratically self-govern.

Philosophers sometimes operationalise the notion of autonomy by distinguishing between its competency and authenticity conditions (J. P. Christman, 2009, p. 155f). In the first place, being autonomous means having the cognitive, psychological, social, and emotional competencies to think through one’s choices, form intentions about them, and act on the basis of those intentions. Second, it means that upon critical reflection one identifies with one’s values, desires, and goals, and endorses them authentically as one’s own. Of course, many have criticised such conceptions of autonomy as overly rationalistic and implausibly demanding, arguing that we rarely decide in this way. We are emotional actors and creatures of habit, they argue, socialised and enculturated into specific ways of choosing that we almost never reflect upon or endorse. But we understand autonomy broadly—our conception of deliberation includes not only beliefs and desires, but also emotions, convictions, and experiences, and critical reflection can be counterfactual (we must in principle be able to critically reflect on and endorse our motivations for acting, but we need not actually reflect on each and every move we make).

In addition to rejecting overly demanding and rationalistic conceptions of autonomy, we also reject overly atomistic ones. In our view, autonomous persons are socially, culturally, historically, and politically situated. Which is to say, we acknowledge the “intersubjective and social dimensions of selfhood and identity for individual autonomy and moral and political agency” (Mackenzie & Stoljar, 2000, p. 4).22 Though social contexts can constrain our choices, by conditioning us to believe and behave in stereotypical ways (as, for example, in the case of gendered social expectations), it is also our social contexts that bestow value on autonomy, teaching us what it means to make independent decisions, and providing us with rich sets of options from which to choose. Moreover, it is crucial for present purposes that we emphasise our understanding of autonomy as more than an individual good—it is an essential social and political good too. Individuals express their autonomy across a variety of social contexts, from the home to the marketplace to the political sphere. Democratic institutions are meant to register and reflect the autonomous political decisions individuals make. Disrupting individual autonomy is thus more than an ethical concern; it has social and political import.

Against this picture of autonomy and its value, we can more carefully explain why online manipulation poses such a grave threat. To manipulate someone is, again, to covertly influence them, to intentionally alter their decision-making process without their conscious awareness. Doing so undermines the target’s autonomy in two ways: first, it can lead them to act toward ends they haven’t chosen, and second, it can lead them to act for reasons not authentically their own.

To see the first problem, consider examples of targeted advertising in the commercial sphere. Here, the aim of manipulators is fairly straightforward: they want people to buy things. Rather than simply put products on display, however, advertisers can construct decision-making environments—choice architectures—that subtly tempt or seduce shoppers to purchase their wares, and at the highest possible price (Calo, 2014). A variety of strategies might be deployed, from pointing out that one’s friends have purchased the item to countdown clocks that pressure one to act before some offer expires, the goal being to hurry, evade, or undermine deliberation, and thus to encourage decisions that may or may not align with an individual’s deeper, reflective, self-chosen ends and values.

Of course, these strategies are familiar from non-digital contexts; all commercial advertising (digital or otherwise) functions in part to induce consumers to buy things, and worries about manipulative ads emerged long before advertising moved online.23 Equally, not all advertising—perhaps not even all targeted advertising—involves manipulation. Purely informational ads displayed to audiences actively seeking out related products and services (e.g., online banner ads displaying a doctor’s contact information shown to visitors to a health-related website) are unlikely to covertly influence their targets. Worries about manipulation arise in cases where advertisements are sneaky—which is to say, where their effects are achieved covertly. If, for example, the doctor was a psychiatrist, his advertisements were shown to people suspected of suffering from depression, and only at the specific times of day they were thought to be most afflicted, our account would offer grounds for condemning such tactics as manipulative.

It might also be the case that manipulation is not a binary phenomenon. We are the objects of countless influence campaigns and we understand some of them more than others; perhaps we ought to say that they are more or less manipulative in equal measure. On such a view, online targeted (or “behavioural”) advertising could be understood as exacerbating manipulative dynamics common to other forms of advertising, by making the tweaks to individual choice architectures more subtle, and the seductions and temptations that result from them more difficult to resist (Yeung, 2017). Worse still, the fluidity and porousness of online environments make it easy for marketers to conflate other distinct contexts with shopping, further blurring a person’s reasoning about whether they truly want to make some purchase. For example, while chatting with friends over social media or searching for some place to eat, an ad may appear, requiring the target to juggle several tasks—in this case, communication and information retrieval—along with deliberation over whether or not to respond to the marketing ploy, thus diminishing the target’s ability to sustain focus on any of them. This problem is especially clearly illustrated by so-called “native advertising” (advertisements designed to look like user-generated, non-commercial content). Such advertisements are a kind of Trojan horse, intentionally conflating commercial and non-commercial activities in an attempt to undermine our capacity for focused, careful deliberation.

In the philosophical language introduced above, these strategies challenge both autonomy’s competency and authenticity conditions. By deliberately and covertly engineering our choice environments to steer our decision-making, online manipulation threatens our competency to deliberate about our options, form intentions about them, and act on the basis of those intentions. And since, as we’ve seen, manipulative practices often work by targeting and exploiting our decision-making vulnerabilities—concealing their effects, leaving us unaware of the influence on our decision-making process—they also challenge our capacity to reflect on and endorse our reasons for acting as authentically our own. Online manipulation thus harms us both by inducing us to act toward ends not of our choosing and for reasons we haven’t endorsed.

Importantly, undermining personal autonomy in the ways just described can lead to further harms. First, since autonomous individuals are wont to protect (or at least to try and protect) their own interests, we can reasonably expect that undermining people’s autonomy will lead, in many cases, to a diminishment of those interests. Losing the ability to look out for ourselves is unlikely to leave us better off in the long run. This harm—e.g., being tricked into buying things we don’t need or paying more for them than we otherwise would—is well described by those who have analysed the problem of online manipulation in the commercial sphere (Calo, 2014; Nadler & McGuigan, 2018; Zarsky, 2006; Zarsky, 2019). And it is a serious harm, which we would do well to take seriously, especially given the fact that law and policy around information and internet practices (at least in the US) assume that individuals are for the most part capable of safeguarding their interests (Solove, 2013). However, it is equally important to see that this harm to welfare is derivative of the deeper harm to autonomy. Attempting to “protect consumers” from threats to their economic or other interests, without addressing the more fundamental threat to their autonomy, is thus to treat the symptoms without addressing the cause.

To bring this into sharper relief, it is worth pointing out that even purely beneficent manipulation is harmful. Indeed, it is harmful to manipulate someone even in an effort to lead them more effectively toward their own self-chosen ends. That is because the fundamental harm of manipulation is to the process of decision-making, not its outcome. A well-meaning, paternalistic manipulator, who subtly induces his target to eat better food, exercise, and work hard, makes his target better off in one sense—he is healthier and perhaps more materially well-off—but harms him as well by rendering him opaque to himself. Imagine if some bad habit, which someone had spent their whole life attempting to overcome, one day, all of a sudden, disappeared. They would be happy, of course, to be rid of the habit, but they might also be deeply confused and suspicious about the source of the change. As T.M. Scanlon writes, “I want to choose the furniture for my own apartment, pick out the pictures for the walls, and even write my own lectures despite the fact that these things might be done better by a decorator, art expert, or talented graduate student. For better or worse, I want these things to be produced by and reflect my own taste, imagination, and powers of discrimination and analysis. I feel the same way, even more strongly, about important decisions affecting my life in larger terms: what career to follow, where to work, how to live” (Scanlon, 1988).

Having said that, we have not demonstrated that manipulation is necessarily wrong in every case—only that it always carries a harm. One can imagine cases where the harm to autonomy is outweighed by the benefit to welfare. (For example, a case where someone’s life is in immediate danger, and the only way to save them is by manipulating them.) But such cases are likely few and far between. What is so worrying about online manipulation is precisely its banality—the fact that it threatens to become a regular part of the fabric of everyday experience. As Jeremy Waldron argues, if we allow that to happen, our lives will be drained of something deeply important: “What becomes of the self-respect we invest in our own willed actions, flawed and misguided though they often are, when so many of our choices are manipulated to promote what someone else sees (perhaps rightly) as our best interest?” (Waldron, 2014) That we also lack reason to believe online manipulators really do have our best interests at heart is only more reason to resist them.

Finally, beyond the harm to individuals, manipulation promises a collective harm. By threatening our autonomy it threatens democracy as well. For autonomy is writ small what democracy is writ large—the capacity to self-govern. It is only because we believe individuals can make meaningfully independent decisions that we value institutions designed to register and reflect them. As the Cambridge Analytica case—and the public outcry in response to it—demonstrates, online manipulation in the political sphere threatens to undermine these core collective values. The problem of online manipulation is, therefore, not simply an ethical problem; it is a social and political one too.

3. Technology and autonomy

If one accepts the arguments advanced thus far, an obvious response is that we need to devise law and policy capable of preventing and mitigating manipulative online practices. We agree that we do. But that response is not sufficient—the question for policymakers is not simply how to mitigate online manipulation, but how to strengthen autonomy in the digital age. In making this claim, we join our voices with a growing chorus of scholars and activists—like Frischmann, Selinger, and Zuboff—working to highlight the corrosive effects of digital technologies on autonomy. Meeting these challenges requires more than consumer protection—it requires creating the positive conditions necessary for supporting individual and collective self-determination.

We don’t pretend to have a comprehensive solution to these deep and complex problems, but some suggestions follow from our brief discussion. It should be noted that these suggestions—like the discussion, above, that prompted them—are situated firmly in the terrain of contemporary liberal political discourse, and those convinced that online manipulation poses a significant threat (especially some European readers) may be struck by how moderate our responses are. While we are not opposed to more radical interventions, we formulate our analysis using the conceptual and normative frameworks familiar to existing policy discussions in hopes of having an impact on them.

Curtail digital surveillance

Data, as Tal Zarsky writes, is the “fuel” powering online manipulation (2019, p. 186). Without the detailed profiles cataloguing our preferences, interests, habits, and so on, the ability of would-be manipulators to identify our weaknesses and vulnerabilities would be vastly diminished, and so too their capacity to leverage them to their ends. Of course, the call to curtail digital surveillance is nothing new. Privacy scholars and advocates have been raising alarms about the ills of surveillance for half a century or more. Yet, as Zarsky argues, manipulation arguments could add to the “analytic and doctrinal arsenal of measures which enable legal intervention in the new digital environment” (2019, p. 185). Furthermore, outcry over apparent online manipulation in both the commercial and political spheres appears to be generating momentum behind new policy interventions to combat such strategies. In the US, a number of states have recently passed or are considering passing new privacy legislation, and the US Congress appears to be weighing new federal privacy legislation as well (“Congress Is Trying to Create a Federal Privacy Law”, 2019; Merken, 2019). And, of course, all of that takes place on the heels of the new General Data Protection Regulation (GDPR) taking effect in Europe, which places new limits on when and what kinds of data can be collected about European citizens and by firms operating on European soil.24 To curb manipulation and strengthen autonomy online, efforts to curtail digital surveillance ought to be redoubled.

Problematise personalisation

When asked to justify collecting so much data about us, data collectors routinely argue that the information is needed in order to personalise their services to the needs and interests of individual users. Mark Zuckerberg, for example, attempted recently to explain Facebook’s business model in the pages of the Wall Street Journal: “People consistently tell us that if they're going to see ads, they want them to be relevant,” he wrote. “That means we need to understand their interests” (2019).25 Personalisation seems, on the face of it, like an unalloyed good. Who wouldn’t prefer a personalised experience to a generic one? Yet research into different forms of personalisation suggests that individualising—personalising—our experiences can carry with it significant risks.

These worries came to popular attention with Eli Pariser’s book The Filter Bubble (2011), which argued forcefully (though not without challenge) that the construction of increasingly singular, individualised experiences means, at the same time, the loss of common, shared ones, and described the detriments of that transformation to both individual and collective decision-making.26 In addition to personalised information environments—Pariser’s focus—technological advances enable things like personalised pricing, sometimes called “dynamic pricing” or “price discrimination” (Calo, 2014), and personalised work scheduling, or “just-in-time” scheduling (De Stefano, 2015). For the reasons discussed above, many such strategies may well be manipulative. The targeting and exploiting of individual decision-making vulnerabilities enabled by digital technologies—the potential for online manipulation they create—gives us reason to question whether the benefits of personalisation really outweigh the costs. At the very least, we ought not to uncritically accept personalisation as a rationale for increased data collection, and we ought to approach with care (if not scepticism) the promise of an increasingly personalised digital environment.
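
To make the pricing example concrete, here is a minimal sketch of our own (the willingness-to-pay score, the 20% cap, and the base price are all invented): the quoted price rises with a score imagined to be inferred from signals such as browsing history, device type, and location.

    # Hypothetical sketch of personalised ("dynamic") pricing.
    BASE_PRICE = 100.00

    def personalised_price(willingness_to_pay):
        # willingness_to_pay: a 0-1 score inferred from behavioural signals;
        # higher scores receive a larger markup.
        markup = 0.20 * willingness_to_pay
        return round(BASE_PRICE * (1 + markup), 2)

    print(personalised_price(0.1))   # 102.0 for a price-sensitive shopper
    print(personalised_price(0.9))   # 118.0 for a less price-sensitive one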

Promote awareness and understanding

If the central problem of online manipulation is its hiddenness, then any response must involve a drive toward increased awareness. The question is what form such awareness should take. Yeung argues that the predominant vehicle for notifying individuals about information flows and data practices—the privacy notice, or what is often called “notice-and-consent”—is insufficient (2017). Indeed, merely notifying someone that they are the target of manipulation is not enough to neutralise its effects. Doing so would require understanding not only that one is the target of manipulation, but also who the manipulator is, what strategies they are deploying, and why. Given the well-known “transparency paradox”, according to which we are bound to either deprive users of relevant information (in an attempt to be succinct) or overwhelm them with it (in an attempt to be thorough), there is little reason to believe standard forms of notice alone can equip users to face the challenges of online manipulation.27

Furthermore, the problem of online manipulation runs deeper than any particular manipulative practice. What worries many people is the fact that manipulative strategies, like targeted advertising, are becoming basic features of the digital world—so commonplace as to escape notice or mention.28 In the same way that machine learning and artificial intelligence tools have quickly and quietly been delegated vast decision-making authorities in a variety of contemporary contexts and institutions, and in response, scholars and activists have mounted calls to make their decision-making processes more explainable, transparent, and accountable, so too must we give people tools to understand and manage a digital environment designed to shape and influence them.29

Attend to context

Finally, it is important to recognise that moral intuitions about manipulation are indexed to social context. Which is to say, we are willing to tolerate different levels of outside influence on our decision-making in different decision-making spheres. As relatively lax commercial advertising regulations indicate, we are—at least in the US—willing to accept a fair amount of interference in the commercial sphere. By contrast, somewhat more stringent regulations around elections and campaign advertising suggest that we are less willing to accept such interference in the realm of politics.30 Responding to the threats of online manipulation therefore requires sensitivity to where—in which spheres of life—we encounter them.

Conclusion

The idea that technological advancements bring with them new arrangements of power is, of course, nothing new. That online manipulation threatens to subordinate the interests of individuals to those of data collectors and their clients is thus, in one respect, a familiar (if nonetheless troubling) problem. What we hope to have shown, however, is that the threat of online manipulation is deeper, more insidious, than that. Being steered or controlled, outside our conscious awareness, violates our autonomy, our capacity to understand and author our own lives. If the tools that facilitate such control are left unchecked, it will be to our individual and collective detriment. As we’ve seen, information technology is in many ways an ideal vehicle for these forms of control, but that does not mean that they are inevitable. Combating online manipulation requires both depriving it of personal data—the oxygen enabling it—and empowering its targets with awareness, understanding, and savvy about the forces attempting to influence them.

References

Abramowitz, M. J. (2017, December 11). Stop the Manipulation of Democracy Online. The New York Times. Retrieved from https://www.nytimes.com/2017/12/11/opinion/fake-news-russia-kenya.html

Anderson, J., & Honneth, A. (2005). Autonomy, Vulnerability, Recognition, and Justice. In J. Christman & J. Anderson (Eds.), Autonomy and the Challenges to Liberalism (pp. 127–149). doi:10.1017/CBO9780511610325.008

Bartz, D. (2019, April 13). U.S. senators introduce social media bill to ban “dark patterns” tricks. Reuters. Retrieved from https://www.reuters.com/article/us-usa-tech-idUSKCN1RL25Q

Benkler, Y., Faris, R., & Roberts, H. (2018). Network Propaganda: Manipulation, Disinformation, and Radicalization in American Politics. New York: Oxford University Press.

Blumenthal, J. A. (2005). Does Mood Influence Moral Judgment? An Empirical Test with Legal and Policy Implications. Law & Psychology Review, 29, 1–28.

Boerman, S. C., Kruikemeier, S., & Zuiderveen Borgesius, F. J. (2017). Online Behavioral Advertising: A Literature Review and Research Agenda. Journal of Advertising, 46(3), 363–376. doi:10.1080/00913367.2017.1339368

Bovens, L. (2009). The Ethics of Nudge. In T. Grüne-Yanoff & S. O. Hansson (Eds.), Preference Change: Approaches from Philosophy, Economics and Psychology (pp. 207–219). Dordrecht: Springer Netherlands.

Brignull, H. (2013, August 29). Dark Patterns: inside the interfaces designed to trick you. Retrieved June 17, 2019, from The Verge website: https://www.theverge.com/2013/8/29/4640308/dark-patterns-inside-the-interfaces-designed-to-trick-you

Calo, M. R. (2014). Digital Market Manipulation. The George Washington Law Review, 82(4). Retrieved from https://www.gwlr.org/wp-content/uploads/2018/01/82-Geo.-Wash.-L.-Rev.-995.pdf

Cambridge Analytica and Facebook: The Scandal so Far. (2018, March 28). Al Jazeera News. Retrieved from https://www.aljazeera.com/news/2018/03/cambridge-analytica-facebook-scandal-180327172353667.html

Christman, J. P. (2009). The Politics of Persons: Individual Autonomy and Socio-Historical Selves. Cambridge; New York: Cambridge University Press.

Congress Is Trying to Create a Federal Privacy Law. (2019, February 28). The Economist. Retrieved from https://www.economist.com/united-states/2019/02/28/congress-is-trying-to-create-a-federal-privacy-law

Davidson, D. (2017, May 1). Facebook targets “insecure” to sell ads. The Australian.

De Stefano, V. (2015). The Rise of the “Just-in-Time Workforce”: On-Demand Work, Crowd Work and Labour Protection in the “Gig-Economy.” SSRN Electronic Journal. doi:10.2139/ssrn.2682602

Doubek, J. (2017, November 16). How Disinformation And Distortions On Social Media Affected Elections Worldwide. Retrieved March 24, 2019, from NPR.org website: https://www.npr.org/sections/alltechconsidered/2017/11/16/564542100/how-disinformation-and-distortions-on-social-media-affected-elections-worldwide

Dubois, E., & Blank, G. (2018). The echo chamber is overstated: The moderating effect of political interest and diverse media. Information, Communication & Society, 21(5), 729–745. doi:10.1080/1369118X.2018.1428656

Franken, I. H. A., & Muris, P. (2005). Individual Differences in Decision-Making. Personality and Individual Differences, 39(5), 991–998. doi:10.1016/j.paid.2005.04.004

Frischmann, B., & Selinger, E. (2018). Re-Engineering Humanity (1st ed.). doi:10.1017/9781316544846

Gray, C. M., Kou, Y., Battles, B., Hoggatt, J., & Toombs, A. L. (2018). The Dark (Patterns) Side of UX Design. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems - CHI ’18, 1–14. doi:10.1145/3173574.3174108

Hausman, D. M., & Welch, B. (2010). Debate: To Nudge or Not to Nudge. Journal of Political Philosophy, 18(1), 123–136. doi:10.1111/j.1467-9760.2009.00351.x

Ihde, D. (1990). Technology and the Lifeworld: From Garden to Earth. Bloomington: Indiana University Press.

Kahneman, D. (2013). Thinking, Fast and Slow (1st pbk. ed). New York: Farrar, Straus and Giroux.

Kaptein, M., & Eckles, D. (2010). Selecting Effective Means to Any End: Futures and Ethics of Persuasion Profiling. In T. Ploug, P. Hasle, & H. Oinas-Kukkonen (Eds.), Persuasive Technology (Vol. 6137, pp. 82–93). doi:10.1007/978-3-642-13226-1_10

Kligman, M., & Culver, C. M. (1992). An Analysis of Interpersonal Manipulation. Journal of Medicine and Philosophy, 17(2), 173–197. doi:10.1093/jmp/17.2.173

Lanzing, M. (2018). “Strongly Recommended” Revisiting Decisional Privacy to Judge Hypernudging in Self-Tracking Technologies. Philosophy & Technology. doi:10.1007/s13347-018-0316-4

Levinson, J. D., & Peng, K. (2007). Valuing Cultural Differences in Behavioral Economics. ICFAI Journal of Behavioral Finance, 4(1).

Mackenzie, C., & Stoljar, N. (Eds.). (2000). Relational Autonomy: Feminist Perspectives on Autonomy, Agency, and the Social Self. New York: Oxford University Press.

Mason, S. (2018, November 20). High score, low pay: Why the gig economy loves gamification. The Guardian. Retrieved from https://www.theguardian.com/business/2018/nov/20/high-score-low-pay-gamification-lyft-uber-drivers-ride-hailing-gig-economy

Matz, S. C., Kosinski, M., Nave, G., & Stillwell, D. J. (2017). Psychological Targeting as an Effective Approach to Digital Mass Persuasion. Proceedings of the National Academy of Sciences, 114(48), 12714–12719. doi:10.1073/pnas.1710966114

Merken, S. (2019, February 6). States Follow EU, California in Push for Consumer Privacy Laws. Retrieved March 25, 2019, from Bloomberg Law website: https://news.bloomberglaw.com/privacy-and-data-security/states-follow-eu-california-in-push-for-consumer-privacy-laws-1

Murgia, M. (2019, May 4). When manipulation is the business model. Financial Times.

Nadler, A., & McGuigan, L. (2018). An Impulse to Exploit: The Behavioral Turn in Data-Driven Marketing. Critical Studies in Media Communication, 35(2), 151–165. doi:10.1080/15295036.2017.1387279

Nissenbaum, H. (2011). A Contextual Approach to Privacy Online. Daedalus, 140(4), 32–48. doi:10.1162/DAED_a_00113

Noggle, R. (2018a). Manipulation, Salience, and Nudges. Bioethics, 32(3), 164–170. doi:10.1111/bioe.12421

Noggle, R. (2018b). The Ethics of Manipulation. In E. N. Zalta (Ed.), Stanford Encyclopedia of Philosophy (p. 24). Retrieved from https://plato.stanford.edu/entries/ethics-manipulation/

Nys, T. R., & Engelen, B. (2017). Judging Nudging: Answering the Manipulation Objection. Political Studies, 65(1), 199–214. doi:10.1177/0032321716629487

Oshana, M. (Ed.). (2015). Personal Autonomy and Social Oppression: Philosophical Perspectives (First edition). New York: Routledge, Taylor & Francis Group.

Pariser, E. (2011). The Filter Bubble: What the Internet Is Hiding from You. Retrieved from http://search.ebscohost.com/login.aspx?direct=true&scope=site&db=nlebk&db=nlabk&AN=1118322

Rachlinski, J. J. (2006). Cognitive Errors, Individual Differences, and Paternalism. University of Chicago Law Review, 73(1), 207–229. Available at https://chicagounbound.uchicago.edu/uclrev/vol73/iss1/11/

Raz, J. (1986). The Morality of Freedom (Reprinted). Oxford: Clarendon Press.

Reach, G. (2016). Patient education, nudge, and manipulation: Defining the ethical conditions of the person-centered model of care. Patient Preference and Adherence, 10, 459–468. doi:10.2147/PPA.S99627

Richards, N. M. (2013). The Dangers of Surveillance. Harvard Law Review, 126(7), 1934–1965. Available at https://harvardlawreview.org/2013/05/the-dangers-of-surveillance/

Rosenberger, R. (2009). The Sudden Experience of the Computer. AI & Society, 24(2), 173–180. doi:10.1007/s00146-009-0190-9

Rosenblat, A., & Stark, L. (2016). Algorithmic Labor and Information Asymmetries: A Case Study of Uber’s Drivers. International Journal of Communication, 10, 3758–3784. Retrieved from https://ijoc.org/index.php/ijoc/article/view/4892

Rudinow, J. (1978). Manipulation. Ethics, 88(4), 338–347. doi:10.1086/292086

Scanlon, T. M. (1988). The Significance of Choice. In A. Sen & S. M. McMurrin (Eds.), The Tanner Lectures on Human Values (Vol. 8, p. 68).

Scheiber, N. (2017, April 2). How Uber Uses Psychological Tricks to Push Its Drivers’ Buttons. The New York Times. Retrieved from https://www.nytimes.com/interactive/2017/04/02/technology/uber-drivers-psychological-tricks.html

Selbst, A. D., & Barocas, S. (2018). The Intuitive Appeal of Explainable Machines. Fordham Law Review, 87(3), 1085-1139. Retrieved from https://ir.lawnet.fordham.edu/flr/vol87/iss3/11/

Selinger, E., & Whyte, K. (2011). Is There a Right Way to Nudge? The Practice and Ethics of Choice Architecture. Sociology Compass, 5(10), 923–935. doi:10.1111/j.1751-9020.2011.00413.x

Singer, N. (2016, May 14). When Websites Won’t Take No for an Answer. The New York Times. Retrieved from https://www.nytimes.com/2016/05/15/technology/personaltech/when-websites-wont-take-no-for-an-answer.html

Solove, D. J. (2004). The Digital Person: Technology and Privacy In The Information Age. New York: New York University Press.

Solove, D. J. (2013). Privacy Self-Management and the Consent Dilemma. Harvard Law Review, 126(7), 1880–1903. Retrieved from https://harvardlawreview.org/2013/05/introduction-privacy-self-management-and-the-consent-dilemma/

Stanovich, K. E., & West, R. F. (1998). Individual Differences in Rational Thought. Journal of Experimental Psychology: General, 127(2), 161–188. doi:10.1037/0096-3445.127.2.161

Stole, I. L. (2014). Persistent Pursuit of Personal Information: A Historical Perspective on Digital Advertising Strategies. Critical Studies in Media Communication, 31(2), 129–133. doi:10.1080/15295036.2014.921319

Sunstein, C. R. (2016). The Ethics of Influence: Government in the Age of Behavioral Science. Cambridge: Cambridge University Press.

Susser, D. (2019a). Notice After Notice-and-Consent: Why Privacy Disclosures Are Valuable Even If Consent Frameworks Aren’t. Journal of Information Policy, 9, 37–62. doi:10.5325/jinfopoli.9.2019.0037

Susser, D. (2019b). Invisible Influence: Artificial Intelligence and the Ethics of Adaptive Choice Architectures. Presented at the AAAI/ACM Conference on AI, Ethics, and Society (AIES ’19), Honolulu. Available at http://www.aies-conference.com/wp-content/papers/main/AIES-19_paper_54.pdf

Susser, D., Roessler, B., & Nissenbaum, H. (2018). Online Manipulation: Hidden Influences in a Digital World. SSRN Electronic Journal. Retrieved from https://papers.ssrn.com/abstract=3306006

Thaler, R. H., & Sunstein, C. R. (2008). Nudge: Improving Decisions About Health, Wealth, and Happiness. New Haven: Yale University Press.

Tufekci, Z. (2014). Engineering the Public: Big Data, Surveillance and Computational Politics. First Monday, 19(7). doi:10.5210/fm.v19i7.4901

Turow, J. (2012). The Daily You: How the New Advertising Industry Is Defining Your Identity and Your Worth. Retrieved from https://books.google.com/books?id=rK7JSFudXA8C

Vaidhyanathan, S. (2018). Antisocial Media: How Facebook Disconnects Us and Undermines Democracy. New York; Oxford: Oxford University Press.

Van Den Eede, Y. (2011). In Between Us: On the Transparency and Opacity of Technological Mediation. Foundations of Science, 16(2/3), 139–159. doi:10.1007/s10699-010-9190-y

Ienca, M., & Vayena, E. (2018, March 30). Cambridge Analytica and Online Manipulation. Retrieved March 24, 2019, from Scientific American Blog Network website: https://blogs.scientificamerican.com/observations/cambridge-analytica-and-online-manipulation/

Veen, A., Goods, C., Josserand, E., & Kaine, S. (2017, June 18). “The way they manipulate people is really saddening”: Study shows the trade-offs in gig work. Retrieved June 16, 2019, from The Conversation website: http://theconversation.com/the-way-they-manipulate-people-is-really-saddening-study-shows-the-trade-offs-in-gig-work-79042

Veltman, A., & Piper, M. (Eds.). (2014). Autonomy, Oppression, and Gender. Oxford; New York: Oxford University Press.

Verbeek, P.-P. (2005). What Things Do: Philosophical Reflections on Technology, Agency, and Design. University Park: Pennsylvania State University Press.

Waldron, J. (2014, October 9). It’s All for Your Own Good. The New York Review of Books. Retrieved from https://www.nybooks.com/articles/2014/10/09/cass-sunstein-its-all-your-own-good/

Westin, A. F. (2015). Privacy and Freedom. New York: IG Publishing.

Wood, A. (2014). Coercion, Manipulation, Exploitation. In C. Coons & M. Weber (Eds.), Manipulation: Theory and Practice. Oxford; New York: Oxford University Press.

Yeung, K. (2017). Hypernudge: Big Data as a Mode of Regulation by Design. Information, Communication & Society, 20(1), 118–136. doi:10.1080/1369118X.2016.1186713

Zarsky, T. (2006). Online Privacy, Tailoring, and Persuasion. In K. J. Strandburg & D. S. Raicu (Eds.), Privacy and Technologies of Identity: A Cross-Disciplinary Conversation (pp. 209–224). doi:10.1007/0-387-28222-X_12

Zarsky, T. Z. (2019). Privacy and Manipulation in the Digital Age. Theoretical Inquiries in Law, 20(1), 157–188. http://www7.tau.ac.il/ojs/index.php/til/article/view/1612

Zittrain, J. (2014). Engineering an Election. Harvard Law Review Forum, 127(8), 335–341. Retrieved from https://harvardlawreview.org/2014/06/engineering-an-election/

Zuboff, S. (2015). Big Other: Surveillance Capitalism and the Prospects of an Information Civilization. Journal of Information Technology, 30(1), 75–89. doi:10.1057/jit.2015.5

Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (First edition). New York: Public Affairs.

Zuckerberg, M. (2019, January 25). The Facts About Facebook. Wall Street Journal. Retrieved from http://ezaccess.libraries.psu.edu/login?url=https://search-proquest-com.ezaccess.libraries.psu.edu/docview/2170828623?accountid=13158

Zuiderveen Borgesius, F. J., Möller, J., Kruikemeier, S., Ó Fathaigh, R., Irion, K., Dobber, T., … De Vreese, C. (2018). Online Political Microtargeting: Promises and Threats for Democracy. Utrecht Law Review, 14(1), 82–96. doi:10.18352/ulr.420

Zuiderveen Borgesius, F. J., Trilling, D., Möller, J., Bodó, B., De Vreese, C. H., & Helberger, N. (2016). Should We Worry About Filter Bubbles? Internet Policy Review, 5(1). doi:10.14763/2016.1.401

Footnotes

1. Richards describes this influence as “persuasion” and “subtle forms of control”. In our view, for reasons discussed below, the subtler forms of influence ought really to be called “manipulation”.

2. For a wide-ranging review of the scholarly literature on targeted advertising, see (Boerman, Kruikemeier, & Zuiderveen Borgesius, 2017).

3. Zarsky, for example, gestures at there being more at stake than consumer interests, but he explicitly declines to develop the point, framing the problem instead as one of consumer protection. See Zarsky (2006; 2019).

4. Which is not to say that no one saw this coming. As far back as 1967, Alan Westin warned about “the entire range of forthcoming devices, techniques, and substances that enter the mind to implant influences or extract data” and their application “in commerce or politics” (Westin, 2015, p. 331). See also (Tufekci, 2014; Zittrain, 2014).

5. Frischmann and Selinger write: “Across cultures and generations, humans have engineered themselves and their built social environments to sustain capacities for thinking, the ability to socialize and relate to each other, free will, autonomy, and agency, as well as other core capabilities. […T]hey are at risk of being whittled away through modern forms of techno-social engineering.” (2018, p. 271). And Zuboff argues that the behaviour modifications characteristic of surveillance capitalism “sacrifice our right to the future tense, which comprises our will to will, our autonomy, our decision rights, our privacy, and, indeed, our human natures” (2019, p. 347).

6. As Frischmann and Selinger write, “We are fundamentally techno-social animals” (2018, p. 271).

7. For a more fully developed and defended version of our account, see Susser, Roessler, and Nissenbaum (2018).

8. (Benkler, Faris, & Roberts, 2018). See also the many excellent reports from the Data & Society Research Institute’s “Media Manipulation” project: https://datasociety.net/research/media-manipulation/

9. Examples from Noggle (2018b).

10. The term “persuasion” is sometimes used in a broader sense, as a synonym for “influence”. Here we use it in the narrower sense of rational persuasion, since our goal is precisely to distinguish between different forms of influence.

11. Assuming we ever do learn that we have been manipulated. Presumably we often do not.

12. As Luc Bovens writes about nudges (discussed below), such strategies “typically work better in the dark” (2009, p. 209).

13. The classic formulation of these ideas comes from Daniel Kahneman and Amos Tversky, summarised in (Kahneman, 2013). See also (Thaler & Sunstein, 2008).

14. Writing about manipulation in 1978, Joel Rudinow observed: “Weaknesses are rarely displayed; they are betrayed. Since our weaknesses, in addition to making us vulnerable, are generally repugnant to us, we generally do our best to conceal them, not least from ourselves. Consequently too few people are insightful enough into or familiar enough with enough other people to make the use of resistible incentives a statistically common form of manipulation. In addition we are not always so situated as to be able genuinely to offer someone the incentive which we believe will best suit our manipulative aims. Just as often it becomes necessary to deceive someone in order to play on his weakness. Thus it is only to be expected that deception plays a role in the great majority of cases of manipulation.” (Rudinow, 1978, p. 347) As we’ll see below, it is precisely the limitations confronting the would-be manipulator in 1978, which Rudinow identifies, that thanks to technology have since been overcome.

15. Thaler and Sunstein refer to this as “libertarian paternalism” (2008).

16. Our thanks to a reviewer of this essay for the term “person-specific vulnerability.”

17. In fact, while we are all susceptible to the kinds of cognitive biases discussed by behavioral economists to some degree, we are not all susceptible to each bias to the same degree (Rachlinski, 2006; Stanovich & West, 1998). Empirical evidence suggests that individual differences in personality (Franken & Muris, 2005), cultural background (Levinson & Peng, 2007), and mood (Blumenthal, 2005), among others, can modulate how individuals are impacted by particular biases. It is not difficult to imagine digital tools detecting these differences and leveraging them to structure particular interventions.

18. Cambridge Analytica’s then-CEO Alexander Nix discusses these tactics here: https://www.youtube.com/watch?v=n8Dd5aVXLCc. Research suggests such tactics are plausible; see Matz, Kosinski, Nave, and Stillwell (2017).

19. For a deeper discussion about vulnerability, its varieties, and the ways vulnerabilities can be leveraged by digital tools, see Susser, Roessler, and Nissenbaum (2018).

20. See also Susser (2019b).

21. For an excellent discussion of the different ways this idea has been elaborated by a variety of philosophers and STS scholars, see Van Den Eede (2011).

22. It is worth noting, however: just because individuals and their capacities are inextricably social, that does not mean autonomy is only possible in egalitarian social contexts. See Anderson and Honneth (2005).

23.“While much about digital advertising appears revolutionary, it would be wrong to accept the notion of customer surveillance as a modern phenomenon. Although the internet’s technological advances have taken advertising in new directions and the practice of ‘data-mining’ to almost incomprehensible extremes, nearly all of what is transpiring reflects some of the basic methods developed by marketers beginning a hundred years ago” (Stole, 2014).

24. See https://eugdpr.org

25. Zuckerberg also cited needing user information for “security and operating our services”.

26. Some empirical researchers have expressed skepticism about the alleged harms of filter bubbles, some even suggesting that they are beneficial (Dubois & Blank, 2018; Zuiderveen Borgesius et al., 2016). Their findings, however, are far from conclusive.

27. On the “transparency paradox,” see Nissenbaum (2011). Though privacy notices are, in themselves, insufficient for shielding individuals from the effects of online manipulation, that does not mean that they are entirely without value. They might support individual autonomy, even if they can’t guarantee it: see Susser (2019a).

28. For example, Marcello Ienca and Effy Vayena write: “[N]ot just Cambridge Analytica, but most of the current online ecosystem, is an arm’s race to the unconscious mind: notifications, microtargeted ads, autoplay plugins, are all strategies designed to induce addictive behavior, hence to manipulate” (Ienca & Vayena, 2018).

29. For a helpful discussion of the calls for—and limits of—explainable artificial intelligence, see Selbst and Barocas (2018).

30. In a longer version of this paper, we also consider online manipulation in the context of the workplace. See Susser et al. (2018).

Making data colonialism liveable: how might data’s social order be regulated?


This paper is part of Transnational materialities, a special issue of Internet Policy Review guest-edited by José van Dijck and Bernhard Rieder.

Introduction

A new order is being constructed through the continuous extraction of data from our social lives. This new order, optimised for the creation of economic value, may well become the social order on which the next phase of capitalism depends for its viability. As part of that emerging order, calls for the regulation of data processing have intensified in the past two years, unsurprisingly perhaps given that capitalism has shown that it needs to be regulated if it is to be made liveable (Polanyi, 2001). But this push for regulation has been framed entirely in terms of taming certain rogue forms of contemporary capitalism. This article argues, however, that to frame data issues solely in terms of a “bad” form of capitalism misses the full scope, scale and nature of what is happening with data. Legal, social and civic responses to what is underway need to be grounded in a broader argument about what we will call “data colonialism”.

There is no doubt of course that what is happening with data today is inextricably linked to the development of capitalism. But is something even larger going on? We argue here that today’s quantification of the social—also known as datafication (Mayer-Schönberger and Cukier, 2013; Van Dijck, 2014)—represents the first step in a new form of colonialism. This emerging order has long-term consequences that may be as far-reaching as were the appropriations carried out by historic colonialism for the benefit of the capitalist economies and international legal order that subsequently developed.

Recognising what is happening with data as a colonial move means acknowledging the full scope of the resource appropriation under way today through datafication: it is human life itself that is being appropriated so that it can be annexed directly to capital as part of a reconstruction of the very spaces of social experience. In arguing this, we share some common ground with Shoshana Zuboff’s well-known argument on “surveillance capitalism”, but there are also crucial differences, which we briefly summarise in three points here (and further unpack later). 1

First, the transformation of what can be considered an input to capital actually goes well beyond what has been observed in the social media sector to include, for example, the rise of logistics, the new methods of control in the workplace, the emergence of platforms as new structures for profit extraction (for instance, in transportation and tourism), and most generally the reformulation of capitalism’s default business model around the extraction and management of data (Davenport, 2014). 2 What is going on with data, in other words, is much wider than a problem with a limited number of rogue surveillance capitalists who have gone astray, a problem that can be corrected by their reform. There is only one historic precedent for such a shift in the resources available for economic exploitation, and that is the emergence of colonialism in the late 15th and early 16th centuries. 3

Second, rethinking data processes on this longer 500-year time-scale allows us to see their implications for capitalism’s future in a broader way, too. Here we must recall that industrial capitalism itself was only made possible by the profits and socioeconomic reconfigurations that came with historic colonialism.

Third, a colonial framing highlights two central aspects of today’s transformations that would otherwise seem like mere collateral: the subjugation of human beings that is necessary to a resource appropriation on this scale (relations of subjection to external powers were central to historic colonialism), and the grounding of this entire transformation in a general rationality which imposes upon the world a very singular vision of Big Data’s superior claim on knowledge (just as colonisers justified their appropriation on the ground of the West’s superior rationality).

Our argument will consider the long-term historical relations between capitalism and colonialism in the first part of this article, and in the second part offer a discussion—informed by decolonial theory—of Carl Schmitt’s classic interpretation of historic colonialism’s relation to international law. We hope to give more substance to general calls to recognise the fight against “dataism” (Van Dijck, 2014) as “the most urgent political and economic project” of the 21st century (Harari, 2016, p. 459). This article, written from the intersection of social theory, decolonial theory, and critical data studies rather than policy studies, will hopefully be useful to those who wish to develop a more robust starting-point for critical work on data policy.

A decolonial reading of datafication

In this first section, we summarise our arguments for analysing contemporary practices of data extraction and data processing as replicating colonial modes of exploitation (see Couldry and Mejias, 2018; Couldry and Mejias, 2019). This will allow us to provide the starting-point for our policy-related discussion later on.

The public is often told that "data is the new oil" (Economist, 2017). A recent article in the Harvard Business Review goes further and argues not only that “data is the fuel of the new economy, and even more so of the economy to come,” but also that:

Algorithms trained by all these digital traces will be globally transformational. It’s possible that new world order will emerge from it, along with a new “GDP” – gross data product – that captures an emerging measure of wealth and power of nations (Chakravorti, Bhalla and Chaturvedi, 2019).

While the evocative idea of “new oil” might recall the benefits (for some) of historic colonialism, it obscures precisely the most important level at which data colonialism must be empirically studied. The most fundamental fact about data is that it is not like oil, but rather a social construct operating at a specific moment in history (Gitelman, 2013; Scholz, 2018), driven by much wider economic and social forces. The concept of data colonialism, therefore, highlights the reconfiguration of human life around the maximisation of data collection for profit. Without the resulting data flow, there would be no substance related to human life that could, even potentially, be called “oil”. The claim that data is like oil is thus an attempt to naturalise the outcome of data’s collection, and so make data extraction (and the categories it embeds in daily life) part of a social landscape whose contestability is hidden from view (Bowker and Star, 1999). Since regulating data depends, fundamentally, on opening up that contestability, it is essential to understand how the naturalisation of data collection occurs.

To do this, we draw on critical political economy and decolonial theory to trace continuities from colonialism’s historic appropriation of territories and natural resources to the datafication of everyday life today. While the modes, intensities, scales and contexts of dispossession have changed, the underlying drive of today’s data processes remains the same: to acquire “territory” and resources from which economic value can be extracted. To do so in no way diverts us from an analysis of capitalism. On the contrary, it places datafication squarely within the centuries-long relations between colonialism and capitalism, whose separation is now widely contested (Williams, 1994; Beckert and Rockman, 2016). Far from being disconnected from capitalism, the current phase of colonialism (data colonialism) is understood as preparing the way for a new, still undefined stage of capitalism, just as historic colonialism paved the way gradually for industrial capitalism. The medium for this long-term transformation is the set of interdependencies and rationalities through which social relations, conducted and organised via processes of data extraction, become a normal part of everyday life.

We therefore use the term “colonialism” not as a metaphor, 4 but to name an actual reality. In this non-metaphorical usage, however, our focus is on colonialism’s longer-term historical function: the dispossession of resources and the normalisation of that dispossession so as to generate a new fuel for capitalism’s global growth. Distinctive to data colonialism are the subjection of human beings to new types of relations configured around the extraction of data, and, even more broadly, the imposition on human life of a new vision of knowledge and rationality tailored to data extraction (the vision of Big Data). Each generates fundamental questions, in turn, about legal values such as freedom and autonomy, and challenges for existing systems of commercial regulation (we return to those challenges in the next section).

Underlying our argument are two forms of analysis: an analysis of the political economy of the data industry, or what we call the social quantification sector; and an analysis of the multimodal forms of exploitation that unfold through our participation in digital platforms and data-processing infrastructures, or what we call data relations. These two terms deserve more explanation.

The social quantification sector can be broken down into various sub-groups, starting with the manufacturers of digital devices and personal assistants: well-known media brands such as Amazon, Apple, Microsoft and Samsung, and less well-known makers of devices operating in the fast-expanding ‘Internet of Things’. Another group in the social quantification sector includes the builders of the computer-based environments and tools by means of which we connect: household names such as Alibaba, Baidu, Facebook, Google, Tencent and WeChat. Yet another group comprises the growing field of data brokers and data processing organisations such as Acxiom, Equifax, and (in China) TalkingData that collect, aggregate, process, repackage, sell and make decisions based on data of all sorts, while also supporting other organisations in their uses of data. In addition, the social quantification sector also includes the vast domain of organisations that increasingly depend for their basic functions on processing data from social life, whether to customise their services (like Netflix and Spotify), to link sellers and buyers (like Airbnb, Uber, and Didi), or to exploit data in areas of government or security, such as Palantir and Axon (formerly Taser). Finally, analytical consideration of the social impact of the social quantification sector needs to take into account the vast areas of economic life where internal data collection has become normalised as corporations’ basic mode of operation, for example in logistics (Cowen, 2014). Corporations such as IBM are key supporters of this wider infrastructure of business data collection (Davenport, 2014), even though they are not associated with either social media platforms or specialised data brokerage.

By data relations we do not mean relations between data, but the new types of human/institutional relations through which data becomes extractable and available for conversion into economic value. When fully established in daily life, data relations will become as naturalised as labour relations, coming to form a second pillar of the social order on which capitalism is based. 5 This transformation—we propose—goes much further even than the shaping of social relations around the extraction of “surveillance capital” that Zuboff describes. Under data colonialism, human life becomes, as it were, present to capital without obstruction, although this “presence” is based on many levels of technosocial mediation. Data relations give corporations a privileged “window” onto the world of social relations, and a privileged “handle” on the levers of social differentiation. More generally, human life itself, including its relations to technology, becomes a direct input to capital and potentially exploitable for profit. Data relations make the social world readable to and manageable by corporations in ways that allow not just the optimisation of profit, but also new models of social governance, what legal scholars Niva Elkin-Koren and Eldar Haber (2016) call “governance by proxy”.

In this context, digital spaces for social life and economic transactions called “platforms” (Gillespie, 2010; compare Bucher, 2017; Gerlitz and Helmond, 2013) have significance beyond their convenience for individuals and corporations. Platforms become software-constructed spaces that produce the social for capital. Social life is thereby transformed into an open resource for extraction that is somehow “just there” for exploitation. For sure, capitalism has always sought to commodify everything and control all inputs to its production process. But how “everything” is defined at specific historical moments varies. What is unique about this historical moment is that human life is becoming organised through data relations so that it can be a direct input to capital. This transformation depends on many things: shifts in daily habits and conventions, software architectures that shape human life through, as Lessig famously argued, “code” (Lessig, 2001), and explicit legal frameworks that legitimate, sanction and regulate such arrangements. In this article, we focus on the last of these, including the underlying legal rationalities that, as Julie Cohen (2018) argues, work to frame data as owner-less, redefining notions of privacy and property in order to establish a new moral order that justifies the appropriation of data.

To summarise the argument so far: humanity is currently undergoing a large-scale transformation of a social, economic and legal order, based on the massively expanded appropriation by capital of human life itself through the medium of data extraction. The long-term sustainability of this transformation depends, however, on the regulation or harmonising of various factors: the weight of habit and convenience in daily life; various social pressures on consumers, producers and workers towards datafication, which amount to something like a life force (Grewal, 2008); and, crucially, an emerging legal infrastructure. As a result, larger questions arise as to how to regulate this transformation and its emerging institutions. The answers depend on what approach we take to the question of what sort of transformation this is. We have argued, in condensed form, that this transformation can only be fully understood bifocally, that is, through the double lens of capitalism and colonialism. In the second part of the article, we extend this discussion into a brief review of current approaches to regulating personal data processing, and their limitations.

Thinking beyond existing legal approaches to datafication

The building of a new social and economic order based on the extraction of value from human life through data relations is not something that individuals can resist, or even manage, by themselves. It matters little whether I delete an app from my phone or withdraw from a platform. Nor, incidentally, can we expect much from the possibility that some players in data markets might act more ethically than others. Society-wide responses are needed to such society-wide transformations. If—to return once again to Polanyi (2001)—large-scale economic change requires a double regulatory movement (first, the transformation of social relations so as to fit the new economic organisation, and then the emergence of a social counter-movement to make the transformation actually liveable), then the project of socially managing datafication is likely to be long and complex, and legal reform must play some part in that.

We have little interest here in proposed legal reforms that make partial adjustments to how social media platforms manage aspects of their operations (for example, the algorithms that organise personal news feeds). Our concern instead is with the prospects for large-scale regulation of the extraction of economic value from personal data, and what might currently be blocking this regulation (by “personal data” we mean not just data which explicitly relates to an individual person, but any data whose collection and processing can generate decisions relating to that person).

There is no doubt that important legal reforms concerning data practices have been advanced recently. Five years ago, North American market rhetoric went largely uncontested, arguing that the wholesale collection and processing of data, whether about a person (personal data, in a narrow sense) or otherwise, was essential to the development of the global economy. It is easy to find examples of such discourse, for example, from the World Economic Forum or from business consultants (Letouzé, 2012; World Economic Forum, 2011; McKinsey, 2011). But the balance has been disturbed by one particular legislative intervention, the European General Data Protection Regulation (GDPR), which came into effect in May 2018.

The GDPR’s very first sentence announces a normative challenge to market rhetoric about data: “the protection of natural persons in relation to the processing of personal data is a fundamental right” (GDPR, recital 1). Thus, one of the GDPR’s basic ideas is that whether or not she is likely to consent to it, the “data subject” must be informed “of the existence of a [data] processing operation” which affects her, and “its purposes.” Indeed, she should be informed of the “consequences of any data profiling” (Recital 60). This challenged the until-then dominant idea that personal data processing is just what corporations and markets do, and has been going on for so long and on such a scale that it cannot be challenged (an argument Helen Nissenbaum (2017) calls Big Data exceptionalism). Without going into the GDPR in detail, its importance as a symbolic challenge to the ideology of ‘dataism’ (Van Dijck, 2014) cannot be denied. The GDPR is being used as a model for legislative proposals in a number of countries across the world, including Brazil and the UK, and compliance with the GDPR has become a major feature of recent business practice.

While it is still unclear how effective the GDPR’s challenge to data practices from the perspective of human rights such as privacy will be, there is no doubt of the influence its publication has had on the climate of a global debate around data issues. Consider two UN reports from 2014 and 2018, both called “The Right to Privacy in the Digital Age” (UN High Commissioner for Human Rights, 2014, 2018). The 2014 report is almost entirely concerned with state surveillance; when it mentions corporations (paragraphs 42-46), it focuses on whether they should accede to state requests for access to their data. The question of whether corporations themselves should be more responsive to human rights concerns regarding how they collect data—arguably the key issues revealed, if not debated, in the 2013 Snowden revelations—is not even mentioned. By 2018 however, the emphasis had shifted to include a discussion of the growth in corporations’ data collection practices and their “analytic power” (paragraphs 15 and 16). The later report mentions “a growing global consensus on minimum standards that should govern the processing of personal data by state, business enterprises and other private actors” (paragraph 28), and insists that the resulting human rights protection “should also apply to information derived, inferred, and predicted by automated means, to the extent that the information qualifies as personal data” (paragraph 30). In effect, the 2018 UN report encourages states to adopt something like the GDPR. Yet there are still important gaps in its recommendations: at no point does the report challenge corporate data collection as such, or recognise how the continuous collection of data from and about persons might in itself undermine values such as freedom and autonomy, even though the report references the fundamental European law principle that the “individual should have an area of autonomous development, interaction and liberty” (para 5), a point to which we shall return.

These legal principles, if pursued, might have the potential to disrupt datafication. But so far it is not legislation but the work of critical legal scholars which has articulated these principles more fully. Scholars of privacy law have often noted that traditional notions of privacy are inadequate to deal with the vast amount of data which flows without being specifically attached to a particular named person, yet, which in combination with even small amounts of other information related to that person can lead to their identification. The result is, as Solon Barocas and Helen Nissenbaum put it in the language of American football, “Big Data’s end run around anonymity and consent” (Barocas and Nissenbaum, 2014). In other words, the scale of data processing that generates decisions affecting the algorithmically produced entities or “data doubles” (Haggerty and Ericson, 2000) to which actual individuals are tethered makes old style privacy regulation by individual consent almost impossible to practice. And yet “consent” is the basic principle on which the GDPR relies.
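
To make the mechanics of this “end run” concrete, the brief sketch below is a purely illustrative example (the data, column names, and inferred traits are our own invention, written in Python with the pandas library) of how records that carry no names can be re-attached to identified individuals simply by joining two data sets on shared quasi-identifiers such as postcode, birth date and gender.

```python
# Hypothetical illustration: re-identification by joining on quasi-identifiers.
import pandas as pd

# "Anonymised" behavioural data: no names, only quasi-identifiers plus inferred traits.
behaviour = pd.DataFrame({
    "zip": ["13501", "90210"],
    "birth_date": ["1984-03-02", "1990-07-15"],
    "gender": ["F", "M"],
    "inferred_trait": ["politically undecided", "impulsive buyer"],
})

# A separate, identified data set (for example, a public voter file) with the same fields.
voters = pd.DataFrame({
    "name": ["A. Example", "B. Example"],
    "zip": ["13501", "90210"],
    "birth_date": ["1984-03-02", "1990-07-15"],
    "gender": ["F", "M"],
})

# A simple join on the shared columns re-attaches names to the "anonymous" records.
reidentified = behaviour.merge(voters, on=["zip", "birth_date", "gender"])
print(reidentified[["name", "inferred_trait"]])
```

Neither table contains “personal data” in the narrow sense, yet the join yields exactly the kind of decision-ready, person-linked profile that consent-based regulation struggles to anticipate.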

In response to this problem, Julie Cohen (2013, pp. 1931–1932) has proposed an important meta-principle for regulating data practices, that of “semantic discontinuity”. This is designed to limit the possibility of separate data sets being combined so as to generate inferences of a sort that data subjects did not consent to being made. Recently, Frischmann and Selinger (2018, pp. 275–276) have endorsed this proposal, which radicalises the older principle of “contextual integrity” (Nissenbaum, 2010). But we do not know yet if this proposal has any chance of being translated into law in some form. It runs directly contrary to the purpose of corporate data collection, which is precisely to combine data streams without limit, so as to maximise the algorithmic inferences that can be generated from them. How can semantic discontinuity be made effective as a legal principle when it contradicts the stated purposes of countless corporations who seek access to personal data? Would the injunction of the 2018 UN report that “personal data processing should be necessary and proportionate to a legitimate purpose that should be specified by the processing entity” be sufficient to ground the principle of semantic discontinuity? Presumably not, if a business had a legitimate purpose which depended on semantic continuity, and that purpose was in broad terms disclosed to, and consented to by, a data subject. The same question could be asked of non-commercial organisations which might be protected prima facie by the “public interest exception” written into the GDPR (Article 21 (6)). On what ground could a “higher” principle of semantic discontinuity override that exception?
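
To illustrate what such a meta-principle could amount to in practice, the sketch below is a hypothetical rendering of semantic discontinuity as a technical rule; the names, types and purposes are our own invention, not drawn from Cohen’s proposal or from any existing law or library. Each data set carries the purpose disclosed at collection, and an attempt to combine data sets for a different purpose is refused.

```python
# Hypothetical sketch: semantic discontinuity as a purpose-limited combination check.
from dataclasses import dataclass

@dataclass(frozen=True)
class TaggedDataset:
    name: str
    purpose: str          # the purpose disclosed to, and consented to by, the data subject
    records: list

def combine(datasets: list, requested_purpose: str) -> list:
    """Return merged records only if every data set was collected for the requested purpose."""
    for ds in datasets:
        if ds.purpose != requested_purpose:
            raise PermissionError(
                f"Refusing to combine '{ds.name}': collected for '{ds.purpose}', "
                f"not for '{requested_purpose}' (semantic discontinuity)."
            )
    return [record for ds in datasets for record in ds.records]

health = TaggedDataset("step_counts", purpose="fitness feedback", records=[{"steps": 9000}])
ads = TaggedDataset("ad_clicks", purpose="ad measurement", records=[{"clicks": 4}])

combine([health], "fitness feedback")      # permitted: purposes match
# combine([health, ads], "ad targeting")   # would raise PermissionError
```

The sketch also exposes the difficulty the article identifies: a business whose disclosed “legitimate purpose” is itself cross-purpose combination would pass such a check, which is why semantic discontinuity cannot simply be bolted onto a consent-based regime.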

What becomes clear here is that a far-reaching challenge to the expanding rationalities of continuous data collection and value extraction runs against the basic organisation of power in contemporary economies and societies, issues which have not yet been broached by even the most enlightened legislation. This potential conflict between critical legal thinking and capitalism’s investment in datafication was anticipated in a remarkable article two decades ago by Paul Schwartz (1999). Schwartz foresaw that the emerging data collection practices made possible by the internet’s new infrastructure of connection would generate “a new structure of power over individuals” with “significant implications for democracy” (1999, p. 815). Schwartz also predicted that individualist liberal notions of autonomy would prove inadequate to counter this development, because they ignore the “constitutive value” (1999, p. 816) that protecting individuals from regular privacy violations and their consequences has for democratic culture itself. Schwartz’s implicitly relational (and post-liberal: Cohen, 2013) understanding of autonomy/freedom connects with more recent accounts of the social costs of datafication and algorithmic decision-making (Eubanks, 2018; Noble, 2018). But the way forward for building effective opposition to the changes under way requires us to move beyond the domain of contemporary legal theory and introduce a decolonial perspective on what is going on with datafication. We turn to this in the next section.

Schmitt and colonialism’s relation to law

At this juncture our argument finds support in a surprising source, someone who was certainly not an opponent of historic colonialism: the controversial German legal and political theorist Carl Schmitt. Schmitt (2006 [o.p. 1950]) offered the most clear-sighted account of the relation between law and the appropriation of territory and natural resources within historic colonialism, an account which, we suggest, helps us grasp the regulatory implications of today’s data colonialism. 6 In discussing Schmitt as an exemplary case, we will admittedly be abstracting from the centuries-long debates about the possible legal justifications for the domination by some humans of others. Choosing Schmitt, however, is justified because of the clarity with which he makes explicit the underlying links between law, force and rationality within historic colonialism.

Schmitt analysed law’s relation to historic colonialism, and therefore to the industrial capitalism which colonialism made possible (Schmitt, 2006, p. 4), at a nostalgic moment. Looking back at colonialism, he found it to be an essential underpinning of a eurocentric international legal order which he believed had been shattered by Germany’s defeat in World War II. This context does not, however, diminish the importance of Schmitt’s remarkably direct portrayal of colonialism and its relation to law.

For Schmitt, controversially, the very idea of law (nomos) is based on the seizure of land (2006, p. 42). He interprets the international law of property and nations that dominated the world from the 17th to mid-20th centuries as emerging from the demise of an earlier order, the “medieval spatial order of the respublica Christiana” whose legitimacy was fading by the 16th century. According to Schmitt, what enabled a new international legal order to be built was the discovery of “previously unknown (i.e., by Christian sovereigns) oceans, islands, and territories” (2006, p. 131).

Two things are remarkable about the analysis Schmitt develops. First, he makes no pretence that colonial conquests were legal in a conventional sense; rather he distinguishes two types of land-appropriation, those which proceed in accordance with international law, and those (of which historic colonialism was an example) “which uproot an existing spatial order and establish a new nomos” of property entitlement (2006, p. 82). In this initially law-less, but ultimately lawful move of historic colonialism, “law and order are one . . . they cannot be separated” (2006, p. 81). Order, that is, makes law. Second, Schmitt regards the extra-legal seizure of territory by colonial powers as justified by a higher principle of rationality, or rather a legitimate hierarchy in relation to rationality itself. As he writes (2006, p. 131), “the means of the legal title ‘discovery’ lay in an appeal to the historically higher point of the discoverer vis-à-vis the discovered.” For Schmitt, the conqueror’s “scientific cartographic survey was a true title to a terra incognita,” because it embodied a superior rationality, generating a “completely different type of legal title . . . ‘effective occupation’” (2006, p. 133).

For Schmitt, the history of colonial appropriation represented the legitimate fusion of effective force (order) into law, justified by a claim to higher knowledge or rationality. Here is Schmitt’s fullest statement of the relations between law, force and a certain “modern” reading of rationality: “European discovery of a new world in the 15th and 16th centuries thus did not occur by chance . . . it was an achievement of newly awakened Occidental rationalism . . . The Indians lacked the scientific power of Christian-European rationality. The intellectual advantage was entirely on the European side, so much so that the New World could simply be ‘taken’” (2006: 132). This unapologetic argument for colonialism’s rationality offers some interesting parallels with the contemporary justification and rationalisation of Big Data practices, parallels that we can only notice within the bifocal approach to capitalism and colonialism that we are proposing. Within this perspective, we also see more clearly the significance of the failure so far of even the boldest legislation on datafication to challenge its basic practice: the banal, almost universal collection of personal and non-personal data, and, through this, the creation from the flow of human existence of an informational terrain from which extraction for economic value is possible, indeed increasingly seamless. What are the parallels between the legal status of contemporary datafication (understood as a new type of colonial enterprise) and Schmitt’s reading of the legal status of historic colonialism?

First, datafication involves a de facto appropriation of resources, a domain of connectible information that, through processing, can be attached reliably to entities that are proxies for actual individuals (“data doubles”) and thus provide a basis for judgements that effectively discriminate between real individuals. That appropriation depends on the prior collection of data, that is, on the multi-dimensional monitoring of as much of these individuals’ online activity as possible, regardless of the device they are using. Granted, there is a legal debate and potential conflict at present (for example via the GDPR) around the legality of some of the consequences of this appropriation, just as there was early on in relation to the Spanish conquests of the “New” World. But, as we saw, these legal debates tend never to challenge the fundamental fact of continuous monitoring itself, even if it is in tension with established values such as autonomy (for example the “right to full development of the personality” under German constitutional law: Hornung and Schnabel, 2009).

Second, although it is as yet only in the early stages of development, a justificatory ideology of data appropriation is emerging that parallels Schmitt’s version of colonial ideology: the vision that only through the superior calculating power of Big Data and machine learning can a higher state of human knowledge be achieved, thereby justifying corporate access to data that can be extracted from the flow of individuals’ daily lives. The core issue here is the imposition, on the whole domain of human life, of a very specific version of rationality which requires all life to be tracked continuously in the interests, simultaneously, of capital and of a certain version of human knowledge (the vision of Big Data or dataism).

It follows, thirdly—and here we move from parallels to implications—that the more fundamental challenge to processes of datafication to which critical legal scholars such as Cohen and Frischmann are committed requires a challenge to the underlying legitimacy of acquiring data through data relations, which is today a feature of most platforms, apps, and mechanisms for knowledge production and daily organisation (think of the Internet of Things). Cohen’s principle of “semantic discontinuity” is important, but only goes so far as challenging the transferability of data, when it is the very act of collecting data that must above all be challenged.

There are indeed good reasons (which Cohen in her work has noted) for arguing that the continuous collection of data from and about individuals conflicts with the principle of autonomy on which democracies, fundamentally, rely. Continuous surveillance or monitoring by the state is, after all, generally regarded as “chilling” of individual agency (Cohen, 2013, pp. 1911–1912). The same is true of surveillance when it is conducted by private corporations, particularly if those corporations often have both the capacity and the need to yield up data to the state. What so far has been difficult to assert is the primacy of these concerns against the opposing rationality of the social quantification sector, which relies on its “effective occupation” of human life (to use Schmitt’s chilling phrase) as the starting-point for defending its practices of data collection against interference by the state. What is needed is to reject precisely this act of “effective occupation”. What cuts through all the rationalities which mask the dynamics of datafication is precisely the realisation that the social quantification sector’s “right” to hold what they gather is no more legally justifiable than (and just as legally contentious as) the effective occupation of overseas territory by colonial states once was.

If so, the existence (or not) of “consent” to continuous monitoring is beside the point. What matters are the implications of this occupation for what we call the space of the self, that is, the basic idea of selfhood on which most notions of democracy and even legal authority rely. 7 We are drawing here on a relational notion of freedom which assumes that “individual” freedom can only emerge through a web of social relations (Elias, 1978), but also more specifically on the idea that, underlying all notions of freedom and autonomy (some of which no doubt are today unsatisfactory) and underlying also all culturally relative formulations of personal privacy, is a basic notion of the “space of the self”: that is, “the socially grounded integrity without which we cannot recognize ourselves or others as selves at all” (Couldry and Mejias, 2019, p. 155). This is the space that Hegel captured in his relational definition of freedom as “the freedom to be with oneself in the other” and that Dussel terms the “natural substantivity of the person.” 8

Our approach to reframing legal challenges to datafication is, we acknowledge, expansive. It cuts across the detailed debates of policy and law in particular contexts. But it usefully sidesteps the confusion caused by the anomalous notion of “personal data”. As many critics of traditional notions of privacy have noted, much of the data that makes a difference to how we are treated by corporations is not personal data, because it is not exactly “about” us. Rather, it is relational data, in which patterns emerge across myriad comparisons within much larger data sets, patterns that predict particular outcomes for a data double to which as a real individual each of us is tethered. The protection of “personal data” in a more straightforward sense—data about individuals and data files such as photos that an individual claims to own—is therefore likely only to protect people from part of the harms that can be done to them through data. Our approach challenges the very validity of continuous data collection, regardless of what entities happen to be affected by any one particular decision or practice. It challenges, in other words, the multiple practices which construct the new “territory” of human life from which something like “personal data” emerges as potentially extractable, a territory which is steadily supplanting the space of social interaction and social governance that was taken for granted before datafication through a process that started centuries before the advent of digital data. In other words, it makes this challenge in response to processes of human subjection that only a colonial perspective can fully recognise.

There is one last and crucial respect in which legal and civic challenges to datafication require the frame of colonialism. This regards the underlying rationality of Big Data itself which works as a reference-point for and legitimation of data collection in all its breadth and depth. Underlying all the specific and important issues under discussion about algorithmic injustice lies a deeper injustice that, following decolonial thinker Boaventura de Sousa Santos (2014), we can call “cognitive injustice”. Put simply, this is the assumption that there is only one path to human knowledge and that it lies through the progressive extraction, collection, processing and evaluation of data from the flow of human life, and indeed life more generally. 9 The characteristics of this rationality have been expressed not by an analyst of capitalism or even modernity, but by a decolonial thinker, the Peruvian sociologist Aníbal Quijano, reflecting on the relations between capitalism, modernity and the longer process of not just historic colonialism but coloniality:

Outside the ‘West’, virtually in all known cultures… all systematic production of knowledge is associated with a perspective of totality. But in those cultures, the perspective of totality in knowledge includes the acknowledgement of the heterogeneity of all reality; of the irreducible, contradictory character of the latter; of the legitimacy, i.e., the desirability of the diverse character of the components of all reality — and therefore, of the social. The [better, alternative] idea of social totality, then, not only does not deny, but depends on the historical diversity and heterogeneity of society, of every society. In other words, it not only does not deny, but it requires the idea of an ‘other’ — diverse, different. (Quijano, 2007, p. 177, added emphasis).

Through the quantification of the social, we risk installing a new version of this exclusive notion of rationality, via what Jose van Dijck (2014) has called “dataism”. Only legal proposals which challenge rationales of data collection in this more fundamental way can hope, effectively, to challenge the direction of data colonialism.

Our approach therefore stands firmly against other recent proposals for individuals to own “their” data, be free to manage access to it, and perhaps even to be paid in return for such access (Lanier, 2013; Arrieta-Ibarra et al, 2018; for a recent popular argument in The Economist, see will.i.am (2019)). Such proposals risk legitimating precisely the underlying practices of data collection, and ignoring completely the rationality of appropriation which underlies data colonialism.

Conclusion

Our goal in this article has been to develop the starting-points of a more radical and potentially more comprehensive approach to framing critical legal and policy responses to ongoing processes of datafication. We began by reframing what is currently going on with data not just within the continuing expansion of capitalism, but as a new and epochal renewal of colonialism itself, which, in time, may pave the way for a stage of capitalism whose full outline we cannot yet predict.

By placing datafication within the longer history of colonial appropriations of territory and natural resources on a global scale, we seek to address more effectively the fundamental unease across wide sectors of the population at today’s practices of expanding surveillance via marketing, artificial intelligence and the Internet of Things. Existing legal approaches, and even critical legal theory, fall short of providing an adequate starting-point for wider critique. So too do accounts of capitalism which frame what is going on with data principally in terms of recent developments (surveillance capitalism, platform capitalism, and the like), rather than the longer term relations between colonialism and capitalism.

By contrast, legal approaches which take seriously Carl Schmitt’s reading of the role of historic colonialism in making law through effective force (that is, what becomes an order) offer a warning of the underlying direction of change. Unless we grasp this, policy debate regarding the challenges of datafication is always likely to fall short of the mark.

A postscript: One day after being fined for privacy violations, Google announced that “data is more like sunlight than oil” (Ghosh and Kanter, 2019). In other words, instead of a resource that is being appropriated from someone’s territory, Google would like us to believe that data is a replenishable, inexhaustible, owner-less resource that can be harvested sustainably for the benefit of humanity. This illusion, once again, conveniently bypasses the questions about privacy and protecting the individual that any attempt at “regulation” would normally want to raise. Instead, this “regulation” attempts to establish data colonialism as the status quo. It is time for a more radical grounding of established regulatory discourse that enables it to challenge datafication’s social order. This must involve more than regulatory adjustments to certain aspects of contemporary capitalism. What is required is a fundamental challenge to the direction and rationale of capitalism as a whole in the emerging era of data colonialism.

References

Arrieta-Ibarra, I., Goff, L., Hernandez, D., Lanier, J., & Weyl, G. (2018). Should We Treat Data as Labor? Moving Beyond “Free”. AEA Papers and Proceedings, 108, 38–42. doi:10.1257/pandp.20181003

Barocas, S., & Nissenbaum, H. (2014). Big Data’s End Run Around Anonymity and Consent. In J. Lane, V. Stodden, S. Bender, & H. Nissenbaum (Eds.), Privacy, Big Data, and the Public Good (pp. 44–75). New York: Cambridge University Press.

Beckert, S., & Rockman, S. (Eds.). (2016). Slavery’s Capitalism. Philadelphia: University of Pennsylvania Press.

Bowker, G., & Star, S. L. (1999). Sorting Things Out. Cambridge, MA: The MIT Press.

Bratton, B. (2016). The Stack: On Software and Sovereignty. Cambridge, MA: The MIT Press.

Bucher, T. (2017). The Algorithmic Imaginary: Exploring the Ordinary Affects of Facebook Algorithms. Information, Communication & Society, 20(1), 30–44. doi:10.1080/1369118X.2016.1154086

Chakravorti, B., Bhalla, A., & Chaturvedi, R. S. (2019, January 24). Which Countries are Leading the Data Economy? Harvard Business Review. Retrieved from https://hbr.org/2019/01/which-countries-are-leading-the-data-economy

Cohen, J. (2013). What Privacy Is for. Harvard Law Review, 126(7), 1904–1933. Retrieved from https://harvardlawreview.org/2013/05/what-privacy-is-for/

Cohen, J. (2018). The Biopolitical Public Domain: The Legal Construction of the Surveillance Economy. Philosophy & Technology, 31(2), 213–233. doi:10.1007/s13347-017-0258-2

Cohen, J. (2019). Between Truth and Power. Oxford: Oxford University Press.

Couldry, N., & Mejias, U. A. (2018). Data Colonialism: Rethinking Big Data’s Relation to the Contemporary Subject. Television & New Media, 20(4), 336–349. doi:10.1177/1527476418796632

Couldry, N., & Mejias, U. A. (2019). The Costs of Connection: How Data is Colonizing Human Life and Appropriating it for Capitalism. Redwood City, CA: Stanford University Press.

Cowen, D. (2014). The Deadly Life of Logistics. Minneapolis: University of Minnesota Press.

Davenport, T. (2014). Big Data @ Work. Cambridge, MA: Harvard Business Review Press.

Dussel, E. (1985). Philosophy of Liberation. Oregon: Wipf and Stock.

The Economist. (2017, May 6). The World’s Most Valuable Resource Is No Longer Oil, but Data. Retrieved from https://www.economist.com/leaders/2017/05/06/the-worlds-most-valuable-resource-is-no-longer-oil-but-data

Elias, N. (1978). What is Sociology? London: Hutchinson.

Elkin-Koren, N., & Haber, E. (2016). Governance by Proxy: Cyber Challenges to Civil Liberties. Brooklyn Law Review, 82(1), 105–162. Retrieved from https://brooklynworks.brooklaw.edu/blr/vol82/iss1/3/

Eubanks, V. (2018). Automating Inequality. New York: St. Martin’s Press.

Frischmann, B., & Selinger, E. (2018). Reengineering Humanity. Cambridge: Cambridge University Press.

Gerlitz, C., & Helmond, A. (2013). The Like Economy: Social Buttons and the Data-intensive Web. New Media & Society, 15(8), 1348–1365. doi:10.1177/1461444812472322

Ghosh, S., & Kanter, J. (2019, January 22). Google says data is more like sunlight than oil, one day after being fined $57 million over its privacy and consent practices. Business Insider. Retrieved from https://www.businessinsider.com/google-data-is-more-like-sunlight-than-oil-france-gdpr-fine-57-million-2019-1

Gillespie, T. (2010). The Politics of ‘Platforms’. New Media & Society, 12(3), 347–364. doi:10.1177/1461444809342738

Gitelman, L. (Ed.). (2013). “Raw Data” is an Oxymoron. Cambridge, MA: The MIT Press.

Grewal, D. (2008). Network Power. New Haven, CT: Yale University Press.

Haggerty, K., & Ericson, R. (2000). The Surveillant Assemblage. British Journal of Sociology, 51(4), 605–622. doi:10.1080/00071310020015280

Hegel, G. W. F. (1991). Elements of the Philosophy of Right (A. W. Wood, Ed.; H. B. Nisbet, Trans.). Cambridge: Cambridge University Press.

Hildebrandt, M. (2015). Smart Technologies and the End(s) of Law. Cheltenham: Edward Elgar Publishing.

Hornung, G., & Schnabel, C. (2009). Data Protection in Germany I: The Population Census Decision and The Right to Informational Self-determination. Computer Law & Security Review, 25(1), 84–88. doi:10.1016/j.clsr.2008.11.002

Lanier, J. (2013). Who Owns the Future? London: Allen Lane.

Lessig, L. (2000). Code and Other Laws of Cyberspace. New York: Basic Books.

Letouzé, E. (2012). Big Data for Development: Challenges & Opportunities [Report]. New York: UN Global Pulse. Retrieved from http://www.unglobalpulse.org/sites/default/files/BigDataforDevelopment-UNGlobalPulseJune2012.pdf

Mayer-Schönberger, V., & Cukier, K. (2013). Big Data. London: John Murray.

McKinsey (2011). Big Data: The next frontier for innovation, competition, and productivity [Report]. McKinsey Global Institute.

Nissenbaum, H. (2010). Privacy in Context. Stanford, CA: Stanford University Press.

Nissenbaum, H. (2017). Deregulating Collection: Must Privacy Give Way to Use Regulation? doi:10.2139/ssrn.3092282

Noble, S. U. (2018). Algorithms of Oppression: How Search Engines Reinforce Racism. New York: NYU Press.

Pippin, R. (2008). Hegel’s Practical Philosophy. Cambridge: Cambridge University Press.

Polanyi, K. (2001). The Great Transformation. Boston: Beacon Press.

Postone, M. (1998). Rethinking Marx (in a Post-Marxist World). In C. Camic (Ed.), Reclaiming the Sociological Classics (pp. 45–80). Oxford: Wiley-Blackwell.

Quijano, A. (2007). Coloniality and Modernity/Rationality. Cultural Studies, 21(2–3), 168–178. doi:10.1080/09502380601164353

Santos, B. de S. (2016). Epistemologies of the South: Justice Against Epistemicide. London: Routledge. doi:10.4324/9781315634876

Schmitt, C. (2006). The Nomos of the Earth. Candor, NY: Telos Press.

Scholz, L. (2018). Big Data is not Big Oil: The Role of Analogy in the Law of New Technologies [Research paper No. 895]. Tallahassee, FL: FSU College of Law. Retrieved from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3252543

Schwartz, P. (1999). Internet Privacy and the State. Connecticut Law Review, 32, 815–859. Retrieved from https://scholarship.law.berkeley.edu/facpubs/766/

Sen, A. (2002). Rationality and Freedom. Cambridge, MA: Harvard University Press.

Shepherd, T. (2015). Mapped, Measured and Mined: The Social Graph and Colonial Visuality. Social Media + Society, 1(1). doi:10.1177/2056305115578671

Thatcher, J., O’Sullivan, D., & Mahmoudi, D. (2017). Data Colonialism Through Accumulation by Dispossession: New Metaphors for Daily Data. Environment and Planning D: Society and Space, 34(6), 990–1006. doi:10.1177/0263775816633195

UN High Commissioner for Human Rights. (2014). The Right to Privacy in the Digital Age. Retrieved from http://www.justsecurity.org/wp-content/uploads/2014/07/HRC-Right-to-Privacy-Report.pdf

UN High Commissioner for Human Rights. (2018). The Right to Privacy in the Digital Age. Retrieved from https://documents-dds-ny.un.org/doc/UNDOC/GEN/G18/239/58/PDF/G1823958.pdf

Van Dijck, J. (2014). Datafication, Dataism and Dataveillance: Big Data Between Scientific Paradigm and Ideology. Surveillance & Society, 12(2), 197-208. doi:10.24908/ss.v12i2.4776

will.i.am. (2019, January 21). We Need to Own Our Data as a Human Right – and Be Compensated for It. The Economist. Retrieved from https://www.economist.com/open-future/2019/01/21/we-need-to-own-our-data-as-a-human-right-and-be-compensated-for-it

Williams, E. (1994). Capitalism and Slavery. Chapel Hill: University of North Carolina Press.

World Economic Forum. (2011). Personal Data: The Emergence of a New Asset Class. Retrieved from http://www3.weforum.org/docs/WEF_ITTC_PersonalDataNewAsset_Report_2011.pdf.

Zuboff, S. (2015). Big Other: Surveillance Capitalism and the Prospects of an Information Civilization. Journal of Information Technology, 30(1), 75–89. doi:10.1057/jit.2015.5

Zuboff, S. (2019). The Age of Surveillance Capitalism. London: Profile Books.

Footnotes

1. The full context of our argument is provided in Couldry and Mejias (2019). We have developed it since 2016, and first presented it publicly at the Big Data in the Global South network at IAMCR, Cartagena, Colombia, in July 2017 (https://data-activism.net/2017/07/datactive-presents-big-data-from-the-south-in-cartagena-july-15/). For a summary version of our book’s argument, see Couldry and Mejias (2018).

2. We therefore question the boundary between “capitalism” and “surveillance capitalism” (sometimes called “raw surveillance capitalism”) on which Zuboff relies, when she writes: “When a firm collects behavioral data with permission solely as a means to product or service improvement, it is committing capitalism but not surveillance capitalism” (2019, p. 22). But this assumes a world where “permission” is clearly delineated, and where the purposes of data use and the scope of data collection are neatly bounded too: the purpose of data colonialism is to blur those boundaries in the service of a broader appropriation of human life itself.

3. Interestingly Zuboff notes the colonial precedent at certain points (e.g., Chapter 6), but without either theorising data processes as a new type of colonialism, or explaining the implications of the colonial precedent for her framing of what’s going on with data exclusively in terms of capitalism.

4. Among recent valuable discussions of the colonial in relation to data, Thatcher et al. (2017) treat ‘data colonialism’ explicitly as a metaphor, while Cohen (2018) and Shepherd (2015) emphasise neo-colonial continuities in data practices. None proposes, as we do, that data practices literally constitute a new phase of colonialism.

5. This unorthodox extension of Marx’s critical theory of capitalism is inspired by Moishe Postone’s reading of Marx and the importance of abstraction, rather than labour as such, as the fundamental driver of creating a capitalist social order (Postone, 1998). There is no space to discuss this in detail here, but see Couldry and Mejias, 2018; Couldry and Mejias, 2019, chapter 1.

6. For an earlier discussion of Schmitt’s account of law and colonialism in relation to the internet, see Bratton (2016, pp. 19–40).

7. Our larger argument here draws on the philosophy of G. W. F. Hegel and Enrique Dussel: for more detail, see Couldry and Mejias (2019, chapter 5). On the question of legal authority, see Hildebrandt (2015).

8. See Hegel’s Encyclopedia quoted by Pippin (2008, p. 186); Dussel (1985, p.158).

9. On data extraction from physical nature, see Gabrys (2016).

The recursivity of internet governance research

Introduction

Technological visionary Stewart Brand once remarked that “[o]nce a new technology rolls over you, if you’re not part of the steamroller, you’re part of the road” (1987, p.9). About forty years after the somewhat muddled invention of the internet and right after the 25th birthday of the web, it seems that these technologies have quite thoroughly rolled over contemporary societies. But instead of simply shaping our societies from the outside, the internet’s “message” – to speak with McLuhan – has become increasingly difficult to read. While the mythos of cyberspace as a new frontier has long faded, common terms like “internet culture” or even “online shopping” signal that there is some kind of elsewhere in the clouds behind our screens. But the stories about election tampering, privacy breaches, hate speech, or algorithmic bias that dominate the headlines are just one reminder that issues still commonly prefixed with “digital”, “internet”, “online”, or similar terms have fully arrived at the centre of collective life. Elsewhere is everywhere. Trends like datafication or platformisation have seeped deeply into the fabric of societies and when scholars discuss questions of internet governance or platform governance, they know all too well that their findings and arguments pertain to social and cultural organisation in ways that go far beyond the regulation of yet another business sector.

It therefore comes as no surprise not only that the subject areas covered by conferences like the one organised by the Association of Internet Researchers every year since 2000 are proliferating, but also that the stakes have grown in proportion. As technologies push deeper into public and private spheres, they encounter not only appropriations and resistances, but complex forms of negotiation that evolve as effects become more clearly visible. Steamroller and road, to stick with Brand’s metaphor, blend into a myriad of relations operating at different scales: locally, nationally, supra-nationally, and globally.

The centrality of the internet in general and online platforms in particular means that the number and variety of actors seeking to gain economic or political advantages continues to grow, pulling matters of governance to the forefront. While the papers assembled in this special issue do not fall into the scope of “classic” internet governance research focused on governing bodies such as ICANN or the W3C and the ways they make and implement decisions, they indeed highlight the many instances of shaping and steering that follow from the penetration of digital technologies into the social fabric. The term “governance” raises two sets of questions: how societies are governed by technologies and how these technologies should be governed in return (cf. Gillespie, 2018). These questions are complicated by the fact that technologies and services are deeply caught up in local circumstances: massive platforms like Facebook or YouTube host billions of users and are home to a vast diversity of topics and practices; data collection and decision-making involving computational mechanisms have become common practices in many different processes in business and government—processes that raise different questions and potentially require different kinds of policy response. Global infrastructures reconfigure local practices, but these local practices complicate global solutions to the ensuing problems.

This knotty constellation poses significant challenges to both the descriptive side of governance research concerned with analysis of the status quo and the prescriptive side that involves thinking about policy and, in extremis, regulation. The papers assembled here do not neatly fit into this distinction, however. Instead, they highlight the complicated interdependence between is and ought, to speak with Hume, and indicate a need for recursive dialogue between different perspectives that goes beyond individual contributions. In this sense, this special issue maps the larger field of debate emerging around governance research in terms of perspectives or entry points rather than disciplines or clearly demarcated problem areas. Three clusters emerge:

First, a normative perspective that testifies to and responds to the destabilisation of normativity that characterises contemporary societies, which are challenged on several levels at the same time. This involves an examination of the possibilities and underpinnings of critique: how can we evaluate our current governance and political perspectives in normative terms and thereby lay the ground for thinking about adaptations or alternative arrangements?

Second, a conceptual perspective concerned with the intellectual apparatus we use to address and to render our current situation intelligible. The authors in this group indeed argue that conceptual reconfigurations are necessary to capture many of the emerging fault lines, such as the need for transnational policy-making and the complex relationships between groups of stakeholders.

Third, an empirical perspective asks how these more abstract concerns can be connected with understanding and evidence of actual practices and effects, and how these affect the lived realities of individuals and social groups. The diversity of situations indeed challenges and complicates theoretical discussion, but it also plays a crucial role in shedding light on situations that may be opaque and counterintuitive.

We will discuss each of these perspectives in greater detail, but suffice it to say that adequate understanding of contemporary societies depends on their recursive interrelation: normative engagement serves as moral grounding, conceptual work sharpens our analytical grids, and empirical evidence connects us to the actual realities of lived lives. Internet researchers are tasked with the responsibility to advance on all three lines to increase our knowledge of the world we live in and to open pathways for policy responses that are up to the considerable challenges we face.

Normative perspectives: governing the data-subject

Research into the governance of platform-based, data-fueled, and algorithmically driven societies is obviously informed by economic and political theories. Over the past few years, several economic scholars have critically interrogated orthodox political models, such as capitalism and liberal democracy, to find out whether they still apply to societies where offline activities—private or public—are increasingly scarce (Zuboff, 2019; Jacobs and Mazzucato, 2016; Mayer-Schönberger and Ramge, 2018). Wavering between “surveillance capitalism” and “algocracy”, these accounts show markets adapting to the advent of data as a new resource and of predictive analytics as significant tools that turn users into “data-subjects”. But the study of data-subjects cannot easily be delineated as the study of “citizens” or “consumers” fitting the contextual parameters of “democracies” and “markets”. Normative perspectives cover economic and political principles but also pertain to moral principles—norms and values; the study of data-subjects, in other words, also involves the fundamental rights of human beings participating in “democracies” and “markets”.

Norms and principles are often invisible, hidden in the ideological folds of a social fabric woven together by an invisible technological apparatus that barely leaves visible traces. It is important to lay bare the normative perspectives by which the internet is governed; it is equally important to articulate and discuss normative perspectives on the basis of which the internet should be governed—what we called above the complicated interdependence between is and ought. Contributing perspectives from sociology, political economy and philosophy, the authors of the first three articles in this special issue each highlight a different aspect of “governing the data subject”: as an economic resource, as a citizen in a democracy, and as an autonomous individual. All three papers take a broader view of data-subjects as the centre of data practices and try to rethink the normative frameworks by which they are governed.

Nick Couldry and Ulises Mejias propose the political-historical perspective of “data colonialism” to dissect the new social order that has been the result of rapid datafication linked to extractive capitalism. Data colonialism, they argue, is about more than capitalism; it is “human life itself that is being appropriated … as part of a reconstruction of the very spaces of social experience.” Colonialism should thus not be understood metaphorically, and neither should data simply be seen as the “new oil”; data colonialism is a new phase in the history of colonialist expansion—a phase that is characterised by a massive transformation of humanity’s socio-legal and economic order through the appropriation of human life itself by means of data extraction. The data-subject emerging from this perspective is at once personal and relational. Data are not “personal” in the sense that they are “about” our individual selves, but they emerge as constructions of data points—“data doubles”—out of a myriad of data sets. Hence, privacy is important for individuals and collectives: data doubles are projections of the social and thus contribute to reshaping social realities. Couldry and Mejias conclude that existing legal approaches and policy frameworks are profoundly inadequate when it comes to governing datafied societies. Instead, they propose a radical reframing of regulatory discourse that calls into question the direction and rationale of a social order resting on exploitative data extraction.

Starting from the rapid shift from broadly optimistic attitudes concerning the relationship between digitalisation and democracy to broadly negative ones, Jeanette Hofmann argues that the fundamental relationship between media and democratic life should be (re)considered in greater conceptual depth to form a starting point for a critical perspective on governance. Instead of merely describing the “effect” or “influence” of media, she makes a distinction between medium and form that highlights the “alterability” of technologies and the normatively charged struggles over architecture and design that ensue. This perspective allows for a reading of the internet’s history through the lens of shifting and competing ideological models, through “different modes of social coordination and political regulation, which became inscribed as operational principles and standards into the network architecture and as such again subject of political interpretation”. While concepts like “connective action” (Bennett and Segerberg, 2012) emphasise the distributed character of the internet, Hofmann argues that the contemporary emergence of digital platforms is still lacking a clearer appreciation in terms of its consequences for democratic agency. Only a deeper conceptual understanding of the treacherous waters of mediated democracy would allow for a programmatic appropriation of alterability and the realisation of “unrealised alternatives”.

Daniel Susser, Beate Roessler, and Helen Nissenbaum move from broad conceptualisations of digital societies to a more fine-grained level of analysis that deals with a phenomenon that is often mentioned when discussing potential harms but is rarely examined in greater depth: the notion of (online) manipulation. Starting from the specific possibilities for steering and control that digital platforms incorporate, they argue that core liberal values—autonomy in particular—are under threat when cognitive biases and data profiles can be easily exploited through mechanisms that often remain hidden. But the gist and merit of this paper lie not so much in highlighting these increasingly well-known phenomena as in submitting them to a normative assessment that connects to existing policy discussions, proposing concrete measures for “preventing and mitigating manipulative online practices”. The authors thus invest precisely in what we mean by recursivity: the connection between descriptive and prescriptive modes as well as the tighter coupling between academic research and government policy.

Conceptual perspectives: digital governance between policy-making and politics

Gravitating between what is and what ought are conceptual perspectives of internet governance: what needs to be done to get us from current (inadequate) legal and policy frameworks to frames that work? The papers in this section critically assess foundational notions such as markets, consumers, companies, stakeholders, agreements, and contracts—notions on which many of our governance structures rest but which have become porous, to say the least. If “classic” governance structures no longer seem to apply to a platform-based, data-fuelled, and algorithmically driven society, how can they be reconceptualised? Such reframing and retooling exercises inevitably raise questions of policy-making and political manoeuvring. Not everything that can be theoretically reconceived is politically conceivable. A useful political reality-check is to compare different national governance frameworks and show how policy-making for the internet is an intensely (geo)political affair. The conceptual perspectives in this section range from the very broad to the very specific: they interrogate the foundations of platform power and how power is distributed between state, market, and civil society actors (Van Dijck et al.; Gorwa); they compare (trans)national initiatives of data governance (Meese et al.) and probe the geopolitical implications of compliance with regulatory standards (Meese et al.; Tusikov); and finally, they study how the digital rendering of consumer-facing contracts can be both a threat and an opportunity (Cornelius).

José van Dijck, Thomas Poell, and David Nieborg probe the very assumptions underlying recent decisions by the European Commission to impose substantial fines upon Alphabet-Google for anti-competitive behaviour. They argue that the concepts of consumer welfare, internet companies, and markets—concepts on which many regulatory frameworks are staked—no longer suffice to capture the complex interrelational and dynamic nature of online activities. Instead, they propose expansive concepts such as citizen well-being, an integrated platform ecosystem, and societal platform infrastructures to inform policy-making efforts. But more than a theoretical proposal, their “reframing power” exercise hints at the need for recursive internet governance research. Researchers should help policy-makers in defining the dynamics of platform power by providing a set of analytical tools that help explain the complex relationships between platforms and their responsible actors. Armed with detailed insights from national and comparative case studies, policy-makers and politicians can help articulate regulatory principles at the EU level.

Conceptual rethinking is obviously not restricted to formal regulatory frameworks, but also extends into informal governance arrangements. Robert Gorwa, in his contribution to this special issue, reviews the growing number of non-binding governance initiatives that have been proposed by platform companies over the past few years, partly in response to mounting societal concerns over user-generated content moderation. The question “who is responsible for a fair, open, democratic digital society across jurisdictions?” is picked up not just by (transnational) bodies like the EU, but by a variety of actors in multi-stakeholder organisations. Companies like Facebook and Google seek out provisional alliances to create “oversight bodies” and other forms of informal governance. However, as Gorwa shows, the power relationships in the “governance triangle” of companies, states, and civil society actors in these informal arrangements remain unbalanced because civil society actors are notoriously underrepresented. The poignant issue is responsibility rather than liability: we are all responsible for a fair, open, and democratic society, but “we” is not an easy-to-define collective concept. Detailed analyses of big platform companies’ “spheres of influence” through informal arrangements—in conjunction with in-depth analyses of formal regulatory toolboxes, as suggested in the previous article—are needed to map the complex power relationships between actors with varying degrees of power. Once again, recursivity is the magic word: researchers inform policy-makers who inform researchers.

James Meese, Punit Jagasia, and James Arvanitakis, in their article “Citizen or consumer?”, continue the reframing exercise of this section by comparing data access rights between the EU and Australia. They ask whether the two continents’ regulatory frameworks—the General Data Protection Regulation (GDPR) versus the Consumer Data Right (CDR)—are grounded in different ideological concepts of citizen versus consumer. The authors show the deep interpenetration of policy-making and politics. In Europe, this results in the GDPR’s strong emphasis on protecting fundamental rights of citizens, such as privacy and data protection against (ab)use by companies and governments. In Australia, the CDR betrays clear signs of a neoliberal approach which grants individual rights in the context of markets. This concrete comparison between Europe’s and Australia’s regulatory efforts on data protection signals the importance of incorporating ideological and (geo)political premises into a conceptual approach to governance. Across the globe, we are witnessing the clash between market-oriented approaches and approaches that start from the fundamental rights and freedoms of citizens. Whereas the GDPR, in the eyes of some Europeans, does not go far enough in the second direction, for Australians this would mean a major departure from the first.

A second comparative perspective is provided by Natasha Tusikov, who closely examines the effects of US regulation on China’s internet governance in the area of intellectual property rights protection. A detailed analysis of policy and regulatory documents illuminates the power choreography between American private actors, American state regulators, and Chinese platform companies; the US state exerts coercive power on Chinese actors to comply with American standards, as illustrated by Alibaba adopting US-drafted rules to prohibit the sale of counterfeit products via its Taobao marketplace. Tusikov’s careful reconstruction of the “compliance-plus” process demonstrates that the US dominance in transnational platform governance continues a long history of setting rules and standards to benefit its own economic interests and those of its industry actors. Such analysis of reciprocal fine-tuning between regulation, policy-making, and politics is extremely relevant when trying to understand the recent trade war between the US and China, which is a clash between two giants seeking to secure their economic, political, and national security interests through internet governance. The world of geopolitics is no longer external to issues of internet governance; on the contrary, disputes concerning internet governance are at the core of geopolitical conflicts.

Kristin Cornelius’ contribution finally approaches the intersection of technology and governance from a very different angle. Looking at the explosive proliferation of “consumer-facing standard form contracts” such as Terms of Service – contracts we constantly submit to yet hardly ever read – she argues that the “digital form” of these documents merits closer attention. Taking a conceptual perspective grounded in information science and document-engineering, she shows not only how the technical form that implements a legal relationship has a normative dimension in the sense that it structures power relations, but also that this technicity is an opportunity: emphasising elements such as standardisation, stabilisation, and machine-readability would not necessarily change the content of these (zombie) contracts, but allow for different forms of social embedding that keep them from coming to haunt the users they apply to. Looking at contracts as documents having specific material forms instead of limiting them to their abstract legal meaning shows how crucial conceptual frames have become for making sense of a situation where technical principles shake established lines of reasoning.

Empirical perspectives: data uses and algorithmic governance in everyday practices

The last section of this special issue brings us from the higher spheres of politics and policy-making to the concrete everyday practices in which “data subjects” play a central role. The three papers listed in this section scrutinise empirical cases concerning actual data uses which, in turn, serve to inform researchers and policy-makers intent on reshaping internet governance. Whether adopting the notion of “citizens” or “consumers”, these articles ground their research perspectives in empirical observations and interrogations of data subjects—the way they are steered by algorithms and how they respond to certain manipulations of online behaviour. Moreover, all three papers seek to tie concrete, empirical research to normative and conceptual perspectives: from what is to what ought and what could be. Whether the cases concern “citizen scoring” practices at the local levels (Dencik et al.), revolts of YouTube users against the platform’s algorithmic advertising and moderation practices (Kumar), or the broader question of how to study real-world effects of algorithmic governance in different areas of everyday life (Latzer and Festic)—they all come back to the recursivity of research: how to make sense of current algorithmic and data practices in light of the wider political and economic transformations of internet governance?

Lina Dencik, Joanna Redden, Arne Hintz, and Harry Warne provide an insightful analysis of data analytics uses in UK public services. The authors draw on a large number of databases and interviews to investigate what they call “citizen scoring practices”: the categorisation and assessment of data (e.g., financial data, welfare data, health data and school attendance) to predict citizen behaviour at both the individual and the population level. Significantly, Dencik et al. show how the interpretation of data analytics is the result of negotiation between the various stakeholders in data-driven governance, from the private companies that provide the data analytics tools to the public sector workers that handle them. While the use of data analytics in public service environments is steadily increasing, there appears to be no shared understanding of what constitutes appropriate technologies and standards. And yet, such a “golden view” seems to inform the various data-driven analytics practices at the local level. One important goal of this paper is to understand the heterogeneity of local data-based practices against the backdrop of a regulatory vacuum and quite often an austerity-driven policy regime. Hopefully, studies like this one provide a much-needed empirical basis for articulating policies that address broader concerns of data use with regard to discrimination, stigmatisation, and profiling.

The article by Sangeet Kumar moves to a very different arena, one where data-driven governance has been at the centre from the very beginning: analysing the so-called “Adpocalypse”, an advertiser revolt against YouTube in 2017, he shows how decisions concerning the monetisation of videos have complemented practices such as content moderation or deplatforming as instruments of governance. More subtle in nature, these decisions may nonetheless have a large effect on the overall composition of the platform by steering money flows away from conflictual yet important subjects, transforming YouTube—and the web more broadly—from “a plural, free and heterogenous space” into a “sanitised, family-friendly and scrubbed version” of itself. The paper ends with a call for wider stakeholder participation to put the inevitable decisions on rules and modes of governance on a broader footing. Given the outsized role YouTube has come to play in the emerging “hybrid media system” (Chadwick, 2013), one could rightfully ask whether platforms of this size should be treated as “public utilities”, as Van Dijck et al. suggest in their conceptual reframing.

While Michael Latzer and Noemi Festic’s contribution does not rely on empirical research itself, it is very much concerned with the question of how empirical evidence on complicated and far-reaching concepts like algorithmic governance can be collected in the first place. While theoretical models proliferate and efforts for algorithmic accountability gain traction, how the various mechanisms for selection, ranking, and recommendation that users regularly encounter are actually integrated into the practices of everyday life remains elusive. Qualitative studies have given us some idea concerning effects and imaginaries on the user side, but “generalisable statements at the population level” are severely lacking. Such a broad ground-level view is, however, essential for informed policy choices. The authors therefore propose a programmatic framework and mixed-methods approach for studying the actual consequences of algorithmic governance on concrete user practices, in the hope of filling a research gap that continues to blur the picture, despite the heightened attention the topic has recently received. The methodologies used to produce empirical insights thus constitute yet another area where internet researchers have a crucial social role to play, despite the significant challenges they face.

Conclusion

It may still be too early to omit the terms “digital”, “online”, or “internet” as meaningful adjectives when discussing the transformation of societies in which data, algorithms, and platforms play a central and crucial role. Obviously, we are no longer restricted by a predominantly technological discourse when discussing the internet and its governance—as we were in the 1990s when most researchers saw the steamroller coming, but did not quite know how to gauge its power and envision its implications. And, perhaps on a hopeful note, we have not yet become part of the “road” which the steamroller threatens to flatten. However, it takes a conscious and protracted effort for researchers to understand the “internet” and the “digital” as transformative forces before they become part of the road we walk on. And that is what makes the recursivity of governance and policy research so relevant at precisely this moment in time.

When studying the effects of data-informed practices first-hand, internet researchers can detect patterns in how society is governed by platforms; in turn, their insights and conceptual probes might inform regulators and policy-makers to adjust and tweak existing policies. There is a clear knowledge gap, an asymmetry of information that affects not only researchers as they study complicated actor constellations and powerful companies, but also democratic institutions themselves. Governments may be able to wield considerable power in specific situations, in particular around market competition, but they are nonetheless increasingly dependent on the multifaceted input of a wide range of disciplines. Internet researchers may be rightfully sceptical about engaging with institutions that are clearly imperfect; but our current situation requires that we accept our responsibilities as knowledge producers and push the insights we develop beyond the boundaries of our disciplines and institutions. The recursive nature of normative, conceptual, and empirical approaches hopefully encourages collectives of researchers and policy-makers to cooperate in governance design.

References

Bennett, W., & Segerberg, A. (2012). The Logic of Connective Action. Information, Communication & Society, 15(5), 739–768. doi:10.1080/1369118X.2012.670661

Brand, S. (1987). The Media Lab: Inventing the Future at MIT. New York: Viking Penguin.

Chadwick, A. (2013). The Hybrid Media System. Oxford; New York: Oxford University Press.

Gillespie, T. (2018). Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media. New Haven, CT: Yale University Press.

Jacobs, M., & Mazzucato, M. (2016). Rethinking Capitalism: Economics and Policy for Sustainable and Inclusive Growth. London: Wiley.

Mayer-Schönberger, V., & Ramge, T. (2018). Reinventing Capitalism in the Age of Big Data. New York: Basic Books.

Zuboff, S. (2019). The Age of Surveillance Capitalism. The Fight for a Human Future at the New Frontier of Power. New York: PublicAffairs.

Platform ad archives: promises and pitfalls

Introduction

In 2018, the online platforms Google, Facebook and Twitter all created political ad archives: publicly accessible databases with an overview of political advertisements featured on their services. These measures came in response to mounting concerns over a lack of transparency and accountability in online political advertising, related to illicit spending and voter manipulation. Ad archives have received widespread support in government and civil society. However, their present implementations have also been criticised extensively, by researchers who find their contents to be incomplete or unreliable. 1 Increasingly, governments and civil society actors are therefore setting up their own guidelines for ad archive architecture – in some cases even binding legislation. Ad archive architecture has thus rapidly gained relevance for advertising law and policy scholars, both as a tool for regulation and as an object of regulation. 2

This article offers an overview of the ad archive governance debate, discussing the potential benefits of these tools as well as pitfalls in their present implementations. Section two starts with a conceptual and legal framework which describes the basic features of ad archives and the regulations that apply to them, followed by a normative framework which discusses the potential benefits of ad archives in terms of transparency and accountability. Section three reviews the shortcomings of current ad archive initiatives, focusing on three core areas of ongoing debate and criticism. Firstly, we discuss scoping: ad archives have faced difficulty in defining and identifying, at scale, what constitutes a “political advertisement”. Secondly, verifying: ad archives have proven vulnerable to inauthentic behaviour, particularly from ad buyers seeking to hide their true identity or the origin of their funding. Thirdly, targeting data: ad archives do not document in meaningful detail how ads are targeted or distributed. We propose several improvements to address these shortcomings, where necessary through public regulation. Overall, we argue that both legal scholars and communications scientists should pay close attention to the regulation of, and through, this novel and potentially powerful tool.

Promises: the case for ad archives

Conceptual framework: what are ‘ad archives’?

This paper focuses on ad archives, which are systems for the automated public disclosure of advertisements via the internet. The key examples are Facebook’s Ad Library, Google’s Advertising Transparency Report and Twitter’s Ad Transparency Center. These systems document the advertisement messages sold on the platform, as well as associated metadata (e.g., the name of the buyer, the number of views, expenditure, and audience demographics). These archives are public, in the sense that they are available without restriction to anyone with a working internet connection.
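
To make this more concrete, the sketch below illustrates what a single archived ad entry with such metadata might look like. It is a minimal, hypothetical example: the field names, value ranges, and the use of a Python data class are our own assumptions for illustration, not the actual schema of Facebook's, Google's or Twitter's archives.

```python
# Hypothetical sketch of one ad archive record; all field names are illustrative
# assumptions, not any platform's actual schema.
from dataclasses import dataclass, field
from typing import Dict, Optional, Tuple


@dataclass
class ArchivedAd:
    ad_id: str                        # archive-internal identifier
    creative_text: str                # the advertisement message itself
    buyer_name: str                   # disclosed name of the ad buyer
    funding_entity: Optional[str]     # "paid for by" disclosure, if provided
    spend_range_eur: Tuple[int, int]  # expenditure, typically reported as a range
    impressions_range: Tuple[int, int]  # number of views, also typically a range
    audience_demographics: Dict[str, float] = field(default_factory=dict)  # e.g., shares per group


# Example record, as a researcher or journalist might retrieve it from a public archive.
example = ArchivedAd(
    ad_id="2019-000123",
    creative_text="Vote for candidate X on Sunday!",
    buyer_name="Example Campaign Committee",
    funding_entity="Example PAC",
    spend_range_eur=(100, 499),
    impressions_range=(10_000, 50_000),
    audience_demographics={"18-24": 0.25, "25-34": 0.40, "female": 0.55},
)
```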

In practice, the major ad archives have focused on documenting political advertisements, rather than commercial advertisements. Beyond this, they differ in important respects. Firstly, they differ significantly in how they define “political” advertising in order to determine what ads are included in the archive. The major archives also differ in how they verify their contents – particularly the identity of their ad buyers – and in terms of the metadata they publish related to ad targeting. Section three considers these questions of scoping, verifying and targeting in further detail.

The major ad archives went live in 2018. Facebook’s archive was first announced in October 2017 and went live the next year in May 2018. Google and Twitter followed soon after. They initially focused exclusively on the United States, but they have since gradually expanded their efforts. Facebook and Twitter’s archives now offer worldwide coverage, although certain functions are still regionally restricted. Google covers only the US, the European Union and India (Google, 2019a).

In theory, ad archives can be created not only by platform intermediaries but also by a range of other actors, including advertisers, academics or NGOs. For instance, political parties can maintain their own online database documenting their political advertisements, as has been proposed in the Netherlands (Netherlands Ministry of the Interior, 2019). As early as 2012, Solon Barocas argued for a centralised non-profit database, or ‘clearing house’, for political ads (Barocas, 2012). The London School of Economics’ Truth and Trust Commission proposes that the government administer a central database, or “political advertising directory” (Livingstone, 2018). The investigative journalists of ProPublica have maintained a public database of Facebook ads which they crowd-sourced from a group of volunteers (Merrill & Tobin, 2019). While we do not discount these approaches, our discussion focuses on platform-operated archives, since these have recently gained the most traction in policy and practice.

Formally speaking, the major platform ad archives are self-regulatory measures. But they emerged in response to significant public pressure from the ongoing “techlash” (Smith, 2018; Zimmer, 2019). These “voluntary” efforts are therefore best understood as an attempt to stave off binding regulation (Wagner, 2018). Indeed, platforms have no immediate commercial incentive to offer transparency in their advertising practices. The role of public regulation, or at least the threat thereof, is therefore essential in understanding the development of ad archives (see Vlassenroot et al., 2019). Below we offer an overview of key policy developments.

Both platforms and policymakers present ad archives as a means to improve accountability in online political advertising (e.g., Goldman, 2017; Warner, 2017). Political advertising in legacy media has historically been regulated in various ways, to prevent undue influence from concentrated wealth on public discourse. Online advertising is placing new pressure on these legacy regimes. In many cases, the language of existing law has simply not been updated to apply online. Furthermore, online political micro-targeting has unique affordances that can enable new types of harms demanding entirely new regulatory responses. For instance, platform advertising services lower the barrier to buying ads across borders, and to buying ads under false or misleading identities. Moreover, micro-targeting technology, which enables advertisers to target highly specific user segments based on personal data analytics, can enable novel methods of voter deception, manipulation and discrimination (Borgesius et al., 2018; Chester & Montgomery, 2017). For instance, targeted advertising can enable politicians to announce different or even conflicting political programmes to different groups, thereby fragmenting public discourse and making it more difficult to hold politicians accountable to their electoral promises (Bodó, Helberger, & de Vreese, 2017; Borgesius et al., 2018). Targeted advertising can also enable discrimination between voter groups, both intentionally through advertisers’ targeting decisions and unintentionally through undocumented algorithmic biases (Boerman, Kruikemeier, & Borgesius, 2017; Ali et al., 2019).

These concerns about online advertising are compounded by the fact that the online advertising ecosystem is difficult to monitor, which undermines efforts to identify, diagnose and remedy potential harms (Chester & Montgomery, 2017). This opacity is due to personalisation: personalised advertisements are invisible to everyone except the specific users they target, hiding them from observation by outsiders (Guha, Cheng, & Francis, 2010). As Benkler, Faris and Roberts observe, this distinguishes online advertisers from mass media advertisers, who necessarily acted “in the public eye”, thus “suffering whatever consequences” a given message might yield outside of its target audience (Benkler, Faris, & Roberts, 2018, p. 372). As a result, the online advertising ecosystem exhibits structural information asymmetries between, on one side, online platforms and advertisers, and on the other, members of the public who might hold them accountable. Researchers can potentially resort to data scraping methods, but these suffer from severe limitations and are vulnerable to interference by the platforms they target (Bodó et al., 2018; Merrill & Tobin, 2019). Accordingly, targeted advertising creates structural information asymmetries between advertisers and their publics.

These concerns over online political advertising took centre stage in the “techlash”, which followed the unexpected outcomes of the 2016 Brexit referendum and US presidential elections. In the UK, the Vote Leave campaign was accused of deceptive messaging, and violations of data protection law and campaign spending law in its political micro-targeting activities (Merrick, 2018; Waterson, 2019a). In the US, ad spending from Russian entities such as the Internet Research Agency raised concerns about foreign election interference. In both countries, Facebook shared selected advertising data sets in response to parliamentary investigations (Lomas, 2019; Shane, 2017). But these came well over a year after the events actually took place – driving home the general lack of transparency and accountability in the advertising ecosystem. Similar controversies have also played out in subsequent elections and referenda, such as the Irish abortion referendum of 2018, which drew an influx of foreign pro-life advertisements (Hern, 2018). The actual political and electoral impact of these ad buys remains debatable (e.g., MacLeod, 2019; Benkler, Faris, & Roberts, 2018). But in any case, these developments drew attention to the potential for abuse in targeted advertising, and fuelled the push for more regulation and oversight in this space.

Ad archives have formed a key part of the policy response to these developments. The most prominent effort in the US is the Honest Ads Act, proposed on 19 October 2017, which would require online platforms to “maintain, and make available for online public inspection in machine readable format, a complete record of any request to purchase on such online platform a qualified political advertisement” (Klobuchar, Warner, & McCain, 2017, Section 8(a)(j)(1)(a)). This bill has not yet passed (Montellaro, 2019). But only several days after its announcement, Facebook declared its plans to voluntarily build an ad archive, which would largely conform to the same requirements (Goldman, 2017). Google and Twitter followed suit the next year.

Since 2018, governments have started developing binding legislation on ad archives, often with resistance from platforms. Canada’s Elections Modernization Act of December 2018 compels platforms to maintain public registers of political advertising sold through their service. Facebook and Twitter have sought to comply with these measures, but Google instead responded by discontinuing the sale of political advertisements in this jurisdiction altogether (Cardoso, 2019). Similarly, the State of Washington’s Public Disclosure Commission attempted to regulate ad archives by requiring advertisers to publicly disclose political ads sold in the state (Sanders, 2019). In this case, both Google and Facebook have refused to comply with the disclosure rules and instead banned political advertising in this region (Sanders, 2019). Citing federal intermediary liability law, the Communications Decency Act of 1996, Facebook contended it was immune to any liability for political advertising content (Sanders, 2019). Some reporters also claim that Facebook has lobbied to kill the Honest Ads Act, despite publicly claiming to support regulation and to implement its requirements voluntarily (Timmons & Kozlowska, 2018).

Europe is also poised to regulate ad archives. In the run-up to the EU elections of May 2019, the European Commission devised the Code of Practice on Disinformation, which is not a binding law but rather a co-regulatory instrument negotiated with major tech companies including Google, Facebook, Twitter, Microsoft and Mozilla. 3 By signing the Code, these companies have committed to a range of obligations, from fact-checking and academic partnerships to the creation of ad archives (European Commission, 2018, Section II.B.). Furthermore, leaked documents from the European Commission show that political advertisements will receive particular attention in the upcoming reform of digital services rules (Fanta & Rudl, 2019). Member states are also exploring the regulation of ad archives. In the Netherlands and the UK, parliamentarians have expressed support for further regulation in, respectively, a parliamentary resolution and a committee report (Parliament of the Netherlands, 2019; House of Commons Select Committee on Digital, Culture, Media and Sport, 2019). France has passed a binding law requiring the public disclosure of payments received for political advertisements – if not a comprehensive regulation of ad archives per se (Republic of France, 2018).

Ad archives exist alongside a number of other proposals for regulating targeted advertising. One popular measure is installing user-facing disclaimers, intended to inform audiences about e.g., the identity of the advertisers, the source of their funding, and/or the reason why they are being targeted. Another approach is to regulate funding, e.g., through spending limits, registration requirements, or restrictions on foreign advertising. Finally, targeting technology and the use of personal data can also be regulated. Some combination of these measures is found in, inter alia, the US Honest Ads Act, the EU’s Code of Practice, Canada’s Elections Modernization Act, and France and Ireland’s new election laws. The EU’s General Data Protection Regulation (GDPR) is also a highly relevant instrument, since it grants users information rights, and constrains the ability of advertisers to use personal data for ad targeting purposes (Bodó, Helberger, & De Vreese, 2017).

Of course, present ad archive initiatives are far from uniform. Definitions of e.g., the relevant platforms, disclosure obligations and enforcement mechanisms all differ. An exhaustive comparative analysis of these differences would exceed the scope of this paper. The second half of this paper discusses how these policy initiatives differ on some of the key design issues outlined above (scoping, verifying, and targeting data), and how the major platforms have responded to their demands. First, we discuss the policy principles driving this new wave of regulation.

Normative framework: what are the policy grounds for ad archives?

Ad archive initiatives have typically been presented in terms of ‘transparency and accountability’, but these are notoriously vague terms. The concrete benefits of ad archives have not been discussed in much depth. To whom do ad archives create accountability, and for what? The answer is necessarily somewhat abstract, since ad archives, being publicly accessible, can be used by a variety of actors in a variety of accountability processes. Indeed, this diversity is arguably their strength. Other advertising transparency measures, such as user-facing disclaimers, third party audits or academic partnerships, have focused on particular groups of stakeholders. Ad archives, by contrast, can enable monitoring by an unrestricted range of actors, including not only academics but also journalists, activists, government authorities and even rival advertisers – each with their own diverse capacities and motivations to hold advertising accountable. In this sense, ad archives can be seen as recreating, to some extent, the public visibility that was inherent in mass media advertising and is now obfuscated by personalisation (see above). Broadly speaking, this public visibility can be associated with two types of accountability: (a) accountability to the law, through litigation, and (b) accountability to public norms and values, through publicity.

Ad archives can contribute to law enforcement by helping to uncover unlawful practices. Although online political advertising is not (yet) regulated as extensively as its mass media counterparts, it may still violate e.g., disclosure rules and campaign finance regulations. And, as discussed previously, new rules may soon be coming. Commercial advertising, for its part, may be subject to a range of consumer protection rules, particularly in Europe, and also to competition law, unfair commercial practice law and intellectual property law. Ad archives can allow users to proactively search for violations of these rules. Such monitoring could be done by regulators, but importantly also by third parties including commercial rivals, civil rights organisations, consumer protection organisations, and so forth. These third parties might choose to litigate independently, or simply refer the content to a competent regulator. Indeed, regulators often rely on such third party input to guide their enforcement efforts, e.g., in the form of hotlines, complaints procedures and public consultations. In most cases, litigation is likely to be straightforward and inexpensive, since most platforms operate notice and takedown procedures for the removal of unlawful advertising without the need for judicial intervention. 4 Platforms can also remove advertising based on their own community standards, even if it does not violate any national laws. In this light, ad archives can contribute to enforcement in a broad sense, including not only public advertising laws but also platforms’ private standards, and relying not only on public authorities but on any party with the time and interest to flag prohibited content.
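
As a minimal sketch of what such third-party monitoring might look like in practice, the snippet below scans hypothetical archive records for ads that lack a funding disclosure or whose buyer does not appear on a register of verified political advertisers, and collects them for referral to a platform or regulator. The record fields, the register, and the flagging criteria are all assumptions made for the sake of illustration, not actual legal tests or platform rules.

```python
# Hypothetical watchdog routine over public ad archive records; the fields,
# register and criteria are illustrative assumptions only.
def flag_for_review(archive_records, verified_buyers):
    """Return records that a watchdog might refer to a regulator or platform."""
    flagged = []
    for record in archive_records:
        missing_disclosure = not record.get("funding_entity")
        unverified_buyer = record.get("buyer_name") not in verified_buyers
        if missing_disclosure or unverified_buyer:
            flagged.append({
                "ad_id": record.get("ad_id"),
                "reasons": [reason for reason, hit in [
                    ("no funding disclosure", missing_disclosure),
                    ("buyer not verified", unverified_buyer),
                ] if hit],
            })
    return flagged


# Toy example: one compliant ad, one ad with no disclosure from an unverified buyer.
records = [
    {"ad_id": "a1", "buyer_name": "Example Campaign Committee", "funding_entity": "Example PAC"},
    {"ad_id": "a2", "buyer_name": "Unknown Media BV", "funding_entity": None},
]
print(flag_for_review(records, verified_buyers={"Example Campaign Committee"}))
# -> [{'ad_id': 'a2', 'reasons': ['no funding disclosure', 'buyer not verified']}]
```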

In addition to litigation, ad archives also facilitate publicity about advertising practices, which can serve to hold platforms accountable to public norms and values. Journalists, researchers and other civil society actors can draw on archives to research and publicise potential wrongdoings that might previously have flown under the radar. For instance, the US media has a strong tradition of analysing and fact-checking television campaign ads; ad archives could help them do similar coverage of online political micro-targeting. Such publicity may encourage platforms and/or advertisers to self-correct and improve their advertising standards, by raising the threat of reputational harm. And failing such a private ordering response, publicity can also provide an impetus for new government interventions. In these ways, ad archives can contribute not only to the enforcement of existing laws, but also to informed public deliberation, and thus to the articulation and enforcement of public norms and values (see Van Dijck, Poell, & de Waal, 2018). Such publicity effects may be especially important in the field of online political advertising, since, as discussed, this space remains largely unregulated under existing laws, and raises many novel policy questions for public deliberation.

In each case, it is important to note the factor of deterrence: the mere threat of publicity or litigation may already serve to discipline unlawful or controversial practices. Even for actors who have not yet faced any concrete litigation or bad publicity, ad archives could theoretically have a disciplinary effect. In this sense, a parallel can be drawn with the concept of the Panopticon, as theorised in surveillance studies literature; subjects are disciplined not merely through the fact of observation, but more importantly through the pervasive possibility of observation (Foucault, 1977; Lyon, 2006). Put differently, Richard Mulgan describes this as the potentiality of accountability; the possibility that one “may be called to account for anything at any time" (Mulgan, 2000, p. 564). Or, as the saying goes: The value in the sword of Damocles is not that it drops, but that it hangs (e.g., Arnett v. Kennedy, 1974).

Of course, these accountability processes depend on many other factors besides transparency alone. Most importantly, ad archives depend on a capable and motivated user base of litigators (for law enforcement effects) and civil society watchdogs (for publicity effects). For publicity effects, these watchdogs must also be sufficiently influential to create meaningful reputational or political risks for platforms (see Parsons, 2019; Wright & Rwabizambuga, 2006). These conditions can certainly not be assumed; which researchers are up to the task of overseeing this complex field, and holding its powerful players to account? This may call for renewed investment in our public watchdogs, including authorised regulators as well as civil society. Ad archives might be a powerful tool, but they rely on competent users.

Finally, of course, the above analysis also assumes that ad archives are designed effectively, so as to offer meaningful forms of transparency. As we discuss in the following section, present implementations leave much to be desired.

Pitfalls: key challenges for ad archive architecture

Having made the basic policy case for the creation of ad archives, we now discuss several criticisms of current ad archive practice. First, we discuss the issue of scoping: which ads are included in the archive? Second, verifying: how do ad archives counteract inauthentic behaviour from advertisers and users? Third, targeting: how do ad archives document ad targeting practices? Each of these issues can create serious drawbacks to the research utility of ad archives, and deserves further scrutiny in future governance debates.

Ad archive architecture is very much a moving target, so we emphasise that our descriptions represent a mere snapshot. Circumstances may have changed significantly since our time of writing. Accordingly, the following is not intended as an exhaustive list of possible criticisms, but rather as a basic assessment framework for some of the most controversial issues. For instance, one important criticism of ad archives which we do not consider in detail is the need for automated access through application programming interfaces (APIs). When ad archive data is exclusively available through browser-based interfaces, this can make it relatively time-consuming to perform large-scale data collection. To enable in-depth research, it is clear that ad archives must enable such automated access. Until recently, Facebook did not offer public API access to their ad archive data (Shukla, 2019). And once the API was made publicly accessible, it quickly appeared to be so riddled with bugs as to be almost unusable (Rosenberg, 2019). As noted by Laura Edelson, these API design issues are not novel or intractable from a technical perspective, but eminently “fixable”, and thus reflect sub-standard implementation on the part of Facebook (Rosenberg, 2019). In response, Mozilla, together with a coalition of academics, has drafted a list of design principles for ad archive APIs (Mozilla, 2019). Such public, automated access can be seen as a baseline condition for effective ad archive policy. What then remains are questions about the contents of the archive, which include scoping, verifying and targeting.
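
To make the role of API access concrete, the following minimal sketch shows how a researcher might page through an ad archive over HTTP, in the spirit of the Mozilla design principles. It assumes a Graph-API-style endpoint, and the parameter and field names (search_terms, ad_reached_countries, demographic_distribution, and so on) are illustrative assumptions; the platforms’ actual schemas differ and change over time, so this should be read as an illustration of automated access rather than a reference implementation.

```python
import requests

# Assumed Graph-API-style endpoint; the exact host, version and schema may differ.
ARCHIVE_ENDPOINT = "https://graph.facebook.com/v4.0/ads_archive"

def fetch_archived_ads(access_token, query, country="US", max_pages=5):
    """Collect archived ads matching a search term, following pagination links."""
    params = {
        "access_token": access_token,
        "search_terms": query,
        "ad_reached_countries": country,
        # Field names are illustrative; archives expose varying metadata.
        "fields": "page_name,ad_creative_body,spend,impressions,demographic_distribution",
        "limit": 100,
    }
    ads, url = [], ARCHIVE_ENDPOINT
    for _ in range(max_pages):
        response = requests.get(url, params=params, timeout=30)
        response.raise_for_status()
        payload = response.json()
        ads.extend(payload.get("data", []))
        url = payload.get("paging", {}).get("next")
        params = {}  # pagination URLs already carry their own query string
        if not url:
            break
    return ads

# Example (requires a valid access token):
# ads = fetch_archived_ads("YOUR_ACCESS_TOKEN", query="immigration")
# print(len(ads))
```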

Scoping: what ads are included in the archive?

A key design question for ad archives is that of scope: what ads are included in the archive? First, we discuss the concept of “political” advertising, which is the central scoping device in most existing initiatives and has led to many implementation challenges. Second, we discuss the attempts to exempt news reporting from political ad archives.

“Political” ad archives: electoral ads v. issue ads v. all ads?

Ad archive initiatives, both self-regulatory and governmental, have emphasised “political” advertising rather than commercial advertising. However, their precise interpretations of this concept differ significantly. Below we discuss these differing approaches and relevant policy trade-offs.

The main dividing line in existing political ad archives is between issue ads and electoral ads (or “campaign ads”). “Election ads” explicitly reference an election or electoral candidate, whereas “issue ads” reference a topic of national importance. Google focuses exclusively on election ads, whereas Facebook and Twitter also include issue ads in certain jurisdictions, and even non-political ads.Most public policy instruments also focus on issue ads, including the US Honest Ads Act and the EU Code of Practice. There is good reason to include issue ads, since they have been central to recent controversies. During the 2016 US election, for instance, foreign actors such as the Russian-controlled Internet Research Agency advertised on divisive issues such as racial politics, sexual politics, terrorism, and immigration, in an apparent attempt to influence the election (Howard et al., 2018). An approach which focuses on election ads would fail to address such practices.

However, the drawback of “issue ads” as a scoping device is that the concept of a political “issue” is broad and subjective, which makes it difficult for archive operators to develop actionable definitions and enforce these in practice. Google, in its implementation reports for the EU’s Code of Practice, reported difficulties in developing a workable definition of a “political issue” (Google, 2019a). The European Commission later lamented that “Google and Twitter have not yet reported further progress on their policies towards issue-based advertising” (European Commission, 2019). In Canada, where the Election Act also requires the disclosure of issue-based ads, Google has claimed that it is simply unable to comply with disclosure requirements (Cardoso, 2019). These difficulties might explain why the company announced plans, as discussed previously, to ban political advertising entirely for Canadian audiences during election periods.

Yet these attempts to ban political advertising, as an alternative to disclosure, raise the question of whether platforms can actually enforce such a ban. After all, the platforms themselves admit they struggle to identify political ads in the first place. Simply declaring that political ads are prohibited will not guarantee that advertisers observe the ban and refrain from submitting political content. Could platforms then still be liable for a failure to disclose? Here, a tension emerges between ad archive regulation and intermediary liability laws, which typically immunise platforms for (advertising) content supplied by their users. Canada, Europe and the US all have such laws, although their precise scope and wording differ. Indeed, Facebook has argued that it is immunised against Washington State’s disclosure rules based on US federal intermediary liability law – the Communications Decency Act of 1996 (Sanders, 2018a). Similar arguments could be made under the EU’s intermediary safe harbours, which prohibit “proactive monitoring obligations” from being imposed on platforms (e-Commerce Directive 2000/31/EC, Article 15). Such complex interactions with intermediary liability law should be taken into account in ongoing reforms.

Compared to Google, Facebook is relatively advanced in its documentation of issue ads. But that company too has faced extensive criticism for its approach. The company employs approximately 3,000-4,000 people to review ads related to politics or issues, using “a combination of artificial intelligence (AI) and human review”, and is estimated to process upwards of a million ad buyers per week in the US alone (Matias, Hounsel, & Hopkins, 2018). Facebook’s website offers a list of concrete topics which it considers “political issues of national importance”, tailored to the relevant jurisdiction. The US list of political issues contains 20 entries, including relatively specific ones such as “abortion” and “immigration”, but also relatively broad and ambiguous ones such as “economy” and “values” (Facebook, 2019a). The EU list contains only six entries so far, including “immigration”, “political values” and “economy” (Matias, Hounsel, & Hopkins, 2018).

Despite these efforts, research suggests that Facebook’s identification of political issue ads is error-prone. Research from Princeton and Bloomberg showed that a wide range of commercial ads are at risk of being mislabelled as political, including advertisements for e.g., national parks, Veterans Day celebrations, and commercial products that included the words “bush” or “clinton” (Frier, 2018; Hounsel et al., 2019). Conversely, data scraping research by ProPublica shows that Facebook failed to identify political issue ads on such topics as civil rights, gun rights, electoral reform, anti-corruption, and health care policy (Merrill & Tobin, 2019). These challenges are likely to be exacerbated as platforms expand their efforts beyond the United States to regions such as Africa and Europe, which contain far greater political and linguistic diversity and fragmentation. Accordingly, further research is needed to determine whether the focus on issue ads in ad archives is appropriate. Platforms may in future prove able to refine their processes and identify issue ads with adequate accuracy and consistency. But given the major scaling challenges, the focus on issue ads may well turn out to be impracticable.
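
To see why this kind of misclassification occurs, consider the following deliberately naive sketch of keyword-based flagging. It is not any platform’s actual classifier, only a toy illustration of how string matching against a list of political terms produces both false positives (commercial products that happen to contain “bush” or “clinton”) and false negatives (issue ads phrased without the listed keywords).

```python
import re

# Toy keyword list; real systems combine many signals, but the failure mode is similar.
POLITICAL_KEYWORDS = {"election", "immigration", "abortion", "bush", "clinton"}

def looks_political(ad_text: str) -> bool:
    """Flag an ad as political if it contains any listed keyword."""
    words = set(re.findall(r"[a-z]+", ad_text.lower()))
    return bool(words & POLITICAL_KEYWORDS)

print(looks_political("Bush's Baked Beans: a family favourite"))             # True  -> false positive
print(looks_political("Tell Congress: fix our broken health care system"))   # False -> false negative
```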

In light of the difficulties with identifying “issue ads”, one possible alternative would be to simply include all ads without an apparent commercial objective. In other words, a definition a contrario. This approach could capture the bulk of political advertising, and would avoid the difficulties of identifying and defining specific political “issues”. Such an approach would likely be more scalable and consistent than the current model, although this might come at the cost of increased false positives (i.e., a greater overinclusion of irrelevant, non-political ads in the archive).

Another improvement could be to publish all advertisements in a comprehensive archive, regardless of their political or commercial content (Howard, 2019). This would help third parties to independently evaluate platforms’ flagging processes for political ads, and furthermore to research political advertising according to their own preferred definitions of the “political”. This is what Twitter does in its Ad Transparency Center: the company still takes steps to identify and flag political advertisers (at least in the US), but users have access to all other ads as well (Twitter, 2019a). However, only political ads are accompanied by detailed metadata, such as ad spend, view count, targeting criteria, et cetera. Facebook, in an update from 29 March 2019, also started integrating commercial ads into its database (Shukla, 2019). As on Twitter, however, these ads are not given the same detailed treatment as political ads. In this light, Twitter and Facebook appear to be moving towards a tiered approach, with relatively more detail on a subset of political ads, and relatively less detail on all other ads.

Of course, a more fundamental advantage of the comprehensive publication of ads is that it extends the benefits of ad archives to commercial advertising. Commercial advertising has not been the primary focus of ad archive governance debates thus far, but here too ad archives could be highly beneficial. A growing body of evidence indicates that online commercial ad delivery raises a host of legal and ethical concerns, including discrimination and manipulation (Ali et al., 2019; Boerman, Kruikemeier, & Zuiderveen Borgesius, 2017). Furthermore, online advertising is also subject to a range of consumer protection laws, including child protection rules and prohibitions on unfair and deceptive practices. With comprehensive publication, ad archives could contribute to research and reporting on such issues, especially if platforms abandon their tiered approach and start publishing more detailed metadata for these ads.

Platforms may not be inclined to implement comprehensive ad archives since, as discussed, their commercial incentives may run counter to greater transparency. But from a public policy perspective, there appear to be no obvious drawbacks to comprehensive publication, at least as a default rule. If there are indeed grounds to shield certain types of ads from public archives – though we see none as of yet – such cases could also be addressed through exemption procedures. The idea of comprehensive ad archives therefore warrants serious consideration and further research, since it promises to benefit the governance of both commercial and political advertising.

Exemptions for news reporting

Some ad archive regimes offer exemptions for news publishers and other media actors. News publishers commonly use platform advertising services to promote their content, and when this content touches on political issues it can therefore qualify as an issue ad. Facebook decided to exempt news publishers from their ad archive in 2018, following extensive criticism from US press industry trade associations, who penned several open letters criticising their inclusion in ad archives. They argued that “[t]reatment of quality news as political, even in the context of marketing, is deeply problematic” and that the ad archive “dangerously blurs the lines between real reporting and propaganda” (Carbajal et al., 2018; Chavern, 2018). Similar exemptions can now also be found in Canada’s Elections Modernization Act and in the EU Code of Practice (Leathern, 2019). However, the policy grounds for these exemptions are not particularly persuasive. There is little evidence to suggest, or reason to assume, that inclusion in ad archives would meaningfully constrain the press in its freedom of expression. Indeed, ad archive data about media organisations is highly significant, since the media are directly implicated in concerns about misinformation and electoral manipulation (Benkler, Faris, & Roberts, 2018). Excluding the media’s ad spending is therefore a missed opportunity without a clear justification.

Verifying: how do archives account for inauthentic behaviour?

Another pitfall for ad archives is verifying their data in the face of fraud and other inauthentic behaviours. One key challenge is documenting ad buyers’ identities. Another is the circumvention of ad archive regimes by “astroturf”, sock puppets and other forms of native advertising. More generally, engagement and audience statistics may be inaccurate due to bots, click fraud and other sources of noise. As we discuss below, these pitfalls should serve as a caution to ad archive researchers, and as a point of attention for platforms and their regulators.

Facebook’s archive in particular has been criticised for failing to reliably identify ad buyers (e.g., Edelson et al., 2019). Until recently, Facebook did not verify the names that advertisers submitted for their “paid for by” disclaimer. This enabled obfuscation by advertisers seeking to hide their identity (Albright, 2018; Andringa, 2018; Lapowsky, 2018; O’Sullivan, 2018; Waterson, 2019). For instance, ProPublica uncovered 12 different political ad campaigns that had been bought in the name of non-existent non-profits, and in fact originated from industry trade organisations such as the American Fuel & Petrochemical Manufacturers (Merrill, 2018). Vice News even received authorisation from Facebook to publish advertisements in the name of sitting US senators (Turton, 2018). More recently, Facebook has therefore started demanding proof of ad buyer identity in several jurisdictions, such as photo ID and notarised forms (Facebook, 2019b). Twitter and Google enforce similar rules (Google, 2019b; Twitter, 2019b). The Canadian Elections Modernization Act now codifies these safeguards by requiring platforms to verify and publish ad buyers’ real names.

Such identity checks are only a first step in identifying ad buyers, however. Ad buyers wishing to hide their identity can still attempt to purchase ads through proxies or intermediaries. In theory, platforms could be required to perform even more rigorous background checks or audits so as to determine their ultimate revenue sources. But there may be limits to what can and should be expected of platforms in this regard. Here, ad archive governance intersects with broader questions of campaign finance regulation and the role of “dark money” in politics. These issues have historically been tackled through national regulation, including standardised registration mechanisms for political advertisers, but many of these regimes currently do not address online advertising. Platforms’ self-regulatory measures, though useful as a first step, cannot make up for the lack of public regulation in this space (Lapowsky, 2018; Livingstone, 2018). Even Facebook CEO Mark Zuckerberg has called for regulation here, arguing in a recent op-ed that “[o]ur systems would be more effective if regulation created common standards for verifying political actors” (Zuckerberg, 2019).

Another weak spot for ad archives is that they fail to capture “native advertising” practices: advertising which is not conducted through social media platforms’ designated advertising services, but rather through their organic content channels. Such “astroturfing” strategies have seen widespread deployment in both commercial and political contexts, from Wal-Mart and Monsanto to Russian “troll farms” and presidential Super PACs (Collins, 2016; Howard et al., 2018; Leiser, 2016). Ad archives do not capture this behaviour, and indeed their very presence could further encourage astroturfing, as a form of regulatory arbitrage. Benkler, Faris, and Roberts suggest that ad archive regulation should address this issue by imposing an independent duty on advertisers to disclose any “paid coordinated campaigns” to the platform (Benkler, Faris, & Roberts, 2018). One example from practice is the Republic of Ireland’s Online Advertising and Social Media Bill of 2017, which would hold ad buyers liable for providing inaccurate information to ad sellers, and also prohibit the use of bots which “cause multiple online presences directed towards a political end to present as an individual account or profile on an online platform” (Republic of Ireland, 2017). Enforcing such rules will remain challenging, however, since astroturfing is difficult to identify and often performed by bad actors with little or no interest in complying with the law (Leiser, 2016).

For ads that are actually included in the archive, inauthentic behaviour can also distort associated metadata such as traffic data. Engagement metrics, including audience demographic data, can be significantly disturbed by click fraud or bot traffic (Edelman, 2014; Fulgoni, 2016). Platforms typically spend extensive resources to combat inauthentic behaviour, and this appears to be a game of cat-and-mouse without definitive solutions. In light of these challenges, researchers should maintain a healthy scepticism when dealing with ad archive data and, where necessary, continue to corroborate ad archive findings with alternative sources and research methods (see, generally: Vlassenroot et al., 2019).

The above is not to say that all information supplied by ad buyers should be verified. There may still be an added value in enabling voluntary, unverified disclosures by ad buyers in archives. Facebook, for instance, gives advertisers the option to include “Information From the Advertiser” in the archive. Such features can enable good faith advertisers to further support accountability processes, e.g., by adding further context or supplying contact information. It is essential, however, that such unverified submissions are recognisably earmarked as such. Ad archive operators should clearly describe which data is verified, and how, so that users can treat their data with the appropriate degree of scepticism.
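
As a simple illustration of such earmarking, the sketch below models an archive record that explicitly lists which fields the platform has verified. The field names and structure are hypothetical, not any platform’s actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class ArchivedAd:
    """Hypothetical archive record distinguishing verified from self-reported fields."""
    ad_id: str
    creative_text: str
    paid_for_by: str                  # buyer-supplied disclaimer string
    verified_fields: set = field(default_factory=set)

    def is_verified(self, name: str) -> bool:
        return name in self.verified_fields

ad = ArchivedAd(
    ad_id="123",
    creative_text="Vote for transparency.",
    paid_for_by="Citizens for Clean Air",        # unverified: could be a front group
    verified_fields={"ad_id", "creative_text"},  # the platform attests only to these
)
print(ad.is_verified("paid_for_by"))  # False -> treat this field with scepticism
```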

Targeting: how is ad targeting documented?

Another key criticism of ad archives is that they are not detailed enough, particularly in their documentation of ad targeting practices. Micro-targeting technology, as discussed previously, is the source of many public policy concerns for both political and commercial advertising, including discrimination, deception, and privacy harms. These threats are relatively new, and are both undocumented and unregulated in many jurisdictions – particularly as regards political advertising (Bodó et al., 2017). Regrettably, ad archives currently fail to illuminate these practices in any meaningful depth.

At the time of writing, the major ad archives differ significantly in their approach to targeting data. Google’s archive indicates whether the following targeting criteria have been selected by the ad buyer: age, location, and gender. It also lists the top five Google keywords selected by the advertiser. Facebook’s Ad Library, by contrast, does not disclose what targeting criteria have been selected, but instead shows a demographic breakdown of the actual audience that saw the message, also in terms of age, location and gender. Twitter offers both audience statistics and targeting criteria, covering not only age, location, and gender, but also the target audience’s preferred language. These data vary in granularity. For instance, Google’s archive lists six different age brackets between the ages of 18 and 65+, whereas Twitter lists 34. For anyone familiar with the complexities of online behavioural targeting, it is apparent that these datasets leave many important questions unanswered. These platforms offer far more refined methods for ad targeting and performance tracking than the basic features described above.
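
One practical consequence of these differing granularities is that researchers combining data from several archives must first harmonise the reported breakdowns. The sketch below maps finer-grained age brackets onto a coarser common scheme; the bracket labels are illustrative rather than the platforms’ actual categories.

```python
# Hypothetical coarse scheme for combining demographic data reported at
# different granularities across archives.
COARSE_BOUNDS = [(18, 24), (25, 34), (35, 44), (45, 54), (55, 64)]

def to_coarse_bracket(lower_bound: int) -> str:
    """Map the lower bound of a narrow age bracket onto a coarse bracket label."""
    if lower_bound >= 65:
        return "65+"
    for low, high in COARSE_BOUNDS:
        if low <= lower_bound <= high:
            return f"{low}-{high}"
    return "unknown"

# A finer-grained archive might report impressions per narrow bracket, e.g., "25-29".
fine_grained = {"18-20": 120, "21-24": 300, "25-29": 450, "30-34": 200, "65-69": 40}
coarse = {}
for bracket, impressions in fine_grained.items():
    key = to_coarse_bracket(int(bracket.split("-")[0]))
    coarse[key] = coarse.get(key, 0) + impressions
print(coarse)  # {'18-24': 420, '25-34': 650, '65+': 40}
```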

For better insights into ad targeting, one helpful rule of thumb would be to insist that ad archives include at least the same level of information as is offered to the actual ad buyer – both in terms of targeting criteria and in terms of actual audience demographics (Mozilla, 2019). For some targeting technologies, full disclosure of targeting practices might raise user privacy concerns. For instance, Facebook’s Custom Audience feature enables advertisers to target users by supplying their own contact information, such as email addresses or telephone numbers. Insisting on full disclosure of targeting criteria for these custom audiences would lead to the public disclosure of sensitive personal data (Rieke & Bogen, 2018). Anonymisation of these data may not always be reliable (Ohm, 2010). In these cases, however, Facebook could at a minimum still disclose any additional targeting criteria selected by the ad buyer in order to refine this custom audience. Furthermore, ad performance data, rather than ad targeting data, can also provide some insight into targeting without jeopardising the custom audience’s privacy (Rieke & Bogen, 2018). Other platforms’ advertising technologies might raise comparable privacy concerns, demanding a case-by-case assessment of relevant tradeoffs. These exceptions and hard cases notwithstanding, however, there are no clear objections (either technical or political) that should prevent platforms from publicly disclosing the targeting methods selected by their advertisers.

In light of such complexities, designing appropriate disclosures will likely require ongoing dialogue between archive operators, archive users and policymakers. The first contours of such a debate can already be found in the work of Edelson et al., Rieke & Bogen, and Mozilla, who have done valuable work in researching and critiquing early versions of Google, Twitter and Facebook’s data sets (Edelson et al., 2019; Mozilla, 2019; Rieke & Bogen, 2018). For the time being, researchers may also choose to combine ad archive data with other sources, such as Facebook’s Social Science One initiative, or GDPR data access rights, in order to obtain a more detailed understanding of targeting practices (Ausloos & Dewitte, 2018; Venturini & Rogers, 2019). For instance, Ghosh et al. supplemented ad archive research with data scraped with ProPublica’s research tool, which gave insights into ad targeting that were not offered through the ad archive (Ghosh et al., 2019). Along these lines, ad archives can help to realise Pasquale’s model of “qualified transparency”, which combines general public disclosures with more limited, specialist inquiries (Pasquale, 2015).

Conclusion

This paper has given an overview of a new and rapidly developing topic in online advertising governance: political ad archives. Here we summarise our key findings, and close with suggestions for future research in both law and communications science.

Ad archives can be a novel and potentially powerful governance tool for online political advertising. If designed properly, ad archives can enable monitoring by a wide range of stakeholders, each with diverse capacities and interests in holding advertisers accountable. In general, ad archives can not only improve accountability to applicable laws, but also to public opinion, by introducing publicity and thus commercial and political risk into previously invisible advertisements.

Public oversight will likely be necessary to realise these benefits, since platforms ostensibly lack the incentives to voluntarily optimise their ad archives for transparency and accountability. Indeed, our analysis here has already identified several major shortcomings in present ad archive policies: scoping, verifying, and targeting. To realise the full potential of ad archives, these issues will require further research, critique, and likely regulation. Our review suggests that major advances can already be made by comprehensively publishing all advertisements, regardless of whether they have been flagged as political; revoking any exemptions for media organisations; requiring basic verification of ad buyers’ identities; documenting how ad archive data is verified; and disclosing all targeting methods selected by the ad buyer (insofar as possible without publishing personal data).

Looking forward, ad archives present a fruitful research area for both legal and communication sciences scholars. For legal scholars, the flurry of law making around political advertising in general, and transparency in particular, raises important questions about regulatory design (in terms of how relevant actors and duties are defined, oversight and enforcement mechanisms, etc.). In future, ad archives also deserve consideration in commercial advertising governance, in such areas as consumer protection, child protection, or anti-discrimination.

The emergence of ad archives also has important implications for communications science. Firstly, ad archives could become an important resource of data for communications research, offering a range of data that would previously have been difficult or impossible to obtain. Although our paper has identified several shortcomings in this data, they might nonetheless provide a meaningful starting point to observe platforms’ political advertising. Secondly, ad archives are an interesting object of communications science research, in terms of how they are used by relevant stakeholders, and how this impacts advertising and communications practice. Further research along these lines will certainly be necessary to better understand ad archives, and to make them reach their full potential.

Acknowledgements

The authors wish to thank Frédéric Dubois, Chris Birchall, Joe Karaganis and Kristofer Erickson for their thoughtful reviewing and editing of this article. The authors also wish to thank Frederik Zuiderveen Borgesius and Sam Jeffers for their helpful insights during the writing process, as well as the participants in the ICA 2019 Post-Conference on the Rise of the Platforms and particularly the organisers: Erika Franklin Fowler, Sarah Anne Ganter, Dave Karpf, Rasmus Kleis Nielsen, Daniel Kreiss and Shannon McGregor.

References

Albright, J. (2018, November 4). Facebook and the 2018 Midterms: A Look at the Data – The Micro-Propaganda Machine. Retrieved from https://medium.com/s/the-micro-propaganda-machine/the-2018-facebook-midterms-part-i-recursive-ad-ccountability-ac090d276097

Ali, M., Sapiezynski, P., Bogen, M., Korolova, A., Mislove, A., & Rieke, A. (2019). Discrimination through optimization: How Facebook’s ad delivery can lead to skewed outcomes. Arxiv [Cs]. Retrieved from https://arxiv.org/pdf/1904.02095.pdf

Angelopoulos, C. J., Brody, A., Hins, A. W., Hugenholtz, P. B., Leerssen, P., Margoni, T., McGonagle, T., & van Hoboken, J. V. J. (2015). Study of fundamental rights limitations for online enforcement through self-regulation. Institute for Information Law (IViR). Retrieved from https://pure.uva.nl/ws/files/8763808/IVIR_Study_Online_enforcement_through_self_regulation.pdf

Andringa, P. (2018). Interactive: See Political Ads Targeted to You on Facebook. NBC. Retrieved from http://www.nbcsandiego.com/news/tech/New-Data-Reveal-Wide-Range-Political-Actors-Facebook-469600273.html

Arnett v. Kennedy, 416 U.S. 134 (Supreme Court of the United States, 1974).

Parliament of the Netherlands. (2019). Motion for Complete Transparency in the Buyers of Political Advertisements on Facebook. Retrieved from https://www.parlementairemonitor.nl/9353000/1/j9vvij5epmj1ey0/vkvudd248rwa

Ausloos, J., & Dewitte, P. (2018). Shattering one-way mirrors – data subject access rights in practice. International Data Privacy Law, 8(1), 4–28. doi:10.1093/idpl/ipy001

Barocas, S. (2012). The Price of Precision: Voter Microtargeting and Its Potential Harms to the Democratic Process. Proceedings of the First Edition Workshop on Politics, Elections and Data, 31–36. doi:10.1145/2389661.2389671

Benkler, Y., Faris, R., & Roberts, H. (2018). Network Propaganda: Manipulation, Disinformation, and Radicalization in American Politics. Oxford: Oxford University Press.

Bodó, B., Helberger, N., & Vreese, C. H. de. (2017). Political micro-targeting: a Manchurian candidate or just a dark horse? Towards the next generation of political micro-targeting research. Internet Policy Review, 6(4). doi: 10.14763/2017.4.776

Boerman, S. C., Kruikemeier, S., & Zuiderveen Borgesius, F. J. (2017). Online Behavioral Advertising: A Literature Review and Research Agenda. Journal of Advertising, 46(3), 363–376. doi:10.1080/00913367.2017.1339368

Borgesius, F.J., Moller, J., Kruikemeier, S., Ó Fathaigh, R., Irion, K., Dobber, T., Bodo, B., & de Vreese, C. (2018). Online Political Microtargeting: Promises and Threats for Democracy. Utrecht Law Review, 14(1). doi:10.18352/ulr.420

Carbajal, A., Kint, J., Mills Wade, A., Brooks, L. T., Chavern, D., McKenzie, A. B., & Golden, M. (2018). Open Letter to Mark Zuckerberg on Alternative Solutions for Politics Tagging. Retrieved from https://www.newsmediaalliance.org/wp-content/uploads/2018/06/vR_Alternative-Facebook-Politics-Tagging-Solutions-FINAL.pdf

Cardoso, T. (2019, March 4). Google to ban political ads ahead of federal election, citing new transparency rules. The Globe and Mail. Retrieved from https://www.theglobeandmail.com/politics/article-google-to-ban-political-ads-ahead-of-federal-election-citing-new/

Chavern, D. (2018, May 18). Open Letter to Mr. Zuckerberg. News Media Alliance. Retrieved from http://www.newsmediaalliance.org/wp-content/uploads/2018/05/FB-Political-Ads-Letter-FINAL.pdf

Chester, J., & Montgomery, K. C. (2017). The role of digital marketing in political campaigns. Internet Policy Review, 6(4). doi:10.14763/2017.4.773

Collins, B. (2016, April 21). Hillary PAC Spends $1 Million to ‘Correct’ Commenters on Reddit and Facebook. Retrieved from https://www.thedailybeast.com/articles/2016/04/21/hillary-pac-spends-1-million-to-correct-commenters-on-reddit-and-facebook

House of Commons Select Committee on Digital, Culture, Media and Sport. (2019). Disinformation and ‘fake news’: Final Report. Retrieved from https://publications.parliament.uk/pa/cm201719/cmselect/cmcumeds/1791/179103.htm#_idTextAnchor000

Edelman, B. (2014). Pitfalls and Fraud In Online Advertising Metrics: What Makes Advertisers Vulnerable to Cheaters, And How They Can Protect Themselves. Journal of Advertising Research, 54(2), 127–132. doi:10.2501/JAR-54-2-127-132

Edelson, L., Sakhuja, S., Dey, R., & McCoy, D. (2019). An Analysis of United States Online Political Advertising Transparency. ArXiv [Cs]. Retrieved from http://arxiv.org/abs/1902.04385

European Commission. (2018, September 26). Code of Practice on Disinformation. Retrieved from https://ec.europa.eu/digital-single-market/en/news/code-practice-disinformation

European Commission. (2019). Third monthly intermediate results of the EU Code of Practice against disinformation. Retrieved from https://ec.europa.eu/digital-single-market/en/news/third-monthly-intermediate-results-eu-code-practice-against-disinformation

Fanta, A. & Rudl, T. (2019, July 17). Leaked document: EU Commission mulls new law to regulate online platforms. Netzpolitik.org. Retrieved from: https://netzpolitik.org/2019/leaked-document-eu-commission-mulls-new-law-to-regulate-online-platforms/

Facebook. (2019a). Issues of national importance. Retrieved from https://www.facebook.com/business/help/214754279118974

Facebook. (2019b). Ads about social issues, elections or politics. Retrieved from https://www.facebook.com/business/help/208949576550051

Frier, S. (2018, July 2). Facebook’s Political Rule Blocks Ads for Bush’s Beans, Singers Named Clinton. Bloomberg. Retrieved from https://www.bloomberg.com/news/articles/2018-07-02/facebook-s-algorithm-blocks-ads-for-bush-s-beans-singers-named-clinton

Fulgoni, G. M. (2016). Fraud in Digital Advertising: A Multibillion-Dollar Black Hole: How Marketers Can Minimize Losses Caused by Bogus Web Traffic. Journal of Advertising Research, 56(2), 122. doi:10.2501/JAR-2016-024

Foucault, M. (1977). Discipline and Punish: The Birth of the Prison. New York: Pantheon Books.

Ghosh, A., Venkatadri, G., & Mislove, A. (2019). Analyzing Political Advertisers’ Use of Facebook’s Targeting Features. Retrieved from https://www.ieee-security.org/TC/SPW2019/ConPro/papers/ghosh-conpro19.pdf

Goldman, R. (2017). Update on Our Advertising Transparency and Authenticity Efforts. Facebook Newsroom. Retrieved from https://newsroom.fb.com/news/2017/10/update-on-our-advertising-transparency-and-authenticity-efforts/

Google (2019a). Implementation Report for EU Code of Practice on Disinformation. Retrieved from https://ec.europa.eu/information_society/newsroom/image/document/2019-5/google_-_ec_action_plan_reporting_CF162236-E8FB-725E-C0A3D2D6CCFE678A_56994.pdf

Google (2019b). Verification for election advertising in the European Union. Retrieved from https://support.google.com/adspolicy/answer/9211218

Guha, S., Cheng, B., & Francis, P. (2010). Challenges in measuring online advertising systems. Proceedings of the 10th Annual Conference on Internet Measurement - IMC ’10, 81. doi:10.1145/1879141.1879152

Hansen, H. K., Christensen, L. T., & Flyverbom, M. (2015). Logics of transparency in late modernity: Paradoxes, mediation and governance. European Journal of Social Theory, 18(2), 117–131. doi:10.1177/1368431014555254

Hounsel, A., Mathias, J. N., Werdmuller, B., Griffey, J., Hopkins, M., Peterson, C., … Feamster, N. (2019). Estimating Publication Rates of Non-Election Ads by Facebook and Google. Retrieved from https://github.com/citp/mistaken-ad-enforcement/blob/master/estimating-publication-rates-of-non-election-ads.pdf

Howard, P. N., Ganesh, B., Liotsiou, D., Kelly, J., & François, C. (2018). The IRA, Social Media and Political Polarization in the United States, 2012–2018 [Working Paper 2018.2]. Oxford: Project on Computational Propaganda. Retrieved from https://comprop.oii.ox.ac.uk/research/ira-political-polarization/

Howard, P. (2019, March 27). A Way to Detect the Next Russian Misinformation Campaign. The New York Times. Retrieved from https://www.nytimes.com/2019/03/27/opinion/russia-elections-facebook.html?module=inline

Keller, D. & Leerssen, P. (in press). Facts and where to find them: Empirical research on internet platforms and content moderation. In N. Persily & J. Tucker (eds), Social Media and Democracy: The State of the Field. Cambridge: Cambridge University Press.

Klobuchar, A., Warner, R., & McCain, J. (2017, October 19). The Honest Ads Act. Retrieved from https://www.congress.gov/bill/115th-congress/senate-bill/1989/text

Kuczerawy, A. (2019, in press). Fighting online disinformation: did the EU Code of Practice forget about freedom of expression? In E. Kużelewska, G. Terzis, D. Trottier, & D. Kloza (Eds.), Disinformation and Digital Media as a Challenge for Democracy. Cambridge: Intersentia.

Lomas, N. (2018, July 26). Facebook finally hands over leave campaign Brexit ads. Techcrunch. Retrieved from: https://techcrunch.com/2018/07/26/facebook-finally-hands-over-leave-campaign-brexit-ads/

Lapowsky, I. (2018). Obscure Concealed-Carry Group Spent Millions on Facebook Political Ads. WIRED. Retrieved from https://www.wired.com/story/facebook-ads-political-concealed-online/

Leathern, R. (2019). Updates to our ad transparency and authorisation efforts. Retrieved from: https://www.facebook.com/facebookmedia/blog/updates-to-our-ads-transparency-and-authorisation-efforts

Leiser, M. (2016). AstroTurfing, ‘CyberTurfing’ and other online persuasion campaigns. European Journal of Law and Technology, 7(1). Retrieved from http://ejlt.org/article/view/501

Livingstone, S. (2018). Tackling the Information Crisis: A Policy Framework for Media System Resilience [Report]. London: LSE Commission on Truth, Trust and Technology. Retrieved from http://www.lse.ac.uk/media-and-communications/assets/documents/research/T3-Report-Tackling-the-Information-Crisis-v6.pdf

Lyon, D. (2006). Theorizing Surveillance: The Panopticon and Beyond. Devon: Willan Publishing.

Macleod, A. (2019). Fake News, Russian Bots and Putin’s Puppets. In A. MacLeod (Ed.), Propaganda in the Information Age: Still Manufacturing Consent. London: Routledge.

Matias, J. N., Hounsel, A., & Hopkins, M. (2018, November 2). We Tested Facebook’s Ad Screeners and Some Were Too Strict. The Atlantic. Retrieved from: https://www.theatlantic.com/technology/archive/2018/11/do-big-social-media-platforms-have-effective-ad-policies/574609/

Merrick, R. (2019, December 25). Brexit: Leave ‘very likely’ won EU referendum due to illegal overspending, says Oxford professor’s evidence to High Court. The Independent. Retrieved from: https://www.independent.co.uk/news/uk/politics/vote-leave-referendum-overspending-high-court-brexit-legal-challenge-void-oxford-professor-a8668771.html

Merrill, J. B. (2018). How Big Oil Dodges Facebook’s New Ad Transparency Rules. Retrieved 22 April 2019, from ProPublica website: https://www.propublica.org/article/how-big-oil-dodges-facebooks-new-ad-transparency-rules

Merrill, J. B., & Tobin, A. (2019, January 28). Facebook Moves to Block Ad Transparency Tools — Including Ours. ProPublica. Retrieved from https://www.propublica.org/article/facebook-blocks-ad-transparency-tools

Montellaro, Z. (2019). House Democrats forge ahead on electoral reform bill. POLITICO. Retrieved from https://politi.co/2GO4eJ8

Mozilla (2019, March 27). Facebook and Google: This is What an Effective Ad Archive API Looks Like. The Mozilla Blog. Retrieved from: https://blog.mozilla.org/blog/2019/03/27/facebook-and-google-this-is-what-an-effective-ad-archive-api-looks-like

Mulgan, R. (2000). Comparing Accountability in the Public and Private Sectors. Australian Journal of Public Administration, 59(1), 87–97. doi:10.1111/1467-8500.00142

Ohm, P. (2010). Broken Promises of Privacy: Responding To The Surprising Failure of Anonymization. UCLA Law Review, 57, 1701–1777. Retrieved from https://www.uclalawreview.org/pdf/57-6-3.pdf

Netherlands Ministry of the Interior. (2019). Response to the Motion for Complete Transparency in the Buyers of Political Advertisements on Facebook. Retrieved from: https://www.tweedekamer.nl/kamerstukken/detail?id=2019Z03283&did=2019D07045

O’Sullivan, D. (2018). What an anti-Ted Cruz meme page says about Facebook’s political ad policy. CNN. Retrieved from: https://www.cnn.com/2018/10/25/tech/facebook-ted-cruz-memes/index.html

Parsons, C. (2019). The (In)effectiveness of Voluntarily Produced Transparency Reports. Business & Society, 58(1), 103–131. doi:10.1177/0007650317717957

Pasquale, F. (2015). The Black Box Society: The Secret Algorithms That Control Money and Information. Cambridge: Harvard University Press. Retrieved from https://www.jstor.org/stable/j.ctt13x0hch

Republic of France. (2018). Loi n° 2018-1202 du 22 décembre 2018 relative à la lutte contre la manipulation de l’information. Retrieved from https://www.legifrance.gouv.fr/affichTexte.do?cidTexte=JORFTEXT000037847559&categorieLien=id

Republic of Ireland (2017), Online Advertising and Social Media (Transparency) Bill 2017. Retrieved from: https://data.oireachtas.ie/ie/oireachtas/bill/2017/150/eng/initiated/b15017d.pdf

Rieke, A., & Bogen, M. (2018). Leveling the Platform: Real Transparency for Paid Messages on Facebook. UpTurn Report. Retrieved from https://www.upturn.org/static/reports/2018/facebook-ads/files/Upturn-Facebook-Ads-2018-05-08.pdf

Rosenberg, M. (2019, July 25). Ad Tool Facebook Built to Fight Disinformation Doesn’t Work as Advertised. The New York Times. Retrieved from: https://www.nytimes.com/2019/07/25/technology/facebook-ad-library.html

Sanders, E. (2018, May 9). Washington Public Disclosure Commission Passes Emergency Rule Clarifying That Facebook and Google Must Turn Over Political Ad Data. The Stranger. Retrieved from https://www.thestranger.com/slog/2018/05/09/26158462/washington-public-disclosure-commission-passes-emergency-rule-clarifying-that-facebook-and-google-must-turn-over-political-ad-data

Sanders, E. (2018, October 16). Facebook Says It's Immune from Washington State Law. The Stranger. Retrieved from: https://www.thestranger.com/slog/2018/10/16/33926412/facebook-says-its-immune-from-washington-state-law

Shane, S. (2017, November 1). These are the Ads Russia Bought on Facebook in 2016. The New York Times. Retrieved from https://www.nytimes.com/2017/11/01/us/politics/russia-2016-election-facebook.html

Shukla, S. (2019, March 28). A Better Way to Learn About Ads on Facebook. Facebook Newsroom. Retrieved from https://newsroom.fb.com/news/2019/03/a-better-way-to-learn-about-ads/

Singer, N. (2018, August 16). ‘Weaponized Ad Technology’: Facebook’s Moneymaker Gets a Critical Eye. The New York Times. Retrieved from https://www.nytimes.com/2018/08/16/technology/facebook-microtargeting-advertising.html

Timmons, H. & Kozlawska, H. (2018, March 22). Facebook’s quiet battle to kill the first transparency law for online political ads. Quartz. Retrieved from: https://qz.com/1235363/mark-zuckerberg-and-facebooks-battle-to-kill-the-honest-ads-act/

Turton, W. (2018, October 30). We posed as 100 senators to run ads on Facebook. Facebook approved all of them. VICE News. Retrieved from: https://news.vice.com/en_ca/article/xw9n3q/we-posed-as-100-senators-to-run-ads-on-facebook-facebook-approved-all-of-them

Twitter. (2019a). Implementation Report for EU Code of Practice on Disinformation. Retrieved from http://ec.europa.eu/information_society/newsroom/image/document/2019-5/twitter_progress_report_on_code_of_practice_on_disinformation_CF162219-992A-B56C-06126A9E7612E13D_56993.pdf

Twitter. (2019b). How to get certified as a political advertiser. Retrieved from https://business.twitter.com/en/help/ads-policies/restricted-content-policies/political-content/how-to-get-certified.html

van Dijck, J., Poell, T., & de Waal, M. (2018). The Platform Society: Public Values in a Connective World. Oxford: Oxford University Press.

Van Til, G. (2019). Zelfregulering door online platforms: een waar wondermiddel tegen online desinformatie? [Self-regulation by online platforms: a true panacea against online disinformation?]. Mediaforum, 1(13). Retrieved from https://www.ivir.nl/publicaties/download/Mediaforum_2019_1_vanTil.pdf

Vandor, M. (2018). Indexing news Pages on Facebook for the Ad Archive. Facebook Media. Retrieved from: https://www.facebook.com/facebookmedia/blog/indexing-news-pages-on-facebook-for-the-ad-archive

Venturini, T., & Rogers, R. (2019). “API-Based Research” or How can Digital Sociology and Journalism Studies Learn from the Facebook and Cambridge Analytica Data Breach. Digital Journalism, 7(4), 532–540. doi: 10.1080/21670811.2019.1591927

Vlassenroot, E., Chambers, S., Di Pretoro, E., Geeraert, F., Haesendonck, G., Michel, A., & Mechant, P. (2019). Web archives as a data resource for digital scholars. International Journal of Digital Humanities, 1(1), 85–111. doi:10.1007/s42803-019-00007-7

Wagner, B. (2018). Free Expression?: Dominant information intermediaries as arbiters of internet speech. In M. Moore & D. Tambini (Eds.), Digital Dominance. Oxford: Oxford University Press.

Warner, R. (2017). The Honest Ads Act (primer). Retrieved from https://www.warner.senate.gov/public/index.cfm/the-honest-ads-act

Waterson, J. (2019, January 14). Obscure pro-Brexit group spends tens of thousands on Facebook ads. The Guardian. Retrieved from https://www.theguardian.com/politics/2019/jan/14/obscure-pro-brexit-group-britains-future-spends-tens-of-thousands-on-facebook-ads

Waterson, J. (2019, April 3). Facebook Brexit ads secretly run by staff of Lynton Crosby firm. The Guardian. Retrieved from: https://www.theguardian.com/politics/2019/apr/03/grassroots-facebook-brexit-ads-secretly-run-by-staff-of-lynton-crosby-firm

Zuckerberg, M. (2019, March 30). The Internet needs new rules. Let’s start in these four areas. The Washington Post. Retrieved from https://www.washingtonpost.com/opinions/mark-zuckerberg-the-internet-needs-new-rules-lets-start-in-these-four-areas/2019/03/29/9e6f0504-521a-11e9-a3f7-78b7525a8d5f_story.html

Footnotes

1. E.g. Mozilla, 2019; Matias, Hounsel, & Hopkins, 2018; Merrill, 2018; Rieke & Bogen, 2018; Edelson et al., 2019; Andringa, 2018; Lapowsky, 2018; O’Sullivan, 2018; Waterson, 2019; Albright, 2018; Howard, 2019. See Section three for further discussion.

2. Parallel to the more general distinction between the governance of platforms and the governance by platforms (Gillespie, 2018).

3. The Commission describes the Code as a ‘self-regulatory’ instrument. However, given the Commission’s involvement in its development and oversight, we consider ‘co-regulatory’ a more apt description (Kuczerawy, 2019; more generally see Angelopoulos et al., 2015).

4. Installing such notice and takedown for unlawful content is a requirement under EU law. In the US, notice and takedown procedures are only required for copyright and trademark claims, and the majority of takedown occurs on a strictly voluntary basis. In practice, much of the content removed under these regimes is assessed on the basis of platforms’ voluntary standards (Keller & Leerssen, 2019).

Tax compliance and privacy rights in profiling and automated decision making

Introduction

The use of information technology is vital for the effective administration of tax systems, and in recent years tax administrations around the world have increasingly invested in information technology tools (OECD, 2016a; OECD, 2016b; OECD, 2019). Given the high number of taxpayers that need to be assessed effectively and efficiently, the support offered by new technologies has represented an opportunity for tax administrations. At the same time, while the digital economy poses new challenges to tax authorities and efficient tax law enforcement (OECD, 2015), the evolution of the digital world, including new cross-border business practices, has required revenue administrations to keep pace with new technologies themselves (Ehrke-Rabel, 2019a). Indeed, the complexity of interactions and transactions taking place at the taxpayer level requires the processing of an increasing amount of information (OECD, 2016a; OECD, 2016b; Ehrke-Rabel, 2019a). Thus, the growth in Big Data and electronic financial transactions presents opportunities for tax authorities to use these data to collect taxes more efficiently, and has proven popular with governments seeking to clamp down on tax avoidance and evasion.

Alongside new IT systems, data play an important role in the effective management of a tax system. Even in traditional reporting systems, tax agencies have always relied on the high volume of information provided by taxpayers and third parties. Thanks to new technologies, however, it is now much easier for tax administrations to gather and process such data. Governments have been creating and accessing very large amounts of taxpayer data from different sources. These sources include public records, information gathered by other authorities (whether domestic or not), businesses (Jensen & Wöhlbier, 2012) and other third parties such as employers and banking or financial institutions (Ehrke-Rabel, 2019a; Ehrke-Rabel, 2019b). Data are the fuel of the technological tools implemented by tax administrations and can be used in tax collection, monitoring, and supporting auditing decisions. Indeed, by receiving and processing more data, tax administrations are able to reduce the information asymmetries that threaten equal and complete tax collection (Ehrke-Rabel, 2019a; Doran, 2009; Lederman, 2010; OECD, 2017). Furthermore, the data gathered in this way can facilitate economic and policy design (OECD, 2016a).

On the one hand, the increasing role of technology brings many advantages to the tax system, allowing for faster and more automated analysis of large volumes of data, minimising errors and saving time. On the other hand, the use of new technologies to process large amounts of taxpayer data, including personal data, also brings uncertainty about the level of automation that can be used without breaching privacy rights. For example, by processing this large amount of data, tax administrations can cluster taxpayers based on their profiles in order to monitor them and decide which taxpayers should be audited. Such automated profiling of taxpayers could ultimately lead to the automation of decisions that affect them. Moreover, concerns might arise as to who should provide the IT system, whether it is built in-house by the tax agency or outsourced.

The General Data Protection Regulation (GDPR) has introduced new provisions on how individuals can be profiled, and technologies that enable automated profiling, such as those used by tax administrations for risk management and advanced data analytics (OECD, 2016a), have the potential to present difficulties in this area. At the same time, the use of these technologies in the tax field is justified by the need to safeguard the public interest (Art. 6(3), Art. 9(2)(g), Art. 23(1)(e) GDPR), and advocating for transparency of the tools at their disposal could represent a risk for tax authorities.

This paper aims to highlight one of the current issues involved in finding the right balance between tax compliance and privacy rights. Its main focus is the relationship between the information and communications technology (ICT) tools that enable profiling and automated decision making in tax matters, and the GDPR provisions that protect privacy rights in this context. This contribution also describes the policy implications of the interactions between the GDPR and automated decision making carried out by tax authorities.

This article consists of four sections. The first contains an overview of the ICT tools used by tax administrations when carrying out their activities (e.g., tax monitoring, auditing, collection). The second section analyses how these ICT tools might perform profiling and automated decision making as defined by the GDPR. The third section highlights how, in the context of the GDPR, the European legislator has tried to balance tax compliance needs with individual privacy rights. Finally, section four describes the policy implications for EU member states arising from the relation between the GDPR provisions regulating profiling and automated decision making and the instruments at tax authorities’ disposal in the fight against tax evasion and fraud.

Section 1: ICT tools used by tax administrations

As recent studies by the Organisation for Economic Co-operation and Development (OECD) and the Intra-European Organisation of Tax Administrations (IOTA) show, tax administrations around the world have integrated new technologies to improve their tax collection mechanisms (OECD, 2016a; OECD, 2016b; OECD, 2017; IOTA, 2018). More generally, revenue agencies need technology to ensure transparent operations, greater efficiency, and responsiveness to the needs of government and taxpayers. The implementation of new technologies by tax administrations varies around the world, and between developing and developed countries (Kariuki, 2014). The need for IT is also reflected in the budgets of tax administrations and requires careful management (OECD, 2016a; OECD, 2016b). According to previous studies, 159 out of 193 UN member states use ICT-intensive systems for tax management (Tomar, Guicheney, Kyarisiima, & Zimani, 2016; World Bank, 2016).

In the last two decades, tax administrations have used ICT in different ways to enhance performance in revenue administration, including: providing readily accessible historical data; reducing mistakes, processing times and costs; and improving and promoting voluntary compliance, thereby increasing revenue collection (Smith, 1969; Edwards-Dowe, 2008; Chatama, 2013; Kariuki, 2014). Some administrations use new technologies only to perform their core and basic tasks, such as registration, processing, payment and accounting, audit targeting and debt collection (OECD, 2016a; OECD, 2016b; IOTA, 2018). Recent examples of ICT implementations in tax matters can be found in Slovenia, where certified electronic cash registers are connected to the tax administration, which is informed about transactions in real time, and in Chile and Italy, which have adopted electronic invoicing systems that directly connect taxpayers and the tax administration (IOTA, 2018). More broadly, examples of ICT tools used by tax authorities typically include e-filing of tax returns, e-payments, data sharing and data matching, taxpayer self-help portals, and chatbots for technical enquiries (IOTA, 2018). These instruments rely on automated data matching, precedent databases, campaign management and rules-based systems. Data matching is fuelled by information gathered from several records, including third party information. This information is typically used to assess the information provided by the taxpayer, while a precedent database informs the formulation of tax rulings. Finally, based on the data they are fed with, these systems may be enabled to decide what actions should be taken, such as sending a communication to the taxpayer about their tax situation (Kariuki, 2014).
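
As a simplified illustration of the rules-based data matching described above, the following sketch compares income declared by a taxpayer with third-party reports and flags discrepancies for follow-up. The threshold and record fields are hypothetical, not those of any actual administration.

```python
def match_declared_income(declared, third_party_reports, tolerance=0.05):
    """Compare declared income with the sum of third-party reports.

    Flags the case when the declared amount falls short of third-party
    reports by more than the (hypothetical) tolerance.
    """
    reported_total = sum(third_party_reports)
    if reported_total == 0:
        return "no third-party data"
    gap = (reported_total - declared) / reported_total
    if gap > tolerance:
        return "flag for review: possible under-declaration"
    return "consistent"

# Employer and bank each report income for the same taxpayer.
print(match_declared_income(42_000, [30_000, 15_000]))  # flag for review
print(match_declared_income(45_500, [30_000, 15_000]))  # consistent
```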

In a recent study, the OECD has highlighted the benefits of ICT for tax purposes, with particular attention to the role of Big Data and advanced analytics techniques for tax administrations (OECD, 2016a). Referring to the collection of Big Data from third-party sources, which could then be combined with tax data, the OECD underlines how this will allow revenue bodies to develop tailored e-services that target the specific needs of individual and business taxpayers (OECD, 2016a). Big Data could also improve the ways in which revenue bodies examine and understand taxpayers' activities and behaviour: it can be used for information storage, for analysis across multiple periods, for compliance, control and risk management activities, for identifying and tracking changes in taxpayer abilities and performance so that revenue bodies can respond more effectively and in a timelier manner, and for supporting whole-of-government outcomes by sharing insights and information (Ehrke-Rabel, 2019a; IOTA, 2018).

Regarding advanced analytics techniques, a 2016 OECD survey showed that audit case selection is the principal application of advanced analytics (OECD, 2016a). Moreover, 15 out of the 16 tax administrations that answered the OECD survey indicated that they were deploying advanced analytics to prioritise cases for investigation, audit or other compliance intervention (OECD, 2016a). According to the same OECD study, administrations generally create unsupervised models, that is, models seeking to identify interesting or anomalous patterns in the data rather than trying to learn from the outcomes of specific cases. Moreover, tax administrations such as the Irish and the Dutch ones have experimented with unsupervised segmentation techniques. These techniques represent a sectorial application of broader cluster analysis, through which it is possible to identify groups of taxpayers who are similar to each other in some significant respects and dissimilar to the other groups identified (OECD, 2016a). Ireland has also adopted an alternative approach to segmentation, which focuses on grouping taxpayers based largely on their predicted response to intervention. According to this model, if all taxpayers respond in the same way to a given intervention, there is little practical value in segmentation, whereas if there are large and consistent differences in response to intervention, then segmentation is worthwhile. This approach is based on uplift modelling techniques, which are likely to create multiple segmentations; ultimately, each type of intervention would require a different segmentation of the taxpayer base (OECD, 2016a).
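
As a purely illustrative sketch of what such unsupervised segmentation can look like in practice, the following Python snippet clusters synthetic taxpayer records with k-means. The features, cluster count and data are invented assumptions and do not reflect the Irish or Dutch models described above.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical features describing taxpayers: turnover, deductions claimed,
# and share of cash transactions. All values are synthetic.
rng = np.random.default_rng(0)
features = rng.normal(size=(500, 3)) * [50_000, 5_000, 0.2] + [120_000, 8_000, 0.3]

# Standardise the features so no single variable dominates the distance metric,
# then partition the taxpayers into a small number of segments.
X = StandardScaler().fit_transform(features)
segments = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

# Each segment groups taxpayers who look similar to each other and dissimilar
# to the other groups; compliance strategies could then be tailored per segment.
for s in range(4):
    print(f"segment {s}: {np.sum(segments == s)} taxpayers")
```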

Two examples of unsupervised models can also be found in the Australian nearest neighbours model, which is able to identify incorrect income tax deductions, and in the Irish income-consumption model, which aims at identifying under-declaration of income (OECD, 2016a). The common element of both models, even though they use different statistical techniques (k-nearest neighbours in the Australian model and multiple regression in the Irish income-consumption model), is that they compare a taxpayer's return to those of his or her peers. In this way, it is possible to identify outliers for further investigation, and also to identify cases which, even though they may appear unusual on initial inspection, are in fact normal once compared to other, similar cases (OECD, 2016a). Other examples of the implementation of advanced analytics are the Swedish predictive model, which specifically identifies unreported income as distinct from over-claiming of deductions, and the US structured income flows model, which links the analysis of related entities to uncover misreporting at the entity level and non-compliance associated with the structure of income flows (OECD, 2016a).
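
In the same peer-comparison spirit, here is a minimal sketch (again with synthetic data and an invented threshold, not the Australian model itself) that flags returns whose claimed deductions are far above those of their nearest peers.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Hypothetical peer comparison: each taxpayer is described by income and
# claimed deductions (synthetic data) and compared with similar taxpayers.
rng = np.random.default_rng(1)
income = rng.uniform(20_000, 150_000, size=300)
deductions = income * rng.uniform(0.02, 0.08, size=300)
X = np.column_stack([income, deductions])

# Find the 10 nearest peers of every taxpayer; the first neighbour returned
# is the taxpayer itself, so it is excluded from the peer average.
nn = NearestNeighbors(n_neighbors=11).fit(X)
_, idx = nn.kneighbors(X)
peer_mean_deductions = deductions[idx[:, 1:]].mean(axis=1)

# A return whose deductions far exceed those of its peers is an outlier worth
# a closer look; one that merely looks unusual in isolation may turn out to be
# normal once compared with similar cases.
ratio = deductions / peer_mean_deductions
outliers = np.where(ratio > 1.5)[0]
print(f"{len(outliers)} returns flagged for review out of {len(X)}")
```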

The 2016 OECD survey also shows that tax administrations are using both predictive and prescriptive techniques. The former aim at identifying taxpayers who are more likely to fail to meet their obligations, while the latter are implemented to establish the most effective way to communicate with a certain group of taxpayers. Regarding predictive techniques, tax administrations in countries such as Australia, Canada, Norway and the United Kingdom have implemented programmes for risk modelling and controlled experimentation that identify which cases are likely to fail to meet payment or filing obligations, and which interventions are likely to remedy the problem. In these cases, analytic outputs are used both to prioritise cases and to determine treatment paths. For example, the United Kingdom has built models that assess taxpayer risk prior to filing (e.g., determining which taxpayers are most likely to miss filing deadlines) in order to target interventions that encourage compliance (OECD, 2016a).
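
A toy illustration of such a predictive risk model follows: a simple classifier trained on synthetic filing histories ranks taxpayers by their estimated probability of missing a deadline. Feature names, coefficients and data are all invented for illustration and are not drawn from any administration's actual model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical predictive risk model: a classifier trained on past filing
# behaviour estimates, before the deadline, which taxpayers are likely to miss it.
rng = np.random.default_rng(2)
n = 2_000
past_late_filings = rng.integers(0, 4, size=n)    # late filings in prior years
months_registered = rng.integers(1, 240, size=n)  # time since registration
has_tax_agent = rng.integers(0, 2, size=n)        # uses a professional adviser
X = np.column_stack([past_late_filings, months_registered, has_tax_agent])

# Synthetic ground truth: prior lateness increases risk, having an agent reduces it.
logits = 0.9 * past_late_filings - 1.2 * has_tax_agent - 0.5
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Rank taxpayers by predicted risk so that reminder letters or other
# interventions can be targeted at the highest-risk cases first.
risk = model.predict_proba(X_test)[:, 1]
top_risk = np.argsort(risk)[::-1][:10]
print("highest-risk cases (test-set indices):", top_risk)
```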

An example of a prescriptive-analytics technique is so-called experimental design, where treatment and control groups are partitioned and observed in order to isolate the effects of specific actions, interventions, or treatments. This instrument is particularly used for direct taxpayer communications; the Norwegian administration, for example, has engaged a behavioural economics researcher to test a variety of communications intended to improve compliance on declarations of foreign income (OECD, 2016a).
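
The following sketch illustrates the bare mechanics of such an experimental design, under purely invented numbers: taxpayers are randomly split between a standard letter (control) and a redesigned letter (treatment), and the difference in declaration rates is tested for statistical significance.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical experiment: declaration rates are compared between a control
# group (standard letter) and a treatment group (redesigned letter).
rng = np.random.default_rng(3)
n_control, n_treatment = 5_000, 5_000
declared_control = rng.random(n_control) < 0.62      # 62% baseline compliance
declared_treatment = rng.random(n_treatment) < 0.66  # 66% with the new letter

p_c, p_t = declared_control.mean(), declared_treatment.mean()

# Two-proportion z-test: is the observed difference larger than random
# assignment alone would plausibly produce?
p_pool = (declared_control.sum() + declared_treatment.sum()) / (n_control + n_treatment)
se = np.sqrt(p_pool * (1 - p_pool) * (1 / n_control + 1 / n_treatment))
z = (p_t - p_c) / se
p_value = 2 * norm.sf(abs(z))

print(f"control: {p_c:.3f}, treatment: {p_t:.3f}, z = {z:.2f}, p = {p_value:.4f}")
```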

Particularly relevant for the scope of this analysis is the use of technology for tax auditing risk assessment. In this profiling modality, it should not be possible to single out individuals by name or identifying characteristics. However, it is quite difficult to determine when the collected information and the technological system are effectively singling out taxpayers. This could be the case when a process attaches particular weight to taxpayers with a certain postal code, gender, or birth month (Ohm, 2010, as cited by Kroll et al., 2016). The auditing risk assessment is usually conducted by also checking the tax returns that were previously filed (Kroll et al., 2016).

Section 2: How the GDPR notions of profiling and automated decision making fit in the use of ICT tools by tax administrations

In the context of this paper, we focus on two concepts which are relevant to the way tax agencies use ICT tools and which are both contained in the GDPR, namely profiling and automated decision making. While the academic discourse tends to focus on the commercial applications of these techniques to better segment markets and tailor services and products to individual needs, profiling and automated decision making can be, and are, implemented also in the public sector (e.g., in education, healthcare and transportation). Indeed, in both the private and the public sector, profiling and automated decision-making can increase the efficiency of delivering a certain service. However, the use of these techniques may raise significant risks for individuals' rights and freedoms.

As we have seen in the previous section, tax authorities are implementing new technologies for different reasons (e.g., better tax assessment and collection, better communication with taxpayers, increasing tax compliance ex ante). In many of the examples reported there is a clustering of taxpayers based on the different purposes pursued by the tax administration.

Considering personal income tax, new technologies clustering taxpayers based on the information contained in their tax returns and received from third parties can be a very useful tool for verifying whether the income declared by a natural person is correct. Personal income tax is generally built on different income categories (e.g., business income, employment income, capital income), tax exemptions and the possibility to deduct expenses. This construction makes it possible to design it as a progressive tax and to comply with the ability-to-pay principle.

Traditionally, in order to minimise the interference with taxpayers' personal autonomy, tax collection has been based on the information provided by taxpayers through the submission of their tax returns (Ehrke-Rabel, 2019a). The tax return is the instrument through which natural persons declare the income they have produced during the previous fiscal year.1 Depending on the bracket into which a taxpayer's income falls, taxes will be due according to the applicable tax rate. Once the tax return is submitted, the tax authority proceeds to the verification and assessment of the taxes due. Because of the high number of tax returns submitted to tax authorities, which essentially makes this a mass procedure, it has long been assumed that tax authorities would not be able to thoroughly verify all returns before assessment. Consequently, initial assessments were (and still are) regularly subject to revision through tax audits (Ehrke-Rabel, 2019a; Vaillancourt et al., 2011; Russell, 2010; Jensen & Wöhlbier, 2012; EU Commission, 2006; OECD, 2006; OECD, 2017).

Moreover, maintaining a progressive system while at the same time avoiding revenue losses has created a complex system for both tax administrations and taxpayers. This has led to the introduction of pre-filled tax returns and the creation of online applications to calculate the amount of taxes due. By matching the submitted tax returns with other information gathered from other public administrations or third parties (e.g., employers, financial institutions, etc.), tax administrations are able to verify whether the declared income is correct. Indeed, a pivotal role in the good functioning of the tax auditing system is played by data transmitted to tax authorities by third parties.2 However, matching these data through ICT tools could lead to profiling of taxpayers and consequently to automated decision making pursuant to the GDPR definitions.

2.1 Profiling performed by tax authorities

As defined by the GDPR, profiling can be described as any form of automated processing of personal data aiming at the evaluation of certain personal aspects of a natural person. Among these aspects, the European legislator lists the natural person's performance at work, economic situation, health, personal preferences, interests, behaviour, location or movements (Art. 4(4) GDPR).

It follows from this definition that, in order to verify whether profiling can take place in the tax sphere, three elements need to be present in the way tax administrations use the ICT tools at their disposal and in the way these tools are built:

  1. The processing must be automated;
  2. It must be carried out on personal data of a natural person;
  3. The processing scope is the evaluation of the personal aspects of a natural person.

As described above, the increasing number of possible deductions, the different types of income that taxpayers can produce simultaneously, and the sheer number of taxpayers make it impossible for tax administrations to go through each tax return manually. Using employees to check each tax return would be too expensive for tax administrations (Ehrke-Rabel, 2019c; Lipniewicz, 2017) and would divert resources that could be used for other public activities.

This has led to the adoption of automated systems which are able to go through a large amount of data and verify whether the information submitted by taxpayers is correct. In this sense, the processing of the gathered taxpayers' data is automated and thus fulfils one of the GDPR requirements for the processing of data to be considered as profiling.

Another aspect which needs to be considered is whether the taxpayers' data collected and processed by the tax administrations are personal data. Indeed, the information at the disposal of the tax administrations for verifying the income of a certain taxpayer relates to an identified or identifiable natural person who (as stated in the GDPR definition of personal data) “can be identified, directly or indirectly, by reference to identifiers such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person”.

Finally, in order to establish whether the use of ICT by tax administrations in the management of tax returns and the consequent verification of the correctness of the declared income might constitute a profiling activity, the scope of the processing must be the evaluation of personal aspects of a natural person. Among the examples of personal aspects cited in Art. 4(4), which defines the notion of profiling, is the economic situation of the natural person. This is at the heart of the evaluation of whether the declared income is correct, and in order to verify its correctness all the directly and indirectly relevant economic and non-economic elements will be taken into consideration. Indeed, these elements include financial accounts and expenses such as cars or immovable properties (in the latter case including the exact location and structural elements which intrinsically influence the price and value), but also medical, cultural or educational expenses.

One last aspect concerning profiling which needs to be considered is the possibility of carrying out group profiling. This type of profiling is based on data from existing groups, but it can also involve categorisation based on aspects shared by group members without them realising that they belong to that particular group (Mantelero, 2016). In the tax sector, risk management tools might divide taxpayers into groups with different risk levels based on different sets of data. It has been noticed that in this type of profiling there is a significant number of false positives (deciding that a person is a member of the group when they are not) or false negatives (deciding that a person is not a member of the group when they actually are) (Kamarinou, Millard, & Singh, 2017). Moreover, the presence of false positives and false negatives can lead to decisions which produce legal or significant effects on individual people; a simple numerical illustration of these two error types is sketched below. Consequently, Art. 22 GDPR might be applicable, since it requires that the decision based on the profiling addresses an individual and has legal or significant effects for him/her (Kamarinou et al., 2017).
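
The sketch below, built on entirely synthetic data and an arbitrary scoring rule, simply counts how many compliant taxpayers a crude group-based risk flag would wrongly catch (false positives) and how many non-compliant taxpayers it would miss (false negatives); it is an illustration of the error types, not of any real risk model.

```python
import numpy as np

# Hypothetical group-based risk profiling: taxpayers are assigned to a
# "high risk" group by a scoring rule, and the assignment is compared with
# their (synthetic) true behaviour.
rng = np.random.default_rng(4)
n = 10_000
truly_non_compliant = rng.random(n) < 0.05           # 5% actually under-declare
risk_score = rng.random(n) + 0.3 * truly_non_compliant
flagged_high_risk = risk_score > 0.8                  # crude group-membership rule

false_positives = np.sum(flagged_high_risk & ~truly_non_compliant)
false_negatives = np.sum(~flagged_high_risk & truly_non_compliant)
print(f"false positives (compliant but flagged): {false_positives}")
print(f"false negatives (non-compliant but not flagged): {false_negatives}")
```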

2.2 Automated decision making in tax matters

As will be further investigated in this section, profiling might also lead to decisions based on the processed data which can be automated; consequently, both individual profiling and group profiling might lead to the application of Art. 22 GDPR. With regard to automated decision making under the GDPR, two aspects need to be further analysed, especially in connection with their implications in tax matters. First, it is important to understand the scope of the word “decision”. Second, it is important to identify the cases where the decision is “solely” automated.

In tax matters, the use of software able to go through the data collected by tax authorities in relation to tax returns and information provided by third parties will lead to the identification of possible mismatches between what has been declared by the taxpayer and what results from the combination of all the information available to the tax authorities. Consequently, a tax assessment notice indicating a different amount of tax to be paid, together with the relevant sanctions (where more taxes are due than the taxpayer has paid), will be sent to the taxpayer. Depending on the procedural rules of the individual member state, the taxpayer will be given a certain amount of time to challenge the tax assessment notice. This means that the tax assessment notice, which is based on the results of the software matching the different information available to the tax authorities, is neither a final decision nor a court decision.

The meaning of the word “decision” in the context of automated decision making can be derived by looking at different parts of the GDPR text. It has already been highlighted that Art. 22 GDPR does not specify whether the decision mentioned in the article has to be a final decision or merely an interim or individual step taken during the automated processing. However, recital 71 of the GDPR expressly states that the word “decision” should also include “measure”. Thus, the word “decision” is to be understood in a broader sense. At the same time, Art. 22 of the GDPR describes the “decision” as one which produces legal effects or similarly significantly affects the data subject. On the one hand, with regard to the “legal” element, this requires that the decision be binding or that it create legal obligations for the data subject. In the case of the tax assessment notice, if the taxpayer neither challenges nor complies with it, the notice can be enforced by the relevant authorities. On the other hand, the fact that the GDPR adds the word “similarly”, absent from the previous directive, to the phrase “significantly affects” means that the threshold for significance must be similar to that of a decision producing a legal effect. Even if it can be argued that the “significant” element is rather vague, the Article 29 Working Party has identified categories of decisions which can be considered as producing “similarly significant” effects on data subjects (Veale & Edwards, 2018). These categories include decisions affecting someone’s access to health services or to education, decisions denying someone an employment opportunity or putting them at a serious disadvantage, and decisions affecting someone’s financial circumstances. Undoubtedly, tax assessment notices affect the financial circumstances of the data subject (Art. 29 Working Party, 2017).

The second aspect that needs to be considered in order to identify a solely automated decision is the level of human intervention. Art. 22 of the GDPR applies only where decisions are made in a “solely” automated way, and the scope of the word “solely” is decisive for the practical extent of the rights granted to data subjects (Bygrave, 2001; Wachter et al., 2017; Veale & Edwards, 2018). In order to frame the scope of the notion of “solely”, attention needs to be focused on the level of human intervention in the loop. Indeed, it is difficult to find completely automated systems where decisions are made “solely” by the algorithm (Veale & Edwards, 2018). Consequently, a literal interpretation of the word “solely” would significantly reduce the practical scope of application of Article 22 and might even lead to the wider introduction of nominal human intervention in the loop, consisting of mere “rubber-stamping”, in order to limit the application of Article 22 (Veale & Edwards, 2018). According to the Article 29 Working Party (2017), the activity leading to the decision should not be a token gesture; there must be an influential activity exercised by a human. The main issue in the context of this contribution is whether the mere signature of the tax agent responsible for the assessment procedure, on an assessment notice that is based entirely on the ICT system used and is to be sent to the taxpayer, can be considered a sufficient indication of human intervention. Depending on a case-by-case analysis, it might be that the tax agent had to carry out further investigations before finalising and sending the assessment notice. Nevertheless, the outcome of the ICT system on which the assessment letter is based will hardly be questioned by the tax agent. In fact, there are studies showing that even in systems whose explicit intention is merely to support a human decision-maker, the trust placed in the system’s automated logic, together with lack of time and reasons of convenience, tends to make the system operate as wholly automated (Skitka, 2000, as cited by Veale & Edwards, 2018). The difficulties in interpreting the level of human intervention emerge in particular from national experiences. For example, the German Federal Court has adopted a restrictive interpretation and considered any minimal human intervention as excluding the applicability of the old Art. 15 of the Data Protection Directive.3 By contrast, according to the opposite interpretation of the UK data protection authority (ICO), if only an irrelevant human intervention has been involved, Art. 22 should be applicable (Information Commissioner’s Office, 2017). From a scholarly perspective, there are different opinions. Some scholars have opted for the interpretation precluding the application of Art. 22 to any decision-making process in which even a minimal intervention is involved (Martini, 2017, as cited by Malgieri & Comandé, 2017). By contrast, Malgieri and Comandé (2017) argue that limiting the application to these cases can be compared to “a rubber-stamping on the automated processing, easily performed even by a monkey or another trained animal”.
Similarly, Veale & Edwards (2018), on the basis of the above-cited studies on the blind trust of human decision-makers in automated logic (Skitka, 2000), claim that there is a strong argument that the scope of Article 22 should also include decisions where there is some degree of human involvement, although the extent of this degree is hard to determine. This interpretation, endorsed by the UK ICO, holds that the word “solely” in the context of Art. 22(1) is intended to cover those automated decision-making processes in which humans exercise no real influence on the outcome of the decision, for example where the result of the profiling or process is not assessed by a person before being formalised as a decision (Information Commissioner’s Office, 2017). Thus, minimal human intervention with no real influence on the outcome of the decision is not sufficient to exclude the applicability of Art. 22(1) (Malgieri & Comandé, 2017), and this might be the case when the tax agent merely signs the tax assessment notice to be sent to the taxpayer.

Finally, regarding the legal or significant effects, it is beyond doubt that the decision to proceed to the assessment or to require taxpayers to pay a higher amount of taxes than they had declared (or rather, not declared) will significantly affect the taxpayers' sphere. Consequently, taxpayers must be granted the right to appeal that decision or, more generally, they should have access to a judicial remedy. Establishing that the requirements of Art. 22(1) are met is fundamental because it means that profiling and automated decision making will still be allowed in tax matters if, according to the second paragraph, these activities are authorised by European Union or member state law to which the controller is subject. Moreover, these provisions must lay down suitable measures to safeguard the data subject's rights and freedoms and legitimate interests. Thus, the laws providing for ICT systems to carry out activities such as profiling and automated decision-making must also lay down the suitable safeguards. These safeguards are, however, not described in the text of the Regulation itself but only in the recitals.

Section 3: The need to balance individual privacy rights with the public interest embodied in tax compliance

According to Art. 22 GDPR, profiling and automated decision-making are in principle strongly limited. However, the need to balance the privacy rights of taxpayers with the public interest embodied in tax compliance required the EU legislator to consider that, in many member states, tax administrations' activities carried out through ICT tools could consist, as emerges from the previous section, in forms of profiling which might also lead to automated decision making according to the GDPR definitions. Moreover, as reported by international organisations such as the OECD, these instruments represent an efficient lever to prevent and fight tax evasion and, consequently, revenue losses.

For this reason, the GDPR provisions concerning data processing, profiling and automated decision making contain important exceptions to the general rules governing these procedures. Nevertheless, these exceptions must be introduced by legislation and respect the essence of fundamental rights and freedoms (Ehrke-Rabel, 2019b). Indeed, their aim is to safeguard the public interest, in which the protection of public revenues from tax evasion is and must be included.

3.1 Striking a balance in data processing

Starting with data processing, Art. 6 of the GDPR defines the cases where processing is lawful. Relevant for the tax law sphere is letter e), which states that the processing of data is lawful if necessary for the performance of a task carried out in the public interest or in the exercise of official authority vested in the controller. The legitimate-interest ground of letter f) of Art. 6 GDPR, by contrast, does not apply, since that point does not find application where the processing is carried out by public authorities in the performance of their tasks, which is the case for tax authorities. However, for the lawfulness of processing in the case of the performance of a task carried out in the public interest or in the exercise of official authority, such as the one carried out by tax authorities, Art. 6(3) establishes the need for a legal basis which shall be laid down by: (a) Union law; or (b) member state law to which the controller is subject and which shall be proportionate to the legitimate public interest aim pursued. The same Art. 6 also contains a series of specific provisions which can be included in the legal basis for processing under Art. 6(1) lit. e) and which consequently apply to processing for tax matters as well. Examples of these specific provisions concern the type of processed data, the identification of the data subjects, purpose limitation, the storage period and the general conditions governing the lawfulness of processing by the controller. Nevertheless, member states can provide for more specific requirements for the processing and other measures to ensure lawful and fair processing. Thus, a tax law allowing taxpayers' data processing in one member state may offer additional protection to taxpayers' privacy when compared to that of other member states.

Moreover, regarding the processing of special categories of personal data, the relevant provision in the GDPR is Article 9. Special categories of data include data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, genetic data, biometric data processed for the purpose of uniquely identifying a natural person, data concerning health, and data concerning a natural person's sex life or sexual orientation. The general rule established in Art. 9(1) prohibits the processing of these data. However, paragraph 2 states exceptions to the first paragraph. Similarly to Art. 6, these exceptions include the case where processing is necessary for reasons of substantial public interest, on the basis of European Union or member state law. Indeed, one reason of substantial public interest is tax compliance and the state's need to safeguard its resources from tax evasion. However, this exception, which is also relevant in the field of taxation, is limited by a proportionality test4 with reference to the aim pursued. The processing also has to respect the essence of the right to data protection, and the law allowing the processing must provide for suitable and specific measures to safeguard the fundamental rights and interests of the data subject.

Combining these two articles on processing, it is possible that for tax reasons, which are part of the broader “public interest”, member states will process data, including data belonging to the special categories. Nevertheless, this permission for reasons of public interest must still pass a proportionality test, and it must provide for safeguards of the fundamental rights and interests of the data subject. However, the safeguards that need to be adopted are not listed or exemplified, so it remains quite vague which measures member states will adopt. Due to the territoriality and worldwide taxation principles, information gathered for tax purposes might still include racial or ethnic origins, or information on health expenses submitted to obtain tax exemptions. It might even include information on religious belief, as in the case where states levy so-called “church taxes”5 or where there are tax deductions for donations to religious or charitable organisations.6 Moreover, in most tax systems, these pieces of information will be provided directly by the taxpayer or by third parties, depending on the type of information.

3.2 Striking a balance in profiling and automated decision making

Regarding profiling, the relevant provision is Art. 22 which, as previously described, establishes the right of the data subject not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her. However, this provision also provides for limitations to this data subject's right.

According to Recital 73, the right not to be subjected to automated decision making and profiling, together with the “rights of information, access to and rectification or erasure of personal data, the right to data portability, the right to object, decisions based on profiling, as well as the communication of a personal data breach to a data subject and certain related obligations of the controllers”, can be restricted by European Union or member state law in the taxation field. Art. 23(1)(e) expressly mentions taxation matters as a general public interest of the Union. However, Art. 23(1) establishes that any legislative measure restricting those rights (provided for in Artt. 12 to 22 and Art. 34, as well as Art. 5 in so far as its provisions correspond to the rights and obligations provided for in Artt. 12 to 22) must respect the essence of the fundamental rights and freedoms and must be a necessary and proportionate measure. Additionally, in order to ensure respect for fundamental rights and freedoms, which also include the right to privacy, Art. 23(2) lists a series of elements which need to be included in the legislative measure allowing such restrictions:

  1. the purposes of the processing or categories of processing;
  2. the categories of personal data;
  3. the scope of the restrictions introduced;
  4. the safeguards to prevent abuse or unlawful access or transfer;
  5. the specification of the controller or categories of controllers;
  6. the storage periods and the applicable safeguards taking into account the nature, scope and purposes of the processing or categories of processing;
  7. the risks to the rights and freedoms of data subjects; and
  8. the right of data subjects to be informed about the restriction, unless that may be prejudicial to the purpose of the restriction.

Nevertheless, this list can be supplemented with other information at member states' discretion.

On a different note, it might be argued that the information contained in the list of Art. 23(2) could reveal the red flags that tax authorities use when deciding which taxpayers to assess, and thus deprive them of an important instrument for detecting possible tax evasion or tax avoidance schemes. Indeed, by knowing exactly how the information is treated and how the technology works, taxpayers could fill in their tax returns, or more generally adopt behaviours, designed to defeat the predictive measures adopted by the tax revenue agencies to fight tax evasion and avoidance (Reeves, 2015). As already highlighted by Kroll et al. (2016), keeping the decision policy secret is useful in preventing strategic gaming of the system. Thus, limiting meaningful information about the logic involved in the ICT tool used by the tax administration shall be considered legitimate (Ehrke-Rabel, 2019a; Ehrke-Rabel, 2019b). Nonetheless, in my opinion, the information required by Art. 23(2) is not able to offer a concrete overview of how the system works and therefore should not be considered as endangering the public tasks to be carried out when using these instruments.

Moreover, a second reference to the possible use of profiling and automated decision making can be found in Recital 71. Recital 71, even though, differently from the text of the Regulation, it is not legally binding,7 expressly mentions fraud and tax-evasion monitoring as fields where these activities can be authorised by member state law. However, despite the non-binding nature of recitals, they can be relevant as supplementary interpretative tools for identifying the safeguards which need to be included in the legal basis for profiling and automated decision making as stated in Art. 22. In fact, on the content of those safeguards, Recital 71 establishes that “In any case, such processing should be subject to suitable safeguards, which should include specific information to the data subject and the right to obtain human intervention, to express his or her point of view, to obtain an explanation of the decision reached after such assessment and to challenge the decision” and that, in order to ensure fair and transparent processing in respect of the data subject, the controller should “implement for the profiling activities appropriate mathematical or statistical procedures and technical and organisational measures appropriate to ensure that in cases of inaccuracies in personal data there is the possibility to correct them and that the risk of errors is minimised”. Moreover, personal data shall be secured by taking into account the potential risks involved for the interests and rights of the data subject and by preventing discriminatory effects on natural persons on the basis of racial or ethnic origin, political opinion, religion or beliefs, trade union membership, genetic or health status or sexual orientation, or measures having such an effect. Furthermore, the recital states that automated decision-making and profiling based on special categories of personal data should be allowed only under specific conditions.

Section 4: Policy implications for EU member states

As emerges from the GDPR provisions, the European legislator has clearly recognised that technologies allowing the processing of large amounts of data and profiling (which might also lead to automated decisions) can represent a fundamental tool for tax administrations in the fight against tax evasion and fraud. At the same time, the European legislator has attempted to strike a balance between the public interest in protecting public revenue and taxpayers' data protection rights, by requiring the presence of safeguards in the legislation allowing the use of such technologies.

Combining the two provisions on data processing (Art. 6 and Art. 9 GDPR), it is possible that for tax reasons, which are part of the broader “public interest”, member states will process data, including data belonging to the special categories. Nevertheless, this permission for reasons of public interest must still pass a proportionality test, and the permission must provide for safeguards of the fundamental rights and interests of the data subject. Similarly, in the context of automated individual decision-making, including profiling, restrictions to the rights of the data subject must respect the essence of the fundamental rights and freedoms and must be a necessary and proportionate measure in a democratic society (Art. 23).

Firstly, from the member states' perspective, this means that they shall verify whether the use of ICT tools for carrying out tax administration activities involves any form of data processing, profiling or automated decision making. If so, there must be a specific legal basis in place. Indeed, the entry into force of the GDPR has created the need for a specific legal basis for ICT instruments such as the ones used by tax administrations, through which data are processed, profiles are created, and automated decisions are taken. Secondly, where the use of these tools already has a legal basis, or where member states will need to adopt a new piece of legislation allowing the use of these instruments by tax administrations, these provisions must include the required safeguards as prescribed by the GDPR.

Nevertheless, because these safeguards tend to be very vague, the GDPR leaves a lot of discretion to member states on the level of protection of taxpayers' privacy. Indeed, the GDPR provides only for a minimum level of protection to be included in member states' legislation allowing the use of ICT tools for profiling and automated decision making in tax matters. Thus, member states can increase the level of protection at their discretion. However, different margins in how to extend the scope of the safeguards might lead to misalignments in the way taxpayers' privacy is protected among EU member states. Moreover, the lack of both a common auditing system in the European Union and a common instrument ensuring taxpayers' rights, such as a European Taxpayer Code (EU Commission, 2016) or Charter (CFE, 2018), further intensifies the possible discrepancies in the level of protection of taxpayers' data and privacy among member states.

Conclusions

In recent years, the use of ICTs by tax authorities has made it considerably more efficient for them to carry out their tasks (e.g., tax monitoring, taxpayers' auditing, tax collection) in the public interest. For this reason, investment in ICT for revenue agencies has been highlighted as a priority by many international institutions (OECD, 2016a; OECD, 2016b; Cotton & Dark, 2017). Using new technologies has simplified the ways in which tax administrations can assess taxpayers and identify those who are tax evaders. However, if on the one hand tax authorities need to be provided with the most efficient instruments to prevent and fight tax evasion and tax avoidance, on the other hand this need must be balanced with the privacy rights of taxpayers.

More specifically, ICT tools (including and in particular risk management systems) are able to combine data provided by third parties and by taxpayers, process them in order to categorise taxpayers on the basis of their compliance risks and finally, based on their profiles, identify the taxpayers that will be subjected to audits. The way in which these systems operate perfectly matches the definitions of data processing, profiling and automated decision making contained in the GDPR. However, the analysis of the GDPR text shows that tax authorities, because of the public interests they fulfil, are enabled to use ICT instruments which might facilitate, also through profiling and data matching, the carrying out of their tasks. First of all, this means that member states will have to adopt (where not already in place) a legal basis allowing tax authorities to use ICT tools performing profiling and automated decision making. Secondly, according to Recital 71 of the GDPR, the legislative measures authorising decision-making based on profiling for fraud and tax-evasion monitoring shall provide the data subject with the right to obtain human intervention, to express his or her point of view, to obtain an explanation of the decision reached after such assessment and to challenge the decision (De Raedt, 2018). However, the text of the regulation itself does not expressly indicate or describe the safeguards mentioned in Art. 22. Art. 23(2), by contrast, with regard to automated decision making, provides a list of elements which shall be contained in the legislative measure permitting the use of automated decision making by tax authorities. Nevertheless, the presence of these requirements in the law and in the ICT systems effectively used by tax administrations needs to be assessed on a case-by-case basis at the national level. Indeed, the GDPR, by requiring the inclusion of these safeguards, only offers a minimum level of protection that may be extended at the national level. Moreover, the vagueness of these safeguards as indicated in the GDPR text and the discretion left to member states in implementing them may lead to an even wider gap between levels of taxpayer protection across member states.

Acknowledgements

I thank the reviewers and editors for their insightful comments and Professor Tina Ehrke-Rabel for the valuable and inspiring discussions.

References

Article 29 Working Party (2017). Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679. As last revised and adopted on 6 February 2018. Retrieved from https://ec.europa.eu/newsroom/article29/document.cfm?action=display&doc_id=49826

Bygrave, L. A. (2001). Minding the Machine: Article 15 of the EC Data Protection Directive and Automated Profiling. Computer Law & Security Report, 17(1), 17–24. doi:10.1016/S0267-3649(01)00104-2

CFE. (2018). Opinion Statement CFE 1/2018 on the Importance of Taxpayer Rights, Codes and Charters on Tax Good Governance. Retrieved from https://taxadviserseurope.org/wp-content/uploads/2018/06/CFE-Opinion-Statement-on-the-Importance-of-Taxpayer-Rights-Codes-and-Charters-on-Tax-Good-Governance-1.pdf

Chatama Y. J. (2013). The impact of ICT on Taxation: the case of Large Taxpayer Department of Tanzania Revenue Authority. Developing Country Studies, 3(2), 91–100. Retrieved from https://iiste.org/Journals/index.php/DCS/article/view/4258

Cotton, M., & Dark, G. (2017, March). Use of Technology in Tax Administrations 1: Developing an Information Technology Strategic Plan [Technical Note]. Washington, DC: International Monetary Fund. doi:10.5089/9781475583601.005

Coudert, F. (2010). When video cameras watch and screen: Privacy implications of pattern recognition technologies. Computer Law and Security Review, 26(4), 377–384. doi:10.1016/j.clsr.2010.03.007

De Raedt, S. (2018). The Impact of the GDPR for the Belgian Tax Authorities. Revue du Droit des Technologies de l’Information, 66-67, 129–143.

Doran, M. (2009). Tax Penalties and Tax Compliance. Harvard Journal on Legislation, 46(1), 111–161.

Edwards-Dowe, D. (2008). E-Filing and E-Payments – The Way Forward. Presented at the Caribbean Organization of Tax Administration (COTA) General Assembly, Belize City, Belize. Retrieved from http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.527.5799&rep=rep1&type=pdf

Veale, M. & Edwards, L. (2018). Clarity, surprises, and further questions in the Article 29 Working Party draft guidance on automated decision-making and profiling. Computer Law & Security Review, 34(2), 398–404. doi: 10.1016/j.clsr.2017.12.002

Ehrke-Rabel, T. (2019a). Big data in tax collection and enforcement. In W. Haslehner, G. Kofler, K. Pantazatou, & A. Rust (Eds.), Tax and the Digital Economy: Challenges and Proposals for Reform. Alphen aan den Rijn: Kluver Law International.

Ehrke-Rabel, T. (2019b). Profiling im Steuervollzug. FinanzRundschau, 101(2), 45–58.

Ehrke-Rabel, T. (2019c). Third Parties as Supplementary Sources of Tax Transparency. In F. Busran & J. Hey (Eds.), Tax Transparency. Amsterdam: IBFD.

EU Commission, Directorate General Taxation and Customs Union, Fiscalis Risk Analysis Project Group. (2006). Risk Management Guide for Tax Administrations. Retrieved from https://ec.europa.eu/taxation_customs/sites/taxation/files/resources/documents/taxation/tax_cooperation/gen_overview/risk_management_guide_for_tax_administrations_en.pdf

EU Commission (2016). Guidelines for a Model for a European Taxpayers’ Code. Retrieved from https://ec.europa.eu/taxation_customs/business/tax-cooperation-control/guidelines-model-european-taxpayers-code_en

Gutwirth, S., & Hildebrandt, M. (2010). Some caveats on profiling. In S. Gutwirth, Y. Poullet, & P. De Hert, (Eds.) Data protection in a profiled world. Dodrecht: Springer. doi:10.1007/978-90-481-8865-9_2

Hatfield, M. (2015). Taxation and surveillance: an agenda. Yale Journal of Law & Technology, 17, 319–367. Retrieved from https://yjolt.org/taxation-and-surveillance-agenda

Information Commissioner’s Office (ICO). (2017). Feedback request – profiling and automated decision-making. Retrieved from https://ico.org.uk/about-the-ico/ico-and-stakeholder-consultations/feedback-request-profiling-and-automated-decision-making/

Internal Revenue Service (IRS). (2012). Annual Report to Congress. National Taxpayer Advocate. Retrieved from https://taxpayeradvocate.irs.gov/2012-Annual-Report/FY-2012-Annual-Report-To-Congress-Full-Report.html

Intra-European Organisation of Tax Administrations (IOTA). (2018). Impact of Digitalisation on the Transformation of Tax Administrations. Retrieved from https://www.iota-tax.org/publication/impact-digitalisation-transformation-tax-administrations-0

Jensen, J., & Wöhlbier, F. (2012). Improving tax governance in EU Member States: Criteria for successful policies [European Commission Occasional Paper No. 14]. Retrieved from https://ec.europa.eu/economy_finance/publications/occasional_paper/2012/pdf/ocp114_en.pdf

Kamarinou, D., Millard, C., & Singh, J. (2017), Machine Learning with Personal Data. In R. Leenes, R. van Brakel, S. Gutwirth, & P. De Hert, (Eds.), Data Protection and Privacy. The Age of Intelligent Machines. Sidney: Hart Publishing.

Kariuki, E. (2014), Automation in Tax Administration. Towards sustainable ICT systems in tax administrations. [APRIL Publication No. 4]. Nairobi: African Policy Research Institute Limited. Retrieved from http://www.april-ssa.com/assets/april--automation-in-tax-administrations.pdf

Kroll, J. A., Huey J., Barocas, S., Felten, E. W., Reidenberg, J. R., Robinson, D. G. & Yu, H. (2016). Accountable Algorithms. University of Pennsylvania Law Review, 165(3), 633–705. Retrieved from https://scholarship.law.upenn.edu/penn_law_review/vol165/iss3/3

Lederman, L. (2007). Statutory Speed Bumps: The Roles Third Parties Play in Tax Compliance. Stanford Law Review, 60(3), 695–743. Retrieved from https://www.stanfordlawreview.org/print/article/statutory-speed-bumps-the-roles-third-parties-play-in-tax-compliance/

Lipniewicz, R. (2017). Tax Administration and Risk Management in the Digital Age. Information Systems in Management, 6(1), 26–37. Retrieved from http://yadda.icm.edu.pl/yadda/element/bwmeta1.element.ekon-element-000171468955

Malgieri, G., & Comandé, G. (2017). Why a Right to Legibility of Automated Decision-Making Exists in the General Data Protection Regulation. International Data Privacy Law,7(4), 243–265. doi:10.1093/idpl/ipx019

Mantelero, A. (2016). Personal data for decisional purposes in the age of analytics: From an individual to a collective dimension of Data Protection. Computer Law & Security Review, 32(2), 238–255. doi:10.1016/j.clsr.2016.01.014

Martini, M. (2017). DS-GVO Art. 22 Automatisierte Entscheidungen im Einzelfall einschließlich Profiling [GDPR Art. 22 Automated Decisions in individual cases including profile]. In B. Paal, & D. Pauly, (Eds.), Datenschutz-Grundverordnung, (pp. 260–264). Munich: C.H. Beck.

OECD. (2006). Using Third Party Information Reports to Assist Taxpayers Meet their Return Filing Obligations— Country Experiences With the Use of Pre-populated Personal Tax Returns [Information Note]. Organisation for Economic Co-operation and Development. Retrieved from https://www.oecd.org/tax/administration/36280368.pdf

OECD. (2015). Addressing the Tax Challenges of the Digital Economy, Action 1 BEPS - 2015 Final Report. Organisation for Economic Co-operation and Development. Retrieved from https://www.oecd.org/ctp/addressing-the-tax-challenges-of-the-digital-economy-action-1-2015-final-report-9789264241046-en.htm

OECD. (2016a). Advanced Analytics for Better Tax Administration. Organisation for Economic Co-operation and Development. Retrieved from https://www.oecd.org/publications/advanced-analytics-for-better-tax-administration-9789264256453-en.htm

OECD. (2016b). Technologies for Better Tax Administration: A Practical Guide for Revenue Bodies. Organisation for Economic Co-operation and Development. Retrieved from https://www.oecd.org/publications/technologies-for-better-tax-administration-9789264256439-en.htm

OECD. (2017). The Changing Tax Compliance Environment and the Role of Tax Audit. Organisation for Economic Co-operation and Development. Retrieved from https://www.oecd.org/ctp/the-changing-tax-compliance-environment-and-the-role-of-audit-9789264282186-en.htm

OECD. (2019). Unlocking the Digital Economy – A Guide to Implementing Application Programming Interfaces in Government. Organisation for Economic Co-operation and Development. Retrieved from http://www.oecd.org/ctp/unlocking-the-digital-economy-guide-to-implementing-application-programming-interfaces-in-government.htm

Ohm, P. (2010). Broken Promises of Privacy: Responding to the Surprising Failure of Anonymization. UCLA Law Review, 57(6), 1701–1777. Retrieved from https://www.uclalawreview.org/broken-promises-of-privacy-responding-to-the-surprising-failure-of-anonymization-2/

Reeves, J. (2015, March 15). IRS Red Flags: How to Avoid a Tax Audit. USA Today. Retrieved from http://www.usatoday.com/story/money/personalfinance/2014/03/15/irs-tax-audit/5864023

Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation). (2016, May 4). Retrieved from https://eur-lex.europa.eu/eli/reg/2016/679/oj

Russell, B. (2010). Revenue Administration: Developing a Taxpayer Compliance Program [Technical Notes]. Washington, DC: International Monetary Fund. Retrieved from https://www.imf.org/external/pubs/ft/tnm/2010/tnm1017.pdf

Skitka, L. J., Mosier, K., & Burdick, M. D. (2000). Accountability and automation bias. International Journal of Human-Computer Studies, 52(4), 701–717. doi:10.1006/ijhc.1999.0349

Smith, W. H. (1969). Automation in Tax Administration. Law and Contemporary Problems, 34 (4), 751–768. doi:10.2307/1190909

Tomar, L., Guicheney, W., Kyarisiima, H., & Zimani, T. (2016). Big Data in the Public Sector: Selected Applications and Lessons Learned [Discussion Paper No. IDB-DP-483]. New York: Inter-American Development Bank. Retrieved from https://publications.iadb.org/en/big-data-public-sector-selected-applications-and-lessons-learned

Vaillancourt, F., Evans, C., Tran-Nam, B., Verdonck, M., Erard, B., & Duran-Cabre, J. (2011). Prefilled Personal Income Tax Returns: A Comparative Analysis of Australia, Belgium, California, Quebec and Spain. Fraser Institute. Retrieved from https://www.fraserinstitute.org/sites/default/files/prefilled-personal-income-tax-returns.pdf

Wachter, S., Mittelstadt, B. & Floridi, L. (2017). Why a right to explanation of automated decision-making does not exist in the General Data Protection Regulation. International Data Privacy Law, 7(2), 76–99. doi:10.1093/idpl/ipx005

World Bank. (2016). World Development Report 2016: Digital Dividends Overview. Retrieved from https://www.worldbank.org/en/publication/wdr2016.

Footnotes

1. According to the OECD, Tax Administration 2017: Comparative Information on OECD and other Advanced and Emerging Economies (2017), p. 191, most tax administration systems are currently still based on a system requiring the taxpayer to file his or her tax return.

2. Previous studies show that 97% of the taxpayers' information is provided to the IRS in routine reports from third parties (IRS, 2012 as cited by Hatfield, 2015).

3. BGH (German Federal Court) 28. 1. 2014 - VI ZR 156/13 (LG Gießen, AG Gießen), p. 169.

4. As Advocate General Saugmandsgaard Øe explained in his opinion in ECJ joined cases C-203/15 and C-698/15, Tele2 Sverige AB, 21 December 2016, ECLI:EU:C:2016:572, para. 247, the “requirement of proportionality within a democratic society - or proportionality stricto sensu - flows both from Article 15(1) of Directive 2002/58 and Article 52(1) of the Charter, as well as from settled case-law: it has been consistently held that a measure which interferes with fundamental rights may be regarded as proportionate only if the disadvantages caused are not disproportionate to the aims pursued”. Other relevant case law on the proportionality test in the context of data protection includes: ECJ Case C-275/06, Productores de Música de España (Promusicae) v Telefónica de España SAU, 29 January 2008, ECLI:EU:C:2008:54, para. 68; ECJ joined cases C-293/12 and C-594/12, Digital Rights Ireland, 8 April 2014, ECLI:EU:C:2014:238; ECJ joined cases C-203/15 and C-698/15, Tele2 Sverige AB, 21 December 2016, ECLI:EU:C:2016:572; ECJ Case C-83/14, Razpredelenie Bulgaria AD, 16 July 2015, ECLI:EU:C:2015:480; ECJ Case C-362/14, Schrems, 6 October 2015, ECLI:EU:C:2015:650 (EDPS, 2019).

5. Mandatory church taxes are levied in Austria, Germany, Finland, Denmark and Sweden (PEW, 2019).

6. According to previous studies, tax deduction schemes are in place in 9 of the 14 European nations (not limited to EU member states, and including Switzerland) offering tax incentives on individual donations. These states include Austria, the Czech Republic, Germany, Italy (which also offers tax credits), the Netherlands and Switzerland. Tax deductions can also be facilitated through percentage allocation schemes, which are in use in Slovakia and Slovenia. In this case, a fixed percentage of income tax can be donated directly to charity from a tax return or statement. Meanwhile, donors in Belgium, France, Italy, Norway and Spain can claim a tax credit against the value of their donations (EFA, 2018). On the possibility of tax deductions for donations to charitable organisations within the EU, please see Case C-318/07, Hein Persche v Finanzamt Lüdenscheid, 27 January 2009, ECLI:EU:C:2009:33.

7. Recitals are not legally binding. However, they might perform a supplementary normative role, which the European Commission has confirmed, and even if the European Court of Justice has explained that they do not have autonomous legal effect and “cannot be relied upon to interpret in a manner clearly contrary to its wording”, this does not undermine their supplementary interpretative nature (Malgieri & Comandé, 2017). ECJ, Case C-308/97, Manfredi, 25 November 1998, ECLI:EU:C:1998:566, para. 30. See also Case C-136/04, Deutsche Milch-Kontor, 24 November 2005, ECLI:EU:C:2005:716, para. 32; Case C-134/08, Tyson Parketthandel, 2 April 2009, ECLI:EU:C:2009:229, para. 16; Case C-7/11, Caronna, 28 June 2012, ECLI:EU:C:2012:396, para. 40.

Privacy

This article belongs to Concepts of the digital society, a special section of Internet Policy Review guest-edited by Christian Katzenbach and Thomas Christian Bächle.

Privacy: developments and contestations

Delivering a consolidated account of privacy, even when narrowing down the focus to its informational dimension, is not an easy task given the complexity of the issue and the vast landscape of theoretical work referring to the concept (Roessler, 2005; Solove, 2009; Nissenbaum, 2010). Nevertheless, the notion of privacy plays a central role in public as well as in scholarly controversies about the multiple transformations accompanying the advent of ‘digital society’. Since privacy is implicated in one of the most basic distinctions pervading modern society, namely the distinction between the private and the public (e.g., Bobbio, 1989), it may be understood as an analytical ‘probe head’ potentially providing insights into the digital transformation of society at large. This entails that current developments of privacy regarding digital technology cannot be understood without considering the larger socio-historical currents that still structure practices and concepts of privacy. In consequence we will first present a sketch of the general outlines of this multi-layered notion, which particularly highlights the role that technologies have been playing for privacy from the very beginning. Having gained an overview, we will then briefly introduce some particularly ‘digital challenges’ of privacy, before moving on to a presentation of the conceptual innovations developed by privacy scholars in response.

We may first of all note that, although some scholars have traced privacy in the most diverse geographical and historical formations (Moore, 1984), we will restrict our discussion to the modern phase of the historical West.1 Thus, although in medieval Europe the idea and practice of keeping secrets was well-known and widespread (Assmann and Assmann, 1997), the framing of these practices as a positive institution occurs only in the post-Ancien Régime era: the idea of privacy as an ethical or legal right emerges with the rise of bourgeois societies in Europe. Not only does the decline of the court society (Elias, 1983) demand that actors develop novel subjectification schemes, there are also new forms of architecture and interior design that include a sense of more or less public/private rooms, e.g., the salon vs the bedroom and, for the well-off, the study (Vincent, 2016). Moreover, novel cultural techniques emerge, such as letter-writing and -sending through the novel postal system (Siegert, 1993), and diary-keeping (Koschorke, 1999), which are considered constitutive elements of the Enlightenment idea of a self-reflecting, and thus autonomous, subject (Ruchatz, 2013; Rössler, 2017). Kant famously grounds his claim that every human being is capable of using his own understanding in the experience of a scholarly “reading world” that publishes and discusses educated writings (Kant, 1996). Here, autonomy is tied to public exchange, whereas private occupations may be limited in all kinds of regards.

In this sense, privacy, understood as a practice to forge subjectivities of self-determination, emerges in an early bourgeois societal setting and is from the outset strongly linked to the materiality and socio-technology of its environment. It is for this reason that media and technological inventions from the 18th century onwards have constantly spurred both public debates on, and theoretical developments of, privacy. In fact, one of the most influential legal definitions of privacy, Warren and Brandeis’ conceptualisation of privacy as the “right to be let alone” (Warren and Brandeis, 1890), was motivated by the emergence of instantaneous photography and the yellow press (Glancy, 1979). Nevertheless, while privacy ‘is’ as technological as its transformation, there is no technological determination of either aspect.

Apart from its cultural and material character, privacy has also always been ‘normative’, and massively contested for that matter. It has been challenged by social movements, such as feminist (Cohen, 2004) and queer activism (Gross, 1993) in particular, as well as by more conceptual enterprises, e.g., feminist (Allen, 2003; Cohen, 2012) and communitarian (Etzioni, 1999) social theory, critical theory and other strands of Marxist thought (Althusser, 2014), surveillance studies (Stalder, 2002), media studies (Osucha, 2009), legal theory (Roberts, 1996) and so on. To illustrate the contested nature of the concept we will sketch the outline of two groups of issues that are of particular importance for understanding privacy in digital societies.

Privacy: enabling individuality vs de-politicising issues: ‘Advocates’ value privacy as a ‘space’ where people can act without public scrutiny, and hence claim its importance for personal development: a ‘space’ for trying out things, for making ‘mistakes’ without too many consequences, etc. (Rössler, 2005, p. 144). They hold that, while societies are structured by power imbalances and the stigmatisation of both morally and legally permissible acts, privacy allows such practices to be performed and to prevail in spite of their stigmatisation. At the same time, however, critics argue that this is precisely what may become problematic, for the possibility of evading public visibility may turn into a necessity to hide: if controversial actions are restricted to the private realm, social change is stifled. Emancipatory politics, by contrast, involve the public acknowledgement of issues as political ones concerning all of society (Arendt, 1970; Rancière, 1999). The relevance, or even existence, of a social problem is hard to press publicly if those concerned remain hidden in privacy. This issue forms an important context for current debates. Without it, digital technology is too easily conceived as a threat to privacy rather than as a shift within an already ambivalent and complex relation. Similarly, without the focus on (de-)politicisation, the endorsement of a more publicly visible digital life is too easily denigrated as naïve or lacking in autonomy. We return to these issues below.

Privacy as disavowal of social contexts: The notion of privacy, as discussed in this contribution, emerges with bourgeois society; and the latter’s idea of autonomous individuals is based on a negative conceptualisation of freedom (Berlin, 2017). From this point of view, social contexts and interactions count as limitations to freedom, as the interests of others have to be taken care of. In private, that is, in the absence of others, such infringements are likewise absent; consequently, freedom increases. Again, feminist thinkers have taken issue with such a perspective, arguing that others are in fact present in privacy: family members, domestic workers, etc. These others take care of providing food and organising space; they contribute emotional labour and do reproductive and care work in general, all of which enables the very freedom from pressing needs and demands that we cherish as privacy in the first place. In this sense, autonomy is not the absence but the presence of others, whose contributions to one’s social positioning are neglected. As subjectivity is a relational affair (Friedman, 2003; Nedelsky, 1989), the same goes for subjectivities of self-determination. As a result, the latter relate to, and at times contradict, the valuation of others. This inherent relationality of privacy is particularly salient in recent debates and theoretical innovations concerning privacy in digital societies. Thus, the normative and socio-political issues sketched here form a second important context for current issues.

Both these groups of issues illustrate that privacy is to be discussed as an inherently ambivalent value. As a value, it forms part of a broader societal framework of related or contravening values that are subject to constant negotiation. Moreover, most scholars in one way or another acknowledge its ‘downsides’ by granting the necessity of constraining privacy in certain cases, while at the same time arguing for privacy’s great individual and social value.

However, what is this “individual and social value”? Regarding the former, there is one group of normative theories that sees privacy’s value as rooted in autonomy. From this point of view, privacy is required to lead an autonomous life (Roessler, 2005). The argument pertains to both ‘situations’ and biographies. Under involuntary public scrutiny, the argument goes, we could not act freely. Furthermore, without privacy we could not even develop an individual character, try out things or commit errors (Reiman, 1976). Here privacy is not considered an end in itself; rather, its individual value is to be found in its being a precondition for the fostering of autonomy. A second group of theories locates privacy’s value not in individuals but in societal structuring. Authors belonging to this group argue that privacy is a precondition for democratic institutions, such as elections (Regan, 1995; De Hert and Gutwirth, 2006), and is furthermore a requirement of democratic society as it enables a plurality of life forms (Roessler, 2010). Obviously, such reasoning points out privacy’s potential to enable individual decisions as to how one wants to lead one’s life; liberal-democratic ideas are central to classic privacy theorising, as the concept was connected to the notion of “freedom” already in mid-twentieth-century privacy discourse (Westin, 1967).

In our general discussion of the concept we have thus far carved out three characteristics of privacy: its historical and cultural shaping; its material forming/transforming; and its societal contestation. What we have not touched upon so far are debates about how to define privacy. To cope with this task, we will begin by pointing out the multiple dimensions of privacy. Some scholars distinguish, e.g., informational from local and decisional privacy (Roessler, 2005), and thus knowledge-related from spatial and decision-making aspects. Some theorists add still more dimensions, such as bodily and psychological privacy (Tavani, 2007), plus intellectual, communicational, associational, proprietary, and behavioural privacy, with informational privacy “overlapping” all the other types (Koops et al., 2016). There are two things to note at this point: first, no matter how many dimensions any privacy theory is inclined to take into account, most, or at least some, of those accounted for are only analytically distinguishable, but not so in empirical practice. This is illustrated by the trivial fact that in some circumstances closing the door might grant actors not only spatial privacy (a room of their own), but also informational privacy (e.g., over knowledge of what is going on inside) as well as bodily privacy (e.g., during romantic activities). In line with these considerations, Roessler (2005, p. 87) argues that bodily privacy may be realised via spatial and decisional privacy. Considering the US Supreme Court ruling in Roe v. Wade, where the court introduced a legal right to decisional privacy and consequently stated that abortion is a private affair, we may infer that here decisional privacy is a precondition to bodily privacy (Cohen, 2004) – in fact, both are inextricably entangled.

It is for this reason that in this contribution we set out to discuss privacy in general, for informational privacy in digital society is intimately connected with the dimensions and genealogies of all the other privacies that can be distinguished only for analytic purposes. The entangled nature of privacy furthermore complicates, or in fact renders impossible, its clear-cut definition. Historians have attempted to retrace privacy’s genealogy back to the notion of private property (Vincent, 2016),2 which currently re-emerges in attempts to implement data protection via a right of data ownership (Hornung and Goeble, 2015), while for other researchers privacy refers to some kind of inaccessibility, protection, shielding, or limiting of the possibilities for others to interact. A central debate in philosophy and legal theory concerns the question of whether privacy is about being inaccessible to others in some way – or about the possibility of controlling that access (DeCew, 1997; Fried, 1968; Parent, 1983). However, those long-standing discussions have not quite settled the dispute, but have driven some influential scholars to conceive of “privacy” as the name given to a “family resemblance” among a set of practices (Solove, 2009), or to straightforwardly detach privacy from individuals and conceptualise it as a fit between information flows and appropriate social contexts (Nissenbaum, 2010).

In this paper we will not be able to provide the clear-cut definition that privacy studies have been lacking since their establishment more than a hundred years ago. In fact, as we turn to the specific challenges for privacy in digital societies below, we will see that they entail shifts and re-conceptualisations among the various aspects of privacy, rather than developments that could be scrutinised from the perspective of a clear-cut definition.

Contemporary digital transformations profoundly destabilise this notion of privacy by shifting the material-technological base of society, and thus of privacy. As a result, the normative contestation of privacy comes to the forefront again, and the precariously balanced relationship between privacy and other values is thrown into disorder. We will next demonstrate how this comes about by illustrating the digital challenges to privacy before specifying the way these challenges transform privacy in a networked age.

Privacy in the digital society: existing theories and challenges

Digital technology troubles not only the informational realm, but affects other dimensions of privacy as well. For instance, when considering information related to activities within private space, we may note that most people still believe such information is only accessible to third parties if it is actively passed along, or if third parties are granted physical access. However, given today’s devices, such as smartphones or “smart speakers”, we must account for imperceptible listening or watching within private space as well (Ochs, 2017).

This is just one example of the way social digitisation transforms the groundwork of sociality. We will elaborate two aspects of this transformation in order to show how well-worn notions of privacy lose plausibility, at least when it comes to accounting for the novel socio-technical situations emerging within digital society: the massive extension of the scope of perceptibility and action, on the one hand; and the supra-individual character of the resulting privacy problems, on the other. Taking up the first point, we may set out from the observation that digital technologies shift the possibilities and boundaries of human perception and action, and that, as a result, normative questions emerge, triggered by novel forms of action. A case in point is the apparently paradoxical notion of “privacy in public”. Persons in public, one might argue, cannot reasonably expect privacy, for they are visible to everybody. Indeed, this has been a longstanding legal and theoretical point of view (Nissenbaum, 1998). However, “everybody” here implicitly means everyone who is present where I am. Thus, when I sit in a public park, everyone who happens to be in the same area is able to see and approach me. Social stratification of cities and quarters further reduces the selection of people who might possibly do so in the first place. However, with people now having gained the means to take pictures or videos of the park and to upload them to the internet, the implied notion of “everybody” changes drastically: suddenly, the park-wide audience is replaced by a potentially world-wide audience. This raises the question of whether we actually should have a right to privacy regarding that newly extended audience (Nissenbaum, 1998), especially when taking into account the dangers of using an established notion of privacy in the context of new possibilities of action and perception, as Zimmer (2010) demonstrated for the case of research. The increased reach of perception and interaction through digital technology that becomes visible here is augmented by two oft-cited factors: first, digital data is easy and cheap to store, so things that appear in data acquire permanence as digital records. Combined with effective search engines and machine learning, vast troves of data can be queried efficiently. Such developments have led to the claim for a right to be forgotten (Frantziou, 2014), that is, a claim to legal guarantees that target the longevity of data by limiting the scope of search procedures.3 The second factor is the vast increase in sensors, for example through the proliferation of smartphones or so-called “internet of things” devices, leading to pervasive sources of digital data in our vicinity (Ziegeldorf et al., 2014).

The second aspect of digitisation for privacy that we would like to invoke is that it troubles the inherent individualism of conventional privacy theories. Such individualism is also central to most data protection legislation like the European Union General Data Protection Regulation, which relies on the notion of personal data, personally identifiable information or similar concepts.4 All of them express a clear relation between particular bits of data and specific subjects. A similar individualism can be found in most theories of privacy that relate the latter to an individual value, particularly to autonomy. This of course includes autonomy regarding one’s communication, information about oneself or one’s self-presentation (Roessler, 2005). However, digital data tends to be relational, e.g., information about communication processes. Furthermore, the bulk of data collected nowadays is analysed on an aggregate level. The issue is not about specific pieces of information concerning particular persons, but rather about finding new behavioural patterns (Chun, 2016). Such data analytics technologies, often discussed under the label of “Big Data” or “Machine Learning”, do not disclose who you are but what you are like (Matzner, 2014). Emerging patterns are then used for all kinds of ends like credit scoring (Gandy, 2012), social sorting (Lyon, 2014), security procedures like algorithmic profiling (Leese, 2014), border controls (Jeandesboz, 2016) and many more purposes. Thus, the type of data that data protection schemes and individual notions of privacy enable us to control (personal data) and the types of data that render some corporate actors immensely powerful (aggregate/patterns of data) are not the same.

In a certain sense, the contestations of privacy identified above recur here, albeit in a different form: the imperceptibility of the listening and watching within private space, induced by the digitally increased reach of perceptibility, hides the underlying socio-technical networks challenging privacy (Stalder, 2002; Fuchs, 2011; Lyon, 2015). The socio-technical dependency of the practices constituting digital society thus remains invisible; consequently, it is extremely difficult to break the de-politicising grip of the whole constellation, for collective risks (e.g., a digitally induced decline of democracy) remain extremely abstract, while individual risks are hardly felt at all. Individualistic privacy notions tend to aggravate the problem, for framing it in individualistic terms de-politicises the issue right from the outset, and furthermore conceals the dependency of the whole constellation on users’ “invisible work” (Leigh Star and Strauss, 1999).

The summary offered above shows that the challenges of digital society by far exceed a narrow definition of privacy as informational privacy, and even more so the equation of privacy with data protection. In particular, they trouble the individualist notions of privacy which are also at the core of many national privacy laws as well as the European Union General Data Protection Regulation. They moreover unsettle deep-seated ideas and practices by changing perception, communication and social relations, all of which impact the various aspects or dimensions of privacy. However, there have been several theoretical innovations regarding privacy in the last twenty years that either are directly prompted by the aforementioned issues or allow us to address them. We will next turn to these innovations.

Privacy in digital society: theoretical innovations

Beginning in the 1970s, and probably most famously voiced in Rachels’ (1975) paper, theories of privacy turned away from equating the private subject with being “let alone”. Particularly in the wake of Goffman’s work (1959, 1977), subjects are seen as playing various roles in different social contexts. From this perspective, privacy still protects some kind of individual autonomy, as it now concerns the individual’s potential to determine the information to be disclosed in any one context. There are some relations that warrant knowledge of particular pieces of information or certain forms of interaction, while the same information is to be protected in others. Since Goffman and his successors have shown that our roles have to conform to all kinds of social expectations, which are in turn tied to power, resources and other forms of inequality, the right to privacy grants persons a claim to self-determination within these relations. In normative terms, the protected autonomy of the sovereign individual gives way to the autonomy to perform identity management.

This point of view has become prevalent in the analysis of digital society. The first group of challenges mentioned above entails that different social contexts and their ensuing roles are no longer clearly separated. Thus, in addition to protecting one’s information within such contexts, the latter must be protected in relation to each other. This problematic has been studied under the topic of “context collapse” in the social sciences and in media studies. In particular, social networking sites are designed to interconnect the different social contexts in which we lead our social lives. Thus, information which may be voluntarily disclosed in one context and with a particular audience in mind is now easily transported to other contexts where this might entail harm (Marwick and boyd, 2014; Wesch, 2009). This analysis is important because it counters a particular version of the de-politicising problem mentioned earlier: information that is released to adverse effects in digital media often has been voluntarily provided elsewhere. Putting the blame on the individual’s original release in a specific context, however, ignores the social, cultural and technical interrelations between different contexts as a political issue. This is exemplified by then Google CEO Eric Schmidt’s infamous 2009 statement that “If you have something you don’t want anyone to know, maybe you shouldn’t be doing it in the first place” (Esguerra, 2009), which completely loses sight of the fact that the appropriateness of disclosing information about one’s doings in complex societies is not binary (disclosure/non-disclosure) but largely determined by differentiated contextual norms. Such blaming becomes particularly questionable with regard to young persons or gendered forms of interaction like non-consensual image sharing. Here the reduction of privacy issues to individual acts connects to other forms of blaming the victim (Henry and Powell, 2014; Ringrose, Harvey, Gill, and Livingstone, 2013).

For similar reasons, Roessler and Mokrosinska (2013) argue that individual privacy needs to be supplemented by the protection of social relations. Without such protection, the desired or required activities in these relations become defective. Recently, German scholarship in particular has radicalised this move. Rather than keeping individual control over social relations at the core of privacy theories, critics argue that the subject whose privacy is protected needs to be understood in a more socially and/or technically embedded manner. This shifts the normative core from autonomy in the form of identity management towards particular possibilities to negotiate social positions. While there are some hints of this approach in Roessler and Mokrosinska (2013), they are more pronounced in recent theoretical proposals that build firmly on a variety of social theories, such as critical theory (Seubert and Becker, 2018; Loh, 2018; Stahl, 2016), structuration theory and actor-network theory (Ochs, in press), or Arendtian political theory (Matzner, 2018).

In distinction to such approaches, which see the individual value of privacy in a social context, other theorists locate the value of privacy itself on a social level. The approaches of Ochs (in press), Seubert and Becker (2018), and Stahl (2016) fuse both outlooks. Probably the most prominent approach from the latter group is Helen Nissenbaum’s idea of “privacy in context” (Nissenbaum, 2010). She argues that society is divided into particular spheres, like healthcare, education, etc. All of these spheres, she explains, are defined by intrinsic values, e.g., healthcare by healing and health. In consequence, Nissenbaum concludes that each of these contexts is governed by norms regarding the use and circulation of information; said norms derive in turn from the respective intrinsic aim of any given context. Accordingly, privacy in Nissenbaum’s definition is tantamount to treating each piece of information according to the norms that govern the context in which it emerged. That need not entail that all information stays in the original context of emergence, as is sometimes mistakenly stated. It rather requires that all applications and flows of data respect the fact that any data whatsoever is gathered in a particular context for a particular aim, which does not necessarily warrant the use of the same data for other aims. According to Nissenbaum, it is therefore not straightforward to assume that data released in one context is “up for grabs” in another. However, the aggregate and relational use of data particularly challenges the presumed separation of contexts (Matzner, 2014). Specifically, the organisation of contemporary digital services in platforms (Bucher and Helmond, 2017) blurs such distinctions. Still, Nissenbaum’s approach has been very influential, not least because it has also been designed with deployment in mind. With its rather formal treatment of norms and contexts, it has led to productive engagement and implementation in computer science (Benthall, Gürses, and Nissenbaum, 2017).

Quite generally, the challenges posed to privacy have led to innovations in computer science. Notions such as differential privacy (Dwork, 2008) or k-anonymity and its refinement l-diversity (Machanavajjhala, Kifer, Gehrke, and Venkitasubramaniam, 2007) acknowledge the importance of aggregate data, and in consequence that the meaning of a piece of data depends on the context in which it is evaluated. In this sense, these approaches define measures to determine the amount of information that can be derived about a person in the context of a specific database or other collection of data. Still, they are focused on privacy as preventing the gathering of information about a person, rather than preventing certain actions performed on that person.
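To make the underlying idea more concrete, the following minimal Python sketch (our illustration, not drawn from Dwork’s survey; the function name and parameters are ours) implements the canonical Laplace mechanism for a counting query: noise calibrated to the query’s sensitivity is added before release, so that the published statistic discloses little about any single record.

```python
import numpy as np

def dp_count(records, epsilon=1.0):
    """Release a differentially private count of records.

    Illustrative sketch only: adds Laplace noise with scale 1/epsilon,
    calibrated to the sensitivity of a counting query (which is 1),
    so the released figure reveals little about any single person.
    """
    true_count = len(records)
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Smaller epsilon means more noise and a stronger privacy guarantee.
print(dp_count(["record"] * 1000, epsilon=0.5))
```

The point of such mechanisms is precisely the one made above: what can be learned about an individual depends on the aggregate query and its context, not merely on whether a single field counts as ‘personal data’.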

This latter observation leads to recent debates on the question of regulating the usage instead of the collection of data, which so far have not borne much fruit. Instead of delving into this discussion, we would like to flag the fact that the digital unsettling of privacy also generates, at this point, the requirement to conceptualise privacy as embedded within the socio-technical structures of digital society. Privacy is linked up with all kinds of values, norms, institutions, and practices constituting the political economy from which it emerges – losing sight of the latter in theory breeds faint privacy notions in practice.

Conclusion

Privacy has become a pervasive issue in digital societies – in political, economic, and academic discourse as well as in the everyday lives of many. This is not surprising, since digital technologies challenge many established notions and practices related to the concept. However, this must not be understood as a recent attack on a hitherto unproblematic value. As we have seen, many of the transformations under way connect to the conceptual, socio-material and cultural history of privacy. In this regard, the digital transformation sustains and adds to existing critiques from feminist and social perspectives. At the same time, digital transformations and the many privacy-related incidents they cause highlight the urgency of finding re-conceptualisations that sustain privacy’s value. In navigating this tension, research from computer science to the social sciences, law, and philosophy has highlighted the necessity of taking groups, social relations and broader socio-cultural contexts into account. Such developments of privacy can also be seen as part of existing efforts to reconceive core tenets of liberal societies in a more socio-culturally situated manner (Friedman, 2003; Roessler and Mokrosinska, 2013). At the same time, strands of social and political theory beyond liberalism (understood in its broadest sense), which have so far often been rather critical of privacy, are increasingly harnessed to find novel solutions to the digital challenge of privacy. As such, this short overview has described a concept as much as a process, one that doubtless must, and hopefully will, continue in the future.

References

Allen, A. L. (2003). Why Privacy Isn’t Everything: Feminist Reflections on Personal Accountability. Lanham: Rowman & Littlefield.

Althusser, L. (2014). On the Reproduction of Capitalism: Ideology and Ideological State Apparatuses. London: Verso.

Arendt, H. (1970). On violence. New York: Harcourt, Brace & World.

Arendt, H. (1998). The Human Condition (2nd ed.). Chicago: University of Chicago Press.

Assmann, A., & Assmann, J. (1997). Geheimnis und Öffentlichkeit [Secret and Public]. München: Fink.

Bhandar, B. (2014). Critical Legal Studies and the Politics of Property. Property Law Review, 3(3), 186–194.

Benhabib, S. (2003). The reluctant modernism of Hannah Arendt. Lanham: Rowman & Littlefield.

Benthall, S., Gürses, S., & Nissenbaum, H. (2017). Contextual Integrity through the Lens of Computer Science. Foundations and Trends in Privacy and Security, 2(1), 1–69. doi:10.1561/3300000016

Berlin, I. (2017). Two Concepts of Liberty. In D. Miller (Ed.), The Liberty Reader (pp. 33–57). London: Routledge. doi:10.4324/9781315091822-3

Bobbio, N. (1989). Democracy and Dictatorship: The Nature and Limits of State Power. Minneapolis: University of Minnesota Press.

Bucher, T., & Helmond, A. (2017). The affordances of social media platforms. In J. Burgess, A. Marwick, & T. Poell (Eds.), The SAGE handbook of social media (pp. 223–253). London: Sage.

Chun, W. H. K. (2016). Updating to remain the same: habitual new media. Cambridge, MA: The MIT Press.

Cohen, J. E. (2012). Configuring the Networked Self: Law, Code, and the Play of Everyday Practice. New Haven: Yale University Press.

Cohen, J-L. (2004). Regulating intimacy: a new legal paradigm. Princeton, NJ: Princeton University Press.

DeCew, J. W. (1997). In Pursuit of Privacy: Law, Ethics, and the Rise of Technology. Ithaca: Cornell University Press.

Dwork, C. (2008). Differential Privacy: A Survey of Results. In M. Agrawal, D. Du, Z. Duan, & A. Li (Eds.), Theory and Applications of Models of Computation (pp. 1–19). doi:10.1007/978-3-540-79228-4_1

Elias, N. (1983). The Court Society. Oxford: Blackwell.

El Guindi, F. (1999). Veil: modesty, privacy, and resistance. Oxford; New York: Berg.

Esguerra, R. (2009). Google CEO Eric Schmidt Dismisses the Importance of Privacy. Retrieved from Deeplinks Blog: https://www.eff.org/deeplinks/2009/12/google-ceo-eric-schmidt-dismisses-privacy

Ess, C. (2005). “Lost in Translation”?: Intercultural Dialogues on Privacy and Information Ethics (Introduction to Special Issue on Privacy and Data Privacy Protection in Asia). Ethics and Information Technology, 7(1), 1–6. doi:10.1007/s10676-005-0454-0

Etzioni, A. (1999). The Limits of Privacy. New York: Basic Books.

Fuchs, C. (2011). Towards an alternative concept of privacy. Journal of Information, Communication and Ethics in Society, 9(4), 220–237. doi:10.1108/14779961111191039

Frantziou, E. (2014). Further Developments in the Right to be Forgotten: The European Court of Justice’s Judgment in Case C-131/12, Google Spain, SL, Google Inc v Agencia Espanola de Proteccion de Datos. Human Rights Law Review, 14(4), 761–777. doi:10.1093/hrlr/ngu033

Fried, C. (1968). Privacy. The Yale Law Journal, 77(3), 475–493.

Friedman, M. (2003). Autonomy and Social Relationships: Rethinking the Feminist Critique. In Autonomy, Gender, Politics (pp. 81–97). Oxford: Oxford University Press.

Gandy, O. H. (2012). Statistical Surveillance: Remote Sensing in the Digital Age. In K. H. Kirstie Ball & D. Lyon (Eds.), Handbook of Surveillance Studies (pp. 125–132). New York: Routledge.

Glancy, D. J. (1979). The Invention of the Right to Privacy. Arizona Law Review, 21(1), 1–39. Available at https://digitalcommons.law.scu.edu/facpubs/317/

Goffman, E. (1959). The presentation of self in everyday life. New York: Doubleday.

Goffman, E. (1977). Relations in public: microstudies of the public order. New York: Harper & Row.

Gross, L. P. (1993). Contested closets: the politics and ethics of outing. Minneapolis: University of Minnesota.

De Hert, P., & Gutwirth, S. (2006), Privacy, data protection and law enforcement: opacity of the individual and transparency of power. In E. Claes, S. Gutwirth, & A. Duff (Eds.), Privacy and the Criminal Law (pp. 61–104). Antwerp: Intersentia.

Henry, N., & Powell, A. (2014). Beyond the ‘sext’: Technology-facilitated sexual violence and harassment against adult women. Australian & New Zealand Journal of Criminology 48(1), 104–118. doi:10.1177/0004865814524218

Hornung, G., & Goeble, T. (2015). „Data Ownership“ im vernetzten Automobil. Computer Und Recht, 31(4), 265–273. doi:10.9785/cr-2015-0407

Jeandesboz, J. (2016). Smartening border security in the European Union: An associational inquiry. Security Dialogue, 47(4), 292–309. doi:10.1177/0967010616650226

Kant, I. (1996). An Answer to the Question: What is Enlightenment? In J. Schmidt (Ed. & Trans.), What is Enlightenment? eighteenth-century answers and twentieth-century questions (pp. 58–65). Berkeley: University of California Press.

Koschorke, A. (1999). Körperströme und Schriftverkehr: Eine Mediologie des 18. Jahrhunderts [Body currents and written correspondence: a mediology of the 18th century]. München: Fink.

Leese, M. (2014). The new profiling: Algorithms, black boxes, and the failure of anti-discriminatory safeguards in the European Union. Security Dialogue, 45(5), 494–511. doi:10.1177/0967010614544204

Leigh Star, S., & Strauss, A. (1999). Layers of Silence, Arenas of Voice: The Ecology of visible and Invisible Work. Computer Supported Cooperative Work, 8(1–2), 9–30. doi:10.1023/A:1008651105359

Loh, W. (2018). A Practice–Theoretical Account of Privacy. Ethics and Information Technology, 20(4), 233–247. doi:10.1007/s10676-018-9469-1

Lyon, D. (2014). Surveillance, Snowden, and Big Data: Capacities, consequences, critique. Big Data & Society, 1(2). doi:10.1177/2053951714541861

Machanavajjhala, A., Kifer, D., Gehrke, J., & Venkitasubramaniam, M. (2007). L-diversity: Privacy beyond k-anonymity. ACM Transactions on Knowledge Discovery from Data, 1(1). doi:10.1109/ICDE.2006.1

Marmor, A. (2015). What Is the Right to Privacy? Philosophy & Public Affairs, 43(1), 3–26. doi:10.1111/papa.12040

Marwick, A. E., & boyd, d. (2014). Networked privacy: How teenagers negotiate context in social media. New Media & Society, 16(7), 1051–1067. doi:10.1177/1461444814543995

Matzner, T. (2014). Why privacy is not enough privacy in the context of “ubiquitous computing” and “big data”. Journal of Information, Communication and Ethics in Society, 12(2), 93–106. doi: 10.1108/JICES-08-2013-0030

Matzner, T. (2018). Der Wert informationeller Privatheit jenseits von Autonomie [The value of informational privacy beyond autonomy]. In S. Burk, M. Hennig, B. Heurich, T. Klepikova, M. Piegsa, M. Sixt, & K. E. Trost (Eds.), Privatheit in der digitalen Gesellschaft (pp. 75–94). Berlin: Duncker & Humblot.

Moore, B. (1984). Privacy: Studies in social and cultural history. Armonk; London: M.E. Sharpe, Inc.

Nedelsky, J. (1989). Reconceiving Autonomy: Sources, Thoughts and Possibilities. Yale Journal of Law & Feminism, 1(1), 7–36. Retrieved from https://digitalcommons.law.yale.edu/yjlf/vol1/iss1/5/

Nissenbaum, H. (1998). Protecting privacy in an information age: The problem of privacy in public. Law and Philosophy, 17(5–6), 559–596. doi:10.2307/3505189

Nissenbaum, H. (2010). Privacy in Context: Technology, Policy, and the Integrity of Social Life. Stanford: Stanford Law Books.

Ochs, C. (in press). Privacies in Practice. In U. Bergermann, M. Dommann, E. Schüttpelz, & J. Stolow (Eds.), Connect & Divide. The Practice Turn in Media Studies. Zürich: Diaphanes.

Ochs, C. (2017). Rechnende Räume. Zur informationellen Transformation räumlicher Privatheiten [Spaces that compute. On the informational transformation of spatial private spheres]. In A. Henkel, H. Laux, & F. Anicker (Eds.), Zeitschrift für Theoretische Soziologie: Sonderband 4. Raum und Zeit: Soziologische Beobachtungen zur gesellschaftlichen Raumzeit (pp. 188–211). Weinheim; Basel: Beltz.

Osucha, E. (2009). The Whiteness of Privacy: Race, Media, Law. Camera Obscura, 24(1), 67–107. doi:10.1215/02705346-2008-015

Parent, W. A. (1983). Privacy, Morality, and the Law. Philosophy & Public Affairs, 12(4), 269–288. Retrieved from https://www.jstor.org/stable/2265374

Pateman, C. (1988). The Sexual Contract. Stanford: Stanford University Press.

Rachels, J. (1975). Why Privacy is Important. Philosophy & Public Affairs, 4(4), 323–333. Retrieved from https://www.jstor.org/stable/2265077

Rancière, J. (1999). Disagreement: politics and philosophy. Minneapolis: University of Minnesota Press.

Regan, P.M. (1995). Legislating Privacy. Chapel Hill: University of North Carolina Press.

Reiman, J. H. (1976). Privacy, Intimacy, and Personhood. Philosophy & Public Affairs, 6(1), 26–44. Retrieved from https://www.jstor.org/stable/2265060

Ringrose, J., Harvey, L., Gill, R., & Livingstone, S. (2013). Teen girls, sexual double standards and ‘sexting’: Gendered value in digital image exchange. Feminist Theory, 14(3), 305–323. doi:10.1177/1464700113499853

Roberts, D. E. (1996). Punishing Drug Addicts Who Have Babies: Women of Color, Equality, and the Right of Privacy. In K. Crenshaw, N. Gotanda, G. Peller, & K. Thomas (Eds.), Critical Race Theory – The Key Texts that Formed the Movement (pp. 384–426). New York: The New Press.

Roessler, B., & Mokrosinska, D. (2013). Privacy and social interaction. Philosophy & Social Criticism, 39(8), 771–791. doi:10.1177/0191453713494968

Rössler, B. (2005). The Value of Privacy. Cambridge: Polity.

Rössler, B. (2017). Autonomie: ein Versuch über das gelungene Leben [Autonomy: a conjecture on successful life] (2nd Edition). Berlin: Suhrkamp.

Ruchatz, J. (2013). Vom Tagebuch zum Blog. Eine Episode aus der Mediengeschichte des Privaten [From the diary to the blog. An episode in the media history of the private]. In Stefan Halft & Hans Krah (Eds.), Privatheit. Strategien und Transformationen. Passau: Karl Stutz.

Seubert, S., & Becker, C. (2018). Verdächtige Alltäglichkeit. Sozialkritische Reflexionen zum Begriff des Privaten [Suspicious banality. Socially critical reflections on the concept of the private]. Figurationen, 19(1), 105–120. doi:10.7788/figurationen-2018-190111

Siegert, B. (1993). Relais: Geschicke der Literatur als Epoche der Post [Relays: The fates of literature as the era of mail]. Berlin: Brinkmann & Bose.

Stalder, F. (2002). Privacy is not the Antidote to Surveillance. Surveillance & Society, 1(1), 120–124. doi:10.24908/ss.v1i1.3397

Stahl, T. (2016). Indiscriminate Mass Surveillance and the Public Sphere. Ethics and Information Technology 18(1), 33–39. doi:10.1007/s10676-016-9392-2

Tavani, H. T. (2007). Philosophical theories of privacy: Implications for an adequate online privacy policy. Metaphilosophy, 38(1), 1–22. doi:10.1111/j.1467-9973.2006.00474.x

Vincent, D. (2016). Privacy: a short history. Cambridge; Malden, MA: Polity.

Warren, S. D., & Brandeis, L. D. (1890). The Right to Privacy. Harvard Law Review, 4(5), 193–220. doi:10.2307/1321160

Westin, A. (1967). Privacy and Freedom. New York: Atheneum.

Wesch, M. (2009). Youtube and you: Experiences of self-awareness in the context collapse of the recording webcam. Explorations in Media Ecology, 8(2), 19–34. Available at https://core.ac.uk/download/pdf/5170117.pdf

Ziegeldorf, J. H., Morchon, O. G., & Wehrle, K. (2014). Privacy in the Internet of Things: threats and challenges. Security and Communication Networks, 7(12), 2728–2742. doi:10.1002/sec.795

Zimmer, M. (2010). But the data is already public: on the ethics of research in Facebook. Ethics and Information Technology, 12(4), 313–325. doi:10.1007/s10676-010-9227-5

Footnotes

1. While there are regulations and norms regarding visibility or spatial access in many cultures (e.g., in Middle Eastern Muslim countries, see El Guindi, 1999; or Confucian traditions, see Ess, 2005) that resemble central European privacy practices, great care has to be taken when comparing these. To avoid a lengthy discussion of this issue we take an agnostic stance here and restrict our treatment to the ‘historical West’.

2. This normative and conceptual legacy of private property has been examined critically only recently, for example Bhandar has shown that having property and being able to appropriate has been an essential element in the emergence of liberal political subjectivity, which was particularly visible in the colonies (Bhandar, 2014).

3. It has to be added though, that reliable long-term storage of data (where it is desired) is a complex problem.

4. Arguing from “a European point of view”, we narrowly focus on the EU GDPR here.


Voter preferences, voter manipulation, voter analytics: policy options for less surveillance and more autonomy


This paper is part of Data-driven elections, a special issue of Internet Policy Review guest-edited by Colin J. Bennett and David Lyon.

Voting (including the decisions of whether to vote, and, if so, which way to vote) is the cornerstone of the democratic process. A vote (or the decision not to vote) is also a choice. Central to the democratic value of voting is the ability of the individual to exercise autonomy in making this choice; indeed, the secret ballot recognises and emphasises the need for privacy in order that voters can make an autonomous choice (Evans, 1917). Traditional political advertising is an obvious and very public tactic to influence voter preferences. The impact of such messages can be increased by two forms of personalisation: message targeting (directing messages to selected sub-populations) and tailoring (developing different versions of a message designed to appeal to different people) based on demographic, behavioural, or psychological characteristics (see, e.g., Hirsh, Kang, & Bodenhausen, 2012). Targeting and tailoring have long been used to increase the impact of political messaging, including speeches and broadcast media advertising, in the offline context (see, e.g., Miller & Sigelman, 1978). Recently, online targeting and tailoring techniques are being used in new, subtle, and powerful ways to design and deliver political messages that have an even greater potential to influence voter behaviour and voter choices.

Today’s political operatives develop highly detailed voter profiles, integrating demographic information, information about the economic, social, and political activities of potential voters, and detailed records of online and even offline behaviour into a rich voter profile that can also reveal, through powerful data analytics, additional insight into thoughts, beliefs, and psychological characteristics (see, e.g., Kosinski, Stillwell, & Graepel, 2013). The resulting voter profiles can be combined with insights from psychological studies to develop persuasive messages that are tailored with respect not only to the content but also the form of the message (e.g., appearance, specific language, timing of the message), designed specifically to appeal or persuade based on specific recipient characteristics (see, e.g., Issenberg, 2016). As Calo (2014) has demonstrated in the context of consumer marketing, these techniques can take advantage of cognitive limitations and vulnerabilities to shape consumer decisions.

Personalised political messages, using the same techniques Calo references, are being employed in the political realm and can shape political decisions - or, as Slovic (1995) argues, these techniques can be used to construct voters’ expressed preferences. The techniques go beyond targeting and tailoring messages based on demographic variables (age, gender, party affiliation) and/or social, political and economic activities, to designing messages for and delivering messages to individuals based on psychological variables such as personality characteristics (extroversion, neuroticism, authoritarianism, etc.), attitudes, and interests, and other psychological information that is revealed or can be inferred (see, e.g., Hine et al., 2014). Targeting and tailoring on these and other psychological variables is generally known as psychographic profiling. Effective use of psychographic profiling information includes the manipulation of the form, content and timing of political messages, often using strategies that have been identified in empirical research in cognitive psychology and decision making as increasing message impact. Manipulated messages can be designed to activate implicit attitudes and biases, with effects that are likely to be subtle, and operating at an unconscious level.

It is important to emphasise that these subtle techniques of persuasion and even of manipulation are enabled by equally subtle techniques of surveillance, often taking the form of increasingly sophisticated behavioural tracking techniques. In the consumer marketplace, it may well be the case that the consequences for consumers (e.g., paying a bit more for something or buying on impulse) are relatively small, and the upsides for firms are likewise marginal (Calo, 2014, p. 1002). The effects in the consumer marketplace, therefore, may be of little importance in terms of the number of people whose behaviour is affected and in terms of the impact of those effects on the marketplace; however, it is important to recognise that consumer profiling practices do have implications beyond consumption. In the political arena, by contrast, affecting the political preferences, decisions, or actions of even a small proportion of voters in a competitive election could be critical to the outcome. Such manipulation of voters raises fundamental issues of democratic theory.

When political communicators have the advantage of deep and detailed knowledge about the public, when they leverage that information to develop and deliver political messages designed to persuade specific individuals based on what is known about their demographics, personality, attitudes, beliefs, etc., and when those messages take advantage of persuasive principles drawn from the empirical literature in order to exploit a predictable interaction between individual and message, the result is an unfair system that undermines voter autonomy. Our concern is with political ads that employ tailoring and/or targeting, manipulating the timing, content and form of messages, and that are deployed not in the interests of informing or even persuading voters, but rather with the goal of appealing to non-rational vulnerabilities as revealed through algorithmic (and particularly psychographic) profiling. We argue that the use of such ads warrants policy intervention, since they have the potential not only to affect individual autonomy and an individual’s ability to render a voting decision that is genuinely her own, but also to contribute to the fragmentation and polarisation of the electorate – both results that are antithetical to democratic theory.

Drawing upon our backgrounds in psychology and political science, we previously explored the consequences of the individualised, highly selective, and structured information environment for voter preferences, examining the ways in which personal profiling could be used to manipulate voter preferences and thus undermine voter autonomy (Burkell & Regan, 2019). In this article, we refine that earlier analysis of the civil liberties, privacy and democratic values questions and extend the analysis to focus on psychographic profiling in political advertisements. We argue that this type of profiling in particular should be regulated to protect voter autonomy and mitigate political polarisation. We first discuss the development of personalisation techniques generally and their incorporation into political messaging. Next, we examine the particular issues associated with psychographic profiling as they arose in the commercial space and are now increasingly prevalent in the political arena. Finally, we identify policy options and approaches for regulating sophisticated voter analytics practices that employ psychographic profiling. We should note up front that delimiting psychographic profiling in a way that separates it from other forms of profiling is difficult; it is, in effect, only the latest stage in a continuum, as we note below.

Online personalisation moves into politics

Personalisation is ubiquitous in the online information environment, and indeed it is a natural technologically-mediated response to the overwhelming amount of information that confronts users online. While online search results could return ‘everything’ relevant to a user’s query, some order must be imposed on the results, and personalisation helps to ensure that the information deemed most relevant to users is the information they are most likely to encounter, by placing that information early in the search results. Filtering techniques, including personalised filtering, address what Benkler (2006) has termed the ‘Babel objection’:

Having too much information with no real way of separating the wheat from the chaff forms what we might call the Babel objection. Individuals must have access to some mechanism that sifts through the universe of information, knowledge, and cultural moves in order to whittle them down to a manageable and usable scope (p. 22-23).

Ranking algorithms for search results (e.g., Google PageRank) obviously and directly address the Babel objection. More subtle forms of information environment shaping that include some degree of personalisation are evident in recommender systems, which select a subset of items to suggest to users, and in online advertisements, which are directed to people who, based on demographic and behavioural information, are most likely to be interested and/or influenced. As the advertising industry realised the economic value of more finely tuned personalisation, and as more activities moved into the online environment, advances in computer modeling and behavioural economics identified more fine-grained methods for identifying individual characteristics that indicated preferences for certain products and services. In order to achieve a more personalised effect, complex algorithms select, sort, and prioritise information about the nature of the items themselves, the characteristics of the user and user interests/needs, and the match between item and user. User behaviour in response to this personalisation is folded into new analytics, feeding new algorithms and giving rise to even better predictions, in an iterative upward spiral of personalisation.
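A minimal sketch may help to make this iterative loop concrete. The toy Python code below is our own illustration under stated assumptions (all names and values are hypothetical; it does not reproduce any platform’s actual system): items are ranked against an inferred interest profile, and each click is folded back into that profile, which is the basic shape of the ‘upward spiral’ described above.

```python
from collections import defaultdict

# Toy model of the personalisation feedback loop (hypothetical names):
# items are scored against a per-user interest profile, and every click
# updates that profile, narrowing what is shown next.

user_profile = defaultdict(float)  # topic -> inferred interest weight

def rank_items(items):
    """Order candidate items by how well they match the inferred profile."""
    return sorted(items, key=lambda item: user_profile[item["topic"]], reverse=True)

def record_click(item, learning_rate=0.1):
    """Fold observed behaviour back into the profile (the 'upward spiral')."""
    user_profile[item["topic"]] += learning_rate

items = [{"id": 1, "topic": "politics"}, {"id": 2, "topic": "sport"}]
shown = rank_items(items)
record_click(shown[0])  # the next ranking will favour this topic even more
```

Even in this stripped-down form, the design choice is visible: relevance is defined by past behaviour, so each round of feedback makes the selection more personalised and less diverse.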

Personalisation depends critically on a great deal of information about users and their activities, gleaned through a range of surveillance techniques as well as through inferences, based on those data, about a person’s cognitive and psychological styles. Less finely-defined targeting relies upon demographic profiling on the basis of (often observable) demographic characteristics like age, gender, religious affiliation, political affiliation, etc. More finely-grained micro-targeting relies upon demographic characteristics combined with data about the activities of individuals, including buying patterns, travel destinations, and social interactions. The most finely-defined psychographic targeting relies upon psychographic profiling on the basis of personality and behaviour data and inferences: personality (e.g., extroversion/introversion), values, opinions, attitudes, and interests. Some characteristics, such as sexual orientation, may fall in a middle range — one can think of this as being a continuum from externally observable and relatively explicit characteristics (descriptors) to internal psychological characteristics and tendencies.

As these personalisation practices became commonplace, with demonstrated effectiveness, in the consumer arena, they were picked up by political operatives, beginning primarily in the mid-2000s and particularly in the United States (Barocas, 2012; Bennett, 2016; Bodó, Helberger, & de Vreese, 2017; Burkell & Regan, 2019; Rubinstein, 2014; Tufekci, 2014). Issenberg in particular details how these practices were incorporated in political campaigns, culminating in the success of the 2008 Obama campaign with its techniques “that represented an individualized way of predicting human behaviour, where a campaign didn’t just profile who you were but knew exactly how it could turn you into the type of person it wanted you to be” (2016, p. 326). Chester and Montgomery (2018) also document how these digital marketing practices evolved in the political arena and how they were used in the 2016 US presidential election, including by campaigns working closely with Facebook and Google to target particular groups of voters and to direct ads in real time and across devices. They quote Brad Parscale of the Donald Trump campaign as crediting these ads for Trump’s victory: “Facebook and Twitter were the reason we won this thing” (Chester & Montgomery, 2018, p. 39).

Space does not permit an exhaustive list of examples of such political targeting techniques, but a few that demonstrate the realities of psychographic profiling illustrate the terrain well. Digital technologies allow the ‘morphing’ of two or more faces into a single image. Bailenson, Iyengar, Yee, and Collins (2008) used digital morphing techniques to create new and individualised versions of candidate faces, subtly altering the candidate images to look more (but only slightly more) like the individual to whom the images were presented. Consistent with psychological theory that predicts increased liking of those who are similar to ourselves, viewers who received candidate images morphed with photographs of themselves expressed greater support for the candidates than did those who received candidate images morphed with photos of other people – even though the viewers were unaware that the images had been altered. In the 2016 US election, political campaigns used Cambridge Analytica’s model rating individuals on a five-factor personality model (openness, conscientiousness, extroversion, agreeableness, and neuroticism) to develop ads tailored to the vulnerabilities of particular voters (Chester & Montgomery, 2018, pp. 23-24).
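To illustrate what tailoring on such trait scores amounts to in practice, the following toy sketch is entirely hypothetical (the scores, ad copy, and function names are invented for illustration and do not reproduce Cambridge Analytica’s actual model): it simply serves the ad variant keyed to the recipient’s most pronounced inferred trait.

```python
# Hypothetical trait-based ad tailoring, for illustration only.
# Trait names follow the five-factor model; scores and copy are invented.

voter_traits = {
    "openness": 0.2,
    "conscientiousness": 0.4,
    "extroversion": 0.3,
    "agreeableness": 0.5,
    "neuroticism": 0.9,  # inferred, e.g., from online behaviour
}

ad_variants = {
    "neuroticism": "Keep your family safe: vote for candidate X.",
    "openness": "Imagine a bolder future: vote for candidate X.",
    "agreeableness": "Join your neighbours in supporting candidate X.",
}

def pick_variant(traits, variants):
    """Serve the variant keyed to the recipient's strongest trait
    for which a tailored message exists."""
    best_trait = max(variants, key=lambda t: traits.get(t, 0.0))
    return variants[best_trait]

print(pick_variant(voter_traits, ad_variants))  # neuroticism-keyed message
```

The manipulation at issue lies less in the selection logic itself than in the fact that the recipient never sees the alternatives, nor the inferred trait that determined which message was shown.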

Although these practices may, as Chester and Montgomery (2017) point out, in essence be classified as micro-targeting or behavioural advertising and the somewhat inevitable result of the cross-adoption of behavioural economic insights, sophisticated computer analytics, and online platform and advertiser interest in expanding their markets, we believe their implications are qualitatively different in the political arena (see Turow, Delli Carpini, Draper, & Howard-Williams, 2012) compared to the commercial arena. Personalisation and targeting practices have evident positive results, including helping to ensure that individuals are directed towards resources, products and services that are of the greatest value to them, and relieving them of the burden of sifting through mountains of irrelevant material. At the same time, there are significant and widely-recognised downsides of personalisation, including its reliance on the collection and repurposing of personal information, surveillance of a greater range of individual activities, discrimination based on selective exposure of information to particular audiences or individuals, and the possibility of manipulation. In the next section, we argue that these downsides, particularly selective exposure to information and possible manipulation, raise distinct and problematic issues in the political arena and that these downsides are greatest when they result from or incorporate psychographic profiling.

Personalisation and psychographic profiling in politics

In this section, we address two concerns that have emerged in debates about political micro-targeting generally, and that apply even more critically in micro-targeting using psychographic profiling: first, polarisation of the electorate in ways that challenge the ability of a democratic polity to understand political questions in similar ways and reach consensus on how to proceed; and second, manipulation of voters’ decision-making in ways that undermine their ability to act autonomously and develop opinions that reflect their interests. We also close this section with a brief discussion of those who take a more sceptical view of the negative effects of personalised messages.

Polarisation

One of the main concerns voiced about personalisation in exposure to political information has been the development of ‘filter bubbles’ (Pariser, 2011) and digital information ‘echo chambers’1 that effectively stifle information inconsistent with previously expressed (or inferred) interests, opinions, and practices. Concerns arising from surveillance of users and sophisticated algorithmic processing focus on the restriction of content presented to users, and the potential for bias and loss of diversity in the information environment. Some recent empirical studies, including one measuring exposure to diverse news and opinions on Facebook (Bakshy, Messing, & Adamic, 2015), cast doubt on whether such ‘filter bubbles’ and ‘echo chambers’ exist online, though they nonetheless conclude that social media exposure to ideologically different viewpoints is possible and under the individual’s control. Many studies reveal a weak ‘filter bubble’ effect (see, e.g., Bakshy et al., 2015; Bechmann & Nielbo, 2015) that has the potential to reinforce stronger individual and social information selection mechanisms. Filter bubbles may be more likely to affect some groups, including the politically disinterested who are not avid media consumers (Dubois & Blank, 2018). Regardless, as Lazer (2015) points out, although such selective exposure may not be occurring yet, this remains a potential concern as algorithms become even more sophisticated and opaque, sparking subtle changes in behaviour.

In some ways, what have been termed ‘digital filter bubbles’ are simply a more extreme version of the limited information environment that results from our natural tendency to seek information consistent with confirmed or emerging perspectives (Nickerson, 1998; Sunstein, 2007). However, the multiplicity, ubiquity, and invisibility of the algorithms - their ‘black box’ (Pasquale, 2015) features - that determine our information environments (Pariser, 2011) tend to enhance the isolating and fragmenting tendencies we demonstrate spontaneously in our offline information seeking. The technical process of information selection can even catalyse a self-selection process by taking away the choice to avoid or confront dissonant content (Bodó, Helberger, Eskens, & Möller, 2019, p. 2). In fact, it is easy to see how the two processes – personal and technical information isolation – can be mutually reinforcing: “People are diversity averse, and algorithms reduce diversity. Together, users and algorithms create a spiral, in which users are one-dimensional and prefer their information diet to be filtered so that it reflects their interests, and in which this filtering reinforces the individual’s one-dimensionality” (Bodó et al., 2019, p. 2).

Whether self-inflicted, technologically mediated, or both, the result of this information isolation in the political arena is that opposing viewpoints are removed, with a consequent negative effect on democratic dialogue. The particular negative effect differs under liberal and deliberative views of democracy. From the liberal perspective, citizens need to know a range of opinions and options in order to make reasonable decisions. If information is filtered for them, especially without their consent, that would “violate their autonomy, as it will interfere with their ability to choose freely, and be the judge of their own interests” (Bozdag & van den Hoven, 2015, p. 251). The deliberative democracy perspective puts less emphasis on the loss of autonomy and more on the loss of diversity of opinions and perspectives that results from targeting information, because this will negatively affect the ability of people to deliberate or reason together about issues and candidates (Bozdag & van den Hoven, 2015). Bruns (2019), after reviewing the debate and evidence about ‘filter bubbles’, concludes that the more fundamental questions are why different groups have come to view information from radically different but entrenched perspectives and how this can be prevented or reversed - “in order to mitigate the very real threat of fundamental polarisation, and even of a complete breakdown of the societal consensus” (Bruns, 2019, p. 10).

Indeed, there is significant public concern about the ‘fracturing’ or polarisation of the electorate through the creation of a fragmented information environment. A recent article in The Guardian highlighted this concern in a quote from Full Fact, the UK fact-checking charity:

When an election stops being a shared experience, democracy stops working … We are used to thinking of adverts as fixed things that appear in the same way to many people. This idea is out of date...The combination of media buying by computers, and adverts being created and personalised by computers, mean that online advertising is not a shared experience any more (Chadwick, 2018, n.p.)

The same article noted that important public debate on, and response to, political messaging is undermined when those messages are not universally shared. A previous Guardian article (Wong, 2018) contrasted the widespread public responses to the ‘Daisy’ ad of Lyndon B. Johnson’s campaign and the divisive ‘Willie Horton’ ad put forth by George H.W. Bush with the complete lack of debate about the ads placed by Trump in the 2016 US presidential election. Wong noted that ‘no such debate took place around Trump’s apparently game-changing digital political advertisements before election day’ – because there were 50 to 60 thousand versions of those ads each day, effectively ensuring that there was no single public representation that could be debated. She also quoted Ann Ravel, a former member of the Federal Election Commission, on her concerns: “The way to have a robust democracy is for people to hear all these ideas and make decisions and discuss… With microtargeting, that is not happening”. Sara Bannerman, the Canada Research Chair in Policy and Governance at McMaster University, expresses a similar concern: “On one hand, targeted messaging is similar to the practice of advertising to particular segments of the population in community publications. On the other hand, targeted messaging is completely different because it takes place in ‘the dark’… they’re visible only to specific selected people and not to a broader public” (Hirsh, 2018, n.p.).

Manipulation

A second concern regarding personalisation in the political arena, and particularly personalisation based on psychographic profiling, is the possibility not merely of persuading or influencing voters, but of manipulating them. There is much written about manipulation, and numerous definitions (see Susser, Roessler, & Nissenbaum, 2019), but Sunstein’s definition serves well for our purposes: “An action counts as manipulative if it attempts to influence people in a way that does not sufficiently engage or appeal to their capacities for reflective and deliberative choice” (Sunstein, 2015, p. 443). Advertising has always attempted to manipulate behaviour, but the potential is exacerbated in the online context and enhanced by increasingly sophisticated algorithms that monitor and respond to user behaviour (Susser, 2019). As Spencer (2019) writes, “the existing infrastructure supporting online behavioural advertising allows for extreme personalisation, enabling marketers to identify or even trigger the biases and vulnerabilities that afflict each individual consumer and tailor content to exploit those biases and vulnerabilities” (p. 4). Also relevant is Zarsky’s suggestion of four elements that constitute unacceptable manipulations: 1) they tailor a unique response to every individual based on previously collected data; 2) they adapt the tailored response based on on-going feedback from the user and other peers, rendering the manipulation an on-going process rather than a one-time action; 3) they occur in a non-transparent environment; and 4) they are facilitated by advanced data analytics tools allowing insights as to what forms of persuasion are effective over time (Zarsky, 2019, p. 169). Floridi’s (2016) categories of ‘structural’ and ‘informational’ nudging also offer some insight into the distinction between acceptable and unacceptable forms of manipulation. According to Floridi, structural nudging alters the choice environment and the courses of action available to the decision maker, and can result in a de facto forced choice. Informational nudging, by contrast, changes the information available to the decision maker about the available alternatives, but does not attempt to shape the choice itself directly. The distinction is subtle, but worth careful consideration.

The ability to manipulate individuals has been enhanced by research in psychology, neuroscience, and behavioural economics. Research has demonstrated that social media and online behavioural tracking information can be used to predict personality characteristics, particularly extraversion and life satisfaction (Kosinski, Bachrach, Kohli, Stillwell, & Graepel, 2014), and that these predictions are more accurate than personality judgments made by friends and family (Youyou, Kosinski, & Stillwell, 2015). The words, phrases, and topics of social media postings are not only highly indicative of age and gender but also, with appropriate analysis, show strong relationships to the ‘big five’ personality traits of extraversion, agreeableness, conscientiousness, neuroticism, and openness (Park et al., 2015; Schwartz et al., 2013). Other researchers have leveraged photos and photo-related activities to successfully predict personality traits (Eftekhar, Fullwood, & Morris, 2014). Advertisements based on cognitive biases or vulnerabilities are difficult for recipients to detect and difficult to counteract, particularly if the effects are small; decades of research in behavioural economics and related fields suggest that these biases are unconscious and persistent (see, e.g., Newell & Shanks, 2014; Tversky & Kahneman, 1974).
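As a rough illustration of the general pipeline these studies describe - language features fed into a model that estimates a trait score - consider the toy sketch below. The posts, the invented extraversion scores, and the choice of a TF-IDF plus ridge regression model are our own assumptions for illustration; they bear no relation to the cited studies’ actual data or methods.

```python
# Toy sketch: predict a 'big five' trait score from the language of social media
# posts. All data and scores below are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

posts = [
    "love parties and meeting new people this weekend",
    "stayed home reading quietly all day",
    "huge concert tonight, bring everyone along",
    "prefer a calm evening with tea and a book",
]
extraversion = [0.9, 0.2, 0.8, 0.1]  # invented self-report scores in [0, 1]

model = make_pipeline(TfidfVectorizer(), Ridge(alpha=1.0))
model.fit(posts, extraversion)

print(model.predict(["meeting friends at a big party"]))
```

Models of this general kind, trained on very large samples rather than four invented posts, underpin the accuracy levels reported in the studies above.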

The notion that political behaviour is being shaped by leveraging psychological research has been raised in the popular press (e.g., Issenberg, 2016); indeed, John et al. (2013) wrote an entire book examining the use of nudges to shape civic behaviour. It is precisely this concern that is raised by Zittrain in his article entitled ‘Engineering an Election’ (Zittrain, 2014), and Tufekci raises similar issues under the rubric of ‘computational politics’ and ‘engineering the public’ (Tufekci, 2014). At the root of all of these concerns lies the basic truth articulated by Slovic (1995): preferences are constructed in the process of political decision-making – and political decision makers can therefore be influenced by the information they encounter in the process of making a decision.

The possibility of manipulation has been discussed more in the commercial than in the political realm (Calo, 2014; Zarsky, 2019; Susser, Roessler, & Nissenbaum, 2018), but persuasive techniques that work to influence consumer purchasing decisions are likely to influence political decisions as well. Consistent with our interest in psychographic profiling, we focus our analysis on political ads that not only use data based on surveillance of one’s demographic characteristics and one’s social, political and economic behaviour, but that also use sophisticated analysis to draw inferences about one’s emotional and psychological inclinations and limitations.

Previously we analysed political targeting generally and the ways in which it challenged the ability of citizens to be autonomous agents in processing the information they receive (Burkell & Regan, 2019). Susser et al. (2019) provide a more general analysis of online manipulation, and their conclusions regarding the harms of manipulation to autonomy, and the implications for both individuals and society, are similar to ours. In situating our concerns about manipulation, particularly in the political arena and as a result of psychographic profiling, Gorton’s (2016) argument is prescient and relevant:

The twentieth century revolution in social science never really made good on its promise of producing theories with genuine predictive power...[but] From the vantage of the twenty-first century...perhaps that magic of prediction and control has at long last arrived, at least in some measures and in certain domains (p. 62).

Gorton notes that the use of social science models and theories has enabled political campaigns to manipulate citizens in their roles as voters, through: 1) precise predictive power, especially when compared to earlier techniques; 2) ‘undermin[ing] a healthy public sphere by individualizing, isolating, and distorting political information’ (p. 63); and 3) altering the behaviour of citizens through the use of models of unconscious processes of the mind that ‘alter voting behaviour and public opinion formation through processes that often completely elude the understanding of their intended targets’ (p. 63). All of these capacities raise important and problematic issues, but the ability to tap the unconscious processes of decision-making is novel, powerful, and not yet fully recognised. Gorton places responsibility for this capacity on framing theory and focus group research, which help campaigns identify words and phrases that “activate certain frames in voters’ minds, especially frames that guide their moral thinking” and then use these “to alter voters’ beliefs and behaviours by intentionally and precisely targeting their unconscious cognitive processes” (Gorton, 2016, p. 75). Gorton builds upon Lakoff’s ideas about the ways in which framing theory affects political discourse and quotes his reasons for why appealing to logic and evidence fails in politics: “not only because the public’s mind is mostly unconscious, metaphorical, and physically affected by stress, [but] because its brain has been neurally shaped by past conservative framing” (Lakoff, 2009, as quoted in Gorton, 2016, p. 76).

Such uses of framing theory are rendered more sophisticated and powerful by ubiquitous digital surveillance and sophisticated algorithms that reveal the unique vulnerabilities of individuals; moreover, digital platforms make it possible to leverage insights about individual vulnerabilities into decision-making in something like real time (Susser et al., 2019, pp. 6-7). Cambridge Analytica’s personality model, discussed above, provides a vehicle for these more sophisticated uses. Chester and Montgomery report that Cambridge Analytica compiled a database with thousands of data points per person to identify the points on which an individual was ‘persuadable’ and to tailor messages to that individual’s vulnerabilities (Chester & Montgomery, 2018, pp. 23-4). Moreover, research in neuroscience, psychology, and behavioural economics continues to advance more complex understandings of human emotion and behaviour, and ever more complex models with which to influence individuals.

Calo’s research on digital marketing is particularly helpful in identifying the distinctions we think are important. He notes that firms marketing to consumers can “surface and exploit how consumers tend to deviate from rational decision-making on a previously unimaginable scale. Thus, firms will increasingly be in the position to create suckers, rather than waiting for one to be born” (Calo, 2014, p. 1018). He argues that the techniques that enable this are distinguishable from previous advertising techniques in two respects – “digital market manipulation combines, for the first time, a certain kind of personalization with the intense systemization made possible by mediated consumption” (Calo, 2014, p. 1021). Through systemisation, “hundreds of thousands of ads [are matched] with millions of Internet users on the basis of complex factors in a fraction of a second” (Calo, 2014, p. 1021). As discussed above, these same techniques are being employed in the political arena with ads being framed in ways that appeal directly to an individual’s decision-making vulnerabilities and at times that they are likely to be most receptive to the message.

Calo argues that it is the “systemization of the personal coupled with divergent interests that should raise a red flag” (Calo, 2014, pp. 1022-23). He goes on to say “true digital market manipulation, like market manipulation in general, deals strictly in divergent incentives. The entire point is to leverage the gap between how a consumer pursuing her self-interest would behave leading up to the transaction and how an actual consumer with predictable flaws behaves when pushed, specifically so as to extract social surplus” (Calo, 2014, p. 1023). In the political arena, the divergent interests of voters and campaign infrastructures are rooted in three factors. The first is the fairly obvious fact that a campaign is interested in promoting a certain candidate or policy position, and in persuading a voter to align herself with the interest of the campaign; the campaign is not interested in providing unbiased information so that a voter can judge for herself whether the campaign does indeed represent her interests. The second is that the digital platforms on which political messages are conveyed are commercial, and the platforms are interested in generating as much revenue as possible: the more messages they can display, the more revenue they earn, and the more precisely they can target and time an ad, the more they can charge for it. The third is that the intermediaries - ad agencies and political operatives - are likewise interested in generating revenue through more sophisticated analytical processing and online outreach.

In the consumer marketplace, Calo points out, digital market manipulation can exact economic and privacy harms, as well as damage consumer autonomy (Calo, 2014, pp. 1024-1034). In the political marketplace of ideas, individual privacy and autonomy will be similarly compromised – and there are very real political harms of fragmentation and polarisation. Additionally, voters arguably incur what could be considered “economic harms” in two respects. The first is that their political message environment is restricted - and if they are challenged by other voters or confronted with counter messages, they bear the costs of reconciling divergent messages. The second is that their vote may not produce the economic or policy outcomes that the messages led them to anticipate; for example, Trump voters in 2016 may not have benefitted from the tax cut in the way they expected.

Sceptical views

Some question the need to regulate micro-targeting in the political context. Zuiderveen Borgesius et al. (2018; see also Resnick, 2018) suggest that micro-targeting will have limited and potentially even positive effects on the democratic process, and there is doubt about the effectiveness of micro-targeted ads in changing voting behaviour (Kalla & Broockman, 2018; Motta & Franklin Fowler, 2016). Vaccari (2017), evaluating the effectiveness of online mobilisation in three European countries, comes to a somewhat similar conclusion, finding that such mobilisation increases political engagement (Vaccari, 2017, p. 85); however, he does not explore whether this engagement actually serves citizens’ interests or is the product of manipulation. These studies, however, have typically focused on traditional forms of advertising, and may underestimate the impact of more personalised advertising campaigns or psychographic profiling, which can manipulate both advertisement content and advertisement form to achieve maximal persuasion.

The actual impact of micro-targeting as currently practised may still be an open question, but there is every reason to believe that micro-targeting strategies are becoming more sophisticated, based on increasingly detailed profiles, and thus potentially more effective. Based on our analysis, the dangers to autonomous decision-making and of further political polarisation posed by psychographic profiling tip the scales in favour of regulation. Daniel Kreiss (2017) raises an additional concern about sophisticated targeting that also lends support to some regulation. He takes a more sceptical view of the danger of manipulation of individual voters and emphasises the group basis of politics, which leads to his concern about the cultural power of micro-targeting to “create a powerful set of representations of democracy that undermines the legitimacy of political representation, pluralism, and political leadership” (Kreiss, 2017, p. 3) - representations that in effect cause further polarisation. Whether out of concern for the manipulation of individual voters or for the polarisation of the body politic, some governmental intervention is warranted.

Options for regulating/controlling sophisticated voter analytics

The first challenge to regulating sophisticated voter analytics, in particular psychographic profiling, is that political speech is a cherished value in democratic systems and central to a functioning democracy. In the United States, political speech is relatively free from regulation. In other democratic countries, governments have imposed some constraints on political speech in order to protect the rights of voters and to ensure a free and fair exchange of political information so that voters can make informed and autonomous decisions. To date, however, there have been no specific regulations that limit what is generally referred to as “microtargeting” of political messages based on detailed personal profiles. We identify three avenues of response to sophisticated voter analytics and personalised political communication. The first locates the responsibility with voters themselves - what we term voter responsibility. The second places the responsibility with the platforms delivering micro-targeted political communications (e.g., Google, Facebook) - what we term platform accountability. And the third places the responsibility with the courts to uphold policies the government adopts to restrict voter manipulation and polarisation of the electorate - what we term judicial intervention. We consider all three approaches to be important - and the last to be critical.

Voter responsibility

Some suggest that voters have access to multiple sources of political information and thus need not, and do not, rely solely on political advertisements. These arguments construct the citizen as an active and independent information seeker, capable of gathering, and motivated to gather, information from a wide range of sources, thereby creating an unbiased information sphere. One must consider, however, the difficulty individuals face in recognising that they are the recipients of targeted political advertisements or campaign messages, and their ability to ‘step outside’ these selective information environments. Such stepping out may be difficult because, as Just and Latzer point out, “the market for attention – the central scarce resource in information societies – is increasingly being co-produced and allocated by automated algorithmic selection” (Just & Latzer, 2017, p. 239), influencing not only what individuals find or are exposed to but also the reputation of the source and their trust in it (Just & Latzer, 2017, p. 242). This complex interplay affects the ability of individuals, as consumers and voters, to discern the reliability and relevance of the information they find or that is presented to them. In effect, one’s online information reality is largely constructed by algorithmic selection.

In response to users’ concerns about the personalisation of messages, Facebook, the largest social media platform and the one at the forefront of current debate in the wake of the 2016 Cambridge Analytica controversy, developed two ad transparency mechanisms: a ‘why am I seeing this’ button, and an ‘Ad Preferences’ page. The first explains why a particular user is seeing a specific ad, while the second shows users a list of the information that Facebook has gathered about them and the sources of that information. These mechanisms provide users with some insight into personalisation practices, but they often offer incomplete, misleading, or vague information and explanations, and thus are of limited effectiveness in promoting ad transparency (Andreou et al., 2018). Moreover, users must be motivated to avail themselves of these mechanisms, and the information they receive only reveals that the advertisements they are viewing are selected specifically for them – they are not mechanisms for accessing unbiased or unfiltered advertisements. To address some of these limitations, Koene et al. (2015) suggest that the ‘Internet research community’ should develop monitoring tools 2 or ‘test kits’ that users could deploy to determine whether the level of personalisation on a site is acceptable. This approach is consistent with the use of ‘ad blocker’ plug-ins. These tools can provide valuable information to users with the motivation and technical know-how to deploy them, but again they only flag the fact that one is receiving a personalised message – the tools do not remove personalisation or message tailoring, nor do they inform others of the targeting and tailoring practices. Other transparency mechanisms such as ad registries that are being offered by or required of platforms (see platform accountability, below) offer solutions that require less technical skill, but still require significant and continued efforts on the part of users, who face a personalised information environment by default. Users can deploy strategies to circumvent personalisation, including deleting cookies, using search engines that do not track, providing false information, or carrying out random online actions such as haphazardly clicking on links (Bozdag & van den Hoven, 2015; Pariser, 2011), but these strategies will undermine the desired as well as undesired effects of personalisation, and they require conscious action on the part of the user. In other words, users must work, and work diligently, to identify and escape the effects of personalised messaging.
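A ‘test kit’ of the sort Koene et al. (2015) envisage could, for example, compare the ads shown to a user with those shown to neutral control profiles visiting the same pages. The sketch below is a hypothetical illustration of that comparison, with invented ad identifiers; consistent with the limitation noted above, it can only flag likely personalisation, not remove it.

```python
# Hypothetical 'test kit' sketch: ads unique to the user's profile, relative to
# neutral control profiles visiting the same pages, are flagged as likely
# personalised. All identifiers are invented placeholders.

user_profile_ads = {"ad_17", "ad_23", "ad_41", "ad_58"}
control_profile_ads = [
    {"ad_17", "ad_23", "ad_90"},  # control browser A
    {"ad_17", "ad_23", "ad_77"},  # control browser B
]

seen_by_any_control = set().union(*control_profile_ads)
likely_personalised = user_profile_ads - seen_by_any_control

personalisation_level = len(likely_personalised) / len(user_profile_ads)
print(likely_personalised)    # {'ad_41', 'ad_58'}
print(personalisation_level)  # 0.5
```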

Media and information literacy initiatives to improve user skills and knowledge are also important responses, promoted most recently in relation to foreign interference with democratic elections (Tenove, Buffie, McKay, & Moscrop, 2018). In Britain, for example, the House of Commons Digital, Culture, Media and Sport Committee highlighted the importance of digital literacy, recommending in its 2018 report that “digital literacy should be a fourth pillar of education, alongside reading, writing and maths” (DCMS, 2018, p. 312), and suggesting that a comprehensive educational digital literacy framework be funded through a levy on social media companies. These initiatives seek to empower users by giving them the knowledge and skills to separate ‘fake news’ and ‘alternative facts’ from real content (Cooke, 2018). Although they are aimed at the more general issue of digital literacy and training (Stoddard, 2014), they help to increase awareness of the possible dangers in online information flows and may sensitise people to biases in political information. It is important to note, however, that media literacy campaigns will be less effective in protecting audiences against the subtle types of manipulation enabled by psychographic profiling, which often involve small changes to messages that engage processing heuristics and biases operating below the level of consciousness.

Platform accountability

A second avenue for policy responses to sophisticated voter analytics and online personalised and micro-targeted political communication is to require more accountability on the part of internet platforms. In general, such accountability would take the form of disclosure of who is sponsoring ads and how those ads are being targeted - in effect, a form of algorithmic transparency. Since most countries already have some form of disclosure laws, this might be viewed as an incremental change and thus engender minimal opposition. Rubinstein proposes, for example, that disclosure of personal information practices could be required of candidates and other electoral actors, and of the data brokers who make personal information available to them (Rubinstein, 2014, pp. 910-921). The options we discuss below are instead directed at online platforms rather than at candidates or electoral actors. There appears to be interest in several countries in placing more responsibility on platforms.

For example, in December 2018, Canada enacted the Elections Modernization Act, which requires platforms to maintain a record of the political and partisan advertisements they deliver, beginning a year before an election and retained for two years afterwards (George-Cosh, 2019). Also in 2018, the Washington State Public Disclosure Commission ruled that the State’s political advertising disclosure requirements applied to online advertising. The requirements included disclosure of: the ad; who or what the ad was supporting or opposing; the name and address of the ad’s purchaser; the ad’s cost; and, for digital ads, the total number of impressions and demographic information about the audiences targeted and reached, to the extent that information is collected by the commercial advertiser (Sanders, 2019). At the national level in the US, the proposed Honest Ads Act is similarly designed to enhance the integrity of the democratic process by extending the disclosure requirements regarding who has paid for political ads from traditional media to the online environment. With respect to targeted audiences, the bill would require large digital platforms with at least 50,000,000 monthly viewers to maintain a public file of all electioneering communications, which “would contain a digital copy of the advertisement, a description of the audience the advertisement targets, the number of views generated, the dates and times of publication, the rates charged, and the contact information of the purchaser.” Although the bill has bipartisan sponsorship, it does not have the support of the Republican leadership.
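To make the disclosure elements concrete, the sketch below models what a single entry in such a public file might contain, mirroring the items the bill enumerates. The field names and values are our own illustrative choices, not a schema defined by the Honest Ads Act, the Washington State rules, or any platform.

```python
# Illustrative record structure for a public ad file; fields mirror the disclosure
# elements described above, but names and values are invented.
from dataclasses import dataclass
from datetime import date

@dataclass
class PoliticalAdRecord:
    ad_copy_url: str         # digital copy of the advertisement
    target_description: str  # description of the audience targeted
    impressions: int         # number of views generated
    first_shown: date        # dates of publication (simplified to dates only)
    last_shown: date
    rate_charged_usd: float  # rate charged
    purchaser_contact: str   # contact information of the purchaser

record = PoliticalAdRecord(
    ad_copy_url="https://example.org/ads/123.png",
    target_description="women aged 30-45 interested in healthcare, state X",
    impressions=48210,
    first_shown=date(2020, 9, 1),
    last_shown=date(2020, 9, 14),
    rate_charged_usd=1250.00,
    purchaser_contact="Example PAC, 123 Main St",
)
print(record)
```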

Some companies, such as Facebook and Twitter, have voiced some support for the proposed Honest Ads Act and adopted some of its requirements voluntarily (Newton, 2018a). In May 2018, Facebook began requiring a “paid for” label at the top of ads on Facebook and Instagram, with a link to a page with information about the cost of the ad and the demographic breakdown of the intended audience, including age, location, and gender. This requirement addresses targeting generally but not targeting based on more sophisticated voter analytics, including psychographic profiling. Facebook has also created an Ad Library containing ads from the last seven years and has established a partnership with an academic team to facilitate research about the nature and implications of online political advertising (Newton, 2018b). Twitter has instituted similar rules requiring disclosure of ad sponsors and has established an Ads Transparency Center to provide more detailed breakdowns of ad spending and targeting demographics (Statt, 2018).

It is unclear how effective these self-regulatory initiatives will actually be – and these companies have not been willing to comply with government mandates. For example, in response to the Canadian Elections Modernization Act, Google decided to refrain from carrying any political ads rather than comply with legislation designed to support greater scrutiny of online advertising (Dubois et al., 2019). Google and Facebook responded similarly to the Washington State requirements, banning political advertisements rather than following the requirements. The companies argued that the burden of determining whether an ad was political was ‘enormous’ and that it might be ‘technologically impossible’ to know what ads are actually running on their platforms (Sanders, 2018). Google, for example, sells advertisement space on web pages through a real-time bidding process that auctions the ad ‘slots’ visible to a specific viewer who is visiting a web page. The process takes place in a fraction of a second, and the platform (Google in this case) may know only the identity of the successful bidder, and not the content of the ads placed by that bidder.
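The sketch below illustrates, in highly simplified form, the auction step described above: bidders compete for a single ad slot and the highest bidder wins. We use a generic second-price rule for illustration; the actual mechanisms used by Google or other exchanges are more complex, and the bidder names and amounts are invented.

```python
# Simplified real-time bidding sketch: one ad slot, sealed bids, second-price rule.
# The exchange learns who won, not necessarily what creative the winner will serve.

def run_auction(bids):
    """bids: dict of bidder_id -> bid in USD. Returns (winner, price_paid)."""
    if not bids:
        return None, 0.0
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, top_bid = ranked[0]
    price_paid = ranked[1][1] if len(ranked) > 1 else top_bid  # second-price rule
    return winner, price_paid

bids = {"campaign_A": 2.40, "retailer_B": 1.90, "campaign_C": 1.10}
print(run_auction(bids))  # ('campaign_A', 1.9)
```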

In the Canadian and US contexts, it is also important to note that the accountability required is itself limited, covering only ‘official’ political advertisements and leaving entirely unregulated other forms of influential online political speech, including bots, influencer marketing, and paid ‘audience builders’ (Reepschlager & Dubois, 2019). Political messages, including those constituting foreign influence, will no doubt slip through the cracks of any system designed to identify them. Platforms, however, have addressed comparable technical challenges in disrupting online communications by terrorist groups (Global Internet Forum to Counter Terrorism, n.d.); the same will to act, and the same kinds of solutions, could be applied to the identification of online political advertising.

The EU has gone a bit further than Canada and the US in addressing the responsibilities that platforms have with respect to transparency and political advertisements. In April 2018, the European Commission proposed an EU wide policy to counter online disinformation, which was later finalised with input from the major platforms including Google, Facebook and Twitter. By signing this Code of Practice on Disinformation, these platforms are responsible for:

  • Ensuring transparency about sponsored content, in particular political advertising, as well as restricting targeting options for political advertising and reducing revenues for purveyors of disinformation;
  • Providing greater clarity about the functioning of algorithms and enabling third-party verification;
  • Making it easier for users to discover and access different news sources representing alternative viewpoints;
  • Introducing measures to identify and close fake accounts and to tackle the issue of automatic bots;
  • Enabling fact-checkers, researchers and public authorities to continuously monitor online disinformation (European Commission, 2018).

The policy also provides support for a network of fact-checkers and calls on “Member States to scale up their support of quality journalism to ensure a pluralistic, diverse and sustainable media environment”. This is largely a self-regulatory tool, but it has also been described as a co-regulatory instrument given the Commission’s involvement in its development and oversight (Leerssen et al., 2019). 3 There appears to be growing support in Europe for efforts such as these. For example, Sofia Karttunen, writing on the LSE Media Policy Project blog, argues: “Perhaps it would be time for European regulators to take a closer look at the algorithms of social media platforms, which determine which content is displayed to which person and run the risk of creating so-called ‘echo-chambers’ and ‘filter bubbles’ that can amplify certain communications over others… and can create social and behavioural change” (Karttunen, 2018). Mittelstadt (2016) proposes ‘algorithm auditing’ as an ‘ethical duty for providers of content personalisation systems to maintain the transparency of political discourse’ (Mittelstadt, 2016, p. 4991). He situates this duty in relation to the EU General Data Protection Regulation (GDPR), 4 which requires data processors to explain the logic of automated decision-making, and suggests that algorithmic auditing could be carried out by a regulatory body “to oversee service providers whose work has a foreseeable impact on political discourse by detecting biased outcomes as indicated by the distribution of content types across political groups” (Mittelstadt, 2016, p. 4998).
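To give a sense of what the audit Mittelstadt proposes might examine in practice, the sketch below compares how content categories are distributed across two political groups and flags large gaps. The counts, categories, and the 0.2 threshold are invented for illustration; a real audit would require far richer data and methods.

```python
# Rough audit sketch: compare the distribution of content categories served to two
# political groups and flag large gaps. All numbers below are invented.

def distribution(counts):
    total = sum(counts.values())
    return {category: n / total for category, n in counts.items()}

def total_variation(p, q):
    """Half the L1 distance between two category distributions (0 = identical)."""
    categories = set(p) | set(q)
    return 0.5 * sum(abs(p.get(c, 0) - q.get(c, 0)) for c in categories)

served_to_group_a = {"immigration": 120, "economy": 300, "healthcare": 180}
served_to_group_b = {"immigration": 340, "economy": 160, "healthcare": 100}

gap = total_variation(distribution(served_to_group_a), distribution(served_to_group_b))
print(round(gap, 2))  # 0.37
print("flag for review" if gap > 0.2 else "within tolerance")
```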

Judicial intervention

Even if governments impose more effective regulations on internet platforms, political actors, or data brokers, these regulations are likely to be challenged in some countries, especially in the US, on the grounds that they restrict free speech rights. In such cases, it will be up to the courts to determine the validity of the restrictions and the appropriate balance between free speech and the other rights and interests that individuals have. Based on our readings of democratic theory and judicial rulings, particularly in the US, we believe that four strains of thinking may provide some justification for restricting micro-targeted voting messages, especially those employing psychographic profiling: a more expansive view of corruption; more attention to the rights of listeners, including the right against compelled listening; application of the right to receive information; and an expanded notion of the rights of voters. Each of these is discussed briefly below.

Corruption

Since 1976 in Buckley v. Valeo, the Supreme Court has consistently held that restrictions on campaign spending are unconstitutional unless there is a compelling interest outweighing the free speech interest. To date, the Court has limited such compelling interests to corruption, narrowly defined as quid pro quo corruption. Walker Wilson (2010) argues for a more expansive view of “corruption” that would address “the relationship between money and potentially manipulative communication strategies” (Walker Wilson, 2010, p. 740), suggesting that “the definition of corruption ought to be expanded to include the potential for distortion in voting behaviour as a result of heavy-handed psychological tactics” (Walker Wilson, 2010, p. 741). As she notes, “liberal democracy depends upon a free and willing voting public, and a voting process that is unencumbered by systematic, wide-scale manipulation by any segment of the public, individual candidate, or political party” (Walker Wilson, 2010, p. 742).

Rights of listeners

The rights of listeners have arguably been under-appreciated, especially in two areas. The first is when speakers would prefer not to speak about something that could negatively affect them. An example, as Kendrick (2017) points out, arises in product labelling and disclosure: the public might like to know whether food contains genetically modified ingredients, but food producers prefer not to say (Kendrick, 2017, p. 1800). The second area is when courts themselves have shown little attention to the rights of listeners, in part because speakers are the parties invoking free speech claims. Kendrick notes that this has occurred in cases involving net neutrality rules, where US courts might have pointed out that such rules served listeners’ rights but instead focused on the rights of speakers. Similarly, decisions giving search engines immunity from fair competition laws have not acknowledged that listeners’ rights might be furthered by the application of such laws (Kendrick, 2017, p. 1805).

Related to the rights of listeners to hear are the rights of listeners not to hear. In the US, as Corbin (2009) points out, the ‘captive audience’ doctrine has provided some protection for ‘unwilling listeners’, especially when combined with privacy interests, such as being subjected to protesters in front of one’s home. According to the captive audience doctrine, private speakers cannot always foist their speech onto unwilling listeners. In order for the government to restrict private speakers, listeners should not be able to easily avoid the message, thus implicating listeners’ privacy interests. This would be especially true if the speaker follows the listener, so that the listener suffers repeated exposure (Corbin, 2009, pp. 944-50). One relevant question is whether physical captivity can be compared to online captivity: could, for example, being ‘followed’ by a message in the online world constitute captivity similar to that experienced by an unwilling audience followed by a speaker in the physical world?

In the EU, freedom of expression entails a right not to listen and a right to refuse information, even if it might be beneficial or valuable, which restricts government involvement in providing a level of information exposure diversity that could infringe individual freedom. However, as Helberger (2012) notes, this interpretation seems to assume that the diversity to which one is exposed is the result of media sources reaching an undifferentiated audience instead of a targeted audience. Moreover, freedom of expression “as a constitutional value, does not only require policy makers to refrain from interferences. It can, under certain circumstances, at least in Europe, create positive obligations to actively protect and promote the realisation of people's right to freedom of expression, part of which is the ability to form one's opinions from diverse sources” (Helberger, 2012, p. 72). Helberger points out that “finding and accessing the kind of diversity that people may seek is also a matter of design aspects, many of which are principally invisible to users” (Helberger, 2012, p. 79). The Council of Europe in 2007 recognised “in particular the importance of transparency regarding the listing and prioritization of information provided by search engines with regard to the right to receive and impart information” (Helberger, 2012, p. 83). More recently, in 2018, the Council explicitly addressed the need for member states to take measures to “enhance users’ effective exposure to the broadest possible diversity of media content” (Bodó et al., 2017, p. 15).

Rights to access information

Related to the rights of listeners to hear is the right to access information. This has played out primarily with respect to access to government information, as enshrined in freedom of information laws, and with respect to libraries’ rights to provide information to the public (Mart, 2003). Language in Buckley v. Valeo (1976) reflects the importance of this right to access, noting that the First Amendment “was designed to secure the widest possible dissemination of information from diverse and antagonistic sources, and to assure unfettered interchange of ideas for the bringing about of political and social changes desired by the people” (Buckley v. Valeo, 1976, pp. 48-9). As far back as 1943, in Martin v. Struthers, the Supreme Court recognised a constitutional right to receive information, noting that the value to be protected is the “vigorous enlightenment” of the people. In 1969, in Red Lion v. FCC, Justice White wrote: “It is the right of the viewers and listeners, not the right of the broadcasters, which is paramount.... It is the right of the public to receive suitable access to social, political, aesthetic, moral, and other ideas and experiences which is crucial here” (Mart, 2003, p. 178). In Board of Education v. Pico (1982), Justice Brennan, writing for the plurality, opined that “the right to receive ideas is a necessary predicate to the recipient’s meaningful exercise of his own rights of speech, press, and political freedom” (Mart, 2003, p. 181). The right to receive or access information may also provide a basis for legitimate restrictions on the use of sophisticated analytics in targeting political messages.

Rights of voters

Derfner and Hebert argue that voting should be treated as a fundamental right, protected by the First Amendment as a form of voice and expression (Derfner & Hebert, 2016, p. 485). Indeed, they find an argument for this in Buckley itself, where the Court stated that “[i]n a republic where the people are sovereign, the ability of the citizenry to make informed choices among candidates for office is essential” (Derfner & Hebert, 2016, p. 114) and that the “central purpose” of the First Amendment is to ensure that “healthy representative democracy [can] flourish” (Derfner & Hebert, 2016, p. 116). As Derfner and Hebert say, “voters take the information that is put into the marketplace of ideas and ultimately make a decision about which view to adopt and which candidate or political party best represents it” (Derfner & Hebert, 2016, p. 489). If the Court were to recognise more directly that voting itself is an expressive act and that the purpose of the First Amendment is to enable that expressive act, then voting would be brought under the full protection of the First Amendment (Derfner & Hebert, 2016, pp. 489-90). Kendrick similarly argues that freedom of speech can be viewed as derived from the right to vote, and that because individuals have the right to vote, they have a claim to information relevant to voting (Kendrick, 2017, p. 1789). Elevation of the rights of voters could provide stronger justification for restrictions on the targeting of political speech.

Conclusion

Governments in many jurisdictions have demonstrated a willingness to put some limitations on political speech and are increasingly recognising the dangers of highly personalised political messaging. Regulation is of increasing importance because both the sophistication and the penetration of digital marketing techniques have increased in the electoral context (Chester & Montgomery, 2019). The strongest protection of the rights of voters would arguably be to prohibit micro-targeted or personalised political advertising entirely. This would avoid difficult line-drawing between different types of profiling, but it would also challenge advocates of political speech. Moreover, the political reality is such that there is likely to be strong pushback against such a prohibition from political operatives, including campaigns, political advertising and consulting agencies, and platforms. Voters themselves may even express, as some consumers do, a preference for targeted advertisements; alternatively, they might reject targeted political advertisements, consistent with research suggesting a similar attitude toward targeted advertisements in general (Turow et al., 2009). Instead of a universal ban on personalised political communication, what is needed is clarification of which forms of targeting are problematic. Moreover, arriving at the ‘right’ policy framework will require multisectoral consultation open to input from all stakeholders, including government, platforms, and civil society organisations (Marda & Milan, 2018).

Our analysis indicates that micro-targeted political ads based on algorithmic profiling of big data sources about subsets of individuals have the potential to facilitate further polarisation of politics and the manipulation of voters’ decision-making capacity. We argue that micro-targeted ads employing psychographic profiling pose the greatest dangers because they are even more opaque, insidious and powerful, exploiting the psychological vulnerabilities of individuals - in effect, treating citizens as ‘suckers’. Although it may be technically difficult to operationalise psychographic profiling and to identify ads based on such criteria, we hope we have identified in a meaningful way the dangers of such messaging and outlined possible avenues for regulating them. As democracies begin to grapple with these dangers, the most effective path forward may be through multi-stakeholder or co-regulatory mechanisms, as discussed above with respect to the European Commission’s Code of Practice on Disinformation.

References

Andreou, A., Venkatadri, G., Goga, O., Gummadi, K., Loiseau, P., & Mislove, A. (2018). Investigating ad transparency mechanisms in social media: A case study of Facebook's explanations. Proceedings of the 2018 Network and Distributed System Security Symposium. https://doi.org/10.14722/ndss.2018.23191

Bailenson, J. N., Iyengar, S., Yee, N., & Collins, N. A. (2008). Facial similarity between voters and candidates causes influence. Public Opinion Quarterly, 72(5), 935–961. https://doi.org/10.1093/poq/nfn064

Bakshy, E., Messing, S., & Adamic, L. A. (2015). Exposure to ideologically diverse news and opinion on Facebook. Science, 348(6239), 1130–1132. https://doi.org/10.1126/science.aaa1160

Barocas, S. (2012). The price of precision: Voter microtargeting and its potential harms to the democratic process. Proceedings of the First Edition Workshop on Politics, Elections and Data - PLEAD ’12 (pp. 31-36). https://doi.org/10.1145/2389661.2389671

Benkler, Y. (2006). The wealth of networks: How social production transforms markets and freedom. New Haven: Yale University Press.

Bennett, C. J. (2016). Voter databases, micro-targeting, and data protection law: can political parties campaign in Europe as they do in North America? International Data Privacy Law, 6(4), 261–275. https://doi.org/10.1093/idpl/ipw021

Bodó, B., Helberger, N., & de Vreese, C. H. (2017). Political micro-targeting: a Manchurian candidate or just a dark horse? Internet Policy Review, 6(4). https://doi.org/10.14763/2017.4.776

Bodó, B., Helberger, N., Eskens, S., & Möller, J. (2019). Interested in Diversity: The role of user attitudes, algorithmic feedback loops, and policy in news personalisation. Digital Journalism, 7(2), 206–229. https://doi.org/10.1080/21670811.2018.1521292

Bozdag, E., & van den Hoven, J. (2015). Breaking the filter bubble: democracy and design. Ethics and Information Technology, 17(4), 249–265. https://doi.org/10.1007/s10676-015-9380-y

Bruns, A. (2019). Filter bubble. Internet Policy Review, 8(4). https://doi.org/10.14763/2019.4.1426

Buckley v. Valeo, 424 U.S. 1 (1976)

Burkell, J., & Regan, P. M. (2019). Voting public: Leveraging personal information to construct voter preference. In N. Witzleb, M. Paterson, & J. Richardson (Eds.), Big Data, Political Campaigning and the Law. Abingdon: Routledge.

Calo, R. (2014). Digital market manipulation. George Washington Law Review, 82(4), 995–1050. Retrieved from https://www.gwlr.org/calo/

Chadwick, P. (2018, October 7). This lawless world of online political ads is anti-democratic. The Guardian. Retrieved from https://www.theguardian.com/commentisfree/2018/oct/07/lawless-online-political-ads-anti-democratic

Chester, J., & Montgomery, K.C. (2017). The role of digital marketing in political campaigns. Internet Policy Review, 6(4). https://doi.org/10.14763/2017.4.773

Chester, J., & Montgomery, K.C. (2018, September). The influence industry: contemporary digital politics in the United States. Berlin: Tactical Technology Collective. Retrieved from https://cdn.ttc.io/s/ourdataourselves.tacticaltech.org/ttc-influence-industry-usa.pdf

Chester, J., & Montgomery, K. C. (2019). The digital commercialisation of US politics—2020 and beyond. Internet Policy Review, 8(4). https://doi.org/10.14763/2019.4.1443

Cooke, N. A. (2018). Fake news and alternative facts: Information literacy in a post-truth era. Chicago: ALA Editions.

Corbin, C. M. (2009). The First Amendment right against compelled listening. Boston University Law Review, 89(3), 939–1016. Retrieved from http://www.bu.edu/law/journals-archive/bulr/volume89n3/documents/CORBIN.pdf

Derfner, A., & Hebert, J. G. (2016). Voting is speech. Yale Law & Policy Review, 34(2), 471–491. Retrieved from https://ylpr.yale.edu/voting-speech

Digital, Culture, Media and Sport Committee (DCMS). (2018). Disinformation and ’fake news’: Final report. London: Parliament. Retrieved from https://publications.parliament.uk/pa/cm201719/cmselect/cmcumeds/1791/179102.htm

Dobber, T., Fathaigh, R. Ó., & Zuiderveen Borgesius, F. J. (2019). The regulation of online political microtargeting in Europe. Internet Policy Review, 8(4). https://doi.org/10.14763/2019.4.1440

Dubois, E., & Blank, G. (2018). The echo chamber is overstated: the moderating effect of political interest and diverse media. Information, Communication & Society, 21(5), 729–745. https://doi.org/10.1080/1369118X.2018.1428656

Dubois, E., McKelvey F., & Owen, T. (2019, April 10). What have we learned from Google’s political ad pullout? Policy Options. Retrieved from https://policyoptions.irpp.org/magazines/april-2019/learned-googles-political-ad-pullout/

Eftekhar, A., Fullwood, C., & Morris, N. (2014). Capturing personality from Facebook photos and photo-related activities: How much exposure do you need? Computers in Human Behavior, 37, 162–170. https://doi.org/10.1016/j.chb.2014.04.048

European Commission (2018, April 25). Tackling online disinformation: Commission proposes an EU-wide code of practice [Press release]. Retrieved from https://ec.europa.eu/commission/presscorner/detail/en/IP_18_3370.

Evans, E. C. (1917). A History of the Australian Ballot System in the United States. Chicago: University of Chicago Press.

Facebook (n.d.) Facebook ad library. Retrieved January 13 2020 from https://www.facebook.com/ads/library/?active_status=all&ad_type=political_and_issue_ads&country=US&impression_search_field=has_impressions_lifetime.

Floridi, L. (2016). Tolerant paternalism: Pro-ethical design as a resolution of the dilemma of toleration. Science and Engineering Ethics, 22(6), 1669–1688. https://doi.org/10.1007/s11948-015-9733-2

George-Cosh, D. (2019, March 5). Google bans political ads ahead of next Canadian federal election. BNN Bloomberg. Retrieved from https://www.bnnbloomberg.ca/

Global Internet Forum to Counter Terrorism (n.d.). Evolving an institution. Retrieved January 13, 2020 from https://www.gifct.org/about/.

Gorton, W. A. (2016). Manipulating citizens: how political campaigns’ use of behavioural social science harms democracy. New Political Science, 38(1), 61–80. https://doi.org/10.1080/07393148.2015.1125119

Helberger, N. (2012). Exposure diversity as a policy goal. Journal of Media Law, 4(1), 65–92. https://doi.org/10.5235/175776312802483880

Hine, D. W., Reser, J. P., Morrison, M., Phillips, W. J., Nunn, P., & Cooksey, R. (2014). Audience segmentation and climate change communication: conceptual and methodological considerations. Wiley Interdisciplinary Reviews: Climate Change, 5(4), 441–459. https://doi.org/10.1002/wcc.279

Hirsh, J. (2018, November 20). Canadian elections can’t side-step social media influence. Waterloo, Ontario: Centre for International Governance Innovation. Retrieved from https://www.cigionline.org/articles/canadian-elections-cant-side-step-social-media-influence

Hirsh, J. B., Kang, S. K., & Bodenhausen, G. V. (2012). Personalized persuasion: Tailoring persuasive appeals to recipients’ personality traits. Psychological Science, 23(6), 578–581. https://doi.org/10.1177/0956797611436349

Issenberg, S. (2016). The victory lab: The secret science of winning campaigns. New York: Broadway Books.

John, P., Cotterill, S., Richardson, L., Moseley, A., Stoker, G., Wales, C., & Smith, G. (2013). Nudge, nudge, think, think: Experimenting with ways to change civic behaviour. New York: Bloomsbury Academic Publishing.

Just, N., & Latzer, M. (2017). Governance by algorithms: Reality construction by algorithmic selection on the internet. Media, Culture & Society, 39(2), 238–258. https://doi.org/10.1177/0163443716643157

Kalla, J. L., & Broockman, D. E. (2018). The minimal persuasive effects of campaign contact in general elections: Evidence from 49 field experiments. American Political Science Review, 112(1), 148–166. https://doi.org/10.1017/s0003055417000363

Karttunen, S. (2018, September 20). Gearing up for the next European elections: will we see regulation of online political advertising [Blog post]. LSE Media Policy Project Blog. Retrieved from https://blogs.lse.ac.uk/mediapolicyproject/2018/09/20/gearing-up-for-the-next-european-elections-will-we-see-regulation-of-online-political-advertising

Kendrick, L. (2017). Are speech rights for speakers? Virginia Law Review, 103(8), 1767–1808. Retrieved from https://www.virginialawreview.org/volumes/content/are-speech-rights-speakers

Koene, A., Perez, E., Carter, C. J., Statache, R., Adolphs, S., O’Malley, C., ... & McAuley, D. (2015). Ethics of personalised information filtering. In T. Tiropanis, A. Vakali, L. Sartori, & P. Burnap (Eds), Internet Science, INSCI 2015, Lecture Notes in Computer Science, 9089 (pp. 123–132). Cham: Springer. https://doi.org/10.1007/978-3-319-18609-2_10

Kosinski, M., Bachrach, Y., Kohli, P., Stillwell, D., & Graepel, T. (2014). Manifestations of user personality in website choice and behaviour on online social networks. Machine Learning, 95(3), 357–380. https://doi.org/10.1007/s10994-013-5415-y

Kosinski, M., Stillwell, D., & Graepel, T. (2013). Private traits and attributes are predictable from digital records of human behavior. Proceedings of the National Academy of Sciences, 110(15), 5802–5805. https://doi.org/10.1073/pnas.1218772110

Kreiss, D. (2017). Micro-targeting, the quantified persuasion. Internet Policy Review, 6(4). https://doi.org/10.14763/2017.4.774

Lazer, D. (2015). The rise of the social algorithm. Science, 348(6239), 1090–1091. https://doi.org/10.1126/science.aab1422

Leerssen, P., Ausloos, J., Zarouali, B., Helberger, N., & de Vreese, C. H. (2019). Platform ad archives: Promises and pitfalls. Internet Policy Review, 8(4). https://doi.org/10.14763/2019.4.1421

Marda, V., & Milan, S. (2018, May 21). Wisdom of the crowd: Multistakeholder perspectives on the fake news debate [White paper]. Philadelphia: Internet Policy Review Observatory, Annenberg School of Communication. Retrieved from http://globalnetpolicy.org/wisdom-of-the-crowd/

Mart, S. N. (2003). The right to receive information. Law Library Journal, 95(2), 175–189. Available at https://www2.lib.uchicago.edu/~lar1/LIS450LI/mart2.pdf

Milan, S., & Agosti, C. (2019, February 7). Personalization algorithms and elections: Breaking free of the filter bubble. Internet Policy Review. Retrieved from https://policyreview.info/articles/news/personalisation-algorithms-and-elections-breaking-free-filter-bubble/1385

Mittelstadt, B. (2016). Auditing for transparency in content personalisation systems. International Journal of Communication, 10. Retrieved from https://ijoc.org/index.php/ijoc/article/viewFile/6298/1809

Motta, M. P., & Franklin Fowler, E. (2016). The content and effect of political advertising in US campaigns. In Oxford Encyclopedia of Politics. Oxford: Oxford University Press. https://doi.org/10.1093/acrefore/9780190228637.013.217

Newell, B. R., & Shanks, D. R. (2014). Unconscious influences on decision making: A critical review. Behavioral and Brain Sciences, 37(1), 1–19. https://doi.org/10.1017/s0140525x12003214

Newton, C. (2018a, June 7). Congress roasted Facebook on TV, but won’t hear any bills to regulate it. The Verge. Retrieved from https://www.theverge.com/2018/6/7/17387120/congress-facebook-tv-regulation-bills

Newton, C. (2018b, May 24). Facebook disclosure requirements for political ads take effect in United States today. The Verge. Retrieved June 7, 2019, from https://www.theverge.com/2018/5/24/17389834/facebook-political-ad-disclosures-united-states-transparency

Nickerson, R. S. (1998). Confirmation bias: A ubiquitous phenomenon in many guises. Review of General Psychology, 2(2), 175–220.

Pariser, E. (2011). The filter bubble: What the internet is hiding from you. New York, NY: Penguin Press.

Park, G., Schwartz, H. A., Eichstaedt, J. C., Kern, M. L., Kosinski, M., Stillwell, D. J., ... & Seligman, M. E. (2015). Automatic personality assessment through social media language. Journal of Personality and Social Psychology, 108(6), 934–952. https://doi.org/10.1037/pspp0000020

Pasquale, F. (2015). The black box society. Cambridge, MA: Harvard University Press.

Reepschlager, A., & Dubois, E. (2019, January 2). New elections laws are no match for the Internet. Policy Options. Retrieved from https://policyoptions.irpp.org/magazines/january-2019/new-election-laws-no-match-internet/

Resnick, B. (2018, March 26). Cambridge Analytica’s “psychographic microtargeting”: what’s bullshit and what’s legit. Vox. Retrieved from https://www.vox.com/science-and-health/2018/3/23/17152564/cambridge-analytica-psychographic-microtargeting-what

Rubinstein, I. S. (2014). Voter privacy in the age of big data. Wisconsin Law Review, 2014(5), 861–936. Retrieved from http://wisconsinlawreview.org/wp-content/uploads/2015/02/1-Rubinstein-Final-Online.pdf

Sanders, E. (2018, September 26). Big tech is fighting to change Washington’s pioneering rules on election ad transparency, The Stranger. Retrieved from https://www.thestranger.com/slog/2018/09/26/32825020/big-tech-is-fighting-to-change-washingtons-pioneering-rules-on-election-ad-transparency

Sanders, E. (2019, January 2). As 2019 begins, so does Facebook’s ban on local political ads in Washington state. The Stranger. Retrieved from https://www.thestranger.com/slog/2019/01/02/37628091/as-2019-begins-so-does-facebooks-ban-on-local-political-ads-in-washington-state

Schwartz, H. A., Eichstaedt, J. C., Kern, M. L., Dziurzynski, L., Ramones, S. M., Agrawal, M., … Ungar, L. H. (2013). Personality, gender, and age in the language of social media: The open vocabulary approach. PLOS ONE, 8(9), e73791. https://doi.org/10.1371/journal.pone.0073791

Slovic, P. (1995). The construction of preference. American Psychologist, 50(5), 364–371. https://psycnet.apa.org/doi/10.1037/0003-066X.50.5.364

Spencer, S. B. (2019). The problem of online manipulation. https://doi.org/10.2139/ssrn.3341653

Statt, W. (2018, May 24). Twitter reveals new guidelines and disclosure rules for political ads. The Verge. Retrieved from https://www.theverge.com/2018/5/24/17390156/twitter-political-advertising-guidelines-transparency-rules

Stoddard, J. (2014). The need for media education in democratic education. Democracy and Education, 22(1). Retrieved from https://democracyeducationjournal.org/home/vol22/iss1/4/

Sunstein, C. R. (2007). Republic.com 2.0. Princeton: Princeton University Press.

Sunstein, C. R. (2015). The ethics of nudging. Yale Journal on Regulation, 32(2), 413–450. Retrieved from https://digitalcommons.law.yale.edu/yjreg/vol32/iss2/6

Susser, D., Roessler, B., & Nissenbaum, H. (2018). Online Manipulation: Hidden Influences in a Digital World. https://doi.org/10.2139/ssrn.3306006

Susser, D. (2019). Invisible Influence: Artificial Intelligence and the Ethics of Adaptive Choice Architectures. Retrieved from https://philpapers.org/archive/SUSIIA-2.pdf

Susser, D., Roessler, B., & Nissenbaum, H. (2019). Technology, autonomy, and manipulation. Internet Policy Review, 8(2). https://doi.org/10.14763/2019.2.1410

Tenove, C., Buffie, J., McKay, S., & Moscrop, D. (2018). Digital threats to democratic elections: How foreign actors use digital techniques to undermine democracy [Report]. Vancouver: Centre for the Study of Democratic Institutions, University of British Columbia. Retrieved from https://democracy2017.sites.olt.ubc.ca/files/2018/01/DigitalThreats_Report-FINAL.pdf

Tufekci, Z. (2014). Engineering the public: Big data, surveillance and computational politics. First Monday, 19(7). Retrieved from https://firstmonday.org/ojs/index.php/fm/article/view/4901/4097

Turow, J., Delli Carpini, M. X., Draper, N. A., & Howard-Williams, R. (2012). Americans roundly reject tailored political advertising—At a time when political campaigns are embracing it [Departmental Paper]. Philadelphia: Annenberg School for Communication, University of Pennsylvania. Retrieved from https://repository.upenn.edu/asc_papers/522

Turow, J., King, J., Hoofnagle, C. J., Bleakley, A., & Hennessy, M. (2009). Americans reject tailored advertising and three activities that enable it [Departmental Paper]. Philadelphia; Berkeley: Annenberg School for Communication, University of Pennsylvania; Berkeley School of Law, University of California, Berkeley. Retrieved from https://repository.upenn.edu/cgi/viewcontent.cgi?article=1551&context=asc_papers

Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124–1131. https://doi.org/10.1126/science.185.4157.1124

Vaccari, C. (2017). Online mobilization in comparative perspective: digital appeals and political engagement in Germany, Italy, and the United Kingdom. Political Communication, 34(1), 69–88. https://doi.org/10.1080/10584609.2016.1201558

Walker Wilson, M. J. (2010). Behavioral Decision Theory and Implications for the Supreme Court's Campaign Finance Jurisprudence. Cardozo Law Review, 31(3), 679–748.

Warner, M. (n.d.). The Honest Ads Act. Retrieved January 13, 2020, from https://www.warner.senate.gov/public/index.cfm/the-honest-ads-act

Wong, J. C. (2018, Mar 19). ‘It might work too well’: the dark side of political advertising online. The Guardian. Retrieved from https://www.theguardian.com/technology/2018/mar/19/facebook-political-ads-social-media-history-online-democracy

Youyou, W., Kosinski, M., & Stillwell, D. (2015). Computer-based personality judgments are more accurate than those made by humans. Proceedings of the National Academy of Sciences, 112(4), 1036–1040. https://doi.org/10.1073/pnas.1418680112

Zarsky, T. Z. (2019). Privacy and manipulation in the digital age. Theoretical Inquiries in Law, 20(1), 157–188. https://doi.org/10.1515/til-2019-0006. Retrieved from https://www7.tau.ac.il/ojs/index.php/til/article/viewFile/1612/1713

Zittrain, J. (2014). Engineering an election. Harvard Law Review Forum, 127, 335–341. Retrieved from https://harvardlawreview.org/2014/06/engineering-an-election/

Zuiderveen Borgesius, F. J., Möller, J., Kruikemeier, S., Fathaigh, R. Ó., Irion, K., Dobber, T., … de Vreese, C. H. (2018). Online political microtargeting: Promises and threats for democracy. Utrecht Law Review, 14(1), 82–96. https://doi.org/10.18352/ulr.420

Footnotes

1. Sometimes these two terms are conflated and used interchangeably. However, Bakshy et al. (2015) distinguish ‘echo chambers’ as when “individuals are exposed only to information from like-minded individuals” and ‘filter bubbles’ as when “content is selected by algorithms according to a viewer’s previous behavior” (p. 1130) and Bruns (2019) distinguishes ‘echo chambers’ as a group choosing “to preferentially connect with each other to the exclusion of outsiders” and ‘filter bubbles’ as a group choosing “to preferentially communicate” (p. 4).

2. For example, researchers at the University of Amsterdam have developed ALEX to unmask the functioning of personalisation algorithms on social media platforms. See: https://algorithms.exposed (Milan & Agosti, 2019)

3. Such a co-regulatory approach may be particularly well suited as a governance mechanism, as Marda and Milan (2018) also demonstrate with respect to content regulation and fake news.

4. The GDPR places other restrictions on data collection and processing, as well as individual rights, that also limit micro-targeting (see Zuiderveen Borgesius et al., 2018 and Dobber et al., 2019). Dobber et al., for example, point out that data regarding people’s ‘political opinions’ falls within the category of sensitive data.

The rule of law in the time of coronavirus outbreak

A new regulation issued by the Israeli government on 16 March 2020 authorises the Israeli Security Agency (Shin Bet) to use technological means to track citizens in order to assist in containing the coronavirus epidemic. In particular, the regulation authorises the security agency to “receive, collect, and process technological information” concerning the location and activity of confirmed Covid-19 patients and potential carriers of the virus during the 14 days prior to their diagnosis, in order to identify the locations visited by the patient and the people they have been in contact with. The data are intended to be used by the Ministry of Health in epidemiological investigations. A second set of regulations authorises the police to obtain location data from mobile phone companies on confirmed coronavirus patients “for the purpose of warning the public or a specific person” without a court order. It further authorises the police to receive location data on citizens who are required to stay in home quarantine.

Jurists have raised serious concerns regarding the governmental seizure of unprecedented powers without sufficient checks and balances. In fact, the government has bypassed a decision of the parliamentary Interim Foreign Affairs and Defence Committee, which refused to approve the regulation without thoroughly examining its implications to ensure it reflects an adequate balance between preserving the public’s health and protecting individual rights.

A petition filed on 17 March 2020 with the Israeli High Court of Justice by civil activist Shahar Ben Meir demands that the court freeze the surveillance by the security service until oversight mechanisms are put in place. The Court will hold a hearing on the issue on 19 March.

The petitioners claim that the broad and vaguely defined surveillance authority violates the right to privacy while failing to satisfy the constitutional proportionality standard. The right to privacy is a fundamental right under Israeli law, secured by the Basic Law: Human Dignity and Liberty. Regulation which infringes basic rights is subject to the Limitations Clause in section 8: “There shall be no violation of rights under this Basic Law except by a law befitting the values of the State of Israel, enacted for a proper purpose, and to an extent no greater than is required.”

The current health crisis certainly demands that government protect lives, sometimes at the cost of privacy. Yet the question is whether less intrusive alternatives were sought before resorting to such drastic measures. South Korea, for instance, which has been praised by health experts for curbing the spread of the coronavirus, launched an app to be used by those already subject to a self-quarantine order, rather than subjecting the entire population to surveillance.

More importantly, fundamental rights are not a luxury that must yield to life-saving efforts in times of crisis. The mitigation of the current pandemic depends primarily on individuals’ compliance with health guidelines. It also requires collaboration: people willingly share information on their location and contacts because they wish to protect their neighbours and loved ones. Yet the self-discipline and social commitment required to confront the pandemic depend on a high level of trust. Surveillance of the type introduced by the new Israeli regulation is likely to undermine that trust and to encourage gaming and circumvention. People may leave their phones at home, use different SIM cards, or seek alternative means of communication to bypass surveillance. This may not only hamper public efforts to confront the health crisis, but also deepen distrust in government agencies and undermine social solidarity.

The coronavirus pandemic meets Israel at a moment of deep constitutional crisis. The current government suffers from fundamental distrust among citizens and lacks legitimacy, having failed to regain power over three election cycles.

Restoring trust in a time of emergency is essential for overcoming national crises. Restoring trust in our social contract requires compliance with the rule of law. Securing fundamental rights is therefore not a luxury in a time of crisis. It is a must for winning the fight against the virus, and for ensuring that we wake up in a free society at the other end of the crisis.

Judicial review of digital tracking measures in coronavirus outbreak

Two new pieces of regulation issued by the Israeli government on 16 March 2020 authorise the Israeli Security Agency (Shin Bet) to use technological means to track citizens in order to assist in containing the coronavirus epidemic. The first regulation authorises the use of technological means by the Shin Bet, while the second authorises the police to digitally track coronavirus carriers.

Three petitions were filed with the Israeli High Court of Justice, challenging these two sets of emergency regulations issued by the government. On 19 March 2020, at around 10pm, the High Court of Justice issued several important temporary orders:

First, it banned the police from tracking coronavirus patients (regulation 2) until further notice.

Second, it limited the powers granted to the security agency (regulation 1) to assist in tracking patients: the surveillance powers apply only to confirmed coronavirus patients (those who tested positive) and not to “suspected patients”. Third, and most importantly, the order bans the security agency from exercising any powers granted under regulation 1, beginning on Tuesday 24 March at noon, unless the measures are approved by the statutory committee of the Knesset (the Israeli parliament), which is assigned by law to oversee the use of such special measures.

The hearing of the other two petitions (challenging the authority of the government to use the secret service’s capabilities in a time of health crisis) will resume next week, following the filing of the government’s response on Sunday.

What does this mean?

The decision of the High Court of Justice is short and balanced, and may offer some guidance to other courts around the world facing similar challenges.  

On the one hand, it allows efforts to mitigate the epidemic and save lives to get underway. At the same time, however, it also sets limits on the use of power by the government. 

In the absence of any other oversight body, due to a deep political crisis in the midst of the coronavirus outbreak, the High Court of Justice has undertaken two important roles: 

First, it effectively applied initial oversight of the emergency measures undertaken by the government. During the hearing on 19 March, which lasted over 90 minutes, the justices questioned the representatives of the police and the security agency to learn more about the specific measures and procedures to be applied. This took place behind closed doors, without the presence of the press or the petitioners. It is far from ideal oversight, and it certainly does not substitute for oversight by the parliamentary subcommittee on Foreign Affairs and Defence. The parliamentary committee may also hold hearings behind closed doors, yet it may call experts and is more reflective of the views of elected representatives. Given the circumstances, however, even limited judicial review by the Court offered some reassurance.

Second, the order issued by the High Court of Justice may put pressure on the government to establish the parliamentary committees, and thereby restart an otherwise stalled constitutional engine.

As governments around the globe introduce emergency measures, courts in liberal democracies are tasked with the critical challenge of judicial review. Courts should not shy away from this duty, as it could be essential for overcoming the current crisis. Emergency times are dangerous times, not simply for lives and safety, but also for our freedoms. The checks and balances embedded in our democratic principles of separation of powers, fundamental rights and judicial review are especially critical in times of emergency. This has become a cliché, yet it is extremely challenging to protect these core values when lives and health, national security and economic safety are at stake.

Judicial review in a time of crisis is a feature, not a bug. It can restore trust in the checks and balances embedded in our constitutional structure. Trust in the rule of law is the bond that holds our society together as a living organism and protects it from falling apart. Trust and solidarity are particularly important in times of mass quarantine, shelter-in-place orders, and lockdowns on a global scale, which make individuals particularly vulnerable. Distrust of drastic measures may lead to social unrest, and may further destabilise our societies in critical times.

Generation NeoTouch: how digital touch is impacting the way we are intimate

This essay is part of Science fiction and information law, a one-time special series of Internet Policy Review based on an essay competition guest-edited by Natali Helberger, Joost Poort, and Mykola Makhortykh.

Note: This is the plain text version of the essay. For the full artistic version (really worth it!), please download this PDF.
Illustration: Annika Huskamp

TECH WEEKLY. Christine Wuerth. Sat 21 SEP 2039 14.00 BST.

When Barbara Wells got her first NeoTouch, she was over the moon. As one of the last teenagers in her school not to have the BCI (brain-computer interface), she had been feeling left out. It had taken her months of arguing with her tech-critical parents to get their permission. For her sixteenth birthday, she finally got her wish. “I can’t wait to finally experience digital touch,” she said in her online diary at the time. Now, just two years later, she has had the interface permanently deactivated. For Barbara, the negative effects greatly outweighed the benefits.

At first, adopting haptic communication did exactly what she had hoped for. “Before, I just constantly had to admit to people that I didn’t have it and explain why. It just made me feel really embarrassed. I mean, everyone else my age has it. People would be so surprised, and kind of suspicious. Like I was a weirdo.” With NeoTouch she finally felt part of her group, and more confident in making new friends and approaching boys.

Over the last decade, the adoption of NeoTouch has been fast and widespread, with an impressive 78% of teenagers between the ages of 12 and 17 using it (the age range in which it is legal to get the BCI set up, but only with parental approval). At this age, the tech is particularly common amongst girls. (This trend shows that, unfortunately, platonic touch is still far more common and accepted amongst girls than boys.)

This is a massive take-up in the eleven years since the technology first came to the market. But NeoTouch is not only popular amongst teenagers: the adoption rate has been surprising amongst all ages, most of all people in their 20s and 30s.

Soon, however, Barbara felt the pressure of being ‘always on’ - the expectation of being constantly accessible to her friends and her new boyfriend. “I don’t think our brains are designed to always be connected to others,” she says. “Even though in a way it really does feel the same as being touched, the other person isn’t actually there. And somehow that contradiction really started to mess with my head.”

Despite its undeniable success, NeoTouch has also prompted voices expressing concern. The last few years have seen an increase in people deactivating their devices and specialists questioning the effects of the technology on the users’ privacy and physical integrity. But there are always those that oppose new technologies, so is NeoTouch really any different to other kinds of digital communication?

A brief history of haptics

The arrival of NeoTouch on the market in 2028 redefined our understanding of haptic technology. Initially, the word haptics described the use of mechanical pressure, vibration, and motion to send impulses through the skin. Early attempts to incorporate touch into everyday technology like touch screens on portable devices were pretty basic by today’s standards. Tactile interfaces developed in the 2010s were mainly used to navigate through information by touching the screen rather than touching anything beyond the screen. Now, haptic technology has evolved to the point that we cannot imagine daily activities like online shopping and watching films without a tactile dimension.

Early development of haptics saw competition from various fields, from gaming and VR to medicine, and from the automotive to the sex toy (or teledildonics) industry. While VR was making the headlines, it was, in fact, the sex industry that first came close to using haptics for an emotive interpersonal connection. Once the communication industry realised the potential that haptics held for a more emotive form of digital communication, it started creating wearable devices incorporating vibration and muscle stimulation to express human touch. Smartphones had used vibrations as an alert mechanism for text messages and calls, but the new aim was to transform haptic input into a message in its own right.

The first Apple Watch, released in 2015, was in fact introduced as the company’s ‘most personal device’ ever: “… alerts aren’t just immediate. They’re intimate… We found a way to give technology a more human touch. Literally.”

If this doesn’t show how much haptic tech has developed in the past few decades, what does!? At the same time, developments in neuroscience and nanotechnology made brain-computer interfaces more versatile and precise, and the process of embedding them into the brain more routine and much less invasive. At first used only for medical devices, they soon became commonplace in mainstream products for cognitive and physical enhancement.

Fast forward to 2028, when Somas Technologies introduced NeoTouch – the first technology to create a tactile sensation directly via the brain rather than on the surface of the skin. By approaching the challenge of haptic communication from this angle, they managed to completely revolutionise the industry.

Mike Seymour’s new book “A new intimacy” investigates the rapid success of NeoTouch within the wider context of digital communication. “The height of globalisation, the ‘cult of the individual’, and ever busier lives in the early 21st century meant that we spent less and less time with our loved ones and in face-to-face interactions in general,” he writes. “It is no surprise that this coincided with the rise of digital communication and social media. After all, being lonely is literally unhealthy.” 

But these technologies – he claims – could not compensate for the loss of real face-to-face interaction. “Even though we don’t realise it, a major part of our communication is nonverbal, and the nature of digital, audio-visual communication means that most of that is lost. While we are not consciously aware of it, this still subconsciously diminishes the interaction. It’s just less fulfilling, if we don’t receive the same variety and density of cues.”

This paved the way for haptics in an attempt to create more emotional connections. Digital communication collapses the sense of spatial distance. In the past, this has been done mainly using vision and sound. But the haptic revolution aimed to create a digital space that allows us to experience synthetic touch as an immersive experience, creating a real sense of physical closeness. The greatest potential was seen in the improvement of long-distance relationships and in offering health benefits for touch-deprived people such as the elderly and the sick. “The special thing about touch is that it’s immediate and emotional. Being touched by someone makes us feel more connected to them.” Seymour explains. “Touch is our ‘private’ sense and it has so many benefits for our health and relationships.”

Despite the hype, early haptic devices were rather clunky and didn’t really live up to the expectations of ‘realistic’ experiences that people had come to expect from image and sound-based communication. A range of devices in the 2010s used localised vibration and muscle stimulation as symbolic messages of ‘touch’ to create a sense of physical presence. These novelty gadgets had very little use in everyday life. Later attempts relied on more and more advanced tech on or underneath the skin. However, this still confined the experience to specific parts of the body and was very limited in the quality and type of touch that could be conveyed.

The rise of NeoTouch

In the late 2020s bioelectronics finally left the lab and found their way into mainstream products for mental and physical health as well as cognitive improvement. These brain-computer interfaces also started being used to control devices remotely, directly through brain signals as well as to merge the human brain with artificial intelligence (AI). In the field of haptic technology, these interfaces made it possible to synthesise the experience of human touch without imitating it mechanically on the skin. Simply by stimulating the relevant brain areas directly, this new kind of haptic technology could create the physical illusion of touch in a very convincing way.

As Hannah Eisen, Designer at NeoTouch explained: “In the past technologists and scientists focused on simulating the physical process of touch. Obviously, they never got the technology to advance far enough to really create a realistic experience.

But then with the introduction of chip implants that interface directly with the brain came a massive shift in how we looked at the problem. Suddenly we were able to synthesise the holistic experience of touch instead of just recreating it mechanically.” (NeoTouch. An IA Lab documentary, 2037) 

As the first internal human-to-human interface, NeoTouch has – without a doubt – had the biggest impact on the nature of social interaction since the smartphone. In a video from 2028 that introduces the technology, the company claims: “NeoTouch lets you truly connect with another person at a distance. It enables instinctive, non-verbal communication through digital touch.” 

The video goes on to explain how it works: “The tactile interaction is received through your phone and then sent to the NeoTouch transducer: The Senser. This unit attaches to the skin behind the ear and communicates wirelessly with a network of nano-electronics in the brain to simulate the sensation of being in touch with another person. These neural implants serve as an internal brain-computer interface that controls and receives the communication. They interface directly with the somatosensory and motor cortex. It is the stimulation of these particular brain areas that allows us to create a realistic experience of touching and the sensation of being touched.” 

The Senser connects to the implants through ultrasound. Each chip is smaller than a grain of sand and can both pick up signals from the brain and send messages to the brain. That is how NeoTouch can send and receive the sensation of touch and create a natural interaction with another person. So how does it work?

The somatosensory cortex is a kind of map of our entire body. Stimulating targeted areas of this map can create a sense of touch anywhere on the skin. The motor cortex, on the other hand, controls the way we move and touch. Interfacing with these two brain areas means that NeoTouch can pick up our ‘thoughts’ of touching another person from our motor cortex and then send these impulses to the recipient’s somatosensory cortex to be experienced as touch.

“The interesting thing about the somatosensory cortex,” says Seymour, “is the way in which it doesn’t just process the objective quality of touch, it also does a kind of social and personal evaluation. That’s why NeoTouch is able to give you a very individual experience with different people.” 

Stuart Johnson, CEO of Somas Technologies, is quick to sell us NeoTouch as a valid replacement for real touch: “For the first time we are able to artificially trigger our sense of intimacy and the feeling of physical presence.”

Mike Seymour, however, is sceptical about whether synthetic touch can really deliver the same benefits as the real deal. “If someone touches me it’s not just about the sensation of their hand touching me. It also depends on the way the rest of their body is positioned towards mine, the mood, and place… In a way, it’s as if their whole body is touching mine. This element of touch is simply lacking in NeoTouch.”

Barbara, too, was aware of that subtle but significant difference: “It feels the same, but something is off. Something is missing.”

Touch is an element of nonverbal communication such as body language and eye contact. As such it is experienced and understood in the context of these other aspects of communication that combine into a complex, multi-sensory experience. This also means that it has unconscious effects on our emotions and behaviour towards others.

In this sense, touch is more than a sensation on the skin. We even speak of something being touching when it affects us emotionally. Connecting to another person through skin contact is deeply intuitive, emotive and full of meaning. Even before birth, we are connected to our mothers, and after birth touch is essential to growth, as well as physical and emotional development. To touch us, someone has to enter our private space. They have to literally be within arm’s reach. Those we are comfortable being in such close proximity to are generally the people we are also emotionally close to. This makes the connection between touch and intimacy evident; the connection between emotional closeness and physical closeness.

So what is the consequence of taking touch out of its natural context? How does it affect how we think of privacy and intimacy?

The matter of privacy

The limitations of synthetic touch were not the only reason that ultimately made Barbara decide to get rid of her NeoTouch. “I felt trapped; like I could never really be alone,” she admitted. “I mean, even before, I was constantly in contact with people through my phone. But the fact that it’s in my head, that’s just a different level, you know. It made me kind of paranoid.”

Barbara is not alone with these concerns. Privacy has become the main focus of discussion again in recent years, precisely because of BCIs such as NeoTouch. While many are surprised that anyone might still consider privacy of any importance, others wonder why we still need to fight for this right. Lawmakers should naturally respond to technological developments that move the goalposts of what data is ‘private’.

“A main reason for the continuing debate is the question of how to define privacy,” says lawyer and journalist Margot Bloom. “In the most basic sense, it can be defined as the control over aspects of our personal life such as our body, home, thoughts, and feelings, as well as our identity as a whole. It gives us the right to choose which of these ‘private parts’ of ourselves we allow others to access. In this way, the idea of personal privacy shaped the notion of data privacy.”

Our relationship with privacy has continuously evolved throughout history in line with changing societal norms. Historically, the more people we are surrounded by and connected to, the more we rely on privacy to carve out a domain where we feel safe. But we also have an innate need for connection. Bloom links this to our origins as a tribal species. “Humans have a fundamental need to be social and close to others. We depend on each other for survival, so we physically and emotionally crave connections. We instinctively want others to know what we are thinking and doing, and we want to feel ‘in touch’. That need is only heightened in a world where we are often spatially separated from our ‘tribe’. As a consequence, we share our thoughts and our bodies online to feel connected. But simultaneously, we need a space to be ourselves, protected from the judgment of others, and have the freedom to express ourselves without the need to perform according to societal norms.”

Since the beginning of the digital age, however, we have been sold the idea of transparency as a means to national security while visibility is portrayed as a measure of popularity and success. Now, our private sphere is at stake more than ever. On one side, this is because of technologies like NeoTouch that keep us constantly linked to others. On the other, it is the access we give companies to our most private data – our thoughts, and feelings, our intimate interactions.

Psychologist and researcher Melanie McLeod has been very outspoken about the dilemma faced especially by young people in trying to find a healthy balance between their private and public selves. “When being seen or felt digitally is desirable, not just for the experience itself, but because it equates to being popular or successful, privacy in the form of anonymity becomes an obstacle. There is a societal expectation on how much we share of ourselves. Or more precisely, these societal norms affect how much we want to reveal and how much we feel comfortable revealing. This stands at odds with our natural need for privacy. So we perceive it as less relevant. This trade-off is known as the privacy paradox, and it completely undermines the concept of consent. Our desires are subconsciously used against us in such a way that we don’t even want to say no anymore. We’d rather pay the price than feel alone.”

Consequently, any demands for more privacy are so at odds with our present social norms that they don’t find much support. The majority of us are now increasingly comfortable with exposing ourselves online. While the rest of us might still feel troubled about it, we are resigned to the fact that to function in our society we need to exist online. We are aware that any data collected and saved about us is a commodity to be traded and exploited.

To Bloom, the issue lies with people in power who look at privacy through a purely economic lens. “They just see the money they can make from your data. Privacy in any sense means a loss in profit. So they weaponise our need to share, to silently undermine our rights to privacy.”

Every year, more details are exposed about the kind of data that is collected and used to manipulate us without us giving explicit – or sometimes any – consent, or even knowing about it. Bloom is currently leading the campaign ‘Feel Safe’, which aims to change regulations on how companies like Somas Technologies are allowed to use the data they collect from our brains. “I have been lobbying for stricter legislation for years. When it was leaked that Somas Technologies had sold information about our interactions to health insurers, a line had been crossed, and I had to take action.”

But how has NeoTouch ended up at the forefront of these debates?

McLeod explains how this has to do with the way NeoTouch merges our physical body and our data body in an unprecedented way. “Accepting that NeoTouch creates a felt presence of another person also means that that individual intrudes on my personal space and of course my body in a way that was impossible through messages and calls. The fact that I can physically experience an infringement on my digital self also makes any harassment or attack much more threatening. This makes NeoTouch a unique weapon for cyberbullying. And lastly, the fact that the interface has access to my brain completely exposes certain layers of my data, experience and even control over my physical body to external companies and anonymous individuals.”

This gets us onto the threat of security breaches. Last year saw a wave of hacker attacks on the NeoTouch system. Even though it seems like Somas reacted quickly and there haven’t been any issues since, the incident caused an uproar about the safety of the NeoTouch network, and many people decided to deactivate their devices.

I spoke to several victims of the breach about the emotional repercussions of what they experienced. Mun Wei Chan reported a haunting incident in which a hacker had intercepted the link between him and his husband Rick, who was away for work in Australia at the time. “I was oblivious to the imposter until I woke up one night from the feeling of hands wrapped around my neck.” The experience has left its mark on Wei. Both he and his husband have since deactivated their devices and Wei has been seeing a therapist since the incident. He has also joined a larger group of victims in bringing a lawsuit against Somas Technologies for compensation.

Modern intimacy

“We don’t even have to go as far as anonymous hackers to seriously examine the question of who can touch whom,” argues McLeod. “I’m even more worried about our general desensitisation towards personal boundaries when it comes to touch. The great ease NeoTouch created to access another’s body seems to make people very blasé about the impact of touch, and the value of intimacy. Young people don’t learn to say no anymore. We don’t know if it is the physical distance that dilutes the sense of agency over their own body or something else, but I’m worried about this development.” 

Barbara’s story seems to validate McLeod’s concerns. “Shortly after I got NeoTouch I started dating Aaron. He seemed to be a nice guy, but he would always expect me to have my NeoTouch connected to his. Not that I don’t like that, but if I’m spending time with my friends or my parents it just feels wrong. And sometimes I just wanted time for myself. But he’d get really upset and annoyed and accused me of not wanting to be in the relationship. I felt like I had to be on NeoTouch to prove to him that I like him. When I ended up using NeoTouch less and less he started telling people at school that it’s because I’m frigid.”

This hits a nerve on the ever-present (yet often ignored) question of gender in the design and experience of digital technology. There is a difference in the online behaviour that society allows and expects from men compared to women. This, of course, is nothing new. But the physical distance and anonymity online heighten the expression of these biases. Questions of safety for women online have been raised from the very beginning of digital life but have yet to be resolved successfully. Women are much more often victims of cyberbullying and particularly of sexually charged remarks and threats. They are expected to share pictures and give access to their bodies. Simultaneously their mere presence online is often taken as an invitation for objectification and even abuse. The physical nature of NeoTouch raised the stakes as it poses a real and immediate threat to women’s bodies.

On a social level, NeoTouch has changed who we feel comfortable touching. A new study conducted by the British Government Office for Science shows that we spend on average 50 minutes per day in digital touch interactions, compared to just 10 minutes of physical touch. This might make us worry about digital touch replacing physical closeness, but at the same time, this is a vast increase in our openness to tactile interactions as a whole. The mere number of people we would be happy to touch has increased dramatically since the adoption of the technology. These are mainly people in our social circle; online and offline. In the same study, people reported having developed a stronger physical bond with family members. And, just as specialists had predicted, the most positive feedback has been from people in long-distance relationships.

Samantha Fry and Jacob Lundt have been living in different countries for the last three years and love the way NeoTouch has enabled them to keep up a physical relationship despite the distance. “We actually didn’t have NeoTouch before. We got it specifically for this time apart and don’t use it to interact with anyone else,” explains Jacob. Both of them are aware of the pitfalls of digital life and weighed up all their options before giving NeoTouch a go. “I think, like with everything else, you need to have a healthy reflected relationship to technology. Especially if this tech facilitates your human relationships. I see how this can be challenging for people.”

Despite this, they both agree that NeoTouch has been an invaluable asset to their relationship during this time apart. “It’s not just about sexual intimacy,” assures Samantha, “it’s about all those little everyday interactions through which you show your affection and attention. Stroking your partner’s arm or hair, hugging, falling asleep together… Honestly, if we didn’t have this, I’m not sure our relationship would have survived the last three years.”

McLeod, however, is concerned about relationships where digital touch is being used not to bridge a distance but to create one: “NeoTouch changed the expectation of emotional and physical availability. Even sexual availability. At the same time, it shields us from real intimacy. This creates a strange paradox. We don’t recognise how lonely we are, nor how we’ve slowly lost agency of our own body - the physical, emotional and digital.”

Seymour warns that spending time in digital interactions to avoid real-world problems might only intensify these issues. “It is the same for haptic technology as for any other kind of digital communication. People don’t get to practice basic non-verbal communication, so they are bad at it and then more insecure about interacting in person. This is most prominent in young people. Lacking these skills makes it harder to empathise, and to be truly connected. We settle for digital touch which cannot replace true intimacy. I’m really worried about how this is changing the perception and value of intimacy. Where our society was suffering from a hyper-vigilance of personal boundaries only 20 years ago, these boundaries seem to have all but disappeared now, and touch has become superfluous and banal. Some see these numbers of increased intimacy through NeoTouch as a positive result, but I’m more concerned that this is actually a symptom of the way we devalue touch through this lesser, more casual substitute. This could be affecting the way we value intimacy as a whole. If I don’t care who has access to my data and my body, how can I ever truly feel close to someone?”

McLeod spoke about the effect this has on the development of teenagers in an earlier interview: “I’m actively supporting the efforts to raise the age restriction on NeoTouch. We are already seeing a shift in behaviour in the younger generation. I’m sure we all remember how difficult and awkward it is to be a teenager. To have to deal with this new body and all these hormones and figure out how to be intimate with others. But to sidestep that development by interacting only through digital touch, at a so-called ‘safe distance’ is creating so many problems. Teenagers are not learning to emotionally connect, let alone to understand their own boundaries and needs, and those of others.”  (NeoTouch. An IA Lab documentary, 2037) 

Alone together

She is also interested in the connection between digital intimacy and loneliness. “In modern times – and especially in urban life – we retreat into our homes, and capsule ourselves off from other people. As a result, we feel more lonely and create evermore technologies to connect ourselves to others in increasingly immersive, realistic ways. In return, the fact that these connections happen in a digital space means we have fewer incentives to seek out physical spaces to connect.”

If we do leave our homes, many of our interactions still happen via digital channels and remove us emotionally from our physical environments and interactions. Even when out with friends, we are still connected to various devices, always aware of potential interactions waiting for us online. Can we really say we are fully present? This kind of multitasking created by technology keeps our attention constantly split between real-life interactions and digital ones.

We are in touch with someone digitally but often simultaneously continue with our real-life activities, only partly paying attention to the digital experience. This seriously affects how much we connect to the person at the other end and how much we can ‘profit’ from the connection.

Bloom insists that there is no way to truly avoid the influence technology has on human interactions. “We need to educate young people, not just in digital literacy but also in a new literacy that bridges the digital and the physical. And beyond that, we need to have more of a debate around the social and ethical implications. After all, changes in mindsets of the individual result in changes in society as a whole and therefore concern all of us, irrespective of age.”

But what scale of data breach would we need to understand the severity of the threat? What scientific findings will make us sit up and pay attention to the possible dangers of technologies like NeoTouch?

Research into the effects, whether positive or negative, has so far been inconclusive. The consequences can be observed in so many different aspects of life that it is difficult to see the bigger picture. It is hard to show the exact correlation between changes in phenomena spanning from empathy and understanding to our sense of identity and willingness to take risks.

In the meantime, Barbara is happy and certain about her decision to have NeoTouch deactivated. “I guess I had to find out for myself what it is like, and it was kind of hard to ultimately have it switched off. A lot of my friends don’t understand my decision. The first few days after, I felt kind of isolated. But more than that it was just a relief. And now I don’t miss it at all.” She does, however, think that it made her more aware of touch. “I’m more strict with who I let touch me, yes. But I think I’m actually more physical with my family now. My close friends, too. I hope that maybe they’ll slowly realise how much better the real thing is.”

Imminent dystopia? Media coverage of algorithmic surveillance at Berlin-Südkreuz

Introduction

Mass-casualty terrorism, migration sparked by raging conflicts and humanitarian crises, and transnational corporate crime give rise to another era of unpredictability. Amid these global challenges, national governments are tasked with providing for the defence and security of the state, its citizens, institutions, and economy. In a quest to live up to this challenge, recent technological advancements seem to offer promising solutions and are often justified as a means to regain control. Among the most popular tools in this context are surveillance technologies, which are certainly not novel. Yet recent strides towards automation open up unforeseen possibilities. Facial recognition software, for instance, enables the identification of individuals from a picture or video. While 'facial recognition' has become a catch-all term, it should be noted that facial recognition systems scan a person's face in an attempt to match it against a database, while facial detection systems simply scan for the presence of faces (Roux, 2019).
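To make that distinction concrete, the following minimal Python sketch separates the two operations. It is an illustration only, not a description of the proprietary systems tested at Südkreuz: it assumes OpenCV's bundled Haar cascade for detection, and the embedding function passed to recognise_face is a hypothetical placeholder for any face-embedding model.

import cv2
import numpy as np

# Facial detection: locate faces in a greyscale frame, returning bounding boxes.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(frame_gray):
    return detector.detectMultiScale(frame_gray, scaleFactor=1.1, minNeighbors=5)

# Facial recognition: compare an embedding of a detected face against a
# database of known identities (name -> reference embedding vector).
# 'embed' is a hypothetical stand-in for any face-embedding model.
def recognise_face(face_img, database, embed, threshold=0.6):
    query = embed(face_img)
    best_name, best_dist = None, float("inf")
    for name, reference in database.items():
        dist = float(np.linalg.norm(query - reference))
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist < threshold else None

The point of the sketch is simply that recognition presupposes a reference database of identities to match against, which is precisely where the questions of data protection discussed below arise.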

Surveillance technologies, particularly facial recognition software, get heavily promoted through national and EU funded programmes (Moorstedt, 2017). They are not only promoted as a solution to globalised crime but also as a boost to the growing EU security economy (Möllers & Hälterlein, 2013; OECD, 2004, p. 21). On the hunt for a panacea, it is easy to overlook that the creation and implementation of algorithms is not just the essence of mathematics. It is a social practice. Accordingly, the technological wiring of infrastructure through surveillance technology is a deeply social endeavour. Science and Technology Studies (STS) scholars make important contributions to the exposure of the complex social, political and cultural dimensions that questions of science and technology entail (Jasanoff, 2005; Tiles & Oberdiek, 1995; Verbeek, 2011; Winner, 1980). Technologies are often framed as the answer to security threats but are prone to creating a myriad of other issues. In the light of these complexities, STS offers compelling conceptual lenses, which can help foster comprehensive debates at the intersection of science, technology, and the field of security studies (for a more in-depth discussion on this intersection see Binder, 2016).

Discourse analysis is a valuable entry point to controversies on emerging technologies, as verbal texts provide important insights into the underlying socio-political currents. News reports, feature articles and commentary pieces are accessible sources for analysing the reception of new technologies, as well as the construction of identities, risks, threats and imaginaries of desirable futures.

In line with how STS scholars approach their object of study, this paper discusses the first phase of a pilot project of facial recognition technology at Berlin’s railway station Südkreuz, which was carried out from August of 2017 to July 2018. The project was initiated by the Ministry of the Interior, federal and state police and is supported by the incumbent German railway company Deutsche Bahn (Bundesministerium des Inneren, 2017; Horchert, 2017). The pilot project at Südkreuz quickly became a catalyst for media attention, spurring discourse on the efficiency and legitimacy of surveillance technology in the commentary, technology and politics sections of newspapers and online magazines and blogs. The headlines of the coverage of the pilot project in major outlets like Spiegel Online and Süddeutsche Zeitung read “Orwell and Kafka meet at the train station” (Stöcker, 2017) and “they see us”1 (Moorstedt, 2017). These headlines already hint at implications of structural power, which have a distinct presence throughout this discourse. This paper draws on discourse analysis to point out how the relationship between the public and the state is represented, how automated surveillance technology is linguistically framed and which problematisations were associated with the technology deployed at Berlin-Südkreuz.

First, I will go into the details of my approach to discourse analysis. To enable the sense-making process that discourse analysis entails, I will introduce the broader socio-political context by briefly describing the relationship between the state and surveillance technology in Germany. I will also retrace the modalities and challenges that emerged with the pilot project at Berlin-Südkreuz. This is followed by the discourse analysis, in which I introduce and interpret the linguistic representation of Berlin-Südkreuz in media discourse. Finally, I will situate algorithmic surveillance within (post-)panoptic theory and show how the case at hand relates to the work of one of the most important post-panoptic theorists, Shoshana Zuboff (1988).

Methodology

As mentioned before, understanding (surveillance) technology as a social practice is of utmost importance. This corresponds with STS' interest in the cultural, political and social conditions under which technology, in this case, automated surveillance technology, is developed (Jasanoff, 2005, p. 248). Discourse analysis is most often employed to analyse how written text affects the reader and can help us understand how social reality is produced (Evans, 2013; Phillips & Hardy, 2002, p. 6).

With the rapid development and implementation of increasingly sophisticated surveillance technologies, it is perhaps unsurprising that the social, cultural and political impacts of these technologies have become a topic of lively debate (see Lyon, 2007). From an STS viewpoint, these debates are a cornerstone in the construction of security, threats and new surveillance technologies and, more generally speaking, the co-production of science and social order (Jasanoff, 2004). Media reports, policy briefings, commentary pieces and other verbal texts provide accessible and highly valuable resources for the analysis of sociotechnical imaginaries. Sociotechnical imaginaries can be defined as "collectively held, institutionally stabilized, and publicly performed visions of desirable futures, animated by shared understandings of forms of social life and social order, attainable through, and supportive of, advances in science and technology" (Jasanoff, 2015, p. 4). The linguistic framings and symbolic elements in documents and other verbal forms of representation are a crucial element in the (re)production of sociotechnical imaginaries (STS Research Platform, 2018). A close study of the coverage on the trial run at Berlin-Südkreuz gives an insight into how science and technology can spark different associations and responses while invoking and challenging different visions of (un)desirable futures. While this explorative study does not aim to identify specific imaginaries, it provides a first exploration of how this project is linguistically framed and envisioned in and through media discourse. The goal of this paper is to provide some insight into the emerging social, technical and political realities of surveillance technology, complex power relations and their representation through language.

The underlying assumption is that in discourse, objects are not represented but systematically produced. In the sociological terms of SKAD (the Sociology of Knowledge Approach to Discourse), discourses are bodies of knowledge that form patterns of interpretation and action. Sharing knowledge through discourse shapes interpretations and our everyday practices: some might agree with the application of surveillance technology, others might engage in protest. What is included in and what is excluded from discourse becomes important. Which voices are powerful and can be heard, and which cannot? How are truth-claims and discursive identities constructed? (Schneider, 2013a). This paper draws on the analysis of communication around the controversy at hand to illuminate some of these questions. It is compelling to study the often quite creative linguistic frames and rhetorical features, the carefully filigreed representations of reality. In this regard, newspapers and other media outlets are an important discursive domain, shaping patterns of interpretation and action (Schneider, 2013b).

This critical discourse analysis follows a text-based approach, drawing on media coverage of the policy discourse around the adaption of algorithmic surveillance technology at Berlin-Südkreuz.2 All materials are online publications from February 2017 to November 2018 in German language. In this period the pilot was first announced to the public, the year-long project was carried out and finally, in the fall of 2018, the results were published. This analysis codes the linguistic representation of (1) automated surveillance technology, (2) the relationship between the state and the public and (3) the problematisations associated with both. This exploratory discourse analysis draws on thirty-one news articles, commentary pieces and blog posts from a variety of national, regional and online-only outlets. These sources include more critical stances towards the issue (Süddeutsche Zeitung, Spiegel Online, Netzpolitik.org, Deutsche Welle (DW), Zeit Online), a comparatively moderate position (Berliner Zeitung, Der Tagesspiegel, Morgenpost, Welt) and four with a popular scientific focus (Spektrum, Heise Online, Computer Bild, Wissen.de). Additionally, the analysis included posts on blogs that are more or less loosely centred around the topics of data protection, privacy and security (datenschutz notizen, Datenschutzbeauftragter INFO, digitalcourage.org, IT-Security@Work, law blog, TEXperimenTales) and contributions by online outlets with a focus on digital technologies (tarnkappe.info, Gründerszene). Other articles were published in regional publications (Märkische Allgemeine Zeitung, QIEZ). These sources were selected with the assumption that they each might present the Südkreuz-project in different ways and with different foci. A blog on information security might offer a different perspective than a regional newspaper. Some outlets heavily covered the unfolding of the pilot project and were included in the analysis with more than one text. Although the analysis spans a variety of outlets, the results show that the pilot was generally critically portrayed and represented in a similar fashion. Although the selected sources only represent a small proportion of the many news reports, feature articles, editorials, columns, opinion pieces and blog posts that were published on this topic, they offer an insight into the linguistic framings that characterised the discourse. Thus, this first explorative study offers a baseline for further investigations into the controversy around the Berlin-Südkreuz pilot project.
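Purely to illustrate the coding step described above, the short Python sketch below shows how codings along the three dimensions could be tabulated per outlet. The outlet names are taken from the corpus listed above, but the code labels are invented for the purpose of illustration and are not findings of this analysis.

from collections import Counter

# Hypothetical example codings: (outlet, technology framing,
# state-public relationship, problematisation). These labels are
# illustrative placeholders, not results of the study.
codings = [
    ("Spiegel Online", "dystopian", "state as observer", "loss of anonymity"),
    ("Netzpolitik.org", "error-prone", "state as data collector", "false positives"),
    ("Heise Online", "experimental", "state as tester", "legal uncertainty"),
]

# Tally how often each framing of the technology occurs across the corpus.
technology_framings = Counter(framing for _, framing, _, _ in codings)
for framing, count in technology_framings.most_common():
    print(f"{framing}: {count}")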

Surveillance technology and the state

In Germany, as in other countries, the government is the driving force behind the adoption and development of surveillance technologies. Advancements in automated or “smart” surveillance technologies are still recent; thus, no common term has been established. This is partly due to the many new applications, e.g., the prediction of criminal behaviour or traffic jams and facial recognition, and the move from local databases to networked systems (Galič et al., 2016; Roßnagel et al., 2011). The terms commonly applied in this context include “smart CCTV”, “second generation CCTV” and “algorithmic surveillance” (Musik, 2011). I will use the term “algorithmic surveillance”, as it best captures the nature of these systems, which use algorithms to interpret, combine and aggregate data.

The Ministry of the Interior, together with the federal and state police forces, is responsible for the protection of internal security and the provision of policing. Surveillance technologies tend to be justified as resources that enable the state to live up to its responsibility to provide security, that is, to prevent or reduce harm. In Germany, surveillance tools are increasingly developed and adopted as policing tools. The German Ministry of Education and Research is investing heavily in their development, as are various federal policing institutions across Germany, which run in-house research projects (Möllers & Hälterlein, 2013, p. 60). Additionally, the EU research projects P-REACT and INDECT explore how surveillance systems may be employed to detect criminal activity (European Commission, 2016; European Commission, 2017).

The state is expanding the legal framework to enable algorithmic surveillance. The adoption of biometric databases through the ‘e-Pass’ was a first step toward the large-scale acquisition of biometric data (Oepen, 2013). Since May 2017, federal and state security agencies have been able to access the database (Reuter, 2017a). In March 2017, a law (“Videoüberwachungsverbesserungsgesetz”) was passed to extend the deployment of video surveillance and the possibilities for its usage and transmission (Reuter, 2017a).

Nonetheless, the algorithmic surveillance software at Berlin-Südkreuz is most probably, if not certainly, in conflict with the current legal framework (Reinsch, 2017). Under German law, individuals are granted the right to informational self-determination, which refers to “the capacity of the individual to determine in principle the disclosure and use of his/her personal data” (BVerfGE 65, 1). This ruling is the “constitutional anchor for data protection” (Hornung & Schnabel, 2009, p. 4) and internationally unparalleled.

Nonetheless, the infrastructure for larger-scale public surveillance is being expanded. In Germany, 900 train stations are already equipped with about 6,000 CCTV cameras (Deutscher Bundestag, 2019). The pilot project at Berlin-Südkreuz, which I will outline in the next section, is aimed at exploring the capabilities of the newest technological options (Stöcker, 2017).

A panacea? The pilot project at Berlin-Südkreuz

Berlin-Südkreuz, located just south of the German capital’s city centre, connects local commuter rail to regional and long-distance trains. During a year-long trial from August 2017 to July 2018, algorithmic surveillance software by three different manufacturers was added to the CCTV already in place (Bundesministerium des Innern, 2018; Morgenpost, 2017). During this first trial period, each software's facial recognition features were tested to determine whether algorithmic surveillance should be adopted permanently. The second stage of the trial commenced in the summer of 2019. Targeted towards additional applications, phase two included the detection of abandoned objects and dangerous situations, such as acts of violence and individuals in distress (Borchers, 2017; Lobe, 2017, p. 2).

The project was initiated by the Federal Police, the national railway company Deutsche Bahn, the Ministry of the Interior and the Federal Criminal Police Office. Thomas de Maizière, the former German Minister of the Interior, in particular pushed for the implementation of the project (Käppner, 2017). At Südkreuz, three different areas were marked with blue stickers and signs to inform passers-by about the employed software. One camera was pointed at an entranceway, another at an escalator and the third at an exit (Morgenpost, 2017). With each, a different software application was tested. The Ministry of the Interior first declined to disclose the manufacturers but then announced that the software applications employed were supplied by the multinational corporation Dell, the much smaller German security provider ELBEX and another German software company, L-1 Identity Solutions AG (Kurz, 2017).

Figure 1: An area in front of an escalator at the train station Berlin-Südkreuz is separated into two sections: passers-by on the right-hand side are captured by automatic face recognition (blue decal), or they can elect to stay on the left and opt out (white decal). (Suthorn, 2017, CC BY-SA 4.0).

Facial recognition applications can identify a person from digital images or video material. Generally, there are two approaches. The first maps facial features, or landmarks (e.g., jaw, eyes, nose), analyses them in relation to each other and compares the result against stored images for a match. The second approach calculates the “essence” of a face as a numerical representation; because this value differs for each individual, faces become comparable (see Galbally et al., 2014; Gates, 2011).
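
To make the second, template-based approach more concrete, the following minimal sketch compares a probe face "essence" against a database of stored templates by distance. It is purely illustrative: real deployments derive embeddings from trained neural networks and use calibrated thresholds, and none of the names, dimensions or values below come from the Südkreuz systems.

```python
import numpy as np

# Illustrative toy example: templates are numerical "essences" of faces.
# In practice these embeddings come from a trained model; here they are
# synthetic vectors, and the threshold is an arbitrary placeholder.

def match_face(probe, database, threshold=1.0):
    """Return (identity, distance) of the closest template below threshold,
    or (None, distance) if no stored face is close enough."""
    best_id, best_dist = None, float("inf")
    for identity, template in database.items():
        dist = float(np.linalg.norm(probe - template))  # Euclidean distance
        if dist < best_dist:
            best_id, best_dist = identity, dist
    return (best_id, best_dist) if best_dist < threshold else (None, best_dist)

rng = np.random.default_rng(seed=0)
# Hypothetical 128-dimensional templates for three enrolled volunteers
database = {f"volunteer_{i}": rng.normal(size=128) for i in range(1, 4)}

# A probe image of volunteer_1, slightly perturbed to mimic capture noise
probe = database["volunteer_1"] + rng.normal(scale=0.03, size=128)
print(match_face(probe, database))     # -> ('volunteer_1', small distance)

# A passer-by who is not enrolled falls above the threshold
stranger = rng.normal(size=128)
print(match_face(stranger, database))  # -> (None, large distance)
```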

Three hundred volunteers were recruited to test the different products (Käppner, 2017). A template was extracted from each participant’s photograph to build a database (Lobe, 2017). Each volunteer carried a location-tracking transponder, which helped to establish whether the employed software successfully picked up the individual passing through and matched them against the database. For their cooperation, each participant was compensated with a 25 euro Amazon gift card. The individuals who passed through most often were incentivised with additional prizes (e.g., Apple Watches). The selection of incentives sparked some controversy (Horchert, 2017).
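
Because each volunteer carried a transponder, the recorded passes can serve as ground truth against which the software’s recognition events are scored. The sketch below illustrates this kind of evaluation in principle; the data format, the time window and the numbers are assumptions made for illustration and do not reflect the pilot’s actual logging.

```python
from datetime import datetime, timedelta

# Hypothetical evaluation sketch: a ground-truth pass (from the transponder
# log) counts as a hit if the recognition system reported the same volunteer
# within a short time window around that pass. The window is an assumed value.

def hit_rate(ground_truth, detections, window=timedelta(seconds=30)):
    """ground_truth and detections are lists of (volunteer_id, timestamp)."""
    hits = sum(
        1 for vid, passed_at in ground_truth
        if any(d_vid == vid and abs(d_time - passed_at) <= window
               for d_vid, d_time in detections)
    )
    return hits / len(ground_truth) if ground_truth else 0.0

t0 = datetime(2017, 9, 1, 8, 0)
transponder_log = [("v01", t0),
                   ("v02", t0 + timedelta(minutes=5)),
                   ("v03", t0 + timedelta(minutes=9))]
camera_matches = [("v01", t0 + timedelta(seconds=4)),
                  ("v03", t0 + timedelta(minutes=9, seconds=12))]

print(f"hit rate: {hit_rate(transponder_log, camera_matches):.0%}")  # 67%
```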

In this context, it is noteworthy that identifying specific individuals within a crowd always requires a reference group against which they are matched. The distinction between participants and non-participants is therefore precarious: essentially every individual who passes through, volunteer or not, is picked up by the cameras and thus becomes a participant. Moreover, questions of informed consent emerged shortly after the project was rolled out. As it turned out, the volunteers were not informed about the scope of data that the transponders could collect, which included not only location but also other factors, e.g., speed and temperature (Kühl, 2017).

The goal of the project was to test whether state-of-the-art algorithmic surveillance software works efficiently. In the long run, the idea is to employ systems that spot people in distress, abandoned and potentially dangerous objects, and the suspicious behaviour of potential criminals (Bundesministerium des Innern, 2017; 2018). As for this specific pilot project, the Ministry of the Interior did not specify beforehand what would constitute “efficiency” and thus a successful pilot (Reuter, 2017b). In the end, the Ministry of the Interior deemed the 2017 pilot successful (Bundesministerium des Innern, 2018). According to the official test report, the employed systems identified participants with an accuracy of 80% (Bundesministerium des Innern, 2018). The Ministry’s claim sparked widespread criticism, as the accuracy rates of the individual software products employed during the trial’s first phase ranged between a meagre 12% and 65.8%. Only the combination of the three different systems produced higher accuracy rates (Chaos Computer Club, 2018). Despite the controversy, the Ministry of the Interior proceeded with the second phase of the Berlin-Südkreuz pilot project in 2019 (Vogt, 2019). In January 2020, the Ministry of the Interior announced that although the results of the pilot project seemed promising, facial recognition software would not immediately be adopted at German train stations and airports. Instead, the Ministry made plans to expand video surveillance technology (CCTV) at train stations and in other public gathering spaces (Tagesschau, 2020). Although this turn of events does not indicate a significant change of policy agenda, the Ministry’s hesitation towards the implementation of facial recognition software might be a response to the widespread public criticism. In the next section, I will give an insight into the media coverage that the controversial trial’s first phase sparked.
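
One aspect of this criticism becomes clearer with a back-of-the-envelope calculation: if a passer-by is flagged whenever any one of the three systems reports a match (a logical OR), the combined hit rate can look far better than each individual system, but false accepts combine in the same way. The sketch below illustrates this under a strong independence assumption; only the 65.8% and 12% figures come from the reporting, while the third hit rate and the false-accept rates are hypothetical placeholders.

```python
# Minimal sketch, assuming statistically independent systems combined with a
# logical OR ("flag if any system matches"). Only 0.658 and 0.12 are reported
# figures; the third hit rate and all false-accept rates are made up for
# illustration.

def combined_or(rates):
    """Probability that at least one of several independent detectors fires."""
    miss_all = 1.0
    for r in rates:
        miss_all *= (1.0 - r)
    return 1.0 - miss_all

hit_rates = [0.658, 0.12, 0.40]            # third value assumed
false_accept_rates = [0.003, 0.01, 0.005]  # hypothetical per-passer-by rates

print(f"combined hit rate:          {combined_or(hit_rates):.1%}")           # ~81.9%
print(f"combined false-accept rate: {combined_or(false_accept_rates):.1%}")  # ~1.8%
```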

Discourse analysis

First, I will show how different authors present the project at Berlin-Südkreuz and point out the linguistic and rhetorical features, taking a close look at how they convey truth-claims and how they present power structures. For a better overview, I have structured this section according to the coding categories: (1) the relationship between the public and the state, (2) the representation of automated surveillance technology, and (3) the problematisations associated with both.

Discursive identities: the public and the state

First, the identities that are constructed in and through media discourse are quite insightful. A Süddeutsche Zeitung title reads “they see us” (Moorstedt, 2017). A Berliner Zeitung author alludes to the opacity of the algorithmic surveillance employed, calling the project “trials […] in hiding” (Neumann, 2017). Other headlines read suggestively “police seeking volunteers for total surveillance” (Poschmann, 2017) and “go ahead, scan me” (Rabenstein, 2017). One author proclaims that the pilot project marks a “high point of audacity in the relationship between the German state and its citizens” and adds “he [Thomas de Maizière] must not get through with this” (Stöcker, 2017). In Süddeutsche Zeitung, Käppner refers to a “technology of control” (Käppner, 2017), while many others allude to the "surveillance state" (Reuter, 2017a; Stürzl, 2018), playing on similar framings of the state-citizen relationship.

A distinct boundary is drawn between the protagonists: those under surveillance ("us"), presumably the public or citizens, and those who are in control, the authorities or "they" (e.g., Hermes, 2017; Stürzl, 2018). Although subtler, "technology of control" likewise implies that there is one party in control and one that is being controlled (Käppner, 2017). These linguistic acts construct two discursive identities. This is referred to as antagonism, constituting an opposing, even hostile, relationship between two subjects. Each subject is attributed a specific identity, with one determinately dominating the other (Fontanille, 2006). In critical discourse analysis (CDA), these instances are also referred to as oppositions, as in the creation of opposition through linguistic frames (Evans, 2013).

Across the articles and blog posts, it is difficult to pinpoint the exact agency of the antagonist(s). It remains particularly unclear who “they” are, presumably because responsibility is distributed indistinctly across different institutions. Thus, authors variously refer to the state, the Ministry of the Interior, the Federal Criminal Police Office and/or Deutsche Bahn (Lobe, 2017; Moorstedt, 2017; Morgenpost, 2017; Stöcker, 2017). The opposition will appeal to readers, who will most likely feel drawn to identify with the protagonist “us”, the public, the citizens. The proclamation “he [Thomas de Maizière] must not get through with this” is an appeal for solidarity, a call for collective action (Stöcker, 2017). These antagonisms, as a linguistic twist, imply asymmetrical power relations and create opposition through language.

The unobservable observer

A prominent aspect of linguistic representation is the variety of terms used to describe the technology employed at Südkreuz. I therefore examined naming, the analysis of nouns as the “units of language that name things in the world” (Evans, 2013). Through naming, existence is presupposed: if we call something a “technology of control” (Käppner, 2017), we presuppose that it exists (Evans, 2013).

Naming varies, often within the same text, from “cameras” (Antonia, 2017; Horchert, 2017; Kühl, 2017; Moorstedt, 2017) to “the system” (Morgenpost, 2017; Stöcker, 2017; Wissen.de, 2018), “a computer” (Moorstedt, 2017; Morgenpost, 2017), “future technology” (Schmiechen, 2017) or simply “software” (Dr. Datenschutz, 2018; Hummel, 2017; Morgenpost, 2017; Rieblinger, 2017). It is also insightful to consider the attributes that the authors assign to the employed technology. Adjectives range from “magical” (Stöcker, 2017), which mystifies the technology, to "relentless" (Lobe, 2017), "weapon-grade" (Moorstedt, 2017) and “totalitarian” (Schmidt, 2017), which convey that algorithmic surveillance poses a threat. “Staring” (Moorstedt, 2017), “face- and behaviour-scanner” (Reuter, 2017b), “autonomous” (Breyton, 2017), “intimidating” (Law Blog, 2017), “scrutinising” (Simon, 2017) and “all-seeing and always alert” (Stöcker, 2017) convey the Orwellian dystopia of pervasive systems that exercise discipline and control. Käppner reminds the reader that “The Thousand Eyes of Dr. Mabuse” might become a reality (Käppner, 2017). To the reader, this may sound like a warning. “Intelligent” (Borchers, 2017; Conrad, 2017; Horchert, 2017; Kurpjuweit, 2017; Lobe, 2017; Moorstedt, 2017), on the other hand, is an adjective often employed in this context to communicate the innovative nature of the system: in this case, a system that does not only collect but also interprets, combines and aggregates data. Ultimately, these adjectives do not necessarily draw a positive picture of the employed technology. Their ideological potency is striking, especially considering that the authors seem to struggle to find a suitable term to capture the employed technology.

In fact, a lack of fitting terminology is characteristic of autonomous systems. They can hardly be captured in words, as the technology disappears from the front end (cameras, control rooms) into the back end (algorithms) (see Galič et al., 2016; Roßnagel et al., 2011). Presumably, the many different applications and functions of automated surveillance technology add to these difficulties: there are software applications for motion analysis, facial recognition, object tracking, classification and prediction. Referring to "the system" or "intelligent software" is a way to linguistically capture these facets. There are also attempts to put the material hardware components into words, referring to what we can observe: "intelligent cameras" (Moorstedt, 2017; Poschmann, 2017; Stürzl, 2018) or “computers” (Moorstedt, 2017; Morgenpost, 2017).

Another linguistic twist in this context is personification, a "metaphorical representation, common to literary texts, whereby nonhuman objects are ascribed human attributes or qualities" (Baker & Ellece, 2011, p. 60). Examples include the observation that “systems are not faultless but they can learn at a frightening speed” (Moorstedt, 2017), that there are now “objects that stare at us” (Moorstedt, 2017), and the image of an “all-seeing, always alert digital guard” (Stöcker, 2017). With the trend towards algorithmic surveillance, the technological focus shifts away from cameras and their human counterparts in the control room. What can be grasped under the term algorithmic surveillance describes the move towards autonomous, computer-based surveillance, where algorithms take over the formerly human task of analysis and interpretation (see Norris & Armstrong, 1999). The “unobservable observer” is characterised by subtle front ends and black-boxed algorithms. Those who come into contact with the system can hardly make sense of the technology. The diffusion and automation of surveillance technology, and with it a sense of mystification and alienation, is communicated through language. The employed adjectives and personifications leave the impression that the technology has assumed agency; control over these surveillance systems seems like an illusion, conveying a sense of urgency.

Problematisations: discipline and control

The ubiquitous, intangible nature of the surveillance systems in question could be one reason for the speculative register in which this discourse is held. The discourse is characterised by modalities, which do not necessarily refer to reality but to contingencies or possibilities. They express information "about what could be or must be the case, as opposed to being about what actually is the case" (Swanson, 2008, p. 1193).

One fear is central to the debate and appears frequently throughout the media coverage: the transfer of discipline and control to an automated process. Most authors at least touch upon the (in)capability of algorithms to classify facial expressions, movements and interactions, and to enable authorities to exercise discipline and control based on these interpretations, which is commonly referred to as predictive policing (Perry et al., 2013). Süddeutsche Zeitung author Moorstedt questions the capabilities of a computerised interpretation of our world. The author remarks, “a hug in front of an ICE3 that is almost leaving the station could look like a brawl to the computer. Those who run on the platform, trying to catch the train, will possibly be marked as on the run” (Moorstedt, 2017). In a blog post, one author calls for putting a stop to a trial that turns Berlin-Südkreuz into a “bewilderment train station” (Demuth, 2017). In Spiegel Online, the author speculates about the emergence of “a magic system of artificial intelligence and real-time data collection, which one day will predict who will do evil next” (Stöcker, 2017). The author refers to predictive policing, the algorithmic capability to detect and predict potential criminal activity. In the Süddeutsche Zeitung article, the fear of predictive policing through algorithms is expressed through rhetorical questions, which add dramatic quality and emotionally engage the reader: “What will life look like in times of intelligent cameras, where one is not only always watched but also always evaluated?” (Moorstedt, 2017). The author answers promptly: “One ought to behave as unsuspicious as possible” (Moorstedt, 2017). This rhetorical twist raises the reader's curiosity; the answer is phrased like an ominous wake-up call. Playing along similar lines, the Süddeutsche Zeitung reader is reminded that “everyone is initially suspicious” (Kühl, 2017). Some interpretations go even further: “Algorithmic pattern recognition raises the question of who defines criminality and if police power is impermissibly delegated to machines” (Lobe, 2017). The author suggests that algorithms could define criminality, traditionally a responsibility of the judiciary, which interprets the law, or of the legislature that passes it. "Interpretation of criminality" could also refer to a situational interpretation of the legitimacy of acts, an executive task. Interestingly, the author speaks about the delegation of "police power" instead of sheer police work, which would be a more fitting term for merely interpretative algorithmic tasks. Accordingly, the algorithm is not only staged as a computerised process of police supervision. The authors convey that algorithms could not only be used to support law enforcement but could ultimately become law enforcement. This is carried to the extreme, evoking Kafkaesque and Orwellian dystopias and the proclamation that "dystopia threatens to become reality" (Moorstedt, 2017).

Some of the headlines read “Orwell and Kafka meet at the train station” (Stöcker, 2017) and “Big Brother at the train station” (Morgenpost, 2017). Along the same lines, one author asserts that "Big Brother is installed at the train station" (Prantl, 2017). In Morgenpost, the totalitarian visions are phrased more subtly: regarding the recent expansion of surveillance technologies in Germany, the reader is soberly reminded that "facial recognition software already opens up unforeseen opportunities in many dictatorships" (Morgenpost, 2017). These linguistic frames, suggesting dystopian visions in which those in control use algorithmic surveillance to exercise totalitarian control, privilege one understanding of reality over another. The reader is left with unsettling speculations about a future of algorithmic discipline and control.

In these articles, value judgements elicit emotion, while the authors speculate in modalities about the possibilities of the technology employed at Berlin-Südkreuz. The oppositions convey asymmetrical power relations: there is one party that is controlled and one that exercises control.

The various terms applied in this context attempt to capture the pervasive, diffuse nature of algorithmic surveillance. The added adjectives convey associations of autonomous, threatening technology. The employed personifications add to this picture: the technology has seemingly assumed agency. The problematisations, mainly expressed through modalities, point to uncertainties about the future. The main themes are speculations about predictive policing and the capacity of algorithms to interpret behaviour appropriately, along with worries that it will become necessary to anticipate behaviour correctly so as not to raise suspicion. This is further escalated into visions of algorithmic law enforcement and dystopian futures.

This analysis can give us some insight into the arguments, or truth-claims, that are put forward in this context. The critical tone that I found, in varying degrees, throughout all articles and blog posts does not, however, imply that there is societal opposition to the adoption of automated surveillance technology; it merely gives us a glimpse into some discursive frames, wider social practices and the negotiation processes that the pilot project spurred. The next section details how (post-)panoptic theory can be utilised to illuminate the topic of algorithmic surveillance technology.

Moving beyond the panopticon

In the following paragraphs, I want to situate this case, and algorithmic surveillance more generally, within post-panoptic social theory, drawing on the conceptual threads that Shoshana Zuboff (1988) derived from her empirical work. To this end, I will briefly retrace the panoptic journey from its origins to post-panoptic theory.

The headlines suggest how influential different conceptualisations and ideas of surveillance are in this discourse. Kafka and Orwell would certainly be astonished to see recent developments in surveillance technology. In scholarly discourse, two other names, Bentham and Foucault, still shape how scholars think about and conceptualise surveillance technology today. Bentham and his ideas on the architectural implementation of surveillance can be regarded as a starting point for surveillance studies. Bentham’s younger brother first conceived of the Panopticon, a circular prison building with a large control tower in the central yard. Storeys of prison cells line the rounded walls. Occupants cannot see each other, as they are divided by walls, yet they can always be watched from within the control tower. The central tower is equipped with lights that prevent the occupants from knowing whether or not they are being watched (Galič et al., 2016, pp. 12-13).

This idea of spatial, passive control was later theoretically refined by Foucault in Discipline and Punish (Foucault, 1995). He used the Panopticon as a metaphor to analyse mechanisms of social control and their relation to power and knowledge. Foucault notes how the Panopticon allows power to become anonymous, as occupants can be efficiently controlled without necessarily being watched. Those "subjected to a field of visibility […] simultaneously play both roles"; they become the principle of their own subjection (Foucault, 1995, pp. 202-203). With the emergence of the internet, surveillance lost the Panopticon's physical and spatial characteristics. Surveillance has turned into a networked part of the infrastructure. The physical, if hypothetical, prison guard becomes abstract; the metaphor flawed.

Many scholars have made important contributions to the study of contemporary distributed forms of surveillance. Noteworthy theoretical frameworks come from Deleuze, Kallinikos and Zuboff, among others (Deleuze, 1992; Kallinikos, 2004, 2007; Zuboff, 1988). These authors, however, each take a different approach to moving beyond the panopticon.

In her study In the Age of the Smart Machine, Zuboff makes an astonishing empirical and theoretical contribution to the study of surveillance as a means of managerial control. Zuboff (1988) studied the transformation of blue- and white-collar work through the application of information technology within corporations. Remarkably, her ideas are still relevant today, some three decades later. Yet many of Zuboff’s conceptualisations need to be adapted if we want to think about algorithmic surveillance, which in many ways extends far beyond the domains of her studies. Zuboff (1988) considers the rationale behind the adoption of surveillance within an organisation. She remarks that the burden of authority created “the yearning for omniscience in the face of uncertainty, the conformity-inducing power of involuntary display” (Zuboff, 1988, p. 324). Correspondingly, the narrative of increasing uncertainty in times of globalised threat seems to be a key motivator for the adoption of surveillance technologies like the one deployed at Berlin-Südkreuz. Of course, Zuboff made this observation with reference to the exertion of managerial control under the uncertainty of process optimisation. The scale and context are different, yet the prospect of regaining control might still appeal to authorities.

She also invokes the panoptic schema, which she describes as “mechanisms or instruments that render visible, record, differentiate and compare […] whenever one is dealing with a multiplicity of individuals on whom […] a particular form or behaviour must be imposed” (Zuboff, 1988, p. 322). In a corporate setting, employees can presumably distinguish between desired and undesired behaviours and adapt accordingly. One of the goals of the pilot project at Berlin-Südkreuz is behaviour modification, deterring unwanted behaviours. Yet, as the media discourse illustrates, anticipating what constitutes unwanted behaviour and how the algorithm would draw these boundaries raises concern.

The discipline that surveillance imposes upon the individual has, since Zuboff’s studies in the 1980s, left the factory premises. Algorithmic surveillance is networked and no longer limited to a certain space or specific organisational boundaries. The diffusion of the internet changes spatial dynamics and infrastructures. Even the facial recognition software deployed at Südkreuz does not generate and interpret data within clear boundaries: every passer-by is, if only for a short moment, registered in the search for a match with the database. Those who are not content with surveillance in the workplace can, as a last resort, resign. With surveillance technology becoming intertwined with the infrastructure of our everyday lives, simply opting out is not an option. Algorithmic surveillance pertains to all areas of life, with surveillance extending out into the public sphere.

In Zuboff’s (1988) study, foremen were watching their workers, and different managerial levels were using the data to check on the levels below them. Zuboff advocated for horizontal visibility as vertical visibility expands, granting data access to those on the same organisational level (Zuboff, 1988, p. 350). Yet there is no horizontal visibility in the pilot project at Berlin-Südkreuz. Algorithmic surveillance produces the "unobservable observer". Unlike other products of digitalisation, e.g., mobile applications, there is no accessible front end, no window into the system that enables the user to make sense of it. In this context, one could take a post-panoptic stance and argue that the diffusion of the internet works both ways: the extensive online media coverage shows that the many [publics] are watching the few [e.g., state authorities] just as much as the few are watching the many. Boyne (2000) makes this point in his piece Post-Panopticism, in which he attempts to redress panopticism. This argument holds some merit. However, the reluctance of those responsible for the pilot project to give out information illustrates that two-way visibility does not necessarily result in an eye-level relationship between the state and the public(s) (Kurz, 2017). Not only could everyone be unknowingly watched, it is also difficult to draw a boundary between those who are watching and those who are not. As large interoperable information infrastructures emerge, data is no longer context-bound. It can not only be accessed but can also leave its context, becoming aggregated and intertwined (Kallinikos, 2010). The project at Berlin-Südkreuz is the product of a cross-institutional, state-corporate partnership. The construction of the discursive identities, with the citizens as protagonists and differing ideas about who the antagonist is, is exemplary of the diffuseness and cross-contextuality that characterise contemporary algorithmic surveillance.

Ideally, managerial control in the relationship between the observer and the observed is mutually beneficial. The data generated through workplace surveillance could be used to assign promotions, bonuses or, failing that, coaching (Zuboff, 1988, p. 324). Algorithmic surveillance in public spaces benefits those who are being observed only hypothetically. The ease of moving around anonymously, in relative privacy in a public space, is certainly gone, while it remains questionable how algorithmic surveillance can prevent crime and thereby benefit both those in control and those being controlled by increasing security. London, for instance, has a very tight-knit surveillance infrastructure. Yet horrific attacks like the acid attack on 23 September 2017 keep happening (Sharman & Roberts, 2017). How could algorithmic control enable authorities to prevent crime? Zuboff (1988) observes this fundamental challenge as well. She notes that “the panopticon also enabled managers to see more of the processes and behaviours that affected their areas, without necessarily making it any easier to influence or control those events” (Zuboff, 1988, p. 348).

We need to critically question whether, and how, the technology-focused, top-down ideas of the Panopticon apply to contemporary surveillance technologies. They are hardly applicable to diffuse, automated, computerised systems. Plural agency, anticipatory functionalities and obscured spatial boundaries are just some of the developments that show that the conception of the monolithic Panopticon is not always productive. This case illustrates that post-panoptic theorists such as Zuboff (1988) can still provide us with some helpful conceptual lenses to consider contemporary algorithmic surveillance technology. The next challenge will be to find new ways to approach the emerging social lifeworld of what some already term “surveillance society” (Galič et al., 2016).

Conclusion

Despite the heated controversy that the first test run in 2017 sparked, another surveillance pilot commenced at Berlin-Südkreuz in the summer of 2019 (Bundespolizei, 2019). The 2019 trial run specifically tested algorithms that detect suspicious behaviour (Henning, 2019). The new project provoked media coverage similar to that of the project’s first phase (see Henning, 2019; Morgenpost, 2019; Vogt, 2019).

Amidst these developments, it is important to remember that the implementation of surveillance technology is a social practice. It is not only an issue of privacy but also an issue of democracy itself, and it pertains to the fundamental right to self-determination. All the social problems that this software ought to solve, from transnational corporate crime to violent acts, require social intervention. The discourse exhibits a sombre tone: the safety benefit is hypothetical, while the feeling of being surveilled is tangible. This goes to show that technology never exists in isolation; it is always embedded in the social world. Social processes, discourses as negotiation, are relevant to technological developments (MacKenzie & Wajcman, 1999, p. 23).

Finally, this small glimpse at the discourse on the pilot project at Berlin-Südkreuz, and the themes that dominate it, shows that valuable insights for future research and exploration can be gained from the study of discourse. This case study also provides a baseline against which future cases can be compared. For instance, it would be compelling to research how media portrayals change over time and vary across regions and nations. This discourse also offers a window onto underlying socio-technical imaginaries. To this end, it would be worthwhile to investigate how the media representation of this project compares with expert and policy discourses. A close look at the truth-claims that other actors put forward, e.g., the state or the manufacturers, can offer perspectives on the social construction and negotiation of the issue. This could give us valuable insight into the negotiation of the cultural, political and social conditions under which the next generation of surveillance technology is developed.

The technology at hand is one in the making; public discourse is not only important, it is a necessity. Technology must not be developed in the isolation of state research facilities and private corporations. Citizens must be granted input on questions that concern them so fundamentally. This controversial pilot project illustrates that it is crucial to take a substantive approach to questions of science and technology. A comprehensive participation process would add new knowledge and improve the quality of decisions.

References

Antonia. (2017, August 2). Bahnhof Südkreuz: Start frei für die Erprobung intelligenter Videotechnik zur Gesichtserkennung. Tarnkappe. https://tarnkappe.info/bahnhof-suedkreuz-start-frei-fuer-die-erprobung-intelligenter-videotechnik-zur-gesichtserkennung/

Baker, P., & Ellece, S. (2011). Key Terms in Discourse Analysis. London: A&C Black.

Binder, C. (2016). Science, Technology and Security: Discovering Intersections between STS and Security Studies. EASST Review, 35(4). https://easst.net/article/science-technology-and-security-discovering-intersections-between-sts-and-security-studies/

Borchers, D. (2017, February 23). Europäischer Polizeikongress: Intelligente Videoanalyse für mehr Sicherheit. heise online. https://www.heise.de/newsticker/meldung/Europaeischer-Polizeikongress-Intelligente-Videoanalyse-fuer-mehr-Sicherheit-3633397.html

Boyne, R. (2000). Post-Panopticism. Economy and Society, 29(2), 285–307. https://doi.org/10.1080/030851400360505

Breyton, R. (2017, March 7). Der schmale Grat zwischen Terrorabwehr und Überwachung. Welt. https://www.welt.de/politik/deutschland/article162735087/Der-schmale-Grat-zwischen-Terrorabwehr-und-Ueberwachung.html

Bundesministerium des Innern. (2017, August 1). Sicherheitsbahnhof Berlin Südkreuz [Press release]. http://www.bmi.bund.de/SharedDocs/Pressemitteilungen/DE/2017/08/gesichtserkennungstechnik-bahnhof-suedkreuz.html

Bundesministerium des Innern. (2018, October 11). Projekt zur Gesichtserkennung erfolgreich [Press release]. https://www.bmi.bund.de/SharedDocs/pressemitteilungen/DE/2018/10/gesichtserkennung-suedkreuz.html

Bundespolizei. (2019, June 7). Test intelligenter Videoanalyse-Technik. https://www.bundespolizei.de/Web/DE/04Aktuelles/01Meldungen/2019/06/190607_videoanalyse.html

BVerfG, Urteil vom 15.12.1983, 1 BvR 209/83 u. a. - Volkszählungsurteil, NJW 1984, 419 http://sorminiserv.unibe.ch:8080/tools/ainfo.exe?Command=ShowInfo&Name=bv065001

Chaos Computer Club. (2018, October 13). Biometrische Videoüberwachung: Der Südkreuz-Versuch war kein Erfolg. https://www.ccc.de/en/updates/2018/debakel-am-suedkreuz

Conrad, C. (2017, August 28). Stopp der Gesichtserkennung am Bahnhof Südkreuz gefordert – wie steht es um das Pilotprojekt?. https://www.datenschutz-notizen.de/stopp-der-gesichtserkennung-am-bahnhof-suedkreuz-gefordert-wie-steht-es-um-das-pilotprojekt-4318928/

Deutscher Bundestag. (2019, October 9). Antwort der Bundesregierung auf die Kleine Anfrage der Abgeordneten Lars Herrmann, Dr. Gottfried Curio, Martin Hess, weiterer Abgeordneter und der Fraktion der AfD, Drucksache 19/13848, 09.10.2019. http://dip21.bundestag.de/dip21/btd/19/138/1913848.pdf

Deleuze, G. (1992). Postscript on the Societies of Control. October, 59, 3–7. http://www.jstor.org/stable/778828

Demuth, K. (2017, November 30). Endstation – Bilder vom Protest am Bahnhof Berlin Südkreuz [Blog post]. https://digitalcourage.de/blog/2017/endstation-protest-suedkreuz

Dr. Datenschutz. (2018, July 23). Gesichtserkennung am Rande des Zulässigen oder schon darüber hinaus? [Blog post]. https://www.datenschutzbeauftragter-info.de/gesichtserkennung-am-rande-des-zulaessigen-oder-schon-darueber-hinaus/

European Commission. (2016, October 7). P-REACT Report Summary. http://cordis.europa.eu/result/rcn/189910_en.html

European Commission. (2017, May 25). INDECT. Intelligent Information System Supporting Observation, Searching and Detection for Security of Citizens in Urban Environment. http://cordis.europa.eu/project/rcn/89374_en.html

Evans, M. (2013, May 9). ‘The Author and the Princess'– An Example of Critical Discourse Analysis. http://www.languageinconflict.org/component/content/article/90-frontpage/145-the-author-and-the-princess-an-example-of-critical-discourse-analysis.html

Fontanille, J. (2006). The Semiotics of Discourse. Peter Lang.

Foucault, M. (1995). Discipline and punish: the birth of the prison. Vintage Books.

Galič, M., Timan, T., & Koops, B. J. (2016). Bentham, Deleuze and Beyond: An Overview of Surveillance Theories from the Panopticon to Participation. Philosophy & Technology, 30(1), 9–37. https://doi.org/10.1007/s13347-016-0219-1

Galbally, J., Marcel, S., & Fierrez, J. (2014). Image Quality Assessment for Fake Biometric Detection. Application to Iris, Fingerprint, and Face Recognition. IEEE Transactions on Image Processing, 23(2), 710–724. https://doi.org/10.1109/TIP.2013.2292332

Gates, K. A. (2011). Our biometric future: Facial recognition technology and the culture of surveillance. New York University Press.

Henning, M. (2019, June 19). Überwachung am Südkreuz soll jetzt Situationen und Verhalten scannen. Netzpolitik. https://netzpolitik.org/2019/ueberwachung-am-suedkreuz-soll-jetzt-situationen-und-verhalten-scannen

Hermes, J. (2017, December 16). Gesichtserkennung und Wirrungen des BMI. TEXperimenTales. https://texperimentales.hypotheses.org/2283

Horchert, J. (2017, August 1). Gesichtserkennung am Berliner Südkreuz. Bitte gehen Sie weiter. Hier werden Sie gesehen. Der Spiegel. http://www.spiegel.de/netzwelt/netzpolitik/gesichtserkennung-am-berliner-suedkreuz-ein-test-fuer-unsere-freiheit-a-1160867.html

Hornung, G., & Schnabel, C. (2009). Data Protection in Germany I: The Population Census Decision and the Right to Informational Self-Determination. Computer Law & Security Review, 25(1), 84–88. https://doi.org/10.1016/j.clsr.2008.11.002

Hummel, P. (2017, November 11). Die Tücken der Gesichtserkennung. Spektrum. https://www.spektrum.de/news/die-tuecken-der-gesichtserkennung/1521469

Jasanoff, S. (2004). Afterword. In S. Jasanoff (Ed.), States of Knowledge. The Co-production of Science and Social Order (pp. 274–282). Routledge.

Jasanoff, S. (2005). Designs on Nature: Science and Democracy in Europe and the United States. Princeton University Press.

Jasanoff, S. (2015). Future Imperfect: Science, Technology and the Imagination of Modernity. In S. Jasanoff & Kim, S. H. (Eds.), Dreamscapes of Modernity: Sociotechnical Imaginaries and the Fabrication of Power (pp. 1–33). University of Chicago Press.

Kallinikos, J. (2004). Deconstructing Information Packages: Organizational and Behavioural Implications of ERP Systems. Information Technology & People, 17(1), 8–30. https://doi.org/10.1108/09593840410522152

Kallinikos, J. (2007). The Consequences of Information: Institutional Implications of Technological Change. Edward Elgar Publishing.

Kallinikos, J. (2010). The “Age of Smart Machine”: A 21st Century View. In P. A. Laplante (Ed.), Encyclopedia of Software Engineering (Vol. 1, pp. 1097–1103). Auerbach Publications. https://www.taylorfrancis.com/books/e/9781351249270/chapters/10.1081/E-ESE-120044162

Käppner, J. (2017, September 15). Videoüberwachung. Süddeutsche Zeitung. http://www.sueddeutsche.de/leben/v-videoueberwachung-1.3656960

Kühl, E. (2017, August 24). Datenschützer fordern Abbruch des Pilotprojekts. Zeit. http://www.zeit.de/digital/datenschutz/2017-08/gesichtserkennung-berlin-suedkreuz-daten-transponder

Kurpjuweit, K. (2017, February 20). Bahn testet intelligente Videoüberwachung am Südkreuz. Tagesspiegel. https://www.tagesspiegel.de/berlin/berliner-bahnhof-bahn-testet-intelligente-videoueberwachung-am-suedkreuz/19413266.html

Kurz, C. (2017, August 1). Ortstermin am Südkreuz: Die Automatische Gesichtserkennung beginnt. Netzpolitik. https://netzpolitik.org/2017/ortstermin-am-suedkreuz-die-automatische-gesichtserkennung-beginnt/

Law Blog. (2017, August 1). Nicht hinnehmbares Gefühl des Überwachtwerdens [Blog post]. https://www.lawblog.de/index.php/archives/2017/08/01/nicht-hinnehmbares-gefuehl-des-ueberwachtwerdens/

Lobe, A. (2017, May 9). Lobes Digitalfabrik. Wir merken uns schon mal Ihr Gesicht. Spektrum. http://www.spektrum.de/kolumne/wir-merken-uns-schon-mal-ihr-gesicht/1456847

Lyon, D. (2007). Surveillance studies: An overview. Polity Press.

MacKenzie, D. A., & Wajcman, J. (1999). Introductory Essay: The Social Shaping of Technology. In D. MacKenzie & J. Wajcman (Eds.), The Social Shaping of Technology (pp. 3–27). Open University Press.

Möllers, N., & Hälterlein, J. (2013). Privacy Issues in Public Discourse: The Case of “smart” CCTV in Germany. Innovation: The European Journal of Social Science Research, 26(1-2), 57–70. https://doi.org/10.1080/13511610.2013.723396

Moorstedt, M. (2017, April 7). Sie Sehen Uns. Süddeutsche Zeitung. http://www.sueddeutsche.de/kultur/kuenstliche-intelligenz-sie-sehen-uns-1.3455674

Morgenpost. (2017, July 28). Gesichtserkennung: Big Brother im Bahnhof Berlin Südkreuz. https://www.morgenpost.de/bezirke/tempelhof-schoeneberg/article211395129/Im-Bahnhof-Suedkreuz-startet-Test-zur-Gesichtserkennung.html

Morgenpost. (2019, June 6). Videoüberwachung am Südkreuz startet wieder. https://www.morgenpost.de/berlin/article226216631/Videoueberwachung-am-Suedkreuz-startet-wieder.html

Musik, C. (2011). The thinking eye is only half the story: High-level semantic video surveillance. Information Polity, 16(4), 339–353. https://doi.org/10.3233/IP-2011-0252

Neumann, P. (2017, April 4). Testlauf Ab Herbst wird am Südkreuz Gesichtserkennung erprobt. https://archiv.berliner-zeitung.de/berlin/testlauf-ab-herbst-wird-am-suedkreuz-gesichtserkennung-erprobt-26247956

Norris, C. & Armstrong, G. (1999). The Maximum Surveillance Society: The Rise of CCTV. Berg Publishers.

OECD. (2004). The Security Economy. OECD Publishing.

Oepen, D. (2013, August). Transparenz und Datensparsamkeit von Elektronischen Ausweisdokumenten in Deutschland. In Arbeitsgruppe Informatik in Bildung und Gesellschaft (Ed.), Biometrische Identitäten und ihre Rolle in den Diskursen um Sicherheit und Grenzen. Tagung, 30.11/1.12.2012 (pp. 37–60). Humboldt-Universität zu Berlin.

Perry, W. L., McInnis, B., Price, C. C., Smith, S. C., & Hollywood, J. S. (2013). Predictive Policing: The Role of Crime Forecasting in Law Enforcement Operations. Rand Corporation. https://www.rand.org/content/dam/rand/pubs/research_reports/RR200/RR233/RAND_RR233.pdf

Phillips, N., & Hardy C. (2002). Discourse Analysis - Investigating Processes of Social Construction. Qualitative Research Methods. SAGE Publications.

Poschmann, A. (2017, June 20). Polizei sucht Freiwillige für Totalüberwachung. Computer Bild. https://www.computerbild.de/artikel/cb-News-Sicherheit-Polizei-sucht-Freiwillige-Totalueberwachung-18383625.html

Prantl, H. (2017, August 24). De Maizière hebt das Recht auf Anonymität auf. Süddeutsche Zeitung. https://www.sueddeutsche.de/digital/gesichtserkennung-de-maiziere-hebt-das-recht-auf-anonymitaet-auf-1.3639958

Rabenstein, A. (2017, June 23). Berlin-Südkreuz: Los, scanne mich. Märkische Allgemeine. https://www.maz-online.de/Brandenburg/Berlin-Suedkreuz-Los-scanne-mich

Reinsch, M. (2017, August 22). Gesichtserkennung am Südkreuz: Opposition befürchtet endgültiges Ende der Anonymität. Berliner Zeitung. https://archiv.berliner-zeitung.de/berlin/gesichtserkennung-am-suedkreuz-opposition-befuerchtet-endgueltiges-ende-der-anonymitaet-28208738

Reuter, M. (2017a, June 21). Dauerfeuer gegen das Grundgesetz – so treibt die große Koalition das Land in den Überwachungsstaat. Netzpolitik. https://netzpolitik.org/2017/dauerfeuer-gegen-das-grundgesetz-so-treibt-die-grosse-koalition-das-land-in-den-ueberwachungsstaat/

Reuter, M. (2017b, August 24). Bundesregierung: Test am Südkreuz wird auf jeden Fall ein Erfolg. Netzpolitik. https://netzpolitik.org/2017/bundesregierung-test-am-suedkreuz-wird-auf-jeden-fall-ein-erfolg/

Rieblinger, P. (2017, August 25). Überwachung: Der Pilotversuch Berlin Südkreuz. IT-Security@Work. https://www.isw-online.de/ueberwachung-der-pilotversuch-berlin-suedkreuz-2/

Roßnagel, A., Desoi, M., & Hornung, G. (2011). Gestufte Kontrolle bei Videoüberwachungsanlagen. Datenschutz und Datensicherheit-DuD, 35(10). https://doi.org/10.1007/s11623-011-0166-z

Roux, M. (2019, March 20). Face Recognition vs Face Detection: What’s the difference? https://sightcorp.com/blog/face-recognition-vs-face-detection-whats-the-difference/

Schmidt, F. (2017, August 24). Biometrische Gesichtserkennung macht Totalüberwachung möglich. DW. https://p.dw.com/p/2igNt

Schmiechen, F. (2017, August 28). Berliner Bahnhof Südkreuz: Ist Gesichtserkennung der Beginn der totalen Überwachung? Gründerszene. https://www.gruenderszene.de/allgemein/berlin-gesichtserkennung-kommentar

Schneider, F. (2013a, May 6). Introduction to Discourse Analysis [Video file]. https://www.youtube.com/watch?v=NpJhICZczUQ

Schneider, F. (2013b, May 13). How to Do a Discourse Analysis. PoliticsEastAsia. http://www.politicseastasia.com/studying/how-to-do-a-discourse-analysis/

Sharman, J., & Roberts, R. (2017, September 23). Stratford 'Acid Attack': Six People injured near Shopping Centre in East London. Independent. http://www.independent.co.uk/news/uk/crime/stratford-acid-attack-latest-updates-bus-station-incident-injured-police-a7963831.html

Simon, L. (2017, June 22). #SelfieStattAnalyse: Masken gegen Überwachung. Digitalcourage. https://digitalcourage.de/blog/2017/selfiestattanalyse-masken-gegen-ueberwachunung

Stöcker, C. (2017, August 25). Videoüberwachung am Südkreuz. Treffen sich Orwell und Kafka am Bahnhof. Der Spiegel. http://www.spiegel.de/netzwelt/netzpolitik/gesichtserkennung-am-suedkreuz-treffen-sich-orwell-und-kafka-am-bahnhof-a-1164578.html

STS Research Platform. (2018). Sociotechnical Imaginaries: Methodological Pointers. http://sts.hks.harvard.edu/research/platforms/imaginaries/ii.methods/methodological-pointers/

Stürzl, J. (2018, October 12). Mehr Sicherheit durch mehr Überwachung? Qiez. https://www.qiez.de/suedkreuz-ueberwachung-gesichtserkennung/

Suthorn, C. (2017, August 1). Face Recognition Field Test at Südkreuz 14 [Photograph]. Wikimedia Commons. https://commons.wikimedia.org/wiki/File:Face_Recognition_Field_Test_at_S%C3%BCdkreuz_14.jpg

Swanson, E. (2008). Modality in Language. Philosophy Compass, 3(6), 1193–1207. https://doi.org/10.1111/j.1747-9991.2008.00177.x

Tagesschau. (2020, January 24). Gesichtserkennung. Kameras ja, Software nein. https://www.tagesschau.de/inland/gesichtserkennung-bundespolizei-101.html

Tiles, M., & Oberdiek, H. (1995). Living in a Technological Culture: Human Tools and Human Values. Routledge. https://doi.org/10.4324/9780203980927

Verbeek, P.-P. (2011). Moralizing Technology: Understanding and Designing the Morality of Things. University of Chicago Press.

Vogt, S. (2019, June 9). Das Südkreuz wird wieder zum Drehort. Tagesspiegel. https://www.tagesspiegel.de/berlin/videoueberwachung-in-berlin-das-suedkreuz-wird-wieder-zum-drehort/24439112.html

Winner, L. (1980). Do Artifacts have Politics?. Daedalus, 109(1), 121–136. http://www.jstor.org/stable/20024652

Wissen.de. (2018, November 2). Hinterfragt: Wie gut funktioniert die Gesichtserkennung? wissen.de. https://www.wissen.de/hinterfragt-wie-gut-funktioniert-die-gesichtserkennung

Zuboff, S. (1988). In the Age of the Smart Machine: The Future of Work and Power. Basic Books.

Footnotes

1. All coverage analysed and referenced in this paper was published in German. Citations are the author's translations.

2. For a full compilation of all articles, see Appendix A.

3. German high-speed train.
