News and Research articles on Privacy & Security

Faustian bargain: privacy in time of pandemic in Lithuania


The Covid-19 pandemic is challenging public health, economic and social life across Europe. Yet aspiring authoritarians are living a dream: the pandemic is a perfect excuse to interpret basic rights at will.

This is the setup for the strong-handed government of Lithuania, led by a former national police commissioner, Prime Minister Saulius Skvernelis. With the ruling coalition trailing in the polls at the beginning of the year and a national election scheduled for later this year, the Covid-19 crisis is proving an opportunity to play into the darkest fears and embrace war powers with open arms, notably with little or no resistance from the stunted political opposition. Personal privacy, which is hard to appreciate and quantify even in peaceful times, is first on the chopping block.

Already before the pandemic struck, Lithuania had shown that privacy was not a priority. Since 2018 Lithuanian national policies and laws have gravitated towards weakening privacy in the name of managing geopolitical risks, while the data protection supervisory body was nudged to the sidelines by depriving it of expertise and budgetary means. Governmental abuses of privacy were shielded from investigation and publicity, even in cases of blatant violations. Just one recent example: in January 2020 the leaking by the Lithuanian police of very private investigation videos (sexual content) of minor crime suspects was downplayed and the investigation shut down pre-emptively, while the data protection supervisor went silent for the whole of February 2020 and pretended nothing had happened.

Privacy was not a concern for the government in the context of the Covid-19 pandemic in Lithuania. Despite protests (from legal scholars only, no politicians), the executive branch effectively sidelined the legislature by proclaiming a national emergency through governmental resolution, and allowed members of the government to publicise private information (geolocation and travel routes) on the first Covid-19 cases without any legal basis (official regulation). Mass media capitalised on this information and produced public visualisations of Covid-19 routes in Lithuania from which the identities of the infected are easy to extrapolate. International media picked up the story from there. The reaction of the Lithuanian data protection supervisor? Nowhere to be heard.

It is not clear whether the executive branch has already taken steps to electronically surveil Covid-19 cases or suspected carriers in Lithuania in real time, but based on examples elsewhere it can reasonably be assumed. As the first order of business after reconvening the legislature on 30 March, the Lithuanian government tabled legislation (amendments to the Law on Electronic Communications) allowing instant electronic surveillance (location and communication stream data) of private persons on the vague grounds of 'extraordinary situations', 'quarantine' or even 'a person's entrance into "unsafe territory"' - whatever that means. The legislation obliges communications service providers to turn such data over to the authorities free of charge upon first request. Judicial authorisation is not necessary; that, according to the government, would be a waste of time in war. Closely following is another piece of legislation imposing grave sanctions (fines of up to 6,000 euro and prison time) and empowering law enforcement to police it, including forced entry into dwellings and extrajudicial detention of surveilled persons. So far only a few MPs have voiced concerns, and the government quickly conceded that communication stream data may be excessive; yet the vague grounds for surveillance remain, and all of this legislation is expected to be adopted over the next few days.

The history of the 20th century teaches us what comes next once basic rights are sidelined. While extremely serious and unprecedented, a pandemic is not war. Compromises on privacy are needed, but they cannot be arbitrary or generalised. Limitations should be individual, limited to public health enforcement, and each case should be reviewed and monitored by proactive, independent supervisors. War rhetoric and sweeping unsupervised war powers are not the way.


Data protection in times of COVID-19: the risks of surveillance in Brazil


The legitimacy of government surveillance measures to fight the spread of COVID-19 has taken over the data protection debate in affected countries. Starting with China (the first epicentre of the pandemic and known for its institutionalised surveillance), the use of personal data to enforce containment policies (such as tracing contacts and spotting crowds) has also been reported in several other countries, raising concerns in the academic community.

In Brazil, the possibility of implementing similar practices first appeared as a technical discussion on mass and social media about the lawfulness of institutionalised surveillance practices designed to prepare and enforce isolation policies. The theoretical controversy has now escalated into concrete measures, as local and federal governments enact data access agreements with telco companies.

While data protection scholars have already signalled that the use of personal data to fight the pandemic is not necessarily illegitimate, there is a series of issues regarding the way these processes are being led by state authorities.

Surveillance initiatives in place

For the time being, a few worrying developments have been confirmed. In Rio de Janeiro, the city hall and the telco operator TIM have signed an agreement that allows local authorities to track the concentration and movement of people in the territories affected by the pandemic. The goal is to support pandemic control by allowing government agencies to evaluate the success of measures already implemented and to inform future actions. Even though further details of the agreement have not yet been disclosed, TIM has stated that all customer data is anonymised and used to draw heat maps, crossing information about epidemiological outbreaks with points of high concentration of people. A similar arrangement is in place in the north-eastern city of Recife, where cell phone tracking data associated with at least 700,000 telephone numbers is being used to coordinate actions encouraging social isolation.

On the federal level, the Ministry of Communications announced another partnership with cell phone operators to allow the monitoring of crowds and to provide personal information on the gender and age of tracked users. These measures are particularly problematic since President Jair Bolsonaro has repeatedly denied the magnitude of the pandemic threat and, at the same time, supported demonstrations against isolation policies. While the president insists on undermining these restrictions, regional and local governments as well as other federal bodies officially support stricter isolation and contingency policies, hitting the population with mixed messages and likely increasing the sense of insecurity and uncertainty during the pandemic. For the data protection debate, this contradiction raises a flag about the exact destination and purpose of the data targeted by the federal agreement, as its collection is not coherent with the president's discourse on the measures the pandemic requires.

Overall, these initiatives are not being met with the level of public accountability required by highly complex policy choices with strong impacts on rights and liberties.

Risks for privacy and accountability concerns

First, there are no participation or disclosure mechanisms in place to ensure transparency over the terms of these documents or the negotiation processes that led to their adoption. As they have not yet been made available to the public, their content can only be assessed through what the authorities report. With no transparency, there is also no debate over such texts and the extent of their legitimacy (which in a time-sensitive context could at least take place at the post-implementation stage), contradicting basic public administration fairness principles guaranteed by the Brazilian Constitution.

It also reinforces a rhetoric according to which privacy is a relative, or even devalued, right in the face of promising technological solutions, a mindset that now presents itself in a far more dramatic guise as the world faces a public health emergency. Complex and possibly irreversible initiatives that could take years to be debated and responsively shaped are being implemented overnight.

As duly pointed out by some scholars, Brazil's GDPR-inspired data protection law does provide for fundamental rights that could help design these policies while still protecting individuals - a task challenged by the lack of a structured Data Protection Authority, but still not impossible. However, even the application of this framework is currently threatened by legislative proposals to postpone the law's entry into force. The law was initially set to take effect next August, but at least one bill proposing its postponement (possibly motivated by difficulties in structuring the data protection authority) has been before Congress since last year, a claim that has now been reinforced by the COVID-19 pandemic.

What is next?

The abrupt adoption of these measures, coupled with the possible postponement of the data protection law, is likely to relocate this debate to the courts. As much as the checks-and-balances system arms the courts with a legitimate remedy against authoritarian abuse, in the face of a health emergency judges will be pressured to decide provisionally, in a short amount of time and with a low level of information. In this context, there is an important risk of developing a legal and jurisprudential framework based on a fait accompli, with irreversible restrictive effects on freedoms and privacy.

In this challenging landscape, we should not ignore some bitter lessons from the global technology regulation experience, such as the risk of function creep, so often raised in debates on repurposing content filters for questionable ends. Overall, many democratic systems have faced the contingencies of unchecked tech powers or misguided regulation. At the very least, these experiences lead to the conclusion that depriving these processes of transparency and accountability can impose high costs in the future.

Lithuanian pandemic containment measures will test the immunity to privacy risks


Only the exhaustion of MPs at a late evening plenary session on 31 March 2020 stopped the Lithuanian Parliament from a final vote on the draft amendment to the Electronic Communications Law. The speed of this draft in the legislative corridors leaves little doubt that its enactment will be decided at the next plenary session scheduled for 7 April 2020. The proposed amendment aims to grant governmental authorities access to mobile location data. The draft does contain privacy safeguards, but they are so nominal that many commentators have voiced fears of mass surveillance. The outrage, even in the presence of a global health threat, reflects the level of disappointment with current Lithuanian privacy policy, where particular interests outweigh privacy. Contrasting international precedents, the opinions of privacy professionals and a few pointed political objections have repeatedly proven insufficient in this arena.

Existing retention rules for traffic data, including mobile location data, genuinely illustrate the low national ambition in the privacy field. Back in 2014 the Court of Justice of the European Union annulled the so-called data retention directive 2006/24/EC. It did so with a strong stance "that Directive 2006/24 entails a wide-ranging and particularly serious interference with those fundamental rights in the legal order of the EU, without such an interference being precisely circumscribed by provisions to ensure that it is actually limited to what is strictly necessary." A few years later the same court reminded us that EU law precludes "national legislation which, for the purpose of fighting crime, provides for general and indiscriminate retention of all traffic and location data of all subscribers and registered users relating to all means of electronic communication". Such reminders have encouraged some European countries to put these rules before constitutional courts, yet the Lithuanian statutory traffic data retention list has stood resistant to these developments since 2009, when it was first mandated by the now-annulled directive. Fuelled by public security advocates, the scope of location-specific data retention meanwhile keeps expanding (e.g., data related to location area identity (LAI), Cell ID, etc. were newly listed for retention in 2018).

Moreover, this proposed legislative initiative is hardly reconcilable with the recent, unusually liberal recommendations of the European Data Protection Board. In a statement adopted on 19 March 2020, the board suggests that, as regards location-data-based pandemic containment solutions, an EU member state's "Public authorities should first seek to process location data in an anonymous way <…>, which could enable generating reports on the concentration of mobile devices at a certain location". These recommendations would allow resort to proportionate tracking only when anonymous processing of location data is "impossible". It remains to be seen whether the Austrian and Italian choice of anonymised mobile location data technologies in their struggle to contain the pandemic will convince Lithuanian MPs to adopt any privacy-friendly measure.
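The board does not spell out what such anonymous processing looks like in practice. One common reading, sketched below purely as an illustration (the grid size, suppression threshold and data layout are assumptions, not anything prescribed by the board or used by any operator), is to bucket device pings into coarse grid cells and report only cell counts above a threshold:

```python
from collections import defaultdict
from typing import Dict, Iterable, Set, Tuple

# One ping: (device_id, latitude, longitude). In an aggregate report the
# device identifier never leaves the operator; only per-cell counts do.
Ping = Tuple[str, float, float]
Cell = Tuple[int, int]


def concentration_report(pings: Iterable[Ping],
                         cell_size_deg: float = 0.01,
                         min_count: int = 10) -> Dict[Cell, int]:
    """Count distinct devices per grid cell, suppressing sparse cells."""
    devices_per_cell: Dict[Cell, Set[str]] = defaultdict(set)
    for device_id, lat, lon in pings:
        # Snap coordinates to a coarse grid cell (roughly 1 km at this size).
        cell = (int(lat // cell_size_deg), int(lon // cell_size_deg))
        devices_per_cell[cell].add(device_id)
    # Cells with fewer than min_count devices are dropped, because a lone
    # device in a sparse area is where re-identification risk concentrates.
    return {cell: len(ids) for cell, ids in devices_per_cell.items()
            if len(ids) >= min_count}
```

The design choice that matters here is the suppression threshold: a report built this way describes crowd density at a location without ever exposing the whereabouts of an individual subscriber.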

National legislative procedures require an assessment of the effects of proposed legislation whenever it reaches into new areas not previously subject to legal regulation. There is no available information that the suggested legal regulation has undergone any substantial risk assessment, including the data protection impact assessment required by the GDPR for any massive tracking. It therefore seems that the ultimate clearance of the proposal's proportionality and consequences will be performed in the field, on COVID-19 patients and related individuals, who will have nothing but their immunity to shield them from both viruses and privacy risks.

Will Quarantine pass the test of privacy? A Lithuanian perspective


COVID-19 has impacted every sphere of our lives. First and most important of all, it has brought immeasurable threats and damage to people's lives and health. It has also had a huge impact on almost every business: "revenue growth" has been replaced by "staying alive" on many companies' lists of priorities. Moreover, this extreme situation cuts deeply into the area of human rights and freedoms, with a major impact on their implementation. Europe has not seen such strict closing of borders and movement restrictions for decades. Even though democracy and the rule of law dictate that human rights and freedoms shall not be put under quarantine in any circumstances and shall be respected even on the darkest days, some countries might fail this exam. One of the heavily affected rights, challenging almost every democracy, is the right to privacy.

Home-teaching a four-year-old daughter for almost two months, I have unwittingly started applying grading mechanisms to everyday situations (too early for a four-year-old, you say?). Could we put privacy in times of quarantine to the test? Would our democracies fail or pass such a test? There is only one way to find out: set the rules of the game (the second thing I mastered during this quarantine!) and give it a go.

Let's assign letters A through F as grades. A (excellent) to E (poor) means passing the test; F means failing. Let's list the measures applied in most European countries that have an impact on individuals' privacy, and then, in the second part of this op-ed, assign a report card to Lithuania. Every item listed below reduces the overall grade by at least one point (depending on the extent of the violation, the grade could be reduced by two or more points):

  • Implementation of route maps of infected individuals and their public announcement - minus one point. Two or more points are deducted for highly detailed maps (for example, provision of gender data, or maps that make it possible to locate an exact or very proximate home address, etc.);

  • Collection of GPS location data via specialised apps without the person's consent - minus one point. The downgrade could be adjusted depending on the security and technical configuration of the app (for example, if the app has serious security gaps or passes information to third parties);

  • Collection of individuals' location data from mobile operators - minus one point. Collection of aggregated or anonymised data does not result in a downgrade. Two or more points could be taken away if mobile operators are requested to provide location data on all individuals, regardless of whether a person falls into a higher-risk group (is infected; has been in contact with infected individuals; has been travelling, etc.), or if individuals are not informed about such collection;

  • Introduction of GPS ankle monitors for infected or quarantined individuals - minus one point. Even if all measures are taken in terms of privacy and legitimacy (for example, the individual is well informed about the device and its operating principles, and the monitor is introduced only after the individual has already breached quarantine conditions and only upon a court order), my subjective downgrade applies to this method solely for its inhumane nature. More points could be taken away if the said conditions are not ensured.

The list could go on. These measures may be topped up with imperative orders on the collection of health data (a requirement for businesses to document every flu-like symptom of employees and/or clients), heavy interrogations of individuals about sensitive aspects of their private lives, and many others. This list once again shows that democracies are taking a huge privacy test by facing all of these challenges at once.

Would Lithuania pass this test? In goodwill, let's start the test at a grade of A and work through the privacy assessment items described above:

  • Minus one point for implementing route maps of infected individuals and publicly announcing them;

  • One more point can be taken away for the rather detailed entries on the mentioned map: some include the exact flight number, destination and origin countries, the exact seat or other precise information;

  • The Lithuanian government, together with the municipality of Vilnius, has set up an app called Karantinas (in English: quarantine), which enables daily coronavirus symptom tracking, encourages healthy behaviour that curbs the spread of the virus and helps to care for people in self-isolation. The privacy policy claims that GPS location data is collected only if location services are activated; however, the app does not work when location services are off. It therefore seems that the provision of location data rests on legitimate interest or another ground rather than on freely given consent under the General Data Protection Regulation. Minus one point for that; Lithuania slips to a grade of D, but the story is not over;

  • Failure to ensure other privacy aspects in the mentioned app costs the Lithuanian government another point in this test: the data retention periods are, in my opinion, too long (almost all the collected data, including location data, is kept for 18 months); profiling might have a significant impact on a person's rights and legitimate interests (the wording says "the data controller expects that profiling will not have significant impact, but <…> the results of the analysis could make the person identifiable"); and some reviews of the app claim that users received unsolicited marketing e-mails after installing it. A technical audit should also be conducted to ensure the app has no serious security gaps;

  • Lithuania risks losing more points over the debated collection of individuals' location data from mobile operators. The draft amendment to the Law on Electronic Communications, which aims to grant governmental authorities access to mobile location data, is under consideration in the Parliament. After serious criticism the draft was amended and now contains certain privacy safeguards, but they are more nominal than real, giving many commentators grounds to voice fears of mass surveillance. A repeated hearing was scheduled for 28 April but was postponed once again (the date of the next hearing has not yet been set);

  • Luckily, Lithuania has not introduced GPS ankle monitors for infected or quarantined individuals so far.

Following the rules of the game, Lithuania loses 4 points and ends with an "E" (poor) as its test result, leaving the last question unanswered: will the Lithuanian Parliament adopt the mentioned draft amendment to the Law on Electronic Communications without privacy-protecting amendments? If the answer is yes, sadly, the privacy report card will change to an "F" (fail).
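For readers who like the arithmetic spelled out, here is a toy sketch of the grading game (the labels and the point-to-letter mapping are a paraphrase of the rules above, not an official scoring scheme):

```python
# Start at A, drop one letter grade per deduction point; anything past E is F.
GRADES = ["A", "B", "C", "D", "E", "F"]


def grade(deductions: int) -> str:
    """Map a total number of deduction points to a letter grade."""
    return GRADES[min(deductions, len(GRADES) - 1)]


# Lithuania's tally as argued above (labels paraphrased for illustration).
lithuania = {
    "public route maps of infected individuals": 1,
    "highly detailed map entries": 1,
    "Karantinas app location data without free consent": 1,
    "other privacy gaps in the app (retention, profiling)": 1,
}

total = sum(lithuania.values())
print(grade(total))      # -> "E": 4 deductions from A
print(grade(total + 1))  # -> "F" if the surveillance amendment passes as is
```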

Understanding that situations such as the current coronavirus pandemic mean that certain human rights and freedoms may be restricted, I strongly believe such restrictions must be proportionate, legitimate, well weighed, justified, in line with the rule of law, and signed off only when other, less restrictive measures cannot achieve the objective. Even considering the test criteria set out above, one can clearly see that there are many means and options for collecting data, and governments should choose wisely.

Every country is currently being examined not only on mitigating the virus, but also on protecting basic human rights and freedoms. Most likely the actions taken and the lines drawn will be analysed and pushed back and forth for many years to come. Would your country pass the test?

A new milestone for data protection in Brazil


As the Covid-19 pandemic expanded across the world, so did the debates on whether fighting this sanitary emergency would require the use of personal data, and on how that would impact pre-established data protection frameworks.

In Brazil, these concerns first came to light with the announcement of agreements between government and telco companies, which would allow use of personal data to enforce isolation policies. Even though a modern Data Protection Law was approved in 2018 (Law 13.709/2018), the Brazilian landscape is particularly threatened by its recent postponement to 2021 and the lack of a data protection authority. Overall, this results in an institutional environment mostly defined by blurred competences and legal uncertainty.

Despite the concerns of even further setbacks for data protection, last week the country reached an important milestone, as the Supreme Court ruled on the unconstitutionality of a legal provision that mandated personal data sharing for statistics purposes as an emergency measure.

The constitutional quarrel

Case ADI 6387 questioned the validity of Executive Order 954/2020 (MP 954), which obliged telecom operators to share with the Brazilian Institute for Geography and Statistics (IBGE) the personal data of more than 140 million mobile service users (including names, cell phone numbers and addresses). IBGE is the public agency in Brazil responsible for the official collection of statistical information about the country, such as the national census. The data sharing was claimed to be essential to allow individual interviews for the development of relevant national statistics, the reasoning being that IBGE's staff could not visit households in person during the pandemic.

Executive Order MP 954 had been contested in court by different political parties and the Federal Bar Association on grounds of both procedure and substance. As to procedure, executive orders are legal acts valid for up to 120 days, enacted by the president (with subsequent parliamentary approval) only where requirements of urgency and relevance are met; the institutions that pleaded its unconstitutionality argued that neither requirement was satisfied.

As to substance, they claimed that the order did not abide by fundamental legitimacy standards such as purpose limitation (it did not establish a clear correlation between the data needed and their alleged use) and transparency (it did not state the exact reasons the data were needed or the means through which they would be used). It was also claimed that the provision failed to address proportionality, as the amount of data required was neither clearly justified nor accompanied by security measures against the risk of breaches or misuse. In this sense, the measure did not comply with a minimum use principle, according to which the use of data should be restricted to what is strictly necessary for its purposes.

 

The Supreme Court decision

By a vast majority of 10 out of 11 justices, the Brazilian Supreme Court held MP 954 unconstitutional and declared it invalid. Data protection had already been applied in a few other higher court decisions, but this ruling is a milestone for the country's framework because the Supreme Court has now affirmed the status of data protection as a constitutional fundamental right. Among other things, this means that future legislation and administrative acts are bound by principles such as purpose limitation, proportionality and transparency. Above all, the decision allows judicial review of ordinary legislation against this newly recognised fundamental right.

The meaning of this trial for Brazil can be compared to the 1983 German Constitutional Court ruling, which pioneered the concept of informational self-determination in the country and later influenced international debates on data protection. Both in the Brazilian and in the German case, the collection carried out by state agencies for the production of official statistics shed light on the importance of shielding these processes with guarantees that assure fairness and transparency.

Among other arguments, three central ideas underpinning the Court's reasoning came into the spotlight: the need for protection beyond intimate data; the recognition of data protection as a fundamental right; and the limitations of the country's current institutional framework.

 

Constitutional protection beyond the processing of intimate data

One of the main arguments for the validity of MP 954 was that it only covered "non-intimate data" that would be freely available in telephone directories. However, the Court recognised that in the current technological context, which allows for manifold possibilities of data processing, there is no such thing as neutral or insignificant personal data. The case rapporteur, Justice Rosa Weber, expressly stated that any data that leads to the identification of a person can be used for different purposes and therefore deserves constitutional protection. It is worth noting that names and cell phone numbers were exactly the sort of data reported to have been used in the electoral disinformation campaigns that struck Brazil's last presidential elections.

 

Fundamental right status

In this context, the Court ruled that data protection is grounded in different constitutional protections of the individual (such as privacy and due-process-related guarantees) and therefore holds fundamental right status. In fact, several justices expressly drew this construction from both the German right to informational self-determination and Article 8 of the EU Charter of Fundamental Rights.

As such, data protection entails both a subjective right, which the state must defend by protecting individuals from external threats, and an objective dimension. Under the latter, it is not enough for the state to react to data protection breaches; it must also actively promote concrete measures that protect individuals from the misuse of their data, be it by public or private actors.

 

Institutional limitations of data protection in Brazil

Justice Rosa Weber also highlighted that the risks entailed in MP 954 (namely its vague terms, absence of security measures and disproportionate collection of data) are potentially increased by the current institutional limitations of data protection in Brazil. This fragile framework stems, in the first place, from the fact that the data protection law, which was to come into force next August, has been postponed to 2021. Although the country has constitutional and other legal provisions that support data protection claims, a public health emergency with the potential to affect the use of personal data should have strengthened the case for an organised and coherent data protection regime. Instead, it served as grounds for its weakening, depriving the institutional landscape of clarity and legal certainty. Secondly, the lack of a data protection authority leaves the country without an expert and independent body that could contribute to fair and efficient measures involving the use of data.

Even though this Supreme Court decision cannot reverse these setbacks, it sends a clear message to the government regarding future initiatives that might come before the Court.

 

The milestone and the challenges ahead

Besides settling this specific dispute, the Supreme Court endorsed years of debate and initiatives towards the enhancement of data protection regulation. At a time when data protection measures were already being underrated for the (alleged) sake of public health, this decision grounds present and future data use initiatives in purpose limitation, proportionality and the active adoption of information security measures. Furthermore, the decision establishes relevant boundaries for the executive and legislative branches that were not yet clear in the Brazilian landscape.

Even though this decision does not neutralise all the current risks for data protection, it does set an important precedent for lower courts. While adjudication will likely continue to play an important role in containing initiatives such as MP 954, only the creation of a data protection authority, with its multistakeholder council, would bring more legal certainty to data processing in the country.

Nevertheless, the ruling already provides clear constitutional standards for legitimate data processing. Hopefully, it will also clear the path for the full implementation of the data protection framework designed in Law 13.709/18.

How the GDPR on data transfer affects cross-border payment institutions


The General Data Protection Regulation (GDPR), in Recital 23, extends the obligation to comply with the minimum safeguards set out in European legislation to all companies that receive, control or process the personal data of European Union (EU) residents. One of the main issues is that companies not based in the EU which receive, store or process the personal data of EU residents are also required to provide adequate levels of security to the GDPR standard.

In a decade of globalisation and tech services, this may become one of the most difficult compliance challenges, especially for businesses operating in the EU that provide cross-border services. It can directly affect the relationship with a data processor based in a third country, a processor that may be necessary for providing the service in that third country.

Cross-border payments

As the subject of this text is cross-border payment providers, it is important to note that payment providers usually operate on the basis of relationships with financial institutions, card processors and various other providers. Once a payment goes from a European country to a third country or vice versa, the entity responsible for the start or the end of the payment process may be based in a third country where the GDPR is not applicable. Furthermore, the third-party processor may have sub-processors that are equally required to comply not with the GDPR but with their local laws and regulations.

The GDPR states that the processor must inform the data controller of any sub-processors, provide full details of them and allow the data controller to decide whether these sub-processors are acceptable. All sub-processors must comply with the same (GDPR) safeguards that the processor is subject to, and the processor remains fully liable for the failures of a sub-processor under Article 28(4).

This only increases the difficulty of entering into a contract with a provider in a third country, since the data protection requirements that must be in place affect not only the processor but also its sub-processors.

Given the data controller's right to accept or reject a sub-processor, this may become a further issue. It does not seem viable that a company which already has its own sub-processors would change all or some of them just to be able to engage in business with a European company.

As an example, issues may arise with a card processor based in the Middle East that provides services to a European company. This processor would enable the company to process card payments for its customers, which may be European and non-European residents. In order to support the transactions, certain personal information related to the card holder may be required. The card processor itself is required to comply with its local law and regulations, the legal framework of the country where it is based in the Middle East. It will not be required to comply with the European standard of data protection unless it engages in business with a European company or provides services to European residents.  

The European regulation recognises that, where the processor is not based within the EU and in the absence of common personal data protection safeguards at a global level, cross-border flows of such data risk breaking the continuity of the level of protection guaranteed in the European Union. Article 28(1) obliges the data controller to choose only processors that comply with the GDPR.

If the controller engages in business with a provider/processor that is not GDPR compliant, the data controller is liable. If the essential third-party provider is not GDPR compliant, how will the European cross-border payment provider continue to operate its business? 

If the GDPR is applied in a black-and-white manner, both scenarios culminate in extremely onerous and time-consuming changes for these types of businesses. Is the European company, as data controller, supposed to terminate the provision of its services in certain countries if it does not have a GDPR-compliant processor there? And what if the data controller is unable to find GDPR-compliant providers in all the areas it needs? Often a complete business offering depends not on one single third-party provider but on several of them.

Mechanisms for the safe transfer of data to third-countries

To be less detrimental to European companies that operate internationally and to facilitate the flow of such data, the EU legislator established mechanisms whereby personal data may be transferred from the European Union to a third country.

The mechanisms which allow for the transfer of personal data to a third country are the following:

  • Countries approved by adequacy decision: the transfer of data is allowed on the basis of an adequacy decision issued by the European Commission;

  • Standard Data Protection Clauses: the data controller may transfer personal data to a third country if appropriate safeguards are in place and enforceable data subject rights are available. To facilitate this, the Commission has provided standard clauses to be inserted in a contract between the parties;

  • Binding Corporate Rules (BCRs): when a multinational group has both EU and non-EU entities, the group may use BCRs to transfer personal data between these entities. The group must have data protection policies in place for the transfer of data from the entity in the EU to the entity outside the EU. It is important to note that both entities must be part of the same group and these policies must be enforced by every member of the group.

A different possibility, which applies only to transfers of data to the US, is the Privacy Shield. This is an adequacy decision by the European Commission that allows the free transfer of personal data for commercial purposes from the European Economic Area (EEA) to US companies certified under the Privacy Shield.

Despite all of the benefits of the above-mentioned mechanisms, it is also important to observe that these requirements can be laborious to achieve.

An adequacy decision, although once granted it allows the free transfer of data to the third country in question, still means a long process that can take a few years to complete.

According to Giovanni Buttarelli, the European Data Protection Supervisor, Mexico, South Korea and India have shown interest in achieving Commission approval for data transfers. They are joined by the UK, which after Brexit will have to go through the Commission's analysis.

BCRs can be a solution for some multinational companies and their internal flows of data, but they are not applicable to third-party providers.

In this case, the last and best option is the Standard Data Protection Clauses. These can be quickly included in the service agreement between the parties. Traditionally, they have been the most frequently used mechanism to legitimise international data transfers to countries not deemed to provide an adequate level of protection. Furthermore, there is no country segregation, unless the data importer is subject to laws that require it to derogate from data protection rules beyond the restrictions necessary in a democratic society.

Nevertheless, the standard clauses impose a considerable number of obligations on a third-party provider: obligations that most of the time are neither part of its local legislation nor something it is required to comply with in its own country.

One concern often raised by providers asked to sign standard clauses is that they are not based in Europe and, in their own jurisdiction, are not required to comply with the GDPR.

Protecting business continuity

This makes European companies less and less attractive to third-party providers, who will have to comply with the minimum standards required by the GDPR if they decide to engage in a business relationship with a European company.

On the one hand, it is impressive to see the European legislator's efforts to protect its citizens and their privacy rights. On the other, it is necessary to ask how this will impact the success of European companies that operate in an international environment. A more reasonable mechanism for cross-border data transfers is still needed to ensure that business continuity is also protected.

Although Europe is a strong economic bloc, it cannot ignore the impact of international markets on its own economic results. Obliging third countries to comply with European legal standards is not the most democratic way to survive in a globalised world.

Crypto communities as legal orders


‘A specter is haunting the modern world, the specter of crypto anarchy.’

Tim May, The Crypto Anarchist Manifesto

1. Introduction

‘A revolution has been born’. This is how Dread Pirate Roberts, the pseudonymous alleged founder and administrator of the Silk Road dark market (hereafter: Silk Road) described the role of the platform towards its users in a forum post back in 2012 (Dread Pirate Roberts, 2012). In the same post, he wrote: ‘Silk Road was never meant to be private and exclusive. It is meant to grow into a force to be reckoned with that can challenge the powers that be and at last give people the option to choose freedom over tyranny. We fundamentally believe that people can thrive and prosper under these conditions and so far tens of thousands have done so in the Silk Road market.’ Yet, one wonders, could an online platform like the Silk Road indeed be understood as a revolutionary group challenging the (partially offline) ‘powers that be’?

Historically, as the internet grew, so did the occurrence of virtual communities (Lessig, 1996). Facilitated by the advent of microcomputing, these spontaneous groups have been popping up unrestrained both on the visible internet indexed by traditional search engines and on the deep web, invisible to general use (Bergman, 2001). While evangelists of internet independence thought of cyberspace as ‘the new home of the mind’ (Barlow, 1994), it is in the deep web’s darker layers, inaccessible through standard web browsers, that alternative political, social and economic orders thrive, hiding away from state laws designed to control human behaviour: out of sight, out of reach of state sovereignty (May, 1994).

The progress made by computer cryptography in the 1970s, and more specifically public key cryptography (Gardener, 1977), is one of the defining forces enabling these spaces. Tools based on strong encryption algorithms act as a cloak of secrecy and their uses for the protection of individual privacy are diverse, yet two particular characteristics stand out. First, the architecture of virtual communities entails that cryptography is used to ensure the security of identity, communication, currency, or more recently, value. Second, in these spaces, political ideologies are built around cryptography, arguably employed as a way of ‘displacing conventional notions of nationhood’ (May, 1994). In this paper, we refer to communities defined by these two features as ‘crypto communities’.

The Silk Road is one such crypto community. Generally labelled as a den of dealers (Christin, 2013), the Silk Road brought together people who rejected surveillance. Whether driven by personal creed, financial gain, casual needs or simple curiosity, its users formed a space where state-based regulatory limits were rejected. To enter this realm, users would go through The Onion Router browser (Tor), acting as the gatekeeper to a network of hidden addresses. Given its complex encryption, users could browse the dark web without being tracked (Gruber, 2013). Within this space, financial cryptography enabled users to engage in trade. Back in the 1980s, when David Chaum was writing about digital transaction systems (Chaum, 1981, 1985; De Filippi & Wright, 2018), cryptocurrencies had not gained much traction, as academic interest in the topic mostly focused on the mathematics behind it rather than on its financial potential (May, 1994). Enter the Bitcoin era circa 2008, and trade became a common occurrence in virtual communities on the dark web, as they started developing more mainstream, consumer-oriented market characteristics. As the biggest market ever to use a blockchain-based cryptocurrency (Bitcoin), the Silk Road connects the past and the future of crypto communities: the cypherpunks of the 1980s, and the decentralisation projected for the next internet era.

Though initially believed to be a Wild West by its creator,1 throughout its existence, the Silk Road matured into an ecosystem with its own elaborate set of rules and enforcement mechanisms. Our paper examines whether or not this ecosystem had the constitutive elements of a legal order, as well as whether or not this order had the revolutionary potential described by Dread Pirate Roberts.2 Our goal is to use these insights to contribute to the long-standing academic and regulatory discussion regarding the rule of law in cyberspace and the legitimacy of state intervention (Suzor, 2010; Sunstein, 1995; Hardy, 1994; Perrit, 1997; Menthe, 1998). The first iteration of the Silk Road left behind a large footprint of social interactions, in the form of forum posts, now also made publicly available.3 Essentially, this article uses qualitative content analysis to look into these interactions, in search of the constitutive elements of a legal order as follows.

Section 2 of this paper maps three different generations of crypto communities, and reveals their shared narrative. This part also describes the Silk Road and clarifies methodological questions regarding the qualitative content analysis. Using a legal philosophical framework, in section 3 we engage in the qualitative content analysis of randomised forum threads from the first iteration of the Silk Road. In this part, we explore whether or not the Silk Road can be understood as a legal order - a minimal condition for its revolutionary potential. Section 4 reflects upon three specific models resulting from the analysis of the Silk Road data set, and explores the implications of these findings for the governance of crypto communities in general. Lastly, the conclusion includes a discussion of the question whether or not an online platform such as the Silk Road poses a serious challenge for state sovereignty.

2. Cryptography and virtual communities

What was the original ideology of crypto communities, and how did it evolve? As indicated above, in this paper we consider crypto communities to be virtual communities that use cryptography for their architecture, as cryptography becomes an integral part of the community ideology, whether expressed in a political form, or showing features common to religion.

The internet knows a plethora of virtual communities, either past or present: early Usenet forums (Bartlett, 2015, p. 15), social media networks such as Facebook, Twitter or even WhatsApp, or gaming worlds such as Second Life are a few examples. Many of these communities use some encryption functions, authentication being perhaps the most recognisable. Crypto communities are different from other virtual communities because of two main features. First, they use strong cryptography to secure identity, communication, currency and/or value (Arora & Barak, 2009), and this is vital to their architecture. Second, cryptography goes beyond its architectural usefulness, and becomes a tool for the expression of socio-economic or even political ideologies. In this section, we use these two criteria (architecture and ideology) to identify and discuss three generations of crypto communities. In doing so, in section 2.1 we first outline a brief history of computer cryptography. Section 2.2 focuses on the first version of the Silk Road as one such crypto community, to explain its birth and demise.

2.1 Keep your hands off my stuff - Three generations of crypto communities

As a field of computer science, cryptography has been dubbed the art and science of encryption (Ferguson, 2011, p. 5; Bauer, 2013). The encryption of information is supposed to guarantee its confidentiality, and generally entails ‘an algorithm called a cypher and a secret value called the key’ (Aumasson, 2018, p. 1). According to Kessler, the primary functions of cryptography are privacy/confidentiality, authentication, integrity, non-repudiation and key exchange (see table 1 below).

Before the 1970s, cryptography was the monopoly of governments, and used mostly for the benefit of intelligence services (Greenberg, 2012, p. 62). This paradigm shifted with the introduction of publicly-available cryptography, particularly the symmetric Data Encryption Standard (DES) cipher (Schneier, 1994; Greenberg, 2012, p. 86), the asymmetric Rivest–Shamir–Adleman (RSA) cipher, and the Diffie-Hellman key exchange (Narayanan, 2013; De Filippi & Wright, 2018, p. 14). With the effort of many academics, hobbyists and civil liberties organisations, cryptography moved from being considered a highly dangerous asset – back in the 1990s, it was labelled as ‘munition’ for export purposes by the US government – to becoming a basic tenet of online communication (Freier et al., 1996; Levy, 2002). Cryptography as a translation of privacy from the analogue world to the ‘Information Superhighway’ is referred to as ‘Pragmatic Crypto’ (Narayanan, 2013).

Table 1: Primary functions of cryptography (Kessler, 2019)

  • Privacy/confidentiality: ensuring that no one can read the message except the intended receiver.

  • Authentication: the process of proving one’s identity.

  • Integrity: assuring the receiver that the received message has not been altered in any way from the original.

  • Non-repudiation: a mechanism to prove that the sender really sent this message.

  • Key exchange: the method by which crypto keys are shared between sender and receiver.
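To make these functions concrete, the following minimal sketch uses RSA public-key primitives from the Python cryptography library (an illustrative choice on our part, not a tool attributed to the communities discussed here): encryption provides confidentiality, while a digital signature covers authentication, integrity and non-repudiation.

```python
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# Generate an RSA key pair: the private key stays secret, the public key is shared.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"meet at the usual place"

# Confidentiality: anyone can encrypt with the public key,
# but only the private-key holder can decrypt the ciphertext.
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
ciphertext = public_key.encrypt(message, oaep)
assert private_key.decrypt(ciphertext, oaep) == message

# Authentication, integrity, non-repudiation: the sender signs with the
# private key; anyone holding the public key can verify the signature.
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)
signature = private_key.sign(message, pss, hashes.SHA256())
public_key.verify(signature, message, pss, hashes.SHA256())
# verify() raises InvalidSignature if the message or signature was altered.
```

Key exchange, the last function in Table 1, is handled in practice by protocols such as Diffie-Hellman, mentioned in the history above.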

Among the computer scientists focused on cryptography was May, a self-proclaimed free-market warrior, who argued that the state should ‘keep [its] hands off my stuff; out of my files, out of my office, off what I eat, drink and smoke. If people want to overdose, c’est la vie. Schadenfreude’ (Greenberg, 2012, p. 52). Together with fellow techno-libertarians like Hughes and Gilmore, who equally believed the state should have no involvement in the affairs of its citizens, May started the ‘cypherpunk’ group in 1992, which he described as ‘a loose, anarchic mailing list and group of hackers’ (May, 1994), leading to the birth of the first generation crypto community (see Table 2). The group rallied considerable support for its aim to build on and practically implement earlier theoretical cryptography, which crystallised the ‘doer’ nature of the group. As Hughes declared in 1993, ‘cypherpunks write code’ (Hughes, 1993). This code was the backbone of a small-scale online infrastructure, where cryptography was used to secure identity and communication. Many of their meetings were held in person, and their online presence comprised several mailing lists and bulletin boards/forums. But cypherpunks did more than write code, they also propagated a mission, labelled ‘Cypherpunk Crypto’ (Narayanan, 2013, p. 3). Their development and use of mass-distributed cryptography was a means to an end. The goal was to fundamentally alter the social, economic and political status quo. As self-proclaimed crypto-anarchists (May, 1992), ‘where they saw authority, they attacked it’ (Greenberg, 2012, p. 122).

Table 2: Three generations of crypto communities and their uses of cryptography

  • 1st generation (Cypherpunks): identity, communication

  • 2nd generation (Dark markets): identity, communication, currency

  • 3rd generation (Dapps): identity, communication, currency, value

In 2008, Nakamoto shared his white paper on May’s cryptography mailing list (Nakamoto, 2008), leading to the creation of the second generation of crypto communities, which includes dark markets (see Table 2). In itself, the idea of a market where people can exchange goods or services, including those prohibited by state law, was not a new concept: in the early 1990s, the so-called BlackNet, a marketplace for information, or the ‘Assassination Politics’ crowdfunding and gambling assassination scheme had been shared around the cypherpunk community (Bartlett, 2015, p. 11). Yet, as one of the earliest markets to combine libertarian principles with cryptography, the Silk Road achieved something its predecessors did not: scale. The main reason why, in its heyday, the Road grew to as many as 150,000 active customers (Christin, 2013, p. 9) is that, in addition to identity and communication, it also used cryptography for currency. Tor provided more privacy for more functionalities than sending emails using anonymous remailers. In addition, cryptocurrencies enabled transfers of pecuniary value for illegal transactions, and Bitcoin reduced the risk of being tracked by law enforcement (Narayanan et al., 2016).

The commercial success surrounding Bitcoin financial speculation unleashed a wave of interest in the cryptographic technology behind it, namely blockchain, which is one example of the broader category of distributed ledger technology, or DLT (Walch, 2017; Benčić & Žarko, 2018; Ferraro et al., 2018; Popov, 2018). In some opinions, blockchain is supposed to be the harbinger of a new internet era, in the form of the decentralised internet, viewed as the solution to the increasingly complex problems posed by new, ominous uses of centralised big data by both public and private actors (Simonite, 2018; Yeung, 2019). This context marks the emergence of the third and most recent generation of crypto communities (see Table 2 above). It includes groups involved in the development or use of decentralised computing platforms (e.g., Ethereum) or decentralised apps (Dapps). Unlike the earlier generations of crypto communities, this one uses cryptography at an additional level of infrastructure: exchanging value. This has led to the so-called ‘Internet of Value’, a concept which has yet to be defined in legal scholarship, social science, or computer science (Finck, 2018). The concept seems to be based on the notion that societies and markets are increasingly developing a steadier network infrastructure to transfer value, but also that such a networked reality would bring with it a new understanding of what can be valuable in virtual worlds (e.g., digital assets like cryptokitties, weapon skins or virtual land). In addition, this value would ideally be exchanged just as quickly as information (Choy & Teng, 2017), because it actually is information. While cryptocurrencies too are valuable, the Internet of Value encompasses a broader category of tradeable assets.

2.2 Silk Road v1.0

So far, we have mapped three generations of crypto communities. Now it is time to shift the focus to one community in particular. In our view, the Silk Road v1.0 is an appropriate case study, as it ties earlier and later generations together through the shared narrative explored above. Moreover, it employed sufficiently sophisticated tools and systems from a cryptographic perspective, and its lifespan has concluded, which eliminates any unpredictable developments in this community.

The Silk Road’s appeal came from a combination of familiar, customer-oriented e-commerce and the surprise of its unexpected (and, from a state perspective, generally highly illegal) listings. It was hailed as a platform providing ‘some cool and edgy stuff, not just another PayPal’, or an alternative Amazon, ‘if Amazon sold mind-altering chemicals’ (Chen, 2011). Much has been written about the marketplace in terms of its illegal activity, the trial of its founder, alleged owner and first main administrator, Ross Ulbricht, or the plethora of legal questions it brought up, ranging from the enforcement of laws prohibiting online crimes to the regulatory issues posed by the widespread use of Bitcoin (Lee, 2016; Turpin, 2014; Seligman, 2015; Hughes & Middlebrook, 2015; Ghappour, 2017; Price, 2014). While relevant, these matters are not within the scope of our paper, which instead focuses on the more social features of the Silk Road, namely how its community interacted. For cohesion purposes, we narrow our depiction down to a specific period of the Road’s lifespan, namely its first rendition (February 2011 - October 2013).

The mastermind behind the Silk Road is Dread Pirate Roberts, yet the known facts relating to his identity paint an incomplete picture, as it is still unclear whether he was in fact the sole owner of the platform. What is a fact, however, is that Ulbricht acknowledged being the founder of the Silk Road, which he saw as an economic experiment (O’Neill, 2015), and he was convicted by a court of law in the state of New York for crimes associated with the creation and operation of the marketplace.4 This paper proceeds on the assumption that Ulbricht was the main operator of the Silk Road, with absolute administrative privileges, operating under two pseudonyms: Silk Road and Dread Pirate Roberts.

Dread Pirate Roberts had assistance in at least two ways: programming and moderating. At least one Unix administrator was responsible for server security, reliability and performance, and was on the Silk Road payroll.5 In addition, there was a fluctuating number of moderators, sometimes mentioned by the community active on the forum, who operated on a voluntary basis and were crowdfunded by the community, as can be seen in a post6 by a user called ‘dutchshop’; some moderators demanded financial support for their help. Overall, administrators dealt with back-end issues, and moderators provided forum support for FAQs or specific transactional issues.

3. Is the Silk Road a legal order?

3.1 Framework: the constitutive elements of a legal order

Can the Silk Road be understood as a legal order? To appreciate this question, we have to first clarify what we mean by ‘legal order’ (Hopman, 2019a; Hopman, 2019b). Legal orders can be understood in many ways and analysed from many angles (e.g., sociology, political science, law, philosophy, economics). This paper takes a legal philosophy angle to the understanding of legal orders, in order to contribute to the development of this concept particularly with insights from legal pluralism.

In this context, our proposed theoretical framework starts from the basic assumption that law is a social fact: laws are created by persons and do not exist objectively and externally to human understanding. Laws exist only where there is a relation between people, a specific sort of relation that takes on a certain character, such that we define it as legal. To clearly delineate what is and is not legal, we propose that each social order, to be classified as a legal order, has to possess the following five characteristics:

  1. A sovereign: the person, or group of people, that the (legal) community has authorised to make law over them. The community allows the sovereign to be the author of (part of) their actions, thereby giving up part of their individual freedom, bestowing legal power upon the sovereign. The sovereign is an artificial person. According to Arendt, ‘When we say of somebody that he is “in power”, we actually refer to his being empowered by a certain number of people to act in their name. The moment the group, from which the power originated to begin with (potestas in populo, without a people or group there is no power), disappears, “his power” also vanishes’ (Arendt, 1970, p. 44). Similarly, Hobbes states that: ‘Since man's passions incline men to peace, out of fear of death […] they covenant amongst themselves to submit to a sovereign. In other words; by covenant they create an artificial person, a Leviathan, and they appoint one man to bear their person of whose actions they are all the author’ (Hobbes, 1996, pp. 88-91, p. 120).
  2. The basic norm: the norm that presupposes that one ought to behave such as has been commanded by the sovereign, or that the sovereign is the legitimate sovereign (Kelsen, 2007, pp. 115-118; Kelsen, 2009, pp. 8-9; Hart, 2012, p. 100).
  3. The legal community: the person, or a group of people, to whom the laws of the legal order apply. They recognise the basic norm authorising the sovereign to create laws.
  4. Laws: a law is a valid legal norm, which is valid within a legal order, by virtue of the fact that it has been created by a legitimate sovereign. A norm is a prescriptive statement, a rule by which a certain behaviour is commanded, permitted or authorised, and laws can be written or unwritten, public or non-public (Hopman, 2017).
  5. Possibility of legal enforcement: Anyone who acts against the law (commits an illegal act) is liable to legal consequences posed within the same legal order.

The legal order can then be defined as the legal community, sovereign and its laws taken together. Since law is a social fact, the existence of all of these elements, and ultimately the existence of any legal order, depends on the subjective belief of people. For there to be law, there has to be a legal community that recognises the legal power of a sovereign, and a sovereign who in fact makes law over this community. This point is made quite clear by Haugaard’s example:

[W]hat distinguishes the actual Napoleon from the ‘napoleons’ who are found in psychiatric institutions is not internal to them but the fact the former (unlike the latter) had a substantial ring of reference which validates his power. (Haugaard, 2008, p. 122)

Following this theoretical framework, the Silk Road must possess these elements in order to qualify as a legal order. Clearly, these elements have to be seen in connection with each other and can only be separated artificially. In practice, they are interdependent; for example, the possibility of legal consequences prescribed by the sovereign (element 5) is in great part dependent on the subjective belief of the legal community in the basic norm of the legal order (elements 2 and 3). This does not, however, mean that every law has to be known by the whole legal community for it to be law. It is sufficient for the legal community to generally believe in the basic norm that installs the sovereign, and for this sovereign to declare the law – even if only a limited number of the members of the legal community know about this law.

3.2 Methodology

To be able to determine whether the Silk Road as a platform possesses these elements of a legal order, we analysed the large amount of text available on the platform’s forum. While this text was originally not available on the regular web, after the demise of the Silk Road several users made this data public. In our analysis, we use one of these data sets, anonymously collected and hosted online.7 We consider this data set to be sufficiently reliable, since it has also served as one of the main sources for the extensive reporting done by Bilton, who looked into more than ‘[…] two million words of chat logs and messages between the Dread Pirate Roberts and dozens of his employees […]’ together with ‘dozens of pages of Ross’s personal diary entries and thousands of photos and videos of Ross’ (Bilton, 2017, p. 323, p. 329). This information also matches independent reports written by Greenberg and Bartlett, two journalists who have extensively covered the first version of the Silk Road in their writings (Bartlett, 2015; Greenberg, 2012). Moreover, this resource is also referred to by the pseudonymous users of the SilkRoad subreddit, implying that it portrays accounts which individuals familiar with the history of the Silk Road consider to be valid, and that the collector and host of all this information is knowledgeable about the Silk Road and its development. The text available in this data set included the Silk Road Charter, the Terms of Service (Seller’s Agreement; Buyer’s Guide; Seller’s Guide), and forum threads, posts and messages. While our inquiry refers to the Charter and Terms of Service to establish the rules of the Silk Road, the main empirical focus of this study lies in the forum threads. In the data set, forum threads were labelled according to users, and each user folder accounts for thousands of posts, labelled as both threads and individual posts from each thread to which users contributed. We chose to focus on threads in the Dread Pirate Roberts folder, ranging from 18 June 2011 to 26 September 2013, which included conversations of various lengths with a wide variety of users on the forum. We randomised the order of the 324 available threads and selected 118 of them. After having analysed these threads, we hand-selected another 9 threads which seemed relevant based on their topic line. We subsequently analysed the selected information using qualitative content analysis (Budd et al., 1967; Hojlund, 2015). This type of analysis gives us further insights into how the Silk Road rules were applied and, most importantly, how the community perceived them. The full coding notes and further details regarding the data set are available upon request.
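
To illustrate the sampling step, the sketch below shows one way the randomised selection of 118 out of the 324 Dread Pirate Roberts threads could be reproduced. It assumes, purely for illustration, that the archived threads sit as one plain-text file each in a folder per user; the folder name, file pattern and seed are hypothetical and not taken from the actual data set or the scripts used in this study.

```python
import random
from pathlib import Path

# Hypothetical layout: one folder per forum user, one plain-text file per thread.
# The folder name, file pattern and seed below are illustrative only.
DPR_FOLDER = Path("silkroad_forum/dread_pirate_roberts")


def sample_threads(folder: Path, sample_size: int = 118, seed: int = 42) -> list[Path]:
    """Shuffle all thread files in `folder` and keep the first `sample_size` of them."""
    threads = sorted(folder.glob("*.txt"))  # e.g., the 324 threads in the DPR folder
    rng = random.Random(seed)               # a fixed seed keeps the illustrative sample reproducible
    rng.shuffle(threads)
    return threads[:sample_size]


if __name__ == "__main__":
    for thread in sample_threads(DPR_FOLDER):
        print(thread.name)                  # hand the sampled threads over for qualitative coding
```

The fixed seed is only there to make the illustrative sample reproducible; the qualitative content analysis itself was then applied to the threads selected this way plus the 9 hand-picked ones.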

3.3 Empirical study: the (legal?) community of the Silk Road

Based on our in-depth qualitative analysis of the Silk Road forum, we can now look at the following questions:

  1. Do Silk Road users see Dread Pirate Roberts as the sovereign who is authorised to make law over the Silk Road community?
  2. Are the laws in the formal written documents (i.e., the Charter, Buyer’s and Seller’s Guides) enforced?
  3. Are there other Silk Road laws which perhaps are not formalised in written legal documents?

When we have answered these questions, we can answer the question of whether the Silk Road is a legal order, in the sense that it possesses the elements of a legal order indicated under section 3.1.

3.3.1. Do the Silk Road users see Dread Pirate Roberts as the sovereign who is authorised to make law over the Silk Road community?

One condition for the rules of the Silk Road, created by Dread Pirate Roberts, to be properly understood as laws, is that the relevant legal community – in this case, the Silk Road users – believes that Dread Pirate Roberts is the legitimate sovereign of the Silk Road. This means that the (legal) community has authorised Dread Pirate Roberts to make law over them, and that they believe in a basic norm that presupposes that they ought to behave such as has been commanded by the sovereign (section 3.1).

On the forum, it seems that people generally understand Dread Pirate Roberts as the one who makes the rules. In several forum threads, users propose certain changes to the rules and wait for Dread Pirate Roberts to reply to this, or are asked by Dread Pirate Roberts for input on proposed legislation. In other cases, Dread Pirate Roberts simply announces legislative and/or user changes, yet it seems that user comments do have the potential to make Dread Pirate Roberts change his mind.8 Examples of such legislation are amendments of seller rating/feedback9 and financial regulations.10

In these cases, users either complain about new regulations or defend/compliment Dread Pirate Roberts. While some users seem critical of the authority of Dread Pirate Roberts, no one seems to deny or seriously question his authority as legislator. Users regularly refer to Dread Pirate Roberts as ‘the captain’,11 as does he himself (Dread Pirate Roberts: ‘Whether you like it or not, I am the captain of this ship. You are here voluntarily and if you don't like the rules of the game, or you don't trust your captain, you can get off the boat.’).12

3.3.2. Are the laws in the formal written documents (i.e., the Charter, Buyer’s and Seller’s Guides) enforced?

Several instances of enforcement of laws are discussed on the forum. Insofar as these concern enforcement of formal written laws, the following rules are mentioned:

  1. Restricted items: child porn (seemingly defined as porn involving anyone under age 18) was illegal, and users were asked to report child porn listings by contacting Dread Pirate Roberts/the admin team.13 In general, it was illegal to sell something that would hurt others; examples mentioned are stolen items or info, stolen credit cards, counterfeit currency, personal information, assassinations and/or weapons.14 Listings of forgeries of government documents, such as fake identity documents, were allowed, but not forgeries of privately issued documents, such as diplomas or tickets. It is, however, unclear what the consequences would be were this rule violated.15
  2. Customer service: it was not allowed for sellers to leave feedback for themselves from a dummy account. However, while the Seller’s Guide indicates that this would be ‘sanctioned with the revocation of privileges’, there does not seem to be any active enforcement of this rule.16 Threatening a customer was illegal and the punishment was the suspension of the account.17
  3. Obligations relating to payment: to prevent vendors from pretending to sell - receiving payment yet never sending the promised goods - a new rule was introduced on 9 January 2012 and sent in a message to all vendors by Dread Pirate Roberts. According to this rule, from then on selling out of escrow (the intermediated payment system mandated by Dread Pirate Roberts) was illegal. Users were asked to report vendors who demanded out of escrow payment to the administrative team. At the time, Dread Pirate Roberts argued: ‘We are looking at several mechanisms for enforcing the ban on [Out of Escrow] transactions, from self-policing to bounties on offenders.’ Soon after, it was decided that accounts of vendors requesting out of escrow payment would be terminated.18 ‘Finalizing early’, as described in the Seller’s Guide, was still allowed.19 (A sketch of how these escrow rules fit together follows this list.) It was also illegal for vendors to redirect users to their personal, or another, darknet site. The punishment for this act was the suspension of the vendor’s account.20
  4. Data protection: first, it was not allowed to share any kind of personal information about users, not even if these users were (state) police officers; it is unclear what enforcement was applied in this case.21 Second, vendors had to delete buyer addresses as soon as they had shipped the purchased goods, or immediately if they did not intend to ship them; if they did not, their accounts would be suspended.22
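
To make the escrow rules above more tangible, the following toy sketch models the mandated payment flow as we reconstruct it from the forum material. It is our own hypothetical illustration (the class, state names and amounts are invented), not Silk Road’s actual code; it simply shows how ‘finalizing early’ releases funds before delivery while the transaction still passes through escrow, unlike out of escrow selling, which bypasses this flow entirely.

```python
from enum import Enum, auto


class EscrowState(Enum):
    FUNDED = auto()    # buyer's coins are held by the marketplace
    SHIPPED = auto()   # vendor has marked the order as sent
    RELEASED = auto()  # funds have been passed on to the vendor
    REFUNDED = auto()  # funds have been returned to the buyer


class EscrowOrder:
    """Toy model of the mandated escrow flow; a hypothetical reconstruction, not Silk Road's code."""

    def __init__(self, amount_btc: float):
        self.amount_btc = amount_btc
        self.state = EscrowState.FUNDED  # every compliant sale starts with funds in escrow

    def mark_shipped(self) -> None:
        if self.state is EscrowState.FUNDED:
            self.state = EscrowState.SHIPPED

    def finalize(self, early: bool = False) -> None:
        # 'Finalizing early' releases the funds before delivery is confirmed,
        # which the Seller's Guide allowed; selling outside escrow skips this flow altogether.
        if early or self.state is EscrowState.SHIPPED:
            self.state = EscrowState.RELEASED

    def refund(self) -> None:
        # A dispute resolved in the buyer's favour by moderators/administrators.
        if self.state is not EscrowState.RELEASED:
            self.state = EscrowState.REFUNDED


order = EscrowOrder(amount_btc=0.5)
order.mark_shipped()
order.finalize()
print(order.state)  # EscrowState.RELEASED
```

Under this reading, the ban of 9 January 2012 targets transactions that never enter the escrow flow at all, whereas early finalisation merely shortens the path from funding to release.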

In general, it seems that although everyone is aware of the laws of the forum, policing and enforcement do not always happen. In case of a dispute, moderators act as judges, with Dread Pirate Roberts as the supreme judge. However, there are also instances in which Dread Pirate Roberts mentions he will not take measures against the sale of certain types of products, such as counterfeit silver bars:

Up to this point, we have been strict about not allowing counterfeit currency, but all kinds of counterfeit things like bullion, apparel, even fake drugs have started to be sold on Silk Road and we just haven't taken the time to police it or draw a well-defined line for what is and isn't allowed. At this point, you won't be stopped if you list this item, but sometime soon I will have a discussion with the community about where we want to draw the line and you might be asked to delist such items. If that's the case, anyone selling such items will get their bond refunded even if they hadn't met the requirements.23

3.3.3 Are there other Silk Road laws which perhaps are not formalised in written legal documents?

Before starting our data analysis, we expected that the laws would not be too meticulously formalised in the written formal documents, especially because these were so limited. However, it appears that these documents were considered very important and to indeed contain all the law of the Silk Road (Goanta, 2020). The documents were often referred to, and in our qualitative content analysis we did not come across any other rules on the Silk Road.

3.4 What kind of socio-legal order is the Silk Road?

Above, we embarked on an analysis of Silk Road forum threads, asking whether the Silk Road can be considered a self-standing legal order and, if so, what kind of socio-legal order it is. These two points are elaborated upon in what follows.

(i) The Silk Road as a legal order

All the elements of a legal order mentioned before seem to be present. According to the theory of legal pluralism, human beings are members of different legal orders simultaneously, of which the state legal order is only one (Mak, 2018; Tamanaha, 2008). The Silk Road can be understood as one of these legal orders, providing an alternative to the state legal order, under whose rules selling and buying drugs is illegal. In this model, Dread Pirate Roberts would be the legislator/sovereign, who refers to himself and is referred to as ‘the captain’, and/or the whole team of administrators whom he seems to be leading. The laws are the rules of the forum, for example the rules around selling and buying. Table 3 below gives some examples of how the constitutive elements of a legal order interact on the Silk Road.

Table 3: Examples of constitutive elements of a legal order present in forum threads

| Legal norm (law) | Legislator | Legal community | Enforcement |
| --- | --- | --- | --- |
| 1. Scamming is illegal. Scamming is understood as: (a) to impersonate an existing user on the Silk Road forum; (b) to impersonate an existing Silk Road user on other forums/marketplaces; (c) vendors who create buyer accounts, order their own product and leave feedback to boost their sales; (d) vendors who pretend to sell a product, get paid by the buyer but never send the products. | Dread Pirate Roberts (and Silk Road admins) | Silk Road users | Types (b) and (c) are not enforced; this is considered up to the individual responsibility of the users. For type (a), the existing user can send a message to the admins, who will do a password reset. For type (d), there is a buyer protection mechanism (escrow system); per 9 January 2012, selling out of escrow is made illegal (see below). |
| 2. Selling out of escrow is illegal (however, ‘finalizing early’ - finalizing the transaction and releasing funds before goods have been received - is still allowed). | Dread Pirate Roberts | Silk Road users | Termination of account |
| 3. Contact between seller and buyer outside Silk Road is not allowed unless the site is out. | Dread Pirate Roberts | Silk Road users | Termination of account |
| 4. Child porn is illegal. | Dread Pirate Roberts | Silk Road users | Unknown |

In many instances, the Silk Road community is either consulted about new legislation/policy, or the community itself takes the initiative to comment on existing rules/features and to propose improvements, which often receive serious consideration from the Silk Road administrators. In this sense, it is quite an egalitarian legal order, although the leader is not elected, nor does the community have final decision-making power (the Silk Road administrators decide, and there is no voting process). Some users get upset when Dread Pirate Roberts or his administrators ‘legislate’, calling them ‘dictators’, ‘tyrants’ or ‘chiefs’, while others support the leadership. The authority/enforcement element is sometimes also expressed through banishment.

It's one thing to ban listings but terminating accounts for this kind of a violation is ridiculous and dictatorial. I won't be around here much longer if it's going to turn into the 4th Reich. (RapidImprovement, 2012, January 9)

I for one applaud and support your governance. (exodusultima, 2012, January 11)

It is also noticeable that most users seem upset with the Silk Road operators when they unilaterally raise their commission without any perceived benefits to the community, thereby expressing an expectation akin to a social contract (we follow our leader in exchange for protection):

@Silk Road - I simply just cannot see how any of your proposed legislation prevents scams. (Paperchasing, 2012, January 11)

Lastly, it is interesting that the Silk Road claims to protect its community against both scammers and ‘LE’ (law enforcement, meaning law enforcement of the state legal order), a view that is often reiterated by the users. Users of the Silk Road seem aware that while many of their actions are legal from the internal perspective of the Silk Road, they are illegal from the external perspective of certain state legal orders, of which they are also members. As moderator Libertas argues:

People here are not criminals […] They may be considered “criminals” under the laws of the society in which they live in but those laws do not apply to us here […].21

However, upon closer scrutiny, the social environment of the Silk Road becomes more confusing and complicated. While it may be argued that the Road is a self-standing legal order, potentially one that in a certain area defies the state legal order, the story is not that black-and-white, because different users view the Silk Road quite differently. On the basis of our analysis, there are two particular types of socio-legal order that different users consider the Silk Road to be: a revolutionary movement and an illegal capitalist marketplace.

(ii) The Silk Road as revolutionary movement

The Silk Road as a revolutionary movement is a model advocated mostly by Dread Pirate Roberts himself, who argues that the Silk Road and its financial benefits are only a means to an end, namely to fight state control and to prepare for the war to come. Some users seem to strongly support this view, calling each other ‘brothers in arms’.24 A lot of community building also goes on between users and Dread Pirate Roberts, for example in the frequent declarations of love and references to trust. In his ‘State of the Road Address’, Dread Pirate Roberts writes:

Silk Road was never meant to be private and exclusive. It is meant to grow into a force to be reckoned with that can challenge the powers that be and at last give people the option to choose freedom over tyranny. We fundamentally believe that people can thrive and prosper under these conditions and so far tens of thousands have done so in the Silk Road market. A revolution has been born (Dread Pirate Roberts, 2012).

He argues that the change in commission over sales, which leads to higher profits for the administrators, is necessary because of Silk Road’s long-term vision, which is not ‘getting the most out of this thing before it gets taken down’, but ‘doing everything we can NOW to prepare for the war to come’. If they do not, ‘Silk Road will be a shooting star that burns out quickly and dies as little more than a dream, swallowed by the nightmare reality of an ever-expanding, all-powerful global oligarchy’. Therefore, everyone has to support this enterprise: ‘Do it for me, do it for yourself, do it for your families and friends, and do it for mankind’. Some users support this view: they call each other ‘brothers in arms’ and ‘brothers of the struggle’, and accept Dread Pirate Roberts as their leader, whom - as mentioned - they refer to as ‘the captain’. A further argument supporting this model is the amount of love and support exchanged between the members of the community. Dread Pirate Roberts actively builds a community with himself as a revolutionary, trustworthy leader (or captain), and the main elements holding the community together are love and trust; declarations of love are regularly exchanged between users and administrators.

Here's another thing that doesn't get said enough: I love you. This is the most fun I've ever had and I feel closer to the people I have met here than the vast majority of people I have to hide all of this from in real life. (Dread Pirate Roberts, 2012, January 9)

I fucking love you. Thanks for making our lives so much better. (listentothemusic, 2011, August 23)

Hey Dread Pirate Roberts, you probably get this a lot, but you’re awesome. You are my personal hero. (divinechemicals, 2012, February 28)

In terms of trust, establishing trust within the community is an important issue. In reply to the State of the Road Address of 9 January 2012, many users argue that Dread Pirate Roberts is pretending there is a larger goal behind the Silk Road while in fact he simply wants to make more money, a view that is challenged by the platform’s leader:

I am quite surprised by […] how little faith you put in me after I feel like I have done so much to deserve it […] if you would only do me the courtesy of believing me […] Everyone WILL be treated fairly under the new rules just as you have been all along […] you have to TRUST us that we are doing our absolute best and will always work toward our stated goals, which include giving people the opportunity to choose freedom over tyranny, and to trade in just about any good or service they wish, securely and privately […] If I am greedy, I am greedy for freedom. I am greedy for power. Not force over others, but for a world where POWER resides in me and each and every individual, where it belongs. If we can get to that world, I can die happy. (Dread Pirate Roberts, 2012, January 9)

(iii) The Silk Road as a marketplace

However, many users also seem skeptical and retain the view that the Silk Road is rather a capitalist marketplace. When Dread Pirate Roberts talks about the revolution, they are cynical and see it simply as an excuse to ask for a higher commission. They also argue that the marketplace is simply a product you can use to buy and sell: if you do not like it, you do not have to use it. For the users who see the Silk Road not so much as a revolutionary project but as a capitalist marketplace, Dread Pirate Roberts is the owner, and the activities that take place on it are ‘illegal’ only from the perspective of the state legal order, which simply is the legal order all platform users are subject to. These users argue, in reply to the commission change of 9 January 2012:

I have zero issues with this policy change. As a business, Silk Road has the right to do what it pleases. If you don’t like it, then create/find an alternative. (keldog, 2012, January 9)

Another argument supporting this model is that when the site is out, people are quick to jump ship and move to other illegal marketplaces to buy or sell drugs. There are relatively few discussions about the revolutionary programme of the Silk Road, with most forum threads discussing technical issues. Dread Pirate Roberts himself also seems to adhere to this view when, on 22 June 2011, he starts a thread called ‘Keep your guard up’, in which users are warned:

DO NOT get comfortable! This is not wal-mart, or even amazon.com. It is the Wild West and there are as many crooks as there are honest businessmen and women. Keep your guard up and be safe, even paranoid. (Dread Pirate Roberts, 2011, June 22)

This implies that there is no order that will protect individual users; rather, it is the ‘Wild West’: a lawless, unorganised, dangerous place, a state of nature.

Because political discussion is generally limited on the forum, it seems that the Silk Road really is more of an individual legal order, or even only an illegal marketplace. However, it is not unlikely that political debate and preparation for the revolution happened in more private spaces rather than on the forum that was accessible to all users. As Dread Pirate Roberts writes in his (political) ‘State of the Road comment’:

I don't like writing this kind of stuff publicly because it taunts our enemies and might spur them into action. (Dread Pirate Roberts, 2012, January 11)

It is therefore possible that only a limited part of the community, perhaps in a more private/hidden forum, was making plans for a world revolution, for which the Silk-Road-as-legal-order, in the form of a capitalist marketplace, is a means to an end (to accumulate financial resources for the war to come). In this situation, it makes sense that for most of the Silk Road users the platform was simply an illegal capitalist marketplace, while for some core users it was a revolutionary movement.

4. ‘Technology Has Let the Genie Out of the Bottle’: Reflecting on the role of crypto communities

The Silk Road is a fascinating example of crypto community self-governance. It tells the story of how a handful of individuals from around the US (and the world) managed to set up a system - whether we call it a legal order (a revolutionary movement or a capitalist marketplace) or not - and, at least for a few years, successfully govern it. In spite of its contradictions and controversy, the Silk Road left most of the writers who investigated it in depth baffled by its accomplishments (Bartlett, 2015). Whether it was the perceived atmosphere of camaraderie between complete strangers in a hidden part of the internet, the high effectiveness of the reputational systems that mostly led to high-quality services, or the vision that its users contributed to the colonisation of cyberspace, the Silk Road started out as a do-it-yourself platform with a few users and expanded to a space actively used and visited by hundreds of thousands. The nature of its activities is certainly condemnable from the perspective of a state legal order. And yet, in spite of the severity of this condemnation and its affiliated risks, the Silk Road took the tools developed by earlier generations of crypto-libertarians and deployed them at an unprecedented scale. Through those tools, Ulbricht and his administrators brought a new expression of libertarianism to a community mostly free to engage in transactions otherwise considered unlawful by states. Unsurprisingly, freedom is a mission taken over by next-generation crypto communities as well (e.g., decentralised platforms). Ethereum, for one, aims to ‘build a more globally accessible, more free and more trustworthy Internet’.

The Silk Road as a crypto community case study is a striking example of how the internet has been challenging the sovereignty of nation states. While this study has not focused on comparing the Silk Road to a sovereign state, but rather on identifying whether, in a legal pluralist understanding, it has the constitutive elements of a legal order, some considerations relating to the notion of sovereignty in cyberspace can be briefly discussed. One of the leading constitutional theories of internet, and more specifically platform, governance developed over the past years is Pasquale’s ‘functional sovereignty’ (Pasquale, 2017). Looking at digital platforms such as Amazon or Facebook that exercise juridical power over their users, Pasquale clarifies that they are no longer simple market participants, as they exert ‘regulatory control over the terms on which others can sell goods and services. Moreover, they aspire to displace more government roles over time, replacing the logic of territorial sovereignty with functional sovereignty’. This phenomenon of crowding out the powers of the state (e.g., making and enforcing rules) can be explained by the influence exercised online by digital platforms, seen as private entities ordering a realm otherwise considered lawless.

Pasquale himself agrees that even the mainstream digital giants, staple brands known by consumers around the world, have roots in the early libertarian days of the internet. Reflecting on crypto communities through the lens of functional sovereignty, it can be argued that platforms such as the Silk Road fit this identity quite neatly: not only did the Silk Road preserve the libertarian goals from the dawn of cyberspace, but, because of its nature, users literally turned to it for rules in what they perceived to be a lawless realm, used the dispute resolution and remedies mechanisms of the platform, and paid fees for using it. Whether future iterations of crypto communities can also be labelled as functional sovereigns, and how mainstream platforms compare to illicit cyber spaces, is a matter that requires further inquiry.

Conclusion

In this paper, we looked at the constitutive elements of a legal order in order to analyse whether they were present in the first iteration of the Silk Road community, as expressed by its members on its forum. We found that all these elements seem to be present and that, according to a legal pluralist view, an internet platform such as the Silk Road may very well make up its own socio-legal order. We further discussed two different labels for the kind of legal order it may be: on the basis of the qualitative content analysis performed on the forum threads, we found that the Silk Road may be considered a revolutionary movement by some and a mere marketplace by others, and that these findings may have implications for future crypto communities.

As functional sovereigns, internet platforms exercise some of the functions of the state, albeit to a reduced degree. The main rationale behind the constant rejection of the independence of cyberspace has been that it cannot replace the physical world, and the physical world is governed by rules, some of which have been around for hundreds, if not thousands, of years. The physical world is largely governed by the state, which has a monopoly on the use of force, or on the threat of the use of force (Schrepel, 2019). If your neighbour’s tree blocks the access to your back door, you can ask a court to force your neighbour to take it down. But what if the tree is instead a digital tree in a digital world such as Decentraland?

Together, different generations of crypto communities shape a common narrative of using cryptography-based computer technologies to enhance personal freedom. Their role may not necessarily be to create anarchy in the sense of lawlessness. After all, ‘there are no spaces of perfect freedom from all constraints’, as can also be seen in the Silk Road example, where authority still leads through law (Benkler, 2006; De Filippi & Loveluck, 2016). Instead, the dream seems to be to adhere to a libertarian order with minimal intervention, which is still based on rules. ‘Technology has let the genie out of the bottle’, said the cypherpunks; back in 1994, when this sentence was originally written, it largely reflected the rich sci-fi imagination of a group of mostly computer programmers. Yet with each new iteration, crypto communities grow larger and stronger and seem to move closer to the shared vision of freedom. After law enforcement managed to take down the first rendition of the Silk Road in 2013, many other markets emerged in its stead, with far more controversial listings that had been banned on the original platform (Greenberg, 2015).

Silk Road stakeholders had diverse interests, so it is not surprising that not all of them rallied around the platform’s core philosophy. The same was true of the cypherpunks: the physical meetings held by Tim May arguably attracted the more orthodox believers of the movement, but its mailing list was free for anyone to join, without any obligation to show zealotry towards cryptoanarchy. This is equally the case when looking at the Silk Road forum: some members support the movement behind the platform, while others perceive it as a drugs marketplace. The cryptocurrency bubble and the huge investments in various decentralised apps seem to tell a similar story. Perhaps some of the developers and entrepreneurs involved in this space truly believe in its potential for changing societal paradigms because of the vision they stand for. However, it is equally reasonable to argue that not all members of this space are primarily motivated by this vision; they might be driven by financial gain instead.

What does this tell us about the future of crypto communities? Just like the Silk Road, the internet startup HavenCo sought to carve out a space beyond state control: it aimed to create a data haven in Sealand, a self-proclaimed independent micronation, and to store content that was illegal in other countries. It too spurred ‘a spirit of apocalyptic conflict between the Internet and national authority’ (Grimmelmann, 2012, pp. 407-408). The blockchain hype has given new wings to libertarian initiatives such as Free Society, which claims to be in the process of ‘purchasing sovereignty from a government to create the world’s first Free Society’. Other examples include Bitnation, Liberland, and the Floating Island Project (Chandler, 2016). Similarly, after only ten years, Bitcoin and the alternative coins that followed led to the development of a dynamic and highly volatile market, frantically oscillating between a total worth of US$ 831 billion and US$ 186 billion in 2018 alone (Schroeder, 2018; Marr, 2017). The fast pace of blockchain’s technological and commercial development has taken regulators by surprise. Blockchain in itself has received an overwhelming amount of attention in the past decade (Vilner, 2018), yet most of this attention has ignored Bitcoin’s cryptographic roots (Narayanan & Clark, 2017; May, 2018).

Whether these initiatives pose serious challenges to state sovereignty in the 21st century remains to be seen. Nevertheless, it would be unwise to dismiss them as merely criminal or entrepreneurial enterprises, as they could reveal insights into behaviours for which new regulatory incentives might be needed.

References

Arendt, H. (1970). On violence. Houghton Mifflin Harcourt Publishing Company.

Arora, S. & Barak, B. (2009). Computational Complexity: A Modern Approach. Cambridge University Press. https://doi.org/10.1017/CBO9780511804090

Aumasson, J. P. (2018). Serious Cryptography. No Starch Press.

Barlow, J. P. (1994). A Declaration of the Independence of Cyberspace. https://www.eff.org/cyberspace-independence.

Bartlett, J. (2015). The Dark Net. Melville House.

Bauer, C. P. (2013). Secret History: The Story of Cryptology. Taylor & Francis. https://doi.org/10.1201/b14076

Benčić, F. M., & Podnar Žarko, I. (2018). Distributed Ledger Technology: Blockchain Compared to Directed Acyclic Graph. ArXiv. https://arxiv.org/abs/1804.10013

Benkler, Y. (2006). The Wealth of Networks: How Social Production Transforms Markets and Freedoms. Yale University Press. https://cyber.harvard.edu/wealth_of_networks/index.php/Main_Page

Bergman, M. K. (2001). White Paper: The Deep Web: Surfacing Hidden Value. Taking License, 7(1). https://doi.org/10.3998/3336451.0007.104

Bilton, N. (2017). American Kingpin. Penguin.

Budd, R. W., Thorp, R. K., & Donohew, L. (1967). Content analysis of communications. Macmillan.

Chaum, D. (1981). Untraceable Electronic Mail, Return Addresses, and Digital Pseudonyms. Communications of the ACM, 24(2), 84. https://doi.org/10.1145/358549.358563

Chaum, D. (1985). Security without Identification: Transaction Systems to make Big Brother Obsolete. Communications of the ACM, 28(10), 10. https://doi.org/10.1145/4372.4373

Chen, A. (2011, June 1). The Underground Website Where You Can Buy Any Drug Imaginable. Gawker. https://gawker.com/the-underground-website-where-you-can-buy-any-drug-imag-30818160.

Choy, W.L. & Teng, P. (2017). When Smart Contracts are Outsmarted: The Parity Wallet ‘Freeze’ and Software Liability in the Internet of Value. National Law Review, 7(356). https://www.natlawreview.com/article/when-smart-contracts-are-outsmarted-parity-wallet-freeze-and-software-liability.

Christin, N. (2013). Traveling the Silk Road: A measurement analysis of a large anonymous online marketplace. Proceedings of the 22nd international conference on World Wide Web, 213–224. https://doi.org/10.1145/2488388.2488408

Ferguson, N., Schneier, B., & Kohno, T. (2011). Cryptography Engineering: Design Principles and Practical Applications. Wiley. https://doi.org/10.1002/9781118722367

Ferraro, P., King, C., & Shorten, R. (2018). Distributed Ledger Technology for Smart Cities, The Sharing Economy, and Social Compliance. ArXiv. https://arxiv.org/abs/1807.00649.

De Filippi, P. & Loveluck, B. (2016). The invisible politics of Bitcoin: governance crisis of a decentralised infrastructure. Internet Policy Review, 5(3). https://doi.org/10.14763/2016.3.427

De Filippi, P., & Wright, A. (2018). Blockchain and the Law. Harvard University Press.

Finck, M. (2018). Blockchains: Regulating the unknown. German Law Journal, 19(4), 665–692. https://doi.org/10.1017/S2071832200022847

Freier, A., Karlton, P., & Kocher, P. C. (1996). The SSL Protocol Version 3.0. https://pdfs.semanticscholar.org/bf8e/d85a0c87b2762a344c0f9c0432bc2442fdfe.pdf

Gardner, M. (1977). A New Kind of Cipher that would take millions of years to break. Scientific American. https://www.scientificamerican.com/article/mathematical-games-1977-08.

Greenberg, A. (2012). This Machine Kills Secrets: How Wikileakers, Cypherpunks, and Hacktivists Aim to Free the World’s Information. Dutton.

Greenberg, A. (2015, August 12). Crackdowns Haven’t Stopped the Dark Web’s $100M Yearly Drug Sales. Wired. https://www.wired.com/2015/08/crackdowns-havent-stopped-dark-webs-100m-yearly-drug-sales/

Ghappour, A. (2017). Searching Places Unknown: Law Enforcement Jurisdiction on the Dark Web. Stanford Law Review, 69(4), 1075–1136. https://www.stanfordlawreview.org/print/article/searching-places-unknown/

Goanta, C. (2020). The Private Governance of Identity on the Silk Road. Frontiers in Blockchain. https://doi.org/10.3389/fbloc.2020.00004

Grimmelmann, J. (2012). Sealand, Havenco, and the Rule of Law. University of Illinois Law Review, 2012(2), 405–484. https://illinoislawreview.org/print/volume-2012-issue-2/sealand-havenco-and-the-rule-of-law/

Gruber, S. M. (2013). Trust, Identity and Disclosure: Are Bitcoin Exchanges the Next Virtual Havens for Money Laundering and Tax Evasion. Quinnipiac Law Review, 32(1), 135–208.

Hart, H. L. A. (2012). The Concept of Law. Oxford University Press.

Hardy, I. T. (1994). The Proper Legal Regime for Cyberspace. University of Pittsburgh Law Review, 55(4), 993–1056.

Haugaard, M. (2008). Power and Legitimacy. In M. Mazzotti (Ed.), Knowledge as Social Order: Rethinking the Sociology of Barry Barnes. Ashgate.

Hobbes, T. (1996). Leviathan. Cambridge University Press. https://doi.org/10.1017/CBO9780511808166

Hojlund, S. (2015). Evaluation in the European Commission: For Accountability or Learning? European Journal of Risk Regulation, 6(1), 35–46. https://www.jstor.org/stable/24323715

Hopman, M. (2017). Lipstick law, or: the three forms of statutory law. Journal of Legal Pluralism and Unofficial Law, 49(1), 54–66. https://doi.org/10.1080/07329113.2017.1308787

Hopman, M. (2019a). A new model for the legal pluralist study of children’s rights, illustrated by a case study on the child’s right to education in the Central African Republic. Journal of Legal Pluralism and Unofficial Law, 51(1), 72–97.

Hopman, M. (2019b). Looking at law through children’s eyes. https://www.globalacademicpress.com/ebooks/marieke_hopman/mobile/index.html.

Hughes, S. J., & Middlebrook, S.T. (2015). Developments in the Law Affecting Electronic Payments and Financial Services. The Business Lawyer, 71(1), 361–372. https://www.jstor.org/stable/26417561

Kelsen, H. (2007). General Theory of Law and State. Transaction Publishers.

Kelsen, H. (2009). Pure Theory of Law. The Lawbook Exchange.

Kessler, G. C. (2019) An Overview of Cryptography. https://www.garykessler.net/library/crypto.html

Lee, L. (2016). New Kids on the Blockchain: How Bitcoin's Technology Could Reinvent the Stock Market. Hastings Business Law Journal, 12(2), 81–132. https://repository.uchastings.edu/hastings_business_law_journal/vol12/iss2/1

Lessig, L. (1996). The Zones of Cyberspace. Stanford Law Review, 48(5), 1403–1412.

Levy, S. (2002). Crypto: How the Code Rebels Beat the Government, Saving Privacy in the Digital Age. Penguin Putnam.

Lopez, M. (2018, June 11). WhatsApp is upending the role of unions in Brazil. Next, it may transform politics. Washington Post. https://wapo.st/2FHinrL

Mak, V. (2018). Pluralism in European Private Law. Cambridge Yearbook of European Legal Studies, 20, 202–232. https://doi.org/10.1017/cel.2018.5

May, T. C. (1994). Crypto Anarchy and Virtual Communities. http://groups.csail.mit.edu/mac/classes/6.805/articles/crypto/cypherpunks/may-virtual-comm.html

May, T. (2018, October 19). Enough with the ICO-Me-So-Horny-Get-Rich-Quick-Lambo Crypto. CoinDesk. https://www.coindesk.com/enough-with-the-ico-me-so-horny-get-rich-quick-lambo-crypto

Menthe, D. C. (1998). Jurisdiction in Cyberspace: A Theory of International Spaces. Michigan Telecommunications and Technology Law Review, 4(1), 69–104. https://repository.law.umich.edu/mttlr/vol4/iss1/3/

Narayanan, A. (2013). What happened to the crypto dream? Part 1. IEEE Security & Privacy, 11(2), 75–76. https://doi.org/10.1109/MSP.2013.45

Narayanan, A., Bonneau, J., Felten, E., Miller, A., & Goldfeder, S. (2016). Bitcoin and Cryptocurrency Technologies. Princeton University Press.

Narayanan, A., & Clark, J. (2017). Bitcoin’s Academic Pedigree. Communications of the ACM, 60(12), 36–45. https://doi.org/10.1145/3132259

O'Neill, P. H. (2015, January 13). Ross Ulbricht admits he created Silk Road but says he’s not Dread Pirate Roberts. Daily Dot. https://www.dailydot.com/unclick/ross-ulbricht-trial-silk-road-inventor-not-dpr/

Pasquale, F. (2017) From Territorial to Functional Sovereignty: The Case of Amazon. Law and Political Order Blog. https://lpeblog.org/2017/12/06/from-territorial-to-functional-sovereignty-the-case-of-amazon/

Perritt, H. H., Jr. (1997). Cyberspace and State Sovereignty. Journal of International Legal Studies, 3(2), 155–204.

Price, R. (2014, November 7). We spoke to the shady opportunist behind Silk Road 3.0. Daily Dot. https://www.dailydot.com/layer8/silk-road-3-blake-benthall/

Romeo, A. D. (2016). Hidden Threat: The Dark Web Surrounding Cyber Security. Northern Kentucky Law Review, 43(1), 73–86. https://chaselaw.nku.edu/content/dam/chase/docs/lawreview/v43/nklr_v43n1.pdf

Schneier, B. (1994). Applied Cryptography. Wiley.

Schrepel, T. (2020). Anarchy, State, and Blockchain Utopia: Rule of Law versus Lex Cryptographia. In U. Bernitz, X. Groussot, J. Paju, & S. de Vries (Eds.), General Principles of EU Law and the EU Digital Order. Wolters Kluwer.

Seligman, J. S. (2015). Cyber Currency: Legal and Social Requirements for Successful Issuance Bitcoin in Perspective. Ohio State Entrepreneurial Business Law Journal, 9(2), 263–276. http://hdl.handle.net/1811/78481

Silk Road | Users. (n.d.). https://antilop.cc/sr/users/

Simonite, T. (2018, March 3) The Decentralized Internet Is Here, with Some Glitches. Wired. https://www.wired.com/story/the-decentralized-internet-is-here-with-some-glitches/

Sunstein, C. R. (1995). Problems with Rules. California Law Review, 83(4), 953–1026. https://doi.org/10.2307/3480896

Suzor, N. P. (2010). The role of the rule of law in virtual communities. Berkeley Technology Law Journal,25(4), 1818–1886. https://www.jstor.org/stable/24118612

Tamanaha, B. (2008). Understanding Legal Pluralism: Past to Present, Local to Global. Sydney Law Review, 30, 375–411. http://www.austlii.edu.au/cgi-bin/viewdoc/au/journals/SydLawRw/2008/20.html?context=1;query=understanding%20legal%20pluralism;mask_path=au/journals/SydLawRw

Turpin, J. B. (2014). Bitcoin: The Economic Case for a Global, Virtual Currency Operating in an Unexplored Legal Framework. Indiana Journal of Global Legal Studies, 21(1), 335–368. https://doi.org/10.2979/indjglolegstu.21.1.335

Vilner, Y. (2018, November 14). No More Hype: Time To Separate Crypto From Blockchain Technology. Forbes. https://www.forbes.com/sites/yoavvilner/2018/11/14/no-more-hype-time-to-separate-crypto-from-blockchain-technology/#17102008171c

Walch, A. (2017). The Path of the Blockchain Lexicon (and the Law). Review of Banking and Finance Law, 36(2), 713–766. http://www.bu.edu/rbfl/files/2017/09/p729.pdf

Wood, G. (2018, September 12). Why We Need Web 3.0. Medium. https://medium.com/@gavofyork/why-we-need-web-3-0-5da4f2bf95ab

Yeung, K. (2019). Regulation by Blockchain: The Emerging Battle for Supremacy between the Code of Law and Code as Law. Modern Law Review, 82(2), 207–239. https://doi.org/10.1111/1468-2230.12399

Footnotes

1. Thread 13 (see methodology section 3.2).

2. The concept of ‘legal order’ will be used throughout this article. This will be considered semantically equal to the notion of a ‘legal system’ as referred to by some cited authors.

3. Silk Road Tales and Archives, https://antilop.cc/SR/.

4. United States of America v. Ross William Ulbricht, United States Court of Appeals for the Second Circuit, 31 May 2017.

5. Ross’s diary indicates the system administrator was a user called ‘SYG’; Government exhibit 240b, United States of America v. Ross William Ulbricht.

6. See https://antilop.cc/sr/img/2012_05_11_moderators.png

7. At the moment of writing, this resource had been last updated on 2 January 2019.

8. Threads 14, 45, 99, 100, 116, 255, 310, 315.

9. Threads 45, 116, 315.

10. Threads 36, 99, 100.

11. Threads 41, 100, 116, 122, 126, 193, 230, 299, 321. Other references to Dread Pirate Roberts as authority are ‘the lord’ (thread 14), ‘the leader’ (thread 137), ‘the president’ (thread 161), ‘God’ (thread 145), ‘bossman’ (thread 233).

12. Thread 99.

13. Threads 15, 78, 136.

14. While the selling of weapons was initially allowed on Silk Road, and a subject of debate among the community, on 26 February 2012, Dread Pirate Roberts announced that he had created a new marketplace for the sale of weapons called ‘The Armory’. From that moment on, selling weapons on Silk Road was forbidden (Thread 122).

15. Threads 54, 78, 136, 319.

16. Threads 14, 45.

17. Thread 270.

18. Threads 99, 100, 101, 184.

19. Threads 99, 100.

20. Threads 186, 242.

21. Thread 233.

22. Threads 270, 284.

23. Thread 319.

24. Thread 100.

Harnessing the collective potential of GDPR access rights: towards an ecology of transparency


The GDPR’s goal of empowering citizens can only be fully realised when the collective dimensions of data subject rights are acknowledged and supported through proper enforcement. The power of the collective use of data subjects’ rights, however, is currently neither acknowledged nor properly enforced. This is the message we sent to the European Commission in response to its call for feedback for its two-year review of the GDPR. In our submission entitled Recognising and Enabling the Collective Dimension of the GDPR and the Right of Access – A call to support the governance structure of checks and balances for informational power asymmetries, we demonstrate the collective potential of GDPR access rights with a long list of real-life examples.

According to the European Commission's recently published evaluation, the GDPR is doing well in attaining this goal of empowering citizens. We do not agree with this conclusion. While we share some aspects of the positive evaluation, our research shows that the empowerment provided by the GDPR is severely limited. We cannot ignore the fact that most data protection experts, including regulators, academics, practitioners and NGOs, indicate that, in a fair assessment of the GDPR's success, the glass is at best half full. Moreover, as shown by the surveys conducted for the Commission, the majority of European citizens already felt that they did not have control over the personal data they provide online before the introduction of the GDPR, and the proportion of citizens feeling that way has only grown since its introduction [see Eurobarometer 497a, Eurobarometer 431]. The whirlwind of cookie banners, “informed” consent forms and privacy policies which the GDPR triggered has probably contributed to a sense of disempowerment, as these are very demanding on the individual and may have just made people more aware of their existing lack of control. We believe the discrepancy between the Commission’s largely positive evaluation and the practical experience of many citizens can be explained by the Commission ignoring the two key elements we highlighted in our submission.

Lack of collective dimension and problems of enforcement

First, the Commission fails to acknowledge the collective dimension at the core of the governance system put in place by the GDPR. In the face of an ever-increasing digitalisation of our society, and the growing informational power asymmetries that accompany this shift, the potential for empowerment through individual rights is limited. This fact is recognised in the “architecture of empowerment” provided by the GDPR, which places individual citizens and their rights in a broader infrastructure, also empowering societal organisations as well as data protection authorities (DPAs). In order for data subject rights, and the right of access in particular, to live up to their potential for empowerment and social justice in a datafied society, we need to recognise and stimulate an ‘ecology of transparency’.

Second, the Commission underestimates the existing problems in compliance and enforcement. While enforcement has increased over the last few years, most DPAs are structurally under-resourced and many blatant infringements of the GDPR remain unaddressed. Without proper enforcement there can be no citizens’ empowerment. The “architecture of empowerment” will inevitably lack the backbone that is needed to enable the “ecology of transparency” to express its full potential.

The ecology of transparency

In order to substantiate our call for more attention and effort to be put into enabling an ecology of transparency, we provided a broad overview of real-life cases where the right of access has been used collectively. The annex to our submission describes around 30 cases in which an engaged civil society (including NGOs, journalists and individuals) used the right of access to achieve collective goals. As such, we wish to highlight this collective dimension of access rights under the GDPR, emphasising their potential for social justice and the actions that need to be taken to render them effective. The European Commission, along with the European Data Protection Board (EDPB) and national data protection authorities, has a duty to create an enabling environment for collective access rights.

The ecology of transparency we envisage is constituted by the intra-institutional network of actors, laws, norms and practices in which the right of access is exercised. It is shaped by the interplay between the law, the regulators and the actual practices of civil society. Taking this broader view on the ecosystem of institutions and practices allows us to better identify the social conditions that need to be in place for the right of access to achieve its goal of enabling citizens to assess and contest systems that rely on the processing of personal data.

This ability to scrutinise and challenge digital infrastructures or ecosystems has become even more urgent in the wake of the massive migration of work, education and social life to online services and platforms. GDPR transparency measures – and the right of access in particular – offer a vital legal tool for investigatory research into these digital infrastructures, identifying what data is collected, how and why it is processed, with whom it is shared, and how it is (supposed to) affect people.

Apart from the “usual suspects” such as Privacy International, NOYB and Bits of Freedom, we also observed other NGOs, collectives, journalists and motivated individuals capitalising on access rights in order to achieve goals reaching beyond mere curiosity and/or self-interest. These range from climate activists fighting corporate surveillance, to students uncovering discriminatory admission criteria, content creators challenging YouTube’s demonetisation and content recommendation practices, gig-economy workers pushing for better working conditions and a whole range of investigative journalism projects. These examples illustrate a crucial point: data subject rights are not only necessary tools to safeguard ‘privacy’ or ‘data protection’ rights, but are vital to the defence of all fundamental rights.

No empowerment without enforcement

While the real-life cases listed in our submission clearly demonstrate the importance and collective potential of access rights, we observe major failings that obstruct this ecology of transparency, and thereby thwart the emancipatory potential of the GDPR. Meaningful compliance with the right of access is still very low. Many data controllers only grant access to some of the information they are legally required to give, and/or raise many legal and technical obstacles along the way. This has resulted in numerous complaints filed with data protection authorities across the EU. For example, in the last year, almost 40% of the complaints received by the UK Information Commissioner's Office (ICO) concerned access requests, and almost 30% of the complaints received by the Dutch DPA were about data subject rights, with a substantial part relating to the right of access.

Considering this high number of complaints, as well as the seriousness of the alleged infringements, it is harrowing to see how weak enforcement has been in the last two years. The Commission holds the view that “DPAs have made balanced use of their strengthened corrective powers” (p. 5). We forcefully disagree with this position. In our submission to the Commission, we raised four enforcement issues in particular: (a) lack of consistent enforcement across EU member states; (b) apparent low priority of data rights cases; (c) very slow enforcement; and (d) over-tolerant enforcement. DPAs should take data subject (access) rights much more seriously, as they are a crucial tool within the GDPR’s architecture of empowerment. We observe that even where NGOs or academics have filed well-argued and documented complaints about often blatant cases of non-compliance, DPAs have only taken action occasionally, and if so, very mildly. This stands in sharp contrast to their pivotal role in the ecology of transparency, in which DPAs are explicitly tasked (and given extensive powers) to monitor and enforce the application of the GDPR. A significant increase in resources and know-how is crucial to resolving these issues. In light of this, we welcome the Commission’s acknowledgment that many DPAs lack the required funding. Crucially, both the right of access and DPAs’ duty to verify compliance are explicitly mentioned in the Charter of Fundamental Rights of the EU. As it stands now, there is a strong argument to be made that most member states fail to comply with Article 8(2)-(3) of the Charter.

The effectiveness of the ‘ecology of transparency’ depends on the effectiveness of its individual components, i.e. the network of actors, laws, norms and practices in which the right of access is being exercised – and their ability to mutually reinforce each other. Active citizens, digital rights organisations, the media, academia, but also regulators, data protection authorities and data protection officers interact with each other and function together as a network of checks and balances.

Supporting a thriving European culture of data protection

The severe information and power asymmetries in modern society cannot be addressed effectively by data subjects acting alone. It is in recognition of this reality that the GDPR provides a broader architecture of empowerment. Yet the importance of the collective dimension underlying the GDPR has still not been properly recognised. This is problematic, because collective processes are vital when contesting situations where the current status quo is essentially at odds with fundamental rights, especially in contexts characterised by strong information and power asymmetries. In our submission, we list numerous real-life cases that exemplify this, such as Max Schrems contesting data transfers to the US because its surveillance laws contradict European fundamental rights, or civil society scrutinising the ad personalisation sector. Without immediate action reinforcing the collective dimension of the GDPR, we risk further solidifying the current status quo, in which individuals - and society more broadly - are at the mercy of those operating data infrastructures.

For these reasons, the competent institutions – i.e. the European Commission, EDPB, DPAs and European Data Protection Supervisor (EDPS) – should properly consider, value and strengthen the ecology of transparency when interpreting and applying the GDPR. Fully recognising the ecology of transparency is vital in enabling the GDPR to realise its function as a baseline framework for a fair data-driven society. The time is now to invest in collective empowerment so as to nurture a thriving European culture of data protection.


What if Facebook goes down? Ethical and legal considerations for the demise of big tech


Introduction

Facebook1 has, in large parts of the world, become the de facto online platform for communication and social interaction. In 2017, the main platform reached the milestone of two billion monthly active users (Facebook, 2017), and global user growth since then has continued, reaching 2.6 billion in April 2020 (Facebook, 2020). Moreover, in many countries Facebook has become an essential infrastructure for maintaining social relations (Fife et al., 2013), commerce (Aguilar, 2015) and political organisation (Howard and Hussain, 2013). However, recent changes in Facebook’s regulatory and user landscape stand to challenge its pre-eminent position, making its future demise, if not plausible, then at least less implausible over the long term.

Indeed, the closure of an online social network would not in itself be unprecedented. Over the last two decades, we have seen a number of social networks come and go — including Friendster, Yik Yak and, more recently, Google+ and Yahoo Groups. Others, such as MySpace, continue to languish in a state of decline. Although Facebook is arguably more resilient to the kind of user flight that brought down Friendster (Garcia et al., 2013; Seki and Nakamura, 2016; York and Turcotte, 2015) and MySpace (boyd, 2013), it is not immune to it. These precedents are important for understanding Facebook’s possible decline. Critically, they demonstrate that the closure of Facebook’s main platform does not depend on the exit of all users; Friendster, Google+ and others continued to have users when they were sold or shut down.

Furthermore, as we examine below, any user flight that precedes Facebook’s closure would probably be geographically asymmetrical, meaning that the platform remains a critical infrastructure in some (less profitable) regions, whilst becoming less critical in others. For example, whilst Friendster started to lose users rapidly in North America, its user numbers were simultaneously growing, exponentially, in South East Asia. It was eventually sold to a Filipino internet company and remained active as a popular social networking and gaming platform until 2015.2 The closure of Yahoo! GeoCities, the web hosting service, was similarly asymmetrical: although most sites were closed in 2009, the Japanese site (which was managed by a separate subsidiary) remained open until 2019.3 It is also important to note that, in several of these cases, a key reason for user flight was the greater popularity of another social network platform: namely, MySpace (Piskorski and Knoop, 2006) and Facebook (Torkjazi et al., 2009). Young, white demographics, in particular, fled MySpace to join Facebook (boyd, 2013).

These precedents suggest that changing user demographics and preferences, and competition from other social networks such as Snapchat or a new platform (discussed further below), could be key drivers of Facebook’s decline. However, given Facebook’s pre-eminence as the world’s largest social networking platform, the ethical, legal and social repercussions of its closure would be far graver than in these precedents. The demise of a global online communication platform such as Facebook could have catastrophic social and economic consequences for the innumerable communities that rely on the platform on a daily basis (Kovach, 2018), as well as for the users whose personal data Facebook collects and stores. 

Despite the high stakes involved in Facebook’s demise, there is little research or public discourse addressing the legal and ethical consequences of such a scenario. The aim of this article is therefore to foster dialogue on the subject. Pursuing this goal, the article provides an overview of the main ethical and legal concerns that would arise from Facebook’s demise and sets out an agenda for future research in this area. First, we identify the headwinds buffeting Facebook, and outline the most plausible scenarios in which the company — specifically, its main platform — might close down. Second, we identify four key ethical stakeholders in Facebook’s demise based on the types of harm to which they are susceptible. We further examine how various scenarios might lead to these harms, and whether existing legal frameworks are adequate to mitigate them. Finally, we provide a set of recommendations for future research and policy intervention.

It should be noted that the legal and ethical considerations discussed in this article are by no means limited to the demise of Facebook, social media, or even “Big Tech”. In particular, to the extent that most sectors in today’s economy are already, or will soon become, data-driven and data-rich, these considerations, many of which relate to the handling of Facebook’s user data, are ultimately relevant to the failure or closure of any company handling large volumes of personal data. Likewise, as human interaction becomes increasingly mediated by social networks and Big Tech platforms, the legal and ethical considerations that we address are also relevant to the potential demise of other platforms, such as Google or Twitter. However, focusing on the demise of Facebook — one of the most data-rich social networks in today’s economy — offers a fertile case study for the analysis of these critical legal and ethical questions.

Why and how could Facebook close down?

This article necessarily adopts a long-term perspective, responding to issues that could significantly harm society in the long run if we do not begin to address them today. As outlined in the introduction, Facebook is currently in robust health: aggregate user growth on the main platform is increasing, and it continues to be highly profitable, with annual revenue and income increasing year-over-year (Facebook, 2017; 2018). As such, it is unlikely that Facebook would shut down anytime soon. However, as anticipated, the rapidly changing socio-economic and regulatory landscape in which Facebook operates could lead to a reversal in its priorities and fortunes over the long term.

Facebook faces two major headwinds. First, the platform is coming under increasing pressure from regulators across the world (Gorwa, 2019). In particular, tighter data privacy regulation in various jurisdictions (notably, the EU General Data Protection Regulation [GDPR]4 and the California Consumer Privacy Act [CCPA])5 could severely inhibit the company’s ability to collect and analyse user data. This in turn could significantly reduce the value of the Facebook platform to advertisers, who are drawn to its granular, data-driven insights about user behaviour and thus higher ad-to-sales conversion rates through targeted advertising. This would undermine Facebook’s existing business model, whereby advertising generates over 98.5% of Facebook’s revenue (Facebook, 2018), the vast majority of which is generated on its main platform. More boldly, regulators in several countries are attempting to break up the company on antitrust grounds (Facebook, 2020, p. 64), which could lead, inter alia, to the reversal of its acquisitions of Instagram and WhatsApp — key assets, the loss of which could adversely affect Facebook’s future growth prospects.

Second, the longevity of the main Facebook platform is under threat from shifting social and social media trends. Regarding the latter, social media usage is gradually moving away from public, web-based platforms in favour of mobile-based messaging apps, particularly within younger demographics. Indeed, in more saturated markets, such as the US and Canada, Facebook’s penetration rate has declined (Facebook, 2020, pp. 31-33), particularly amongst teenagers, who tend to favour mobile-only apps such as Snapchat, Instagram and TikTok (Piper Sandler, 2020). Although Facebook and Instagram still have the largest share of the market in terms of time spent on social media, this has declined since 2015 in favour of Snapchat (Furman, 2019, p. 26). They also face growing competition from international players such as WeChat, with over 1 billion users (Tencent, 2019), as well as social media apps with strong political leanings, such as Parler, which are growing in popularity.6

A sustained movement of active users away from the main Facebook platform would inevitably impact the preferences of advertisers, who rely on active users to generate engagement for their clients. More broadly, Facebook’s business model is under threat from a growing social and political movement against the company’s perceived failure to remove misinformation and hateful content from its platform. The advertiser boycott in the wake of the Black Lives Matter protests highlights the commercial risks to Facebook of failing to respond adequately to the social justice concerns of its users and customers.7 As seen both in the case of Facebook and in precedents such as Friendster, reverse network effects mean that any such exodus of users and/or advertisers can occur suddenly and escalate rapidly (Garcia et al., 2013; Seki and Nakamura, 2016; Cannarella and Spechler, 2014).

Collectively, these socio-technical and regulatory developments may force Facebook to shift its strategic priorities away from being a public networking platform (and monetising user data through advertising on the platform), to a company focused on private, ephemeral messaging, monetised through commerce and payment transactions. Indeed, recent statements from Facebook point in this direction:

I believe the future of communication will increasingly shift to private, encrypted services where people can be confident what they say to each other stays secure and their messages and content won't stick around forever. This is the future I hope we will help bring about.

We plan to build this the way we've developed WhatsApp: focus on the most fundamental and private use case -- messaging -- make it as secure as possible, and then build more ways for people to interact on top of that. (Zuckerberg, 2019)

Of course, it does not automatically follow that Facebook would shut down its main platform, particularly if it still has sufficient active users remaining on it, and it bears little cost from keeping it open. On the other hand, closure becomes more likely once a sufficient number of active users and advertisers (but, importantly, not necessarily all) have also left the platform, especially in its most profitable regions. In this latter scenario, it is conceivable that Facebook would consider shutting down the main platform’s developer API (Application Programming Interface — the interface between Facebook and client software) instead of leaving it open and vulnerable to a security breach. Indeed, it was in similar circumstances that Google recently closed the consumer version of its social network Google+ (Thacker, 2018). 

In a more extreme scenario, Facebook Inc. could fail altogether and enter into a legal process such as corporate bankruptcy (insolvency): either a reorganisation that seeks to rescue the company as a going concern, typically by restructuring and selling off some of its assets; or liquidation, in which the company is wound down and dissolved entirely. Such a scenario, however, should be regarded as highly unlikely for the foreseeable future. Although we highlight some of the legal and ethical considerations arising from a Facebook insolvency scenario, the non-insolvent discontinuation or closure of the main platform shall be our main focus henceforth. It should be noted that, as a technical matter, this closure could take various forms. For example, Facebook could close the platform but preserve users’ profiles; alternatively, it could close the platform and destroy, or sell, part or all of its user data, and so on. Whilst our focus is on the ethical and legal consequences of Facebook’s closure at the aggregate level, we address technical variations in the specific form that this closure could take to the extent that they impact upon our analysis. 

Key ethical stakeholders and potential harms

In this section, we identify four key ethical stakeholders who could be harmed8 by Facebook’s closure. These stakeholders are: dependent communities, in particular the socio-economic and media ecosystems that depend on Facebook to flourish; existing users, (active and passive) individuals, as well as groups, whose data are collected, analysed and monetised by Facebook, and stored on the company’s servers; non-users, particularly deceased users whose data continue to be stored and used by Facebook, and who will represent hundreds of millions of Facebook profiles in only a few decades; and future generations, who may have a scientific interest in the Facebook archive as a historical resource and cultural heritage.

We refer to these categories as ethical stakeholders, rather than user types, because our categorisation is based on the unique types of harm that each would face in a Facebook closure, not their way of using the platform. That is, the categorisation is a tool to conduct our ethical analysis, rather than corresponding to some already existing groups of users. A single individual may for instance have mutually conflicting interests in her capacity as an existing Facebook user, a member of a dependent community, and as a future non-user. Thus, treating her as a single unit, or part of a particular user group, would reduce the ethical complexity of the analysis. As such, the interests of the stakeholders are by no means entirely compatible with one another, and there will unquestionably be conflicts of interest between them.

Furthermore, for the purposes of the present discussion, we do not intend to rank the relative value of the various interests; there is no internal priority to our analysis, although this may become an important question for future research. We also stress that our list is by no means exhaustive. Our focus is on the most significant ethical stakeholders who have an interest in Facebook’s closure and would experience unique harms due to the closure of a company that is both a global repository of personal data, and the world’s main communication and social networking infrastructure. As such, we exclude traditional, economic stakeholders from the analysis — such as employees, directors, shareholders and creditors. While these groups certainly have stakes in Facebook’s potential closure, there is nothing that significantly distinguishes their interests in the closure of a company like Facebook from the closure of any other (multinational) corporation. This also means that we exclude stakeholders that could benefit from Facebook’s closure, such as commercial competitors, or governments struggling with Facebook’s influence on elections and other democratic processes. Likewise, we refrain from assessing the relative overall (un)desirability of Facebook’s closure.

Dependent communities

The first key ethical stakeholders are the ‘dependent communities’, that is, communities and industries that have developed around the Facebook platform and now (semi-)depend on its existence to flourish.9

Over the last decade, Facebook has become a critical economic engine and a key gateway to the internet as such (Digital Competition Expert Panel, 2019). The growing industry of digitally native content providers, from major news outlets such as Huffington Post and Buzzfeed, to small independent agencies, is sometimes entirely dependent on exposure through Facebook. For example, the most recent change in Facebook’s News Feed algorithm had devastating consequences for this part of the media industry — some news outlets allegedly lost over 50% of their traffic overnight (Nicholls et al., 2018, p. 15). If such a small change in its algorithms could lead to the economic disruption of an entire industry, the wholesale closure of the main Facebook platform would likely cause significant economic and societal damage on a global scale, particularly where it occurs rapidly and/or unexpectedly, such that news outlets and other dependent communities do not have sufficient time to migrate to other web platforms.

To be clear, our main concern here is not with the individual media outlets, but with communities that are dependent on a functioning Facebook-based media ecosystem. While the sudden closure of one, or even several, media outlets may not pose a threat to this ecosystem, a sudden breakdown of the entire ecosystem would have severe consequences. For instance, many of the content providers reliant on exposure through Facebook are located in developing countries, in which Facebook has become almost synonymous with the internet, acting as the primary source of news (Mirani, 2015), amongst other functions. Given the primacy of the internet to public discourse in today’s world, it goes without saying that, for these communities, Facebook effectively is the digital public sphere, and hence a central part of the public sphere overall. A notable example is Laos, a country digitised so recently that its language (Lao) has not yet been properly indexed by Google (Kittikhoun, 2019). This lacuna is filled by Facebook, which has established itself not only as the main messaging service and social network in Laos, but effectively also as the web as such. 

The launch of Facebook’s Free Basics platform, which provides free access to Facebook services in less developed countries, has further increased the number of communities that depend solely on Facebook. According to the Free Basics website,10 100 million people who would not otherwise have been connected are now using the services offered by the platform. As such, there are many areas and communities that now depend on Facebook in order to function and are thus susceptible to considerable harm were the platform to shut down. Note that this harm is not reducible to the individuals using Free Basics, but is a concern for the entire community, including members not using Facebook. As an illustrative example, consider the vital role played by Facebook and other social media platforms in disseminating information about the COVID-19 pandemic and keeping many communities connected during it. In a time of crisis, communities with a large dependency on a single platform become particularly vulnerable.

Of course, whether the closure of Facebook’s main platform harms these communities depends on the reasons for closure and the manner in which it closes down (sudden death vs slow decline). If closure is accompanied by the voluntary exodus of these communities, for example to a different part of the Facebook Inc. group (e.g., Messenger or Instagram), or to a third-party social network, they would arguably incur limited social or economic costs. Furthermore, it is entirely possible to imagine a scenario in which the main Facebook platform is shut down because it is unprofitable to the company as a whole, or does not align with the company’s strategic priorities, yet remains systemically important for a number of dependent communities. These communities could still use and depend on the platform, yet may simply not be valuable or lucrative enough for Facebook Inc. to justify keeping the platform open. Indeed, many of the dependent communities that we have described are located in regions of the world that are the least profitable for the company (certainly under an advertising-driven revenue model).

This raises the question of how these dependent communities should be protected in the event of Facebook’s demise. Indeed, existing legal frameworks governing Facebook do not make special provision for its systemically important functions. As such, we propose that a new concept of ‘systemically important technological institutions’ (‘SITIs’) — drawing on the concept of ‘systemically important financial institutions’ (‘SIFIs’) — be given more serious consideration in managing the life and death of global communications platforms, such as Facebook, that provide a critical societal infrastructure. This proposal is examined further in the second part of this article.

Existing users

‘Existing users’ refers broadly to any living person or group of people who uses or has used the main Facebook platform, and continues to maintain a Facebook profile or page. That is, both daily and monthly active users, as well as users who are not actively using the platform but still have a profile where their information is stored (including ‘de-activated’ profiles). Invariably, there is an overlap between this set of stakeholders and ‘dependent communities’: the latter includes the former. Our main focus here is on ethical harms that arise at the level of the individual user, by virtue of their individual profiles or group pages, rather than the systemic and societal harms outlined above. 

It is tempting to think that the harm to these users in the event of Facebook’s closure is limited to the loss of the value that they place on having access to Facebook’s services. However, this would be an incomplete conclusion. Everything a user does on the network is recorded and becomes part of Facebook’s data archive, which is where the true potential for harm lies. That is, the danger stems not only from losing access to the Facebook platform and the various services it offers, but from future harms that users (active and passive) are exposed to as they lose control over their personal data. Any violation of the trust that these users place in Facebook with respect to the use of their personal data threatens to compromise user privacy, dignity and self-identity (Floridi, 2011). Naturally, these threats also exist today. However, as long as the platform remains operational, users have a clear idea of who they can hold accountable for the processing of their data. Should the platform be forced to close, or worse still, sell off user data to a third party, this accountability will likely vanish.

The scope for harm to existing users upon Facebook’s closure depends on how Facebook continues to process user data. If the data are deleted (as occurred, for example, in the closure of Yahoo! Groups),11 users could lose access to information — particularly, photos and conversations — that are part of their identity, personal history and memory. Although Facebook does allow users to download much of their intentionally provided data to a hard drive — in the EU, implementing the right to data portability12 — this does not encompass users’ conversations and other forms of interactive data. For example, Facebook photos in which a user has been tagged, but which were uploaded by another user, are not portable, even though these photos arguably contain the first user’s personal data. Downloading data is also an impractical option for the hundreds of millions of users accessing the platform only via mobile devices (DataReportal, 2019) that lack adequate storage and processing capacity. Personal archiving is an increasingly constitutive part of a person’s sense of self, but, as noted by Acker and Brubaker (2014), there is a tension between how users conceive of their online personal archives, and the corporate, institutional reality of these archives.
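To make the practical limits of data portability concrete, the short sketch below inventories a downloaded archive. It is a minimal illustration only: the directory name, file layout and JSON structure are hypothetical placeholders rather than Facebook’s actual export format, and the point is simply that a user can only inventory what the controller chooses to include in the export (tagged photos uploaded by others, for instance, would simply be absent).

```python
import json
from pathlib import Path


def inventory_export(export_dir: str) -> dict:
    """Summarise a (hypothetical) downloaded data export.

    Walks a directory of JSON files and counts the items in each file,
    illustrating that the archive contains only what the platform chose
    to include in the export.
    """
    summary = {}
    for path in Path(export_dir).rglob("*.json"):
        try:
            with open(path, encoding="utf-8") as f:
                data = json.load(f)
        except (ValueError, OSError):
            continue  # skip unreadable or malformed files
        if isinstance(data, (list, dict)):
            count = len(data)
        else:
            count = 1
        summary[str(path.relative_to(export_dir))] = count
    return summary


if __name__ == "__main__":
    # 'my_export/' is a placeholder path for an unpacked archive.
    for name, count in sorted(inventory_export("my_export").items()):
        print(f"{name}: {count} items")
```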

On the other hand, it is highly plausible that Facebook would instead want to retain these data to train its machine learning models and to provide insights on users of other Facebook products, such as Instagram and Messenger. In this scenario, the risk to existing users is that they lose control over how their information is used, or at least fail to understand how and where it is being processed (especially where these users are not active on other Facebook products, such as Instagram). Naturally, involuntary user profiling is a major concern with Facebook as it stands. The difference in the case of closure is that many users will likely not even be aware of the possibility of being profiled. If Facebook goes down, these users would no longer be able to view their data, leading many to believe that it has in fact been destroyed. Yet a hypothetical user may, for instance, create an Instagram profile in 2030 and still be profiled by her lingering Facebook data, despite Facebook (the main platform) being long gone by then. Or worse still, her old Facebook data may be used to profile other users who are demographically similar to her, without her (let alone their) informed consent or knowledge.
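The mechanism behind this last risk is essentially lookalike modelling: retained profiles serve as reference points, and a new user is scored by her similarity to them. The sketch below is a deliberately simplified, hypothetical illustration of that idea — the feature names, interests and nearest-neighbour approach are invented for exposition and do not describe any system Facebook actually operates.

```python
import math


def cosine_similarity(a: dict, b: dict) -> float:
    """Cosine similarity between two sparse feature vectors."""
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0.0) * b.get(k, 0.0) for k in keys)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0


def infer_interests(new_user: dict, retained_profiles: list, k: int = 2) -> set:
    """Attribute the interests of the k most similar retained profiles to a new user."""
    ranked = sorted(
        retained_profiles,
        key=lambda p: cosine_similarity(new_user, p["features"]),
        reverse=True,
    )
    inferred = set()
    for profile in ranked[:k]:
        inferred |= profile["interests"]
    return inferred


if __name__ == "__main__":
    # Invented demographic features and interests, purely for illustration.
    retained = [
        {"features": {"age_30s": 1, "urban": 1, "dog_owner": 1}, "interests": {"hiking", "pet food"}},
        {"features": {"age_20s": 1, "student": 1}, "interests": {"gaming"}},
    ]
    new_user = {"age_30s": 1, "urban": 1}
    print(infer_interests(new_user, retained, k=1))
```

The toy example shows how a person who never consented to profiling can nonetheless inherit inferences drawn from data she cannot see, precisely because the reference profiles persist after the platform itself is gone.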

Existing laws in the EU offer limited protection for users’ data in these scenarios. If Facebook intended to delete the data, under EU data protection law it would likely need to notify as well as seek the consent of users for the further processing of their data,13 offering them the opportunity to retrieve their data before deletion (see the closure of Google+14 and Yahoo! Groups). On the other hand, if Facebook opted to retain and continue processing user data in order to provide the (other) services set out under its terms and conditions, it is unlikely that it would be legally required to obtain fresh consent from users — although, in reality, the company would likely still offer users the option to retrieve their data. Independently, users in the EU could also exercise their rights to data portability and erasure15 to retrieve or delete their data.

In practice, however, the enforcement and realisation of these rights is challenging. Given that user data are commingled across the Facebook group of companies, and moreover have ‘velocity’ — an individual user’s data will likely have been repurposed and reused multiple times, together with the data of other users — it is unlikely that all of the data relating to an individual user can or will be identified and permanently ‘returned’. Likewise, given that user data are commingled, objection by an individual user to the transfer of their data is unlikely to be effective — their data will still be transferred with the data of other users who consent to the transfer. As previously mentioned, the data portability function currently offered by Facebook is also limited in scope.

Notwithstanding these practical challenges, a broader problem with the existing legal framework governing user data is that it is almost entirely focused on the rights of individual users. It offers little recognition or protection for the rights of groups — for example, Facebook groups formed around sports, travel, music or other shared interests — and thus limited protection against group-level ethical harm within the Facebook platform (i.e., when the ethical patient is a multi-agent system, not necessarily reducible to its individual parts [Floridi, 2012; Simon, 1995]).

This problem is further exacerbated by so-called ‘ad hoc groups’ (i.e., groups that are formed only algorithmically [Mittelstadt, 2017]), which may not necessarily correspond to any organic communities. For example, ‘dog owners living in Wales aged 38–40 that exercise regularly’ (Mittelstadt, 2017, p. 477) is a hypothetical, algorithmically formed group. Whereas many organically formed groups are already acknowledged by privacy and discrimination laws, or at least have the organisational means to defend their interests (e.g., people with a certain disability, sexual orientation etc.), ad hoc algorithmic groups often lack organisational means of resistance.

Non-users

The third key ethical stakeholders are those who never, or no longer, use Facebook, yet are still susceptible to harms resulting from its demise. This category includes a range of disparate sub-groups, including individuals who do not have an account, but whose data Facebook nevertheless collects and tracks from apps or websites that embed its services (Hern, 2018). Facebook uses these data, inter alia, to target the individual with ads encouraging them to join the platform (Baser, 2018). Similarly, the non-user category includes individuals who may be tracked by proxy, for example by analysing data from their relatives or close network (more on this below). A third sub-group is minors who may feature in photos and other types of data uploaded to Facebook by their parents (so-called “sharenting”).

The most significant type of non-users, however, are deceased users, i.e., those who have used the platform in the past but have since passed away. Although this may currently seem a rather niche concern, the deceased user group is expected to grow rapidly over the next couple of decades. As shown by Öhman and Watson (2019), Facebook will soon host hundreds of millions of deceased profiles on their servers.16 This sub-group is of special interest since, unlike living non-users who generally enjoy at least some legal rights to privacy and data protection (as outlined above), the deceased do not qualify for protection under existing data protection laws.17 The lack of protection for deceased data subjects is a pressing concern even without Facebook closing.18 Facebook does not have any legal obligation to seek their consent (nor that of their representatives) before deleting, or otherwise further processing, users’ data after death (although Denmark, Spain and Italy are exceptions).19 Moreover, even if Facebook tried to seek the consent of their representatives, it would have a difficult time given that users do not always appoint a ‘legacy contact’ to represent them posthumously.

The closure of the platform, however, opens an entirely new level of ethical harm, particularly in the (unlikely but not impossible) case of bankruptcy or insolvency. Such a scenario would likely force Facebook to sell off its assets to the highest bidder. However, unlike the sale or transfer of data of living users, which under the GDPR and EU insolvency law requires users’ informed consent, there is no corresponding protection for the sale of deceased users’ data in insolvency, such as requiring the consent of their next of kin.20 Moreover, there are no limitations on who could purchase these data and for what purposes. For example, a deceased person’s adversaries could acquire their Facebook data in order to compromise their privacy or tarnish their reputation posthumously. Incidents of this kind have already been reported on Twitter, where the profiles of deceased celebrities have been hacked and used to spread propaganda.21 The profiles of deceased users may also remain commercially valuable and attractive to third party purchasers — for instance, by providing insights on living associates of the deceased, such as their friends and relatives. As in genealogy — where one individual’s DNA also contains information about their children, siblings and parents — one person’s data may similarly be used to predict another’s behaviour or dispositions (see Creet [2019] on the relationship between genealogy websites and big pharma).

In sum, the demise of a platform with Facebook’s global and societal significance is not only a concern for those who use, or have used it directly, but also for individuals who are indirectly affected by its omnipresence in society.

Future generations

It is also important to consider indirect harms arising from Facebook’s potential closure due to missed opportunities. The most important stakeholders to consider in this respect are future generations, which, much like deceased users, are seldom directly protected in law. By ‘future generations’ we refer mainly to future historians and sociologists studying the origins and dynamics of digital society, but also to the general public and their ability to access their shared digital cultural heritage.

It is widely accepted that the open web holds great cultural and historical value (Rosenzweig, 2003), and thus several organisations — perhaps most notably the Internet Archive’s Wayback Machine22 — as well as researchers (Brügger and Schroeder, 2017) are working to preserve it. Personal data, however, have received less attention. Although (most) individual user data may be relatively inconsequential for historical, scientific and cultural purposes, the aggregate Facebook data archive amounts to a digital artefact of considerable significance. The personal digital heritage of each Facebook user is, or will become, part of our shared cultural digital heritage (Cameron and Kenderdine, 2007). As Varnado writes:

Many people save various things in digital format, and if they fail to alert others of and provide access to those things, certain memories and stories of their lives could be lost forever. This is a loss not only for a descendant’s legacy and successors but also for society as a whole. […] This is especially true of social networking accounts, which may be the principal—and eventually only—source for future generations to learn about their predecessors (Varnado, 2014, p. 744)

Not only is Facebook becoming a significant digital cultural artefact, it is arguably the first such artefact to have truly global proportions. Indeed, Facebook is by far the largest archive of human behaviour in history. As such, it can legitimately be said to hold what Appiah (2006) calls ‘cosmopolitan value’ — that is, something that is significant enough to be part of the narrative of our species. Given its global reach, and thus its interest to all of humankind (present and future), this record can even be thought of as a form of future public good (Waters, 2002, p. 83), without which we risk falling into a ‘digital dark age’ (Kuny, 1998; Smit et al., 2011) — a state of ignorance of our digital past.

The concentration of digital cultural heritage in a single (privately controlled and corporate) platform is in and of itself problematic, especially in view of the risk of Facebook monopolising private and collective history (Öhman and Watson, 2019). These socio-political concerns are magnified in the context of the platform’s demise. For such a scenario poses a threat not only to the control or appraisal of digital cultural heritage, but also to its very existence — by decompartmentalising the archive, thus destroying its global significance, and/or by destroying it entirely due to lack of commercial or other interest in preserving it.

These risks are most acute in an insolvency scenario, where, as discussed above, the data are more likely to be deleted or sold to third parties, including by being split up among a number of different data controllers. Although such an outcome may be viewed as a positive development in terms of decentralising Facebook’s power (Öhman and Watson, 2019), it also risks dividing and therefore diluting the global heritage and cosmopolitan value held within the platform. Worse still would be a scenario in which cosmopolitan value is destroyed due to a lack of, or divergent, commercial interests in purchasing Facebook’s data archives, or indeed the inability to put a price on these data due to the absence of agreed upon accounting rules over a company’s (big) data assets (Lyford-Smith, 2017). The recent auction of Cambridge Analytica’s assets in administration, where the highest bid received for the company’s business and intellectual property rights (assumed to include the personal data of Facebook users) was a mere £1, is a sobering illustration of these challenges.23 

However, our concerns are not limited to an insolvency scenario. In the more plausible scenario of Facebook closing the shutters on one of its products, such as the main platform website and app, the archive assembled by the product would no longer be accessible as such to either the public or future generations, even though the data and insights would likely continue to exist and be utilised within the Facebook Inc. group of companies (inter alia, to provide insights on users of other products such as Instagram and Messenger).

Recommendations

The stakeholders presented above, and the harms to which they are exposed, occupy the ethical landscape in which legal and policy measures to manage Facebook’s closure must be shaped. Although it is premature to propose definitive solutions, in this section we offer four broad recommendations for future policy and research in this area. These recommendations are by no means intended to be coherent solutions to “the” problem of big tech closure, but rather are posed as a starting point for further debate.

Develop a regulatory framework for Systemically Important Technological Institutions.

As examined earlier, many societies around the world have become ever-more dependent on digital communication and commerce through Big Tech platforms such as Facebook and would be harmed by their (disorderly) demise. Consider, for instance, the implications of a sudden breakdown of these platforms in times of crisis like the COVID-19 pandemic. As such, there are compelling reasons to regulate these platforms as systemically important institutions. By way of analogy to the SIFI concept — that is, domestic or global financial institutions and financial market infrastructures whose failure is anticipated to have adverse consequences for the rest of the financial system and the wider economy (FSB, 2014) — we thus propose that a new concept of systemically important technological institution, or ‘SITI’, be given more serious consideration. 

The regulatory framework for SITIs should draw on existing approaches to regulating SIFIs, critical national infrastructures and public utilities, respectively. In the insolvency context, drawing upon best practices for SIFI resolution, the SITI regime could include measures to fast-track insolvency proceedings in order to facilitate the orderly wind-down or reorganisation of a failing SITI in a way that minimises disruption to the (essential) services that it provides, thus mitigating harm to dependent communities. This might include resolution powers vested in a regulatory body authorised to supervise SITIs (this could be an existing body, such as the national competition or consumer protection/trade agency, or a newly established ‘Tech’ regulator) — including the power to mandate a SITI, such as Facebook, to continue to provide ‘essential services’ to dependent communities — for example, access to user groups or messaging apps — or else facilitate the transfer of these services to an alternative provider. 

In this way, SITIs would be subject to public obligations similar to those imposed on regulated public utilities, such as water and electricity companies — as “private companies that control infrastructural goods” (Rahman, 2018) — in order to prevent harm to dependent communities.24 Likewise, the SITI regime should include obligations for failure planning (by way of analogy to ‘resolution and recovery planning’ under the SIFI regime). In the EU, this regime should also build on the regulatory framework for ‘essential services’, specifically essential ‘digital service providers’, under the EU NIS (Network and Information Systems) Directive,25 which focuses on managing and mitigating cyber security risks to critical national infrastructures.

Whilst the fine print of the SITI regulatory regime requires further deliberation — indeed, the analogy with SIFIs and public utilities has evident limitations — we hope this article will help incite discussions to that end.

Strengthen the legal mechanisms for users to control their own data in cases of platform insolvency or closure.

Existing data protection laws are insufficient to protect Facebook users from the ethical harms that could arise from the handling of their data in the event of the platform’s closure. As we have highlighted, the nature of ‘Big Data’ is such that even if users object to the deletion or sale of their data, and request their return, Facebook would be unable as a practical matter to fully satisfy that request. As a result, users face ethical harm where their data is used against their will, in ways that could undermine their privacy, dignity and self-identity.

This calls for new data protection mechanisms that give Facebook users better control over their data. Potential solutions include creating new regulatory obligations for data controllers to segregate user data, in particular as between different Facebook subsidiaries (e.g., the main platform and Instagram), where data are currently commingled.26 This would allow users to more effectively retrieve their data were Facebook to shut down and could offer a more effective way of protecting the interests of ad hoc ‘algorithmic’ groups (Mittelstadt, 2017). However, to the extent that segregating data in this way undermines the economies of scale that facilitate Big Data analysis, it could have the unintended effect of reducing the benefits that users gain from the Facebook platform, inter alia through personalised recommendations. 

Additionally, or alternatively, further consideration should be given to the concept of ‘data trusts’, as a bottom-up form of data governance and control by users (Delacroix & Lawrence, 2019). Under a data trust structure, Facebook would act as a trustee for user data, holding them on trust for the user(s) — as the settlor(s) and beneficiary(ies) of the trust — and managing and sharing the data in accordance with their instructions. Moreover, a plurality of trusts can be developed, for example, designed around specified groups of aggregated data (in order to leverage the economies of scope and scale of large, combined data sets). As a trustee, Facebook would be subject to a fiduciary duty to only use the data in ways that serve the best interests of the user (see further Balkin, 2016). As such, a data trust structure could provide a stronger legal mechanism for safeguarding the wishes of users with respect to their data as compared to the existing standard of ‘informed consent’. Another possible solution involves decentralising the ownership and control of user data, for example using distributed ledger technology.27 
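To illustrate the kind of control a data trust could add, the sketch below models a trustee that only releases user data for purposes the settlor has authorised. It is a toy example under our own assumptions: the class names, purposes and duty check are invented for illustration and do not correspond to any existing data trust implementation or to the specific proposals cited above.

```python
from dataclasses import dataclass, field


@dataclass
class DataTrust:
    """Toy model of a data trust: the trustee may only share a user's data
    for purposes the user (as settlor/beneficiary) has authorised."""
    permitted_purposes: dict = field(default_factory=dict)  # user_id -> set of purposes
    records: dict = field(default_factory=dict)             # user_id -> data held on trust

    def settle(self, user_id: str, data: dict, purposes: set) -> None:
        """User places data in trust and declares the permitted purposes."""
        self.records[user_id] = data
        self.permitted_purposes[user_id] = set(purposes)

    def request_access(self, user_id: str, purpose: str) -> dict:
        """Release data only for an authorised purpose; otherwise the
        fiduciary duty requires refusal."""
        if purpose not in self.permitted_purposes.get(user_id, set()):
            raise PermissionError(f"Purpose '{purpose}' not authorised by {user_id}")
        return self.records[user_id]


if __name__ == "__main__":
    trust = DataTrust()
    trust.settle("alice", {"posts": ["..."]}, purposes={"academic research"})
    print(trust.request_access("alice", "academic research"))   # released
    try:
        trust.request_access("alice", "targeted advertising")   # refused
    except PermissionError as error:
        print(error)
```

The design choice the sketch highlights is that permission is checked against the user’s declared purposes at every access, rather than being granted once through blanket ‘informed consent’.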

Strengthen legal protection for the data and privacy of deceased users.

Although the interests of non-users as a group need to be given serious consideration, we highlight the privacy of deceased users as an area in particular need of protection. We recommend that more countries follow the lead of Denmark in implementing legislation that, at least to some degree, protects the profiles of deceased users from being arbitrarily sold, mined and disseminated in the case of Facebook’s closure.28 Such legislation could follow several different models. Perhaps the most intuitive option is to simply enshrine the privacy rights of deceased users in data protection law, such as (in the EU) the GDPR. This can either be designed as a personal (but time-limited) right (as in Denmark), or as a right bestowed upon next of kin (as in Spain and Italy). It could also be shaped by extending copyright law protection (Harbinja, 2017), or take place within what Harbinja (2013, p. 20) calls a ‘human rights-based regime’ (see also Bergtora Sandvik, 2020), i.e. as a universal and inviolable right. Alternatively, it could be achieved by designating companies such as Facebook as ‘information fiduciaries’ (Balkin, 2016), pursuant to which they would have a duty of care to act in the best interests of users with respect to their data, including posthumously.

The risk of ethical harm to deceased users or customers in the event of corporate demise is not limited to the closure of Facebook, or Big Tech (platforms). Although Facebook will likely be the single largest holder of deceased profiles in the 21st century, other social networks (LinkedIn, WeChat, YouTube etc.) are also likely to host hundreds of millions of deceased profiles within only a few decades. And as more sectors of the economy become digitised, any company holding customer data will eventually hold a large volume of data relating to deceased subjects. As such, developing more robust legal protection for the data privacy rights of the deceased is important for mitigating the ethical harms due to corporate demise, broadly defined. 

However, for obvious reasons, deceased data subjects have little political influence, and are thus unlikely to become a top priority to policy makers. Moreover, any legislative measures to protect their privacy are likely to be adopted at national or regional levels first, although the problem inevitably remains global in nature. A satisfactory legislative response may therefore take significant time and political effort to develop. Facebook should therefore be encouraged to specify how they intend to handle deceased users’ data upon closure in their terms of service, and in particular commit not to sell those data to a third party where this would not be in the best interests of said users. While this private approach may not have the same effectiveness and general applicability as national or regional legislation protecting deceased user data, it would provide an important first step.

Create stronger incentives for Facebook to share insights and preserve historically significant data for future generations.

Future generations cannot directly safeguard their interests and thus it is incumbent on us to do so. Given the societal, historical and cultural interest in preserving, or at least averting the complete destruction of Facebook’s cultural heritage, stronger incentives need to be created for Facebook to take responsibility and begin acknowledging the global historical value of its data archives.

A promising strategy would be to protect Facebook’s archive as a site of digital global heritage, drawing inspiration from the protection of physical sites of global cultural heritage, such as through UNESCO World Heritage protected status.29 Pursuant to Article 6.1 of the Convention Concerning the Protection of World Cultural and Natural Heritage (UNESCO, 1972), state parties acknowledge that, while respecting the sovereignty of the state territory, their national heritage may also constitute world heritage, which falls within the interests and duties of the ‘international community’ to preserve. Meanwhile, Article 4 stipulates that:

Each State Party to this Convention recognizes that the duty of ensuring the identification, protection, conservation, presentation and transmission to future generations of the cultural and natural heritage […] situated on its territory, belongs primarily to that State. It will do all it can to this end, to the utmost of its own resources and, where appropriate, with any international assistance and co-operation, in particular, financial, artistic, scientific and technical, which it may be able to obtain. (UNESCO, 1972, Art. 4)

A digital version of this label may similarly entail acknowledgement by data controllers of, and a pledge to preserve, the cosmopolitan value of their data archive, while allowing them to continue using the archive. However, in contrast to physical sites and material artefacts, which fall under the control of sovereign states, the most significant digital artefacts in today’s world are under the control of Big Tech companies, like Facebook. As such, there is reason to consider a new international agreement between corporate entities, in which they pledge to protect and conserve the global cultural heritage on their platforms.30

However, bestowing the label of global digital heritage does not resolve the question of access to this heritage. Unlike Twitter, which in 2010 attempted to donate its entire archive to the Library of Congress,31 Facebook’s archive arguably contains more sensitive, personal information about its users. Moreover, these data offer the company more of a competitive advantage compared to Twitter (the latter’s user accounts are public, in contrast to Facebook, where many of the profiles are visible only to friends of the user). These considerations could reduce Facebook’s readiness to grant public access to its archives. Nevertheless, safeguarding the existence of Facebook’s records and its historical significance remains an important first step in making it accessible to future generations.

It goes without saying that the interests of future generations will at times conflict with the interests of the other three ethical stakeholders we have identified. As Mazzone (2012, p. 1660) points out, ‘the societal interest in preserving postings to social networking sites for future historical study can be in tension with the privacy interests of individual users.’ Indeed, Facebook’s data are proprietary, and any interventions must respect its rights in the data as well as the privacy rights of users. Yet, the mere fact that there are conflicts of interests and complexities does not mean that the interests of future generations ought to be neglected altogether.

Conclusion

For the foreseeable future, Facebook’s demise remains a low-probability but high-impact event. However, mapping out the legal and ethical landscape for such an eventuality, as we have done in this article, allows society to better manage the fallout should this scenario materialise. Moreover, our analysis helps to shed light on lower-impact but higher-probability scenarios. Companies regularly fail and disappear — increasingly taking with them troves of customer-user data that receive only limited protection and attention under existing law. The legal and ethical harms that we have identified in this article, many of which flow from the use of data following Facebook’s closure, are thus equally relevant to the closure of other companies, albeit on a smaller scale. Regardless of which data-rich company is the next to go, we must make sure that an adequate governance framework is in place to minimise the systemic and individual damage. Our hope is that this article will help kickstart a debate and further research on these important issues.

Acknowledgements

We are deeply grateful to Luciano Floridi, David Watson, Josh Cowls, Robert Gorwa, Tim R Samples, and Horst Eidenmüller for valuable feedback and input. We would also like to add a special thanks to reviewers James Meese and Steph Hill, and editors Frédéric Dubois and Kris Erickson for encouraging us to further improve this manuscript.

References

Acker, A., & Brubaker, J. R. (2014). Death, memorialization, and social media: A platform perspective for personal archives. Archivaria, 77, 2–23. https://archivaria.ca/index.php/archivaria/article/view/13469

Aguilar, A. (2015). The global economic impact of Facebook: Helping to unlock new opportunities [Report]. Deloitte. https://www2.deloitte.com/uk/en/pages/technology-media-and-telecommunications/articles/the-global-economic-impact-of-facebook.html

Aplin, T., Bentley, L., Johnson, P., & Malynicz, S. (2012). Gurry on breach of confidence: The protection of confidential information. Oxford University Press.

Appiah, K. A. (2006). Cosmopolitanism: Ethics in a world of strangers. Penguin.

Balkin, J. (2016). Information fiduciaries and the first amendment. UC Davis Law Review, 49(4), 1183–1234. https://lawreview.law.ucdavis.edu/issues/49/4/Lecture/49-4_Balkin.pdf

Baser, D. (2018, April 16). Hard questions: What data does Facebook collect when I’m not using Facebook, and why? [Blog post]. Facebook Newsroom. https://newsroom.fb.com/news/2018/04/data-off-facebook/

Bergtora Sandvik, K. (2020). Digital dead body management (DDBM): Time to think it through. Journal of Human Rights Practice, uaa002. https://doi.org/10.1093/jhuman/huaa002

boyd, d. (2013). White flight in networked publics? How race and class shaped American teen engagement with MySpace and Facebook. In L. Nakamura & P. Chow-White (Eds.), Race after the internet. Routledge.

Cadwalladr, C., & Graham-Harrison, E. (2018, March 17). Revealed: 50 million Facebook profiles harvested for Cambridge Analytica in major data breach. The Guardian. https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election

Cannarella, J., & Spechler, J. (2014). Epidemiological modelling of online social network dynamics [Preprint]. arXiv. https://arxiv.org/pdf/1401.4208.pdf

Competition & Markets Authority. (2020). Online Platforms and Digital Advertising (Market Study) [Final report]. Competition & Markets Authority. https://assets.publishing.service.gov.uk/media/5efc57ed3a6f4023d242ed56/Final_report_1_July_2020_.pdf

Creet, J. (2019). Data mining the deceased: Ancestry and the business of family [Documentary]. https://juliacreet.vhx.tv/

DataReportal. (2019). Global digital overview. https://datareportal.com/?utm_source=Statista&utm_medium=Data_Citation_Hyperlink&utm_campaign=Data_Partners&utm_content=Statista_Data_Citation

Delacroix, S., & Lawrence, N. D. (2019). Disturbing the ‘One size fits all’ approach to data governance: Bottom-up. International Data Privacy Law, 9(4), 236–252. https://doi.org/10.1093/idpl/ipz014

Di Cosmo, R., & Zacchiroli, S. (2017). Software heritage: Why and how to preserve software source code. iPRES 2017 – 14th international conference on digital preservation. 1–10.

Cameron, F., & Kenderdine, S. (Eds.). (2007). Theorizing digital cultural heritage: A critical discourse. MIT Press.

Facebook. (2017). Form 10-K annual report for the fiscal period ended December 31, 2017.

Facebook. (2018). Form 10-K annual report for the fiscal period ended December 31, 2018.

Facebook. (2019, June 18). Coming in 2020: Calibra [Blog post]. Facebook Newsroom. https://about.fb.com/news/2019/06/coming-in-2020-calibra/

Facebook. (2020). Form 10-Q quarterly report for the quarterly period ended March 31, 2020.

Federal Trade Commission. (2019, July 24). FTC Imposes $5 Billion Penalty and Sweeping New Privacy Restrictions on Facebook [Press Release]. News & Events. https://www.ftc.gov/news-events/press-releases/2019/07/ftc-imposes-5-billion-penalty-sweeping-new-privacy-restrictions

Financial Stability Board. (2014). Key attributes of effective resolution regimes for financial institutions. https://www.fsb.org/wp-content/uploads/r_141015.pdf

Floridi, L. (2011). The informational nature of personal identity. Minds and Machines, 21(4), 549–566. https://doi.org/10.1007/s11023-011-9259-6

Floridi, L. (2012). Distributed morality in an information society. Science and Engineering Ethics, 19(3), 727–743. https://doi.org/10.1007/s11948-012-9413-4

Furman, J. (2019). Unlocking digital competition [Report]. Digital Competition Expert Panel. https://www.gov.uk/government/publications/unlocking-digital-competition-report-of-the-digital-competition-expert-panel

Garcia, D., Mavrodiev, P., & Schweitzer, F. (2013). Social resilience in online communities: The autopsy of Friendster. Proceedings of the First ACM Conference on Online Social Networks (COSN ’13). https://doi.org/10.1145/2512938.2512946.

Gorwa, R. (2019). What is platform governance? Information, Communication & Society, 22(6), 854–871. https://doi.org/10.1080/1369118X.2019.1573914

Harbinja, E. (2013). Does the EU data protection regime protect post-mortem privacy and what could be the potential alternatives? SCRIPTed, 10(1). https://doi.org/10.2966/scrip.100113.19

Harbinja, E. (2014). Virtual worlds—A legal post-mortem account. SCRIPTed, 11(3). https://doi.org/10.2966/scrip.110314.273

Harbinja, E. (2017). Post-mortem privacy 2.0: Theory, law, and technology. International Review of Law, Computers & Technology, 31(1), 26–42. https://doi.org/10.1080/13600869.2017.1275116

Howard, P. N., & Hussain, M. M. (2013). Democracy’s fourth wave? Digital media and the Arab Spring. Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199936953.001.0001

Information Commissioner’s Office. (2019, October). Statement on an agreement reached between Facebook and the ICO [Statement]. News and Events. https://ico.org.uk/about-the-ico/news-and-events/news-and-blogs/2019/10/statement-on-an-agreement-reached-between-facebook-and-the-ico

Kittikhoun, A. (2019). Mapping the extent of Facebook’s role in the online media landscape of Laos [Master’s dissertation]. University of Oxford, Oxford Internet Institute.

Kuny, T. (1998). A digital dark ages? Challenges in the preservation of electronic information. International Preservation News, 17(May), 8–13.

Lyford-Smith, D. (2017). Data as an asset. ICAEW. https://www.icaew.com/technical/technology/data/data-analytics-and-big-data/data-analytics-articles/data-as-an-asset

Marcus, D. (2020, May). Welcome to Novi [Blog post]. Facebook Newsroom. https://about.fb.com/news/2020/05/welcome-to-novi/

Mazzone, J. (2012). Facebook’s afterlife. North Carolina Law Review, 90(5), 1643–1685.

Mirani, L. (2015). Millions of Facebook users have no idea they’re using the internet. Quartz. https://qz.com/333313/milliions-of-facebook-users-have-no-idea-theyre-using-the-internet/

M.I.T. (2013). An autopsy of a dead social network. MIT Technology Review. https://www.technologyreview.com/s/511846/an-autopsy-of-a-dead-social-network/

Mittelstadt, B. (2017). From individual to group privacy in big data analytics. Philosophy & Technology, 30, 475–494. https://doi.org/10.1007/s13347-017-0253-7

Brügger, N., & Schroeder, R. (Eds.). (2017). The web as history: Using web archives to understand the past and the present. UCL Press.

Öhman, C., & Floridi, L. (2018). An ethical framework for the digital afterlife industry. Nature Human Behaviour. https://doi.org/10.1038/s41562-018-0335-2

Öhman, C. J., & Watson, D. (2019). Are the dead taking over Facebook? A Big Data approach to the future of death online. Big Data & Society, 6(1), 2053951719842540. https://doi.org/10.1177/2053951719842540

Open Data Institute. (2018, July 10). What is a Data Trust? [Blog post]. Knowledge & opinion blog. https://theodi.org/article/what-is-a-data-trust/#1527168424801-0db7e063-ed2a62d2-2d92

Piper Sandler. (2020). Taking stock with teens, spring 2020 survey. Piper Sandler. http://www.pipersandler.com/3col.aspx?id=5956

Piskorski, M. J., & Knoop, C.-I. (2006). Friendster (A) [Case Study]. Harvard Business Review.

Rahman, K. S. (2018). The new utilities: Private power, social infrastructure, and the revival of the public utility concept. Cardozo Law Review, 39(5), 1621–1689. http://cardozolawreview.com/wp-content/uploads/2018/07/RAHMAN.39.5.2.pdf

Rosenzweig, R. (2003). Scarcity or abundance? Preserving the past in a digital era. The American Historical Review, 108(3), 735–762. https://doi.org/10.1086/ahr/108.3.735

Scarre, G. (2013). Privacy and the dead. Philosophy in the Contemporary World, 19(1), 1–16.

Seki, K., & Nakamura, M. (2016). The collapse of the Friendster network started from the center of the core. 2016 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM), 477–484. https://doi.org/10.1109/ASONAM.2016.7752278

Simon, T. W. (1995). Group harm. Journal of Social Philosophy, 26(3), 123–138. https://doi.org/10.1111/j.1467-9833.1995.tb00089.x

Smit, E., Hoeven, J., & Giaretta, D. (2011). Avoiding a digital dark age for data: Why publishers should care about digital preservation. Learned Publishing, 24(1), 35–49. https://doi.org/10.1087/20110107

Stokes, P. (2015). Deletion as second death: The moral status of digital remains. Ethics and Information Technology, 17(4), 1–12. https://doi.org/10.1007/s10676-015-9379-4

Taylor, J. S. (2005). The myth of posthumous harm. American Philosophical Quarterly, 42(4), 311–322. https://www.jstor.org/stable/20010214

Tencent. (2019). Q2 earnings release and interim results for the period ended June 30, 2019.

Thacker, D. (2018, December 10). Expediting Changes to Google+ [Blog post]. Google. https://blog.google/technology/safety-security/expediting-changes-google-plus/

Torkjazi, M., Rejaie, R., & Willinger, W. (2009). Hot today, gone tomorrow: On the migration of MySpace users. Proceedings of the 2nd ACM Workshop on Online Social Networks - WOSN ’09, 43. https://doi.org/10.1145/1592665.1592676

U. K. Government. (2019). Online harms [White Paper]. U.K. Government, Department for Digital, Culture, Media & Sport; Home Department. https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/793360/Online_Harms_White_Paper.pdf

UNESCO. (1972). Convention concerning the Protection of the World Cultural and Natural Heritage. Adopted by the General Conference at its seventeenth session, Paris, 16 November 1972.

Varnado, A. S. S. (2014). Your digital footprint left behind at death: An illustration of technology leaving the law behind. Louisiana Law Review, 74(3), 719–775. https://digitalcommons.law.lsu.edu/lalrev/vol74/iss3/7

Warren, E. (2019). Here’s How We Can Break Up Big Tech [Medium Post]. Team Warren. https://medium.com/@teamwarren/heres-how-we-can-break-up-big-tech-9ad9e0da324c

Waters, D. (2002). Good archives make good scholars: Reflections on recent steps toward the archiving of digital information. In The state of digital preservation: An international perspective (pp. 78–95). Council on Library and Information Resources. https://www.clir.org/pubs/reports/pub107/waters/

York, C., & Turcotte, J. (2015). Vacationing from Facebook: Adoption, temporary discontinuance, and readoption of an innovation. Communication Research Reports, 32(1), 54–62. https://doi.org/10.1080/08824096.2014.989975

Zuckerberg, M. (2019, March 6). A privacy-focused vision for social networking [Post]. https://www.facebook.com/notes/mark-zuckerberg/a-privacy-focused-vision-for-social-networking/10156700570096634/

Footnotes

1. Unless otherwise stated, references to ‘Facebook’ are to the main platform (comprising News Feed, Groups and Pages, inter alia, both on the mobile app as well as the website), and do not include the wider group of companies that comprise Facebook Inc, namely WhatsApp, Messenger, Instagram, Oculus (Facebook, 2018), and Calibra (recently rebranded as Novi Financial) (Marcus, 2019; 2020).

2. See https://www.washingtonpost.com/news/the-intersect/wp/2015/02/12/8-throwback-sites-you-thought-died-in-2005-but-are-actually-still-around/

3. See https://qz.com/1408120/yahoo-japan-is-shutting-down-its-website-hosting-service-geocities/

4. Regulation (EU) 2016/679 <https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=uriserv:OJ.L_.2016.119.01.0001.01.ENG>.

5. California Legislature Assembly Bill No. 375 <https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=201720180AB375>

6. See <https://www.politico.com/news/2020/07/06/trump-parler-rules-349434>

7. See <https://www.nytimes.com/2020/06/29/business/dealbook/facebook-boycott-ads.html>.

8. We adopt an inclusive definition of ethical harm (henceforth just ‘harm’) as any encroachment upon personal or collective and legitimate interests such as dignity, privacy, personal welfare, and freedom.  

9. Naturally, not all communities with a Facebook presence can be included in this category. For example, the lost marketing opportunities for large multinational corporations such as Coca Cola Inc., due to the sudden demise of Facebook, cannot be equated with the harm to a small-scale collective of sole traders in a remote area (e.g., a local craft or farmers’ market) whose only exposure to customers is through the platform. By ‘dependent communities’ we thus refer only to communities whose ability to flourish and survive may be threatened by Facebook’s sudden demise.

10. See https://info.internet.org/en/impact/

11. See https://help.yahoo.com/kb/understand-data-downloaded-yahoo-groups-sln35066.html

12. See Art 20 GDPR. 

13. See Art 4(2) GDPR (defining ‘processing’ to include, inter alia, ‘erasure or destruction’ of personal data).

14. See Google Help, (2019) ‘Shutting down Google+ for consumer (personal) accounts on April 2, 2019’ https://support.google.com/plus/answer/9195133?hl=en-GB. Facebook states in its data policy that ‘We store data until it is no longer necessary to provide our services and Facebook Products or until your account is deleted — whichever comes first’, which might suggest that users provide their consent to future deletion of their data when they first sign up to Facebook. However, it is unlikely that this clause substitutes for the requirement to obtain specific and unambiguous consent to data processing, for specific purposes — including deletion of data — under the GDPR (see Articles 4(11) and 6(1)(a)).

15. See Art 17 GDPR.

16. Facebook’s policy on deceased users has changed somewhat over the years, but the current approach is to allow next of kin to either memorialise or permanently delete the account of a confirmed deceased user (Facebook, n.d.). Users are also encouraged to select a ‘legacy contact’, that is, a second Facebook user who will act as a custodian in the event of their demise. Although these technical solutions have proven to be successful on an individual, short-term level, several long-term problems remain unsolved. In particular, what happens when the legacy contact themselves dies? For how long will it be economically viable to store hundreds of millions of deceased profiles on the servers?

17. However, note that the information of a deceased subject can continue to be protected by the right to privacy under Art 8 of the European Convention on Human Rights, and the common law of confidence with respect to confidential personal information (although the latter is unlikely to apply to data processing by Facebook) (see generally Aplin et al., 2012).

18. Several philosophers and legal scholars have recently argued for the concept of posthumous privacy to be recognised (see Scarre [2014, p. 1], Stokes [2015] and Öhman & Floridi [2018]). 

19. Recital 27 of the GDPR clearly states that ‘[t]his Regulation does not apply to the personal data of deceased persons’; however, it does at the same time allow member states to make additional provision for this purpose. Accordingly, a few European countries have included privacy rights for deceased data subjects in their implementing laws (for instance, Denmark, Spain and Italy — see https://www.twobirds.com/en/in-focus/general-data-protection-regulation/gdpr-tracker/deceased-persons). However, aside from these limited cases, existing data protection for the deceased is alarmingly sparse across the world.

20. Under EU insolvency law, any processing of personal data (for example, deletion, sale or transfer of the data to a third party purchaser) must comply with the GDPR (see Art 78 (Data Protection) of EU Regulation 2015/848 on Insolvency Proceedings (recast)). However, see endnote 17 with regard to the right to privacy and confidentiality.

21. See https://www.alaraby.co.uk/english/indepth/2019/2/25/saudi-trolls-hacking-dead-peoples-twitter-to-spread-propaganda

22. See https://archive.org/web/

23. See Administrator’s Progress Report (2018) https://beta.companieshouse.gov.uk/company/09375920/filing-history. However, consumer data (for example, in the form of customer loyalty schemes) has been valued more highly in other corporate insolvencies (see for example, the Chapter 11 reorganisation of the Caesar’s Entertainment Group https://digital.hbs.edu/platform-digit/submission/caesars-entertainment-what-happens-in-vegas-ends-up-in-a-1billion-database/).

24. There is a broader call, from a competition (antitrust) policy perspective, to regulate Big Tech platforms as utilities on the basis that these platforms tend towards natural monopoly (see, e.g. Warren, 2019). Relatedly, the UK Competition and Markets Authority has recommended a new ‘pro-competition regulatory regime’ for digital platforms, such as Google and Facebook, that have ‘strategic market status’ (Furman, 2019; CMA, 2020). The measures proposed under this regime — such as facilitating interoperability between social media platforms— would also help to mitigate the potential harms to Facebook’s ethical stakeholders due to its closure.

25. Directive (EU) 2016/1148 of the European Parliament and of the Council of 6 July 2016 concerning measures for a high common level of security of network and information systems across the Union OJ L 194, 19.7.2016.

26. Facebook has stated that financial data collected by Calibra/Novi, the digital wallet for Libra cryptocurrency, will not be shared with Facebook or third parties without user consent (Facebook, 2019). The segregation of user data is the subject of a ruling by the German Competition Authority; however, this was overturned on appeal by Facebook (and is now being appealed by the competition authority — the original decision is here: https://www.bundeskartellamt.de/SharedDocs/Meldung/EN/Pressemitteilungen/2019/07_02_2019_Facebook.html).

27. A related imperative is to clarify the financial accounting rules for the valuation of (Big) data assets, including in an insolvency context.

28. See s 2(5) of the Danish Data Protection Act 2018 <https://www.datatilsynet.dk/media/7753/danish-data-protection-act.pdf>

29. UNESCO has previously initiated a project to preserve source code (see Di Cosmo & Zacchiroli, 2017).

30. This could be formal or informal, for example in the vein of the ‘Giving Pledge’ — a philanthropic initiative to encourage billionaires to give away the majority of their wealth in their lifetimes (see <https://givingpledge.org/>).

31. Although the initiative has ceased to operate as originally planned, it remains one of the best examples of large scale social media archiving (see https://www.npr.org/sections/thetwo-way/2017/12/26/573609499/library-of-congress-will-no-longer-archive-every-tweet). 

Russia’s great power imaginary and pursuit of digital multipolarity


Introduction

In the twenty-first century, Russia has spearheaded an international movement for the primacy of national governments in managing the internet. The geopolitical debate surrounding internet governance pits supporters of administering the global internet’s critical resources and standards via state-based multilateral institutions against those favouring the present distribution of governance functions between state and non-state actors (DeNardis, 2014; Mueller, 2010, 2017; Radu, 2019). This article offers a cultural reading of Russia’s approach to global internet governance. It argues that Russian ruling elites’ imaginary of Russia as a historic great power deserving of full participation in global governance has directed the state’s promotion of internet multilateralism and its challenge to the perceived US digital hegemony.

Social imaginaries, in philosopher Charles Taylor’s formulation, are the “common repertory” of people’s conceptions of their surroundings, normative expectations of how things in the world should proceed, and understandings of what actions are at their disposal – all of which “enables us to carry out the collective practices that make up our social life” (Taylor, 2003, pp. 23-26). The laws, standards, and norms governing technologies reflect social imaginaries: both through people’s concerted efforts to embed their views and expectations into technological design and operation and through taken-for-granted assumptions that guide individuals’ and collectives’ relations with technology (Jasanoff & Kim, 2015; Mansell, 2012). In particular, national policymaking elites make sense of and act upon the internet against their ideas about the respective nation’s identity and place in the world (Dumitrica, 2015).

The proposed focus on national imaginaries of Russian policymaking elites serves as an alternative analytical lens to a broad scholarly consensus that regards Russia’s global internet agenda as an expression of President Vladimir Putin’s political regime. Some scholars situate Russia’s approach to global digital technologies within the context of the renewed ideological struggle between liberal democracies and illiberal governments (e.g., Maréchal, 2017; Polyakova & Meserole, 2019; Rosenbach & Mansted, 2019; Soldatov & Borogan, 2015, Ch. 11). For example, Robert Morgus, then New America Foundation’s cybersecurity analyst, attributed Russia’s “digital authoritarianism” to Putin’s “paranoias” about the dictate of US values and interests over the global internet (Morgus, 2018). Another scholarly strand emphasises the Kremlin’s concerns with state security and social control as primary motivations for its internet governance philosophy (e.g., Claessen, 2020; Deibert & Crete-Nishihata, 2012; Franke & Pallin, 2012, pp. 62-64; Kennedy, 2013; Nocetti, 2015; Pigman, 2019; Stadnik, 2019). According to American University’s internet governance scholar Laura DeNardis, for instance, authoritarians like Russia and China support internet multilateralism “under the mantle of cyber sovereignty” to establish cultural and political control over their citizenry, such as the Russian state’s recent online crackdown on the country’s sexual minorities “under the guise of preserving social order” (DeNardis, 2020, pp. 180-183).

Yet others consider Russia’s internet internationalisation agenda as part of the political-economic rise of non-Western powers and their ensuing challenge to Western dominance over the global political economy of telecommunication (e.g., Ebert & Maurer, 2013; Freedman & Wilkinson, 2013; Polatin-Reuben & Wright, 2014; Rebello, 2017; Zhao, 2015). Carleton University’s telecommunication scholar Dwayne Winseck posits that, in addition to making inroads into the geopolitical economy of internet infrastructure, Russia and China “are also trying to add international legal norms steeped in nineteenth-century views of state security that would further entrench the semiautonomous, national web 3.0 model in a multilateral model of internet governance” (2017, p. 260). Lastly, Kieron O’Hara and Wendy Hall, computer scientists and social thinkers at the University of Southampton, place what they term “Moscow’s Spoiler Model” of global internet governance outside of conventional politics altogether (O’Hara & Hall, 2018, pp. 11-13). According to the authors, the Russian leadership’s ideological mix of nationalism, victimhood, cynicism, and conspiratorial thinking drives the Kremlin’s strategy of “free riding on the efforts of others to produce a valuable information space” with the sole aim of sabotaging the liberal West in cyberspace and beyond.

In contrast with the prevailing scholarly focus on the role of Putin’s persona and regime in Russia’s internet governance, this article aims to disentangle the longer-term cultural factors underlying Russia’s global internet agenda from the current regime’s political ideologies and practices. After the end of the Cold War, the Russian leadership viewed Russia as an immutable great power with continued responsibility for global affairs. Consequently, Russia has opposed the emergent US-led unipolarity and instead promoted the ideal of a multipolar world order governed collectively through intergovernmental multilateral institutions. In the domain of internet governance, Russia similarly has advanced the primacy of state sovereignty over respective national internet segments, diversification of internet governance mechanisms and markets purportedly monopolised by the US public and private actors, and the leading role of the United Nations and its specialised agency, the International Telecommunication Union (ITU), in managing the global internet. Accordingly, I conceptualise Russia’s quest to reconfigure the global digital order based on the principles and language of multipolarity as digital multipolarity.

In order to analytically detach the cultural logics underlying Russia’s pursuit of digital multipolarity from its commonly acknowledged pivot toward greater political authoritarianism and illiberalism under Putin’s rule (Gel’man, 2015; Kolstø & Blakkisrud, 2016), I illuminate how the principles of multipolarity directed Russian digital governance initiatives in the 1990s – a decade preceding Putin’s rise to power. The first data set pertains to Russia’s advocacy of state-based global telecommunication governance at the International Telecommunication Union Plenipotentiary Conferences held in 1992, 1994, and 1998. I located Russian delegations’ contributions to these meetings within the materials available at the ITU online archive (International Telecommunication Union, n.d.). My analysis incorporates Russian representatives’ addresses to the plenary, proposals for the work of the conference, draft resolutions, and meeting minutes. The second set of data concerns the resolution Russia first proposed to the UN General Assembly (UNGA) in 1998 entitled Developments in the field of information and telecommunications in the context of international security. The resolution is widely considered to have inaugurated information security discussions at the United Nations (e.g., Chernenko, 2018, p. 43; Henriksen, 2019, p. 2; Maurer 2011, p. 16; Radu, 2019, p. 102). Through the searchable UN Digital Library, I located documents from 1998-99 that directly related to the resolution, such as Russian diplomats’ addresses at the UNGA that introduced the resolution and Russia’s elaboration of its international information security vision submitted to the UN Secretary-General.

I conducted discursive analysis of Russian policymaking initiatives at the ITU and the UN. Following British cultural sociologist Rosalind Gill (2018), I understand discourse as textual construction of a particular version of the world set against competing visions. The analytical goal, as Gill explains, is to understand and illuminate the ideological premises that run through a particular discourse. The methodological task, then, consists of identifying what Gill calls “interpretive repertoires”—recurrent themes, ideas, or tropes—within the delineated corpus of texts and situating them within larger social contexts and cultural shifts.

By juxtaposing repertoires excavated in Russian discourse of multipolarity and of digital governance, I show how the Russian state’s conceptions and language of the multipolar world enable its vision of global communications. This relationship between Russia’s national and technological imaginaries is best understood as “constitutive causality” (Schwartz-Shea & Yanow, 2012, p. 52). Rather than establishing a mechanistic causality between cultural context and political action, this analytical approach explores how “humans conceive of their worlds, the language they use to describe them, and other elements constituting that social world, which make possible or impossible the interactions they pursue” (Ibid.). My argument, then, is not that Russia’s normative conceptions of its greatness and of world multipolarity make its state-centric internet governance agenda inevitable, but that they make it conceptually imaginable and therefore politically possible.

In addition to enriching the literature on Russia’s global communication philosophy and practice, this paper contributes a novel approach to internet governance studies that takes national narratives about the self and its place in the world seriously. Internet governance scholarship, to date, has privileged the lenses of law, political economy, international relations, and science and technology studies (e.g., Brousseau et al., 2012; Bygrave & Bing, 2009; Choucri & Clark, 2019; Kohl, 2017; Musiani et al., 2016). Socio-cultural approaches examining internet governance actors’ visions and narratives constitute a minority (e.g., Chenou, 2014; Pohle et al., 2016; Price, 2017), particularly those focusing on national identities (e.g., Kiggins, 2012; Schulte, 2013). Using the Russian case, I show how centring national identity narratives in the analysis of states’ internet governance agendas can add further nuance to their understanding.

This article proceeds in three parts. First, I contextualise Russia’s multipolarity framework within the socio-political circumstances of its emergence in the 1990s and deconstruct its normative claims. Next, I illuminate how the multipolarity framework underlay the logics and language of Russia’s policymaking initiatives at the ITU and the UN in the 1990s. Lastly, I show how Russia’s global internet governance agenda arising in the 2000s-2010s incorporated the multipolarity principles and rhetoric of the preceding decade. By tracing Russia’s great power imaginary and pursuit of the multipolar world order to its most liberal years of the early 1990s, I challenge the prevailing analytical coupling of Russia’s internet governance agenda with the Russian state’s authoritarian political tendencies under Putin’s rule.

Great power imaginary and multipolarity

A country’s foreign policy is normally a reflection of its governing elites’ prevailing consensus-based understanding of the nation’s identity and ensuing geopolitical priorities (Ringmar, 1996; Weldes, 1999). Since Peter the Great’s (1682-1725) campaign to turn the Russian tsardom into a modern European power, the country’s ruling elites have imagined Russia as a great power responsible for world affairs and have striven to be recognised as such by the West (Neumann, 2008a, 2008b; Prizel, 1998; Ringmar, 2002; Tolz, 2001). Despite Russia’s significant geopolitical weakening in the aftermath of the Soviet Union’s breakup, the elite imaginary of Russia as an immutable great power persisted into the post-Soviet era (Clunan, 2014; Lo, 2002).

Andrei Tsygankov, a leading US-based scholar of Russian foreign policy, posits that since the eighteenth century, representatives of three schools of foreign policy thought—Westernisers, Statists, and Civilisationists—have competed to guide Russia’s engagement with the world in accordance with their respective visions of the country’s national identity (Tsygankov, 2019; see also Thorun, 2009, Ch. 3). In the post-Soviet years, Westernisers steered Russian foreign policymaking during Boris Yeltsin’s first presidential term (1991-96) under Foreign Minister Andrey Kozyrev (1990-96) and sought integration with the Euro-Atlantic world based on shared liberal values, particularly in 1991-1993. The Statist period that followed began with Yevgeny Primakov’s terms as Foreign Minister (1996-98) and Prime Minister (1998-99) during Yeltsin’s second presidential term (1996-99). Statists range from liberal to conservative wings and view the primary goal of the state as maintaining domestic economic and political order and ensuring security from external threats. Statists are not inherently anti-Western but seek recognition of Russia’s sovereignty as a prerequisite to pragmatic cooperation. Lastly, Civilisationists emphasise Russia’s cultural distinctiveness and most categorically challenge Western liberalism. Foreign policy under Vladimir Putin’s rule gradually moved from the more liberal to more conservative flanks of Statism while increasingly incorporating Civilisational motifs, particularly following the regime’s conservative turn of 2012-14.

While Russian intellectual and political elites disagree about the precise sources of Russia’s greatness, they uniformly believe that the country’s independence in domestic governance and unimpeded participation in global governance are indispensable conditions of its great power status (Lo, 2002, pp. 57-61; Trenin, 2011, pp. 411-417). Russia’s desire to regain the strategic independence partially lost with the demise of the Soviet Union gave rise to multipolarity as its central foreign policy framework (Ambrosio, 2005; Chebankova, 2017; Lo, 2002, pp. 86-96; Miskimmon & O’Loughlin, 2017; Silvius, 2016). Russia envisions global governance conducted by multiple powers, or poles, in place of the US unipolar dominance.

The foundational repertoires of Russia’s multipolarity narrative have remained virtually unchanged since its emergence in the early 1990s. In Russia’s conceptualisation, the basis of the multipolar world order is the inviolability of state sovereignty. Sovereignty, in turn, manifests itself in states’ ability to conduct independent domestic and foreign policy free from outside interference into their internal affairs. Meanwhile, the repertoire of diversity conveys the notion that the world is comprised of sovereign nations with equally valuable cultural, social, and political systems. Russia posits a multipolar world as more democratic, just, and equal, because multipolarity purportedly respects sovereign peoples’ rights to live in accordance with their respective political ideologies and cultural beliefs.

The main threat to domestic sovereignty and global diversity, according to Russia’s multipolarity narrative, is the unipolar hegemony or monopoly of the United States (often referred to with euphemisms of “one country” and “sole power”). The hegemon, whose behaviour is regularly described in anthropomorphic terms of arrogance, cynicism, and egoism, imposes its will on others, disregarding national interests and identities. The inherent tension between the hegemon’s desire for domination and the diversity of countries’ foreign policy interests undermines global peace and stability, particularly when the hegemon resorts to coercion through military force and economic sanctions. The only instruments of global governance capable of satisfactorily representing diverse national interests and containing destructive impulses of the hegemon are international law and multilateral diplomacy, foremost the United Nations and its Security Council.

From the first months of Russia’s post-Soviet independence, still at the height of its Euro-Atlantic orientation, Russian leadership was already promoting key multipolarity claims. During the foreign policy dominance of Westernisers in the early 1990s, the Kremlin’s multipolarity narrative presented Russia as a liberal great power that sought to become a democratic market economy aligned with the Euro-Atlantic world (Tsygankov, 2019, Ch. 3). For example, at the UNGA in September 1992, Foreign Minister Kozyrev described Russia as “a normal rather than an aggressive great Power” that “rejected communism” and “imperialistic ambitions”, and argued that the “post-confrontational and post-communist world is not a pax Sovietica, a pax Americana, a pax Islamica or a pax Christiana, nor is it a monopolistic system of any kind, but rather the multipolar unity in diversity that the United Nations has symbolized from the very outset” (Kozyrev, 1992, pp. 57-59). Although the Russian foreign policy establishment did not yet self-consciously think of such claims in terms of a coherent multipolarity doctrine, it incorporated propositions promoting multilateral UN-based global governance in opposition to the US dominance into Russia’s inaugural Foreign Policy Concept adopted in spring 1993 (Russian Federation, 2005 [1993]).

Russia’s multipolarity narrative under Kozyrev, moreover, often was employed in support of closer relations with the liberal West. In 1994, for example, Kozyrev argued for a more meaningful partnership with the USA in Russia’s Izvestia newspaper and in the US magazine Foreign Affairs (Kozyrev, 1994a, 1994b). In Izvestia, Kozyrev claimed greatness to be Russia’s transcendental trait, suggesting that Russia historically was “doomed to be a great power” and always “will remain a superpower” (Kozyrev, 1994b). While proposing that Russian and American great powers “share common values” and have “mutually complementary” national interests, Kozyrev nevertheless harshly criticised what he called the US administration’s “almost maniacal desire to see only one leading power in today’s world” and to “obsessively declare American leadership”. Articulating an alternative to the unipolar geopolitical arrangement, Kozyrev argued that Russia ought to govern the multipolar world as “an equal partner, not a junior one” of the United States. Russia, in other words, was not opposed to the US liberal values as such but resisted the United States’ perceived abuse of its economic and military superiority to the detriment of other great powers in global governance.

In early 1996, Yevgeny Primakov, the then-head of the Russian Foreign Intelligence Service, replaced Kozyrev as foreign minister. Primakov’s appointment signalled elites’ disenchantment with the ideal of Western integration and a turn toward greater statism in foreign policy. Under Primakov, Russia strove to counterbalance the US by diversifying its foreign policy orientations after Kozyrev’s overwhelming focus on the West (Ambrosio, 2005, Ch. 4-5; Tsygankov, 2019, Ch. 4). Russia’s diversification efforts ranged from Primakov’s tours of Latin American countries, the first by a high-level Russian official in the post-Soviet period, to the establishment of the Shanghai Five, a precursor to the Shanghai Cooperation Organisation (SCO). China in particular became Russia’s close ally in this period (Ambrosio, 2005, pp. 78-89).

Primakov was instrumental in reframing multipolarity into a doctrinal vision to be instituted in policy and actively promoted abroad. The 1997 National Security Concept, for example, states that Russia’s interests “require active foreign policy aimed at strengthening Russia’s positions as a great power – one of the influential centres in the emerging multipolar world” (Russian Federation, 2002 [1997], p. 55). At the international level, multipolarity was anchored in the Russian-Chinese Joint Declaration on a Multipolar World and the Establishment of a New International Order (Yeltsin & Zemin, 1997).

To discern the cultural logics underlying Russia’s internet governance agenda, it is necessary to appreciate continuities in ideas and policies between Kozyrev’s and Primakov’s tenures. Scholars commonly credit Primakov as the progenitor of Russia’s multipolarity vision (e.g., Ambrosio, 2005, pp. 66-67; Clunan, 2014, p. 286; Lo, 2015, pp. 43-44; Makarychev & Morozov, 2011, p. 355; Silvius, 2017, p. 82). The current Russian leadership, too, has mythologised Primakov as the founding father of post-Soviet Russia’s foreign policy and specifically of multipolarity as its ideational basis (e.g., Putin, 2019; Lavrov, 2019). As I discuss next, however, elite imaginaries of Russia’s greatness and of multipolarity informed Russian global communication diplomacy from the height of Westernism in the early 1990s to the maturation of Statism by the close of the decade.

The genesis of Russia’s digital multipolarity in the Yeltsin years, 1992-1999

Russia’s internet governance agenda that arose during Putin’s presidency in the 2000s-10s drew upon the multipolarity framework that emerged during Boris Yeltsin’s presidency. Russian diplomacy already advanced state-based governance of digital technologies in the 1990s in the debates over the future of global telecommunication and international information security at the International Telecommunication Union and the United Nations. This section illustrates how Russian policymaking discourse in these debates relied on the principles and repertoires of multipolarity.

Preservation of state-based telecommunication governance at the ITU

The International Telecommunication Union was established in 1865 as the International Telegraph Union to coordinate transnational telegraphy (Fari, 2015). As new technologies such as telephony, radio, and satellite appeared, the ITU incorporated them into its mandate (Balbi & Fickers, 2020; Codding, 1995). In keeping with this tradition, from the 1970s through the early 1990s, the ITU strove—but ultimately failed—to become the global authority in data networking development and governance (Schafer, 2020; Rioux et al., 2014; Winseck, 2020). In the closing decade of the twentieth century, global trends of economic liberalisation and technological convergence spurred the debate within the ITU about enhancing the role of the private sector in its operations and about the Union becoming the worldwide champion of telecommunication development and liberalisation (Hills, 2007, Ch. 4). The potential changes were debated and subsequently instituted at the three ITU Plenipotentiary conferences, which took place in Geneva in 1992, Kyoto in 1994, and Minneapolis in 1998 (“ITU Plenipotentiaries”, 1993; MacLean, 1995, 1999). Russian delegations at the Plenipotentiaries applied to the telecommunication domain multipolarity’s foundational propositions of Russia’s greatness, pre-eminence of states and multilateral organisations in global governance, and all countries’ equal access to global governance.

The Russian state’s approach to global communication and internet governance reflects its ruling elites’ conceptions of Russia and its place in the world. In the early 1990s, Russian diplomacy presented the country as at once a new liberal democracy and a historic great power. Accordingly, the Russian delegate at the Plenary Meeting of the Geneva Plenipotentiary in December 1992 portrayed the country simultaneously as a first-time ITU participant and as one of the organisation’s founding members: “This is the first time that a delegation from the Russian Federation is taking part in a Plenipotentiary Conference of the International Telecommunication Union. It will be remembered, however, that Russia was one of the 20 founder States of the Union at the Paris Conference 127 years ago” (Russian Federation, 1992b). Russia’s proposed liberal and great power identities in their respective ways were meant to bolster the country’s legitimacy in global telecommunication governance.

Russia’s dual identity narrative at the ITU obscured the country’s continuities with the Soviet period while emphasising the relation to its past as an imperial great power. Over a third of the Russian delegation’s members at the 1992 Plenipotentiary, including the most senior diplomats, had represented the Soviet Union at the previous Plenipotentiary in 1989 (International Telecommunication Union, 1989, pp. 66-67; 1993, pp. 40-42). Against these material continuities with the ancien régime, introducing the Russian delegation as a first-time participant signalled the Russian state’s symbolic rejection of communism, to recall Kozyrev’s wording, and its aspiration to membership in the liberal community. In the atmosphere of post-Cold War liberal triumphalism, Russia’s alignment with the liberal camp and self-presentation as one of the victors of the Cold War served as justification for its equal role in global governance.

Like Russia’s appeal to its liberal identity, its delegation’s invocation of the Russian Empire’s critical role in the ITU’s inception in the nineteenth century also was meant to render present-day Russia’s voice credible in the debate about the Union’s operations. The Russian Empire was one of the International Telegraph Union’s twenty founding members in 1865 and played a prominent role in its work, including hosting the Union’s 1875 conference in Saint Petersburg. In the decades following Russia’s devastating loss to the Euro-Ottoman coalition in the Crimean War (1853-56), Russia viewed its active involvement in governing global telegraphy as contributing to the restoration of its great power prestige and conveying its belonging within the civilised European community (Siefert, 2020). Russia’s great power imaginary, then, has shaped the logics of its telecommunication diplomacy across centuries.

Whereas in the nineteenth century a country’s partaking in the then-novel domains of international law and multilateral diplomacy signified its enlightened modern nature, the late twentieth century neoliberal culture framed support for state-based governance as retrograde. In the prevailing climate of privatisation of governance, Russia was under pressure to legitimise its state-centric agenda. Tellingly, at the 1992 Geneva Plenipotentiary, the Russian delegate insisted that Russia was “in favour of progressive reforms and against conservatism”, but that its caution against privatisation was meant to ensure that the conference’s decisions ultimately increased the ITU’s efficiency (Russian Federation, 1992b).

In endorsing the continued centrality of national governments at the ITU, Russia appealed to the ITU’s own tradition of state-based governance and to the purported fact that states’ own interests were best served by the existing state-based governance model. The Russian proposal for the work of the 1992 Geneva conference, for example, argued that it was “desirable to maintain some historical continuity and draw on the practical experience accumulated by this venerably old international organisation” in sustaining the pre-existing “role and responsibility of ITU Member countries” (Russian Federation, 1992a, p. 1). While acknowledging the non-state actors’ technical contributions, the proposal invoked ITU traditions in reminding member nations that, “[n]evertheless, it has always been the Administrations of the Member countries of the Union which have […] exercised a leading role” at the ITU (Ibid., p. 3). Put simply, Russia argued that something that wasn’t broken didn’t require fixing.

In addition to a historical argument, the proposal alleged that maintaining the primacy of national governments at the ITU would reflect the supposed international consensus that saw state-based governance as preferable. The document contended that “all countries” recognised telecommunications’ “significance for safeguarding [state] interests at the international level” and therefore “[gave] the State a key say in the management of telecommunications as a whole” (Ibid., p. 3). In conclusion, the document reiterated that the ITU Constitution and Convention, which were being finalised at the Geneva conference, needed to “[m]aintain and consolidate the leading role and responsibility of Administrations in the work of the ITU, which is an intergovernmental specialised agency of the United Nations” (Ibid., p. 3). At the following 1994 Plenipotentiary in Kyoto, the Russian delegation headed by the Minister of Posts and Telecommunications Vladimir Boulgak (1990-1997) continued calling for “preserving each State’s sovereign right to manage its own telecommunications” (Boulgak, 1994) as well as for “preserving the ITU’s pre-eminent world role in the regulation of international telecommunication issues” (Russian Federation, 1994a, p. 4).

The key promise of state-based multilateral governance, according to the Russian multipolarity narrative, is countries’ equal participation in world affairs. To this end, in the 1990s Russia lobbied for the United Nations system’s historical principle of equitable geographic distribution to be officially instituted at the ITU. The principle professes fair distribution of bureaucratic functions among the organisation’s five administrative world regions: the Americas, Western Europe, Eastern Europe and North-eastern Asia, Africa, and Asia and Australasia (Thakur, 1999). In practice, however, distribution of higher-level posts at the ITU historically skewed in favour of the developed West. At the 1994 Plenipotentiary, Russia framed its support for making the principle of equitable geographic distribution mandatory in multipolarity terms of egalitarian global governance: “so that the representatives of the various countries – whether developed or developing, large or small – enjoy equal access to [the ITU administrative] duties” (Russian Federation, 1994b, n.p., RUS/11/7 (MOD) 20-22). Russia’s proposal did not garner enough support in 1994 and was put aside.

At the 1998 Plenipotentiary, Russia warned that failure to uphold the principle of equitable geographic distribution would have “a significant moral and psychological impact which can ultimately affect the effectiveness of the ITU activity” (Russian Federation, 1998, p. 7, RUS/34/17 (MOD) 20-22). Russia insinuated that the absence of equitable geographic distribution (i.e., unipolarity) is not simply immoral, but also inefficient in the long run. Further, Russia’s 1998 proposal decried the fact that because the previous conference had not mandated the principle of equitable geographic distribution, representatives of Asian and Eastern European administrative regions, “which account for more than two-thirds of the world’s population and have enormous economic, technical and intellectual potential have been deprived of elected posts” (Ibid.). As a country that belongs to the ITU’s administrative region of Eastern Europe, Russia was lamenting foremost its own exclusion from this facet of global governance – the very problem that its pursuit of multipolarity was seeking to resolve. Not coincidentally, the Russian proposal’s sentiment regarding the excluded Asian and Eastern European countries’ constructive potential and desire for participation in telecommunication governance closely resembles how the 1993 Russian Foreign Policy Concept articulates one of Russia’s primary national tasks: “to achieve the equal and natural incorporation of the Russian Federation into the world community as a great power that boasts a centuries-long history, unique geopolitical situation, considerable military might, and significant technological, intellectual and ethical capacities” (Russian Federation, 2005 [1993], p. 27). Russia’s obscuring of its national interests in the selfless language of international equity is a reminder that, ultimately, Russia’s pursuit of multipolarity is guided by its desire for full participation in global governance as a recognised great power.

Promotion of state-based international information security at the United Nations

The repertoires of multipolarity advanced at the ITU also informed Russia’s pioneering of the international information security issue at the UN in 1998-99. In a September 1998 letter to the UN Secretary-General Kofi Annan, Russian Foreign Minister Igor Ivanov (1998-2004) urged the international community to place international information security atop the UN agenda (Ivanov, 1998). The letter contained an accompanying draft resolution entitled Developments in the field of information and telecommunications in the context of international security. The two-page resolution, which the UNGA adopted with minor changes, drew attention to the potential malicious use of emerging scientific-technological innovations, encouraged promotion of the consideration of this issue at the international level, and invited UN member states to submit their views on the subject (United Nations General Assembly, 1999). Since 1998, Russia has resubmitted the resolution annually, and the General Assembly has readopted it each time (United Nations Office for Disarmament Affairs, n.d.).

Russia’s resolution over the years contributed to the institutionalisation of internet geopolitics. In August 1999, citing the resolution as its impetus, the UN convened the first forum on international information security, bringing together dozens of high-level governmental and non-governmental experts from around the globe to discuss the issue (United Nations Institute for Disarmament Research, 1999). Further, at the resolution’s suggestion, states began sharing their views on international information security with the Secretary-General. These are collected and published annually under the auspices of the UN and serve as an ongoing intergovernmental discussion platform on the subject (United Nations Office for Disarmament Affairs, n.d.).

Another major outgrowth of the resolution was the Group of Governmental Experts on Developments in the Field of Information and Telecommunications in the Context of International Security (GGE) (United Nations Office for Disarmament Affairs, 2019). First suggested by the 2002 resolution, the GGEs were a series of year-long consultations on cybersecurity norms among up to two dozen national delegations and became a crucial venue for intergovernmental deliberations. Between 2004 and 2017, GGE processes took place five times and produced three expert reports that demonstrated gradual progress in achieving intergovernmental understanding on the foundations of international information security. In 2017, the GGE process split into two parallel discussion tracks spearheaded by the USA and Russia (Henriksen, 2019).

Russia’s promotion of state-based information security governance at the UN in 1998-99 must be understood within the period’s geopolitical context. In the fall of 1998, after two years of Primakov leading the Russian foreign policy, Russia-West relations were generally cooperative, even if the Kozyrev-era language of Russia’s Euro-Atlantic integration and shared liberal destiny had by then subsided. Weeks before Russia introduced the issue at the UN in 1998, for instance, the presidents of Russia and the USA signed a Joint Statement on Common Security Challenges at the Threshold of the Twenty-First Century. Calling Russia and the USA “natural partners in advancing international peace and stability,” the statement identified “mitigating the negative aspects of the information technology revolution” and counteracting “computer and other high-technology crime” among multiple areas for the two countries’ potential cooperation (Clinton & Yeltsin, 1998). Moreover, following the devastating financial crisis that hit Russia in mid-August 1998, the Russian leadership was particularly eager to assuage Western fears of their country’s illiberal turn in the face of economic hardship. As Igor Ivanov emphatically argued in his Plenary address at the UNGA:

From this rostrum I pledge that Russia will not deviate from the path of reform and will do its best to pass with dignity this most difficult test, so as not only to preserve the democratic progress that has been made but also to augment it.

Likewise, Russia’s foreign policy will remain consistent and constructive. It is firmly geared towards building a democratic multipolar world[.] (Ivanov, 1998, p. 20)

Ivanov’s remarks reveal that at the time Russian elites did not view their public commitment to furthering domestic liberalisation, which had begun in the early 1990s, as incongruent with the multipolarity stance in foreign policy.

The original 1998 and subsequent annual draft resolutions on international information security submitted by Russia conveyed its longstanding preference for multilateral global governance. The text of the 1998 resolution called on the international community to promote the issue of information threats at “multilateral levels” and tackle them by “developing international principles” that would enhance global ICT security (United Nations General Assembly, 1999). Russia’s decision to advance the issue of information security via the UN is itself telling of Russia’s normative view of the UN as the preeminent governance venue in a multipolar world. In fact, Russia framed its push to institutionalise information security within the UN expressly in terms of increasing the UN’s efficiency. In his UNGA address in 1998, after voicing Russia’s support for “reforms and changes in United Nations mechanisms that will promote effective consolidation of the United Nations and improve its activities,” Ivanov indicated that “Russia’s initiative to launch a discussion on ways to achieve international information security serves the same goal” (Ivanov, 1998, p. 23; added emphasis).

The 1998 resolution portrayed the latest information and telecommunication technologies as the world’s shared good that ought to be protected from nefarious use. Responsible use of information technologies, according to the text, furthered the “development of civilization”, created opportunities for the “common good of all States”, enhanced the “creative potential of mankind”, and improved the “circulation of information in the global community” (United Nations General Assembly, 1999). The all-embracing tropes—civilization, common good, all states, mankind, global community—drew on Russia’s framing of multipolarity as the egalitarian peaceful order that benefits all countries contrasted with the allegedly conflict-ridden unipolarity that benefits the hegemon alone. When introducing the second iteration of the resolution in the fall of 1999, Russia’s representative Anatoly Antonov, who later served as the country’s ambassador to the United States, insisted that the document was “exclusively non-confrontational and cover[ed] the interests of a broad range of States” (Antonov, 1999, p. 13).

The 1998 resolution called on world governments to share their views on international information security. In August 1999, the UN Secretary-General published the inaugural collection of countries’ replies to that call (UN Secretary-General, 1999). Russia’s contribution to the report was steeped in the logics of multipolarity, detailing its vision of the geopolitical role of information technologies, the primary threats stemming from their misuse, and the required measures for containing these potential dangers. At the outset, Russia’s entry expressed concern that states’ use of information technologies for enhancing their military capabilities “alter[ed] the global and regional balance of forces and g[ave] rise to tension between traditional and emerging centres of power and influence” (UN Secretary-General, 1999, p. 8). This shifting landscape, the entry alleged, could lead to growing non-compliance with “the principles of the sovereign equality of States” and of “non-interference in internal affairs” – the existential conditions for Russia’s own survival and for the functioning of the multipolar world, according to Russian foreign policy discourse (Ibid.).

Russian ruling elites evidently feared the post-Cold War geopolitical balance would tilt further in favour of the most developed powers that could afford to employ the latest information technologies to their strategic advantage, foremost the United States. Given that Russia lacked equivalent resources that would allow it to partake in the scientific-technological race, an unconstrained technological competition would compromise its great power status. As Russia saw it, at the time, “international law ha[d] virtually no means of regulating” such information weaponry (Ibid.). Hence, the document went on to identify potential dangers arising from the misuse of information technologies and to propose solutions for their amelioration.

Russia’s entry defined information security as “including the information and telecommunications infrastructure and information per se” (Ibid., p. 10). That definition signalled Russia’s understanding of the concept as encompassing both hardware and content. Definitions given to the internet and associated technologies shape their design and governance by framing the issues and designating the actors responsible for their solutions (DeNardis, 2020, pp. 189-191). Russia’s expansive socio-technical understanding of information security means that it views information technologies’ material (infrastructure) and symbolic (information) dimensions as falling within the ambit of state regulation domestically and internationally. Russia’s opponents, chiefly Western liberal democracies, privilege a limited technical understanding of information security as pertaining to ICT infrastructures alone and critique Russia’s approach for giving the state a carte blanche for undue content control (Giles & Hagestad II, 2012; Godwin III et al., 2014).

In line with Russia’s two-pronged understanding of information security, its 1999 contribution to the collection of replies warned against states’ adversarial actions at material and symbolic levels of information technologies. Expressing the Kremlin’s proclaimed support for states’ domestic independence and criticism of the US geopolitical domination, the document cautioned that some states may seek to “dominate and control” the information realm and to acquire a “monopoly” over other countries’ informational capabilities, rendering them “technologically dependent” (UN Secretary-General, 1999, p. 9). Specifically, the document noted the dangers of “[u]ncontrolled transboundary dissemination of information” and “[m]anipulation of information flows” aimed at undermining “a State’s political and social system” and eroding its population’s “traditional cultural, moral, ethical and aesthetic values” (Ibid.). These warnings are an early example of post-Soviet Russia’s explicit opposition to the free flow of information doctrine and support for nationally bounded information segments, which would form the crux of Russia’s global internet governance vision in the following decades.

To safeguard the world against varied information threats, Russia’s entry advocated relying on multilateral diplomacy and international law. As usual, Russia painted such multilateral governance as the system that took into account all countries’ interests, unlike the egoistic unipolar system. Russia’s entry suggested locating “all existing positions and views” on information security in order to identify countries’ “common approaches” that would underlie “a multilateral international legal instrument” to regulate this domain (Ibid.). For its part, Russia proposed a number of state-centric initiatives for “an international legal basis” of information security, reflecting its preference for binding intergovernmental agreements as a key digital governance mechanism.

In light of the resolution’s relative prominence within the international policymaking community, Russian diplomats have often invoked the document over the past two decades to portray Russia as a pioneering internet governance power whose efforts to reshape the global internet order enjoyed widespread support and, therefore, legitimacy (e.g., Boyko, 2016; Medvedev, 2015; Russian Ministry of Foreign Affairs, 2008, pp. 26-27). As early as October 1999, Russia’s delegate at the UN, Anatoly Antonov, reminded his colleagues that the previous year “Russia for the first time took the initiative of introducing a draft resolution” that evolved into the “discussion initiated by the Russian Federation of an important and topical issue — the problem of information security” (Antonov, 1999, p. 12, emphasis added). Like the reference to Russia’s status as an ITU co-founder at the 1992 ITU Plenipotentiary, Antonov’s words were meant to confer historical credibility upon Russia’s state-centric position on international information security by portraying Russia as a progenitor of this very geopolitical issue.

Russia’s approach to global communication, formed across both the liberal and statist foreign policy orientations of the 1990s, went on to inform its global internet governance agenda during Vladimir Putin’s rule in the 2000s-10s, even though Putin’s regime has constructed its image in explicit opposition to Russia’s liberal period. From the early 2000s, the Kremlin has legitimised the regime’s growing authoritarianism with the narrative that Putin’s assertive policies helped Russia overcome “the turbulent 1990s”, including the country’s alleged subservience to the West and loss of international prestige, and enjoy “the stable 2000s” (Malinova, 2020). By illuminating how the current administration’s digital multipolarity philosophy ultimately draws upon Kozyrev-era multipolarity discourse, the next section further highlights the essential role of historical identity narratives in understanding Russia’s internet governance.

Russia’s digital multipolarity and global internet governance in the twenty-first century

Since the internet’s emergence in the 1970s under the auspices of the US Department of Defence, its design and governance have been subject to negotiations and power struggles among public and private actors within the United States and internationally (Abbate, 1999; Braman, 2011, 2012; Russell, 2014, Ch. 8). Following the internet’s rapid popularisation and commercialisation in the mid-1990s, internet governance quickly acquired a geopolitical dimension (Braman, 2004; Paré, 2002). At the time, multiple world powers called for placing the internet under international rule. Instead, in order to secure the US government’s historical privilege over the now ascendant technology, the White House facilitated the placement of US-based non-governmental technical bodies in charge of the global internet’s critical resources and standards (Mueller, 2002).

In response to this novel US-centric internet governance arrangement, which excluded the international community, the 1998 ITU Plenipotentiary proposed convening an international forum to discuss the socio-political aspects of digital technologies (Kleinwächter, 2004). The resulting two-phase event, the World Summit on the Information Society (WSIS), took place under the auspices of the UN in 2003 and 2005. Attended by over 11,000 participants, including dozens of heads of state and ministers, WSIS signalled the expansion of internet governance from a technical niche into a standalone public policy domain (Mueller, 2010, Ch. 3).

WSIS elevated multistakeholderism as the foundational principle of internet governance. The ideal of multistakeholderism promises an egalitarian distribution of governing functions among governmental and non-governmental stakeholders while casting state-based governance as non-democratic. In practice, digital corporations and major states dominate internet policymaking relative to civil society actors and less powerful governments, thereby reinforcing existing power imbalances rather than levelling the internet governance field (Hofmann, 2016; Radu et al., 2014, Part 2). As home to the global internet’s critical infrastructures and largest digital corporations, the United States is the primary beneficiary of the multistakeholder status quo (Powers & Jablonski, 2015).

In the lead-up to the WSIS and thereafter, Russia promoted state-centric internet multilateralism in opposition to the multistakeholder model. In his Plenary addresses at both phases of the WSIS, Russia’s Minister for Information Technologies and Communications, Leonid Reiman (1999-2008), emphasised Russia’s view that states, via the UN and the ITU, should play the leading role in governing the global information society, with secondary consultative roles reserved for the private sector and other stakeholders (Reiman, 2003, 2005). In the years since the WSIS, the multipolarity framework has continuously informed Russia’s increasingly assertive promotion of this state-centric internet governance hierarchy. At the inaugural International Cybersecurity Congress in Moscow in 2018, a high-profile annual conference organised by Russia’s largest state-affiliated bank, Sberbank, Putin argued for the need

[T]o develop common rules of the game and binding international standards [for cyberspace] that will take into account the rights and interests of all countries as much as possible and will be universal and acceptable for all. We have seen more than once that some countries’ egoism and self-centred policies are damaging the international information stability.

[…] I would like to say that Russia has advanced a number of initiatives on the rules of responsible behaviour of states in the information sphere, legal mechanisms for fighting cybercrime and international internet governance.

We intend to continue to promote these initiatives, primarily at the most highly respected and influential international organisation, the UN. (Putin, 2018)

Putin’s remarks conveyed key propositions of Russia’s multipolarity narrative. In line with multipolarity’s critique of the unipolar system as innately unstable and prone to conflict, Putin identified instability of the international information environment as the primary issue that internet governance should address. The Russian president singled out the egoism of the United States, referred to with the thinly veiled euphemism of “some countries”, as the root cause of this instability. Putin’s proposed mechanisms for tackling the instability included state-based multilateralism and international law, particularly the time-tested United Nations. According to the statement, Russia’s state-centric approach, by contrast with US egoism, respected the rights and interests of all countries. In appealing to the UN’s history and to states’ own interests, Putin’s remarks mirrored Russia’s two-pronged argument for state-based governance at the ITU in 1992-94, which had similarly invoked the ITU’s tradition and an alleged preference of all countries for the state-based status quo. Drawing on another long-standing trope of Russian global communication diplomacy that hails Russia’s role within policymaking debates, Putin portrayed Russia as a key internet power by pointing to its past and future internet governance initiatives.

Russia institutionalised its digital multipolarity narrative at the international level through a proliferation of internet governance initiatives with allied governments. The two primary organisations that have aided Russia’s international advancement of digital multipolarity have been BRICS (Brazil, Russia, India, China, and South Africa) and the Shanghai Cooperation Organisation (SCO), consisting of China, Kazakhstan, Kyrgyzstan, Russia, Tajikistan and Uzbekistan, and, since 2017, also India and Pakistan. In 2011 and 2015, for example, SCO members proposed the International code of conduct for information security to the UNGA (Li et al., 2011; Liu et al., 2015). This non-binding set of principles for regulating states’ behaviour in cyberspace called for “the establishment of multilateral, transparent and democratic international Internet governance mechanisms,” among other reforms (Liu et al., 2015, p. 5). The proposal equated multilateral internet governance with transparency and democracy, while implying that it is the current system of unilateral US-based internet governance that is non-democratic and non-transparent for its lack of accountability to the international community. In another instance, after gathering at Russia’s initiative in 2015, BRICS communication ministers reasserted “the right of all States to establish and implement policies for information and communication networks in their territories in accordance with their respective history, culture, religion and social factors” (BRICS, 2015). Echoing Russia’s rhetoric during the UN international information security debates in the late 1990s about the need to protect states’ political systems and traditional values, the BRICS communique argued for aligning countries’ informational and territorial borders to protect states’ domestic sovereignty and, consequently, global political and cultural diversity.

With the deterioration of Russia-West relations in recent years, Russian official rhetoric toward the West has become increasingly hostile while maintaining multipolarity’s long-standing claims and metaphors (e.g., Putin, 2013; Lavrov, 2017). In late 2018, the Russian Foreign Ministry issued a strongly worded condemnation of liberal democracies for not supporting Russia’s internet governance initiatives at the UNGA that year. The official statement argued that in opposing Russia’s internet multilateralism “the Western countries have set themselves off against the international community” and “have only their own mercenary goals in mind” (Russian Ministry of Foreign Affairs, 2018). The statement’s rhetoric was more confrontational toward the West than Russia’s multipolarity narrative in the 1990s. At the same time, it advanced digital multipolarity’s decades-old foundational repertoires in arguing that “all countries, regardless of their level of technological development, have a right to take a direct part in talks on [international information security] at the UN and to influence the decision-making process,” and that only such egalitarian governance can foster “a fair and equal world order in the digital sphere” (Ibid.). Two decades earlier, at the ITU, Russia had advanced a similarly worded argument in support of the principle of equitable geographic distribution of posts.

Conclusion: Toward a cultural framework of internet governance

This essay explored the cultural logics underpinning the Russian state’s geopolitical pursuit of multilateral internet governance. Over the past two decades, Russia has emerged as a leading advocate of transferring the global internet’s key governing functions and infrastructures away from the ambit of non-governmental organisations historically tied to the US public and private sectors toward state-based international organisations, such as the United Nations and its specialised agency, the International Telecommunication Union. Since Russia’s internet governance activism arose during Vladimir Putin’s rule in the 2000s-10s, most scholars have interpreted Russia’s global internet agenda as an expression of Putin’s regime, characterised by increasing authoritarianism and anti-Western illiberalism.

In this article, I offered an alternative analytical lens to argue that Russian ruling elites’ perception of Russia as a historic great power with an inherent right to full participation in global governance has directed the Russian state’s approach to global internet governance—what I conceptualised as digital multipolarity. As a self-perceived great power, Russia has viewed the US-led unipolar order that emerged in the post-Cold War environment as curtailing its domestic sovereignty and historical role in managing the international system. Consequently, from its first months of post-Soviet independence, Russia has advanced the normative idea of a multipolar world order that would be based on the pre-eminence in global governance of the United Nations and its Security Council.

To illuminate how Russian elites’ great power and multipolarity imaginaries have continuously underlain Russia’s digital multipolarity advocacy, from the enthused liberalism of the early 1990s to the vehement illiberalism of the second half of the 2010s, I examined two sets of Russian policymaking initiatives in the 1990s. Each initiative is rooted in the logics and language of multipolarity. One is Russia’s defence of the ITU’s internal state-based governance and of the Union’s leading role in global telecommunication governance. The other is Russia’s promotion of state-based international information security at the UN. By delimiting the crux of my analysis to Russia’s first post-Soviet decade, I disentangled Russia’s political developments under Putin’s rule from the great power imaginary that had informed Russian foreign policy for centuries.

My focus on the ideational factors shaping the Russian state’s internet governance philosophy does not negate the Kremlin’s instrumental use of information policy and digital technologies to promote its geopolitical agenda and exert greater social and political control at home. I have shown, rather, that cultural frameworks, such as elites’ ingrained ideas about the country’s national identity and place in world history and contemporary politics, can shape states’ global communication agenda across political regimes and ideologies.

References

Abbate, J. (1999). Inventing the internet. MIT Press.

Ambrosio, T. (2005). Challenging America’s global preeminence: Russia’s quest for multipolarity. Routledge. https://doi.org/10.4324/9781315260686

Antonov, A. (1999). UN General Assembly, 54th session. 16th Meeting, UN First Committee (A/C.1/54/PV.16), 12–13. https://undocs.org/en/A/C.1/54/PV.16

Balbi, G., & Fickers, A. (Eds.). (2020). History of the International Telecommunication Union (ITU): Transnational techno-diplomacy from the telegraph to the Internet. De Gruyter. https://doi.org/10.1515/9783110669701

Boulgak, V. (1994). Statement by the head of the delegation of the Russian Federation, Dr. Vladimir Boulgak. Minutes of the First Plenary Meeting (Document 83-E), 1994 ITU Plenipotentiary Conference, 16–17. http://search.itu.int/history/HistoryDigitalCollectionDocLibrary/4.15.51.en.101.pdf

Boyko, S. (2016). UN Group of Governmental Experts on Developments in the Field of Information and Telecommunications in the Context of International Security: A View from the Past into the Future (Gruppa pravitelʹstvennykh ėkspertov OON po dostizhenii͡am v sfere informatizat͡sii i telekommunikat͡siĭ v kontekste mezhdunarodnoĭ bezopasnosti: Vzgli͡ad iz proshlogo v budushchee). International Affairs (Russia), 8. https://interaffairs.ru/jauthor/material/1718

Braman, S. (Ed.). (2004). The emergent global information policy regime. Palgrave. https://doi.org/10.1057/9780230377684

Braman, S. (2011). The framing years: Policy fundamentals in the internet design process, 1969–1979. The Information Society, 27(5), 295–310. https://doi.org/10.1080/01972243.2011.607027

Braman, S. (2012). Internationalization of the Internet by design: The first decade. Global Media and Communication. https://doi.org/10.1177/1742766511434731

BRICS. (2015). Communique of BRICS ICT Ministers on results of the meeting “Expanding of collaboration in spheres of telcom and infocommunications” [Official statement]. Ministry of Digital Development, Communications and Mass Media of the Russian Federation. https://digital.gov.ru/en/events/34194/

Brousseau, E., Marzouki, M., & Méadel, C. (Eds.). (2012). Governance, regulation and powers on the internet. Cambridge University Press.

Bygrave, L. A., & Bing, J. (Eds.). (2009). Internet governance: Infrastructure and institutions. Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199561131.001.0001

Chebankova, E. (2017). Russia’s idea of the multipolar world order: Origins and main dimensions. Post-Soviet Affairs, 33(3), 217–234. https://doi.org/10.1080/1060586X.2017.1293394

Chenou, J.-M. (2014). From cyber-libertarianism to neoliberalism: Internet exceptionalism, multi-stakeholderism, and the institutionalisation of internet governance in the 1990s. Globalizations, 11(2), 205–223. https://doi.org/10.1080/14747731.2014.887387

Chernenko, E. (2018). Russia’s cyber diplomacy (Chaillot Paper No. 148, pp. 43–52). European Union Institute for Security Studies. https://www.iss.europa.eu/content/hacks-leaks-and-disruptions-%E2%80%93-russian-cyber-strategies

Choucri, N., & Clark, D. D. (2019). International relations in the cyber age: The co-evolution dilemma. The MIT Press.

Claessen, E. (2020). Reshaping the internet – the impact of the securitisation of internet infrastructure on approaches to internet governance: The case of Russia and the EU. Journal of Cyber Policy, 5(1), 140–157. https://doi.org/10.1080/23738871.2020.1728356

Clinton, W. J., & Yeltsin, B. (1998). Joint statement on common security challenges at the threshold of the twenty-first century. https://www.govinfo.gov/content/pkg/WCPD-1998-09-07/pdf/WCPD-1998-09-07-Pg1696.pdf

Clunan, A. L. (2014). Historical aspirations and the domestic politics of Russia’s pursuit of international status. Communist and Post-Communist Studies, 47(3), 281–290. https://doi.org/10.1016/j.postcomstud.2014.09.002

Codding, Jr, G. A. (1995). The International Telecommunication Union: 130 years of telecommunications regulation. Denver Journal of International Law and Policy, 23(3), 501–512. https://digitalcommons.du.edu/djilp/vol23/iss3/3/

Deibert, R. J., & Crete-Nishihata, M. (2012). Global governance and the spread of cyberspace controls. Global Governance, 18(3), 339–361. https://doi.org/10.1163/19426720-01803006

DeNardis, L. (2014). The global war for internet governance. Yale University Press. https://doi.org/10.12987/yale/9780300181357.001.0001

DeNardis, L. (2020). The internet in everything: Freedom and security in a world with no off switch. Yale University Press.

Dumitrica, D. (2015). Imagining the Canadian internet: A case of discursive nationalization of technology. Studies in Ethnicity and Nationalism, 15(3), 448–473. https://doi.org/10.1111/sena.12152

Ebert, H., & Maurer, T. (2013). Contested cyberspace and rising powers. Third World Quarterly, 34(6), 1054–1074. https://doi.org/10.1080/01436597.2013.802502

Fari, S. (2015). The formative years of the telegraph union. Cambridge Scholars Publishing.

Russian Federation. (1994a). Proposals for amendments to the Constitution and Convention of the ITU. http://search.itu.int/history/HistoryDigitalCollectionDocLibrary/4.15.51.en.101.pdf

Russian Federation. (2002). National security concept (1997) (T. Shakleina, Ed.; Vol. IV, pp. 51–74). Moscow State Institute of International Relations.

Russian Federation. (2005). Foreign policy conception of the Russian Federation (1993). In A. Melville & T. Shakleina (Eds.), Russian foreign policy in transition: Concepts and realities (pp. 28–64). Central European University Press.

Russian Federation. (1994b). Draft resolution [RUS/2]: Possible ways of improving the efficiency of ITU’s work. Proposals for the Work of the Conference (Document 47-E), 4–6. http://search.itu.int/history/HistoryDigitalCollectionDocLibrary/4.15.51.en.101.pdf

Russian Federation. (1992). Statement by the delegate of the Russian Federation. Minutes of the First Plenary Meeting (Document 77 (Rev.1)-E), 1992 ITU Additional Plenipotentiary Conference, Annex 3, 13. http://search.itu.int/history/HistoryDigitalCollectionDocLibrary/4.14.51.en.101.pdf

Franke, U., & Pallin, C. V. (2012). Russian Politics and the Internet in 2012 (FOI-R--3590--SE). FOI: Swedish Defence Research Agency. https://www.foi.se/report-search/pdf?fileName=D%3A%5CReportSearch%5CFiles%5Cebb043f5-26fc-41be-982b-da589398eeb7.pdf

Freedman, L., & Wilkinson, B. (2013). Autocracy rising: The internet in a multipolar world. Index on Censorship, 42(2), 59–61. https://doi.org/10.1177/0306422013492258

Gel’man, V. (2015). Authoritarian Russia: Analyzing post-Soviet regime changes. University of Pittsburgh Press.

Giles, K., & Hagestad II, W. (2013). Divided by a common language: Cyber definitions in Chinese, Russian and English. 5th International Conference on Cyber Conflict (CyCon) Proceedings, 413–429. https://ccdcoe.org/uploads/2018/10/CyCon_2013_Proceedings.pdf

Gill, R. (2018). Discourse. In M. Kackman & C. Kearney (Eds.), The craft of criticism: Critical media studies in practice. Routledge. https://doi.org/10.4324/9781315879970-3

Godwin, III, J. B., Kulpin, A., & Rauscher, K. F. (2014). Critical terminology foundations 2: Russia-U.S (Policy Report No. 2/2014). The EastWest Institute; Information Security Institute, Moscow State University. https://www.files.ethz.ch/isn/178418/terminology2.pdf

Henriksen, A. (2019). The end of the road for the UN GGE process: The future regulation of cyberspace. Journal of Cybersecurity, 5(1). https://doi.org/10.1093/cybsec/tyy009

Hills, J. (2007). Telecommunications and Empire. University of Illinois Press.

Hofmann, J. (2016). Multi-stakeholderism in Internet governance: Putting a fiction into practice. Journal of Cyber Policy, 1(1), 29–49. https://doi.org/10.1080/23738871.2016.1158303

International Telecommunication Union. (n.d.). Plenipotentiary conference (Geneva, 1959). http://handle.itu.int/11.1004/020.1000/4.9

International Telecommunication Union. (1989). List of participants, 1989 plenipotentiary conference. International Telecommunication Union; ITU Library & Archives. http://search.itu.int/history/HistoryDigitalCollectionDocLibrary/4.13.44.m7.100.pdf

International Telecommunication Union. (1993). List of participants, 1992 additional plenipotentiary conference. International Telecommunication Union; ITU Library & Archives. http://search.itu.int/history/HistoryDigitalCollectionDocLibrary/4.14.44.m7.100.pdf

ITU Plenipotentiaries agree on a new ITU. (1993). Telecommunication Journal, 60(2), 55–62. http://search.itu.int/history/HistoryDigitalCollectionDocLibrary/4.14.57.en.101.pdf

Ivanov, I. (1998a). UN General Assembly, 53rd session. United Nations; United Nations Official Records. https://undocs.org/en/A/53/PV.9

Ivanov, I. (1998b). Letter dated 23 September 1998 from the Minister for Foreign Affairs of the Russian Federation addressed to the Secretary-General. https://undocs.org/en/A/C.1/53/3

Jasanoff, S., & Kim, S.-H. (2015). Dreamscapes of modernity: Sociotechnical imaginaries and the fabrication of power. University of Chicago Press. https://doi.org/10.7208/chicago/9780226276663.001.0001

Kennedy, D. (2013). Deciphering Russia: Russia’s perspectives on internet policy and governance. Global Partners Digital. https://www.gp-digital.org/wp-content/uploads/pubs/FINAL%20-%20Deciphering%20Russia.pdf

Kiggins, R. (2012). U.S. identity, security, and governance of the internet. In S. S. Costigan & J. Perry (Eds.), Cyberspaces and global affairs (pp. 189–202). Ashgate.

Kleinwächter, W. (2004). Beyond ICANN vs ITU? How WSIS tries to enter the new territory of internet governance. International Communication Gazette, 66(3–4), 233–251. https://doi.org/10.1177/0016549204043609

Kohl, U. (Ed.). (2017). The net and the nation state: Multidisciplinary perspectives on internet governance. Cambridge University Press.

Kolstø, P., & Blakkisrud, H. (Eds.). (2016). The new Russian nationalism: Imperialism, ethnicity and authoritarianism 2000–2015. Edinburgh University Press. https://doi.org/10.3366/edinburgh/9781474410427.001.0001

Kozyrev, A. (1992). Provisional verbatim record of the 6th meeting. https://undocs.org/en/A/47/PV.6

Kozyrev, A. (1994a). The lagging partnership. Foreign Affairs, 73(3), 59–71.

Kozyrev, A. (1994b). Russia and the U.S.: Partnership is not premature, it is overdue (Rossii͡a i SSHA: partnerstvo ne prezhdevremenno, a zapazdyvaet). Izvestia, 3.

Lavrov, S. (2017, December). Foreign Minister Sergey Lavrov’s remarks and replies to media questions during the Government Hour in the Federation Council of the Federal Assembly of the Russian Federation. The Ministry of Foreign Affairs of the Russian Federation. http://www.mid.ru/en/press_service/video/-/asset_publisher/i6t41cq3VWP6/content/id/2992396

Li, B., Churkin, V., Aslov, S., & Askarov, M. (2011). International code of conduct for information security. United Nations. https://undocs.org/en/A/66/359

Liu, J., Abdrakhmanov, K., Kydyrov, T., Churkin, V., Mahmadaminov, M., & Madrakhimov, M. (2015). International code of conduct for information security. United Nations. https://undocs.org/en/A/69/723

Lo, B. (2002). Russian foreign policy in the post-Soviet era: Reality, illusion and mythmaking. Palgrave Macmillan. https://doi.org/10.1057/9781403920058

MacLean, D. J. (1995). A new departure for the ITU: An inside view of the Kyoto Plenipotentiary Conference. Telecommunications Policy, 19(3), 177–190. https://doi.org/10.1016/0308-5961(95)00002-N

MacLean, D. J. (1999). Open doors and open questions: Interpreting the results of the 1998 ITU Minneapolis Plenipotentiary Conference. Telecommunications Policy, 23(2), 147–158. https://doi.org/10.1016/S0308-5961(98)00084-6

Makarychev, A., & Morozov, V. (2011). Multilateralism, multipolarity, and beyond: A menu of Russia’s policy strategies. Global Governance, 17(3), 353–373. https://doi.org/10.1163/19426720-01703006

Malinova, O. (2020). Framing the collective memory of the 1990s as a legitimation tool for Putin’s regime. Problems of Post-Communism, 1–13. https://doi.org/10.1080/10758216.2020.1752732

Mansell, R. (2012). Imagining the internet: Communication, innovation, and governance. Oxford University Press.

Maréchal, N. (2017). Networked authoritarianism and the geopolitics of information: Understanding Russian internet policy. Media and Communication, 5(1), 29–41. https://doi.org/10.17645/mac.v5i1.808

Maurer, T. (2011). Cyber norm emergence at the United Nations—An analysis of the UN’s activities regarding cyber-security (2011-11) [Discussion Paper]. Harvard Kennedy School, Belfer Center for Science and International Affairs. https://www.belfercenter.org/sites/default/files/files/publication/maurer-cyber-norm-dp-2011-11-final.pdf

Medvedev, D. (2015, December 16). Address at the 2nd World Internet Conference. The Russian Government. http://government.ru/en/news/21075/

Miskimmon, A., & O’Loughlin, B. (2017). Russia’s narratives of global order: Great power legacies in a polycentric world. Politics and Governance, 5(3), 111–120. https://doi.org/10.17645/pag.v5i3.1017

Morgus, R. (2018). The spread of Russia’s digital authoritarianism. In N. D. Wright (Ed.), AI, China, Russia, and the global order: Technological, political, global, and creative perspectives (pp. 85–93). U.S. Department of Defense.

Mueller, M. (2002). Ruling the root: Internet governance and the taming of cyberspace. The MIT Press.

Mueller, M. (2010). Networks and states: The global politics of internet governance. MIT Press.

Mueller, M. (2017). Will the internet fragment?: Sovereignty, globalization and cyberspace. Polity.

Musiani, F., Cogburn, D. L., DeNardis, L., & Levinson, N. S. (Eds.). (2016). The turn to infrastructure in internet governance. Palgrave Macmillan. https://doi.org/10.1057/9781137483591

Neumann, I. B. (2008a). Russia as a great power, 1815–2007. Journal of International Relations and Development, 11(2), 128–151. https://doi.org/10.1057/jird.2008.7

Neumann, I. B. (2008b). Russia’s standing as a great power, 1494–1815. In T. Hopf (Ed.), Russia’s European choice. Palgrave Macmillan. https://doi.org/10.1057/9780230612587_2

Neumann, I. B. (2016). Russia and the idea of Europe: A study in identity and international relations (2nd ed.). Routledge. https://doi.org/10.4324/9781315646336

Nocetti, J. (2015). Contest and conquest: Russia and global internet governance. International Affairs, 91(1), 111–130. https://doi.org/10.1111/1468-2346.12189

O’Hara, K., & Hall, W. (2018). Four internets: The geopolitics of digital governance (No. 206; CIGI Papers). https://www.cigionline.org/sites/default/files/documents/Paper%20no.206web.pdf

Paré, D. J. (2002). Internet governance in transition: Who is the master of this domain? Rowman & Littlefield Publishers.

Pigman, L. (2019). Russia’s vision of cyberspace: A danger to regime security, public safety, and societal norms and cohesion. Journal of Cyber Policy, 4(1), 22–34. https://doi.org/10.1080/23738871.2018.1546884

Pohle, J., Hösl, M., & Kniep, R. (2016). Analysing internet policy as a field of struggle. Internet Policy Review, 5(3). https://doi.org/10.14763/2016.3.412

Polatin-Reuben, D., & Wright, J. (2014). An internet with BRICS characteristics: Data sovereignty and the balkanisation of the internet. In 4th USENIX workshop on free and open communications on the internet. FOCI’14, San Diego. https://www.usenix.org/conference/foci14/workshop-program/presentation/polatin-reuben

Polyakova, A., & Meserole, C. (2019). Exporting digital authoritarianism: The Russian and Chinese models (Democracy & Disorder) [Policy Brief]. The Brookings Institution. https://www.brookings.edu/wp-content/uploads/2019/08/FP_20190827_digital_authoritarianism_polyakova_meserole.pdf

Powers, S. M., & Jablonski, M. (2015). The real cyber war: The political economy of internet freedom. University of Illinois Press. https://doi.org/10.5406/illinois/9780252039126.001.0001

Price, M. E. (2017). The global politics of internet governance: A case study in closure and technological design. In D. R. McCarthy (Ed.), Technology and world politics. Routledge. https://doi.org/10.4324/9781317353836-7

Prizel, I. (1998). National identity and foreign policy: Nationalism and leadership in Poland, Russia and Ukraine. Cambridge University Press.

Putin, V. (2013, September 19). Meeting of the Valdai International Discussion Club. Kremlin. http://en.kremlin.ru/events/president/news/19243

Putin, V. (2018, July 6). Plenary session of the International Cybersecurity Congress. Kremlin. http://en.kremlin.ru/events/president/news/57957

Putin, V. (2019, October 29). Monument to Yevgeny Primakov unveiled in Moscow. Kremlin. http://en.kremlin.ru/events/president/news/61929

Radu, R. (2019). Negotiating internet governance. Oxford University Press. https://doi.org/10.1093/oso/9780198833079.001.0001

Radu, R., Chenou, J.-M., & Weber, R. H. (Eds.). (2014). The evolution of global internet governance. Springer. https://doi.org/10.1007/978-3-642-45299-4

Rebello, K. (2017). Building walls with ‘BRICS’? Rethinking internet governance and normative change in a multipolar world. In A. Bower & M. Peter (Eds.), Rising powers and global governance: Opportunities, challenges, and change (pp. 25–39). Centre for Global Constitutionalism, University of St Andrews. http://cgc.wp.st-andrews.ac.uk/files/2017/03/CGC-Junior-Scholar-WP-Series-2017-FINAL.pdf#page=31

Reiman, L. (2003). Vystuplenie ministra rossiĭskoĭ federat͡sii po svi͡azi i informatizat͡sii, glavy delegat͡sii Rossiĭskoĭ Federat͡sii na Vsemirnoĭ vstreche na vysshem urovne po voprosam informat͡sionnogo obshchestva L.D [Address by Minister for Communications and Informatization of the Russian Federation, Head of the Russian delegation, L.D. Reiman at the Plenary Session 1 of the World Summit on the Information Society]. International Telecommunication Union. https://www.itu.int/net/wsis/geneva/coverage/statements/russia/ru-ru.pdf

Reiman, L. (2005, November). Text of the speech by the minister for information technologies and communication of the Russian Federation Leonid Reiman at the World Summit on the Information Society. Fourth Plenary Meeting, General Debate, World Summit on the Information Society. https://www.itu.int/net/wsis/tunis/statements/docs/g-russia/1-ru.pdf

Ringmar, E. (1996). Identity, interest and action: A cultural explanation of Sweden’s intervention in the Thirty Years War. Cambridge University Press.

Ringmar, E. (2002). The recognition game: Soviet Russia against the West. Cooperation and Conflict. https://doi.org/10.1177/0010836702037002973

Rioux, M., Adam, N., & Company Pérez, B. (2014). Competing institutional trajectories for global regulation: Internet in a fragmented world. In R. Radu, J.-M. Chenou, & R. H. Weber (Eds.), The evolution of global internet governance (pp. 37–55). Springer. https://doi.org/10.1007/978-3-642-45299-4_3

Rosenbach, E., & Mansted, K. (2019). The geopolitics of information (Defending Digital Democracy Project) [Paper]. Belfer Center for Science and International Affairs, Harvard Kennedy School. https://www.belfercenter.org/sites/default/files/2019-08/GeopoliticsInformation.pdf

Russell, A. L. (2014). Open standards and the digital age: History, ideology, and networks. Cambridge University Press.

Russian Federation. (1992). Proposals for the work of the conference (Document 9-E). International Telecommunication Union. http://search.itu.int/history/HistoryDigitalCollectionDocLibrary/4.14.51.en.101.pdf#page=112

Russian Ministry of Foreign Affairs. (2008). Foreign policy and diplomatic activity of the Russian Federation in 2007. Russian Ministry of Foreign Affairs. https://www.mid.ru/documents/10180/873584/Obzor2008.doc/3d343752-7ddf-4b4f-9da2-b2fcea7f9cda

Russian Ministry of Foreign Affairs. (2018, December 7). Press release on the adoption of a Russian resolution on international information security at the UN General Assembly. Russian Ministry of Foreign Affairs. https://www.mid.ru/ru/mezdunarodnaa-informacionnaa-bezopasnost/-/asset_publisher/UsCUTiw2pO53/content/id/3437775?p_p_id=101_INSTANCE_UsCUTiw2pO53&_101_INSTANCE_UsCUTiw2pO53_languageId=en_GB

Schafer, V. (2020). The ITU facing the emergence of the internet, 1960s–Early 2000s. In G. Balbi & A. Fickers (Eds.), History of the International Telecommunication Union (ITU) (pp. 321–344). https://doi.org/10.1515/9783110669701-014

Schulte, S. R. (2013). Cached: Decoding the internet in global popular culture. New York University Press.

Schwartz-Shea, P., & Yanow, D. (2012). Interpretive research design: Concepts and processes. Routledge. https://doi.org/10.4324/9780203854907

UN Secretary-General. (1999). Developments in the field of information and telecommunications in the context of international security: Report of the Secretary-General (A/54/213). United Nations. https://undocs.org/en/A/54/213

Siefert, M. (2020). The Russian Empire and the International Telegraph Union, 1856–1875. In G. Balbi & A. Fickers (Eds.), History of the International Telecommunication Union (ITU) (pp. 15–36). De Gruyter. https://doi.org/10.1515/9783110669701-002

Silvius, R. (2016). Culture, political economy and civilisation in a multipolar world order: The case of Russia. Routledge. https://doi.org/10.4324/9781315665917

Soldatov, A., & Borogan, I. (2015). The red web: The struggle between Russia’s digital dictators and the new online revolutionaries. PublicAffairs.

Stadnik, I. (2019). Sovereign runet: What does it mean? [White Paper]. Georgia Tech School of Public Policy. https://www.internetgovernance.org/wp-content/uploads/IGPWhitePaper_STADNIK_RUNET-1.pdf

Taylor, C. (2003). Modern social imaginaries. Duke University Press.

Thakur, R. C. (1999). What is equitable geographic representation in the 21st century: Report of a seminar held by the International Peace Academy and the United Nations University. The United Nations University. https://digitallibrary.un.org/record/618091/files/equitable.pdf

Thorun, C. (2009). Explaining change in Russian foreign policy: The role of ideas in post-Soviet Russia’s conduct towards the West. Palgrave Macmillan. https://doi.org/10.1057/9780230589964

Tolz, V. (2001). Russia. Bloomsbury Academic.

Trenin, D. (2011). Of power and greatness (P. Dutkiewicz & D. Trenin, Eds.). New York University Press.

Tsygankov, A. P. (2019). Russia’s foreign policy: Change and continuity in national identity (5th ed.). Rowman & Littlefield.

United Nations General Assembly. (1999). Developments in the field of information and telecommunications in the context of international security, A/RES/53/70. United Nations. http://undocs.org/A/RES/53/70

United Nations Institute of Disarmament Research. (1999). Developments in the field of information and telecommunications in the context of international security, Geneva—25-26 August 1999. Private discussion meeting hosted by DDA and UNIDIR. United Nations. https://www.unidir.org/sites/default/files/conferences/pdfs/summary-eng-0-25.pdf

United Nations Office for Disarmament Affairs. (n.d.). Developments in the field of information and telecommunications in the context of international security. United Nations Office for Disarmament Affairs. https://www.un.org/disarmament/ict-security/

United Nations Office for Disarmament Affairs. (2019). Fact sheet: Developments in the field of information and telecommunications in the context of international security. United Nations. https://unoda-web.s3.amazonaws.com/wp-content/uploads/2019/07/Information-Security-Fact-Sheet-July-2019.pdf

Weldes, J. (1999). Constructing national interests: The United States and the Cuban missile crisis. University of Minnesota Press.

Winseck, D. (2017). The geopolitical economy of the global internet infrastructure. Journal of Information Policy, 7, 228–267. https://doi.org/10.5325/jinfopoli.7.2017.0228

Winseck, D. (2020). Is the International Telecommunication Union still relevant in “the internet age”? Lessons from the 2012 World Conference on International Telecommunications (WCIT). In G. Balbi & A. Fickers (Eds.), History of the International Telecommunication Union (ITU) (pp. 135–166). De Gruyter. https://doi.org/10.1515/9783110669701-007

Yeltsin, B., & Zemin, J. (1997). Russian-Chinese joint declaration on a multipolar world and the establishment of a new international order (A/52/153-S/1997/384). United Nations. https://digitallibrary.un.org/record/234074/files/A_52_153_S_1997_384-EN.pdf

Zhao, Y. (2015). The BRICS formation in reshaping global communication: Possibilities and challenges. In K. Nordenstreng & D. Thussu (Eds.), Mapping BRICS media (pp. 66–86). Routledge. https://doi.org/10.4324/9781315726212-5

Australia’s encryption laws: practical need or political strategy?

Introduction

Terrorist groups commonly use encrypted messaging applications to conceal their activities while recruiting new members, spreading propaganda and planning their attacks (Graham, 2016; Smith, 2017). Freely available smartphone applications like WhatsApp, Telegram and Facebook Messenger employ end-to-end encryption, which is so secure that the content of the messages cannot be read by any third parties, including the technology companies that build the product (Lewis, Zheng, & Carter, 2017). This problem – known as terrorist organisations ‘going dark’ – poses a major challenge for law enforcement and intelligence agencies, who routinely intercept communications to disrupt terrorist plots and prosecute individuals for terrorism offences (Forcese & West, 2020; Lewis, Zheng, & Carter, 2017). Law enforcement investigations can also be hindered by phone passcodes and other methods of authentication, which cannot be bypassed without undermining the privacy and security of other users. For example, several requests by the United States Federal Bureau of Investigation to unlock criminals’ iPhones have been denied by Apple, on the grounds that the data is encrypted locally on the device and cannot be accessed ‘without attacking fundamental elements of iOS security’ (Brandom, 2020).

Western governments are addressing these challenges by regulating technology companies directly, but there is no consensus as to best practice and some hesitation to legislate too strongly. This is partly due to the difficulties governments face in regulating multinational tech giants, as well as concerns about privacy and cyber-security. In the United Kingdom, the Home Secretary can issue technical capability notices under the Investigatory Powers Act 2016 (IPA), which can require ‘removal by a relevant operator of electronic protection’ (section 253). However, the British government has so far exerted political rather than legal pressure, with threats of stronger regulation but no further legislation (Baker, 2019; Hern, 2017). In 2017, WhatsApp (which is owned by Facebook) reportedly refused to comply with a request by the British government to build a backdoor into the application (Ong, 2017). This suggests either that the IPA scheme lacks teeth or that the British government is unwilling to pursue civil claims against large multinationals that refuse to cooperate. In the European Union, the position is even less clear, with France and Germany calling for stronger regulation of encryption but no consensus on how member states will proceed (Baker, 2019; Koomen, 2019; Toor, 2016). The European Commission has so far supported only non-legislative measures in response (European Commission, 2017). In the United States, senior officials continue to disagree on the benefits and risks of regulating encryption, with little resolution in sight (Baker, 2019; Geller, 2019).

The Australian government has not had the same qualms. In 2018, the federal parliament enacted the Telecommunications and Other Legislation Amendment (Assistance and Access) Act 2018 (Cth), which is known as ‘TOLA’ or, more commonly in the media, as the ‘encryption laws’ (Bogle, 2018). The legislation was enacted on a very short timetable, with little public consultation or parliamentary debate (Bogle, 2019). Its extensive powers have been heavily criticised not only by the technology industry, in Australia and globally, but also by a wide range of legal, civil society and human rights organisations (Australian Information Industry Association, 2018; Australian Human Rights Commission, 2018; Digital Industry Group, 2018). The legislation allows law enforcement and intelligence agencies to require technical assistance from ‘designated communications providers’, a broadly defined term which encompasses the largest social media companies down to small hardware and software suppliers (Telecommunications Act 1997 (Cth), s 317C). It permits an almost unlimited range of technical assistance, extending beyond decryption to include modifying consumer products and services (Telecommunications Act 1997 (Cth), s 317E).

In this article, I explain and interrogate the reasons why TOLA was enacted. In section 1, I explain the powers in their current form and assess whether there are meaningful limits to their scope. In section 2, I explore the political and parliamentary process by which these laws were enacted, including the government’s claims for urgency and the role of the Labor Party in opposition. In section 3, I explore the wider legal and political context, including Australia’s previous responses to terrorism and a lack of enforceable human rights protection. From this analysis, it becomes clear that Australia’s encryption laws reflect a pattern of highly politicised responses to terrorism within a permissive constitutional environment. They increase the impact of Australia’s existing counter-terrorism laws on human rights by generating further risks to privacy, free speech and freedom of the press.

Section 1: Australia’s encryption laws

Australia’s encryption laws, also known as TOLA, created a tiered regulatory scheme by which law enforcement and intelligence agencies can request or require technology companies to provide them with technical assistance. The scheme was inserted into the Telecommunications Act 1997 (Cth) (‘Telecommunications Act’) in early December 2018. There are three tiers of notices: technical assistance requests (TARs), technical assistance notices (TANs), and technical capability notices (TCNs). Each of these provides immunity from civil liability for companies that act in accordance with, or in good faith under, the terms of a notice (Telecommunications Act, ss 317G, 317ZJ). The requests or requirements in each notice must be reasonable, proportionate, practicable, and technically feasible (Telecommunications Act, ss 317JAA, 317P, 317V).

TARs request voluntary assistance and can be issued by the head of a law enforcement or intelligence agency for a wide range of purposes relating to the functions of those agencies. This includes enforcing the criminal law in relation to serious offences, safeguarding national security, protecting Australia’s foreign relations or national economic well-being, and maintaining the security of electronic information (Telecommunications Act, s 317G).

TANs require mandatory assistance and set higher standards for approval. They can also be issued by the head of a law enforcement agency, but only with the approval of the Commissioner of the Australian Federal Police (AFP). Alternatively, they can be approved by the Director-General of the Australian Security Intelligence Organisation (ASIO), Australia’s domestic intelligence agency (Telecommunications Act, ss 317, 317LA). TANs cannot be issued by Australia’s foreign intelligence agencies. The purposes for issuing TANs are also narrower than those triggering TARs: they are available only to enforce the criminal law in relation to serious offences or to safeguard national security (Telecommunications Act, ss 317). A company that fails to comply with a TAN faces fines of up to AUD$ 10 million (Telecommunications Act, s 317ZB).

TCNs also involve mandatory assistance, but they can require companies to develop new technical capabilities. As such, an added layer of protection applies. TCNs are available for the same purposes as TANs but they can only be issued by the Commonwealth Attorney-General on request by the Director-General of Security or the head of a law enforcement agency (Telecommunications Act, s 317T). The same penalty for non-compliance applies.

There are no clear limits to the types of technology companies that could be issued with a TOLA notice. A notice can be issued to any ‘designated communications provider’ (DCP), a broadly defined term that encompasses 15 company types. These include telecommunications service providers, internet hosting services, software and hardware suppliers, any company that ‘operates a facility’, and, at its broadest, any company that ‘provides an electronic service that has one or more end users in Australia’ (Telecommunications Act, s 317C). In this respect, the scheme extends far beyond regulating major multinationals such as Facebook and Apple, which have been the focus of global debates surrounding encryption. Notices can be issued to national telecommunications and internet service providers (such as Telstra, Vodafone and Optus) and even small software and hardware companies. The definition of a DCP could also extend to banks, universities, insurers, retailers and other businesses that offer online services to Australian end-users. Many of the categories mentioned above also include activities that facilitate, or are ancillary or incidental to, the main activity. This could plausibly include marketing companies, distributors and retailers.

The types of technical assistance that can be required under the legislation are similarly broad (Telecommunications Act, s 317E). They include:

  • removing one or more forms of electronic protection;
  • installing, maintaining, testing or using software or equipment;
  • assisting access to facilities, customer equipment, data processing devices, carriage services, or other electronic services;
  • assisting with the testing, modification, development or maintenance of a technology or capability;
  • modifying any of the characteristics of a service;
  • substituting part of a service; and
  • concealing the fact that anything has been done under the scheme.

Only the first of these – removing electronic protection – relates directly to the problem of end-to-end encryption. DCPs can be required to provide many other types of assistance, such as installing software or modifying consumer products and services. For example, Apple argued in a parliamentary inquiry on the Bill that it could be required to install eavesdropping capability in its home speakers (Apple Inc., 2018). It is unlikely that ASIO or the AFP would ever require this, and amendments introduced into the final version of the Bill (explained further below) prevent the scheme being lawfully used for large-scale surveillance. However, the types of assistance available certainly extend beyond decryption to a wide range of unknown tasks.

These are extraordinary powers with few protections. Some additional accountability is provided by the requirement that law enforcement TANs be approved by the AFP Commissioner rather than the head of a state police force, although higher approval for TANs need not be sought by ASIO. The requirement that TCNs be approved by the Attorney-General is more rigorous again, although ministerial warrants still do not entail the same degree of independence as authorisation by a judge or magistrate. This contrasts with the IPA scheme in the UK, where technical capability notices must be approved through a ‘double-lock’ process involving both a judicial commissioner (an appointed former judge) and the Home Secretary (Investigatory Powers Act 2016 (UK), s 254).

The major limitation on the TOLA powers is that DCPs cannot be required to build a ‘systemic weakness’ or ‘systemic vulnerability’ into a product or service (Telecommunications Act, s 317ZG). In the original version of the Bill, these terms were left undefined, and while there is now some statutory guidance, significant confusion remains. The clearest definition available is that any assistance provided should not ‘affect a whole class of technology’ (Telecommunications Act, s 317B). In addition, companies cannot be required to develop new decryption capabilities or take action that would ‘render systemic methods of authentication or encryption less effective’ (Telecommunications Act, s 317ZG). This seemingly undermines the main purpose of the legislation, which was to allow greater access to encrypted communications. However, requests for decryption are still possible in individual cases, provided that the company already has the technical capability to unscramble the content. This is made clear by the provision that a vulnerability can be ‘selectively introduced to one or more target technologies that are connected with a particular person’ (Telecommunications Act, s 317B). Yet it remains doubtful whether this is technically possible without creating risks to other users (Apple Inc., 2018; Digital Industry Group, 2018). This remains a sticking point for other countries seeking to regulate encryption (Baker, 2019), and it is not clear that the Australian approach solves the problem.

TOLA also includes reporting requirements. The issuing of any notice must be reported to the Commonwealth Ombudsman (for law enforcement) or the Inspector-General of Intelligence and Security (IGIS) (for intelligence agencies). The IGIS is an independent statutory authority that has oversight of Australia’s intelligence agencies and conducts inquiries into their operations as well as regular inspections (Hardy & Williams, 2016b). Reporting to the IGIS is an important inclusion that will enhance accountability, although most of the details in IGIS’s annual reports remain classified, so the public must largely trust rather than know that the agencies are being held to account (Hardy & Williams, 2016b). The Home Affairs Minister must produce a public report on TOLA usage, but this only includes the raw numbers for how many times the powers were used by law enforcement in relation to which types of offences (Telecommunications Act 1997, s 317ZS). ASIO must also include the number of TOLA notices issued in its annual report (Australian Security Intelligence Organisation Act 1979 (Cth), s 94), but again these are raw numbers only. In its latest report, the entire appendix containing those numbers was redacted (ASIO, 2019).

These limited reporting requirements mean that very few details about the use of TOLA in practice are publicly available, and that this is likely to remain the case over time. This secrecy is further entrenched by a disclosure offence, punishable by five years’ imprisonment, which prohibits DCP employees (and law enforcement or intelligence officers) from revealing anything about the use of the powers (Telecommunications Act, s 317ZF). This offence is likely to stifle any meaningful public discussion that could contribute to subsequent reviews and amendments by parliament.

Section 2: Political and parliamentary process

TOLA was passed very quickly by the federal parliament following a truncated committee inquiry. Draft legislation was released for public consultation on 14 August 2018 and the Bill was introduced into the House of Representatives on 20 September 2018. When he introduced the Bill, Peter Dutton, the Minister for Home Affairs, explained that encryption is ‘eroding the capacity of Australia’s law enforcement and security agencies to investigate serious criminal conduct and protect Australians’ (Dutton, 2018). He cited the November 2015 terrorist attacks in Paris as an example of terrorist groups using encrypted messaging services to conceal their activities from authorities while planning a mass-casualty attack (Dutton, 2018). With regard to Australia, he explained that 90 percent of ASIO’s priority cases, and the same percentage of AFP data intercepts, are impacted by encryption (Dutton, 2018). It was not clear from this statement whether he meant end-to-end encryption, the most secure kind which generated the need for new powers, or any type of encryption, such as passwords for email accounts, which are commonly bypassed by authorities. Given that he referred to encryption ‘in some form’ (Dutton, 2018), the latter seems more likely. It is more plausible that 90 percent of intercepted communications employ some type of encryption, with some smaller (unspecified) percentage employing the stronger end-to-end variety. Otherwise, nearly all of the telecommunications data intercepted by the AFP would be unreadable.

After Dutton’s second reading speech, the bill was referred immediately to the Parliamentary Joint Committee on Intelligence and Security (PJCIS). The PJCIS is a bipartisan committee which examines Australia’s counter-terrorism laws, reviews listings of proscribed terrorist organisations, and oversees the financing of Australia’s intelligence agencies. In contrast to similar committees in the UK and US, the PJCIS does not have oversight of intelligence agency operations; its role is largely limited to making law reform recommendations (Intelligence Services Act 2001 (Cth), s 29).

The PJCIS conducted hearings on the Bill in October and November of 2018, after receiving more than 100 written submissions from law reform and human rights organisations, digital rights organisations, and technology companies based in Australia and overseas. These groups have diverse motivations, and at other times can be opposed on rights-based issues. For example, human rights groups have advocated for stronger regulation of social media companies to prevent hate crime and other online abuse (Amnesty International, 2020). Social media and technology companies have business interests at heart and shareholders to think of, which represents a very different starting point to a rights-based organisation when thinking about platform regulation. Nonetheless, with regard to the encryption laws, there was a notable consensus amongst otherwise strange bedfellows. Across these groups, the submissions identified many similar concerns with the Bill (Apple Inc., 2018; Australian Human Rights Commission, 2018; Australian Information Industry Association, 2018; Australian Information Security Association, 2018; Cannataci, 2018; Digital Industry Group, 2018; Law Council of Australia, 2018; Mozilla, 2018). The major issues raised in the submissions included:

  • Vagueness and overbreadth as to the types of companies to be targeted, devices affected, and assistance provided;
  • The absence of statutory definitions as to when a company would be introducing a ‘systemic weakness’ or ‘systemic vulnerability’ into a product or service;
  • Additional risks to privacy and cyber-security if vulnerabilities are introduced to assist with decryption, which could be exploited by malicious actors;
  • Technical difficulties in complying with the scheme in individual cases, without weakening encryption for all users;
  • Limited transparency and a lack of judicial oversight in the approval of notices;
  • A likely economic impact on technology companies, both locally and globally, as consumer trust in their products and services would be undermined; and
  • Potential for significant conflict of laws across jurisdictions.

In his written submission, the United Nations Special Rapporteur on the right to privacy offered a thorough, scathing critique (Cannataci, 2018). He believed the safeguards in the bill were ‘illusory rather than substantive’, and offered this dressing-down of the government’s approach:

In my considered view, the Assistance and Access Bill is an example of a poorly conceived national security measure that is equally as likely to endanger security as not; it is technologically questionable if it can achieve its aims and avoid introducing vulnerabilities to the cybersecurity of all devices irrespective of whether they are mobiles, tablets, watches, cars, etc., and it unduly undermines human rights including the right to privacy (Cannataci, 2018).

Alongside other major players from the technology industry, including Apple, Facebook and Amazon (Apple Inc., 2018; Digital Industry Group, 2018), Mozilla went so far as to say that the powers ‘could do significant harm to the Internet’ (Mozilla, 2018).

It was obvious, then, that the legislation to be debated by parliament had significant structural problems, both principled and practical. Despite these fundamental issues, the PJCIS inquiry was expedited and the Bill passed in a single day in essentially its original form. The enacted version did incorporate a long list of amendments introduced by the government, including most of the 17 changes recommended by the PJCIS (2018). As discussed in Section 1, these included approval of law enforcement TANs by the AFP Commissioner, additional reporting requirements, and improved definitions of ‘systemic weakness’ and ‘systemic vulnerability’. However, the other changes were largely cosmetic and none addressed the most fundamental concerns, including the breadth of possible technical assistance and a lack of judicial oversight.

The truncated timetable for the PJCIS inquiry and approval by parliament was due to government intervention. On 22 November, Dutton contacted the committee to say ‘there was an immediate need to provide agencies with additional powers and to pass the Bill in the last sitting week of 2018’ (PJCIS, 2019). He cited a recent terrorist stabbing in Melbourne and an increased threat of terrorism over the Christmas and New Year period:

I am gravely concerned that our agencies cannot rule out the possibility that others may also have been inspired by events in Melbourne to plan and execute attacks ... This is particularly concerning as we approach Christmas and the New Year, which we know have been targeted previously by terrorists planning attacks against Australians gathered to enjoy the festive season ...

For these reasons I ask that the committee accelerate its consideration of this vital piece of legislation to enable its passage by the parliament before it rises for the Christmas break (PJCIS, 2019).

The committee accepted the Minister’s advice but later commented that the ‘expedited consideration … precluded the Committee from incorporating a detailed presentation of the evidence informing its recommendations’ (PJCIS, 2019). The inquiry was completed and the committee’s recommendations largely accepted by government, but in major respects the most significant opportunity to review the controversial new laws was left unfinished.

The TOLA legislation was approved by both Houses of Parliament on 6 December, on the last sitting day of Parliament before the end of the year. The Labor opposition had initially opposed the bill, with Shadow Attorney-General Mark Dreyfus declaring that the bill was ‘unworkable and potentially weakens Australia’s security’ (Duckett, 2018c). However, after being accused by senior Liberal Party members of being soft on national security – even ‘running a protection racket for terrorists’ (Duckett, 2018a) – Labor capitulated at the eleventh hour, withdrawing amendments it had proposed in the Senate and allowing the bill to pass (Worthington & Bogle, 2018). In explaining Labor’s backdown, then Opposition Leader Bill Shorten told the public, ‘Let’s just make Australians safer over Christmas’ (Duckett, 2018b). The Labor Party claimed it would pursue amendments to the bill in the coming year or if it was elected to government (Duckett, 2018b; Seo, 2019), but it remains in opposition and no substantive changes to the powers have since been made.

Further reviews into TOLA have been conducted by the PJCIS and the Independent National Security Legislation Monitor (INSLM, 2020). The INSLM is an independent statutory office, based on the UK’s Independent Reviewer of Terrorism Legislation, which examines Australia’s counter-terrorism laws to determine if they are proportionate, effective, necessary, and compatible with human rights (Independent National Security Legislation Monitor Act 2010 (Cth), s 6). At the time of writing, the PJCIS is yet to publish its findings. The INSLM (2020) has recommended that TANs and TCNs, which require mandatory assistance, be subject to judicial approval by a new Investigatory Powers Division of the Administrative Appeals Tribunal, rather than executive approval by the Attorney-General or head of an agency. In addition, he recommended that an Investigatory Powers Commissioner and Commission, similar to those found in the UK, be created to enhance oversight of the regime (INSLM, 2020). Finally, he offered a tighter, singular definition of systemic weakness and vulnerability, focusing on whether the modification creates a material risk of data being accessed by a third party (INSLM, 2020). It remains to be seen whether these recommendations will be taken up by the federal government.

Section 3: Legal and political context

Across a wide range of technology companies and civil society actors, both locally and globally, TOLA is recognised as adopting a highly problematic approach. This raises the question: if the laws were so obviously problematic, why were they allowed to pass? Were they justified in the Australian context as an urgently needed response to terrorism?

Australia has enacted a significant body of counter-terrorism laws since 9/11, including many more recently in response to Islamic State and the threat of returning foreign fighters (Hardy & Williams, 2016a). At last count, the federal parliament alone had enacted more than 80 separate pieces of legislation in response to terrorism (McGarrity & Blackbourn, 2019). These counter-terrorism laws have created extensive criminal offences and powers, including detention and supervision orders and expanded surveillance warrants. Despite this, until TOLA there was no legal mechanism allowing authorities greater access to encrypted communications. There were a variety of powers available, both to law enforcement and intelligence agencies, to intercept communications between persons of interest (McGarrity & Hardy, 2020), but none of these addressed the problem of terrorist organisations ‘going dark’ through end-to-end encryption. In this respect, some legal response to the encryption issue was justified. However, this does not excuse the specific powers that were created, or the timeframe in which they were enacted.

In pushing for the laws to be enacted before the end of parliament’s sitting year, the government cited an urgent threat of terrorism (PJCIS, 2019). To some extent, this might have justified imperfect laws and a truncated timetable, if lives would be saved as a direct consequence of bypassing more extensive consultation. However, while the exact details of TOLA usage remain classified, there is sufficient reason to doubt the government’s claims of urgency. As discussed in Section 1, the Home Affairs Minister claimed that 90 percent of ASIO and law enforcement investigations are impacted by encryption (Dutton, 2018), but this figure (if accurate) more likely captures all types of encryption rather than the stronger end-to-end variety. The figure also suggests that encryption raises systemic, longstanding issues for terrorism investigations, which could be resolved over a longer timeframe. The Minister did cite a recent terrorist stabbing in Melbourne and a heightened threat over Christmas (PJCIS, 2019), but neither of these indicated that a specific terrorist plot could be averted or that lives could be saved by enacting the laws before the end of the year. The Director-General of ASIO, Duncan Lewis, explained there were ‘cases afoot at the moment where this legislation will directly assist’, and that ASIO would take advantage of the powers within 10 days of being enacted (Karp, 2018). However, he also conceded that there was no specific intelligence of an imminent threat (Karp, 2018).

Based on Australia’s previous experience in enacting counter-terrorism laws, it is more likely that the government relied on a generalised threat of terrorism over the Christmas and New Year period to quicken TOLA’s passage through Parliament with minimal scrutiny. With few exceptions, Australia’s counter-terrorism laws have been passed on truncated timetables with minimal time for public and parliamentary debate (Hardy & Williams, 2016a; Lynch, 2006). For example, the government’s major response to the threat of foreign fighters was a 160-page bill that amended nearly 30 federal acts. An eight-day period was allowed for public consultation and it took one day in each House of Parliament for the laws to be approved (Hardy & Williams, 2016a). Viewed in this context, there is nothing especially unusual about the passage of TOLA through the Australian Parliament, except that the powers have generated controversy amongst a wider global audience.

A strikingly similar example is the passage of counter-terrorism laws through the federal parliament in 2005, following the London bombings in July that year. Lynch (2006) interrogated the Liberal party’s claims of urgency surrounding those laws, which were enacted in almost identical circumstances to those surrounding TOLA. The 2005 laws included technical amendments relating to terrorism offences, as well as control orders and preventative detention orders (PDOs), two of Australia’s most controversial and rights-infringing responses to terrorism (Burton, McGarrity, & Williams, 2012; Tyulkina & Williams, 2016). The package also included controversial sedition offences, which were widely recognised to undermine freedom of speech (Australian Law Reform Commission, 2006; Nette, 2006). In introducing these laws in parliament, the Prime Minister and Attorney-General claimed there was an urgent need to pass the laws before Christmas – an urgency, Lynch (2006) argued, that ‘was of the government’s own making’. He reached this conclusion based, among other factors, on the fact that the government had known about the need for the technical amendments for a much longer period, and that the new powers were not used until at least nine months after their passage through Parliament (Lynch, 2006). Confirming his analysis, the control order powers were used only twice and PDOs not at all until nearly a decade later in response to Islamic State (Hurst, 2014; Tyulkina & Williams, 2016).

Other features of the 2005 process directly resemble the passage of TOLA. At that time, too, the support of the Labor opposition was secured after senior members of the Liberal Party government accused them of being soft on national security and ‘anti-Australian’ (Lynch, 2006). The sedition offences were also widely recognised as being problematic (Nette, 2006) but were agreed to by Labor on the basis that they would be reviewed immediately after enactment by the Australian Law Reform Commission (2006). This is strikingly similar to how the Liberal government secured Labor’s support for TOLA, through accusations of endangering national security and a vague promise that the laws would be improved following reviews by the PJCIS and INSLM (Seo, 2019). In both cases, Labor MPs were pressured into supporting laws that they recognised as overtly problematic.

Viewed in this light, the passage of TOLA through the Australian Parliament was highly problematic but neither exceptional nor unusual. Rather, it reflects problematic patterns of counter-terrorism lawmaking that have become commonplace in the Australian political landscape. In all likelihood, the passage of TOLA could have been delayed for days, weeks or perhaps even months without any significant impact on national security. The Home Affairs Minister later concluded that TOLA ‘played a role, and a very positive role, in a number of investigations’ (SBS News, 2019). While the full details of these benefits will never be known, it is hardly the kind of report card that could justify such perfunctory consultation.

That the government’s urgency was doubtful is supported by two additional factors. First, discussions about regulating encryption in Australia started at least as early as 2015 (Stilgherrian, 2019), several years before the need to pass TOLA apparently arose in a matter of days. Second, to the extent that information on TOLA usage is currently available, the powers have not been used by law enforcement in relation to any terrorism offences. The only law enforcement notices to date have been issued in relation to cybercrime, homicide, organised crime, telecommunications offences and theft (Department of Home Affairs, 2019). It is possible that the powers have been used by ASIO to gather intelligence on domestic terrorism, but the numbers in the agency’s most recent annual report were redacted (ASIO, 2019).

The final piece of this puzzle, to explain why TOLA was enacted despite its evident problems, is to recognise that Australia lacks enforceable human rights protection. Australia sits alone among democratic nations in having no constitutional or statutory Bill of Rights at the federal level (Williams & Reynolds, 2017). Human rights legislation exists in some states, but there is no mechanism by which the High Court could strike down legislation enacted by the federal parliament on the basis that it infringes privacy or another fundamental right. A government securing the passage of laws speedily through parliament would be aware that the laws could later be struck down by the High Court only on structural grounds, such as infringing the separation of powers (which, incidentally, cannot be at issue with TOLA because the judiciary plays no role in its operation). There are some limited rights in the Australian Constitution, including to trial by jury and an implied freedom of political communication, but nothing that would be of any assistance in a human rights challenge against the encryption laws.

This lack of human rights protection has allowed the enactment of many counter-terrorism laws in Australia that would be constitutionally impermissible elsewhere. These include the possible detention of non-suspects by ASIO for up to a week for coercive questioning (Burton, McGarrity & Williams, 2012), and incommunicado detention for up to two weeks under PDOs to prevent a terrorist attack (Tyulkina & Williams, 2016). Sadly, the encryption laws are simply the latest example in a long line of exceptional counter-terrorism laws passed urgently through the federal parliament, in a constitutional setting that permits rights-infringing legal responses to terrorism.

In particular, the encryption laws compounded extant risks to freedom of speech and freedom of the press. Currently in Australia, freedom of the press remains a topic of significant public debate, with several ongoing prosecutions of high-profile whistleblowers and journalists (Byrne, 2019; Khadem, 2020; Knaus, 2020). The encryption laws exacerbated these risks by enhancing the possibility that journalists’ confidential sources could be accessed by law enforcement and intelligence agencies. Prior to the encryption laws, the enactment of Australia’s mandatory metadata retention regime, combined with other national security disclosure offences, had generated significant backlash from Australian media organisations (Hardy & Williams, 2015). As a result of those laws, journalists looked to encrypted messaging to protect the identity of their sources (Digital Rights Watch, 2019), but then the encryption laws meant this technique no longer provided a guarantee of security.

The possibility that the encryption laws could be used to identify journalists’ confidential sources, combined with the additional disclosure offence found in the encryption laws, has further contributed to a low point for free speech and freedom of the press in Australia. This is a cause of concern not only for journalists who wish to report on the scheme, but also for technology company employees, who may feel compelled to speak out in the public interest if the powers are misused by their employers or government agencies.

Conclusion

TOLA remains a feature of public discourse in Australia, and the issues it raises reflect wider concerns about evolving surveillance technologies, including metadata and facial recognition (Bogle, 2020; Churches & Zalnieriute, 2019). The overwhelming consensus amongst technology companies and human rights organisations, despite the otherwise contrasting motivations of these groups, is that the laws are highly problematic. The powers are vague and broadly drafted and they lack transparency and judicial oversight. According to many industry experts, the use of the powers will endanger privacy and cyber-security by allowing law enforcement and intelligence agencies to introduce vulnerabilities that can be exploited by malicious actors (Apple, Inc., 2018; Digital Industry Group, 2018). It is clear that the Labor opposition shares many of these concerns, despite allowing the laws to pass (Duckett, 2018b; Worthington & Bogle, 2018). It is also widely acknowledged that the time allowed for parliamentary debate was inadequate, and that more extensive consultation, particularly with smaller Australian companies, was needed (Bogle, 2019).

Civil society and the technology industry will be paying close attention to the upcoming report from the PJCIS and to whether the federal government supports the INSLM’s recommendations. They should not, however, be optimistic that the government will introduce substantive changes as a result. Once counter-terrorism laws are on the statute books in Australia, it becomes very difficult to wind them back (Ananian-Welsh & Williams, 2014). Some of Australia’s most controversial counter-terrorism laws include sunset clauses as expiry dates, reflecting their original intention as emergency powers, but these have been renewed time and again in their original form (McGarrity, Gulati, & Williams, 2012). There is even less reason for the current government to amend the encryption laws, which were written into the statute books as permanent measures. In any case, the current COVID-19 crisis means that the political attention on counter-terrorism laws and the appetite for winding them back will be lower than at other times.

Most likely, some small changes may be made to improve accountability, but the overall shape of the scheme is likely to remain. One small amendment with significant benefit would be to narrow the scope of the disclosure offence, so that it applies only to those who intentionally harm national security or an ongoing law enforcement or intelligence operation. Alternatively, it could include a defence or exemption for DCP employees who reveal information in the public interest. As it stands, DCP employees who reveal any information about a notice face five years in prison (Telecommunications Act, s 317ZF). If some limited information about the use of TOLA notices could be made public, there may be a sufficient groundswell of opinion against the laws to force the government’s hand. More significant changes, for example to address the lack of judicial oversight, might then have a greater chance of succeeding. In the meantime, such an amendment would reduce the impact of the encryption laws on freedom of speech and protect the ability of media organisations to hold government agencies accountable for any future misuse of the scheme.

During this review process, the Labor party will play a crucial role in opposition. If it bows once more to government pressure for bipartisanship, it will lose further credibility. Bipartisanship on national security matters is important to communicate a message of strength and direction to the general public, but not if it leads to poorly drafted laws that affect the privacy and security of all technology users. By allowing TOLA to sail through Parliament before Christmas, the Labor party missed an important opportunity to communicate to the Australian public that it will hold the government to account. In the absence of constitutional safeguards, protecting Australians’ human rights through legislation is crucial: not only to reviews of the encryption laws, but also when regulating any other emerging technologies. The encryption laws are a significant test case for whether the Australian government can strike an appropriate balance between security and human rights when regulating digital platforms. So far, such a balance has not been achieved.

References

Amnesty International. (2020). Toxic Twitter—The solution [Report]. Amnesty International. https://www.amnesty.org/en/latest/research/2018/03/online-violence-against-women-chapter-8/

Ananian-Welsh, R., & Williams, G. (2014). The new terrorists: The normalisation and spread of anti-terror laws in Australia. Melbourne University Law Review, 38(2), 362–408. https://law.unimelb.edu.au/__data/assets/pdf_file/0008/1586987/382Ananian-WelshandWilliams2.pdf

Apple, Inc. (2018). Submission to the Parliamentary Inquiry into the Telecommunications and Other Legislation Amendment (Assistance and Access) Bill 2018. Apple, Inc.

Law Council of Australia. (2018). Submission to the Parliamentary Inquiry into the Telecommunications and Other Legislation Amendment (Assistance and Access) Bill 2018. Law Council of Australia.

Australian Human Rights Commission. (2018). Submission to the Parliamentary Inquiry into the Telecommunications and Other Legislation Amendment (Assistance and Access) Bill 2018. Australian Human Rights Commission.

Australian Information Industry Association. (2018). Submission to the Parliamentary Inquiry into the Telecommunications and Other Legislation Amendment (Assistance and Access) Bill 2018. Australian Information Industry Association.

Australian Information Security Association. (2018). Submission to the Parliamentary Inquiry into the Telecommunications and Other Legislation Amendment (Assistance and Access) Bill 2018. Australian Information Security Association.

Australian Law Reform Commission. (2006). Fighting words: A review of sedition laws in Australia (Report No. 104). Australian Law Reform Commission. https://www.alrc.gov.au/publication/fighting-words-a-review-of-sedition-laws-in-australia-alrc-report-104/

Australian Security Intelligence Organisation (ASIO). (2019). ASIO Annual Report 2018-19. Australian Security Intelligence Organisation. https://www.asio.gov.au/asio-report-parliament.html

Baker, S. (2019, September 20). How long will unbreakable commercial encryption last? [Blog post]. Lawfare. https://www.lawfareblog.com/how-long-will-unbreakable-commercial-encryption-last

Bogle, A. (2018). ‘Outlandish’ encryption laws leave Australian tech industry angry and confused. ABC News. https://www.abc.net.au/news/science/2018-12-07/encryption-bill-australian-technology-industry-fuming-mad/10589962

Bogle, A. (2019). Encryption laws developed after little consultation with Australian tech companies, FOI documents reveal. ABC News. https://www.abc.net.au/news/science/2019-07-10/dutton-encryption-laws-australian-tech-sector-not-consulted-foi/11283864

Bogle, A. (2020). Australian Federal Police officers trialled controversial facial recognition tool Clearview AI. ABC News. https://www.abc.net.au/news/science/2020-04-14/clearview-ai-facial-recognition-tech-australian-federal-police/12146894

Brandom, R. (2020). The FBI has asked Apple to unlock another shooter’s iPhone. The Verge. https://www.theverge.com/2020/1/7/21054836/fbi-iphone-unlock-apple-encryption-debate-pensacola-ios-security

Burton, L., McGarrity, N., & Williams, G. (2012). The extraordinary questioning and detention powers of the Australian Security Intelligence Organisation. Melbourne University Law Review, 36(2), 415–469. https://law.unimelb.edu.au/__data/assets/pdf_file/0018/1700172/36_2_3.pdf

Byrne, E. (2019). Afghan Files leak accused David McBride faces ACT Supreme Court for first time. ABC News. https://www.abc.net.au/news/2019-06-13/abc-raids-afghan-files-leak-accused-court-canberra/11206682

Cannataci, J. (2018). Mandate of the Special Rapporteur on the right to privacy (OL AUS 6/2018). United Nations Human Rights Special Procedures.

Churches, G., & Zalnieriute, M. (2019, December 10). Unlawful metadata access is easy when we’re flogging a dead law. The Conversation. https://theconversation.com/unlawful-metadata-access-is-easy-when-were-flogging-a-dead-law-127621

Department of Home Affairs. (2019). Telecommunications (Interception and Access) Act 1979 (Annual Report No. 2018–19). Department of Home Affairs. https://www.homeaffairs.gov.au/nat-security/files/telecommunications-interception-access-act-1979-annual-report-18-19.pdf

Department of Home Affairs. (2018). Submission to the Parliamentary Inquiry into the Telecommunications and Other Legislation Amendment (Assistance and Access) Bill 2018. Department of Home Affairs.

Digital Industry Group. (2018). Submission to the Parliamentary Inquiry into the Telecommunications and Other Legislation Amendment (Assistance and Access) Bill 2018. Digital Industry Group.

Digital Rights Watch. (2019, June). Digital security for journalists. Digital Rights Watch. https://digitalrightswatch.org.au/2019/06/10/digital-security-for-journalists

Duckett, C. (2018a). Labor will not back full encryption Bill as it offers interim deal again.

Duckett, C. (2018b, December 2). Australian government accuses Labor of backing terrorists on encryption-busting Bill. ZDNet. https://www.zdnet.com/article/australian-government-accuses-labor-of-backing-terrorists-on-encryption-busting-bill/

Duckett, C. (2018c, December 6). Australia now has encryption-busting laws as Labor capitulates. ZDNet. https://www.zdnet.com/article/australia-now-has-encryption-busting-laws-as-labor-capitulates/

Dutton, P. (2018). Commonwealth, Parliamentary Debates. House of Representatives.

European Commission. (2017). Communication from the Commission to the European Parliament, the European Council and the Council: Eleventh progress report towards an effective and genuine Security Union. European Union. https://ec.europa.eu/home-affairs/sites/homeaffairs/files/what-we-do/policies/european-agenda-security/20171018_eleventh_progress_report_towards_an_effective_and_genuine_security_union_en.pdf

Geller, E. (2019, June 27). Trump officials weigh encryption crackdown. Politico. https://www.politico.com/story/2019/06/27/trump-officials-weigh-encryption-crackdown-1385306

Graham, R. (2016). How terrorists use encryption. Combating Terrorism Center Sentinel, 9(6), 20–25. https://ctc.usma.edu/how-terrorists-use-encryption/

Hardy, K., & Williams, G. (2015). Special intelligence operations and freedom of the press. Alternative Law Journal, 41(3), 160–164. https://doi.org/10.1177/1037969X1604100304

Hardy, K., & Williams, G. (2016a). Australian legal responses to foreign fighters. Criminal Law Journal, 40(4), 196–212. http://hdl.handle.net/10072/172846

Hardy, K., & Williams, G. (2016b). Executive oversight of intelligence agencies in Australia. In Z. K. Goldman & S. J. Rascoff (Eds.), Global Intelligence Oversight: Governing Security in the Twenty-First Century. Oxford University Press. https://doi.org/10.1093/acprof:oso/9780190458072.003.0013

Hurst, D. (2014, October 9). Federal police lobby to relax rules on control orders under terrorism laws. The Guardian. https://www.theguardian.com/australia-news/2014/oct/10/federal-police-lobbying-to-relax-rules-on-obtaining-control-orders-under-terror-laws

Independent National Security Legislation Monitor. (2020). Trust but Verify: A report concerning the Telecommunications and Other Legislation Amendment (Assistance and Access) Act 2018 and related matters [Report]. Australian Government, Independent National Security Legislation Monitor. https://www.inslm.gov.au/sites/default/files/2020-07/INSLM_Review_TOLA_related_matters.pdf

Parliamentary Joint Committee on Intelligence and Security (PJCIS). (2018). Advisory Report on the Telecommunications and Other Legislation Amendment (Assistance and Access) Bill 2018. Parliamentary Joint Committee on Intelligence and Security.

Karp, P. (2018, November 26). ASIO says it urgently needs powers forcing telcos to help break phone encryption. The Guardian. https://www.theguardian.com/australia-news/2018/nov/26/asio-says-it-urgently-needs-powers-forcing-telcos-to-help-break-phone-encryption

Khadem, N. (2020, July 3). Commonwealth dumps 42 charges against ATO whistleblower Richard Boyle but threat of prison looms. ABC News. https://www.abc.net.au/news/2020-07-03/charges-against-ato-whistleblower-richard-boyle-dropped-dpp/12419800

Knaus, C. (2020, July 10). Witness K lawyer Bernard Collaery to appeal against secrecy in Timor-Leste bugging trial. The Guardian. https://www.theguardian.com/australia-news/2020/jul/10/witness-k-lawyer-bernard-collaery-to-appeal-against-secrecy-in-timor-leste-bugging-trial

Koomen, M. (2019). The encryption debate in the European Union. Carnegie Endowment for International Peace.

Lewis, J. A., Zheng, D. E., & Carter, W. A. (2017). The effect of encryption on lawful access to communications and data. Center for Strategic and International Studies.

Lynch, A. (2006). Legislating with urgency: The enactment of the Anti-Terrorism Act [No 1] 2005. Melbourne University Law Review, 30(3), 747–781.

McGarrity, N., & Blackbourn, J. (2019). Australia has enacted 82 anti-terror laws since 2001. But tough laws alone can’t eliminate terrorism. The Conversation.

McGarrity, N., Gulati, R., & Williams, G. (2012). Sunset clauses in Australian anti-terror laws. Adelaide Law Review, 33(2), 307–333.

McGarrity, N., & Hardy, K. (2020). Digital surveillance and access to encrypted communications in Australia. Common Law World Review. https://doi.org/10.1177/1473779520902478

Mozilla. (2018). Submission to the Parliamentary Inquiry into the Telecommunications and Other Legislation Amendment (Assistance and Access) Bill 2018. Mozilla.

Nette, A. (2006). A short history of sedition laws in Australia. Australian Universities Review, 48(2), 18–19.

SBS News. (2019). Dutton says encryption laws help terror cops. SBS News.

Parliamentary Joint Committee on Intelligence and Security (PJCIS). (2019). https://www.aph.gov.au/Parliamentary_Business/Committees/Joint/Intelligence_and_Security

Seo, B. (2019). Labor attacks ‘broken promise’ on encryption bill. Australian Financial Review.

Smith, L. (2017). Messaging app Telegram centrepiece of IS social media strategy. BBC Monitoring.

Stilgherrian. (2019). The encryption debate in Australia [Encryption Brief]. Carnegie Endowment for International Peace. https://carnegieendowment.org/2019/05/30/encryption-debate-in-australia-pub-79217

Tillett, A. (2018, November). Encryption laws threaten $3b cyber security industry, tech firms warn. Australian Financial Review. https://www.afr.com/politics/encryption-laws-threaten-3b-cyber-security-industry-tech-firm-senatas-warns-20181112-h17shh

Toor, A. (2016, August 24). France and Germany want Europe to crack down on encryption. The Verge. https://www.theverge.com/2016/8/24/12621834/france-germany-encryption-terorrism-eu-telegram

Tyulkina, S., & Williams, G. (2016). Preventative detention orders in Australia. University of New South Wales Law Journal, 39(2), 738–755. http://www.unswlawjournal.unsw.edu.au/wp-content/uploads/2017/09/38-2-4.pdf

West, L., & Forcese, C. (2020). Twisted into knots: Canada’s challenges in lawful access to encrypted communications. Common Law World Review. https://doi.org/10.1177/1473779519891597.

Williams, G., & Reynolds, D. (2017). A charter of rights for Australia (4th ed.). NewSouth Press.

Worthington, B., & Bogle, A. (2018). Labor backdown allows federal government to pass controversial encryption laws. ABC News. https://mobile.abc.net.au/news/2018-12-06/labor-backdown-federal-government-to-pass-greater-surveillance/10591944?pfmredir=sm

Regulatory arbitrage and transnational surveillance: Australia’s extraterritorial assistance to access encrypted communications


This paper is part of Geopolitics, jurisdiction and surveillance, a special issue of Internet Policy Review guest-edited by Monique Mann and Angela Daly.

Introduction

Since the Snowden revelations in 2013 (see e.g., Lyon, 2014; Lyon, 2015) an ongoing policy issue has been the legitimate scope of surveillance, and the extent to which individuals and groups can assert their fundamental rights, including privacy. There has been a renewed focus on policies regarding access to encrypted communications, which are part of a longer history of the ‘cryptowars’ of the 1990s (see e.g., Koops, 1999). We examine these provisions in the Anglophone ‘Five Eyes’ (FVEY) 1 countries - Australia, Canada, New Zealand, the United Kingdom and the United States (US) - with a focus on those that attempt to regulate communications providers. The paper culminates with the first comparative analysis of recent developments in Australia. The Australian developments are novel in the breadth of entities to which they may apply and their extraterritorial reach: they attempt to regulate transnational actors, and may implicate Australian agencies in the enforcement - and potential circumvention - of foreign laws on behalf of foreign law enforcement agencies. This latter aspect represents a significant and troubling development in the context of FVEY encryption-related assistance provisions.

We explore this expansion of extraterritorial powers, which extends the reach of all FVEY nations via Australia by requesting or coercing assistance from transnational technology companies as “designated communications providers”, and by allowing foreign law enforcement agencies to request that their Australian counterparts make such requests. Australia has unique domestic legal arrangements, including an aggressive stance on mass surveillance (Molnar, 2017) and an absence of comprehensive constitutional or legislated fundamental rights at the federal level (Daly & Thomas, 2017; Mann et al., 2018), and it has recently enacted the Telecommunications and Other Legislation Amendment (Assistance and Access) Act 2018 (Cth) 2, the focus of this article. We demonstrate that Australia’s status as the ‘weak link’ in the FVEY alliance enables the introduction of laws less likely to be constitutionally or otherwise legally permissible elsewhere. We draw attention to the extraterritorial reach of the Australian provisions, which affords the possibility for other FVEY members to engage in regulatory arbitrage to exploit the weaker human rights protections and oversight measures in Australia.

Human rights and national security in Australia

Australia has a well-documented track record of ‘hyper legislation’ of national security measures (Roach, 2011), having passed over 64 terrorism-specific laws since 9/11 that have been recognised as having serious potential to encroach on democratic rights and freedoms (Williams & Reynolds, 2017). Some of these laws have involved digital and information communications infrastructures and their operators, such as those facilitating Australian security and law enforcement agencies’ use of Computer Network Operations (Molnar, Parsons, & Zouave, 2017) and the introduction of mandatory data retention obligations on internet service providers (Suzor, Pappalardo, & McIntosh, 2017). Australia’s role as a leading advocate for stronger powers against encrypted communications is consistent with this history.

Yet, unlike any of the other FVEY members, Australia has no comprehensive enforceable human rights protection at the federal level (Daly & Thomas, 2017; Mann et al., 2018). 3 Australia has neither comprehensive constitutional rights (like the US and Canada), nor a legislated bill of rights (like NZ and the UK), nor recourse to regional human rights bodies (like the UK’s relationship with the European Convention on Human Rights) (refer to Table 1).

Given this situation, we argue Australia is a ‘weak link’ among FVEY partners because its legal framework allows for a more vigorous approach to legislating for national security at the expense of human rights protections, including, but not limited to, privacy (Williams & Reynolds, 2017; Mann et al., 2018). Australia’s status as a human rights ‘weak link’ affords the ‘legal possibility’ of measures which may be ‘legally impossible’ in other jurisdictions, including those of the other FVEY countries, given their particular domestic and regional rights protections.

Encryption laws in the Five Eyes

FVEY governments have made frequent statements regarding their surveillance capabilities ‘going dark’ due to encryption, with consequences for their ability to prevent, detect and investigate serious crimes such as terrorism and the dissemination of child exploitation material (Comey, 2014). This is despite evidence that the extensive surveillance powers that these agencies maintain are mostly used for the investigation of drug offences (Wilson & Mann, 2017; Parsons & Molnar, 2017). Further, there is an absence of evidence that undermining encryption will improve law enforcement responses (Gill, Israel, & Parsons, 2018), coupled with disregard for the many legitimate uses of encryption (see e.g., Abelson et al., 2015), including the protection of fundamental rights (see e.g., Froomkin, 2015).

It is important to note, as per Koops and Kosta (2018), that communications may be encrypted by different actors at different points in the telecommunications process. Where encryption is applied, and by whom, will affect which actors have the ability to decrypt communications, and accordingly where legal obligations to decrypt may lie, or be actioned. For example, in some scenarios the service provider maintains the means of decrypting the communications, but this would not be the case where the software provider or end user has the means to decrypt (i.e., ‘at the ends’). More recently, the focus has shifted to communications providers offering encrypted services or facilitating a third party offering such services over their networks. These actors can be forced to decrypt communications either via ‘backdoors’ (i.e., deliberate weaknesses or vulnerabilities) built into the service, or via legal obligations to provide assistance. The latter scenario is not a technical backdoor per se, but could be conceptualised as a ‘legal’ means to acquire a ‘backdoor’, as the government agency will obtain covert access to the service and the communications therein, with a similar outcome to a technical backdoor. It is these measures which are the focus of our analysis. We provide a brief overview of the legal situation in each FVEY country (Table 1), before turning to Australia as our main focus.
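Before turning to the country overviews, the following minimal Python sketch illustrates the key-placement distinction drawn above. It uses the third-party cryptography package; the names, message contents and the helper function are purely hypothetical, and the sketch illustrates the general technique only, not any provider’s actual architecture or anything the statutes discussed here require.

```python
# Minimal sketch (illustrative only) of two key-placement models for an
# encrypted messaging service. Requires the third-party 'cryptography' package.
import base64

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# --- Model 1: provider-applied encryption -----------------------------------
# The provider generates and stores the symmetric key, so it can decrypt
# stored or transiting messages itself when served with a lawful request.
provider_key = Fernet.generate_key()          # held on the provider's servers
provider_box = Fernet(provider_key)
ciphertext = provider_box.encrypt(b"meet at noon")
print(provider_box.decrypt(ciphertext))       # b'meet at noon'

# --- Model 2: end-to-end encryption ------------------------------------------
# Keys exist only at the endpoints; the provider relays ciphertext it cannot
# read, so there is nothing useful to hand over without changing the product.
alice_private = X25519PrivateKey.generate()   # generated on Alice's device
bob_private = X25519PrivateKey.generate()     # generated on Bob's device

def shared_fernet_key(own_private, peer_public):
    """Derive a symmetric key from an X25519 key agreement (hypothetical helper)."""
    secret = own_private.exchange(peer_public)
    derived = HKDF(algorithm=hashes.SHA256(), length=32,
                   salt=None, info=b"e2e-demo").derive(secret)
    return base64.urlsafe_b64encode(derived)   # Fernet expects a base64 key

alice_box = Fernet(shared_fernet_key(alice_private, bob_private.public_key()))
bob_box = Fernet(shared_fernet_key(bob_private, alice_private.public_key()))

e2e_ciphertext = alice_box.encrypt(b"meet at noon")  # all the provider ever sees
print(bob_box.decrypt(e2e_ciphertext))               # b'meet at noon'
```

In the first model the provider can comply with a decryption order using material it already holds; in the second, compliance would require modifying the product itself, which is where the debates over ‘backdoors’, capability notices and ‘systemic weaknesses’ arise.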

United States

The legal situation in the US to compel decryption depends, at least in part, on the actor targeted. The US has no specific legislation dealing with encryption although other laws on government investigatory and surveillance powers may be applicable (Gonzalez, 2019). Forcing an individual to decrypt data or communications has generally been considered incompatible with the Fifth Amendment to the US Constitution (i.e. the right against self-incrimination), although there is no authoritative Supreme Court decision on the issue (Gill, 2018). Furthermore, the US government may be impeded by arguments that encryption software constitutes ‘speech’ protected by the First Amendment and Fourth Amendment (Cook Barr, 2016; Gonzalez, 2019; see also Daly, 2017).

For communications providers, the US has a provision in the Communications Assistance for Law Enforcement Act (CALEA) § 1002 on Capability Requirements for telecommunications providers, which states that providers will not be required to decrypt or ensure that the government can decrypt communications encrypted by customers, unless the provider has provided the encryption used (see e.g., Koops & Kosta, 2018). 4

In an attempt to avoid the difficulty of forcing individuals to decrypt, and the CALEA requirements’ application only to telecommunications companies, attention has been turned to technology companies, including equipment providers. Litigation has been initiated against companies that refuse to provide assistance; the most notable being the FBI-Apple dispute concerning the locked iPhone of one of the San Bernardino shooters (Gonzalez, 2019). Ultimately the FBI were able to unlock the iPhone without Apple’s assistance, by relying on a technical solution from Cellebrite (Brewster, 2018), thereby engaging in a form of ‘lawful hacking’ (Gonzalez, 2019). Absent a superior court’s ruling, or legislative intervention, the legal position regarding compelled assistance remains uncertain (Abraha, 2019).

Canada

Canada does not have specific legislation that provides authorities the power to compel decryption. Canadian authorities have imposed requirements on wireless communications providers through spectrum licensing conditions in the form of the Solicitor General Enforcement Standards for Lawful Interception of Telecommunications (SGES) Standard 12 which obliges providers to decrypt any communications they have encrypted on receiving a lawful request, but excludes end-to-end encryption “that can be employed without the service provider’s knowledge” (Gill, Israel, & Parsons, 2018, p. 59; West & Forcese, 2020). It appears the requirements only apply to encryption applied by the operator itself, can involve a bulk rather than case-by-case decryption requirement, do not require the operator to develop “new capabilities to decrypt communications they do not otherwise have the ability to decrypt”, and do not prevent operators employing end-to-end encryption (Gill, Israel, & Parsons, 2018, p. 60; West & Forcese, 2020).

There are provisions of the Canadian Criminal Code which give operators immunity from civil and criminal liability if they cooperate with law enforcement ‘voluntarily’ by preserving or disclosing data to law enforcement, even without a warrant (Gill, Israel, & Parsons, 2018, p. 57). There are also production orders and assistance orders that can be issued under the Criminal Code to oblige third parties to assist law enforcement, and disclose documents and records which could, in theory, be used to target encrypted communications (Gill, Israel, & Parsons, 2018, pp. 62-63), but West and Forcese (2020, p. 13) cast doubt on this possibility. There are also practical limitations, including the fact that many digital platforms and service providers do not have a physical presence in Canada, and thus are effectively beyond the jurisdiction of Canadian authorities (West & Forcese, 2020). Here, Mutual Legal Assistance Treaties (MLATs) could be used, although their use is notoriously beset with delay, and may only be effective if the other jurisdiction has its own laws to oblige third parties to decrypt data or communications (West & Forcese, 2020).

The Canadian Charter of Rights and Freedoms has a number of sections relevant to how undermining encryption can interfere with democratic freedoms, namely sections 2 (freedom of expression), 7 (security of the person), 8 (right against unreasonable search and seizure), and the right to silence and protection from self-incrimination contained in sections 7, 11 and 14 (West & Forcese, 2020). Case law from Canadian courts suggests that individuals cannot be compelled to decrypt their own data (Gill, 2018, p. 451). The Charter implications of BlackBerry’s assistance to the Canadian police in the R v Mirarchi 5 case were never ruled on, as the case was dropped (Gill, Israel, & Parsons, 2018, p. 58).

In the absence of a legislative proposal before the Canadian Parliament, it is difficult to surmise how, and whether, anti-encryption powers would run up against human rights protections. Yet any concrete proposal would likely face scrutiny in the courts given the impacts on Canadians’ Charter-protected rights.

New Zealand

In New Zealand, provisions in the Telecommunications (Interception Capability and Security) Act 2013 (TICSA) require network operators to ensure that their networks can be technically subjected to lawful interception (Cooper, 2018). 6 Section 10(3) requires that a public telecommunications network operator, on receipt of a lawful request, must decrypt encrypted communications carried by its network, if that operator has provided the means of encryption. Subsection 10(4) states that an operator is not required to decrypt communications that have been encrypted using a publicly available product supplied by another entity, and the operator is not under any obligation to ensure that a surveillance agency has the ability to decrypt communications.

It appears these provisions may entail that an operator cannot provide end-to-end encryption on its services, so that its networks can be subject to lawful interception - that is, it must maintain the cryptographic key where encryption is managed centrally by the service provider (Global Partners Digital, n.d.) and engineer a ‘back door’ into the service (Cooper, 2018). However, the NGO NZ Council for Civil Liberties considered the impact of this provision to be largely theoretical, as most services are offshore and the provision does not apply extraterritorially (Beagle, 2017). Yet, section 38 of TICSA allows the responsible minister to make “service providers” (discussed below) subject to provisions such as this on the same basis as “network operators”, which may give section 10 an extraterritorial reach (Keith, 2020).

There is a further provision in section 24 of TICSA that places both network operators and service providers (defined as anyone, whether in New Zealand or not, who provides a communications service to an end user in New Zealand) under obligations to provide ‘reasonable’ assistance to surveillance agencies with interception warrants or lawful interception authorities, including the decryption of communications, where they were the source of the encryption. Such companies do not have to decrypt encryption they have not provided, nor “ensure that a surveillance agency has the ability to decrypt any telecommunication” (TICSA s 24(4)(b)). It is unclear what “reasonable assistance” entails, and how it would apply to third party app providers such as WhatsApp (to which section 24 would prima facie apply, but not section 10 in the absence of a section 38 decision). It is also unclear how this provision would be enforced against offshore companies (Dizon et al., 2019, pp. 74-75).

There are further provisions in the Search and Surveillance Act 2012 which affect encryption. Section 130 includes a requirement that “the user, owner, or provider of a computer system […] offer reasonable assistance to law enforcement officers conducting a search and seizure including providing access information”, which could be used to force an individual or business to decrypt data and communications (Dizon et al., 2019, p. 61). There is a lack of clarity as to how the privilege against self-incrimination operates (Dizon et al., 2019, pp. 62-63). There is also a lack of clarity about what “reasonable assistance” may entail for companies, which will likely be third parties unable to avail themselves of the protection against self-incrimination (Dizon et al., 2019, pp. 65-66).

New Zealand has human rights protections enshrined in its Bill of Rights Act 1990, and section 21 contains the right to be secure against unreasonable searches and seizures. However, it “does not have higher law status and so can be overridden by contrary legislation…but there is at least some effort to avoid inconsistencies” (Keith, 2020). There is also the privilege against self-incrimination, “the strongest safeguard available in relation to encryption as it works to prevent a person from being punished for refusing to provide information that could lead to criminal liability” (Dizon et al., 2019, p. 7). There is no freestanding right to privacy in the New Zealand Bill of Rights, and so aspects of privacy must be found via other recognised rights (Butler, 2013), or may be protected via data protection legislation and New Zealand courts’ “relatively strong approach to unincorporated treaties, including human rights obligations” (Keith, 2020).

Despite being part of the FVEY communiques on encryption mentioned below, Keith (2020) views New Zealand’s domestic approach as more “cautious or ambivalent”, with “no proposal to follow legislation enacted by other Five Eyes countries”.

United Kingdom

The most significant law is the UK’s Investigatory Powers Act 2016 (henceforth IPA). 7 Section 253 allows a government minister, subject to approval by a 'Judicial Commissioner', to issue a ‘Technical Capability Notice’ (TCN) to any communications operator (which includes telecommunications companies, internet service providers, email providers, social media platforms, cloud providers and other ‘over-the-top’ services), whether UK-based or anywhere else in the world, imposing obligations on that provider. Such an obligation can include the operator having to remove “electronic protection applied by or on behalf of that operator to any communications or data”. The government minister must also consider technical practicalities such as whether it is ‘practicable’ to impose requirements on operators, and for the operators to comply. Section 254 provides that Judicial Commissioners conduct a necessity and proportionality test before approving a TCN. This means that a provider receiving a TCN would not be able to provide end-to-end encryption for its customers, and must ensure there is a method of decrypting communications. In other words, the provider must centrally manage encryption and maintain the decryption key (Smith, 2017a).

In November 2017, the UK Home Office released a Draft Communications Data Code of Practice for consultation, which clarified that a TCN would not require a telecommunications operator to remove encryption per se, but “it requires that operator to maintain the capability to remove encryption when subsequently served with a warrant, notice or authorisation” (UK Home Office, 2017, p. 75). Furthermore, it was reiterated that an obligation to remove encryption can only be imposed where “reasonably practicable” for the communications provider to comply with, and the obligation can only pertain to encryption that the communications provider has itself applied, or in circumstances when this has been done, for example, by a contractor on the provider’s behalf.

Later, in early 2018, after analysing responses to the Draft Code, the UK Home Office introduced draft administrative regulations to the UK Parliament, which were passed in March 2018. These regulations affirm the Home Office’s previous statements that TCNs require that operators “maintain the capacity” to disclose communications data on receipt of an authorisation or warrant, and such notices can only impose obligations on telecommunications providers to remove “electronic protection” applied by, or on behalf of, the provider “where reasonably practicable” (Ni Loideain, 2019, p. 186). This would seem to entail that encryption methods applied by the user are not covered by this provision (Smith, 2017b). However, Keenan (2019) argues that the regulations may “compel […] operators to facilitate the ‘disclosure’ of content by targeting authentication functions” which may have the effect of secretly delivering messages to law enforcement.

While some of the issues identified above with the UK’s TCNs may be clarified by these regulations, other issues remain. For example, the situation remains unclear for a provider wanting to offer end-to-end encryption to its customers without holding the means to decrypt them. Practical questions remain about how the provisions can be enforced against providers which may not be geographically based in the UK, such as technology companies and platforms which may or may not maintain offices in the UK. To date, there is also no public knowledge of whether any TCNs have been made, approved by Judicial Commissioners, and complied with by operators (Keenan, 2019).

In addition to TCNs, section 49 of the Regulation of Investigatory Powers Act (2000) (RIPA) allows law enforcement agencies in possession of a device to issue a notice to the device user or device manufacturer to compel them to unlock encrypted devices or networks (Keenan, 2019). The law enforcement officer must obtain permission from a judge on the grounds that it is “necessary in the interests of national security, for the purpose of preventing or detecting crime, or where it is in the interest of the economic well-being of the United Kingdom” (Keenan, 2019). Case law on section 49 notices in criminal matters has generally not found the provision’s use to force decryption to violate the privilege against self-incrimination, in sharp distinction to the US experience (Keenan, 2019).

It is unclear whether these provisions would withstand a challenge before the European Court of Human Rights on the basis of incompatibility with ECHR rights, especially Article 6 (right to a fair trial) and Article 8 (right to privacy).

Australia

In Australia the encryption debate commenced in June 2017 when then-Australian Prime Minister Turnbull (in)famously stated that “the laws of mathematics are very commendable, but the only law that applies in Australia is the law of Australia” (Pearce, 2017, para. 8). This remark, interpreted colloquially as a ‘war on maths’ (Pearce, 2017), gestured at an impending legislative proposal that would introduce provisions to weaken end-to-end encryption.

In August 2018, the Five Eyes Alliance met in a ‘Five Country Ministerial’ (FCM) and issued a communique that stated: “We agreed to the urgent need for law enforcement to gain targeted access to data, subject to strict safeguards, legal limitations, and respective domestic consultations” (Australian Government Department of Home Affairs, 2018, para. 18). The communique was accompanied by a Statement of Principles on Access to Evidence and Encryption, assented to by all FVEY governments (Australian Government Department of Home Affairs, 2018). The statement affirmed the important but non-absolute nature of privacy, and signalled a “pressing international concern” posed by law enforcement’s inability to access encrypted content. FVEY partners also agreed to abide by three principles in the statement: mutual responsibility; the paramount status of rule of law and due process; and freedom of choice for lawful access solutions. “Mutual responsibility” relates to industry stakeholders being responsible for providing access to communications data. The “freedom of choice” principle relates to FVEY members encouraging service providers to “voluntarily establish lawful access solutions to their products and services that they create or operate in our countries”, with the possibility of governments “pursu[ing] technological, enforcement, legislative or other measures to achieve lawful access solutions” if they “continue to encounter impediments to lawful access to information” (Australian Government Department of Home Affairs, 2018, paras. 34-35).

In the month following this meeting, the Australian government introduced what became the Telecommunications and Other Legislation Amendment (Assistance and Access) Act 2018 (Cth) (or ‘AA Act’), which was subsequently passed by the Australian Parliament in December 2018. The Act amends pre-existing surveillance legislation in Australia, including the Telecommunications Act 1997 (Cth) and the Telecommunications (Interception and Access) Act 1979 (Cth). It includes a series of problematic reforms that have extraterritorial reach beyond the Australian jurisdiction. 8

Specifically, three new mechanisms which seem (at least at face value) to be inspired by the UK’s IPA are introduced into the Telecommunications Act: Technical Assistance Requests (TARs), 9 Technical Assistance Notices (TANs) 10 and Technical Capability Notices (TCNs). 11 TARs can be issued by Australian security agencies 12 that may “ask the provider to do acts or things on a voluntary basis that are directed towards ensuring that the provider is capable of giving certain types of help.” 13 TARs can escalate to TANs, which compel assistance and impose penalties for non-compliance. The Australian Attorney-General can also issue TCNs, which “may require the provider to do acts or things directed towards ensuring that the provider is capable of giving certain types of help” or to actually do such acts and things.

While the language of the Australian TCNs is similar to that of the UK IPA, there is a much longer and more broadly worded list of “acts or things” that a provider can be asked to do on receipt of a TCN. 14 Although, as per section 317ZG, “systemic weaknesses” cannot be introduced, 15 there is still a significant potential impact on the security and privacy of encrypted communications. An important distinction between the Australian and UK TCNs is that the Australian notices are issued by the executive and are not subject to judicial oversight (Table 1).
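To make concrete why such powers sit uneasily with end-to-end encryption, consider the minimal sketch below (our own illustration, using the open source PyNaCl library; it is not drawn from the Act, the IPA, or any provider’s actual implementation). Keys are generated and held only on user devices, so a provider that merely relays ciphertext holds neither plaintext nor keys, and could only grant access by changing the design of its service, which is precisely the ‘systemic weakness’ concern.

# Illustrative sketch of end-to-end encryption (PyNaCl): the relaying provider
# only ever handles ciphertext, so it has nothing it could decrypt on request.
from nacl.public import PrivateKey, Box

# Keypairs are generated on each user's device; private keys never reach the provider.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts for Bob using her private key and Bob's public key.
sender_box = Box(alice_key, bob_key.public_key)
ciphertext = sender_box.encrypt(b"meet at noon")

# The provider stores and relays only ciphertext; without a private key it cannot decrypt.
receiver_box = Box(bob_key, alice_key.public_key)
assert receiver_box.decrypt(ciphertext) == b"meet at noon"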

The AA Act has extraterritorial reach beyond Australia in two main ways. The first is via obligations imposed on “designated communications providers” located outside Australia. “Designated communications providers” is defined extremely broadly to include, inter alia, carriers, carriage service providers, intermediaries and ancillary service providers, and any provider of an “electronic service” that has end-users in Australia, or of software likely to be used in connection with such a service. It also includes any “constitutional corporation” 16 that manufactures, installs, maintains or supplies devices for use, or likely to be used, in Australia, or that develops, supplies or updates software capable of being installed on a computer or device likely to be connected to a telecommunications network in Australia (Ford & Mann, 2019). Thus a very wide range of providers from Australia and overseas will fall within these definitions (McGarrity & Hardy, 2020). Failure to comply with notices may result in financial penalties for companies, yet it is not clear how such penalties may be enforced vis-à-vis companies which are not incorporated or located in Australia. Where a TAR is issued, the designated communications provider receives civil immunity 9 from damages that may arise from the request (for example, rendering phones or devices useless), which may incentivise compliance before escalation to an enforceable TAN or TCN (Ford & Mann, 2019).

The second aspect of the AA Act’s extraterritorial reach is the provision of assistance by Australian law enforcement to their counterparts via the enforcement of foreign laws. The TARs, TANs, and TCNs all involve “assisting the enforcement of the criminal laws of a foreign country, so far as those laws relate to serious foreign offences”. 17 This is reinforced by further amendments to the Mutual Assistance in Criminal Matters Act 1987 (Cth) that bypass MLAT processes and provide a conduit to the extraterritorial application of Australia’s surveillance laws. That is, Australian law enforcement agencies are able to assist foreign governments, on request, in accessing encrypted communications and/or designing new ways to access encrypted communications (as per TCNs), for the enforcement of those governments’ own criminal laws. 18 This may operate as a loophole through which foreign law enforcement agencies circumvent their own legal system’s safeguards and capitalise on Australia’s lack of a federal human rights framework (Ford & Mann, 2019).

Table 1: Overview of anti-encryption measures in each FVEY country

Relevant law/s
United States: Communications Assistance for Law Enforcement Act § 1002.
Canada: No specific legislation that provides authorities the power to compel decryption; a narrow obligation in the Solicitor General Enforcement Standards for Lawful Interception of Telecommunications (SGES), Standard 12.
New Zealand: Telecommunications (Interception Capability and Security) Act 2013, sections 10 and 24.
United Kingdom: Investigatory Powers Act 2016, section 253.
Australia: Telecommunications and Other Legislation Amendment (Assistance and Access) Act 2018 (Cth), section 317A.

Entities targeted
United States: Applies only to “telecommunications companies”.
Canada: Applies only to “wireless communication providers”.
New Zealand: Section 10 applies to “network operators”; section 24 applies to “network operators” and “service providers”.
United Kingdom: Any “communications operator” (which includes telecoms companies, internet service providers, email providers, social media platforms, cloud providers and other ‘over-the-top’ services).
Australia: The definition of “designated communications provider” is set out in section 317C. It includes, but is not limited to, “a carrier or carriage service provider”, a person who “provides an electronic service that has one or more end-users in Australia”, or a person who “manufactures or supplies customer equipment for use, or likely to be used, in Australia”.

Statutory obligations imposed on target
United States: Companies are not required to decrypt, or to ensure that the government can decrypt, communications encrypted by customers, unless the provider itself has provided the encryption used.
Canada: Providers must decrypt any communications they have encrypted themselves on receiving a lawful request. This appears not to apply to end-to-end encryption not applied by the provider.
New Zealand: Operators, on receipt of a lawful request to provide interception, must decrypt encrypted communications carried by their network if the operator has provided the means of encryption (s 10). Operators and providers must provide “reasonable” assistance to surveillance agencies with interception warrants or other lawful interception authorities, including the decryption of communications when they have provided the encryption (s 24).
United Kingdom: Operators are obliged to do certain things, which can include the removal of “electronic protection applied by or on behalf of that operator to any communications or data”. It is unclear whether a provider receiving a TCN would be able to provide end-to-end encryption for its customers.
Australia: Providers may be issued with Technical Assistance Requests (TARs), Technical Assistance Notices (TANs) and/or Technical Capability Notices (TCNs). TARs can be issued by Australian security agencies, which may “ask the provider to do acts or things on a voluntary basis that are directed towards ensuring that the provider is capable of giving certain types of help”. TARs can escalate to TANs, which compel assistance and impose penalties for non-compliance. The Australian Attorney-General can also issue TCNs, which “may require the provider to do acts or things directed towards ensuring that the provider is capable of giving certain types of help” or to actually do such acts and things.

Human rights protections
United States: US Constitution, notably the Fourth and Fifth Amendments; also the First Amendment, insofar as cryptographic code may be a form of protected free speech.
Canada: Canadian Charter of Rights and Freedoms: section 2 (freedom of expression), section 7 (security of the person), section 8 (right against unreasonable search and seizure), and the right to silence and protection from self-incrimination contained in sections 7, 11 and 14.
New Zealand: Human Rights Act 1993.
United Kingdom: Human Rights Act 1998; European Convention on Human Rights.
Australia: No comprehensive protection at the federal level; no right to privacy in the Australian Constitution.

Approval mechanisms for encryption powers’ exercise
United States: N/A.
Canada: Minister of Public Safety (executive branch).
New Zealand: Powers subject to interception warrants or other lawful interception authority; “indirect” judicial supervision (Keith, 2020).
United Kingdom: Approval by a Judicial Commissioner.
Australia: Approval by an administrative or executive officer (TCNs are approved by the Attorney-General). If a warrant or authorisation was previously required for the activity, it is still required after these reforms.

Extraterritorial application
United States: Does not apply extraterritorially.
Canada: Does not apply extraterritorially.
New Zealand: Section 10 does not apply extraterritorially unless a section 38 decision is made; section 24 applies to both NZ providers and foreign providers providing a service to any end-user in NZ.
United Kingdom: Applies to both UK-based and foreign-based communications operators.
Australia: Applies to both Australian and foreign-based providers; providers can receive notices to assist with the enforcement of foreign criminal laws.

Relevant court cases
United States: Apple-FBI.
Canada: R v Mirarchi.
New Zealand: None known.
United Kingdom: None known.
Australia: Not applicable.

Discussion

The recent legislative developments in Australia position it as a leading actor in the ongoing calls for a broader set of measures to weaken or undermine encryption. The AA Act introduces wide powers for Australian law enforcement and security agencies to request, or mandate, assistance in communications interception from a wide category of communications providers, internet and equipment companies, both in Australia and overseas, and permits foreign agencies to make requests to Australian agencies to use these powers in the enforcement of foreign laws. Compared to the other FVEY jurisdictions’ laws in Table 1, the AA Act’s provisions cover the broadest category of providers and companies, compel the broadest range of assistance acts, and do so with the weakest oversight mechanisms and no protections for human rights.

Australia’s AA Act also gives these provisions the broadest and most significant extraterritorial reach of the FVEY equivalents. While New Zealand and the UK also extend their assistance obligations to foreign entities, Australia’s AA Act goes further by providing assistance to foreign law enforcement agencies. This is a highly worrying development, since the AA Act facilitates the paradoxical enforcement of foreign criminal laws and circumvention of foreign human rights laws on behalf of foreign law enforcement agencies, through, inter alia, the coercion of transnational technology companies into designing new ways of undermining encryption at a global scale via Australian law in the form of TCNs.

The idea of jurisdiction shopping by FVEY law enforcement agencies may be applicable here: Australia has enacted powers with extraterritorial consequences that could operate to serve the wider FVEY alliance, especially given the lack of judicial oversight of TCNs and Australia’s weak human rights protections. Jurisdiction shopping concerns the strategic pursuit of legislative, policy and operational objectives in specific venues to achieve outcomes that may not be possible in other venues due to the local context. 19

The AA Act provisions expand legally permissible extraterritorial measures to obtain encrypted communications and, in theory, enable FVEY partners to ‘jurisdiction shop’ to exploit the lack of human rights protections in Australia. This is not the first time Australia has been an attractive jurisdiction shopping destination. One previous example relates to Operation Artemis, run by the Queensland Police, in which a website used for the dissemination of child exploitation material was relocated to Australian servers so that police could engage in a controlled operation and commit criminal offences (including the dissemination of child exploitation material) without criminal penalty (Høydal, Stangvik, & Hansen, 2017; McInnes, 2017). 20

Australia emerges as a strategic forum for FVEY partners to implement new laws and powers with extraterritorial reach because, unlike other FVEY members, Australia has no meaningful human rights protections that would prevent gross invasions arising from measures that undermine encryption, coupled with weak oversight mechanisms (McGarrity & Hardy, 2020). These considerations also relate to the pre-existing use of ‘regulatory arbitrage’ by FVEY members, which involves information being legally accessed and intercepted in one of the FVEY countries with weaker human rights protection, then being transferred to and used in other FVEY countries with more restrictive legal frameworks (Citron & Pasquale, 2010). This situation may allow authorisation for extraterritorial data gathering to, in effect, be funnelled through the ‘weak link’ of Australia. Thus, the AA Act presents an opportunity for FVEY partners to engage in further regulatory arbitrage by jurisdiction shopping their requests to access encrypted communications, and to mandate that designated communications providers (i.e., transnational technology companies) design and develop new ways to access encrypted communications, via Australia.

However, it is difficult to ascertain the extent to which the FVEY partners are indeed exploiting the Australian ‘weak link’, for two reasons. First, the FVEY alliance operates in a highly secretive manner. Second, the AA Act severely restricts transparency through the introduction of secrecy provisions and enhanced penalties for unauthorised disclosure, and the absence of judicial authorisation of the exercise of the powers (Table 1). There is very limited ex-post aggregated public reporting on the exercise of the powers. One of the few mechanisms is the Australian Department of Home Affairs annual report on the operation of the Telecommunications (Interception and Access) Act 1979 (Cth). The 2018-2019 report stated that seven TARs were issued: five to the Australian Federal Police and two to the New South Wales Police. Cybercrime and telecommunications offences were the two most common categories of crimes for which the TARs were issued, with the notable absence of any terrorism offences, the main rationale supporting the introduction of the powers. In the Australian Senate Estimates process in late 2019, it was revealed that the TAR powers had been used on a total of 25 occasions up to November 2019 (Sadler, 2020a). 21 The fact that only TARs have been issued may indicate that designated communications providers are complying with requests in the first instance, and thus there is no need to escalate to enforceable notices.

One possible, and as yet unresolved, countervailing development to the AA Act in the FVEY countries concerns the US introduction of the Clarifying Lawful Overseas Use of Data (CLOUD) Act, which aims to facilitate US and foreign law enforcement access to data held by US-based communications providers in criminal investigations, bypassing MLAT procedures (Abraha, 2019; see also Gstrein, 2020, this issue; Vazquez Maymir, 2020, this issue). Bilateral negotiations between the US and Australia regarding mechanisms for accessing (via US technology companies) and sharing e-evidence under the CLOUD Act are underway, and there have been some early questions and debates (Bogle, 2019; Hendry, 2020) as to whether Australia will comply with CLOUD requirements. Specifically, the CLOUD Act allows “foreign partners that have robust protections for privacy and civil liberties to enter into executive agreements with the United States to use their own legal authorities to access electronic evidence” (Department of Justice, n.d.). CLOUD agreements between the US and foreign governments should not include any obligations forcing communications providers to maintain data decryption capabilities, nor should they include any obligation preventing providers from decrypting data. 22 It is uncertain whether Australia would comply with CLOUD requirements given its aforementioned weak human rights framework and the absence of judicial oversight for the authorisation of the anti-encryption powers.

These concerns seem to have motivated the current Australian opposition party, Labor, to introduce a private member’s bill into the Australian Parliament in late 2019 to ‘fix’ some aspects of the AA Act, despite Labor’s bipartisan support for the law’s passage at the end of 2018. Notable fixes sought include the introduction of enhanced safeguards, including judicial oversight, and clarification that TARs, TANs, and TCNs cannot be used to force providers to build systemic weaknesses and vulnerabilities into their systems, including implementing or building a new decryption capability. At the time of writing, the Australian Parliament is considering the bill, although it is unlikely to pass given the government has indicated it will vote down Labor’s proposed amendments (Sadler, 2020b).

Conclusion

Laws to restrict encryption occur in the context of regulatory arbitrage (Citron & Pasquale, 2010). This paper has analysed new powers that allow Australian law enforcement and security agencies to request or mandate assistance in accessing encrypted communications, and that permit foreign agencies to make requests to Australian agencies to use these powers in the enforcement of foreign laws, taking advantage of a situation in which there is less oversight and fewer human rights or constitutional protections. The AA Act presents new opportunities for FVEY partners to leverage access to (encrypted) communications via Australia’s ‘legal backdoors’, which may undermine protections that might otherwise exist within local legal frameworks. This represents a troubling international development for privacy and information security.

Acknowledgements

The authors would like to acknowledge Dr Kayleigh Murphy for her excellent research assistance and the Computer Security and Industrial Cryptography (COSIC) Research Group at KU Leuven, the Law Science Technology Society (LSTS) Research Group at Vrije Universiteit Brussel, and the Department of Journalism in Maria Curie-Skłodowska University (Lublin, Poland) for the opportunity to present and receive feedback on this research. Finally, we thank Tamir Israel, Martin Kretschmer, Balázs Bodó, and Frédéric Dubois for their comprehensive peer-review comments and editorial review.

References

Abelson, H., Anderson, R., Bellovin, S. M., Benaloh, J., Blaze, M., Diffie, W., Gilmore, J., Green, M., Landau, S., Neumann, P. G., Rivest, R. L., Schiller, J. I., Schneier, B., Specter, M. A., & Weitzner, D. J. (2015). Keys under doormats: Mandating insecurity by requiring government access to all data and communications. Journal of Cybersecurity, 1(1), 69–79. https://doi.org/10.1093/cybsec/tyv009

Abraha, H. H. (2019). How Compatible is the US ‘CLOUD’ Act’ with Cloud Computing? A Brief Analysis. International Data Privacy Law, 9(3), 207–215. https://doi.org/10.1093/idpl/ipz009

Australian Constitution. https://www.aph.gov.au/about_parliament/senate/powers_practice_n_procedures/constitution

Australian Government Department of Home Affairs. (2018). Five country ministerial 2018. Australian Government Department of Home Affairs. https://www.homeaffairs.gov.au/about-us/our-portfolios/national-security/security-coordination/five-country-ministerial-2018

Australian Government, Department of Home Affairs. (2019). Telecommunications (Interception and Access) Act 1979: Annual Report 2018-19 [Report]. Australian Government, Department of Home Affairs. https://parlinfo.aph.gov.au/parlInfo/download/publications/tabledpapers/c424e8ec-ce9a-4dc1-a53e-4047e8dc4797/upload_pdf/TIA%20Act%20Annual%20Report%202018-19%20%7BTabled%7D.pdf;fileType=application%2Fpdf#search=%22publications/tabledpapers/c424e8ec-ce9a-4dc1-a53e-4047e8dc4797%22

Beagle, T. (2017, July 2). Why we support effective encryption [Blog post]. NZ Council for Civil Liberties. https://nzccl.org.nz/content/why-we-support-effective-encryption

Bell, S. (2013, November 25). Court rebukes CSIS for secretly asking international allies to spy on Canadian suspects travelling abroad. The National Post. https://nationalpost.com/news/canada/court-rebukes-csis-for-secretly-asking-international-allies-to-spy-on-canadian-terror-suspects

Bogle, A. (2019, October 31). Police want Faster Data from the US, but Australia’s Encryption Laws Could Scuttle the Deal. ABC News. https://www.abc.net.au/news/science/2019-10-31/australias-encryption-laws-could-scuttle-cloud-act-us-data-swap/11652618

Brewster, T. (2018, February 26). The Feds Can Now (Probably) Unlock Every iPhone Model in Existence. Forbes. https://www.forbes.com/sites/thomasbrewster/2018/02/26/government-can-access-any-apple-iphone-cellebrite/#76a735e8667a

Butler, P. (2013). The Case for a Right to Privacy in the New Zealand Bill of Rights Act. New Zealand Journal of Public & International Law, 11(1), 213–255.

Citron, D. K., & Pasquale, F. (2010). Network Accountability for the Domestic Intelligence Apparatus. Hastings Law Journal, 62, 1441–1494. https://digitalcommons.law.umaryland.edu/fac_pubs/991/

Comey, J. B. (2014). Going Dark: Are Technology, Privacy, and Public Safety on a Collision Course? Federal Bureau of Investigation. https://www.fbi.gov/news/speeches/going-dark-are-technology-privacy-and-public-safety-on-a-collision-course

Constitution Act, (1982). https://laws-lois.justice.gc.ca/eng/const/page-15.html

Cook Barr, A. (2016). Guardians of Your Galaxy S7: Encryption Backdoors and the First Amendment. Minnesota Law Review, 101(1), 301–339. https://minnesotalawreview.org/article/note-guardians-of-your-galaxy-s7-encryption-backdoors-and-the-first-amendment/

Cooper, S. (2018). An Analysis of New Zealand Intelligence and Security Agency Powers to Intercept Private Communications: Necessary and Proportionate? Te Mata Koi: Auckland University Law Review, 24, 92–120.

Daly, A. (2017). Covering up: American and European legal approaches to public facial anonymity after SAS v. France. In T. Timan, B. C. Newell, & B.-J. Koops (Eds.), Privacy in Public Space: Conceptual and Regulatory Challenges (pp. 164–183). Edward Elgar.

Daly, A., & Thomas, J. (2017). Australian internet policy. Internet Policy Review, 6(1). https://doi.org/10.14763/2017.1.457

Department of Justice. (n.d.). Frequently Asked Questions. https://www.justice.gov/dag/page/file/1153466/download

Dizon, M., Ko, R., Rumbles, W., Gonzalez, P., McHugh, P., & Meehan, A. (2019). A Matter of Security, Privacy and Trust: A study of the principles and values of encryption in New Zealand [Report]. New Zealand Law Foundation and University of Waikato.

Ford, D., & Mann, M. (2019). International Implications of the Telecommunications and Other Legislation Amendment (Assistance and Access) Act 2018. Australian Privacy Foundation. https://privacy.org.au/wp-content/uploads/2019/06/APF_AAAct_FINAL_040619.pdf

Froomkin, D. (2015). U.N. Report Asserts Encryption as a Human Right in the Digital Age. The Intercept. https://theintercept.com/2015/05/28/u-n-report-asserts-encryption-human-right-digital-age/

Gill, L. (2018). Law, Metaphor and the Encrypted Machine. Osgoode Hall Law Journal, 55(2), 440–477. https://doi.org/10.2139/ssrn.2933269

Gill, L., Israel, T., & Parsons, C. (2018). Shining a Light on the Encryption Debate: A Canadian Fieldguide [Report]. Citizen Lab; The Canadian Internet Policy & Public Interest Clinic. https://citizenlab.ca/2018/05/shining-light-on-encryption-debate-canadian-field-guide/

Global Partners Digital. (n.d.). World Map of Encryption Law and Policies. https://www.gp-digital.org/world-map-of-encryption/

Gonzalez, O. (2019). Cracks in the Armor: Legal Approaches to Encryption. Journal of Law, Technology & Policy, 2019(1), 1–46. http://illinoisjltp.com/journal/wp-content/uploads/2019/05/Gonzalez.pdf

Gstrein, O. (2020). Mapping power and jurisdiction on the internet through the lens of government-led surveillance. Internet Policy Review, 9(3). https://doi.org/10.14763/2020.3.1497

Hendry, J. (2020, January 14). Home Affairs Rejects Claims Anti-Encryption Laws Conflict with US CLOUD Act. IT News. https://www.itnews.com.au/news/home-affairs-rejects-claims-anti-encryption-laws-conflict-with-us-cloud-act-536339

Holyoke, T., Brown, H., & Henig, J. (2012). Shopping in the Political Arena: Strategic State and Local Venue Selection by Advocates. State and Local Government Review, 44(1), 9–20. https://doi.org/10.1177/0160323X11428620

Høydal, H. F., Stangvik, E. O., & Hansen, N. R. (2017, October 7). Breaking the Dark Net: Why the Police Share Abuse Pics to Save Children. VG. https://www.vg.no/spesial/2017/undercover-darkweb/?lang=en

European Court of Human Rights. (2010). European Convention on Human Rights. https://www.echr.coe.int/Documents/Convention_ENG.pdf

Investigatory Powers Act 2016 (UK), Pub. L. No. 2016 c. 25 (2016). http://www.legislation.gov.uk/ukpga/2016/25/contents/enacted

Investigatory Powers (Technical Capability) Regulations 2018 (UK). http://www.legislation.gov.uk/ukdsi/2018/9780111163610/contents

Keenan, B. (2019). State access to encrypted data in the United Kingdom: The ‘transparent’ approach. Common Law World Review. https://doi.org/10.1177/1473779519892641

Keith, B. (2020). Official access to encrypted communications in New Zealand: Not more powers but more principle? Common Law World Review. https://doi.org/10.1177/1473779520908293

Telecommunications Amendment (Repairing Assistance and Access) Bill 2019, (2019) (testimony of Kristina Keneally). https://parlinfo.aph.gov.au/parlInfo/download/legislation/bills/s1247_first-senate/toc_pdf/19S1920.pdf;fileType=application%2Fpdf

Koops, B.-J. (1999). The Crypto Controversy: A Key Conflict in the Information Society. Kluwer Law International.

Koops, B.-J., & Kosta, E. (2018). Looking for some light through the lens of “cryptowar” history: Policy options for law enforcement authorities against “going dark”. Computer Law & Security Review, 34(4), 890–900. https://doi.org/10.1016/j.clsr.2018.06.003

Ley, A. (2016). Vested Interests, Venue Shopping and Policy Stability: The Long Road to Improving Air Quality in Oregon’s Willamette Valley. Review of Policy Research, 33(5), 506–525. https://doi.org/10.1111/ropr.12190

Lyon, D. (2014). Surveillance, Snowden, and Big Data: Capacities, Consequences, Critique. Big Data & Society, 1(2). https://doi.org/10.1177/2053951714541861

Lyon, D. (2015). Surveillance After Snowden. Polity Press.

Mann, M., & Daly, A. (2019). (Big) data and the north-in-south: Australia’s informational imperialism and digital colonialism. Television and New Media, 20(4), 379–395. https://doi.org/10.1177/1527476418806091

Mann, M., Daly, A., Wilson, M., & Suzor, N. (2018). The Limits of (Digital) Constitutionalism: Exploring the Privacy-Security (Im)Balance in Australia. International Communication Gazette, 80(4), 369–384. https://doi.org/10.1177/1748048518757141

McGarrity, N., & Hardy, K. (2020). Digital surveillance and access to encrypted communications in Australia. Common Law World Review. https://doi.org/10.1177/1473779520902478

McInnes, W. (2017, October 8). Queensland Police Take Over World’s Largest Child Porn Forum in Sting Operation. Brisbane Times. https://www.brisbanetimes.com.au/national/queensland/queensland-police-behind-worlds-largest-child-porn-forum-20171007-gywcps.html

Molnar, A. (2017). Technology, Law, and the Formation of (il)Liberal Democracy? Surveillance & Society, 15(3/4), 381–388. https://doi.org/10.24908/ss.v15i3/4.6645

Molnar, A., Parsons, C., & Zouave, E. (2017). Computer network operations and ‘rule-with-law’ in Australia. Internet Policy Review, 6(1). https://doi.org/10.14763/2017.1.453

Murphy, H., & Kellow, A. (2013). Forum Shopping in Global Governance: Understanding States, Business and NGOs in Multiple Arenas. Global Policy, 4(2), 139–149. https://doi.org/10.1111/j.1758-5899.2012.00195.x

Mutual Assistance in Criminal Matters Act 1987 Compilation No. 35, (2016). https://www.legislation.gov.au/Details/C2016C00952

Nagel, P. (2006). Policy Games and Venue-Shopping: Working the Stakeholder Interface to Broker Policy Change in Rehabilitation Services. Australian Journal of Public Administration, 65(4), 3–16. https://doi.org/10.1111/j.1467-8500.2006.00500a.x

New Zealand Bill of Rights Act 1990. http://www.legislation.govt.nz/act/public/1990/0109/latest/DLM224792.html

Ni Loideain, N. (2019). A Bridge Too Far? The Investigatory Powers Act 2016 and Human Rights Law. In L. Edwards (Ed.), Law, Policy and the Internet (2nd ed., pp. 165–192). Hart.

Parsons, C. A., & Molnar, A. (2017). Horizontal Accountability and Signals Intelligence: Lesson Drawing from Annual Electronic Surveillance Reports. SSRN. http://dx.doi.org/10.2139/ssrn.3047272

Pearce, R. (2017, July 27). Australia’s War on Maths Blessed with Gong at Pwnie Awards. ComputerWorld. https://www.computerworld.com.au/article/625351/australia-war-maths-blessed-gong-pwnie-awards/

Pfefferkorn, R. (2020, January 30). The EARN IT Act: How to Ban End-to-End Encryption Without Actually Banning It [Blog post]. The Center for Internet Society. https://cyberlaw.stanford.edu/blog/2020/01/earn-it-act-how-ban-end-end-encryption-without-actually-banning-it

Pralle, S. (2003). Venue Shopping, Political Strategy, and Policy Change: The Internationalization of Canadian Forest Advocacy. Journal of Public Policy, 23(3), 233–260. https://doi.org/10.1017/S0143814X03003118

Regulation of Investigatory Powers Act 2000, Pub. L. No. 2000 c. 23 (2000). http://www.legislation.gov.uk/ukpga/2000/23/contents

Roach, K. (2011). The 9/11 Effect: Comparative Counter-Terrorism. Cambridge University Press.

Sadler, D. (2020a, February 3). Encryption laws not used to fight terrorism [Blog post]. InnovationAus. https://www.innovationaus.com/encryption-laws-not-used-to-fight-terrorism/

Sadler, D. (2020b, February 14). No encryption fix until at least October [Blog post]. InnovationAus. https://www.innovationaus.com/no-encryption-fix-until-at-least-october/

Search and Surveillance Act, (2012). http://www.legislation.govt.nz/act/public/2012/0024/latest/DLM2136536.html

Smith, G. (2017, May 8). Back doors, black boxes and #IPAct technical capability regulations [Blog post]. Graham Smith’s Blog on Law, IT, the Internet and Online Media. http://www.cyberleagle.com/2017/05/back-doors-black-boxes-and-ipact.html

Smith, G. (2017, May 29). Squaring the circle of end to end encryption [Blog post]. Graham Smith’s Blog on Law, IT, the Internet and Online Media. https://www.cyberleagle.com/2017/05/squaring-circle-of-end-to-end-encryption.html

Solicitor General. (2008). Solicitor General’s Enforcement Standards for Lawful Interception of Telecommunications. https://perma.cc/NQB9-ZHPY

Suzor, N., Pappalardo, K., & McIntosh, N. (2017). The Passage of Australia’s Data Retention Regime: National Security, Human Rights, and Media Scrutiny. Internet Policy Review, 6(1). https://doi.org/10.14763/2017.1.454

Telecommunications Act, (1997). https://www.legislation.gov.au/Details/C2017C00179

Telecommunications and Other Legislation Amendment (Assistance and Access) Act 2018, Pub. L. No. 148 (2018). https://www.legislation.gov.au/Details/C2018A00148

Telecommunications (Interception Capability and Security) Act 2013 (NZ), (2013). http://www.legislation.govt.nz/act/public/2013/0091/22.0/DLM5177923.html

United Kingdom Home Office. (2017). Communications Data Draft Code of Practice. https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/663675/November_2017_IPA_Consultation_-_Draft_Communications_Data_Code_of_Pract....pdf

US Telecommunications: Assistance Capability Requirements, 47 U.S.C. § 1002 (1994). https://www.law.cornell.edu/rio/citation/108_Stat._4280

Vazquez Maymir, S. (2020). Anchoring the Need to Revise Cross-Border Access to E-Evidence. Internet Policy Review, 9(3). https://doi.org/10.14763/2020.3.1495

West, L., & Forcese, C. (2020). Twisted into knots: Canada’s challenges in lawful access to encrypted communications. Common Law World Review. https://doi.org/10.1177/1473779519891597

Williams, G., & Reynolds, D. (2017). A charter of rights for Australia (4th ed.). NewSouth Press.

Wilson, M., & Mann, M. (2017, September 7). Police Want to Read Encrypted Messages, but They Already Have Significant Power to Access our Data. The Conversation. https://theconversation.com/police-want-to-read-encrypted-messages-but-they-already-have-significant-power-to-access-our-data-82891

Zuan, N., Roos, C., & Gulzau, F. (2016). Circumventing Deadlock Through Venue-shopping: Why there is more than just talk in US immigration politics in times of economic crisis. Journal of Ethnic and Migration Studies, 42(10), 1590–1609. https://doi.org/10.1080/1369183X.2016.1162356

Footnotes

1. The FVEY partnership is a comprehensive intelligence alliance formed after the Second World War, formalised under the UKUSA Agreement (see e.g., Mann & Daly, 2019).

2. Cth stands for Commonwealth, which means “federal” legislation, as distinct from state-level legislation.

3. At the state and territory level, Victoria, Queensland and the Australian Capital Territory have human rights laws; however, the surveillance powers examined in this article fall under Commonwealth jurisdiction, rendering these state and territory based protections inapplicable. See: Charter of Human Rights and Responsibilities Act 2006 (Vic); Human Rights Act 2019 (QLD); Human Rights Act 2004 (ACT).

4. However, the draft EARN IT bill currently before the US Congress, if enacted, may impact negatively upon providers’ ability to offer end-to-end encrypted messaging. See Pfefferkorn (2020).

5. R v Mirarchi involved BlackBerry providing the Canadian police with a key which allowed them to decrypt one million BlackBerry messages (Gill, Israel & Parsons, 2018, pp. 57-58). The legal basis and extent of BlackBerry’s assistance to the Canadian police was unclear from the ‘heavily redacted’ court records (West & Forcese, 2020).

6. For a full picture of New Zealand legal provisions which may affect encryption see Dizon et al. (2019).

7. For additional provisions in UK law which may be relevant to encryption see Keenan (2019).

8. The analysis presented here focuses on Schedule 1 of the AA Act. Schedule 2 of the AA Act introduces computer access warrants that allow law enforcement to covertly access and search devices, and to conceal the fact that devices have been accessed.

9. S 317G.

10. S 317L.

11. S 317T.

12. Namely ‘the Director‑General of Security, the Director‑General of the Australian Secret Intelligence Service, the Director‑General of the Australian Signals Directorate or the chief officer of an interception agency’.

13. Namely ‘ASIO, the Australian Secret Intelligence Service, the Australian Signals Directorate or an interception agency’.

14. For example, “removing one or more forms of electronic protection that are or were applied by, or on behalf of, the provider”, “installing, maintaining, testing or using software or equipment” and “facilitating or assisting access to… a facility, customer equipment, electronic services and software” are included in the list of ‘acts or things’ that a provider may be asked to do via these provisions. The complete list of ‘acts or things’ is set out in section 317E.

15. According to AA Act s 317B, a systemic vulnerability means “a vulnerability that affects a whole class of technology, but does not include a vulnerability that is selectively introduced to one or more target technologies that are connected with a particular person” and a systemic weakness means “a weakness that affects a whole class of technology, but does not include a weakness that is selectively introduced to one or more target technologies that are connected with a particular person.”

16. A category which, according to paragraph 51(xx) of the Australian Constitution, comprises “foreign corporations, and trading or financial corporations formed within the limits of the Commonwealth”.

17. S 317A; Table 1.

18. AA Act s 15CC(1); Surveillance Devices Act 2004 (Cth) ss 27A(4) and (4)(a).

19. Analyses of policy venue shopping have been conducted in relation to a range of policy areas, inter alia, immigration, environmental, labour, intellectual property, and rehabilitation policies (see e.g., Ley, 2016; Holyoke, Brown, & Henig, 2012; Pralle, 2003; Zuan, Roos, & Gulzau, 2016; Nagel, 2006; Murphy & Kellow, 2013). According to Pralle (2003, p. 233) a central “component of any political strategy is finding a decision setting that offers the best prospects for reaching one’s policy goals, an activity referred to as venue shopping”. Further, Murphy and Kellow (2013, p. 139) argue that policy venue shopping may be a political strategy deployed at global levels where “entrepreneurial actors take advantage of ‘strategic inconsistencies’ in the characteristics of international policy arenas”.

20. A further example that demonstrates regulatory arbitrage between FVEY members from the perspective of Canada, brought to light in 2013, involved Canada’s domestic security intelligence service (CSIS) being found by the Federal Court to have ‘breached duty of candour’ by secretly refusing to disclose their leveraging of FVEY networks when it applied for warrants during an international terrorism investigation involving two Canadian suspects (Bell 2013).

21. It should be noted that, due to the overlapping time frames and aggregated nature of reporting, the 25 occasions on which the powers were used may also include some of the seven occasions reported in the most recent Home Affairs annual report.

22. CLOUD Act s 105 (b) (3). Note: The US Department of Justice claims the CLOUD Act is “encryption neutral” in that “neither does it prevent service providers from assisting in such decryption, or prevent countries from addressing decryption requirements in their own domestic laws” (Department of Justice, n.d.).

Going global: Comparing Chinese mobile applications’ data and user privacy governance at home and abroad


This paper is part of Geopolitics, jurisdiction and surveillance, a special issue of Internet Policy Review guest-edited by Monique Mann and Angela Daly.

In February 2019, the short video sharing and social mobile application TikTok was fined a record-setting US$ 5.7 million by the US Federal Trade Commission for violating the Children’s Online Privacy Protection Act by failing to obtain parental consent and deliver parental notification. TikTok agreed to pay the fine (Federal Trade Commission, 2019). This settlement reflects several significant developments. Owned by the Chinese internet company ByteDance, TikTok is popular worldwide, predominantly among young mobile phone users, while most commercially successful Chinese internet companies are still based in the Chinese market. Such global reach and commercial success make Chinese mobile applications pertinent sites of private governance on the global scale (see Cartwright, 2020, this issue). China-based mobile applications therefore need to comply with domestic statutory mechanisms as well as with the privacy protection regimes and standards of the jurisdictions into which they expand, such as the extraterritorial application of Article 3 of the EU’s General Data Protection Regulation (GDPR).

To examine how globalising Chinese mobile apps respond to varying data and privacy governance standards when operating overseas, we compare the Chinese and overseas versions of four sets of China-based mobile applications: (1) Baidu mobile browser, a mobile browser with a built-in search engine owned and developed by Chinese internet company Baidu; (2) Toutiao and TopBuzz, mobile news aggregators developed and owned by ByteDance; (3) Douyin and TikTok, mobile short video-sharing platforms developed and owned by ByteDance, with the former only available in Chinese app stores and the latter exclusively in international app stores; and (4) WeChat and Weixin, a social application developed and owned by Chinese internet company Tencent. Together, these four mobile applications represent the global reach of flagship China-based mobile apps and a wide range of functions: search and information, news content, short videos and social networking. They also represent a mix of more established (Baidu, Tencent) and up-and-coming (ByteDance) Chinese internet companies. Lastly, this sample demonstrates varying degrees of commercial success: all offer services globally, with Baidu browser the least commercially successful and TikTok the most successful.

An earlier study shows that Chinese web services had a poor track record in privacy protection: back in 2006, before China had in place a national regime of online privacy protection, among 82 commercial websites in China, few posted a privacy disclosure and even fewer followed the four fair information principles of notice, choice, access and security (Kong, 2007). These four principles are meant to enhance self-regulation of the internet industry by providing consumers with notice, control, security measures, and the ability to view and contest the accuracy and completeness of data collected about them (Federal Trade Commission, 1998). In 2017, only 69.6 percent of the 500 most popular Chinese websites had disclosed their privacy policies (Feng, 2019). These findings suggest a significant gap between data protection requirements on paper and protection in practice (Feng, 2019). In a recent study, Fu (2019) finds improvement in the poor privacy protection track record of the three biggest internet companies in China (Baidu, Alibaba, and Tencent). Her study shows that BAT’s privacy policies are generally compliant with the Chinese personal information protection provisions but lack sufficient consideration of transborder data flows and of changes of ownership (such as mergers and acquisitions) (Fu, 2019). Moreover, the privacy policies of BAT offer more notice than choice: users are forced either to accept the privacy policy or to forgo use of the web services (Fu, 2019, p. 207). Building on these findings, this paper asks: does the same app differ in data and privacy protection measures between its international and Chinese versions? How are these differences registered in the app’s user interface design and privacy policies?

In the following analysis, we first outline the evolving framework of data and privacy protection that governs the design and operation of China-based mobile apps. The next section provides a background overview of the key functions, ownership information and business strategies of the examined apps. The walkthrough of app user interface design studies how a user experiences privacy and data protection features at various stages of app usage. Last, we present a comparison of privacy policies and terms of service between the two versions of the same China-based apps to identify differences in data and privacy governance. We find that not only do different apps vary in data and privacy protection, but the international and Chinese versions of the same app also show discrepancies.

Governance ‘of’ globalising Chinese apps

Law and territory have always been at the centre of debates about the regulation and development of the internet (Goldsmith & Wu, 2006; Kalathil & Boas, 2003; Steinberg & Li, 2016). Among others, China has been a strong proponent of internet sovereignty in global debates about internet governance and digital norms. The 2010 white paper titled The Internet in China enshrines the concept of internet sovereignty into the governing principles of the Chinese internet. It states: “within Chinese territory the internet is under the jurisdiction of Chinese sovereignty” (State Council Information Office, 2010). The principle of internet sovereignty was later reiterated by the Cyberspace Administration of China (CAC), the top internet-governing body since 2013, to recognise that “each government has the right to manage its internet and has jurisdiction over information and communication infrastructure, resources and information and communication activities within their own borders” (CAC, 2016).

Under the banner of internet sovereignty, the protection of data and personal information in China takes a state-centric approach, which comes in the form of government regulations and government-led campaigns and initiatives. The appendix outlines key regulations, measures and drafting documents. Without an overarching framework for data protection, China’s approach is characterised by a “cumulative effect” (de Hert & Papakonstantinou, 2015), composed of a multitude of sector-specific legal instruments promulgated in a piecemeal fashion. While previous privacy and data protection measures were dispersed across various government agencies, laws and regulations, the first national standard for personal data and privacy protection was put forth only in 2013. The promulgation of the Cybersecurity Law in 2016 is a major step forward in the nation’s privacy and data protection efforts, despite the policy priority of national security over individual protection. Article 37 of the Cybersecurity Law stipulates that personal information and important data collected and produced by critical information infrastructure providers during their operations within the territory of the People’s Republic of China shall be stored within China. Many foreign companies have complied, either as a preemptive goodwill gesture or as a legal requirement, in order to access, compete, and thrive in the Chinese market. For example, in 2018, Apple came under criticism for moving the iCloud data generated by users with a mainland Chinese account to data management firm Guizhou-Cloud Big Data, a data storage company of the local government of Guizhou province (BBC, 2016). LinkedIn, Airbnb (Reuters, 2016), and Evernote (Jao, 2018) have stored mainland user data in China, even prior to the promulgation of the Cybersecurity Law. The Chinese government has also asked transnational internet companies to form joint ventures with local companies to operate data storage and cloud computing businesses, such as Microsoft Azure’s cooperation with Century Internet and Amazon AWS’s cooperation with Sinnet Technology (Liu, 2019).

The Chinese state intervenes in a wide range of online activities, including, among other things, imposing data localisation requirements on domestic and foreign companies (McKune & Ahmed, 2018). The Chinese government attributes data localisation requirements to national security and the protection of personal information, on the basis that the transfer of personal and sensitive information overseas may undermine the security of data (Xu, 2015). Others point to the recurring themes of technological nationalism and independence underlying the Cyberspace Administration of China’s prioritisation of security over personal privacy and business secrets (Liu, 2019). As captured in President Xi’s statement that “without cybersecurity comes no national security”, data and privacy protection is commonly framed as an issue of internet security (Gierow, 2014).

There is a growing demand for the protection of personal information among internet users and a growing number of government policies pertaining to the protection of personal information in China (Wang, 2011). Since 2016, the Chinese government has been playing an increasingly active role in enforcing a uniform set of rules and standardising the framework of privacy and data protection. As of July 2019, there were 16 national standards, 10 local standards and 29 industry standards in effect providing guidelines on personal information protection. However, there is no uniform law or national authority to coordinate data protection in China. The right to privacy or the protection of personal information (the two are usually interchangeable in the Chinese context) often comes as an auxiliary article alongside the protection of other rights. Whereas jurisdictions such as the EU have set up Data Protection Authorities (DPAs), independent public entities that supervise compliance with data protection regulations, in China the application and supervision of data protection have fallen on private companies and state actors respectively. User complaints about violations of data protection laws are mostly submitted to, and handled by, the private companies themselves rather than an independent agency. This marks the decisive difference underlying China’s and the EU’s approaches to personal data processing: in China, data protection is aimed exclusively at the individual as consumer, whereas in the EU the data protection recipient is regarded as an individual or a data subject, and the protection of personal data is both a fundamental right and conducive to the trade of personal data within the Union, as stipulated in Article 1 of the General Data Protection Regulation (de Hert & Papakonstantinou, 2015).

The minimal pre-existing legal framework and the self-regulatory regime of privacy and data protection by Chinese internet platform companies have given rise to rampant poor privacy and data protection practices, even among the country’s largest and leading internet platforms. Different Chinese government ministries have tackled the poor data and privacy practices of mobile apps and platforms in rounds of “campaign style” (运动式监管) regulation, a top-down approach often employed by the Chinese government to provide solutions to emerging policy challenges (Xu, Tang, & Guttman, 2019). For instance, Alibaba’s payment service Alipay, its credit scoring system Sesame Credit, Baidu, Toutiao, and Tencent have all shown poor track records of data and privacy protection and have come under government scrutiny (Reuters, 2018). Alipay was fined by the People’s Bank of China in 2018 for collecting users’ financial information outside the scope defined in the Cybersecurity Law (Xinhua, 2018). The Ministry of Industry and Information Technology publicly issued a warning to Baidu and ByteDance’s Toutiao for failing to properly notify users about which data they collect (Jing, 2018).

As China experienced exponential mobile internet growth, mobile apps stood out as a salient regulatory target. The Cyberspace Administration of China put forth the Administrative Rules on Information Services via Mobile Internet Applications in 2016, which distinguish the duties of mobile app stores from those of mobile apps. Mobile apps, in particular, bear six regulatory responsibilities: 1) enforce real-name registration and verify the identity of users through a cell phone number or other personally identifiable information; 2) establish data protection mechanisms to obtain consent and disclose the collection and use of data; 3) establish comprehensive information gatekeeping mechanisms to warn, limit or suspend accounts that post content violating laws or regulations; 4) safeguard privacy during app installation processes; 5) protect intellectual property; and 6) obtain and store user logs for sixty days.

As more China-based digital platforms join the ranks of the world’s largest companies by measures of user population, market capitalisation and revenues (Jia & Winseck, 2018), scholarly studies have started to grapple with the political implications of their expansion. Existing studies call attention to the distinctions between global and domestic versions of the same Chinese websites and mobile applications in information control and censorship activities, and show that Chinese mobile apps and websites are lax and inconsistent at content control when they go global (Ruan, Knockel, Ng, & Crete-Nishihata, 2016; Knockel, Ruan, Crete-Nishihata, & Deibert, 2018; Molloy & Smith, 2018). To ameliorate these dilemmas, some China-based platforms have designed different versions of their products that serve domestic and international users separately. Yet the data and privacy protection practices of Chinese mobile apps are under-studied, especially as they embark on a global journey. This is an ever more pressing issue as Chinese internet companies that have been successful at growing their international businesses, such as Tencent and ByteDance, struggle to provide a seamless experience for international users while complying with data and content regulations at home.

Methods

We employ a mixed-method approach to investigate how globalising Chinese mobile apps differ in data and privacy governance between their Chinese and international versions, the latter accessed through Canadian app stores. While Baidu Search, TikTok, WeChat, and TopBuzz do not appear to have region-based features, the actual installation package may or may not differ based on where a user is located and downloads the apps from. First, we conducted an overview of the tested mobile apps and their functions, looking at ownership, revenue, and user population. Each app’s function and business model has a direct bearing on its data collection and usage. Second, to study how mobile apps structure and shape end users’ experience with regard to data and privacy protection, we deployed the walkthrough method (Light, Burgess, & Duguay, 2018). We tested both the Android and iOS versions of the same app. In the case of China-based apps (i.e., Douyin and Toutiao), we downloaded the Android version from the corresponding official website of each service and the iOS version from the Chinese regional Apple App Store. For the international-facing apps (i.e., TikTok and TopBuzz), we downloaded the Android versions from the Canadian Google Play Store and the iOS versions from the Canadian Apple App Store. Baidu and WeChat do not offer separate versions for international and Chinese users; instead, the distinction is made when users register their account. After downloading each app, we systematically stepped through two stages of app usage: app entry and registration, and discontinuation of use. We conducted the walkthrough on multiple Android and Apple mobile devices in August 2019.

In addition, we conducted content analysis of the privacy policies and terms of service of each mobile app. These documents demonstrate the governance by mobile apps as well as the governance of mobile apps within certain jurisdictions. They are also key legal documents that set the conditions of users’ participation online and lay claim to the institutional power of the state (Stein, 2013). We examined a total of 15 privacy policies and terms of service in Chinese and English, retrieved in July 2019. The number of documents examined for each app is as follows: Baidu (2), Weixin (2), WeChat (2), TopBuzz (2), TikTok (3), Douyin (2), Toutiao (2). We then conducted content analysis of the mobile app privacy policies and terms of service along five dimensions: data collection, usage, disclosure, transfer, and retention. For data collection, we looked for items that detailed the types of information collected, the app’s definitions of personally identifiable information, and the possibility of opting out of the data collection process; for data usage, we looked for terms and conditions that delineated third party use; for disclosure, we looked at whether the examined app would notify its users in the case of privacy policy updates, mergers and acquisitions, and data leakages; for data transfer and retention, we examined whether the app specified security measures such as encryption of user data, emergency measures in case of data leaks, and terms and conditions of data transfer, as well as the specific location and duration of data retention.
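The coding itself was a manual close reading of these 15 documents. Purely to illustrate how the five dimensions could be operationalised for a first-pass screen (the keyword lists and helper function below are our own hypothetical example, not the authors’ coding instrument), a simple rubric might look like the following.

# Hypothetical first-pass screening rubric for the five coding dimensions;
# the indicative keywords are illustrative, not the authors' coding scheme.
DIMENSIONS = {
    "collection": ["collect", "personally identifiable", "opt out"],
    "usage": ["third party", "third-party", "advertis"],
    "disclosure": ["notify", "merger", "acquisition", "breach"],
    "transfer": ["transfer", "cross-border", "encrypt"],
    "retention": ["retain", "retention", "stored for", "duration"],
}

def screen_policy(text: str) -> dict:
    """Return, for each dimension, the indicative terms that appear in a policy text."""
    lowered = text.lower()
    return {dim: [term for term in terms if term in lowered] for dim, terms in DIMENSIONS.items()}

# Example: a fragment that mentions retention and third-party sharing.
sample = "We retain your information for 60 days and may share it with third party advertisers."
print(screen_policy(sample))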

Research limitations

Due to network restrictions, our walkthrough is limited to the Canadian-facing versions of these China-based apps. For each mobile app we studied, its parent company offers only one version of an international-facing app and one version of a China-facing app on its official website. Yet, even though there is only one international-facing app for each of the products we analysed, it remains to be tested whether the app interface, including the app’s notification settings, differs when downloaded and/or launched in different jurisdictions. Moreover, our research is based on a close reading of the policy documents put together by mobile app companies. It does not indicate whether these companies actually comply with their policy documents in the operation of their services, nor does it address the pitfalls of the notice and consent regime (Martin, 2013). Existing research has already shown that, under the Android system, there are many instances of potential inconsistencies between what an app’s policy states and what the app’s code appears to do (Zimmeck et al., 2016).

Overview of apps

Baidu Search

The Baidu App is the flagship application developed by Baidu, one of China’s leading internet and platform companies. The Baidu App provides a search function but also feeds users highly personalised content based on data and metadata generated by users. Often regarded as the Chinese counterpart of Google, Baidu focuses mainly on online search, online advertising and artificial intelligence. In 2018, the daily active users of the Baidu App reached 161 million, a 24% jump from 2017. Although Baidu has embarked on many foreign ventures and expansion projects, according to its annual report the domestic market still accounted for 98% of Baidu’s total revenue in 2016, 2017, and 2018. In terms of revenue composition, Baidu’s business model is based on online advertising. The major shareholders of Baidu are its CEO Robin Yanhong Li (31.7%) and Baillie Gifford (5.2%), an investment management firm headquartered in Edinburgh, Scotland.

TikTok vs Douyin, TopBuzz vs Toutiao

TikTok, Douyin, TopBuzz and Toutiao are among the flagship mobile apps in ByteDance’s portfolio. ByteDance represents a new class of up-and-coming Chinese internet companies competing for global markets through diversification and the merger and acquisition of foreign apps. ByteDance acquired the US video app Flipagram in 2017 and the France-based News Republic in 2017, and invested in the India-based news aggregator Dailyhunt. TikTok, first created in 2016, was rebranded following ByteDance’s US$ 1 billion acquisition of Musical.ly in 2018. The Chinese version of TikTok, Douyin, was released in 2016 by ByteDance and is the leading short-video platform in the country. The Douyin app has several features that are particular to the Chinese market and regulation. For example, the #PositiveEnergy hashtag was integrated into the app in an effort to align with the state’s political agenda of promoting Chinese patriotism and nationalism (Chen, Kaye, & Zeng, 2020). Douyin also differs from TikTok in its terms of service, which state that content undermining the regime, overthrowing the socialist system, inciting secessionism, or subverting the unification of the country is forbidden on the platform (Chen, Kaye, & Zeng, 2020; Kaye, Chen, & Zeng, 2020). No such provision exists in TikTok’s terms. ByteDance’s Chinese news and information app Toutiao was launched in 2012, followed by its English version TopBuzz in 2015 for the international market.

Dubbed the “world’s most valuable startup” (Byford, 2018), ByteDance secured investment from Softbank and Sequoia Capital. ByteDance has made successful forays into North American, European and Southeast Asian markets, reaching 1 billion monthly active users globally in 2019 (Yang, 2019). It is one of the most successful and truly global China-based mobile app companies. The company focuses on using artificial intelligence (AI) and machine learning algorithms to source and push content to its users. To accelerate its global reach, ByteDance has recruited top-level management from Microsoft and Facebook for AI and global strategy development.

Both apps and their overseas versions have received considerable legal and regulatory scrutiny. In 2017, Toutiao was accused by the Beijing Cyberspace and Informatisation Office of spreading pornographic and vulgar information. In the 2018 Sword Net Action, China’s National Copyright Administration summoned Douyin to better enforce copyright law and to put in place a complaint mechanism for reporting illegal content (Yang, 2018). Reaching millions of young users, TikTok was temporarily banned by an Indian court for “degrading culture and encourag[ing] pornography” and by Indonesia’s Ministry of Communication and Information Technology for spreading pornography, inappropriate content and blasphemy. TikTok attempted to resolve the bans by building data centres in India and hiring more content moderators (Sharma & Niharika, 2019).

WeChat/Weixin

WeChat, or Weixin, is China’s most popular mobile chat app and the fourth largest in the world. It is a paradigmatic example of the infrastructuralisation of platforms, in which an app bundles and centralises many different functions, such as digital payment, group buying and taxi hailing, into one super-app (Plantin & de Seta, 2019). Owned by Tencent, one of China’s internet behemoths, WeChat has a user base of 1 billion, though Tencent has not updated the number of its international users since 2015 (Ji, 2015). WeChat’s success was built upon Tencent’s previous social networking advantages.

Unlike ByteDance, which separates its domestic and international users by developing two different versions of its major products (i.e., the internationally-facing TikTok can only be downloaded in international app stores whereas Douyin can only be downloaded in Chinese app stores and Apple’s China-region App Store), Tencent differentiates WeChat (international) and Weixin (domestic) users by the phone number a user originally signs up with. In practice, users download the same WeChat/Weixin app from either international or Chinese app stores. The app then decides whether the user is an international or Chinese user during the account registration process. Aside from certain functionalities, such as Wallet, that are exclusive to Chinese users, the overall design of the app and the processes of account registration and deletion are the same for international and domestic users.

App walkthrough

We conducted app walkthroughs to examine and compare user experience of data and privacy protection during the app registration and account deletion processes. Figure 1 compares the walkthrough results.

Android-iOS difference

Registration processes for Baidu, Douyin, Toutiao and WeChat differ between the Android and iOS versions. The Android and iOS registration processes for TopBuzz and TikTok are similar, so they are recorded in a single timeline in Figure 1. In general, app registration on iOS devices comprises more steps than on Android, meaning that the apps need to request more function-specific authorisation from users. In the Android versions, access to certain types of data is granted by default when users install and use the app; users need to change authorisations within the app or in the device’s privacy settings. For example, TopBuzz and TikTok, both owned by ByteDance, set push notifications as the default option without prompting for user consent. If users want to change this setting, they need to do so via their device’s privacy settings.

“Ask until consent”

All Chinese versions of the apps prompt a pop-up window displaying a summary privacy notification, whereas the Canadian versions do not. However, the pop-up reminder does not give users the choice to continue using the app without ticking “I agree”: if a user does not agree, the app shows the notice again until consent is obtained to proceed to the next step. This reflects the failure of the notice and choice approach to privacy protection, in which users are left with no option but to accept the terms or relinquish use of the app (Martin, 2013). It also mirrors and reaffirms existing studies on the lack of choice when users do not agree with a privacy notice. For Douyin, TikTok, Toutiao, TopBuzz, and Baidu, users can still use limited app functions if they do not sign up for an account. However, these apps still collect information during use, such as device information and location information, as per their privacy policies. WeChat and Weixin, on the other hand, mandate the creation of an account to use app services.
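The interaction pattern described above can be summarised as a simple loop: the notice is redisplayed until the user agrees, and no path lets a declining user proceed. The sketch below is a schematic illustration of this observed flow, not code from any of the apps.

```python
def privacy_gate(prompt_user) -> bool:
    """Re-display the privacy notice until the user ticks "I agree".

    prompt_user() stands in for the pop-up and returns True only when the
    user agrees; declining simply triggers the same notice again.
    """
    while not prompt_user():
        pass  # no "decline and continue" option exists
    return True

# A user who declines twice and then gives in still ends up consenting.
responses = iter([False, False, True])
assert privacy_gate(lambda: next(responses)) is True
```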

Real name registration

For all examined apps, users of the international version can register with either a cell phone number or an email address. For all domestic versions, however, a cell phone number is mandatory to sign up for services. This is a key difference between the international and domestic versions, and the main reason is that Article 24 of China’s Cybersecurity Law requires internet companies to comply with the real name registration regulation. During account registration, all apps request access to behavioural data (location) and user data (contacts). The real name registration process mandated under Chinese law differs in intent and in practice from the policies of US-based internet companies and platforms. For example, Facebook, YouTube, the now-defunct Google+, Twitter and Snapchat have different policies about whether a user has the option of remaining anonymous or creating an online persona that masks their identity to the public (DeNardis & Hackl, 2015, p. 764). The decisions made by internet companies and digital platforms can jeopardise the online safety and anonymity of minority populations and have the potential to stifle freedom of expression. In the Chinese context, however, real name registration is overseen and enforced by different levels of government for the purpose of governance and control, following the principle of “real identity on the back end and voluntary compliance on the front end”: apps, platforms, and websites must collect personally identifying information, while it is up to users to decide whether to adopt their real name as their screen name.

Account deletion

For all apps examined, users need to go through multiple steps to reach the account deletion option: WeChat 5 steps, Douyin 6 steps, TikTok 4 steps, TopBuzz 3 steps. The more steps it takes, the more complicated it is for users to de-register and delete the data and metadata generated on the app. All Chinese versions of the tested apps prompt an “account in secure state” notification during account deletion. An account is in a secure state if, as a security measure, it has not undergone any suspicious changes, such as a password change or the unlinking of a mobile phone number, within a short period before the deletion request; a secure state is a prerequisite for account removal. The domestic versions also have screening measures so that only accounts with a “clean history” can be deleted, meaning the account has not been blocked or engaged in any previous activities that violate laws and regulations. TikTok also offers an optional 30-day deactivation period before the account is deleted, while TopBuzz requires users to tick “agree” on privacy terms during account deletion and offers a re-participation option by soliciting reasons why users are deleting their accounts.
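As a summary, the deletion-path observations above can be tabulated programmatically; the snippet below simply restates the step counts reported in this section and ranks the apps by how much effort de-registration requires (a descriptive sketch, not part of the original walkthrough instrument).

```python
# Steps needed to reach the account deletion option, as recorded in the walkthrough.
deletion_steps = {"WeChat": 5, "Douyin": 6, "TikTok": 4, "TopBuzz": 3}

# Rank apps by deletion friction: more steps means de-registration is harder.
for app, steps in sorted(deletion_steps.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{app}: {steps} steps to reach account deletion")
```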

Figure 1: Walkthrough analysis

Content analysis of privacy policies and terms of service

Table 1: Cross-border regulation

Company | Regions | Privacy policy application scope | Laws and jurisdictions referred to | Specific court that legal proceedings must go through
Baidu | – | Part of larger organisation | Relevant Chinese laws and regulations | Beijing Haidian District People’s Court
TopBuzz | EU | Part of larger organisation | GDPR and EU | No
TopBuzz | Non-EU | Part of larger organisation | US, California Civil Code, Japan, Brazil | Singapore International Arbitration Center
Toutiao | – | For Toutiao | Relevant Chinese laws and regulations | Beijing Haidian District People’s Court
Douyin | – | For Douyin | Relevant Chinese laws and regulations | Beijing Haidian District People’s Court
TikTok | US | For TikTok | Yes | Unspecified
TikTok | EU | For TikTok | Yes | Unspecified
TikTok | Global | For TikTok | No | Unspecified
Weixin | – | For Weixin | Relevant Chinese laws and regulations | Shenzhen Nanshan People’s Court
WeChat | US | For WeChat | No | American Arbitration Association
WeChat | EU | For WeChat | No | The court of the user’s place of residence or domicile
WeChat | Other | For WeChat | No | Hong Kong International Arbitration Centre

We retrieved and examined the privacy policies and terms of service of all apps as of July 2019. Baidu has only one set of policies covering both domestic and international users. WeChat/Weixin, TopBuzz/Toutiao and TikTok/Douyin have designated policies for domestic and international users respectively. TikTok’s privacy policies and terms of service are the most region-specific, with three distinct documents for US, EU, and global users (excluding the US and EU). TopBuzz distinguishes EU and non-EU users, with jurisdiction-specific items for users based in the US, Brazil, and Japan in its non-EU privacy policy. Most policies and terms of service refer to the privacy laws of the jurisdictions served, but WeChat’s and TikTok’s global privacy policies are vague: they do not explicitly name the applicable laws and regulations but refer to them as “relevant laws and regulations”. Compared to the Canadian-facing versions of the same apps, the Chinese versions provide clearer and more detailed information about the specific court where disputes are to be resolved.

Table 2: Storage and transfer of user data

Company | Regions | Storage of data | Location of storage | Duration of storage | Data transfer
Baidu | – | Yes | PRC | Unspecified | Unspecified
TopBuzz | EU | Yes (browser behaviour data stored for 90 days) | Third-party servers (Amazon Web Services) in the US and Singapore | Varies according to jurisdiction | Yes
TopBuzz | Non-EU | Yes | US and Singapore | Unspecified | Yes
Toutiao | – | Yes | PRC | Unspecified | No
Douyin | – | Yes | PRC | Unspecified | Transfer with explicit consent
TikTok | US | Unspecified | Unspecified | Unspecified | Unspecified
TikTok | EU | Yes | Unspecified | Unspecified | Yes
TikTok | Global | Unspecified | Unspecified | Unspecified | Unspecified
Weixin | – | Yes | PRC | Unspecified | Unspecified
WeChat | – | Yes | Canada, Hong Kong | – | Unspecified

In terms of data storage, as shown in Table 2, most international versions of the examined apps store user data in foreign jurisdictions. For example, WeChat’s international-facing privacy policy states that the personal information it collects from users will be transferred to, stored at, or processed in Ontario, Canada and Hong Kong. The company explains explicitly why it chooses the two regions: “Ontario, Canada (which was found to have an adequate level of protection for Personal Information under Commission Decision 2002/2/EC of 20 December 2001); and Hong Kong (we rely on the European Commission’s model contracts for the transfer of personal data to third countries (i.e., the standard contractual clauses), pursuant to Decision 2001/497/EC (in the case of transfers to a controller) and Decision 2010/915/EC (in the case of transfers to a processor).” Only Baidu stores all user data in mainland China, regardless of the jurisdiction in which users reside. Baidu’s policies do not specify where, or for how long, transnational communications between users based in China and users based abroad will be stored, and they are particularly ambiguous about how long data will be stored in general. Governed by the GDPR, privacy policies serving EU users are more comprehensive than others in disclosing whether user data will be transferred.

All apps include mechanisms through which users can communicate concerns or file complaints about how the company may be retaining, processing, or disclosing their personal information. Almost all apps, with the exception of Baidu, provide an email address and a physical mailing address where users can initiate communications. TikTok provides the name of an EU representative in its EU-specific privacy policy, though the contact email provided is the same as the one mentioned in TikTok’s other international privacy policies.

Table 3: Privacy disclosure

Company | Regions | Last policy update date provided | Access to older versions | Notification of update | Complaint mechanism | Complaint venue
Baidu | – | No | No | No | Yes | Legal process through local court
TopBuzz | EU | No | Yes | Yes | Yes | No privacy officer listed
TopBuzz | Non-EU | No | No | Yes | Yes | No privacy officer listed
Toutiao | – | Yes | No | Yes | Yes | No privacy officer listed
Douyin | – | Yes | No | Yes | Yes | Email and physical mailing address
TikTok | US | Yes | No | Yes | Yes | No privacy officer listed
TikTok | EU | Yes | No | Yes | Yes | An EU representative is listed
TikTok | Global | Yes | No | Yes | Yes | Email and a mailing address
Weixin | – | Yes | No | Yes | Yes | Contact email and location of Tencent Legal Department
WeChat | – | Yes | No | Yes | Yes | Contact email of Data Protection Officer and a physical address

Baidu only mentions that any disputes should be resolved via legal process through the local court, which makes it more difficult for users, especially international users, to resolve a dispute with the company. WeChat/Weixin is another interesting case: unlike ByteDance, which distinguishes its domestic and international users by providing them with two different versions of its apps, Tencent’s overseas and domestic users use the same app. Users receive different privacy policies and terms of service based on the phone number they signed up with. In addition, the company’s privacy policy and terms of service differentiate international and domestic users not only by their place of residence but also by their nationality. Tencent’s terms of service for international WeChat users state that if the user is “(a) a user of Weixin or WeChat in the People’s Republic of China; (b) a citizen of the People’s Republic of China using Weixin or WeChat anywhere in the world; or (c) a Chinese-incorporated company using Weixin or WeChat anywhere in the world,” he or she is subject to the China-based Weixin terms of service. However, neither WeChat nor Weixin explains in these documents how the apps identify someone as a Chinese citizen. Consequently, even Weixin users residing overseas need to go through the complaint venue outlined in the Chinese version of the privacy policy rather than taking their complaint to the company’s overseas operations.

Our analysis of these apps’ data collection practices shows some general patterns in both the domestic and international versions. All apps mention the types of information they may collect, such as name, date of birth, biometrics, address, contact details and location. However, none of the apps, except WeChat for international users, offers a clear definition or examples of what counts as personally identifiable information (PII). As for disclosure of PII, all apps state that they will share necessary information with law enforcement agencies and government bodies. TikTok’s privacy policy for international users outside the US and EU appears to be the most relaxed when it comes to sharing user information with third parties or company affiliates. All the other apps surveyed state that they will request users’ consent before sharing PII with any non-government entities. TikTok’s global privacy policy, by contrast, states that it will share user data, without asking for consent separately, with “any member, subsidiary, parent, or affiliate of our corporate group”, with “law enforcement agencies, public authorities or other organizations if legally required to do so”, as well as with third parties.

Conclusion

This study shows that data and privacy protection standards vary not only across different Chinese mobile apps but also between the Chinese domestic and international versions of the same app. More globally successful China-based mobile apps have better and more comprehensive data and privacy protection standards. In line with previous findings (Liu, 2019; Fazhi Wanbao, 2018), our research shows that, compared to the other apps, Baidu has the least satisfactory data and privacy protection measures. ByteDance’s apps (TopBuzz/Toutiao and TikTok/Douyin) are more attentive to users from different geographical regions, designating jurisdiction-specific privacy policies and terms of service. In this case, a mobile app’s globalisation strategies and aspirations play an important part in the design and governance of its data and privacy protection. ByteDance is the most internationalised company compared to Baidu and Tencent. ByteDance’s experience of dealing with fines from US, Indian and Indonesian law enforcement and regulatory authorities has helped it revamp its practices overseas. For instance, TikTok updated its privacy policy after the Federal Trade Commission’s fine in February 2019 (Alexander, 2019). Faced with probing from US lawmakers and a ban by the US Navy, TikTok released its first transparency report in December 2019, and the company is set to open a “Transparency Center” in its Los Angeles office in May 2020, where external experts will oversee its operations (Pappas, 2020). Tencent, with an expanding array of overseas users, was also among the first to comply with the GDPR, updating its privacy policy to meet the GDPR’s requirements on 29 May 2018, shortly after the regulation came into force.

For China-based internet companies that eye global markets, expanding beyond China means that they must provide a compelling experience for international users and comply with laws and regulations in the jurisdictions where they operate. In this regard, nation states and the ecosystems of internet regulation they design have a powerful impact on how private companies govern their platforms. Our analysis suggests that nation-based regulation of online spaces has at times spilled beyond its territory (e.g., Tencent’s WeChat/Weixin distinguishing domestic and international users based on their nationality). However, the effects of state regulation on transnational corporations are not monolithic. They vary depending on how integrated a platform is into a certain jurisdiction, where its main user base is, and what its globalisation strategies are. For example, ByteDance’s TikTok is more responsive to international criticism and public scrutiny than the other applications in this study, potentially because of the app’s highly globalised presence and revenue streams.

Secondly, this paper highlights that, in addition to app makers, other powerful actors and parties shape an app’s data and privacy protection practices. One such actor is the mobile app store owner (e.g., Google Play and the Apple App Store). As the walkthrough analysis demonstrates, the app interface design and permission requests on Apple iOS do a better job of informing users about data access and notifying them of requests. The Android versions of the tested apps in some cases set user consent for push notifications as the default, which requires individual effort to navigate and learn how to opt out or withdraw consent. The examined mobile apps operating on the Android system are more lenient in requesting data from users than on iOS. The gatekeeping function of the mobile app platforms that host these apps and set standards for app design and privacy protection further indicates a more nuanced and layered conceptualisation of corporate power in understanding apps as situated digital objects. This also shows that, in a closely interconnected platform ecosystem, some platform companies are more powerful than others because of their infrastructural reach in hosting content and providing cloud computing and data services (van Dijck, Nieborg, & Poell, 2019). Even though Tencent, ByteDance and Baidu are powerful digital companies in China, they still rely on the Google Play store and Apple’s App Store for the domestic and global distribution of their apps, thereby subjecting themselves to the governance of these mobile app stores (see Cartwright, 2020, this issue). Another example is mini-programmes, the “sub-applications” hosted on WeChat, whose developers are subject to WeChat’s privacy policies and developer agreements. This shows that apps are always situated in, and should be studied together with, the complex mobile ecosystem and their regional context (Dieter et al., 2019). We should therefore consider the relational and layered interplay between different levels of corporate power in co-shaping the data and privacy practices of mobile apps.

As shown in the analysis, the international-facing version of a China-based mobile app provides relatively higher levels of data protection to app users in the European Union than its Chinese-facing version. This further highlights the central role of nation states and the importance of jurisdiction in the global expansion of Chinese mobile apps. As non-EU organisations, Chinese app makers are subject to the territorial scope of the GDPR (Article 3) when offering services to individuals in the EU. On the other hand, Chinese-facing apps have operationalised Chinese privacy regulations in app design and in privacy policies compliant with rules such as real name registration. Through the analysis of terms of service and privacy policies, this paper shows that China-based mobile apps are generally in compliance with laws and data protection frameworks across different jurisdictions. However, detailed explanations of data retention and storage are lacking when users are in transit: for example, when EU residents travel outside the EU, do they have the same level of privacy protection as when residing in the EU? On average, EU users of these four sets of China-based mobile apps are afforded greater transparency and control over how their data is used, stored and disclosed than users in other jurisdictions. Under China’s privacy regulation regime, which is itself full of contradictions and inconsistencies (Lee, 2018; Feng, 2019), data and privacy protection is weak for domestic Chinese users. Certain features of the apps, such as the “secure state” screening during account deletion in the domestic versions of Chinese mobile apps, also show the prioritisation of national security over the individual right to privacy as a key doctrine in China’s approach to data and privacy protection under the banner of internet sovereignty. This, however, is not unique to China, as national security and privacy protection are portrayed in many policy debates and policymaking processes as a zero-sum game (Mann, Daly, Wilson, & Suzor, 2018). The latest restrictions imposed by the Trump administration on TikTok and WeChat in the US, citing concerns over the apps’ data collection and data sharing policies (Yang & Lin, 2020), are just another example of the conundrum China-based apps face in the course of their global expansion and of the global geopolitics centred on mobile and internet technologies. To be sure, data and privacy protection is one of the biggest challenges facing China-based apps as they continue to expand overseas, and it will entail a steep learning curve and possibly a reorganisation of a company’s operation and governance structure.

References

Alexander, J. (2019, February 27). TikTok will pay $5.7 million over alleged children’s privacy law violations. The Verge. https://www.theverge.com/2019/2/27/18243312/tiktok-ftc-fine-musically-children-coppa-age-gate

Balebako, R., Marsh, A., Lin, J., Hong, J., & Cranor, L. F. (2014, February 23). The Privacy and Security Behaviors of Smartphone App Developers. Network and Distributed System Security Symposium. https://doi.org/10.14722/usec.2014.23006

BBC News. (2016, July 18). Apple iCloud: State Firm Hosts User Data in China. BBC News. https://www.bbc.com/news/technology-44870508

Byford, S. (2018, November 30). How China’s Bytedance Became the World’s Most Valuable Startup. The Verge. https://www.theverge.com/2018/11/30/18107732/bytedance-valuation-tiktok-china-startup

C.A.C. (2016, December 27). Guojia Wangluo Anquan Zhanlue. Xinhuanet. http://www.xinhuanet.com/politics/2016-12/27/c_1120196479.htm

Cartwright, M. (2020). Internationalising state power through the internet: Google, Huawei and geopolitical struggle. Internet Policy Review, 9(3). https://doi.org/10.14763/2020.3.1494

Chen, J. Y., & Qiu, J. L. (2019). Digital Utility: Datafication, Regulation, Labor, and Didi’s Platformization of Urban Transport in China. Chinese Journal of Communication, 12(3), 274–289. https://doi.org/10.1080/17544750.2019.1614964

Chen, X., Kaye, D. B., & Zeng, J. (2020). #PositiveEnergy Douyin: Constructing ‘Playful Patriotism’ in a Chinese Short-Video Application. Chinese Journal of Communication. https://doi.org/10.1080/17544750.2020.1761848

de Hert, P., & Papakonstantinou, V. (2015). The Data Protection Regime in China. [Report]. European Parliament. https://www.europarl.europa.eu/RegData/etudes/IDAN/2015/536472/IPOL_IDA(2015)536472_EN.pdf

Deibert, R., & Pauly, L. (2017). Cyber Westphalia and Beyond: Extraterritoriality and Mutual Entanglement in Cyberspace. Paper Prepared for the Annual Meeting of the International Studies Association.

DeNardis, L., & Hackl, A. M. (2015). Internet Governance by Social Media Platforms. Telecommunications Policy, 39(9), 761–770. https://doi.org/10.1016/j.telpol.2015.04.003

Dieter, M., Gerlitz, C., Helmond, A., Tkacz, N., van der Vlist, F., & Weltevrede, E. (2019). Multi-Situated App Studies: Methods and Propositions. Social Media + Society, 1–15.

van Dijck, J., Nieborg, D., & Poell, T. (2019). Reframing Platform Power. Internet Policy Review, 8(2). https://doi.org/10.14763/2019.2.1414

Federal Trade Commission. (1998). Privacy Online: A Report to Congress [Report]. Federal Trade Commission. https://www.ftc.gov/sites/default/files/documents/reports/privacy-online-report-congress/priv-23a.pdf

Federal Trade Commission. (2013). Mobile Privacy Disclosures: Building Trust Through Transparency [Staff Report]. Federal Trade Commission. https://www.ftc.gov/reports/mobile-privacy-disclosures-building-trust-through-transparency-federal-trade-commission

Federal Trade Commission. (2019, February 27). Video Social Networking App Musical.ly Agrees to Settle FTC Allegations That it Violated Children’s Privacy Law [Press release]. Federal Trade Commission. https://www.ftc.gov/news-events/press-releases/2019/02/video-social-networking-app-musically-agrees-settle-ftc

Feng, Y. (2019). The Future of China’s Personal Data Protection Law: Challenges and Prospects. Asia Pacific Law Review, 27(1), 62–82. https://doi.org/10.1080/10192557.2019.1646015

Fernback, J., & Papacharissi, Z. (2007). Online Privacy as Legal Safeguard: The Relations Among Consumer, Online Portal and Privacy Policy. New Media & Society, 9(5), 715–734. https://doi.org/10.1177/1461444807080336

Flew, T., Martin, F., & Suzor, N. (2019). Internet Regulation as Media Policy: Rethinking the Question of Digital Communication Platform Governance. Journal of Digital Media & Policy, 10(1), 33–50. https://doi.org/10.1386/jdmp.10.1.33_1

Fu, T. (2019). China’s Personal Information Protection in a Data-Driven Economy: A Privacy Policy Study of Alibaba, Baidu and Tencent. Global Media and Communication, 15(2), 195–213. https://doi.org/10.1177/1742766519846644

Fuchs, C. (2012). The Political Economy of Privacy on Facebook. Television & New Media, 13(2), 139–159. https://doi.org/10.1177/1527476411415699

Gierow, H. J. (2014). Cyber Security in China: New Political Leadership Focuses on Boosting National Security (Report No. 20; China Monitor). merics. https://merics.org/en/report/cyber-security-china-new-political-leadership-focuses-boosting-national-security

Gillespie, T. (2018a). Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions that Shape Social Media. Yale University Press.

Gillespie, T. (2018b). Regulation Of and By Platforms. In J. Burgess, A. Marwick, & T. Poell (Eds.), The SAGE Handbook of Social Media (pp. 254–278). SAGE Publications. https://doi.org/10.4135/9781473984066.n15

Goldsmith, J., & Wu, T. (2006). Who Controls the Internet? Illusions of a Borderless World. Oxford University Press.

Gorwa, R. (2019). What is platform governance? Information, Communication & Society, 22(6), 854–871. https://doi.org/10.1080/1369118X.2019.1573914

Greene, D., & Shilton, K. (2018). Platform Privacies: Governance, Collaboration, and the Different Meanings of “Privacy” in iOS and Android Development. New Media & Society, 20(4), 1640–1657. https://doi.org/10.1177/1461444817702397

Jao, N. (2018, February 8). Evernote Announces Plans to Migrate All Data in China to Tencent Cloud. Technode. https://technode.com/2018/02/08/evernote-will-migrate-data-china-tencent-cloud/

Jia, L., & Winseck, D. (2018). The Political Economy of Chinese Internet Companies: Financialization, Concentration, and Capitalization. International Communication Gazette, 80(1), 30–59. https://doi.org/10.1177/1748048517742783

Kalathil, S., & Boas, T. (2003). Open Networks, Closed Regimes: The Impact of the Internet on Authoritarian Rule. Carnegie Endowment for International Peace.

Kaye, B. V., Chen, X., & Zeng, J. (2020). The Co-evolution of Two Chinese Mobile Short Video Apps: Parallel Platformization of Douyin and TikTok. Mobile Media & Communication. https://doi.org/10.1177/2050157920952120

Knockel, J., Ruan, L., Crete-Nishihata, M., & Deibert, R. (2018). (Can’t) Picture This: An Analysis of Image Filtering on WeChat Moments [Report]. Citizen Lab. https://citizenlab.ca/2018/08/cant-picture-this-an-analysis-of-image-filtering-on-wechat-moments/

Kong, L. (2007). Online Privacy in China: A Survey on Information Practices of Chinese Websites. Chinese Journal of International Law, 6(1), 157–183. https://doi.org/10.1093/chinesejil/jml061

Lee, J.-A. (2018). Hacking into China’s Cybersecurity Law. Wake Forest Law Review, 53, 57–104. http://wakeforestlawreview.com/wp-content/uploads/2019/01/w05_Lee-crop.pdf

Light, B., Burgess, J., & Duguay, S. (2018). The Walkthrough Method: An Approach to the Study of Apps. New Media & Society, 20(3), 881–900. https://doi.org/10.1177/1461444816675438

Liu, J. (2019). China’s Data Localization. Chinese Journal of Communication, 13(1). https://doi.org/10.1080/17544750.2019.1649289

Logan, S. (2015). The Geopolitics of Tech: Baidu’s Vietnam. Internet Policy Observatory. http://globalnetpolicy.org/research/the-geopolitics-of-tech-baidus-vietnam/

Logan, S., Molloy, B., & Smith, G. (2018). Chinese Tech Abroad: Baidu in Thailand [Report]. Internet Policy Observatory. http://globalnetpolicy.org/research/chinese-tech-abroad-baidu-in-thailand/

Mann, M., Daly, A., Wilson, M., & Suzor, N. (2018). The Limits of (Digital) Constitutionalism: Exploring the Privacy-Security (Im)Balance in Australia. International Communication Gazette, 80(4), 369–384. https://doi.org/10.1177/1748048518757141

Martin, K. (2013). Transaction Costs, Privacy, and Trust: The Laudable Goals and Ultimate Failure of Notice and Choice to Respect Privacy Online. First Monday, 18(12). https://doi.org/10.5210/fm.v18i12.4838

McKune, S., & Ahmed, S. (2018). The Contestation and Shaping of Cyber Norms Through China’s Internet Sovereignty Agenda. International Journal of Communication, 12, 3835–3855. https://ijoc.org/index.php/ijoc/article/view/8540

Nissenbaum, H. (2011). A Contextual Approach to Privacy Online. Dædalus, 140(4), 32–48. https://doi.org/10.1162/DAED_a_00113

Pappas, V. (2020, March 11). TikTok to Launch Transparency Center for Moderation and Data Practices [Press release]. TikTok. https://newsroom.tiktok.com/en-us/tiktok-to-launch-transparency-center-for-moderation-and-data-practices

Plantin, J.-C., Lagoze, C., Edwards, P., & Sandvig, C. (2016). Infrastructure Studies Meet Platform Studies in the Age of Google and Facebook. New Media & Society, 20(1), 293–310. https://doi.org/10.1177/1461444816661553

Plantin, J.-C., & de Seta, G. (2019). WeChat as Infrastructure: The Techno-nationalist Shaping of Chinese Digital Platforms. Chinese Journal of Communication, 12(3). https://doi.org/10.1080/17544750.2019.1572633

Reuters. (2016, November 1). Airbnb Tells China Users Personal Data to be Stored Locally. Reuters. https://www.reuters.com/article/us-airbnb-china/airbnb-tells-china-users-personal-data-to-be-stored-locally-idUSKBN12W3V6

Reuters. (2018, January 12). China Chides Tech Firms Over Privacy Safeguards. Reuters. https://www.reuters.com/article/us-china-data-privacy/china-chides-tech-firms-over-privacy-safeguards-idUSKBN1F10F6

Ruan, L., Knockel, J., Ng, J., & Crete-Nishihata, M. (2016). One App, Two Systems: How WeChat Uses One Censorship Policy in China and Another Internationally (Research Report No. 84). Citizen Lab. https://citizenlab.ca/2016/11/wechat-china-censorship-one-app-two-systems/

Sharma, I., & Niharika, S. (2019, July 22). It Took a Ban and a Government Notice for ByteDance to Wake Up in India. Quartz India. https://qz.com/india/1671207/bytedance-to-soon-store-data-of-indian-tiktok-helo-users-locally/

State Council Information Office. (2010). The Internet in China. Information Office of the State Council of the People’s Republic of China. http://www.china.org.cn/government/whitepaper/node_7093508.htm

Stein, L. (2013). Policy and Participation on Social Media: The Cases of YouTube, Facebook and Wikipedia. Communication, Culture & Critique, 6(3), 353–371. https://doi.org/10.1111/cccr.12026

Steinberg, M., & Li, J. (2016). Introduction: Regional Platforms. Asiascape: Digital Asia, 4(3), 173–183. https://doi.org/10.1163/22142312-12340076

Fazhi Wanbao. (2018, January 6). Shouji Baidu App Qinfanle Women de Naxie Yinsi [Which aspects of our privacy does the Baidu mobile app infringe?]. 163. http://news.163.com/18/0106/17/D7G2O0T200018AOP.html

Wang, H. (2011). Protecting Privacy in China: A Research on China’s Privacy Standards and the Possibility of Establishing the Right to Privacy and the Information Privacy Protection Legislation in Modern China. Springer Science & Business Media. https://doi.org/10.1007/978-3-642-21750-0

Wang, W. Y., & Lobato, R. (2019). Chinese Video Streaming Services in the Context of Global Platform Studies. Chinese Journal of Communication, 12(3), 356–371. https://doi.org/10.1080/17544750.2019.1584119

West, S. M. (2019). Data Capitalism: Redefining the Logics of Surveillance and Privacy. Business & Society, 58(1), 20–41. https://doi.org/10.1177/0007650317718185

Xu, D., Tang, S., & Guttman, D. (2019). China’s Campaign-style Internet Finance Governance: Causes, Effects, and Lessons Learned for New Information-based Approaches to Governance. Computer Law & Security Review, 35, 3–14. https://doi.org/10.1016/j.clsr.2018.11.002

Xu, J. (2015). Evolving Legal Frameworks for Protecting the Right to Internet Privacy in China. In J. Lindsay, T. M. Cheung, & D. Reveron (Eds.), China and Cybersecurity: Espionage, Strategy, and Politics in the Digital Domain (pp. 242–259). Oxford Scholarship Online. https://doi.org/10.1093/acprof:oso/9780190201265.001.0001

Yang, J., & Lin, L. (2020). WeChat and Trump’s Executive Order: Questions and Answers. The Wall Street Journal. https://www.wsj.com/articles/wechat-and-trumps-executive-order-questions-and-answers-11596810744

Yang, W. (2018, September 15). Online Streaming Platforms Urged to Follow Copyright Law. ChinaDaily. http://usa.chinadaily.com.cn/a/201809/15/WS5b9c7e90a31033b4f4656392.html

Yang, Y. (2019, June 21). TikTok Owner ByteDance Gathers 1 Billion Monthly Active Users Across its Apps. South China Morning Post. https://www.scmp.com/tech/start-ups/article/3015478/tiktok-owner-bytedance-gathers-one-billion-monthly-active-users

Zimmeck, S., Wang, Z., Zou, L., Iyengar, R., Liu, B., Schaub, F., & Reidenberg, J. (2016, September 28). Automated Analysis of Privacy Requirements for Mobile Apps. 2016 AAAI Fall Symposium Series. http://pages.cpsc.ucalgary.ca/~joel.reardon/mobile/privacy.pdf

Appendix

Current laws, regulations and drafting measures for data and privacy protection in China

Year | Title | Government ministries | Legal effect | Main takeaway
2009 | General Principles of the Civil Law | National People's Congress | Civil law | Lays the foundation for the protection of personal rights including personal information, but privacy protection comes as an auxiliary article
2010 | Tort Liabilities Law | Standing Committee of the National People’s Congress | Civil law | –
2012 | Decision on Strengthening Online Personal Data Protection | Standing Committee of the National People’s Congress | General framework | Specifies the protection of personal electronic information or online personal information for the first time
2013 | Regulation on Credit Reporting Industry | State Council | Regulation | Draws a boundary of what kinds of personal information can and cannot be collected by the credit reporting business
2013 | Telecommunication and Internet User Personal Data Protection Regulations | Ministry of Industry and Information Technology | Department regulation | Provides industry-specific regulations on personal information protection duties
2013 | Information Security Technology Guidelines for Personal Information Protection with Public and Commercial Services Information Systems | National Information Security Standardization Technical Committee; China Software Testing Center | Voluntary national standard | Specifies what “personal general information” (个人一般信息) and “personal sensitive information” (个人敏感信息) entail; defines the concepts of “tacit consent” (默许同意) and “expressed consent” (明示同意) for the first time
2014 | Provisions of the Supreme People's Court on Several Issues concerning the Application of Law in the Trial of Cases involving Civil Disputes over Infringements upon Personal Rights and Interests through Information Networks | Supreme People's Court | General framework | Defines what is included in the protection of "personal information", with a specific focus on regulating online searches of personal information and online trolls
2015 | Criminal Law (9th Amendment) | Standing Committee of the National People’s Congress | Criminal law | Criminalises the sale of any citizen's personal information in violation of relevant provisions; criminalises network service providers' failure to fulfil network security management duties
2016 | Administrative Rules on Information Services via Mobile Internet Applications | Cyberspace Administration of China | Administrative rules | Reiterates app stores' and internet app providers' responsibilities to comply with the real-name verification system and with content regulations regarding national security and public order; mentions data collection principles (i.e., legal, justifiable, necessary, expressed consent)
2017 | Cybersecurity Law | Standing Committee of the National People’s Congress | Law | Requires data localisation; provides a definition of "personal information"; defines data collection principles; currently the most authoritative law protecting personal information
2017 | Interpretation of the Supreme People's Court and the Supreme People's Procuratorate on Several Issues concerning the Application of Law in the Handling of Criminal Cases of Infringing on Citizens' Personal Information | Supreme People's Court | General framework | Defines "citizen personal information", what activities equate to "providing citizen personal information", and the legal consequences of illegally providing personal information
2017 | Information Security Technology: Guide for De-Identifying Personal Information | Standardization Administration of China | Drafting | Provides a guideline on the de-identification of personal information
2018 | Information Security Technology: Personal Information Security Specification | Standardization Administration of China | Voluntary national standard (currently under revision) | Lays out granular guidelines for consent and for how personal data should be collected, used, and shared
2018 | E-Commerce Law | Standing Committee of the National People’s Congress | Law | Provides generally-worded personal information protection rules for e-commerce vendors and platforms
2019 | Measures for Data Security Management | Cyberspace Administration of China | Drafting | Proposes new requirements with a focus on the protection of "important data", defined as "data that, if leaked, may directly affect China’s national security, economic security, social stability, or public health and security"
2019 | Information Security Technology: Basic Specification for Collecting Personal Information in Mobile Internet Applications | Standardization Administration of China | Drafting | Provides guidelines on minimal information collection for an extensive list of applications ranging from navigation services to input software
2019 | Measures for Determining Illegal Information Collection by Apps | – | Drafting stage | –

 

Geopolitics, jurisdiction and surveillance


Papers in this special issue

Geopolitics, jurisdiction and surveillance
Monique Mann, Deakin University
Angela Daly, University of Strathclyde

Mapping power and jurisdiction on the internet through the lens of government-led surveillance
Oskar J. Gstrein, University of Groningen

Regulatory arbitrage and transnational surveillance: Australia’s extraterritorial assistance to access encrypted communications
Monique Mann, Deakin University
Angela Daly, University of Strathclyde
Adam Molnar, University of Waterloo

Internationalising state power through the internet: Google, Huawei and geopolitical struggle
Madison Cartwright, University of Sydney

Public and private just wars: distributed cyber deterrence based on Vitoria and Grotius
Johannes Thumfart, Vrije Universiteit Brussel

Going global: Comparing Chinese mobile applications’ data and user privacy governance at home and abroad
Lianrui Jia, University of Toronto
Lotus Ruan, University of Toronto

Transnational collective actions for cross-border data protection violations
Federica Casarosa, European University Institute

The legal geographies of extradition and sovereign power
Sally Kennedy, Deakin University
Ian Warren, Deakin University

Anchoring the need to revise cross-border access to e-evidence
Sergi Vazquez Maymir, Vrije Universiteit Brussel

Geopolitics, jurisdiction and surveillance

Introduction

With this special issue we offer critical commentary and analysis of the geopolitics of data, transnational surveillance and jurisdiction, and reflect upon the question of whether and how individual rights can be protected in an era of ubiquitous transnational surveillance conducted by private companies and governments alike. The internet presents a number of challenges, and opportunities, for exercising power and regulating extraterritorially to the sovereign nation state. These practices are shaped and influenced by geopolitical relations between states. Certainly, the trans-jurisdictional nature of the internet means that the legal geographies of the contemporary digital world require rethinking, especially in light of calls for a more sophisticated and nuanced approach to understanding the sovereignty to govern, and to protecting individual rights in the electronic age (Johnson & Post, 1996; Goldsmith & Wu, 2006; Brenner, 2009; Hildebrandt, 2013; Svantesson, 2013; 2014; 2017; DeNardis, 2014). These issues raise a host of additional contemporary and historical questions about attempts by the US to exert power over extraterritorial conduct in various fields including crime, intellectual property, surveillance and national security (see e.g., Bauman et al., 2014; Boister, 2015; Schiller, 2011). Yet dynamics are shifting with the emergence of China as a new technological superpower and with the regulatory efforts of the European Union (for example via the General Data Protection Regulation). The emergence of large transnational corporations providing critical virtual and physical infrastructure adds private governance to this equation, which offers further new dimensions to the rule of law and to self- or co-regulation (see for example Goldsmith & Wu, 2006; DeNardis & Hackl, 2015; Brown & Marsden, 2013; Daly, 2016).

The idea for this special issue emerged from a workshop that we co-convened in 2016 in which we sought to explore a range of questions: the impact of domestic and international cybercrime, data protection and intellectual property laws on sovereignty and extraterritoriality; the geopolitical impacts of domestic and international surveillance and cybercrime laws such as the Council of Europe’s Convention on Cybercrime (Budapest Convention), the recent United States Clarifying Lawful Overseas Use of Data (CLOUD) Act and other lawful access regimes including European Union e-Evidence proposals; the application of due process requirements in the contemporary policing of digital spaces; the objectives of justice in the study of private governance in online environments; and the implications of these transnational developments for current and future policy and regulation of online activities and spaces.

Since 2016, we have witnessed striking developments in the geopolitical and geoeconomic relationships between states, global technology companies, their transnational surveillance practices, and corresponding governance frameworks. In particular, the rise of China and the globalisation of its internet industry is a major development in this time, along with the Trump presidency in the US and the ensuing trade war (Daly, in press). Just in the weeks prior to the publication of this special issue, there was a significant escalation of tensions between the US and China played out via the restriction of social media companies’ access to the US market. On 6 August 2020, Donald Trump issued executive orders banning transactions subject to US jurisdiction with ByteDance (TikTok’s parent company) and Tencent (WeChat’s parent company), stating that “the spread in the United States of mobile applications developed and owned by companies in the People’s Republic of China (China) continues to threaten the national security, foreign policy, and economy of the United States”. Surveillance of, and the sharing of, US citizens’ data with the Chinese Communist Party, the protection of intellectual property from corporate espionage, and Chinese censorship and disinformation were cited as justifications supporting the purge. Subsequently, Trump issued a further executive order requiring ByteDance to sell off all of TikTok’s US-based assets. These types of geopolitical struggles are examined further in Cartwright’s timely contribution to this special issue on ‘Internationalising state power through the internet’ (Cartwright, 2020).

Further to the recent US-Chinese tensions, in the month prior to publication of this collection, the Court of Justice of the EU (CJEU) handed down its landmark decision in Data Protection Commissioner v Facebook Ireland Limited, Maximillian Schrems (Schrems II) (2020), invalidating the EU-US Privacy Shield (following Schrems I, which invalidated the predecessor EU-US Safe Harbour agreement in 2015), with significant ramifications for the transfer of EU citizens’ data to the US as a consequence of the US’ extensive state surveillance and insufficient safeguards protecting privacy. The exact impacts that this decision will have for transborder data transfers are yet to be fully understood, but will undoubtedly be significant. At the same time, the US is negotiating executive agreements under its Clarifying Lawful Overseas Use of Data (CLOUD) Act that allow authorised states to access the content of communications held by US technology companies without prior judicial authorisation, and allow the US to compel US technology companies to provide access to data stored extraterritorially to the US jurisdiction (as per the initial Microsoft case rendered moot by the introduction of the CLOUD Act, see further Warren, 2015; Svantesson, 2017; Mann & Warren, 2018; Mulligan, 2018).

This all comes at a time when nations, and indeed regions, are asserting their “digital sovereignty” through data localisation initiatives that limit transborder data flows, as witnessed recently with France and Germany enacting their plans for European digital sovereignty (ANSSI, n.d.) and the corresponding launch of the GAIA-X cloud computing project (GAIA-X, n.d.) that creates a European data infrastructure independent of both China and the US.

China has also started asserting itself legally beyond its territorial borders. Hong Kong’s controversial new National Security Law includes provisions which criminalise secession, subversion, terrorism, and collusion with foreign powers and which, via Art 38, purport to apply to non-HK permanent residents committing these offences even if they are based in other countries. In addition, Art 43 enables the Hong Kong Police Force, when investigating national security crimes, to direct service providers to remove content and provide other assistance. How these provisions will be applied to Hong Kong’s transnational internet (which to date has included both Chinese and Western internet companies and services, including some that are banned in mainland China) remains unclear; some US-based companies such as Facebook and Twitter have already announced their suspension of compliance with data requests from the Hong Kong authorities (Liao, 2020).

Taken together, these most recent developments highlight the significance of the geopolitical and geoeconomic dimensions of data, private-public surveillance interests, and associated impacts for human rights and international trade. They also demonstrate that extraterritoriality is no longer just a feature of US internet law and policy and equally that national sovereignty is no longer just a feature of Chinese internet law and policy.

These dimensions become more relevant with the concurrent reinforcement of physical borders amid a new global crisis brought about by the COVID-19 pandemic, which also has significant implications for cross-border information sharing and data storage (e.g., immunity passports and contact tracing applications with data stored in the cloud; see Taylor et al., 2020). Certainly, expanded surveillance and information collection by states and private companies have proven to be central to the global response to the bio(in)security created by the pandemic, with significant extraterritorial implications (Privacy International, 2020). For example, one of the main criticisms levelled at the Australian COVIDSafe contact tracing application was that Amazon was contracted to host the contact tracing information on Amazon Web Services (AWS), with the potential for the US to access the data via the US technology company. In response, and like Germany and France, the Australian government is considering the development of a “sovereign cloud” for the storage of Australia’s data (Besser & Welch, 2020; Sadler, 2020). Nevertheless, the pandemic response has also demonstrated the transnational corporate power of Google and Apple as key gatekeepers to the operation of government-backed COVID contact tracing apps, despite the questionable or unproven effectiveness of these apps in automating contact tracing (Braithwaite et al., 2020). Google and Apple have even become the source of apps that offer improved data protection compared to the in-house attempts of various European governments to create their own apps (Daly, in press), yet they simultaneously cement their infrastructural power (Veale, 2020).

Main contributions to this special issue

With these brief introductory remarks in mind, we turn to an overview of the papers and their main contributions to this issue. We open the collection with Oskar J. Gstrein’s contribution ‘Mapping power and jurisdiction on the internet through the lens of government-led surveillance’ (Gstrein, 2020), which examines governance frameworks for the regulation of government-driven surveillance so as to avoid the ‘balkanisation’ of the internet. Two proposals are analysed, namely the ‘Working Draft Legal Instrument on Government-led Surveillance and Privacy’, presented to the United Nations Human Rights Council, and the proposal for a ‘Digital Geneva Convention’ (DGC) by Microsoft’s Brad Smith. The article questions whether it is possible to create an internet based on human rights principles and values. Interlinked with issues of human rights online, our own contribution (with Adam Molnar) on ‘Regulatory arbitrage and transnational surveillance’ (Mann, Daly, & Molnar, 2020) examines developments regarding encryption law and policy within ‘Five Eyes’ (FVEY) countries, specifically the Telecommunications and Other Legislation Amendment (Assistance and Access) Act 2018 (Cth) in Australia. We argue that this new law is significant both domestically and internationally, given that its extraterritorial reach enables the development of new ways for Australian law enforcement and security agencies to access encrypted telecommunications via transnational providers, and allows Australian authorities to assist foreign counterparts in both enforcing and potentially circumventing their domestic laws. We show that deficiencies in Australian human rights protections are the ‘weak link’ in the FVEY alliance, which creates the possibility for regulatory arbitrage to exploit these new surveillance powers to undermine encryption, at a global scale, via Australia.

Madison Cartwright’s article ‘Internationalising state power through the internet: Google, Huawei and geopolitical struggle’ (Cartwright, 2020) shows how the US has exploited the international market dominance of US-based internet companies to internationalise its own state power through surveillance programmes. Using Huawei as a case study, Cartwright also examines how Chinese companies threaten the dominance of US companies as well as the geopolitical power of the US state, and how, in response, the US has sought to shrink the ‘geo-economic space’ available to Huawei by using its firms, such as Google, to disrupt Huawei’s supply chains. The analysis demonstrates how states may use internet companies to exercise power and authority beyond their borders. The extraterritorial exercise of power by non-state actors is explored further in ‘Public and private just wars: distributed cyber deterrence based on Vitoria and Grotius’ (Thumfart, 2020). In Johannes Thumfart’s contribution, the role of non-state actors in cyber attacks is considered from the perspective of just war theory. He argues that private and public cyber deterrence capacities form a system of distributed deterrence that is preferable to state-based deterrence alone.

In ‘Going global: Comparing Chinese mobile applications’ data and user privacy governance at home and abroad’, Lianrui Jia and Lotus Ruan argue that differential levels of privacy and data protection demonstrate the importance of jurisdictional influences in the regulatory environment, which in turn shape the global expansion of Chinese internet companies (Jia & Ruan, 2020). They examine the governance of Chinese mobile applications at a global scale, and their comparative analysis of international-facing versus Chinese-facing versions of Chinese mobile apps demonstrates that greater levels of data protection are offered to users located outside China than to those within. Continuing with the theme of transnational data protection, in ‘Transnational collective actions for cross-border data protection violations’ Federica Casarosa examines alternative forms of enforcement, specifically transnational collective actions in the European Union, as an avenue to empower data subjects and achieve remedies for data protection infringements (Casarosa, 2020). Casarosa uses the Cambridge Analytica-Facebook scandal to highlight the multijurisdictional and cross-border nature of data protection violations, examines some of the limits of existing redress mechanisms under the EU’s General Data Protection Regulation (GDPR), and argues for greater scope for transnational collective actions in which associations or non-governmental organisations represent claimants from various jurisdictions.

Cross-border access to data is a central concern for transnational online policing. In the contribution on ‘The legal geographies of extradition and sovereign power’, Sally Kennedy and Ian Warren raise a series of questions about access, use and exchange of digital evidence under mutual legal assistance treaty (MLAT) requirements (Kennedy & Warren, 2020). Via a case study concerning a Canadian citizen facing extradition to the US, they show how US sovereignty and criminal enforcement powers are advanced, with implications for global online criminal investigations. Their analysis shows a need for clearer transnational data exchange protocols or the possibility of shifting prosecution forums to the source of online harm, arguing that this would promote fairness for those accused of online crimes with a cross-jurisdictional aspect. Matters of e-evidence are further explored in ‘Anchoring the need to revise cross-border access to e-evidence’, in which Sergi Vazquez Maymir examines the European Commission’s e-evidence package, including the ‘Proposal for a Regulation on European Production and Preservation Orders’ and the associated impact assessment. He critically analyses the arguments and evidence supporting the EPO regulation and the policy shift away from Mutual Legal Assistance to direct cooperation. Vazquez Maymir argues that the problems associated with cross-border access to e-evidence are framed in terms of technical and efficiency considerations, and that, in doing so, the political and economic motivations are lost.

Conclusion

Utilising, and in some cases exploiting, information communication technology to exert private and public power across multiple jurisdictions undoubtedly creates new challenges for traditional forms of regulatory governance and the protection of human rights. Each of the papers in this collection raises and speaks to critical questions about the type of internet that we want (free, open, unified and decentralised?) and the role that states and companies (should) play in creating it. The papers demonstrate the significance of the internet as a forum for geopolitical struggle, and of the weaponisation of jurisdiction, especially with extraterritorial reach, as a means for states to extend their power beyond their own borders, both directly and via transnational companies.

While the US, as the birthplace of the internet and the de facto international hegemon of the 1990s and 2000s, has been the focus for private and public extensions of political and economic power via the internet, the increasing multipolarity of the world is reshaping the relationship between jurisdiction and power online, as this collection’s contributions show. The EU has been gaining prominence as a ‘regulatory superpower’, especially since the introduction of the GDPR, and the emergence of China as a global internet player is now also apparent through the globalisation of its internet services and the extraterritorial reach of the new Hong Kong National Security Law. Increasing attention ought to be paid to such developments beyond the US and EU, particularly in the BRICS countries, and to how they interact with, and impact upon, global internet governance and the internet law and policy of the West.

Acknowledgements

Mann received funding as part of her Vice-Chancellor’s Research Fellowship in Technology and Regulation, and from the Intellectual Property and Innovation Law (IPIL) Programme, at Queensland University of Technology. This supported the original workshop, copy-editing and editorial assistance.

Angela Daly would like to thank University of Strathclyde Scholarly Publications and Research Data/Open Access@Strathclyde, and in particular Pablo de Castro, for making a financial contribution to support this special issue being made available on an open access basis. She would also like to thank the Queensland University of Technology IPIL Programme for financially supporting the original workshop.

We would like to thank Dr Kayleigh Murphy for her excellent editorial assistance. We would especially like to acknowledge and thank Frédéric Dubois and the entire Internet Policy Review team for their enthusiasm and support in publishing this collection. We thank the participants at the workshop we held at QUT in 2016, and the international peer-reviewers that contributed their expertise and constructive comments on the papers (including ones that did not make it into the final collection): Songyin Bo, Balázs Bodó, Evelien Brouwer, Lee Bygrave, Jonathan Clough, Robert Currie, Jake Goldenfein, Samuli Haataja, Blayne Haggart, Danielle Ireland-Piper, Tamir Israel, Martin Kretschmer, Joanna Kulesza, Robert Merkel, Adam Molnar, Gavin Robinson, Stephen Scheel, James Sheptycki, Nic Suzor, Dan Svantesson, Peter Swire, Johannes Thumfart, Natasha Tusikov and Janis Wong.

References

A.N.S.S.I. (n.d.). The European Digital Sovereignty—A Common Objective for France and Germany. https://www.ssi.gouv.fr/en/actualite/the-european-digital-sovereignty-a-common-objective-for-france-and-germany/

Bauman, Z., Bigo, D., Esteves, P., Guild, E., Jabri, V., Lyon, D., & Walker, R. B. J. (2014). After Snowden: Rethinking the impact of surveillance. International Political Sociology, 8(2), 121–144. https://doi.org/10.1111/ips.12048

Besser, L., & Welch, D. (2020, April 23). Australia’s coronavirus tracing app’s data storage contract goes offshore to Amazon. ABC News. https://www.abc.net.au/news/2020-04-24/amazon-to-provide-cloud-services-for-coronavirus-tracing-app/12176682

Boister, N. (2015). Further reflections on the concept of transnational criminal law. Transnational Legal Theory, 6(1), 9–30. https://doi.org/10.1080/20414005.2015.1042232

Braithwaite, I., Callender, T., Bullock, M., & Aldridge, R. W. (2020). Automated and partly automated contact tracing: A systematic review to inform the control of COVID-19. The Lancet Digital Health. https://doi.org/10.1016/s2589-7500(20)30184-9

Brenner, S. W. (2009). Cyber Threats: The emerging fault lines of the nation state. Oxford University Press. https://doi.org/10.1093/acprof:oso/9780195385014.001.0001

Brown, I., & Marsden, C. T. (2013). Good governance and better regulation in the information age. MIT Press.

Cartwright, M. (2020). Internationalising state power through the internet: Google, Huawei and geopolitical struggle. Internet Policy Review, 9(3). https://doi.org/10.14763/2020.3.1494

Casarosa, F. (2020). Transnational collective actions for cross-border data protection violations. Internet Policy Review, 9(3). https://doi.org/10.14763/2020.3.1498

Daly, A. (In press). Neo-Liberal Business-As-Usual or Post-Surveillance Capitalism With European Characteristics? The EU’s General Data Protection Regulation in a Multi-Polar Internet. In R. Hoyng & G. P. L. Chong (Eds.), Communication Innovation and Infrastructure: A Critique of the New in a Multipolar World. Michigan State University Press.

Daly, A. (2016). Private Power, Online Information Flows and EU Law: Mind the Gap. Hart.

Data Protection Commissioner v Facebook Ireland Limited, Maximillian Schrems (C‑311/18), (The Court of Justice of the European Union (Grand Chamber) 2020). http://curia.europa.eu/juris/document/document.jsf?text=&docid=228677&pageIndex=0&doclang=EN&mode=lst&dir=&occ=first&part=1&cid=9745404

DeNardis, L. (2014). The global war for internet governance. Yale University Press. https://doi.org/10.12987/yale/9780300181357.001.0001

DeNardis, L., & Hackl, A. M. (2015). Internet governance by social media platforms. Telecommunication Policy, 39, 761–770. https://doi.org/10.1016/j.telpol.2015.04.003

Executive Order on Addressing the Threat Posed by TikTok. (2020). The White House. https://www.whitehouse.gov/presidential-actions/executive-order-addressing-threat-posed-tiktok/

GAIA-X. (n.d.). GAIA-X: A Federated Data Infrastructure for Europe. https://www.data-infrastructure.eu/GAIAX/Navigation/EN/Home/home.html

Goldsmith, J., & Wu, T. (2006). Who controls the internet? Illusions of a borderless world. Oxford University Press.

Gstrein, O. (2020). Mapping power and jurisdiction on the internet through the lens of government-led surveillance. Internet Policy Review, 9(3). https://doi.org/10.14763/2020.3.1497

Hildebrandt, M. (2013). Extraterritorial jurisdiction to enforce in cyberspace?: Bodin, Schmitt, Grotius in cyberspace. University of Toronto Law Journal, 63(2), 196–224. https://doi.org/10.3138/utlj.1119

Johnson, D., & Post, D. (1996). Law and borders: The rise of law in cyberspace. Stanford Law Review, 48(5), 1367–1402. https://doi.org/10.2307/1229390

Kennedy, S., & Warren, I. (2020). The legal geographies of extradition and sovereign power. Internet Policy Review, 9(3). https://doi.org/10.14763/2020.3.1496

Liao, R. (2020, July 8). The tech industry comes to grips with Hong Kong’s national security law. TechCrunch. https://techcrunch.com/2020/07/08/hong-kong-national-security-law-impact-on-tech/

Mann, M., Daly, A., & Molnar, A. (2020). Regulatory arbitrage and transnational surveillance: Australia’s extraterritorial assistance to access encrypted communications. Internet Policy Review, 9(3). https://doi.org/10.14763/2020.3.1499

Mann, M., & Warren, I. (2018). The digital and legal divide: Silk Road, transnational online policing and southern criminology. In K. Carrington, R. Hogg, J. Scott, & M. Sozzo (Eds.), The Palgrave handbook of criminology and the global south (pp. 245–260). Palgrave MacMillan. https://doi.org/10.1007/978-3-319-65021-0_13

Mulligan, S. P. (2018). Cross-Border Data Sharing Under the CLOUD Act (No. 7-5700 R45173; CRS Report). Congressional Research Service. https://fas.org/sgp/crs/misc/R45173.pdf

Order Regarding the Acquisition of Musical.ly by ByteDance Ltd. (2020). The White House. https://www.whitehouse.gov/presidential-actions/order-regarding-acquisition-musical-ly-bytedance-ltd/

Privacy International. (2020). Tracking the Global Response to COVID-19. https://privacyinternational.org/examples/tracking-global-response-covid-19

Sadler, D. (2020, July 7). Government finally backs sovereign cloud capability. InnovationAus. https://www.innovationaus.com/govt-finally-backs-sovereign-cloud-capability/

Schiller, D. (2011). Special commentary: Geopolitical-economic conflict and network infrastructures. Chinese Journal of Communication, 4(1), 90–107. https://doi.org/10.1080/17544750.2011.544085

Svantesson, D. (2013). A ‘layered approach’ to the extraterritoriality of data privacy laws. International Data Privacy Law, 3(4), 278–286. https://doi.org/10.1093/idpl/ipt027

Svantesson, D. (2014). Sovereignty in international law – how the internet (maybe) changed everything, but not for long. Masaryk University Journal of Law and Technology, 8(1), 137–155. https://journals.muni.cz/mujlt/article/view/2651

Svantesson, D. J. B. (2017). Solving the internet jurisdiction puzzle. Oxford University Press. https://doi.org/10.1093/oso/9780198795674.001.0001

Taylor, L., Sharma, G., Martin, A., & Jameson, S. (Eds.). (2020). Data Justice and COVID-19: Global Perspectives. Meatspace Press. https://shop.meatspacepress.com/product/data-justice-and-covid-19-global-perspectives

Thumfart, J. (2020). Public and private just wars: Distributed cyber deterrence based on Vitoria and Grotius. Internet Policy Review, 9(3). https://doi.org/10.14763/2020.3.1500

Vazquez Maymir, S. (2020). Anchoring the need to revise cross-border access to e-evidence. Internet Policy Review, 9(3). https://doi.org/10.14763/2020.3.1495

Veale, M. (2020, July 1). Privacy is not the problem with the Apple-Google contact-tracing toolkit. The Guardian. https://www.theguardian.com/commentisfree/2020/jul/01/apple-google-contact-tracing-app-tech-giant-digital-rights

Warren, I. (2015). Surveillance, criminal law and sovereignty. Surveillance & Society, 13(2), 300–305. https://doi.org/10.24908/ss.v13i2.5679

Footnotes

1. Cth stands for Commonwealth, which means “federal” legislation, as distinct from state-level legislation.

Explanations of news personalisation across countries and media types


Introduction

Today, newsreaders worldwide increasingly consume news online, and this leads to profound changes in how the traditional media produces and disseminates news content (Mitchelstein & Boczkowski, 2010). The shift to digital distribution, together with the growing availability of data about audiences (including individual reading habits), enables new possibilities for making content selection and delivery more individualised for each reader. The options for personalised news distribution are many; they vary from customisable subscriptions to specific topics/authors (user-based personalisation) to individually tailored news suggestions generated via algorithmic news recommenders (ANRs) 1 (system-driven personalisation). Unlike user-based personalisation, which is grounded in explicit user decisions (e.g., choosing a specific subscription mode or following a certain topic), system-driven personalisation relies on implicit data about user activity (e.g., what news stories are read or how much time is spent on a specific page) that are utilised to suggest content that the system views as interesting or relevant to the user.
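To make the distinction concrete, the following is a minimal sketch of how system-driven personalisation can operate on implicit feedback alone: the time a reader spends on opened articles is aggregated into a per-topic interest score, and unread items are ranked by that score. The users, articles and figures are invented for illustration and do not describe any particular outlet's recommender.

```python
# Minimal sketch of system-driven personalisation from implicit feedback only.
# All identifiers and values below are hypothetical.
from collections import defaultdict

# Implicit-feedback log: (user, article, seconds spent reading).
reading_log = [
    ("u1", "politics/election-results", 180),
    ("u1", "economy/interest-rates", 40),
    ("u1", "politics/coalition-talks", 150),
    ("u2", "sports/league-final", 200),
]

# Catalogue mapping each article to a single topic label.
catalogue = {
    "politics/election-results": "politics",
    "politics/coalition-talks": "politics",
    "economy/interest-rates": "economy",
    "sports/league-final": "sports",
    "politics/budget-vote": "politics",
    "economy/inflation-update": "economy",
}

def recommend(user, k=3):
    """Rank unread articles by the user's inferred interest in their topics."""
    read = {a for u, a, _ in reading_log if u == user}
    topic_interest = defaultdict(float)
    for u, article, seconds in reading_log:
        if u == user:
            topic_interest[catalogue[article]] += seconds  # dwell time as a proxy for interest
    candidates = [a for a in catalogue if a not in read]
    return sorted(candidates, key=lambda a: topic_interest[catalogue[a]], reverse=True)[:k]

print(recommend("u1"))  # politics and economy items rank above the sports story
```

The point of the sketch is that no explicit choice by the reader is required: the ranking is derived entirely from behaviour the system observes.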

The use of news personalisation, in particular in its system-driven form, is viewed as an important strategy for news outlets (Newman, 2018). The individualised news distribution allows news organisations to be responsive to consumers’ information needs while ensuring traffic, consumption, and revenue through user targeting and profiling (Karimi et al., 2018). However, the technical implementation of news personalisation systems, especially the ones relying on ANRs that automatically draw insights from personal user data to generate individually tailored content recommendations, often remains obscure to news consumers and practitioners alike (Diakopoulos & Koliska, 2017).

With regard to personalised systems of news delivery, this lack of transparency is not a trivial issue, considering the possible impact of news personalisation on individual consumers and society at large. Some scholars (Penney, 2017; Stoycheff, 2016) argue that personalisation can increase anxiety (or “chilling effects”) among news consumers who feel that their online behaviour is being monitored. Personalisation has also been argued to threaten the democratic role of the media by impacting the provision of non-discriminatory access to information and the readers’ right to receive information (Eskens et al., 2017; Zuiderveen et al., 2016). Filter bubbles (Pariser, 2011), which limit the variety of information that individuals receive from the mass media and can potentially polarise society, have been especially prominent in this discussion. Although empirical research (Dubois & Blank, 2018; Möller et al., 2016, 2018) has so far found little evidence supporting the existence of filter bubbles, the possibility that personalised information distribution can lead to information inequalities or amplify existing biases and thus undermine the integral functions of the media in democratic societies cannot be excluded (Helberger et al., 2019). Understanding the workings of personalisation systems – in particular the ones based on ANRs – is therefore of crucial importance, and the news media is responsible for explicating whether and how it uses personalisation technologies. This is, however, easier said than done in the case of system-driven personalisation, as explaining the workings of algorithmic systems is notoriously difficult (Pasquale, 2015).

In this paper, we ask how these challenges are tackled by contemporary mass media by exploring how the presentation of news personalisation varies between the digital outlets of the quality (or broadsheet) and popular (or tabloid) press in different countries. Similar to Hanusch (2013), we define the difference between the two types in terms of “content and form rather than publication size” (p. 499). The tabloid press tends to produce more sensationalist – or even scandalous – content, which often promotes iconoclastic views (Bastos, 2016), and combines this with emotional appeals (Örnebring & Jönsson, 2004). While doing so, it often aims to entertain, rather than inform or educate, the audience; this leads to a decrease in journalistic standards, which is a common source of criticism of the tabloid press (Chadwick et al., 2018; Esser, 1999). In contrast, the broadsheet press relies on more in-depth reporting and prioritises “hard news coverage, fact-checking, and research based on a timeline in which the story unfolds” (Bastos, 2016, p. 218); this is viewed as an integral condition for fulfilling the democratic functions of the media (Berry, 2009).

A number of studies have investigated the differences in the adoption of digital innovations by quality and popular media (see, for instance, Jönsson & Örnebring, 2011; Karlsson & Clerwall, 2012; Karlsson & Clerwall, 2013); however, to our knowledge, none has looked at the differences in using and communicating the use of personalisation between broadsheets and tabloids, especially from a comparative cross-country perspective. While the question of whether the type of outlet (i.e., popular or quality) influences the adoption of technological innovation remains an open one, we agree with Jönsson and Örnebring (2011), who argue that because of their more popular (or sometimes populist) nature, tabloids seem to be more inclined to adopt new technologies, including the ones related to the ways in which the readers interact with the content. Similarly, despite the current lack of comparative research on the adoption of personalisation systems across different countries, 2 the existing studies on media innovation and its adoption suggest that these processes develop differently for specific media systems and media markets (Hanusch et al., 2019).

The above-mentioned factors motivated our decision to use a comparative approach to investigate how personalisation systems are adopted by different types of media outlets in different media systems. Using a sample of 12 newspapers from Brazil, the Netherlands, and Russia, we qualitatively examine which personalisation strategies are used and how they are communicated to their audiences. While doing so, we look at both user-driven and system-driven personalisation from the point of view of the user to determine whether the popular media outlets present the use of personalisation differently from the quality ones. As part of this comparison, we also look at the differences in communicating the use of personalisation between outlets coming from different media systems to investigate the degree to which such communication can be influenced by different contextual factors, such as the countries’ media accountability infrastructures.

To answer our main research question – that is, how the use of news personalisation is communicated by quality and popular media in different media systems – we start by identifying the news personalisation practices that are used by the news media. First, we examine the front-end features of their digital outlets to determine which (if any) forms of personalisation can be observed from a user perspective. Second, we explore how these outlets communicate their personalisation practices to users: more specifically, we focus on the presentation of personalisation through formal privacy policy documents. While doing so, we investigate how transparent and intelligible these communication procedures are and whether there are meaningful differences between outlets, depending on their types and the cultural contexts in which they operate.

Theoretical background

In recent decades, the news landscape has changed drastically due to the rise of digital technologies (Macnamara, 2010; Meyer, 2009; Van der Haak et al., 2012). A concomitant aspect of this computational turn in journalism (Coddington, 2015) is the possibility for news organisations to track what news people consume and how. Many newsrooms today use such audience metrics to inform various kinds of editorial decision-making and adapt their products if necessary (Anderson, 2011; Lee & Tandoc, 2017; Petre, 2015), but they are also used as input data for personalising news delivery. These personalisation technologies enable news organisations to customise their contents to the (assumed) interests of readers, and they are seen as one of the most promising innovations in the news industry (Newman, 2018).

The use of news personalisation, in particular system-driven personalisation based on ANRs, raises a variety of societal and academic concerns. The first is privacy: news organisations can now actively track people’s online reading behaviour, but what do they actually monitor, and for what purposes? ANRs need user data to do their work, so there is always a trade-off between privacy and personalisation (Li & Unger, 2012); however, it is often unclear what data news organisations collect to personalise news delivery and whether they “enrich” these with user data collected by third parties or even share these data with them. This form of surveillance, even if deployed for arguably benign goals, may cause so-called chilling effects – a form of self-censorship in the face of coercive threats (Penney, 2017; Stoycheff, 2016).

While the collection of personal data for personalising news content delivery does not necessarily trigger the same concerns for the users as government surveillance does, there is a possibility that corporate profiling also leads to the same effects, although the current lack of empirical research does not allow this assumption to be proved or disproved (Büchi et al., 2019). However, considering that personalised commercial offers are known to increase privacy concerns as participants become aware of their data being collected and used (Aguirre et al., 2016), we suggest that the deployment of personalisation by news media can result in the chilling effects caused by the newsreaders’ awareness that the outlet knows or is even predicting their information preferences. People may think twice about what they read, which limits their right to information (Balkin, 2009; Eskens et al., 2017). Moreover, what rights people have regarding the protection of their own personal data is in many contexts rather unclear. While the adoption of the General Data Protection Regulation (GDPR) in 2018 provided the European Union (EU) with a well-defined legal framework that sets a high bar for (algorithmic) transparency, the way in which its open norms apply to specific contexts, such as news personalisation, is far from straightforward and can even differ between EU member states (Erdos, 2016; Eskens, 2019). It therefore remains to be seen what obligations, such as the GDPR’s requirement to “provide meaningful information about the logic involved [in profiling]” (art. 13–15), mean for online newsreaders.

A second concern relates to the role of the media in democratic societies and the fear of increasing societal polarisation. The news media plays an important role here as it (should) create(s) collective realities and form(s) arenas for public debate, wherein a variety of sources, voices, and perspectives can be discussed (Hampton, 2010; Muhlmann, 2010; Starr, 2005). The diversity of media outlets and their contents is seen as a key requirement for performing this democratic role (Hardy, 2014; Helberger, 2011; Karppinen, 2013). It has been argued that recommender systems threaten the media’s role in democracy, as they are assumed to focus on satisfying user information preferences based on previous histories of interaction and can thereby isolate users from alternative opinions and new topics by creating echo chambers (Sunstein, 2017) and filter bubbles (Pariser, 2011). However, recent studies (Bruns, 2019; Dubois & Blank, 2018; Möller et al., 2018) question whether these effects exist, at least with regard to the general public, and point out that news personalisation can also promote diversity (Möller et al., 2016) and help the media realise its societal functions in democratic societies (Helberger, 2019).

Furthermore, research suggests that rather than valuing systems of personalised delivery that give readers more of the same content, there is an audience demand for information diversity (Bodó et al., 2019). The practical implementation of diversity through software design in the context of news personalisation is, however, a complicated task, as it can be operationalised in different ways, ranging from autonomy-focused perspectives, which aim to suggest content that allows readers to realise their own interests, to adversarial perspectives, which value suggestions that challenge users’ existing beliefs (Helberger et al., 2018). This complexity raises multiple questions, such as the following: What responsibility do media organisations take for the diversity of their personalisation systems? What kind of diversity do they speak about or promote? And do they address its individual and collective effects, or do they promote more diversity-centred ANRs?
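One way such a design choice could look inside a recommender is sketched below: a greedy re-ranking, loosely inspired by maximal marginal relevance, that trades each candidate's relevance score against how strongly its topic is already represented in the selection. The data and the weighting parameter are hypothetical, and this illustrates the general idea rather than the approach of any outlet or system discussed here.

```python
# Illustrative diversity-aware re-ranking (hypothetical data and weights).
def diversify(candidates, lambda_=0.7, k=3):
    """candidates: list of (article_id, topic, relevance score in [0, 1])."""
    selected = []
    while candidates and len(selected) < k:
        chosen_topics = [topic for _, topic, _ in selected]

        def score(item):
            _, topic, relevance = item
            # Penalise topics that already dominate the selection.
            redundancy = chosen_topics.count(topic) / max(len(selected), 1)
            return lambda_ * relevance - (1 - lambda_) * redundancy

        best = max(candidates, key=score)
        selected.append(best)
        candidates = [c for c in candidates if c is not best]
    return [article for article, _, _ in selected]

pool = [
    ("politics/coalition-talks", "politics", 0.95),
    ("politics/budget-vote", "politics", 0.90),
    ("economy/inflation-update", "economy", 0.60),
    ("culture/film-festival", "culture", 0.40),
]
print(diversify(pool))
```

In this toy example the economy story is placed ahead of the second politics story despite its lower relevance score; lowering lambda_ would pull in further topics at the cost of raw relevance, which is exactly the kind of trade-off that a diversity-centred operationalisation has to make explicit.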

Third, the opacity of the algorithmic systems that undergird news personalisation, in particular its system-driven forms, is a concern shaping public and academic debates about individualised content delivery (Burrell, 2016; Pasquale, 2015; Stohl et al., 2016). Transparency has played a key role in the operationalisation of the media’s accountability to its audiences and the general public (McBride & Rosenstiel, 2013). However, newsreaders generally have little understanding of how personalisation systems work, why certain news is recommended to them, or how to intervene when needed (Bucher, 2017; Eslami et al., 2015; Fletcher & Nielsen, 2019). This lack of transparency is a wider societal problem: it prevents civil society from learning more about algorithmic systems and holding the organisations that implement them to account (van Dijck et al., 2018). It thus allows algorithms to distribute resources across populations with little public accountability (Diakopoulos, 2016) and fuels urgent calls for their regulation (Pasquale, 2015; Ziewitz, 2016).

While algorithmic systems are notoriously difficult to understand (Gillespie, 2014; Kitchin, 2017), there are efforts to make their implementation in the news sector more transparent to enable media accountability (Diakopoulos & Koliska, 2017; Hindman, 2017) or to increase the control options so that the newsreaders can use these systems more effectively (Harambam et al., 2018). It is difficult to predict whether these changes will make it to the market, but there is certainly a desire and need for more explanations amongst newsreaders (Harambam et al., 2019; ter Hoeve et al., 2017). In addition to facilitating action by individual newsreaders, transparency rights for users or the public can be used to power collective action and reduce power asymmetries (Ausloos & Dewitte, 2018). Several civil society groups (e.g., Algorithm Watch, Panoptykon Foundation, and Tactical Tech) focus on using algorithmic transparency as a means to enforce media accountability.

These concerns – privacy, diversity, and transparency – will guide us in the analysis of how media organisations communicate the use of news personalisation. In doing so, we challenge the tendency to take a Western-centric perspective on these concerns and call for a more contextualised view of them. James Whitman (2003) already noted a difference between some Western countries with regard to the meaning of privacy (specifically by looking at differences between the United States and Europe), and the urgent need for the de-Westernisation of journalism and communication research has been highlighted by numerous scholars (Hardy, 2008; McQuail, 2000; Wang, 2011). Empirical evidence has also shown significant differences in journalism cultures and media systems worldwide (Brüggemann et al., 2014; Hallin & Mancini, 2004; Hanitzsch et al., 2011; Willnat et al., 2013) and more specifically in terms of media accountability (Bastian, 2019; Fengler et al., 2014).

The latter concept of media accountability serves as the starting point of our study and is defined as “any non-State means of making media responsible towards the public” (Bertrand, 2000, p. 108). We emphasise the importance of accountability because it encompasses not only the journalistic work of media companies but also their ethics and societal role. The differences in media accountability infrastructures that exist at the local level (Bastian, 2019) contrast with privacy discussions that emphasise the potential universalising impact of European data protection law on other countries (i.e., the so-called “Brussels effect”; Bradford, 2012). Not only does the GDPR apply directly to non-EU organisations that monitor the behaviour of EU residents, but companies can also voluntarily apply European data protection law globally to avoid the technical, economic, and reputational difficulties of maintaining different privacy policies. At the same time, European data protection law can also influence non-EU legal systems indirectly by functioning as a model from which other countries can draw inspiration when creating their own national data protection legislation (Azzi, 2018; Bradford, 2020). To go beyond the usual focus on Western media systems, we selected three countries with different types of media accountability infrastructures: Russia, the Netherlands, and Brazil. We chose these countries to identify similarities and differences in the ways in which different media systems deal with privacy and transparency issues regarding news personalisation. In the following sections, we will shed more light on these issues – first by discussing our methodology and then by detailing our findings.

Methodology

Sampling and data collection

To conduct this qualitative study, we selected a sample of 12 news outlets (six quality newspapers and six popular newspapers) from three countries: Brazil, the Netherlands, and Russia. Below, we will discuss the rationale for our choices while we made the selection – in particular our choice of the countries and specific outlets to compare.

Country selection

Our choice of countries was based on the assumption that local news industries are characterised by profound distinctions originating from a diverse set of culturally negotiated journalistic values and attitudes towards transparency, audience participation, and self-regulation, thus impacting the media accountability landscape. This assumption is based on comparative studies using concepts such as “journalism cultures” (Hanitzsch, 2011) or “media systems” (Hallin & Mancini, 2004) to trace the differences between journalistic organisations and routines at the local level. These differences are also known to influence the process of adopting media innovations as shown by multiple studies on media innovation and its adoption (see, for instance, Hanusch et al., 2019; Humprecht & Esser, 2018; Lehtisaari et al., 2018; Nozal Cantarero et al., 2020; Toepfl & Litvinenko, 2018) that suggest that these processes develop differently for specific media systems and media markets.

As media accountability is influenced by the relationship between the media on the one hand and the government and political sphere on the other (Bastian, 2019), the role of the state in the media sector already reveals important information about the media accountability infrastructure. Whereas in the Netherlands both sectors are comparatively disconnected (Groenhart & Evers, 2017), the Brazilian media landscape is closely connected to the political sphere, depending on the outlet and the respective political actor, in terms of either proximity or harsh opposition (Lima, 2011; Matos, 2009). Even more noticeably, the Russian state has an extensive influence on the media system, among other things through the strong paternalistic tradition and the intense instrumentalisation of the mass media by the ruling elites to secure political gains both domestically and internationally (see, for instance, Akhrarkhodjaeva, 2017; Rotaru, 2018; Vartanova, 2012).

Influenced by this role of the state and other characteristics of the respective media systems, media accountability infrastructures differ significantly: “the hybrid statist-commercialised nature of the Russian media system [...] influences not only authentic journalism culture but accountability practices as well” (Vartanova & Lukina, 2017, p. 223). More concretely, transparency plays a role in the Russian case through the publication of self-regulatory documents and the trend to improve “the quality of public dialogue with users [...] and [the] transparency of journalism subjects” (Vartanova & Lukina, 2017, p. 223).

In contrast to the Russian scenario, the Dutch one shows a different distribution of responsibility and influence among actors. Here, an increasing awareness of media accountability and transparency can be observed among the public, the political sector, and the media sector. 3 Because of the lack of both governmental interference and active efforts by the public to hold journalists accountable, the media sector is characterised by higher media accountability (Groenhart & Evers, 2017).

Interestingly, privacy plays an important role in the Dutch media accountability sector because in addition to professional media guidelines on how to cover the political far right, its guidelines focus “mostly on [the] privacy protection of suspects and criminals” (Groenhart & Evers, 2017, p. 174). Transparency is a valued mechanism in the Dutch context, as evidenced by a variety of policy papers published by Dutch news organisations. Increasingly finding new ways to get in touch with the audience is a further characteristic of the Dutch media accountability landscape, though it is applied to differing degrees by different media organisations (Groenhart & Evers, 2017). Furthermore, the Netherlands is the only country in our sample where the GDPR requirements for transparency and accountability are immediately applicable.

These sharp differences between media organisations are visible in the Brazilian media landscape as well. Although a general trend towards more transparency can be observed in Brazil too, outstanding best-practice examples exist alongside very opaque organisational practices (Bastian, 2019). The relationship to the public holds a special role in the Brazilian context because of several reasons: first, the country maintains a tradition of social movements and public demands for the democratisation of the public sphere and the communication sector, and second, neither the media organisations themselves nor the content they distribute adequately represent the rich (cultural and geographical) diversity of the country (Bastian, 2019). This intense relationship between all three parties – the media, the public, and the political sector – and the respective differences between the Brazilian, Russian, and Dutch media accountability infrastructures give reason to expect these differences to be reflected in the documents that are analysed in this study.

Media selection

In addition to the comparison of the different ways in which local news outlets communicate their algorithmic practices, we investigated the difference between quality (broadsheet) and popular (tabloid) news outlets. Our decision to introduce this criterion of sampling is based on the alleged difference between quality and popular media in terms of adopting new technological solutions (Jönsson & Örnebring, 2011; Karlsson & Clerwall, 2013). More specifically, it relies on the assumption that popular media may be more open to innovations that are designed to entertain or better target their users (e.g., by attracting more clicks; Karlsson & Clerwall, 2013). Similarly, we expect quality newspapers to be more responsive to societal concerns about privacy and the use of these new technologies.

Table 1: Sample of news outlets (by country and type)
Outlet type | Brazil | Netherlands | Russia
Popular | Super Notícia, Extra | De Telegraaf, Algemeen Dagblad | Mosskovskii Komsomolets, Argumenty i Fakty
Quality | O Globo, Folha de S. Paulo | NRC Handelsblad, Het Financieele Dagblad | Rossiiskaia Gazeta, Izvestiia

Based on these two criteria, we selected two quality and two popular outlets for each of the three countries, as shown in Table 1. As a primary selection criterion, we used the audience size based on the publicly available estimations from the end of 2017 to the beginning of 2018 (see Mediascope [2018] rankings for Russia, rankings by Grupo de Mídia São Paulo [2018] for Brazil, and SVDJ data [Bakker 2018] for the Netherlands). The secondary criterion was related to the use of personalisation by the respective outlet. The personalisation could be either a user-driven one (e.g., the possibility of subscribing to a certain author or topic) or a system-driven one (e.g., the individualised selection of stories in the “Recommended for you” section); if at least one of the types of personalisation was used, we included the outlet in our sample. The assessment of the presence/absence of personalisation was made via a close examination of the digital versions of the respective outlets and the testing of different options for disseminating the content. Because we were particularly interested in what is visible to the newsreader by default (i.e., without a substantial commitment to the news organisation, which would, for instance, involve buying a subscription), our examination of the use of personalisation focused on the front-end features that are accessible without going beyond a (possible) paywall.

For Russia, we chose the following four press outlets: Mosskovskii Komsomolets, Argumenty i Fakty, Rossiiskaia Gazeta, and Izvestiia. All four outlets are federal-level and predominantly pro-government newspapers; they are also daily newspapers, with the exception of Argumenty i Fakty, which is weekly. With a daily circulation of 606,000 copies, Rossiiskaia Gazeta is the youngest of the four newspapers (it was founded in 1990) and serves as the official outlet of the government of the Russian Federation. Izvestiia (1917) was the official outlet of the Soviet government but was privatised following the dissolution of the Soviet Union and currently has a daily circulation of 322,900 copies. Mosskovskii Komsomolets (1919; current circulation of 513,200 copies a day) and Argumenty i Fakty (1978; current circulation of 4,572,700 copies per week) are two popular outlets with a strong focus on entertainment content. Unlike the quality outlets mentioned above, both Mosskovskii Komsomolets and Argumenty i Fakty have multiple regional editions.

Officially, only Rossiiskaia Gazeta is state-owned; the other three newspapers are commercial enterprises. However, in all three cases, there is a strong relationship between commercial owners and the Russian state that reflects a tendency for political parallelism in the Russian media system (Vartanova, 2012). All four newspapers have digital versions, which are available free of charge without any additional paywalls. The subscriptions for all four outlets (around 16 euros per month on average; the most expensive is Izvestiia at approximately 30 euros per month for the subscription) provide physical copies of the respective newspaper that are delivered by post.

In the case of Brazil, we selected the following newspapers: Super Notícia, Extra, O Globo, and Folha de S. Paulo. Both popular newspapers – Super Notícia and Extra – are relatively recent outlets that were founded in 2002 and 1998, respectively. With their low purchase price, their target group is very broad; the newspapers aim predominantly at the poor and working class. According to Grupo de Mídia São Paulo (2018), Super Notícia has a daily circulation of 219,200 copies, whereas Extra has a circulation of 116,500. In contrast, founded in 1921, Folha de S. Paulo is one of the oldest newspapers in the sample and is the one with the largest daily circulation (300,500 copies). Its competitor O Globo has a circulation of 240,900 copies. Their primary target group is well-educated Brazilians. Brazilian media organisations have often been active in political developments; the most explicit example is probably O Globo, which has been criticised for supporting and benefiting from the military regime.

For the Netherlands, we chose De Telegraaf and Algemeen Dagblad, which represent the popular press, and the quality newspapers NRC Handelsblad and Het Financieele Dagblad. With a circulation of more than 350,000 copies, De Telegraaf, which was founded in 1893, is the largest Dutch daily newspaper. Algemeen Dagblad, which is a more recent daily newspaper, was founded in 1946 after the Second World War and has a circulation of approximately 300,000 copies. Unlike De Telegraaf, which has a single nationwide version, Algemeen Dagblad has multiple regional versions (similar to the Russian popular outlets we selected). NRC Handelsblad (1970) and Het Financieele Dagblad (1943) have smaller circulations than their popular competitors and produce around 80,000 and 50,000 copies a day, respectively.

Unlike Russian outlets, whose digital newspaper versions were not paywalled, Dutch outlets usually required a subscription in order to access the full content. The price varied substantially between quality outlets (from 26 to 42 euros per month) and popular outlets (between 4 and 12 euros per month). For the popular outlets, the subscription included access to the so-called “premium” articles, whereas for quality media, it served as a means of getting behind the paywall, appearing after the reader viewed a few freely available articles.

In summary, the selection criteria of this explorative study are based on the notion of diversity. We have included different media systems and different types of newspapers to explore the width and depth of how newspapers engage with societal concerns regarding personalisation. Therefore, the focus of this study lies in finding similarities and differences between quality and popular media in different countries. By selecting the most popular newspapers that enable (any form of) personalisation, we ensured that we focus on the cases which are important for the respective media systems. Because this is an explorative qualitative study, its aim is not to find law-like generalisations across time and place, but to advance the current understanding of the ways in which the use of ANRs is communicated by news organisations to their readers.

Data analysis

After establishing a sample of news outlets for our study in July–August 2018, we proceeded to the analysis of front-end communication practices related to the use of algorithmic recommendation. Specifically, we focused on privacy policies both because of their importance (i.e., as binding legal agreements between content providers and users) and because of their explainability potential (i.e., being a major source of available information for users about the ways in which their data are processed and collected) (Wilson et al., 2016).

We divided our analysis into two sections. In the first section, we summarised the front-end personalisation features used in our sample. We started by visiting the site (from an internet protocol [IP] address in the EU) and examining the type of personalisation that is visible to the users (user- or system-driven), and then we looked at the accessibility of the privacy policies (how easy/difficult it is to locate them on the websites of the respective organisations). Finally, we examined the title of the website section discussing the matters of personalisation and privacy to check for possible differences between the country- or type-based categories of outlets. All of these steps were conducted using a desktop device, so the findings below are applicable to a rather specific scenario – that is, the users engaging with the news outlet via its native website and accessing it via the desktop browser. We also took into account sponsored – that is, third-party – news materials (e.g., news from partner organisations) that were found on the website and included these in our analysis if such materials were disseminated with the help of user- or system-driven personalisation.

In the second section of our analysis, we focused on the communication of personalisation in the privacy policies of news organisations. For this purpose, we used a document analysis approach to find out in more detail how newspapers communicate their personalisation practices (Bowen, 2009). Our analysis focused on seven theoretically informed characteristics of privacy policies that feature prominently in the studies dealing with data protection and privacy (see, for instance, Ausloos & Dewitte, 2018; Kuner, 2005; Kuner et al., 2016; Organisation for Economic Co-operation and Development, 2013). These characteristics are as follows: a) the mentioning of personalisation; b) the complexity and type of language (formal/colloquial) used; c) the kind of data collected; d) the purposes of the data collection; e) data storage and sharing; f) data processing; and, finally, g) data subject rights.
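As an illustration of how such a coding frame can be recorded per outlet, the sketch below expresses characteristics a) to g) as a simple data structure; the field names and example values are ours and are not taken from any coded document.

```python
# Hypothetical representation of the coding frame (fields mirror characteristics a-g).
from dataclasses import dataclass, field

@dataclass
class PrivacyPolicyCoding:
    outlet: str
    country: str
    outlet_type: str                                          # "quality" or "popular"
    mentions_personalisation: bool                            # a)
    language_style: str                                       # b) e.g., "formal" or "colloquial"
    data_collected: list = field(default_factory=list)        # c)
    collection_purposes: list = field(default_factory=list)   # d)
    storage_and_sharing: str = ""                             # e)
    data_processing: str = ""                                 # f)
    data_subject_rights: list = field(default_factory=list)   # g)

example = PrivacyPolicyCoding(
    outlet="Example Daily", country="NL", outlet_type="quality",
    mentions_personalisation=True, language_style="colloquial",
    data_collected=["email address", "cookies", "IP address"],
    collection_purposes=["service improvement", "targeted advertising"],
)
print(example.mentions_personalisation)
```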

Both steps of the analysis were conducted by three of the authors of this paper in September 2018. To implement the analysis, the authors used Google spreadsheets. The contents of the privacy policy documents were coded according to a set of questions (e.g., Is personalisation visible on the website? What type of personalisation is mentioned in the privacy policy?) concerning the visibility of front-end personalisation features (step 1) and the seven characteristics of privacy policies listed above (step 2). The results were then discussed in a series of personal meetings. Because the analysis required knowledge of Dutch, Portuguese and Russian, each author conducted the analysis for the country in whose language he or she was most proficient.

To minimise potential researcher bias, we used concise and simple questions in the case of front-end personalisation features (i.e., to decrease the possibility of ambiguity and disagreement among the coders) and discussed the coded parts of the privacy policies among the coders. Despite this, we cannot fully exclude the possibility of some of our interpretations being influenced by translation bias caused by the stronger or weaker emphasis placed on certain aspects of personalisation communication by coders working with a particular language. To address this, the coding results were checked by the other authors (in the cases in which their language knowledge allowed for it, i.e., the Brazilian and Dutch outlets), and disagreements were discussed among the authors. A similar procedure was used in the cases in which the main coder had doubts concerning the attribution of a particular policy aspect to one of the above-mentioned characteristics. In these cases, the section dealing with the respective aspect was translated by the coder and discussed with the other two authors to consensus-code it. While other sources of bias can also influence our findings (e.g., the fact that all the coders are professional scholars could imply a more positive attitude towards quality media compared with popular ones), we assume that the use of clear criteria to identify the presence or absence of specific features related to personalisation communication should limit the possible effect of such biases.

Front-end personalisation

Personalisation visibility

We started our analysis by looking into front-end personalisation features. Our observations suggest the absence of substantial differences in the type of personalisation used by quality and popular media in all three countries. In almost half of the cases (five out of 12), we detected a combination of both types. System-driven personalisation is more commonly used by the Brazilian and Russian outlets, whereas the Dutch ones focus more on user-driven personalisation. Furthermore, we found that quality media tends to offer user-driven personalisation services more often. While this observation can hardly be viewed as generalisable considering the size of our sample, it raises the question of whether the provision of more control to the users via user-driven personalisation can be part of quality news services.

Our analysis indicates that for the majority of outlets, system-driven personalisation features are presented through subsections on the front page and sub-pages. The titles of these subsections vary: in addition to “Recommended for you”, we found “Read also”, “What else to read”, “Partner stories” and “World is close”. The format of the personalised subsections is rather similar: each of these usually includes three to five links to the news stories, and they are often accompanied with story-related images.

User-driven personalisation is mainly offered through mail-based updates. Not all of these updates are personalised per se; this is especially the case with editorial newsletters that include the major news events of the day (Argumenty i Fakty). Other options are more individualised, such as in the case of individual subscriptions for specific stories (Rossiiskaia Gazeta). One particularly interesting example is the user-driven service MyNews, which is offered by the Dutch quality outlet (Financieele Dagblad). After free registration, it allows the readers to compose their own personalised news feeds by choosing which topics to follow. The list of topics varies from “Media” and “Stocks” to “Donald Trump” and “Technology”. Furthermore, the service provides an option to label certain articles as “Favourites” and to see one’s “Recently read articles”.
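A service of this kind can be pictured as a simple filter over explicitly followed topics, as in the hypothetical sketch below; the topics and articles are invented and do not reproduce the MyNews interface.

```python
# Hypothetical user-driven personalisation: the reader's explicit choices drive the feed.
followed_topics = {"Media", "Technology"}    # topics the reader chose to follow
favourites = set()                           # articles the reader has starred

articles = [
    {"id": 1, "title": "Streaming wars heat up", "topic": "Media"},
    {"id": 2, "title": "Chip shortage easing", "topic": "Technology"},
    {"id": 3, "title": "Stock markets rally", "topic": "Stocks"},
]

def personal_feed():
    # Only explicit subscriptions determine the selection; no behavioural inference is involved.
    return [a for a in articles if a["topic"] in followed_topics]

favourites.add(2)                            # the reader explicitly marks a favourite
print([a["title"] for a in personal_feed()])
```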

Accessibility of privacy policies

Following the examination of the types of personalisation that are used, we looked at the location and accessibility of the privacy policies. In the case of the Netherlands, all outlets present a cookie notification before readers can enter the sites. These notifications inform the readers about the tracking of their personal data on these sites and provide links to the more detailed cookie statements. The cookie notifications vary both in terms of their length and language style. In the case of popular media (Telegraaf and Algemeen Dagblad), the notifications are rather long and formal, whereas quality media (NRC Handelsblad and Financieele Dagblad) put more effort into making their notifications clearer and more understandable.

Neither the Brazilian nor Russian outlets use cookie notifications to inform users before they access the websites. This distinction can be attributed to the absence of EU-style data protection legislation. Instead, three out of four Brazilian outlets make their privacy policies visible at the bottom of the front pages. An exception is the popular outlet Super Notícia, whose privacy policies are detectable only after going through several sub-pages and reaching the larger news portal, O Tempo. In the case of Russia, accessing privacy policies is a non-trivial task: the privacy statements of Mosskovskii Komsomolets and Izvestiia are accessible only through several sub-pages (including ones with rather non-intuitive titles, such as “Third-party Advertisements” in the case of Mosskovskii Komsomolets). In other cases (Rossiiskaia Gazeta and Argumenty i Fakty), links to the privacy policies are offered only during user registration on the respective portals.

Names of privacy sections

Finally, we examined how websites’ sections on privacy policies were named. Similar to our observations on accessibility, we identified major differences between countries and not between media types. Both the Dutch and Brazilian outlets specifically referred to privacy in the sections’ names, using titles such as “Privacy” and “Privacy Policies”. In contrast, the Russian outlets tended to avoid such normative references: both quality outlets used the title “User Agreement”, one popular outlet (Argumenty i Fakty) called it “Confidentiality Politics”, and another discussed privacy in the “Third-Party Advertisements” section.

The lack of references to normative concepts (i.e., privacy) in the case of Russia can be attributed to the slow adaptation of Russian data protection to the changing digital landscape (Kukushkina et al., 2017). Despite the rapid deployment of cutting-edge technologies (e.g., online tracking and behavioural advertising) by Russian media industries, corresponding regulations have not yet been developed. Together with the weak civil society (Brenchenmacher, 2017) and growing use of online surveillance (Gainutdinov, 2017), these reasons may contribute to the limited visibility of privacy matters in the documents of Russian news media.

Personalisation and privacy policies

The mention of personalisation

We started our analysis by examining references to personalisation in privacy policies. We found that only half of the outlets actually mention personalisation, which is usually done by providing a general reference to “personalised services and offers” (Izvestiia, n.d.). With the exception of Financieele Dagblad, which provides a detailed description of its user-driven personalisation system, references to personalisation generally remain vague and are usually presented as an extra reason to justify personal data collection rather than as a specific service.

The vagueness of personalisation references is reflected in the lack of differentiation between advertisement and news personalisation, particularly in the non-EU quality outlets. While popular outlets (Super Notícia and Mosskovskii Komsomolets) often explicitly inform their readers about the use of advertisement personalisation, the quality outlets in Brazil and Russia tend to use more ambiguous language. O Globo, for instance, mentions personalisation without further specification; similarly, Izvestiia rather generally refers to personalised services without actually specifying which (editorial or commercial) content is personalised.

The lack of differentiation between types of personalisation results in a lack of clarity about their technical distinctions. For instance, it is unclear whether the same types of user data (e.g., demographic information or content interaction history) are used by both advertisement and news personalisation. The absence of such information not only limits the ability of users to control the algorithmic systems used by the specific outlet but also makes it harder to determine what forms of personalisation are actually at work. Such obscurity highlights the possibility of blending commercial and normative aspects of personalisation and limits users’ ability to control what information they receive (e.g., by preventing users from opting out of a specific form of personalisation because advertisement and news personalisation are not differentiated from each other).

The complexity and type of language used

After examining how privacy documents referenced personalisation, we evaluated the general use of language in relation to news personalisation. Our observations point out that the majority of quality news sources, together with some popular outlets, use rather colloquial language to communicate the use of personalisation. One illustrative example comes from the Dutch quality outlet NRC Handelsblad (n.d.), which explicitly states that “journalism is our product, not your data” and details its “privacy promises” with simple rhetorical questions, such as “Does NRC sell your data?”, followed by simple answers: “No. Never. Nowhere”.

In some cases, news outlets adopt more formal language, for example by referring to specific legislative documents or using jargon or technical terms. For instance, Financieele Dagblad (2019) describes in detail how it interacts with Amazon Web Services; however, despite their significant informative value, statements such as “We have only given a very limited, minimum required number of employees access to the data. And only if this is necessary for the performance of the function. Moreover, access to the data is only authorised for that part of the data that is necessary for the execution of that function” are probably not very engaging for readers.

The kind of data collected

Following the examination of the language of privacy policies, we moved towards analysing the way in which these documents discuss the use of newsreaders’ data. The first part of our analysis – data collected by the outlet – indicates substantial similarities between popular and quality outlets. With the exception of Russia, where the popular outlets we examined remain rather tight-lipped about data collection (e.g., limiting themselves to formal references to Russian data protection legislation), outlets of both types list a number of types of user data that are collected. Generally, the types of data listed depend on the service that is used (e.g., subscriptions, advertisement/special offers, or contact forms for approaching the news outlet).

The majority of outlets differentiate between two major categories of data. The first category includes data provided by the reader during registration or explicitly added after registration. Such data include name, surname, region, telephone number, email address, postal address, location, photo, date of birth, links to personal profile on social networking sites, and so on. The second category involves data collected automatically through the reader’s interaction with the website; these include user IP, cookies, browser type, device type, time of access, address of requested page, and user-agent data.

A number of outlets also note that they use data about their readers, which they acquire from third parties. All Russian outlets that we reviewed present demographic user profiles in their advertisement sections; these profiles are based on personal data collected by unspecified third parties. Similarly, one Dutch popular outlet, De Telegraaf, mentions that it enhances its data with user data from third parties, such as customer database companies. Other Dutch news organisations (Financieele Dagblad and Algemeen Dagblad) state that they merge their user data with the data provided by their partners who participate in a joint digital advertising initiative called “Buymedia”.

The purposes of the data collection

After identifying different types of collected data, we examined the declared purposes of this data collection. With the exception of the Russian popular outlets, which scarcely note data collection, all outlets cite a number of reasons for collecting their users’ data. The two most common purposes (referenced by 10 out of 12 privacy policies) are communication with customers and improvement of newspaper services.

The former purpose includes both general communications (e.g., for processing user requests) and targeted advertisements (e.g., pushing updates about new offers). A similarly broad interpretation is used in relation to the latter purpose: the services mentioned in the documents vary from the general improvement of the outlets’ products (enabled by a better understanding of how readers use digital services) to more concrete tasks, such as optimising the interactive experience of users’ navigation on the website (O Globo and Folha de S. Paulo).

We noticed that normative concepts are rarely used to communicate the purposes of data collection. The majority of outlets tend to describe data collection in rather instrumentalist terms and imply that it is necessary to optimise the services that are provided to the users. The single exception to this rule is found in the case of NRC Handelsblad, which states explicitly that it uses data collection not to build user profiles or to follow readers on the internet, but to improve its journalism.

Data storage and sharing

In contrast to the relatively detailed description of user data collection and its purposes, privacy policies usually remain obscure about data storage. Such obscurity is particularly pronounced in the case of the popular outlets in our sample, especially the two Russian ones, which ignore the matter of data storage completely. When data storage is mentioned, it is often described in general terms that leave significant space for interpretation. For example, Rossiiskaia Gazeta (n.d.) notes that it “stores personal data but puts significant organisational and technical effort to protect users’ personal information from illegal or accidental access according to the legislation of the Russian Federation” (without specifying concrete legislative acts). Similarly, Folha de S. Paulo (2018) notes that it will use “all means required to protect data confidentiality, integrity, and availability”.

Concrete specifications regarding the geographical location of data storage and the length of the storage period are mentioned exclusively by quality media. The degree of concreteness varies significantly between countries and can be attributed to the different legislative contexts (e.g., the GDPR in the EU, which explicitly requires organisations to disclose who will receive the users’ data, how long the data will be stored, and any intention to transfer the data outside of the EU). The Brazilian outlets, for instance, state that data will be stored in the companies’ databases, which can be accessed only by authorised and qualified persons, but do not specify the geographical locations of their databases. The Dutch outlets explicitly mention the period of data storage, which varies from six months to five years depending on the specific regulations. Furthermore, one of these outlets (Financieele Dagblad) states that it stores all user data in one central data warehouse environment, which is physically located on the European mainland.

Concerning data sharing, almost all news outlets note that they can share user data with third parties. Usually, outlets state that they can share users’ data with partners that are involved in their business operations (e.g., suppliers, software builders, and advertising agencies), as well as with (tax) authorities, if this is required by legislation. The scale of this sharing and the degree of transparency about relations with third parties vary between countries. For instance, the two quality Russian outlets in our sample – Izvestiia and Rossiiskaia Gazeta – note that they can share data with third parties in cases that fall under Russian legislation (without, however, specifying such cases), whereas the popular Russian outlet Moskovskii Komsomolets notes that it collaborates with third parties for advertisement purposes, which can permit these third parties to place cookies on users’ machines to identify them. Both the quality and popular Brazilian outlets in our sample are similarly vague about the degree of third-party data sharing and mention that some third-party partners can request users’ personal information from them (O Globo and Extra). The Dutch outlets that were examined tend to provide more comprehensive information about their third-party sharing, for example by giving concrete examples of what kind of data is shared with which parties (De Telegraaf) or by noting the presence of processor agreements with the third parties that regulate such sharing (NRC).

Data processing

Among all the issues related to data use, the actual processing of user data remains particularly obscure. In all three countries, the privacy policies of the analysed media mention that personal data are used for analytical purposes, but the exact procedures remain unclear. This is particularly true in the case of popular outlets, which usually omit the subject of data processing completely. The quality media (Izvestiia and Rossiiskaia Gazeta in Russia) note that readers’ data can be used for statistical and other types of research and that the research will rely on anonymised data (Rossiiskaia Gazeta) and comply with legislation (Izvestiia); however, no concrete details are provided. The obscurity of data-processing practices leaves a conceptual gap in the communication about news personalisation in both the EU and non-EU contexts. While readers are informed about the types of collected user data and the purposes for which these data are used, they remain in the dark about what happens between the data collection and the generation of outcomes. Whether or not readers are profiled, let alone how, is not described in the privacy policies.

Data subject rights

In the final part of our analysis, we examined how the privacy policies approach the topic of data subject rights. Our comparison indicates that only the Dutch outlets devote special attention to the rights of users regarding their personal data. This difference can be viewed as another example of the impact of the GDPR on the media industry. The respective sections are found in the privacy policies of all Dutch outlets, which detail users’ right to request all the data collected about them and, if needed, to have these data deleted. The quality outlets (Algemeen Dagblad and NRC Handelsblad) also include the contact details of the official Data Protection Officer, who can be contacted for matters related to users’ data rights.

In the case of the Brazilian and Russian outlets, the subject of data rights is generally absent. Among the four Russian outlets, only two newspapers touch on the subject. A popular outlet (Argumenty i Fakty) notes that if users believe that their rights have been breached as part of data processing, they can contact the newspaper by email; however, it is unclear who is behind the email address provided. In the second case, a quality newspaper, Rossiiskaia Gazeta, mentions that users own their data, but these data include only content that is produced through interactions with the website (e.g., comments or photos shared by users). While Rossiiskaia Gazeta also mentions the personal data of users (i.e., items viewed), it does not state whether users own these data or whether they can request or access them.

Conclusions and discussion

While some organisations are transparent about the data they collect, the way in which the news media utilise the data of their users to personalise content delivery generally remains obscure. In line with the findings from the 2018 Reuters report (Newman, 2018), 4 our analysis reveals the common use of user- and system-driven personalisation in all three examined countries. However, this use of personalisation is rarely mentioned in the formal documents available to newsreaders, such as privacy policies. Even when personalisation is mentioned explicitly, it is often unclear whether the term refers to advertisement personalisation, news personalisation, or both. It is similarly unclear which data or techniques are used to implement personalised content delivery.

At the same time, the disclosures indicate that data collection becomes more intense for more committed readers as a result of their increasing use of services and the growing exposure of their reading behaviour. In many cases, these two processes amplify one another, as the use of additional services and the removal of paywalls enable more engaged reading and thereby more intense collection of readers’ data. This leads to a situation in which more committed readers are increasingly subjected to the effects of personalisation systems that largely remain black boxes to them. In particular, due to this power asymmetry, those readers who intensively use personalised content delivery systems (to exercise their right) to receive information are put in a vulnerable position. While it remains unclear how strong this influence is, and to what degree users would be eager to familiarise themselves with information about personalisation and change their behaviour, 5 we argue that the observed obscurity of personalisation systems deprives newsreaders of the opportunity to understand how exactly their data are used and the degree to which this usage conforms to their privacy expectations.

This stands in sharp contrast to current policy discussions on algorithmic transparency and accountability. Such discussions have long since moved beyond arguing for the disclosure of data collection practices and towards the need for increased transparency of the algorithms that use these data and the resulting output (Diakopoulos & Koliska, 2017). The GDPR’s right to an explanation has featured prominently in this discussion, and it is typically argued that the public is at least entitled to information regarding the algorithms’ existence and general functioning, if not explanations of specific algorithmic decisions (Kaminski, 2019; Wachter et al., 2017). Similarly, upon its entry into force 6, Convention 108+ from the Council of Europe, of which Russia and the Netherlands are signatories and Brazil is an observer, will provide users with the right to request information regarding the reasoning underlying data processing. Beyond these discussions of legally required information, recent work has called for further transparency regarding, for example, counterfactual explanations or information about the output of algorithmic decision-making processes (Diakopoulos & Koliska, 2017; Wachter et al., 2017a).

Such transparency is a prerequisite for achieving the goals of data protection governance. Though disagreements remain over what exact information needs to be disclosed and to whom, organisations cannot be held accountable for the ways in which they use algorithms if their existence, functioning, and output are kept secret. This is true, first, at the individual level, where transparency serves as a precondition for the individual control tools that continue to be central to data protection law (Ausloos & Dewitte, 2018; van Drunen et al., 2019). User empowerment takes on added significance in the context of the media, in which it continues to function as a tool that can offer users some protection in the absence of stricter media regulation. At the same time, the fact that many users do not necessarily read or act on the information that is provided to them (Nissenbaum, 2011) amplifies the important role that public-facing transparency can play in producing accountability, either as a result of disciplinary action by a few motivated individuals or through more collective pressure exercised through civil society or other media professionals (Ausloos & Dewitte, 2018). Though such parties are less affected by the complexity of the language we identified, the ambiguity or outright lack of information continues to hamper accountability from this perspective as well.

Our analysis shows that in practice, algorithmic transparency in the media lags far behind not only discussions of what information regarding algorithmic decision-making should be disclosed but also what information is argued to be required under the GDPR and soon Convention 108+. This indicates that more work needs to be done not only to propose new transparency requirements but also to find a consensus regarding what the minimal open transparency norms required by the GDPR are and how these norms can best be enforced. Until the gap between practice and discussions on algorithmic transparency is closed, algorithmic accountability studies that presume the availability of such information will need to take into account the fact that it is often unclear whether and how (news) personalisation is occurring.

The comparison between quality and popular media showed a few differences in the ways in which they approached news personalisation. Based on the limited sample that we used, quality media tends to provide more options for user-driven personalisation (i.e., self-tailored news feeds based on email notifications) by giving readers more control over the selection of news to consume. Furthermore, quality outlets offer more information about data storage and the purposes of data collection compared with popular outlets, in particular in the case of the non-EU countries (Brazil and Russia). The distinction between quality and popular media is likely not explained by data protection law, which treats quality and popular media alike; it may instead find its roots in the different expectations of the audiences, the media’s use of transparency to build trust, or the differences in media systems. It is important to note, however, that because of the size of our sample, these observations can be treated only as exploratory findings and that further research is required to make them generalisable.

The presence of such differences within the Netherlands also indicates that while the EU sets a high transparency bar for all media organisations, quality and popular media differ in how well they meet these requirements. Our observations indicate that quality media organisations tend to disclose more information in a more understandable manner, which can be explained by their willingness to comply with legislation in a way that distinguishes them from popular media. Beyond their ethical value, investments in transparency, explainability, and a solid relationship with the user could strengthen the media’s position vis-à-vis its competitors in terms of the trust and loyalty of its readers and could showcase the distinction between the ways in which legacy and social media use personalisation. Research from both the perspective of trust in the media and trust in the use of data suggests a modest relationship between trust and engagement (Curry & Stroud, 2019; Felzmann et al., 2019; Strömbäck et al., 2020). However, emerging research that combines the two perspectives by analysing how trust in the media is affected by its data usage suggests a more complex picture: although indiscriminate data collection may erode the media’s trustworthiness, readers’ assessment of the media’s data collection disclosures is influenced by both their trust in the media organisation and their trust in online data collection more generally (Sørensen & Van den Bulck, 2020; Steedman et al., 2020).

Finally, the cross-country comparison revealed a number of major differences concerning both the basic aspects of personalisation communication, such as the type of language used (e.g., the limited number of references to the normative concept of privacy in the personalisation communication of the Russian media outlets included in the sample), and the interpretation of more specific concepts, such as data storage (e.g., clear definitions of the storage period and the physical location of data storage in the case of the Netherlands). Mirroring Groenhart and Evers’s (2017) assessment of policy documents as an indicator of media accountability, the media organisations’ disclosures in privacy policies matched the broader media accountability infrastructure of their respective countries. The Netherlands, which has the most advanced media accountability infrastructure in our sample, is also the country where transparency and explainability are reflected most clearly in our data. Media organisations in both Brazil and (to a slightly greater degree) Russia, where the media accountability infrastructure suffers from greater shortcomings, are more hesitant to provide detailed information about their personalisation mechanisms. Many of these distinctions can also be attributed to contextual factors (especially legal regulations, such as the GDPR in the Netherlands), which emphasises the importance of context in analysing personalisation practices in different environments. 7 This arguably reflects the added value of the GDPR, as its clear transparency obligations (e.g., regarding data collection and user rights) are reflected in our analysis.

At the same time, the strong differences between countries that are evident in our analysis give a different practical perspective to current discussions on the internationalisation of data protection law. Even though “[t]he EU sets the tone globally for privacy and data protection regulation” (Bradford, 2020, p. 132), our analysis indicates that strong differences remain between the countries regardless of any extraterritorial impact of European data protection law. This may indicate that the internationalisation of data protection law faces obstacles in sectors such as the media, which remains focused on national audiences, is intertwined with the states in which it operates (in the case of Russia), and, in the case of the EU, continues to be subject to a lesser degree of legal harmonisation (Erdos, 2016). At the same time, legal convergence is continuing. Brazil is in the process of adopting a GDPR-style regulation, and Russia has signed on to Convention 108+ and its stronger (algorithmic) transparency requirements. Future research will track whether this legal convergence translates into increased algorithmic transparency (and accountability) in practice.

Another important aspect of future research on personalisation communication is the scaling of the analysis, which will allow researchers to go beyond exploratory investigation and draw more generalisable conclusions about the relationship between media types or systems and the communication of algorithmic innovation. To implement such research, it will be important to look at a broader range of news outlets and to consider different scenarios in which news personalisation communication occurs. For the current study, we did not look at the scenarios in which users employ mobile browsers and/or mobile applications to access news from the respective outlets; however, we plan to do so in a follow-up study to determine whether there are any differences in the use of personalisation and its communication between desktop and mobile devices.

Acknowledgements

We would like to thank Noemi Festic, Sherine Conyers, Kristofer Erickson, Ricard Espelt, and Frédéric Dubois for their valuable peer-review comments and editorial review.

References

Aguirre, E., Roggeveen, A. L., Grewal, D., & Wetzels, M. (2016). The personalization–privacy paradox: Implications for new media. Journal of Consumer Marketing, 33(2), 98–110. https://doi.org/10.1108/JCM-06-2015-1458

Akhrarkhodjaeva, N. (2017). Instrumentalisation of mass media in electoral authoritarian regimes: Evidence from Russia’s presidential election campaigns of 2000 and 2008. Columbia University Press.

Anderson, C. W. (2011). Between creative and quantified audiences: Web metrics and changing patterns of newswork in local US newsrooms. Journalism, 12(5), 550–566. https://doi.org/10.1177/1464884911402451

Ausloos, J., & Dewitte, P. (2018). Shattering one-way mirrors: Data subject access rights in practice. International Data Privacy Law, 8(1), 4–28. https://doi.org/10.1093/idpl/ipy001

Azzi, A. (2018). The challenges faced by the extraterritorial scope of the General Data Protection Regulation. Journal of Intellectual Property, Information Technology and E-Commerce Law, 9(2), 126–137. https://www.jipitec.eu/issues/jipitec-9-2-2018/4723

Bakker, P. (2018, April 18). Digitale oplage kranten blijft fors stijgen. Stimuleringsfonds voor de Journalistiek. https://www.svdj.nl/de-stand-van-de-nieuwsmedia/digitale-oplage-kranten-stijgen/

Balkin, J. M. (2009). The future of free expression in a digital age. Pepperdine Law Review, 36(2), 427–444. https://digitalcommons.pepperdine.edu/plr/vol36/iss2/9/

Bastian, M. (2019). Media and accountability in Latin America: Framework – conditions – instruments. Springer VS. https://doi.org/10.1007/978-3-658-24787-4

Bastian, M., & Helberger, N. (2019, September). Safeguarding the journalistic DNA: Attitudes towards value-sensitive algorithm design in news recommenders [Paper presentation]. Future of Journalism Conference, Cardiff.

Bastian, M., Makhortykh, M., & Dobber, T. (2019). News personalization for peace: How algorithmic recommendations can impact conflict coverage. International Journal of Conflict Management, 30(3), 309–328. https://doi.org/10.1108/IJCMA-02-2019-0032

Bastos, M. (2016). Digital journalism and tabloid journalism. In B. Franklin & S. Eldridge (Eds.), The Routledge companion to digital journalism studies (pp. 217–226). https://doi.org/10.4324/9781315713793-22

Berry, S. (2009). Watchdog journalism. Oxford University Press.

Bertrand, C.-J. (2000). Media ethics & accountability systems. Transaction Publishers.

Bodó, B., Helberger, N., Eskens, S., & Möller, J. (2019). Interested in diversity. Digital Journalism, 7(2), 206–229. https://doi.org/10.1080/21670811.2018.1521292

Bowen, G. A. (2009). Document analysis as a qualitative research method. Qualitative Research Journal, 9(2), 27–40. https://doi.org/10.3316/QRJ0902027

Bradford, A. (2012). The Brussels effect. Northwestern University Law Review, 107(1), 1–68. https://scholarlycommons.law.northwestern.edu/nulr/vol107/iss1/1/

Bradford, A. (2020). The Brussels effect: How the European Union rules the world. Oxford University Press. https://doi.org/10.1093/oso/9780190088583.001.0001

Brechenmacher, S. (2017). Civil society under assault: Repression and responses in Russia, Egypt, and Ethiopia. Carnegie Endowment for International Peace.

Brüggemann, M., Engesser, S., Büchel, F., Humprecht, E., & Castro, L. (2014). Hallin and Mancini revisited: Four empirical types of Western media systems. Journal of Communication, 64(6), 1037–1065. https://doi.org/10.1111/jcom.12127

Bruns, A. (2019). Filter bubble. Internet Policy Review, 8(4). https://doi.org/10.14763/2019.4.1426

Bucher, T. (2017). The algorithmic imaginary: Exploring the ordinary affects of Facebook algorithms. Information, Communication & Society, 20(1), 30–44. https://doi.org/10.1080/1369118X.2016.1154086

Büchi, M., Fosch Villaronga, E., Lutz, C., Tamò-Larrieux, A., Velidi, S., & Viljoen, S. (2019). Chilling effects of profiling activities: Mapping the issues. SSRN. https://doi.org/10.2139/ssrn.3379275

Van den Bulck, H., & Moe, H. (2018). Public service media, universality and personalisation through algorithms: Mapping strategies and exploring dilemmas. Media, Culture & Society, 40(6), 875–892. https://doi.org/10.1177/0163443717734407

Burrell, J. (2016). How the machine “thinks”: Understanding opacity in machine learning algorithms. Big Data & Society, 3(1), 1–12. https://doi.org/10.1177/2053951715622512

Chadwick, A., Vaccari, C., & O’Loughlin, B. (2018). Do tabloids poison the well of social media? Explaining democratically dysfunctional news sharing. New Media & Society, 20(11), 4255–4274. https://doi.org/10.1177/1461444818769689

Coddington, M. (2015). Clarifying journalism’s quantitative turn: A typology for evaluating data journalism, computational journalism, and computer-assisted reporting. Digital Journalism, 3(3), 331–348. https://doi.org/10.1080/21670811.2014.976400

Council of Europe. (n.d.). Chart of signatures and ratifications of Treaty 223. https://www.coe.int/en/web/conventions/full-list/-/conventions/treaty/223/signatures

Curry, A. L., & Stroud, N. J. (2019). The effects of journalistic transparency on credibility assessments and engagement intentions. Journalism. https://doi.org/10.1177/1464884919850387

de Lima, V. A. (2011). Regulação das comunicações. História, poder e direitos. Paulus.

Diakopoulos, N. (2016). Accountability in algorithmic decision making. Communications of the ACM, 59(2), 56–62. https://doi.org/10.1145/2844110

Diakopoulos, N., & Koliska, M. (2017). Algorithmic transparency in the news media. Digital Journalism, 5(7), 809–828. https://doi.org/10.1080/21670811.2016.1208053

van Drunen, M. Z., Helberger, N., & Bastian, M. (2019). Know your algorithm: What media organizations need to explain to their users about news personalization. International Data Privacy Law, 9(4), 220–235. https://doi.org/10.1093/idpl/ipz011

Dubois, E., & Blank, G. (2018). The echo chamber is overstated: The moderating effect of political interest and diverse media. Information, Communication & Society, 21(5), 729–745. https://doi.org/10.1080/1369118X.2018.1428656

Erdos, D. (2016). Statutory regulation of professional journalism under European data protection: Down but not out? Journal of Media Law, 8(2), 229–265. https://doi.org/10.1080/17577632.2016.1250405

Eskens, S. (2019). A right to reset your user profile and more: GDPR-rights for personalized news consumers. International Data Privacy Law (Online First). https://doi.org/10.1093/idpl/ipz007

Eskens, S., Helberger, N., & Moeller, J. (2017). Challenged by news personalisation: Five perspectives on the right to receive information. Journal of Media Law, 9(2), 259–284. https://doi.org/10.1080/17577632.2017.1387353

Eslami, M., Rickman, A., Vaccaro, K., Aleyasen, A., Vuong, A., Karahalios, K., Hamilton, K., & Sandvig, C. (2015). “I always assumed that I wasn’t really that close to [her]”: Reasoning about invisible algorithms in news feeds. Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, 153–162. https://doi.org/10.1145/2702123.2702556

Esser, F. (1999). Tabloidization of news: A comparative analysis of Anglo-American and German press journalism. European Journal of Communication, 14(3), 291–324. https://doi.org/10.1177/0267323199014003001

Eurobarometer. (2019). Special Eurobarometer 487a: The General Data Protection Regulation (No. 487a; Special Eurobarometer). Publications Office of the European Union. https://doi.org/10.2838/43726

Felzmann, H., Villaronga, E. F., Lutz, C., & Tamò-Larrieux, A. (2019). Transparency you can trust: Transparency requirements for artificial intelligence between legal norms and contextual concerns. Big Data & Society, 6(1). https://doi.org/10.1177/2053951719860542

Fengler, S., Eberwein, T., Mazzoleni, G., Porlezza, C., & Russ-Mohl, S. (Eds.). (2014). Journalists and media accountability: An international study of news people in the digital age. Peter Lang.

Financieele Dagblad. (2019). Privacy statement FD Mediagroep. Financieele Dagblad. https://fdmg.nl/wp-content/uploads/Privacy_Statement.pdf

Fletcher, R., & Nielsen, R. K. (2019). Generalised scepticism: How people navigate news on social media. Information, Communication & Society, 22(12), 1751–1769. https://doi.org/10.1080/1369118X.2018.1450887

Folha de S.Paulo. (2018). Política de privacidade – Folha de S. Paulo. https://www1.folha.uol.com.br/paineldoleitor/2018/05/politica-de-privacidade-folha-de-spaulo.shtml

Gainutdinov, D. (2017). Russia’s surveillance state is giving us a false sense of security. Open Democracy. https://www.opendemocracy.net/en/odr/russia-s-surveillance-state-is-giving-us-false-sense-of-security/

Gillespie, T. (2014). The relevance of algorithms. In T. Gillespie, P. J. Boczkowski, & K. A. Foot (Eds.), Media technologies: Essays on communication, materiality, and society (pp. 167–194). The MIT Press.

Groenhart, H., & Evers, H. (2017). The Netherlands: From awareness to realization. In T. Eberwein, S. Fengler, & M. Karmasin (Eds.), The European handbook of media accountability (pp. 170–179). Routledge. https://doi.org/10.4324/9781315616353-22

Grupo Mídia São Paulo. (2018). Mídia dados Brasil 2018. http://midiadados.org.br/2018/Midia%20Dados%202018%20%28Interativo%29.pdf

Haak, B., Parks, M., & Castells, M. (2012). The future of journalism: Networked journalism. International Journal of Communication, 6(16), 2923–2938. https://ijoc.org/index.php/ijoc/article/view/1750/832

Hallin, D. C., & Mancini, P. (2004). Comparing media systems: Three models of media and politics. Cambridge University Press.

Hampton, M. (2010). The fourth estate ideal in journalism history. In S. Allan (Ed.), The Routledge companion to news and journalism (pp. 3–12). Routledge.

Hanitzsch, T. (2011). Mapping journalism cultures across nations. Journalism Studies, 12(3), 273–293. https://doi.org/10.1080/1461670X.2010.512502

Hanusch, F. (2013). Sensationalizing death? Graphic disaster images in the tabloid and broadsheet press. European Journal of Communication, 28(5), 497–513. https://doi.org/10.1177/0267323113491349

Hanusch, F., Tandoc, E. C., Dimitrakopoulou, D., Rafter, K., Ramirez, M. M., Rupar, V., & Sacco, V. (2019). Transformations: Journalists’ reflections on changes in news work. In T. Hanitzsch, F. Hanusch, J. Ramaprasad, & A. S. Beer (Eds.), Worlds of journalism: Journalistic cultures around the globe (pp. 259–283). Columbia University Press. https://doi.org/10.7312/hani18642

Harambam, J., Bountouridis, D., Makhortykh, M., & Van Hoboken, J. (2019). Designing for the better by taking users into account: A qualitative evaluation of user control mechanisms in (news) recommender systems. Proceedings of the 13th ACM Conference on Recommender Systems, 69–77. https://doi.org/10.1145/3298689.3347014

Harambam, J., Hoboken, J., & Helberger, N. (2018). Democratizing algorithmic news recommenders: How to materialize voice in a technologically saturated media ecosystem. Philosophical Transactions of the Royal Society, 376(2133), 1–21. https://doi.org/10.1098/rsta.2018.0088

Hardy, J. (2008). Western media systems. Routledge. https://doi.org/10.4324/9780203869048

Hardy, J. (2014). Critical political economy of the media: An introduction. Routledge. https://doi.org/10.4324/9780203136225

Helberger, N. (2011). Diversity by design. Journal of Information Policy, 1, 441–469. https://doi.org/10.5325/jinfopoli.1.2011.0441

Helberger, N. (2019). On the democratic role of news recommenders. Digital Journalism, 7(8), 993–1012. https://doi.org/10.1080/21670811.2019.1623700

Helberger, N., Eskens, S. J., Drunen, M. Z., Bastian, M. B., & Möller, J. E. (2019, May). Implications of AI-driven tools in the media for freedom of expression. Ministerial Conference. Artificial Intelligence – Intelligent Politics: Challenges and opportunities for media and democracy, Cyprus. https://hdl.handle.net/11245.1/64d9c9e7-d15c-4481-97d7-85ebb5179b32

Helberger, N., Karppinen, K., & D’Acunto, L. (2018). Exposure diversity as a design principle for recommender systems. Information, Communication & Society, 21(2), 191–207. https://doi.org/10.1080/1369118X.2016.1271900

Hindman, M. (2017). Journalism ethics and digital audience data. In P. J. Boczkowski & C. W. Anderson (Eds.), Remaking the news: Essays on the future of journalism scholarship in the digital age (pp. 177–193). The MIT Press.

Hoeve, M., Heruer, M., Odijk, D., Schuth, A., & Rijke, M. (2017, August 27). Do news consumers want explanations for personalized news rankings? FATREC Workshop on Responsible Recommendation. https://doi.org/10.18122/B24D7N

Humprecht, E., & Esser, F. (2018). Mapping digital journalism: Comparing 48 news websites from six countries. Journalism, 19(4), 500–518. https://doi.org/10.1177/1464884916667872

Izvestiia. (n.d.). Polzovatel’skoe soglashenie. https://iz.ru/agreement.html

Jönsson, A. M., & Örnebring, H. (2011). User-generated content and the news: Empowerment of citizens or interactive illusion? Journalism Practice, 5(2), 127–144. https://doi.org/10.1080/17512786.2010.501155

Kaminski, M. E. (2019). The right to explanation, explained. Berkeley Technology Law Journal, 34(1), 189–219. https://btlj.org/data/articles2019/34_1/05_Kaminski_Web.pdf

Karimi, M., Jannach, D., & Jugovac, M. (2018). News recommender systems: Survey and roads ahead. Information Processing & Management, 54(6), 1203–1227. https://doi.org/10.1016/j.ipm.2018.04.008

Karlsson, M., & Clerwall, C. (2012). Patterns and origins in the evolution of multimedia on broadsheet and tabloid news sites: Swedish online news 2005–2010. Journalism Studies, 13(4), 550–565. https://doi.org/10.1080/1461670X.2011.639571

Karlsson, M., & Clerwall, C. (2013). Negotiating professional news judgment and “clicks”: Comparing tabloid, broadsheet and public service traditions in Sweden. Nordicom Review, 34(2), 65–76. https://doi.org/10.2478/nor-2013-0054

Karppinen, K. (2013). Rethinking media pluralism. Fordham University Press. https://doi.org/10.5422/fordham/9780823245123.001.0001

Kitchin, R. (2017). Thinking critically about and researching algorithms. Information, Communication & Society, 20(1), 14–29. https://doi.org/10.1080/1369118X.2016.1154087

Kukushkina, E., Mzhavanadze, G., & Perevalov, V. (2017). Russia. Privacy, Data Protection and Cybersecurity Law Review, 284–296.

Kuner, C. (2005). Privacy, security and transparency: Challenges for data protection law in a new Europe. European Business Law Review, 16(1), 1–8.

Kuner, C., Svantesson, D. J. B., Cate, F. H., Lynskey, O., & Millard, C. (2016). The language of data privacy law (and how it differs from reality). International Data Privacy Law, 6(4), 259–260. https://doi.org/10.1093/idpl/ipw022

Lee, E. J., & Tandoc, E. C. (2017). When news meets the audience: How audience feedback online affects news production and consumption. Human Communication Research, 43(4), 436–449. https://doi.org/10.1111/hcre.12123

Lehtisaari, K., Villi, M., Grönlund, M., Lindén, C. G., Mierzejewska, B. I., Picard, R., & Roepnack, A. (2018). Comparing innovation and social media strategies in Scandinavian and US Newspapers. Digital Journalism, 6(8), 1029–1040. https://doi.org/10.1080/21670811.2018.1503061

Li, T., & Unger, T. (2012). Willing to pay for quality personalization? Trade-off between quality and privacy. European Journal of Information Systems, 21(6), 621–642. https://doi.org/10.1057/ejis.2012.13

Macnamara, J. (2010). The 21st century media (r)evolution: Emergent communication practices. Peter Lang.

Makhortykh, M., & Bastian, M. (2020). Personalizing the war: Perspectives for the adoption of news recommendation algorithms in the media coverage of the conflict in Eastern Ukraine. Media, War & Conflict. https://doi.org/10.1177/1750635220906254

Makhortykh, M., & Wijermars, W. (2019, May 24). Can echo chambers protect information freedom? Algorithmic news recommenders and the public sphere in Eastern Europe [Paper presentation]. International Communication Association Conference.

Matos, C. (2009). Journalism and political democracy in Brazil. Lexington Books.

McBride, K., & Rosenstiel, T. (2013). Introduction: New guiding principles for a new era of journalism. In K. McBride & T. Rosenstiel (Eds.), The new ethics of journalism: Principles for the 21st century (pp. 1–6).

McDonald, A. M., & Cranor, L. F. (2008). The cost of reading privacy policies. I/S: A Journal of Law and Policy for the Information Society, 4, 540–565. https://core.ac.uk/download/pdf/159561828.pdf

McQuail, D. (2000). Some reflections on the Western bias of media theory. Asian Journal of Communication, 10(2), 1–13. https://doi.org/10.1080/01292980009364781

Mediascope. (2018). Chitatel’skaia auditoriia Rossii [Report]. Mediascope. https://mediascope.net/upload/iblock/5ce/NRS_2018_1.pdf

Meyer, P. (2009). The vanishing newspaper: Saving journalism in the information age. University of Missouri Press.

Mitchelstein, E., & Boczkowski, P. J. (2010). Online news consumption research: An assessment of past work and an agenda for the future. New Media & Society, 12(7), 1085–1102. https://doi.org/10.1177/1461444809350193

Möller, J., Trilling, D., Helberger, N., Irion, K., & de Vreese, C. H. (2016). Shrinking core? Exploring the differential agenda setting power of traditional and personalized news media. Info, 18(6), 26–41. https://doi.org/10.1108/info-05-2016-0020

Möller, J., Trilling, D., Helberger, N., & van Es, B. (2018). Do not blame it on the algorithm: An empirical assessment of multiple recommender systems and their impact on content diversity. Information, Communication & Society, 21(7), 959–977. https://doi.org/10.1080/1369118X.2018.1444076

Muhlmann, G. (2010). Journalism for democracy. Polity Press.

Newman, N. (2018). Journalism, media and technology trends and predictions 2018 [Report]. Reuters Institute for the Study of Journalism. https://reutersinstitute.politics.ox.ac.uk/sites/default/files/2018-01/RISJ%20Trends%20and%20Predictions%202018%20NN.pdf

Nissenbaum, H. (2011). A Contextual Approach to Privacy Online. Dædalus, 140(4), 32–48. https://doi.org/10.1162/DAED_a_00113

Nozal Cantarero, T., González-Neira, A., & Valentini, E. (2020). Newspaper apps for tablets and smartphones in different media systems: A comparative analysis. Journalism, 21(9), 1264–1282. https://doi.org/10.1177/1464884917733589

NRC Handelsblad. (n.d.). Onze journalistiek is ons product. Niet uw gegevens. https://www.nrc.nl/privacy/

Organisation for Economic Co-operation and Development. (2013). The OECD Privacy Framework. Organisation for Economic Co-operation and Development. http://www.oecd.org/sti/ieconomy/oecd_privacy_framework.pdf

Örnebring, H., & Jönsson, A. M. (2004). Tabloid journalism and the public sphere: A historical perspective on tabloid journalism. Journalism Studies, 5(3), 283–295. https://doi.org/10.1080/1461670042000246052

Pariser, E. (2011). The filter bubble: What the Internet is hiding from you. Penguin Press.

Pasquale, F. (2015). The black box society: The secret algorithms that control money and information. Harvard University Press.

Penney, J. (2017). Internet surveillance, regulation, and chilling effects online: A comparative case study. Internet Policy Review, 6(2). https://doi.org/10.14763/2017.2.692

Petre, C. (2015). The traffic factories: Metrics at chartbeat, gawker media, and the New York Times [Report]. The Tow Center for Digital Journalism. https://www.cjr.org/tow_center_reports/the_traffic_factories_metrics_at_chartbeat_gawker_media_and_the_new_york_times.php

Rossiiskaia Gazeta. (n.d.). Polzovatel’skoe soglashenie o razmeshenii kommentariev i inoi informatsii pol’zovatelei na internet-portale “Rossiiskoi gazety”. https://rg.ru/useragreement/

Rotaru, V. (2018). Forced attraction? How Russia is instrumentalizing its soft power sources in the “near abroad”. Problems of Post-Communism, 65(1), 37–48. https://doi.org/10.1080/10758216.2016.1276400

Sørensen, J. K., & Van den Bulck, H. (2020). Public service media online, advertising and the third-party user data business: A trade versus trust dilemma? Convergence, 26(2), 421–447. https://doi.org/10.1177/1354856518790203

Sørensen, J. K., & Hutchinson, J. (2018). Algorithms and public service media. In G. Lowe, H. Van den Bulck, & K. Donders (Eds.), Public service media in the networked society (pp. 91–106). Nordicom.

Starr, P. (2005). The creation of the media: Political origins of modern communication. Basic Books.

Steedman, R., Kennedy, H., & Jones, R. (2020). Complex ecologies of trust in data practices and data-driven systems. Information, Communication & Society. https://doi.org/10.1080/1369118X.2020.1748090

Steinfeld, N. (2016). “I agree to the terms and conditions”: (How) do users read privacy policies online? An eye-tracking experiment. Computers in Human Behavior, 55, 992–1000. https://doi.org/10.1016/j.chb.2015.09.038

Stohl, C., Stohl, M., & Leonardi, P. M. (2016). Managing opacity: Information visibility and the paradox of transparency in the digital age. International Journal of Communication, 10, 123–137. https://ijoc.org/index.php/ijoc/article/view/4466

Stoycheff, E. (2016). Under surveillance: Examining Facebook’s spiral of silence effects in the wake of NSA Internet monitoring. Journalism & Mass Communication Quarterly, 93(2), 296–311. https://doi.org/10.1177/1077699016630255

Strömbäck, J., Tsfati, Y., Boomgaarden, H., Damstra, A., Lindgren, E., Vliegenthart, R., & Lindholm, T. (2020). News media trust and its impact on media use: Toward a framework for future research. Annals of the International Communication Association, 1–18. https://doi.org/10.1080/23808985.2020.1755338

Sunstein, C. R. (2017). #Republic: Divided democracy in the age of social media. Princeton University Press.

Toepfl, F., & Litvinenko, A. (2018). Transferring control from the backend to the frontend: A comparison of the discourse architectures of comment sections on news websites across the post-Soviet world. New Media & Society, 20(8), 2844–2861. https://doi.org/10.1177/1461444817733710

van Dijck, J., Poell, T., & De Waal, M. (2018). The platform society: Public values in a connective world. Oxford University Press. https://doi.org/10.1093/oso/9780190889760.001.0001

Vartanova, E. (2012). The Russian media model in the context of post-Soviet dynamics. In D. Hallin & P. Mancini (Eds.), Comparing media systems beyond the Western world (pp. 119–142). Cambridge University Press.

Vartanova, E., & Lukina, M. (2017). Russian journalism education: Challenging media change and educational reform. Journalism & Mass Communication Educator, 72(3), 274–284. https://doi.org/10.1177/1077695817719137

Wachter, S., Mittelstadt, B., & Floridi, L. (2017). Why a right to explanation of automated decision-making does not exist in the general data protection regulation. International Data Privacy Law, 7(2), 76–99. https://doi.org/10.1093/idpl/ipx005

Wachter, S., Mittelstadt, B., & Russell, C. (2017). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. Harvard Journal of Law & Technology, 31(2), 841–887. https://jolt.law.harvard.edu/assets/articlePDFs/v31/Counterfactual-Explanations-without-Opening-the-Black-Box-Sandra-Wachter-et-al.pdf

Wang, G. (2011). De-Westernizing communication research: Altering questions and changing frameworks. Routledge. https://doi.org/10.4324/9780203846599

Whitman, J. (2003). The two Western cultures of privacy: Dignity versus liberty. Yale Law Journal, 113, 1151–1222. https://doi.org/10.2307/4135723

Willnat, L., Weaver, D. H., & Choi, J. (2013). The global journalist in the twenty-first century: A cross-national study of journalistic competencies. Journalism Practice, 7(2), 163–183. https://doi.org/10.1080/17512786.2012.753210

Wilson, S., Schaub, F., Dara, A., Liu, F., Cherivirala, S., Leon, P., Andersen, M., Zimmeck, S., Sathyendra, K., Russell, C., Norton, T., Hovy, E., Reidenberg, J., & Sadeh, N. (2016). The creation and analysis of a website privacy policy corpus. Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, 1330–1340. https://www.aclweb.org/anthology/P16-1126.pdf

Ziewitz, M. (2016). Governing algorithms: Myth, mess, and methods. Science, Technology, & Human Values, 41(1), 3–16. https://doi.org/10.1177/0162243915608948

Zuiderveen Borgesius, F. J., Trilling, D., Möller, J., Bodó, B., Vreese, C. H., & Helberger, N. (2016). Should we worry about filter bubbles? Internet Policy Review, 5(1). https://doi.org/10.14763/2016.1.401

Footnotes

1. In this article, we treat ANRs as a class of recommender systems that are utilised by the news media “to filter incoming streams of information according to the users’ preferences or to point them to additional items of interest in the context of a given object” (Karimi et al., 2018, p. 1203). For more information about ANRs and the effects that their deployment can have on the public sphere and media functions, see Bastian et al. (2019), Bodó et al. (2019), Harambam et al. (2018, 2019), Helberger (2019), Möller et al. (2018), and van Drunen et al. (2019).

2. For some exceptions, see Bastian and Helberger (2019), Makhortykh and Wijermars (2019), Makhortykh and Bastian (2020), Sørensen and Hutchinson (2018), and Van den Bulck and Moe (2018).

3. For the public in particular, see the results of Eurobarometer (2019), which indicate a considerable increase in public awareness of the authorities’ responsibility for protecting their data rights, compared with 2015.

4. Specifically, three quarters of the surveyed media organisations actively use or plan to start using artificial intelligence techniques to provide better content recommendations to their users.

5. These concerns are particularly high in the context of privacy policies, which are rarely read by users (see, for instance, McDonald & Cranor, 2008; Nissenbaum, 2011; Steinfeld, 2016).

6. The Convention will enter into force either when all parties have ratified it or on 11 October 2023 if there are 38 parties to the Protocol by that date. Currently (i.e., in October 2020), the Convention has been ratified by eight parties (Council of Europe, n.d.).

7. Interestingly, the attitudes of journalists towards media laws and regulations differ in the three types of journalism cultures: Hanitzsch et al. (2011) showed that a large proportion (42.1%) of Russian journalists consider media laws to be very or extremely influential, whereas Brazil scored 32.5%, and the Netherlands scored as low as 11.4%. However, at that time, the GDPR had not yet been introduced, and its presence might have led to different results.


Cryptoparties: empowerment in internet security?


Acknowledgements

I am deeply indebted to all the generous people who took the time to explain to me what Cryptoparties are about and corrected my many misconceptions. I hope this article does justice to their perpetual work to make cyberspace a better place for all of us.

Introduction

This paper starts from the assumption that understanding the governance of networked technologies and related societal values such as privacy and security requires us to go beyond a focus on legal and institutional aspects. Indeed, a more comprehensive understanding of the politics of the internet ‘requires unpacking the micro-practices of governance as mechanisms of distributed, semi-formal or reflexive coordination, private ordering, and use of internet resources’ (Epstein, Katzenbach, & Musiani, 2016, p. 4). The success of privacy and security is not only determined by legislation but is heavily dependent upon the use of individual practices to counter surveillance and retain privacy (Bauman et al., 2014; Marx, 2015; Bellanova, 2017). However, these practices are often perceived as highly complex, and end users are often hesitant about using seemingly complicated tools. The focus of this article lies on one such practice – cryptoparties – which attempts to combat these anxieties and teach privacy and security tools to the layperson. In doing so, this paper offers a valuable empirical study into a largely unknown phenomenon (but see Kannengießer, 2019) and corroborates previous pleas in this journal (Epstein, Katzenbach & Musiani, 2016) that more attention must be paid to micro-practices.

Cryptoparties (CPs) are not actually ‘parties’ but rather open meetings where individuals can get the help they need to improve their digital privacy and security. These meetings happen all around the world, mostly in public spaces such as cafes or universities. CPs originated in 2012 in Australia, but today most of them occur in Europe (CryptoParty, 2013, p. 13). Sigrid Kannengießer summarises the rationale of these activities as practices that ‘aim to empower ordinary people by on the one hand informing them about critical aspects of datafication processes and on the other hand enabling them to engage with their digital media technologies, encrypt those and online communication processes’ (Kannengießer, 2019, p. 12; see Loder, 2014, p. 814). CPs create an entry point for studying everyday practices and how they fit into wider political debates. In the case of CPs, we can observe how mundane practices such as choosing a secure browser or implementing better passwords are key to enacting privacy. Being a ‘mechanism of civic engagement’, CPs qualify as a practice of internet governance under the definition of Epstein and colleagues (Epstein, Katzenbach, & Musiani, 2016, p. 7).

This article brings the political significance of CPs into the spotlight and explains the role of CPs in the broader development of privacy and security controversies. Political science has been rather silent on the topic of CPs, with most studies focusing more broadly on encryption and internet governance (Herrera, 2002; Monsees, 2020; Schulze, 2017; Myers West, 2018). Shifting attention to the practices of CPs also allows the conceptual focus to move away from institutions and towards more mundane and decentred practices.

I demonstrate throughout the article how CPs enact a diffuse kind of security politics where neither threat nor countermeasures work through one central institution but through mundane, decentred practices (Huysmans, 2016). In the following section, I draw on more recent contributions in the field of internet governance and international relations in order to argue that a sensitivity towards mundane practices is crucial for understanding the creation of internet security and privacy. The empirical study is mainly based on participant observation, the methodology of which I lay out in the second section. The main part of the paper presents the results of this empirical study. I argue that the specific format of cryptoparties allows them to teach relevant privacy tools and adapt to both the abilities of end users and a changing socio-technical environment. I demonstrate how CPs themselves are decentred and can adapt over time: CPs enact a decentred threat scenario that focuses less on institutions and more on individuals and their needs. The paper therefore provides novel empirical insights while at the same time showing how a shift in perspective to mundane security practices can enrich the study of internet governance.

Internet governance: on the political significance of decentred practices

Internet governance (IG) is usually analysed as a form of ‘multistakeholderism’, which is defined as:

two or more classes of actors engaged in a common governance enterprise concerning issues they regard as public in nature, and characterized by polyarchic authority relations constituted by procedural rules (Raymond & DeNardis, 2015, p. 573; see also Hofmann, 2016).

Private entities, NGOs and hybrid organisations such as ICANN, in addition to national governments, are all involved in the governance of the global infrastructure that constitutes the internet (DeNardis, 2010; Mueller, 2010). However, in line with more recent contributions that try to broaden the empirical and analytical focus of IG, I demonstrate in this paper the value of looking at less institutionalised practices. 1 For example, a special issue of this journal has shown the need to focus on ‘doing internet governance’ (Epstein, Katzenbach, & Musiani, 2016). Much IG research remains on the institutional level, which ‘largely overlooks the mundane practices that make those institutions tick, thus leaving important blind spots in both conceptual and substantive understanding of the practices and power arrangements of IG’ (Epstein, Katzenbach & Musiani, 2016, p. 4). Taking it even further, van Eeten and Mueller argue that the constitution of the field of IG creates systemic blind spots: the specific boundaries between IG and other fields limit its scope and prevent deeper engagement with those fields. There is ‘a tendency to think of governance as being produced by, or taking place in, formal organizations with explicitly institutionalized rules and procedures’ (van Eeten & Mueller, 2012, p. 727). IG is, then, always linked to institutional settings in which IG is ‘explicitly the topic of discussion’ (ibid.). Limiting the analytical focus in this way leads to a biased understanding of these formal structures and underestimates the political significance of informal practices (van Eeten & Mueller, 2012, p. 730). Consequently, turning analytical attention to seemingly insignificant practices gives us a more thorough understanding of how ‘privacy’ or ‘security’ are enacted (Christensen & Liebetrau, 2019). As I will argue throughout the article, we can then see how decentred practices are, in fact, politically significant.

Cryptoparties are not anchored in one central organisation but are rather a decentralised, global form of technological activism and education. My article provides both a much-needed empirical study of CPs and an illustration of the value in expanding the scope of IG. I investigate how activists and citizens come together and how knowledge about privacy and security, core aspects of IG, circulates and is put into practice. Such a ‘bottom-up perspective focuses on the mutual adjustments we make in our daily social life’ (Hofmann, Katzenbach, & Gollatz, 2016, p. 1414), thereby illustrating the ordering effects of ‘day-to-day practices that organize our social lives’ (idem). Such a shift in perspective illuminates how, for example, ‘security’ results from a multiplicity of practices, actors and technologies, and not solely from governmental institutions and legal regulations (Hofmann, Katzenbach, & Gollatz, 2016).

Mikkel Flyverbom expanded on these insights by drawing on the field of science and technology studies. For him, a crucial issue that has been neglected by IG concerns ‘the entanglement of technology and social practices and the ordering effects of processes of digitalisation and datafication’ (Flyverbom, 2016, p. 2). Flyverbom argues that an understanding of regulation as ‘institutionalised, deliberate and goal-oriented interventions by public or private actors’ is too narrow. Indeed, the de facto enactment of ‘bigger’ issues such as privacy and security is not only a result of governance efforts but also largely relies on individual actions. The political significance of privacy and security lies not only in a particular institutional set-up but in the micro-practices of individuals (Solomon & Steele, 2016; Isin & Ruppert, 2015). These practices are important for maintaining security but also shape the insecurities people experience (Guillaume & Huysmans, 2013; Selimovic, 2019). Mundane practices such as maintaining firewalls or spam filters are as important as legal regulations when it comes to securing networked technology. These insights motivate the scope of this article in analysing informal, decentralised practices that shape privacy and security. The objective of this article is not to weigh the relative importance of institutionalised vs non-institutionalised practices. The aim is to provide a more thorough understanding of how the actions of users are shaped by more than formal rules and legislation.

Methodology and description of the field

From my theoretical discussion on the importance of a bottom-up perspective on decentred practices, it follows that I needed a methodological toolkit that allowed me to capture these practices. I combined document analysis, participant observation and informal interviews (Gillespie & Michelson, 2011). The research followed a qualitative-interpretive research design (Schwartz-Shea & Yanow, 2012; Jackson, 2011, ch. 6). In this project, I first wanted to get a better understanding of the practice of CPs. Hence, I was interested in what kind of meaning the participants themselves ascribe to CPs and how they evaluate the experience. This idea of meaning-making is core to a qualitative-interpretive research design (see Franke & Roos, 2013). With this in mind, the particular methods of participant observation and data analysis allowed me to get a clear understanding of the participant perspective.

According to Gillespie and Michelson, participant observation is a valuable, if often overlooked, method for political science (Gillespie & Michelson, 2011, p. 261). Depending on the research objective, the researcher can be more of an ‘observer’ or more of a ‘participant’ (Gillespie & Michelson, 2011, p. 262; see also Schwartz-Shea & Yanow, 2012, pp. 63-67). I remained more on the observing end of the spectrum but offered my opinions or knowledge within small groups on a few occasions. Observing allowed me to listen in on multiple small group discussions and approach participants for short, informal interviews. This open research logic was an ideal fit for my project as well as for the environment in which I conducted my research (for a broader discussion on how to adapt methods for a specific research context, see Leander, 2017). Being open to the views of participants and having the option of unstructured conversation are valuable aspects of participant observation. The choice of method allowed me to understand what the participants deemed important, without disturbing their activities. The combination of participant observation and interviews allowed me to understand CPs as they unfolded while at the same time gaining more knowledge about the background of CPs through my more detailed questions.

Since this research did not receive financial support, time and budget constraints limited the sites I could access. I contacted organisers in different places in Germany (and one location in Denmark) who conducted CPs in summer and fall 2019. Once I received a positive reply, I attended the CP, making clear to everyone that I was a researcher and letting everybody know about my intentions. Because participants were concerned about privacy and because photography and filming are specifically forbidden during CPs, I made it clear from the beginning that I would not record anything and that all informants would remain anonymous. In general, the organisers welcomed me and were open to answering my questions. I was able to observe the CPs and ask questions. I also had the chance to ask more detailed questions before the CPs and at one meeting where the next CP was planned. This allowed me to get background information about the organisers, their views on CPs and how they developed.

I attended three CPs, which took place in two different cities, and one planning meeting for a CP. 2 The meetings lasted between one and four and a half hours and took place between July and November 2019. 3 I took only written notes while attending the meetings. Since my interviewees were sensitive to privacy, I was unable to record any interviews, but I took detailed notes which I expanded immediately after the end of the CPs. These field notes form the basis of the results described below. This accounts for the scarcity of direct quotes in the presentation of the results: I only use them when they come from written documents or when I was able to write the statement down completely during the CP (see Emerson, Fretz, & Shaw, 2001). Further documents, mainly from the cryptoparty wiki page used for organising (see below for more on the role of that wiki), corroborated my results.

The number of participants at the CPs I attended ranged from zero 4 to fifteen, which seems to be within the normal range of up to 20 people. Participants and organisers are predominantly male, except for CPs that are deliberately organised for women, transgender and non-binary persons. Participants also varied in age from their 20s to 50s. 5 Some participants were open about their left-leaning politics and declared that their motivation to participate in CPs was rooted in their activism. All of my interview partners stated that only very few people attend more than one CP and rarely more than two or three. Based on my observation, most new participants had only very basic knowledge about internet safety. Especially at the CP targeted towards the LGBTQ+ community, the participants were open about being rather overwhelmed by the complexity of internet security (for an exploration of gender stereotypes in the hacker scene, see Tanczer, 2016). They also described a feeling of anxiety and the need to ‘start somewhere’ because they lacked an overview of possible threats. 6

The conduct of cryptoparties

An Australian woman working under the pseudonym of Asher Wolf initiated the first CP out of an interest in digital privacy (Poulsen, 2014; Radio Free Europe, 2012). It is interesting to note that she was not a ‘hacker’ or an expert but started the movement out of an interest in learning about privacy and security practices. Today, CPs are organised around the world with the most regular parties occurring in Europe (CryptoParty, 2019d). CPs do not rely on one centralised organisation. Many are organised by people who were previously active in the hacker scene, and most organisers work in IT.

One can generally distinguish between two kinds of CPs. Many CPs are organised by activists and advertised on a wiki, the central website that gathers all information about how to organise a CP (CryptoParty, 2019b). These CPs often recur on a monthly or bi-weekly basis in the same venue, typically a public space such as a cafe, cultural centre or hackerspace. However, some are conducted by political parties, interest groups, academic conferences or other types of independent organisations. These CPs are not publicised to potential participants through the main website but through the specific network of that organisation.

The CPs themselves differ in their particular way of ‘teaching’ technological tools. One CP might provide mostly one-on-one tutorials, another might split the whole group into smaller discussion groups, while yet another focuses on one specific theme taught through a lecture-style presentation. CPs often start with a round of introductions during which everybody states what they can teach or what they need help with. While the organisers are usually in the position to provide expertise, the role of the ‘teacher’ is not fixed in advance. The round of introductions develops a sense of which issues are most important to the participants, allowing small groups to form around their interests. Sometimes a participant lets an organiser know in advance if they need help with something specific. An organiser will then help with that issue. The small groups cover a range of issues, from basic ‘safe surfing tools’ to a more abstract introduction into ‘how the internet works’ to detailed explanations of email encryption or of how to use programmes such as Tor or Tails 7 to protect anonymity to a larger degree. Participants learn, for example, about add-ons like ‘https-everywhere’ or about the advantages of certain web browsers when it comes to privacy. The organisers call this ‘digital self-defence’ or the development of a ‘security culture’. 8 A CP lasts a few hours, and the atmosphere is very informal and relaxed, allowing participants to ask their questions and raise specific concerns.
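
To give a sense of what ‘protecting anonymity to a larger degree’ can involve in practice, the following minimal sketch routes a web request through Tor from Python. It is an illustration of the kind of tool discussed at CPs rather than a recommendation of a particular setup, and it assumes a local Tor client already listening on its default SOCKS port (9050) and the requests library installed with SOCKS support.

import requests

# Assumes a local Tor client on its default SOCKS port (9050) and 'requests'
# installed with SOCKS support (pip install "requests[socks]").
# The 'socks5h' scheme resolves DNS through Tor as well, not locally.
tor_proxies = {
    "http": "socks5h://127.0.0.1:9050",
    "https": "socks5h://127.0.0.1:9050",
}

# check.torproject.org reports whether the request arrived via a Tor exit node.
response = requests.get("https://check.torproject.org/", proxies=tor_proxies, timeout=30)
print("Connected via Tor" if "Congratulations" in response.text else "Not using Tor")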

My fieldwork shows that certain ideas are commonly mentioned (e.g., 100% security is impossible) and that certain tools are frequently taught (e.g., selecting a safe browser for surfing the internet). These commonalities go back, at least in part, to a code of conduct which is published on the central wiki (CryptoParty, 2019a). All interview partners referred (at least implicitly) to the Code of Conduct. At two of the CPs that I attended, the Code of Conduct was explained at the beginning. 9 The Code specifies that harassment is not tolerated and that CPs should be open to the public. However, there are also more specific rules such as ‘Other People's Keyboards Are Lava - Don't touch anyone's keyboard, but your own’ (CryptoParty, 2019a). This rule is based on the pedagogical insight that participants learn more if they have to do everything on their own. For privacy reasons, it is also considered a bad habit to use other people's devices.

The politics of cryptoparties

Diffused politics

The previous section outlined the conduct of CPs in detail. In this section, I will detail specific aspects of CPs to analyse how we can understand these activities as politically significant practices that are relevant to internet governance.

CPs first developed in response to a very specific controversy around Australian legislation but later spread in the context of global controversy about commercial and state-led mass-surveillance (Poulsen, 2014). They saw renewed interest in the aftermath of the Snowden revelations. Indeed, some of my informants told me that they held CPs with several hundred participants immediately after the Snowden revelations. Today, internet security and privacy are part of most people’s daily routines: entering passwords, shielding cameras, and deleting cookies are just a few of the most relevant practices. We can see that the context of CPs evolved from a very particular concern with a piece of legislation to a more diffuse understanding of where the problem lies, including the realisation that there is no “one-size-fits-all” recipe to pick the best privacy tool. One organiser illustrated this by emphasising that every participant has different needs when it comes to security measures 10 and that one needs to develop a security culture. 11 Security culture refers to the idea that security is always situational and is always both affecting other people and affected by their actions (see the discussion on relationality in Ermoshina & Musiani, 2018). My informants mentioned a variety of examples: needing help with data protection before travelling to China, needing help with a hacked Facebook account 12 or just needing to ‘start somewhere’ with thinking about personal security. 13 This corroborates previous research highlighting ‘how understandings of “good” encryption, security and privacy emerge [...] more often than not, in a non-academic and bottom-up fashion’ (Musiani & Ermoshina, 2017, p. 54).

What is striking, however, is that very specific legislation and events are rarely mentioned. For example, I expected the Snowden revelations to be a core event for most participants, but when asked about it, they said that they were either already active at the time or only joined the CPs later. 14 A consistent reason given for activism at the CPs and the two hackerspaces I visited was a general concern that politicians, on the whole, do not have much tech expertise. It is striking that the concern seems to focus on politicians as a group and the general political context but not on specific people or events. The desire to learn about technological tools is thus motivated by the larger societal context rather than a reaction to a distinct experience. There is an observable set of diffused controversies and threat-scenarios around surveillance and privacy (for a discussion on the role of dispersion in surveillance society, see Huysmans, 2016). CPs react to a type of ‘political situation’ (Barry, 2012) in which digital practices of internet security and privacy become a matter of concern. Importantly, this political situation is not characterised by only one particular problem, but by a constellation of security issues: government surveillance, data collection by private companies, phishing and targeted hacking attacks. CPs are a result of and deeply embedded in public controversies revolving around internet security, privacy and the roles of both global ICT companies and secret services.

The relevance of CPs for understanding internet governance lies in the way they illuminate the importance of mundane practices (and not only top-down steering) in the enactment of privacy and security on a broad scale. As discussed in the conceptual section of this paper, a bottom-up perspective gives us a more thorough understanding of the role CPs play in enacting a specific understanding of security and privacy. Activists and experts alike acknowledge that users need to account for their own personal threat-scenario. There are thus no universally ideal technologies or practices, only solutions appropriate to each individual situation (see Musiani & Ermoshina, 2017, p. 69; Ermoshina & Musiani, 2018). Internet security and privacy are hence not only seen as a function of legal regulation but also as something that needs to be established anew by each individual in every situation. It also becomes clear that CPs are spaces in which diffused politics are enacted. Rather than constructing a centralised threat-scenario (the state! Facebook!), what emerges is a diffuse and decentred image of both the prevailing threats and their solutions.

‘Experts’ and ‘participants’

One core issue for CP organisers is the relationship between those that teach tools during CPs and those that seek to learn them. The general idea underlying CPs is that anybody can organise one, and my conversations with the organisers revealed that they would prefer this to be the case. In practice, very few people organise CPs, and they tend to be the ones with expertise in the field. This is relevant since the initial intention for CPs was not that a few ‘experts’ teach non-experts but that citizens come together in order to learn together. Asher Wolf, the founder of CPs, was not an expert herself. Those who teach tools are called ‘angels’ (CryptoParty, 2019c), a term that emphasises their helpful, friendly manner rather than characterising them as ‘geeky’ experts. Currently, CPs are not as egalitarian as originally imagined. In reality, the organisers guide participants through the implementation and use of technology, sometimes even in a lecture-style format. 15 On a more fundamental level, founder Asher Wolf quit CPs because of the persistent misogyny that, she felt, devalued the perspectives of women and laypeople (mati & Wolf, 2012). Less drastically, Kannengießer observes that ‘there are strong hierarchies persisting between “teachers” and “students”’ (Kannengießer, 2019, p. 7). The tension between the ideal of a self-organised communal effort and the actual practice of learning in more hierarchical ways is crucial for understanding the rationale of CPs.

Whereas CPs cannot function without some kind of hierarchy, the organisers explicitly work against their status in order to create an open space, resisting the tendency CPs have of defaulting to experts. One informant told me he deliberately intends cryptoparties to ‘not look too professional’. 16 Another man, explaining the tenets of email encryption to a small group of people, fostered a discussion by deliberately limiting his lecturing. 17 One episode from the last CP I attended illustrates nicely how the original idea of CPs as a communal space of learning persists: a woman who had only attended a few CPs and otherwise did not have much prior knowledge announced in the opening round that she would be leading a small group on some issues she was familiar with. She said she could teach how to create secure passwords and some basic knowledge about how the internet works. In her own words, this was ‘pure empowerment’. 18 This episode occurred at a CP that was only open to women, trans and non-binary people, and contributes anecdotal evidence that CPs provide an open environment for people from all kinds of backgrounds. CPs point us towards new ways to transfer knowledge and the diffusion of ‘expert’ knowledge.

CPs also demonstrate how political knowledge and issues circulate in the public. Issues concerning private cyber security and privacy are not only negotiated via governmental institutions and legal regulations. Decentred practices such as CPs, which focus on the everyday practices of individuals and the knowledge, tools and technologies they use, are equally important. The organisational methods of CPs might prefigure future activism in a technological society where expert knowledge is often required. Traditional forms of citizen engagement might be less equipped to offer a dynamic and personalised learning environment, while CPs offer all participants opportunities to receive knowledge tailored to their own needs and personal habits.

In the previous section, I showed how CPs contribute to the emergence of decentred threat-scenarios and simultaneously offer a solution. In this section, I looked at how ideas about relational risks also feature in how CP participants relate to each other. Again, the political relevance of CPs does not primarily lie in the way in which they feed back into governmental decision-making processes or their impact on new legal regulation. Rather, their relevance lies in the way they ‘[embed] concepts such as security and privacy’ (Ermoshina & Musiani 2018, p. 18) in a wider context and thereby influence both our perception of these concepts and the practices we deem appropriate in enacting them. The next section zooms in on the issue of privacy.

Cryptoparties without encryption?

While privacy is consistently one of the core goals of CPs, the practices and tools used to achieve and improve it have changed over time. In order to understand the political significance of these technological changes, a short detour into the history of privacy and encryption is necessary.

Cryptoparties, as the name suggests, originally revolved mainly around encryption. PGP (Pretty Good Privacy; also GnuPG) is the traditional way to encrypt email, based on strong public-key cryptography. 19 However, PGP was not always freely available, since the US government tried to constrain its use and export. The objective in regulating encryption is to determine who has access to what kind of information. Cryptography was first a military technology, but its applications multiplied with the emergence of the internet (Kahn, 1997; Singh, 2000). Governments around the world, but especially the US, tried to regulate the use of strong encryption (Diffie & Landau, 2007). Crypto Wars is the umbrella term for the controversies around who gets to decide what kind of encryption is available for public use (FindLaw Attorney Writers, 2012; Levy, 1994). The primary question in these early debates was whether encryption should be strong enough to prevent government access to digital communication (Diffie & Landau, 2007; Levy, 2002). Diana Saco has shown that activists fighting for stronger encryption were part of a libertarian hacker scene that was interested in keeping the state out of the internet (Saco, 2002). Ultimately, the use and spread of email encryption programmes such as PGP were legalised. Even though regulations loosened, the hopes of activists for more widespread usage of encryption by end users did not materialise (Diffie & Landau, 2007). Today, there is renewed interest in encryption and data protection due mainly to the revelations by Edward Snowden (Schulze, 2017). 20 The resulting debates and pressure by end users have led to the establishment of new products and services such as encrypted messaging services.
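
For readers unfamiliar with the principle behind PGP, the following minimal sketch illustrates public-key encryption in Python using the third-party cryptography package (an assumption of this illustration; PGP/GnuPG itself layers key management, signatures and a web of trust on top of this basic idea): anyone can encrypt with a published public key, but only the holder of the matching private key can decrypt.

from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# The recipient generates a key pair and publishes only the public key.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Anyone can encrypt a message with the public key ...
ciphertext = public_key.encrypt(
    b"see you at the cryptoparty",
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)

# ... but only the holder of the private key can decrypt it.
plaintext = private_key.decrypt(
    ciphertext,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)
assert plaintext == b"see you at the cryptoparty"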

This shift to a broader concern with privacy and data protection mirrors the conduct of CPs. Whereas early CPs focused heavily on email encryption and the use of PGP as an end user solution (CryptoParty, 2013), participants in the CPs I visited showed concern for a variety of vulnerabilities: the collection of data by corporate actors, secure internet banking and targeted hacking attacks. As discussed in the previous two sections, this shift dovetails with the way in which CPs enact decentred threat-scenarios. The changing societal context goes hand in hand with the changing availability of products. But there also seems to be a more technical reason why PGP and email encryption are no longer a core privacy technology. If used incorrectly, PGP can be harmful, and hence it is not always taught at CPs. Indeed, email encryption no longer constitutes the main part of CPs. Only two organisers still consider PGP the best (and only) tool to send email securely. According to them, despite PGP being complicated, it is still the most valuable tool for privacy in a digital environment. Even hackers consider PGP too complicated (Whitten & Tygar, 1999). During my visits to two hackerspaces, only one person claimed to use PGP on a regular basis. Email encryption, while still considered to be crucial, is no longer central to debates on internet security. Rather, it is now included with other security and privacy tools that allow, for instance, for private messaging or anonymous browsing. Current discussions on email security focus on methods of server security and data mining by email providers. The original focus of CPs on PGP and email encryption has almost vanished, allowing for a more diffuse set of tools.

As a result, it becomes clear that CPs and their core idea have evolved considerably over time. Not only has the technological environment changed, the organisers have also learned more about how end users can implement these tools. Several informants told me that over time they realised that teaching email encryption is too complicated and therefore decided to drop this tool and focus on other technologies. They learned from past experience and adapted their CPs. At the same time, the political context and dominant technologies have changed. Today, users achieve privacy not mainly by personally encrypting their data, but by choosing services (social networks, messaging) that provide more privacy. In sum, we can see how technological change, societal change and individual learning processes alter CPs and the tools they teach.

This section showed how the political situation, the assessment of encryption technology and the diverse needs of participants all require comprehensive tools to enhance privacy and security. In contrast to earlier battles in the 1990s, the current issues can no longer be understood as a simple controversy about one particular technology such as PGP. Coming back to the insights from the first empirical section, we can see how learning and adapting to an ever-changing political and technological landscape is a core feature of CPs.

Conclusion

The present article is one of the few empirical studies on cryptoparties to date. The distinct focus on CPs as a political practice allowed for a better understanding of a phenomenon that has so far received scant academic and public attention. Based on participant observation of three CPs, informal interviews and additional document analysis, the study showed how CPs teach multiple tools to enhance privacy and prevent surveillance. The organisers of CPs strive for an egalitarian space for teaching and learning. Even though this goal is not always achieved, CPs can still serve as an example of citizen education in a technological society where every citizen needs to deal with complex technological issues.

On a more conceptual level, this paper contributed to the emerging debate on ‘doing internet governance’. Drawing on previous research which identified the need to look at micro-practices, I argued for the political relevance of CPs. Even though these practices might not have a direct impact on legislation, they are still politically relevant. I demonstrated that CPs work well in a political situation that is characterised by diffuse threats. Cyber threats do not only originate from centralised, top-down dynamics but might originate from a multiplicity of spaces and agents (states, hackers, private companies). I showed how CPs are able to react to this decentred threat-scenario by adapting the tools they teach. Indeed, encryption was a core issue of legal battles in the 1990s, and threats to regulate it are still present in the current discourse (Schulze, 2017). Cryptoparties started with a strong focus on teaching email encryption, but my empirical observation revealed that current CPs focus on a multiplicity of issues. This shift coincides with observable technological changes. Presently, encryption is much more likely to be embedded as part of other tools. The focus is less on email encryption alone (as it was in the controversies of the 1990s) and more on how encryption can be part of, for instance, messaging tools. Indeed, encryption is only one part of the solution when thinking about safe surfing, private messaging or protecting one’s anonymity.

This also means that a narrow focus on institutional aspects and legal regulations might miss the security and privacy maintenance done by end users on an everyday basis. Understanding this change in the de facto use of tools and their spread requires the study of mundane practices of end users. The focus on the practice of CPs revealed the importance of the idea of establishing a ‘security culture’. For the organisers, the aim is not only to teach specific tools but to increase awareness of the multiple vulnerabilities that users might encounter. The organisers want to show how a higher level of security is possible. Some participants were scared and overwhelmed, prompting the organisers to teach simple tools that still help to increase privacy and security. In line with previous research, it becomes clear that the idea is not to teach tools that establish security once and for all but to make participants aware of their own threat-model and the multiplicity of adversaries (see also Ermoshina & Musiani, 2018). This became especially visible in the small groups that discussed ‘how the internet works’. Rather than teaching one specific tool, the idea was to increase knowledge about technology and create awareness of one’s specific threat-model.

This speaks to a similar observation William H. Dutton has made about the need for a ‘security mindset’. According to him, ‘In cyber security, the risks are more difficult to communicate, given the multiplicity of risks in particular circumstances’ (Dutton, 2017, p. 3), requiring us to rethink how to communicate about these threats. In the cyber context, the threats are more diffuse and often not directly felt. The core task is, then, to develop a ‘mindset’ about beliefs, attitudes and values concerning cyber security. While I do not think that Dutton’s solution of using PGP everywhere is attainable, for the reasons described above, his plea for more encompassing research and policies for sensitising end users is certainly valid. Both future policies and citizen engagement practices can learn from CPs when negotiating the difficult terrain of teaching complex technologies in a political situation where threats to privacy and intrusion come from everywhere. The openness and adaptability of CPs are certainly helpful in an environment characterised by high complexity. CPs that focus on women, transgender and non-binary participants are especially able to create an open environment where a diverse ensemble of laypeople feel welcome. Mirroring these insights, it becomes clear that the conduct and the study of internet governance encompass micro-practices and their evolution, and increasingly move beyond a focus on institutions.

References

Barry, A. (2012). Political situations: Knowledge controversies in transnational governance. Critical Policy Studies, 6(3), 324–336. https://doi.org/10.1080/19460171.2012.699234

Bauman, Z., Bigo, D., Esteves, P., Guild, E., Jabri, V., Lyon, D., & Walker, R. B. J. (2014). After Snowden: Rethinking the impact of surveillance. International Political Sociology, 8(2), 121–144. https://doi.org/10.1111/ips.12048

Bellanova, R. (2017). Digital, politics, and algorithms: Governing digital data through the lens of data protection. European Journal of Social Theory, 20(3), 329–347. https://doi.org/10.1177/1368431016679167

Cetina, K. K., Schatzki, T. R., & von Savigny, E. (Eds.). (2005). The Practice Turn in Contemporary Theory. Routledge. https://doi.org/10.4324/9780203977453

Christensen, K. K., & Liebetrau, T. (2019). A new role for ‘the public’? Exploring cyber security controversies in the case of WannaCry. Intelligence and National Security, 34(3), 395–408. https://doi.org/10.1080/02684527.2019.1553704

Coleman, G. (2010). The Hacker Conference: A Ritual Condensation and Celebration of a Lifeworld. Anthropological Quarterly, 83(1), 47–72. https://doi.org/10.1353/anq.0.0112

CryptoParty. (2013). The Crypto Party Handbook. https://www.cryptoparty.in/learn/handbook

CryptoParty. (2019a). Code of Conduct [Wiki]. CryptoParty. https://www.cryptoparty.in/code_of_conduct

CryptoParty. (2019b). CryptoParty [Wiki]. https://www.cryptoparty.in/

CryptoParty. (2019c). How to Organize a CryptoParty [Wiki]. CryptoParty. https://www.cryptoparty.in/organize/howto

CryptoParty. (2019d). Upcoming Parties [Wiki]. CryptoParty. https://www.cryptoparty.in/parties/upcoming

DeNardis, L. (2010). The Emerging Field of Internet Governance (Yale Information Society Project) [Working Paper]. Yale University. https://doi.org/10.2139/ssrn.1678343

Denning, D. E. (1996). The Future of Cryptography. In P. Ludlow (Ed.), Crypto Anarchy, Cyberstates, and Pirate Utopias (pp. 85–101). MIT Press.

Diffie, W., & Landau, S. E. (2007). Privacy on the line: The politics of wiretapping and encryption. MIT Press.

Dutton, W. H. (2017). Fostering a cyber security mindset. Internet Policy Review, 6(1). https://doi.org/10.14763/2017.1.443

van Eeten, M. J. G., & Mueller, M. (2013). Where is the governance in Internet governance? New Media & Society, 15(5), 720–736. https://doi.org/10.1177/1461444812462850

Emerson, R., Fretz, R., & Shaw, L. (2001). Participant observation and fieldnotes. In P. Atkinson, A. Coffey, S. Delamont, J. Lofland, & L. Lofland (Eds.), Handbook of Ethnography (pp. 352–368). SAGE Publications. https://doi.org/10.4135/9781848608337.n24

Epstein, D., Katzenbach, C., & Musiani, F. (2016). Doing internet governance: Practices, controversies, infrastructures, and institutions. Internet Policy Review, 5(3). https://doi.org/10.14763/2016.3.435

Ermoshina, K., & Musiani, F. (2018). Hiding from Whom? Threat Models and In-the-Making Encryption Technologies. Intermédialités / Intermediality, 32. https://doi.org/10.7202/1058473ar

FindLaw Attorney Writers. (2012, June 21). 30 Years of Public-Key Cryptography. FindLaw. http://technology.findlaw.com/legal-software/30-years-of-public-key-cryptography.html

Flyverbom, M. (2016). Disclosing and concealing: Internet governance, information control and the management of visibility. Internet Policy Review, 5(3). https://doi.org/10.14763/2016.3.428

Franke, U., & Roos, U. (2013). Einleitung: Zu den Begriffen ‘Weltpolitik’ und ‘Rekonstruktion’. In U. Franke & U. Roos (Eds.), Rekonstruktive Methoden der Weltpolitikforschung: Anwendungsbeispiele und Entwicklungstendenzen (pp. 7–29). Nomos.

Gillespie, A., & Michelson, M. R. (2011). Participant Observation and the Political Scientist: Possibilities, Priorities, and Practicalities. PS: Political Science & Politics, 44(2), 261–265. https://doi.org/10.1017/S1049096511000096

Gregory, M. A. (2012, August 23). Cybercrime bill makes it through – but what does that mean for you? The Conversation. https://theconversation.com/cybercrime-bill-makes-it-through-but-what-does-that-mean-for-you-8953

Guillaume, X., & Huysmans, J. (2013). Citizenship and Securitizing: Interstitial Politics. In X. Guillaume & J. Huysmans (Eds.), Citizenship and security: The constitution of political being (pp. 18–34). Routledge.

Gürses, S., Kundnani, A., & Van Hoboken, J. (2016). Crypto and empire: The contradictions of counter-surveillance advocacy. Media, Culture & Society, 38(4), 576–590. https://doi.org/10.1177/0163443716643006

Herrera, G. L. (2002). The politics of bandwidth: International political implications of a global digital information network. Review of International Studies, 28(1), 93–122. https://doi.org/10.1017/S0260210502000931

Herring, S., Job-Sluder, K., Scheckler, R., & Barab, S. (2002). Searching for Safety Online: Managing ‘Trolling’ in a Feminist Forum. The Information Society, 18(5), 371–384. https://doi.org/10.1080/01972240290108186

Hofmann, J. (2016). Multi-stakeholderism in Internet governance: Putting a fiction into practice. Journal of Cyber Policy, 1(1), 29–49. https://doi.org/10.1080/23738871.2016.1158303

Hofmann, J., Katzenbach, C., & Gollatz, K. (2016). Between coordination and regulation: Finding the governance in Internet governance. New Media & Society, 19(9), 1406–1423. https://doi.org/10.1177/1461444816639975

Huysmans, J. (2016). Democratic curiosity in times of surveillance. European Journal of International Security, 1(1), 73–93. https://doi.org/10.1017/eis.2015.2

Isin, E. F., & Ruppert, E. (2015). Being Digital Citizens. Rowman & Littlefield.

Jackson, P. T. (2011). The conduct of inquiry in international relations: Philosophy of science and its implications for the study of world politics. Routledge. https://doi.org/10.4324/9780203843321

Kahn, D. (1997). The codebreakers: The comprehensive history of secret communication from ancient times to the Internet. Scribner’s and Sons.

Kannengießer, S. (2019). Reflecting and acting on datafication – CryptoParties as an example of re-active data activism. Convergence: The International Journal of Research into New Media Technologies. https://doi.org/10.1177/1354856519893357

Kubitschko, S. (2015). The Role of Hackers in Countering Surveillance and Promoting Democracy. Media and Communication, 3(2), 77–87. https://doi.org/10.17645/mac.v3i2.281

Leander, A. (2017). From Cookbooks to Encyclopaedias in the Making: Methodological Perspectives for Research of Non-State Actors and Processes. In A. Kruck & A. Schneiker (Eds.), Methodological Approaches for Studying Non-state Actors in International Security. Theory and Practice (pp. 231–240). Routledge. https://doi.org/10.4324/9781315669830-16

Levy, S. (1994, November 2). Cypher Wars: Pretty Good Privacy Gets Pretty Legal. Wired. http://encryption_policies.tripod.com/industry/levy_021194_pgp.htm

Levy, S. (2002). Crypto: How the code rebels beat the government, saving privacy in the digital age. Penguin Putnam.

Loder, C. (2014). Something to Hide: Individual Strategies for Personal Privacy Practices. IConference 2014 Proceedings, 814–819. https://doi.org/10.9776/14403

Marx, G. T. (2015). Security and surveillance contests: Resistance and counter-resistance. In T. Balzacq (Ed.), Contesting security: Strategies and logics (pp. 15–28). Routledge. https://www.taylorfrancis.com/books/e/9780203079850/chapters/10.4324/9780203079850-9

mati, & Wolf, A. (2012, December 30). Dear Hacker Community—We Need to Talk [Blog post]. Fachschaft Informatik. https://www.fsinf.at/posts/de/2012-12-30-dear-hacker-community-we-need-to-talk/

Monsees, L. (2020). Crypto-politics: Encryption and democratic practices in the digital era. Routledge. https://doi.org/10.4324/9780429456756

Mueller, M. L. (2010). Networks and States: The Global Politics of Internet Governance. MIT Press. https://doi.org/10.7551/mitpress/9780262014595.001.0001

Musiani, F., & Ermoshina, K. (2017). What is a Good Secure Messaging Tool? The EFF Secure Messaging Scorecard and the Shaping of Digital (Usable) Security. Westminster Papers in Communication and Culture, 12(3), 51–71. https://doi.org/10.16997/wpcc.265

Myers West, S. (2018). Cryptographic imaginaries and the networked public. Internet Policy Review, 7(2). https://doi.org/10.14763/2018.2.792

Poulsen, K. (2014, May 21). Snowden’s First Move Against the NSA Was a Party in Hawaii. Wired. https://www.wired.com/2014/05/snowden-cryptoparty/

Radio Free Europe. (2012, November 27). The Woman Behind Crypto Party. Radio Free Europe. https://www.rferl.org/a/the-woman-behind-cryptoparty/24782719.html

Raymond, M., & DeNardis, L. (2015). Multistakeholderism: Anatomy of an inchoate global institution. International Theory, 7(3), 572–616. https://doi.org/10.1017/S1752971915000081

Reichertz, J. (2016). Qualitative und interpretative Sozialforschung: Eine Einladung. Springer. https://doi.org/10.1007/978-3-658-13462-4

Saco, D. (2002). Cybering democracy: Public space and the Internet. University of Minnesota Press.

Schulze, M. (2017). Clipper Meets Apple vs. FBI—A Comparison of the Cryptography Discourses from 1993 and 2016. Media and Communication, 5(1), 54–62. https://doi.org/10.17645/mac.v5i1.805

Schwartz-Shea, P., & Yanow, D. (2012). Interpretive research design: Concepts and processes. Routledge. https://doi.org/10.4324/9780203854907

Selimovic, J. M. (2019). Everyday agency and transformation: Place, body and story in the divided city. Cooperation and Conflict, 54(2), 131–148. https://doi.org/10.1177/0010836718807510

Singh, S. (2000). The code book: The science of secrecy from ancient Egypt to quantum cryptography. Anchor Books.

Solomon, T., & Steele, B. J. (2017). Micro-moves in International Relations theory. European Journal of International Relations, 23(2), 267–291. https://doi.org/10.1177/1354066116634442

Tanczer, L. M. (2016). Hacktivism and the male-only stereotype. New Media & Society, 18(8), 1599–1615. https://doi.org/10.1177/1461444814567983

Tanczer, L. M. (2017, April 6). Digital skills in academia: Let’s CryptoParty! openDemocracy. https://www.opendemocracy.net/en/digital-skills-in-academia-let-s-cryptoparty/

Whitten, A., & Tygar, J. D. (1999). Why Johnny can’t encrypt: A usability evaluation of PGP 5.0. Proceedings of the 8th Conference on USENIX Security Symposium, 8. https://www.usenix.org/legacy/events/sec99/full_papers/whitten/whitten.ps

Footnotes

1. For a more comprehensive discussion on the different conceptualisations of practices, see the contributions in Cetina, Schatzki, & von Savigny (2005).

2. I also attended two hackerspaces in the hope of receiving access to people formerly involved in CPs. Even though these visits were helpful in gathering insights into the culture of the hacker scene (see Kubitschko, 2015; Coleman, 2010) they did not give me further access to people involved in CPs.

3. In order to protect the privacy of the participants, I will refrain from mentioning the locations of the CPs.

4. One CP that I attended indeed had zero participants. Since people do not need to sign up beforehand, this can happen.

5. I am aware that my selection is not representative in quantitative terms. However, based on my interviews and the existing literature I could confirm that the CPs I visited were typical in a ‘qualitative’ sense.

6. Women also face challenges such as harsher and more violent forms of trolling (Herring et al., 2002).

7. Tor (The Onion Router) allows for anonymous surfing by bouncing a user’s data and requests through a set of relay servers. References to the ‘dark web’ usually indicate browsing via Tor. Tails is a programme that allows one to boot from, for example, a USB stick and relies on Tor for even greater privacy. However, the programme as such is more time intensive. The installation process, done at one CP, took several hours.

8. Organisers at cryptoparties 2 and 3.

9. This is also confirmed by the results of Kannengießer (2019).

10. Cryptoparties 1, 2 and 3.

11. Cryptoparty 2.

12. Cryptoparty 1.

13. Cryptoparty 3.

14. In the context of the Snowden revelations, Gürses et al. (2016, p. 581) observed that while technologies were contested, the larger economic and socio-structural questions were hardly debated. Hence, they argue that ultimately the encryption debate after Snowden had depoliticising effects (for a different assessment, see Monsees, 2020).

15. CP 1 and CP planning meeting.

16. CP 2.

17. CP 2.

18. Cryptoparty 3. Kannengießer presents a very similar story (Kannengießer, 2019, pp. 6-7).

19. For a more in-depth description of the principle of public key cryptography, see Monsees (2020, pp. 61-63).

20. That is why I expected the Snowden revelations would be identified as a crucial event by the activists. However, the importance of his revelations was played down in the interviews.

VPNs as boundary objects of the internet: (mis)trust in the translation(s)


This paper is part of Trust in the system, a special issue of Internet Policy Review guest-edited by Péter Mezei and Andreea Verteş-Olteanu.

Introduction

This paper considers VPNs as boundary objects of the internet, in a way that opens new empirical and methodological insights about the tensions between technical materialities and symbolic registers of technology. The tensions we explore include: what discourses surround Virtual Private Networks (VPNs) for users, and how does this affect their deployment? How do users come to understand VPNs as specific technologies or as part of the internet? How can we discern what these objects are as they tack back and forth between metaphor and technical processes in their use and their governance? These tensions have profound implications for navigating socio-material practices online, as well as ‘offline’, through conceptualisations of how to govern these technologies.

This paper looks at the social ontologies of an exemplar boundary object of the internet: VPNs. Following Star’s (2010, p. 603) clarification of boundary objects as entities that people act towards (or with) in relation to their own communities of practice, we take up her call to further explore the ‘tacking’ back and forth of such objects as both symbolic and technical objects within internet-space and governance-space. We especially note how the dialectic between symbolic register (i.e., technology as metaphor) and actual affordances-in-practice (socio-technical / standardised material capacities) influences individual uses and attempts toward regulation.

To illustrate the potential of boundary objects of the internet, consider encryption as a thought experiment. Encryption follows a polysemic tacking from math (cryptography); to encryption as technical process (cryptanalysis); encryption in/as ecommerce; encryption activism; encryption as ‘going dark’; and encryption law. Note how the tacking from technical to metaphorical, and then back to technical, traverses and is transfigured through competing domains of power and meaning: we start in technical mathematics and computer science and end in technical legal scholarship. Uncovered through this tacking are forces of politics and policing (Rancière, 2006) that shape and shift meaning making through communities of practice linked to the various interpretive objects identified. Rancière’s distinction between politics and police might be useful for understanding the organisation and legitimising of power insofar as boundary objects communally exist and are experienced. Politics is antagonistic to policing, breaking tangible configurations to test the assumptions of equality in society (Rancière, 2006, pp. 29-30).

Consider the example of ‘going dark’ as a metaphor for understanding the risks posed by end-to-end encryption. From a law enforcement and regulatory point of view, this metaphorical interpretation frames a specific kind of problematisation, and the results are attempts to force a re-design of the artefact itself, to ban it, or to highlight the nefarious facets of what makes the boundary object what it is: encryption is for child exploitation. The use of metaphor thus flows into how courts and regulators interpret and understand technologies, so as to establish and limit the conditions for how they should be ‘dealt with’.

End-to-end encryption is, on the one hand, a standardised mathematical and technical infrastructure that is embedded in popular mobile messaging applications. On the other, it is a discursive representation that is conceptualised within and across a range of situated institutional knowledge and settings, such as law enforcement, the legal community, and computer science. For example, within the legal setting, as Gill notes, “what does it mean to describe the encrypted machine as a locked container or building?” (2018, p. 1). How does such a symbolic interpretation influence user applications, or the existence of particular kinds of standardisation in relation to how culture might inform design decisions? Or more specifically, how do government and law enforcement attempt to impact and direct ‘what encryption is’ as a standardised technical infrastructure that becomes interpreted and applied in various contextual discourses specific to consumer privacy, patient health or citizen elections? Seeing encryption as a boundary object shows the power that constructing boundary objects has in user and policy discourse. What we make of the mathematical facts of cryptography conditions the potentials and constraints of future activities online and off. The example of encryption as infrastructure of the internet shows how our approach might be useful for the empirical study of discrete objects that become more or less standardised (Star, 2010) and perceived as internet infrastructure.

The remainder of the paper focuses on the empirical exploration of one such boundary object through which encryption is employed on the internet: Virtual Private Networks (VPNs). We look at VPNs by observing the discourses available to users when constructing understandings of VPNs and, relatedly, how VPN providers construct their products and their governance. In other words, VPNs function as a technical artefact that reconfigures communication in particular ways, and as an imagined capacity for the conduct of conduct, which Foucault identifies with respect to how individuals govern themselves (Foucault, 1994, p. 237). The work adds to the literature in two ways. The first is empirical, unpacking VPNs in ways that combine literatures on boundary objects (Star, 2010) and internet studies. Our work here subsequently clarifies the stakes of the political implications endemic to collisions between the standardisation and representational translations of boundary objects across different organisational, institutional, and user-centric settings. The second is more theoretical: instead of considering boundary objects ‘on’ the internet we move to conceptualise boundary objects ‘of’ the internet, which we argue opens a fruitful reconfiguration of Star’s work for internet research.

Boundary objects of the internet offer a methodological framework that helps discern agonisms within (and without) technologies’ conceptualisations, which together form the social and political terrain through which user applications and governance materialise. Identifying and potentially shifting the symbolic registers through which these objects come to be understood provides a unique point of leverage for those regulating and deploying these technologies. 

Existing research literature  

There is a resurgence in studying boundary objects in the digital age, with publication mentions of the term in Language, Communication and Culture (searched via the Australian and New Zealand Standard Research Classification FOR ‘code’ 20*, which spans these disciplines) more than doubling from 2010 to the time of writing (Digital Science & Research Solutions Inc, 2020). This trend indicates a revitalised consideration of the work of Star (see Bowker et al., 2016), whose work is closely related to the sociology of science and science and technology studies. The concept of boundary objects has proven useful for researchers as a means to conceptualise and make sense of the various experiential objects that come into existence via technological practices acted upon and through digitally distributed networks.

A sample of recent research utilising boundary objects that is of interest to internet researchers includes imagining the news and technology nexus in terms of process, participation and curation (Lewis and Usher, 2016), digitisation and mixed document authorship (Huvila, 2019), Free-Libre and Open Source Software (FLOSS) documentation (Østerlund and Crowston, 2019), humour online (Gal, 2018), and charting discourses of power legitimisation via competing images of the internet itself (Shepherd, 2018). What is perhaps missing from this sample is a distinction of, and reflection on, the extent to which these objects may be thought of as of the internet: as technical artefacts and infrastructures that are sung into existence through distinct online cultures, practices and needs of internet use(rs) and software developers. These are of course constructed in relation to a broader apparatus of institutions that are tasked with a responsibility to regulate uses of these same artefacts and their multiple purposes on interconnected networks (i.e., the internet and its governance (DeNardis, 2014)). Within these discourses and varying levels of agency, we ask what the implications are for freedoms and controls over how users and regulators tack back and forth between technical and metaphorical claims about the technologies that enable products and features of internet life.

This approach similarly includes interest in how such objects come to be (dis)trusted insofar as they facilitate specific expectations of use, or what can be referred to as ‘technological affordances-in-practice’ (Costa, 2018). We contend that this sense-making practice is a deeply political one (see Rancière, 2006, pp. 29-30) that establishes conditions not only for how technologies are used, but also for how standards and regulations are set, which in turn can influence future design and deployment, and thus craft the political-structural affordances specific to the artefacts themselves. The ‘political affordances’ (Heemsbergen, 2019) of boundary objects of the internet set experiential rules as well as tactical use cases. They also establish conditions of possibility for how and in what ways objects of the internet are used by specific communities and to what ends. In essence, we mean to call into question how boundary objects are inherently about (re)arrangements of power, and how this links to expectations of the conduct of conduct through and with them.

A focus on power and deployment embodies the ethos of Star’s work. It aligns with a feminist approach to technology studies, which for Star linked lived experience, technologies, and silences (in Olsen & Selinger, 2007, p. 227) in ways that proved political. Our work uses boundary objects to do more than explicate functional processes within communities: it considers how socio-technical relationships are made through them (Star, 2010) and the extent to which these objects have mediational qualities that facilitate or inhibit (Fox, 2011) cross-boundary communication. Thus, our work is interested in the contexts in which a boundary object is embedded (commercial, cultural); the ways in which boundary objects are interpreted; the explanations that assert some intrinsic or essential property of the object to describe its functionality; and the regulatory environment that apprehends the object in symbolic (and often metaphorical) terms as a matter of policy or law.

To foreshadow the importance of these delineations, consider that the technical definitions of malware and some VPN products intermesh (Ikram et al., 2016), while users (dis)trust each in dichotomous ways that uphold specific political-economic systems or break down socio-political ones. The regulation of VPNs and malware is of course vastly different, based on user perceptions and the literal affordances-in-practice that similarly coded software is perceived to have through varying levels of user autonomy in its use and transparency in its goals and mechanisms. These differences play out in user discourse and policy discourse to stark political effect.

How metaphors transfigure technological practices and shape policy and legal structures has a history in the digital age. Data itself, abstracted from binary or code, has been repeatedly conceptualised as a natural force or exploitable resource (Puschmann & Burgess, 2014) that, through data streams, data mining, or data clouds, offers liquid, solid, and gaseous states of matter that beget industrial thinking about how, and by whom, it should be exploited (Hwang and Levy, 2015). Such discourses often juxtapose critical views of big data with statements that suggest data is people (Lemov, 2016) and should be regulated as such.

Such contradictions matter. If the metaphors with which new technologies are identified are sufficiently linked to existing ‘things that we already have rules about’ (Hwang and Levy, 2015), similar logics of regulation will flow onwards to the new technology. There are few institutional settings where the stakes are more profound than in the context of law. As Gill (2018) notes, metaphors have the power to define realities and, in so doing, sanction legal rules and social conduct, which often includes the very reach of the state into private life. Likewise, a lack of a clear link to existing regulated things causes confusion. The multiple metaphors for what ‘big data’ is (and can do) result in a scattering of potentially relevant ethics codes and discourses around data cultures that shape data’s material, cultural, and political impact (Stark & Hoffman, 2019).

One might measure one ‘tack’ of the internet itself as a boundary object, from technical to abstract, along the following trajectory: pipes and warehouses; packets and protocols; content distribution networks / shaping practices; net neutrality; civic capacities; and finally, public representation. What is key here is realising that the metaphors of clouds and torrents, flows and packet sniffing, relate to potentials that sit between the abstract and technical; subsequently, regulatory protocols are trained on what relates to these metaphors (that is, the object becomes knowable and therefore governable), in this case targeted through arguments related to net neutrality.

Boundary objects as a methodology, then, allow rigorous consideration of the influence of the metaphorical and the technical as objects tack back and forth between intrinsic natures and abstract metaphors to elicit policy consideration. These policy considerations are not born of consensus but of how different options come to be chosen (Mol, 1999), and of what the implications are for enacting the associated politics and policing of the object. The article now discusses the potential for studying boundary objects of the internet by further detailing a suitable research design.

Research design and methodology

The design and approach of examining VPNs as a boundary object is drawn from Science and Technology Studies (STS), which acknowledges the mutual shaping of sociocultural and technical processes (Latour, 2005). Further, the seeming multiplicity of experiences created through the same or similar digital boundary object puts into new relief how digital media can move ‘past formalising a world taken for granted’ and realise ‘forms designed to produce alternative worlds’ (Flusser, 1999, p. 28). That is to say that not only are digital ‘objects’ generative, but the same code might transfigure a multitude of experiences depending on context, perceived use potentials, and user experimentation. The heterogeneity of boundary objects of the internet is, in part, due to their digital materialisations: the ease with which code can be reconfigured, interfaces re-skinned, and potentials constrained only by a spectrum of bandwidth. At the same time, what their technical constructions facilitate or inhibit (Fox, 2011) remains our concern in relation to the environment in which these potentials come to be rhetorically constructed. While the boundary object we focus on is the VPN, the research design that we discuss is potentially useful in examining other inquiries into boundary objects of the internet. We have already noted how end-to-end encryption, net neutrality, or the internet itself might apply. In short, our work is designed to assess discourses that surround search, discernment, choice, activation, and governance of, in this case, VPN services.

Our methods present a ‘work in process’ that synthesises the strengths of discourse analysis for internet-related phenomena (Brock, 2018; Johnstone, 2018; Jones et al., 2015; Mautner, 2005) and the experiential, phenomenological modes of inquiry found in walkthrough methods (Light et al., 2018). This synthesis expands the scope of what boundary objects are and afford, both technically and politically. Our focus in particular is on the ecology experienced by users that comes before in-app user experience. The symbolic and representational registers tied to users learning of and deciding to use a VPN present rich data for mapping the facets of boundary objects. Similarly, competing registers amongst regulatory agencies surrounding VPNs also become a reflexive matter of concern.

Instead of focussing on the user experience or the political economy of a specific app, we attempt to heed Star’s advice regarding the scale and scope of boundary objects. Star (2010) signals that she is less interested in actual specified things (e.g., a flag or an app), as it is

more interesting to study people making, advertising, and distributing [the specified thing], and their work arrangements and heterogeneity than to simply say that many people have different interpretations of the [specified thing] (Star, 2010, p. 613).

Thus the processes involved in creating the phenomenon-as-presented to potential users of the boundary object are of interest. How VPNs are marketed, how popular search results organise the discourse around VPNs, and how regulators envision policy vis-à-vis these presentations of the object all offer insights into the competitive heterogeneity of boundary objects, as well as the heterogeneity of the people making, distributing, using, and regulating them.

Second, in some regards VPNs, while diverse at the level of specific app choice, share as a goal the potential to become standardised technical infrastructure. VPNs that ‘just work’, in tech parlance, disappear for their users and become part of the internet, as opposed to being things to do or use on the internet. For a related example, consider the proliferation of Hypertext Transfer Protocol Secure (HTTPS) connections to websites, which shows an additive security protocol becoming infrastructure. Securing the HyperText Transfer Protocol became a common website design feature after Snowden’s (2013) whistleblowing highlighted the potential for widespread surveillance of user activity, as well as growing recognition of third-party man-in-the-middle content injection and de-anonymisation (Basques, 2015). By 2019, most modern browsers flagged non-HTTPS webpages to the user as ‘non-secure’ anomalies; HTTPS now just works as an invisible part of the internet. HTTPS allows and enforces new forms of security and privacy for and between publishers and readers of web content through code and governance schemes that most users never consider. At the same time, it affords novel forms of connectivity that otherwise would not be possible (e.g., ‘getUserMedia()’ calls and geolocation services) that have ushered in evolving use cases by individuals; its material and imagined capacities for users are real, while its regulation has been confined to a protocol that allows the internet to function.
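
To make the point about invisible infrastructure concrete, the following minimal sketch (our illustration, assuming the widely used Python requests library and an arbitrary host) shows how certificate verification and transport security happen silently when a client fetches a page, much as browsers hide TLS from users unless something goes wrong:

    # Illustrative sketch only: HTTPS as infrastructure that 'just works'.
    # The requests library validates the server's TLS certificate against
    # trusted certificate authorities by default; neither the user nor,
    # usually, the developer sees this negotiation happen.
    import requests

    response = requests.get("https://example.org")   # arbitrary illustrative host
    print(response.url.startswith("https://"))       # True: the secure scheme is simply there
    # HSTS, if the site opts in, tells browsers to refuse future plaintext
    # connections; it lives in a header ordinary users never inspect.
    print(response.headers.get("Strict-Transport-Security"))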

VPN products provide similar profiles insofar as browsing and communication experiences are meant to be changed as little as possible for perceived day-to-day use. At the same time, VPNs provide, among other services, geo-anonymisation and data-anonymisation as ways to obfuscate identities and patterns of browsing behaviour so as to enable new experiences. Yet VPN use does not hinge on, nor is it defined by, specific interface experiences beyond signals to users that the service is active; walking through the actual interface users experience to start up or manage VPN sessions becomes less important than mapping the decision to implement a VPN solution when explaining the power, purpose, and experience of VPNs.

This is not to discount notable differences in apps and their effects, in particular how peculiarities of interface (Poulsen, 2018; Richardson & Hjorth, 2017) afford not only user interaction potentials, but relate to use cases based on social contexts (Heemsbergen, 2019). Our mode of inquiry in fact highlights how communicative affordances (Schrock, 2015) and in-practice experiential affordances (Costa, 2019) are distinct. Indeed, considering the above HTTPS example again, while the underlying technical-communicative and even technical-governmental affordances of HTTPS differ dramatically from HTTP, any perceived change in user communication experience is minimal. This perceived likeness does not, of course, speak to other novel potentials that users institute through HTTPS (or, as we will encounter below, VPNs). The likeness demonstrates how infrastructural change of the internet has effects that are not apparent to users in Human Computer Interaction (HCI) terms, or in ways that are easily investigated by walkthroughs; an abstraction to the process of user practice is required to rigorously understand boundary objects of the internet.

Third, the user (experience) presents only one of many distinct populations that define the boundary object. While user experience on-app is important, it is not definitive of the object. Our aim is then to focus less on user interface arrangements, functions and features, and more on contextual content and tone, or the symbolic representation of the object as a user is being drawn into a relationship with it. Walkthrough methods (Light et al., 2018) of course direct as much, and we mean to acknowledge the importance of ‘environment’ and abstract up to the level of ecology: the many actants and formations that come to collectively contest and define any boundary object.

Our methodology then offers a walkthrough of how a user would come to experience a boundary object as a product, by way of observing the political economy and regulatory ecology that make up these objects of the internet. While this method has various limitations, it offers a work in process insofar as it encounters the discourses that contribute to the projects that boundary objects represent for the various communities that contribute to their existence. This entails a consideration of how the discourse around VPNs governs the limits of their normative place and use in society. As these systems are presented to users, we acknowledge the power of discourse to shape not only these experiences, but the regulation of their materiality. We assess discourses that surround search, discernment, choice, and activation of the apps that provide - in this case - VPN services. That process builds data towards interpretation of the various facets of boundary objects that users are exposed to. In short, we seek to include the market, experiential, and government pressures that come together at the boundaries of VPNs to create these shared yet disputed objects.

Understanding boundary objects of the internet, as opposed to on the internet, has notable outcomes. The difference speaks to our interest in shared but contested objects that make the internet work, as opposed to things that work on the internet. As an example, social media work on the internet. Encryption makes the internet work. Passwords work on the internet. VPNs make the internet work (differently). Further, interpreting boundary objects of the internet also opens up the contextual heterogeneity that shapes specific objects like VPNs via apps and their larger industries. Finally, the focus on objects of the internet aligns with consideration of how boundary objects can become infrastructure. In Star’s (2010, p. 605) words, some interpretations of a particular boundary object become ‘standardised’ and help define life past the socio-technical assemblage of the object itself.

In terms of the relevance of our direction, there are clear and present debates on whether encryption is good, what it is good for (and how it is to be legislated), while VPNs provide a product category of ‘encryption’ that is marketed across multiple use cases and contributes to the infrastructures of the internet. One resultant tack from technical to abstract for VPNs could run through protocols (IPSec, SSTP, etc.); systems; anonymity and privacy devices; speech acts; and commercial/market ecosystems. 1 These abstract concepts of market ecosystems for VPN content tack back to the technical through metaphors that reconfigure meaning: do VPN markets offer security and safety, or spying and vulnerability, a commercial data opportunity, or an ability to circumvent commercial or government censorship?

Methods to map VPNs as boundary objects of the internet

Our study design takes into account the jurisdictional geolocation and political context of Australia. People searching for VPNs in Australia will probably not yet be using a VPN. Australia provides an interesting case: an intellectual property morass of ‘geoblocked’ content streams that has allowed VPNs to proliferate (Lobato & Meese, 2016), while its data-retention regime has triggered fears of privacy violations (Mann et al., 2018) in a developed and liberal democracy that is sliding towards increasing authoritarian secrecy, surveillance, and suppression of speech (Molnar, 2017; Lidberg & Muller, 2019).

The specific ‘in process’ walkthrough of VPNs identified below shows the discourses that users encounter when products are being explained to them. These steps involve movement of descriptions of the boundary object from technical specifics into abstract metaphorical registers. First, as any potential user of VPNs might, we start with an (anonymised) google.com session that proceeds to work through a series of search decisions that expand discourses encouraging object knowledge, user discernment, choice, and activation.

User flow: search --> discourses encouraging activation --> discernment --> material choices --> activation

Following Light et al.’s (2018) walkthrough method, we followed the flow from search to activation to consider the environment (vision, operating models, and governance) that is disclosed from a user’s position. Departing from the app walkthrough method, we treat the app interface itself as a less important site of (mis)trust building compared to the pre-interface experiences that engender its use. Search as research here is reflected through the socio-epistemological ‘source standing’ (Rogers, 2015, p. 99) that Google search lends to queries about what a VPN is and does - we are less interested in what search excludes than in its algorithmic authority in developing the boundary object. We also thus grant our user-process some breadth in activity, cross-referencing common search-result sites that explain the specific boundary objects and/or review one product over another. The discourses presented illustrate a rich and multifaceted creation of boundary objects that allows deeper consideration of the relevant communities that create them.

Specifically, websites that compared VPNs, explained their purpose, and advertised their features factored into the discourses that our search registered. While we systematically assessed the information available to users seeking VPN functionality, we did not move past the second page of google.com results, as we felt it statistically implausible that users would venture further (Van Deursen & Van Dijk, 2009). Specifically, we followed links positioned via google.com results with an ostensible click-through rate of >5% (which Advanced Web Ranking (2019) data suggests covers roughly the top five organic results). We also included the adwords entries displayed, usually four, which produced some overlap with organic results. We ran three anonymous search sets from different IPs in the Melbourne area and all returned the same list, albeit sometimes with a different order of ads.
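
The collection step can be summarised in a simplified, hypothetical sketch (the helper name and URLs below are illustrative only, not the tooling actually used): pool the links returned by the repeated anonymised search runs and keep the recurring paths that a generic user is most likely to encounter.

    # Hypothetical sketch of the collection step: pool the ad and organic links
    # from several anonymised search runs, then count recurring hosts so the
    # most visible URL paths can be taken forward for manual discourse coding.
    from collections import Counter
    from urllib.parse import urlparse

    def recurring_hosts(search_runs):
        """Count how often each host appears across repeated search runs."""
        counts = Counter()
        for run in search_runs:
            for link in run:
                counts[urlparse(link).netloc] += 1
        return counts

    # Illustrative data only: each real run contributed roughly four ad links
    # and five organic links.
    runs = [
        ["https://vpn-guide.example/best-vpn-australia", "https://vendor-a.example/"],
        ["https://vpn-guide.example/best-vpn-australia", "https://vendor-b.example/"],
        ["https://vpn-guide.example/best-vpn-australia", "https://vendor-a.example/"],
    ]
    print(recurring_hosts(runs).most_common())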

For our “VPN” search term, Google auto-suggested adding the terms ‘free’ and ‘Australia’, so our assumed user followed Google's advice. This led to an overlapping set of 27 URLs from organic and adwords links, which we narrowed to recurring results that would ostensibly garner >5% clickthrough. One initial interesting finding is the homogeneity of these search results: the 27 links (four ads per search, and five organic results) coalesce into 14 independent URL paths for the user to pursue. Of these, nine were various ‘guides’ to VPNs and five were VPN products themselves.

Based on these URLs, and significant linked pages (i.e., privacy policy, terms and conditions, ‘about’ pages, etc.) limited to two hops, we open coded for ‘vision’, ‘operating models’ and ‘governance’ as per Light et al. (2018), making axial distinctions once we felt saturation was reached or our texts were exhausted. The guides were coded somewhat differently, as they reflected various levels of editorial and advertorial content about VPN boundary objects. The insights gained from these discourses suggest an assortment of themes that converge and conflict to create heterogeneous boundary objects of the VPN. In these discourses we then consider how, when, and by whom any tacking from the technical to the abstract can be observed, and how this relates to the use and regulation of the technology.

Thus, a large part of our method pursues interactions that are typical of the user experience of acquiring VPN services, while also offering a systemic account of what such a user would experience. Our approach has distinct limits on these assumptions: the political economy of technical knowledge is experience-based and gendered, and offers multifaceted ‘search’ layers and paths through friend groups and online forums, not to mention media-market-based measures like embedded links and ads. User-generated narratives of VPN use, or how they change over time, while beyond the scope of this article, are another important avenue for inquiry. Thus, while precise, our method can only offer a preliminary analysis of the object discourse presented to generic search-users. Future research that offers in-depth ethnographic study to align registers of boundary object tacking with various socio-political contexts and practices, such as in fields of criminal and civil law, offers novel sites for inquiry. Similarly, in-depth search-as-research techniques across geographies (Roberts, 2019, p. 107) would further enrich contributions to the range of communities encountering VPNs as boundary objects.

Nevertheless, our methodological move allows consideration of how and where populations in Australia might encounter the boundary object, and puts this phenomenological understanding in relief against the other technical and social aspects through which the objects are explained to/by users as they ‘tack’ back and forth in meaning from the technical to the abstract. How does the performance of boundary objects inform formations of trust in the ecology of internet-based boundary objects? What do users experience when coming to terms with the object? How do decisions of VPN product use come about in relation to their construction as multifaceted boundary objects of the internet?

VPN data, and discussion

The two experiences prospective users of VPNs wade into via “VPN” search include direct product solicitation by VPN companies and second-hand aggregation sites that review VPN products. Both are designed to encourage use and enhanced sharing as reasons for using VPNs. Both sets of experiences detail distinct processes that come to terms with the multifaceted nature of what VPNs are and what they do as boundary objects of the internet. 

The overwhelming sense of purpose or mission VPNs have, as advertised in the Australian geolocation, relates to ‘security’ and ‘privacy’. 2 These two descriptors were consistent across all sites surveyed, while not always specifying what was being secured (from) or what was made private (or from whom). When security and privacy were explained, the latter was often referred to as a ‘right’ while the former was at risk ‘nowadays’ with increasing ‘vulnerabilities’ to ‘cybercrime’ and surveillance from criminals, ISPs, and governments. To a remarkable degree, the security and privacy measures VPNs employ were only spoken about in abstract terms, with only some sites mentioning but not explaining - nor offering links to explicate - specific protocols (IPSec, IKEv2, OpenVPN, and WireGuard). Cognitive metaphors (Lakoff, 1987) to describe VPNs were scarce, but included bears digging tunnels, armoured vans, packages in a box, parking garages, and traffic lights to signal technical standards. One site summarised:

Of necessity, discussion on VPN protocols and the nitty-gritty...is highly technical. Alternatively, our handy OpenVPN encryption chart uses a traffic-light system to give an at-a-glance assessment of the VPN’s security that even the most tech-phobic out there should easily understand. (Crawford, 2018)

The standardisation of security and privacy as mission and vision might be construed in Star-ian terms in relation to infrastructure. But we must remember that these descriptions provide facets of the boundary object that do not include regulators’ accounts, for which further research is required but is currently out of scope. When security is mentioned in more ordinal technical specifications, it is mostly described through superlative terms that differentiate market offerings as “best possible” or “military grade”. Privacy is a feature described through banal adjective additives: “solid/great/strong”.

Further, it seems that for a potential casual user of VPNs, when the object is signalled through security and privacy claims, it is disproportionately defined in terms of markets and features. These tack past protocols and private networks as systems, and gloss over VPNs’ specific capacities (and limitations) as anonymity and privacy devices or speech acts. Instead, the discourse centres on commercial ecosystems and markets that offer products most often differentiated in terms of generic consumer features such as speed, ease of use, or customer service. The small minority (n:3) that mentioned Australia/Australians specifically emphasised geo-blocked content and other Australian-specific censorship or data-retention regimes in relation to mission or features, past the more standard representations of privacy and security.

In terms of the operating models presented, there were clear distinctions between review sites (n:10) and VPN product sites (n:4). Most review sites were upfront about their affiliate-links business model, but there was a great deal of diversity in explaining editorial independence vis-à-vis recommendations. A minority of these sites’ missions seemed to fit unabashed advertorial design and language, with vacuous descriptors and inaccurate or contradictory language designed to sell various VPN services. Some review sites cobbled together editorial content via keyword-sentences such as “global threats to individual privacy with long maintained rights to anonymity and net neutrality being undermined with a cloak of legitimacy” as reasons to consider VPNs. It seemed those review sites with the least to say about their business model offered the least capable advice on which service to choose: their ‘best picks’, for example, were often products that had been called out in various online media as having obvious security threats or failures. Most others offered VPNs as a solution or partial solution to external threats, while only a small minority of sites considered the limits and risks of VPNs themselves. Only one review site reflexively questioned the VPN industry in terms of a trust equation that weighed relative risks: “It is important to keep in mind that when you are using a VPN, you are effectively transferring trust from your ISP to the VPN provider” (ProtonVPN, 2017).

VPN businesses themselves offered surprisingly diverse business models. Past a simple subscription model, various VPN providers suggested their teams were available for IT security consultation, offered affiliate programmes for influencers (with up to 50% of subscription fees given to partners), as well as crowdsourced and foundation-funded revenue streams. This mix of business models reflects the diverse missions linked to the abstracted ideas of increasing ‘security’ and ‘privacy’ of users. These included profit motives and normative assumptions about the need to operationalise a ‘free’ internet as imagined “the way it was first envisioned – free from crime, censorship, and surveillance” (NordVPN, 2020). 

The utopian and business desires of VPN providers are enforced by governance schemes that operationalise VPN user capacities into specific terms and conditions and require various forms of trust. Interestingly, we find a long list of forbidden activities on VPN services that range from abstractions of anything ‘illegal’ (jurisdiction not defined) to specific sets of practices that include creating spam, hacking, exploiting children, violating third-party rights (e.g., IP and data privacy), harassment in various specified forms, promoting bigotry, use for military purposes, etc. These normative qualifications are not standardised across VPN providers, nor do they seem to be tied to specific geo-jurisdictional structures. The VPN businesses seem largely to make these terms up to protect themselves and their users as they see fit, and to craft the communicative world they wish to enable.

The terms and conditions offer an interesting ‘middle ground’ between the technical and the abstract for understanding the boundary object of VPNs. On the one hand, they do not detail the technical specificities of why and what can(not) be accomplished via VPNs. On the other, while some reference violation of “general ethic or moral norms, good customs and fair conduct norms”, others offer a level of specificity that tacks to particular types of harm or abuse. VPN vendors choose to highlight activities they feel are outside the communicative world they are creating, and do so at the level not of legal or technical information, but of use practices that constitute problematic activity they do not want to be associated with.

Whether internal governance discourse is based on public perception/public relations framing or past alleged abuses is unclear. Regardless, these differences in terms and conditions show the multiplicity of worlds that VPN vendors and users think the products afford; security and privacy (and from whom and for what) is contested across the VPN ecosystem. 

At the same time, the underlying technical infrastructures that afford these specific communication regimes are largely unmentioned. Encryption technologies within the VPN product work unseen to enable freedom from surveillance and censorship. What is made visible in the terms and conditions seems designed to guide the political and policing structures that relate to affordances in practice (Costa, 2019) made possible through these technical capacities.

Another topic frequently tied to VPNs in our corpus is logs of use(r) data. Almost all VPNs claim they do not keep any logs. Others, which once kept some basic activity logs such as time-stamped user access (as evidenced by their assistance to police investigations), now claim they no longer keep logs. In any of these scenarios, users are supposed to trust these statements. Interestingly, this trust is manufactured without infrastructure; there is no active or real-time way for users to verify that logs are not being kept. This type of trust harkens back to Web 1.0 interactions, where anonymous users created communities on an electronic frontier (Rheingold, 1993), as opposed to gardens walled by Web 2.0 corporate regulations. This is the difference between trusting someone on Craigslist and trusting someone on AirBnB (Lingel, 2020). Traceable audits of actor behaviour are not available in the former set of relations.

Some VPN providers seek to buttress trust via audits that employ trusted third parties (e.g., PwC) to explore and back their claims. This is interesting for the creation of boundary objects for two reasons. First, the requirement for verification, or infrastructures of trust around consensual claims (you pay me, I don’t log you), negates the ethos of an internet free from surveillance and the normative project that extends from VPNs. Second, the audits are temporally limited: each auditor can only bear witness to what is, not to what was, or to what the service will become via a few extra lines of code in the VPN. Note here how issues of (dis)trust offer competing valences in respect of the object itself and the systems that the object acts upon. How users interpret the object says much about how they situate trust in relation to the policing/political actions that the object acts upon.

Conclusion 

Our discussion of VPNs brings to light how users of internet objects come to trust them for what they are and what they do, and the extent to which that trust is misplaced in relation to the connective polysemy (Gal, 2018) that boundary objects provide as larger ecosystems. Here, among other normative/political pressures, we again find a unique ability of internet-based boundary objects to ‘tack’ back and forth from abstract to technical in a way that concurrently translates meaning across communities to engender (mis)trust. For instance, trust in the mathematics belies mistrust in application deployment, the existence of nefarious geopolitical actors, and so on.

Our work suggests the back and forth ‘tacking’ of abstract to concrete does not just manifest as a universal and singular, but is made manifest from multiple community vantage points. This complexification shows how digital objects of the internet feed and are fed by multiple use cases and relational practices across commercial, security, rights based, and identity practices that they underpin, undercut or act upon. Users trusting the politics of one case may miss a need to police the other; we conclude by contextualising these concerns for future research of the internet.

Future research might then look at how 'metaphor' is used to shape multiple boundary objects through contextualised user and regulatory imaginaries that (i) make sense of the technology as a tool, (ii) consider and condition contextual understandings of affordances-in-practice, and (iii) have follow-on implications for attempts at regulation, both by entities promoting the object and by institutional forms of governing. We hope future research can develop this methodology in ways that combine the best of recent progressions of walkthrough methods and Star’s concept of the boundary object to enhance capacity for understanding boundary objects of the internet. This paper has offered one such step by walking through the interfaces that craft user perceptions of encountering and deciding to interact with the VPN boundary object.

References 

Basques, K. (2015). Why HTTPS matters. Google Web.Dev. https://web.dev/why-https-matters/

Bowker, G. C., Timmermans, S., Clarke, A. E., & Balka, E. (Eds.). (2016). Boundary objects and beyond: Working with Leigh Star. MIT Press.

Costa, E. (2019). Affordances-in-practice: An ethnographic critique of social media logic and context collapse. New Media & Society, 20(10). https://doi.org/10.1177/1461444818756290

Crawford, D. (2018). ProPrivacy’s VPN Review Process Overview. ProPrivacy. https://proprivacy.com/vpn/guides/review-process-overview

DeNardis, L. (2014). The global war for internet governance. Yale University Press. https://doi.org/10.12987/yale/9780300181357.001.0001

Digital Science & Research Solutions. (2020). Overview: Language, Communication, and Culture; Boundary Object. Dimensions. https://app.dimensions.ai/analytics/publication/overview/timeline?search_mode=content&search_text=%22Boundary%20Object%22&search_type=kws&search_field=full_search&or_facet_for=2220

Foucault, M. (1994). Dits et écrits: Vol. IV. Gallimard.

Fox, N. J. (2011). Boundary Objects, Social Meanings and the Success of New Technologies. Sociology, 45(1), 70–85. https://doi.org/10.1177/0038038510387196

Gal, N. (2018). Ironic humor on social media as participatory boundary work. New Media & Society, 21(3), 729–749. https://doi.org/10.1177/1461444818805719

Gill, L. (2018). Law, Metaphor, and the Encrypted Machine. Osgoode Hall Law Journal, 55(2). https://digitalcommons.osgoode.yorku.ca/ohlj/vol55/iss2/3/

Heemsbergen, L. (2019). Killing secrets from Panama to Paradise: Understanding the ICIJ through bifurcating communicative and political affordances. New Media & Society, 21(3), 693–711. https://doi.org/10.1177/1461444818804847

Huvila, I. (2019). Authoring social reality with documents: From authorship of documents and documentary boundary objects to practical authorship. Journal of Documentation, 75(1), 44–61. https://doi.org/10.1108/JD-04-2018-0063

Hwang, T., & Levy, K. (2015). ‘The cloud’ and other dangerous metaphors. The Atlantic. https://www.theatlantic.com/technology/archive/2015/01/the-cloud-and-other-dangerous-metaphors/384518/

Ikram, M., Vallina-Rodriguez, N., & Seneviratne, S. (2016). An analysis of the privacy and security risks of android vpn permission-enabled apps. Proceedings of the 2016 Internet Measurement Conference, 349–364. https://doi.org/10.1145/2987443.2987471

Johnstone, B. (2018). Discourse analysis (3rd ed.). John Wiley & Sons.

Jones, R. H., Chik, A., & Hafner, C. A. (Eds.). (2015). Discourse and digital practices: Doing discourse analysis in the digital age. Routledge. https://doi.org/10.4324/9781315726465

Lakoff, G. (1987). Women, fire, and dangerous things: What categories reveal about the mind. University of Chicago Press.

Latour, B. (2005). Reassembling the social: An introduction to actor-network-theory. Oxford University Press.

Star, S. L. (2010). This is not a boundary object: Reflections on the origin of a concept. Science, Technology, & Human Values, 35(5), 601–617. https://doi.org/10.1177/0162243910377624

Lewis, S. C., & Usher, N. (2016). Trading zones, boundary objects, and the pursuit of news innovation: A case study of journalists and programmers. Convergence, 22(5), 543–560. https://doi.org/10.1177/1354856515623865

Lidberg, J., & Muller, D. (Eds.). (2018). In the name of security: Secrecy, surveillance and journalism. Anthem Press.

Light, B., Burgess, J., & Duguay, S. (2018). The Walkthrough Method: An Approach to the Study of Apps. New Media & Society, 20(3), 881–900. https://doi.org/10.1177/1461444816675438

Lingel, J. (2020). An Internet for the People: The Politics and Promise of Craigslist. Princeton University Press. https://doi.org/10.23943/princeton/9780691188904.001.0001

Lobato, R., & Meese, J. (2016). Australia: Circumvention goes mainstream. In R. Lobato & J. Meese (Eds.), Geoblocking and global video culture. Institute of Network Cultures.

Mann, M., Daly, A., Wilson, M., & Suzor, N. (2018). The Limits of (Digital) Constitutionalism: Exploring the Privacy-Security (Im)Balance in Australia. International Communication Gazette, 80(4), 369–384. https://doi.org/10.1177/1748048518757141

Mautner, G. (2005). Time to get wired: Using web-based corpora in critical discourse analysis. Discourse & Society, 16(6), 809–828. https://doi.org/10.1177/0957926505056661

Mol, A. (1999). Ontological politics. A word and some questions. Sociological Review, 47(1), 74–89. https://doi.org/10.1111/j.1467-954X.1999.tb03483.x

Molnar, A. (2017). Technology, Law, and the Formation of (il)Liberal Democracy? Surveillance & Society, 15(3/4), 381–388. https://doi.org/10.24908/ss.v15i3/4.6645

NordVPN. (2020). Social Responsibility: Promoting equal opportunities in the digital age. NordVPN. https://nordvpn.com/social-responsibility/

Olsen, J.-K. B., & Selinger, E. (Eds.). (2007). Philosophy of technology: 5 questions. Automatic Press.

Østerlund, C., & Crowston, K. (2019). Documentation and access to knowledge in online communities: Know your audience and write appropriately? Journal of the Association for Information Science and Technology. https://doi.org/10.1002/asi.24152

Poulsen, S. V. (2018). Becoming a semiotic technology – a historical study of Instagram’s tools for making and sharing photos and videos. Internet Histories, 2(1–2), 121–139. https://doi.org/10.1080/24701475.2018.1459350

ProtonVPN. (2017). Understanding the VPN Threat Model [Blog post]. ProtonVPN Blog. https://protonvpn.com/blog/threat-model/

Rancière, J. (2006). Hatred of democracy (S. Corcoran, Trans.). Verso.

Rheingold, H. (1993). The virtual community: Homesteading on the electronic frontier. Addison-Wesley Publishing Company.

Rogers, R. (2013). Digital methods. MIT Press.

Rogers, R. (2019). Doing digital methods. SAGE Publications Limited.

Shepherd, T. (2018). Discursive Legitimation in the Cultures of Internet Policymaking. Communication Culture & Critique, 11(2), 231–246. https://doi.org/10.1093/ccc/tcx020

Footnotes

1. IPsec refers to ‘Internet Protocol Security’, a secure network protocol used to authenticate and encrypt data packets between two computers over an internet protocol network. SSTP refers to ‘Secure Socket Tunneling Protocol’, a secure network protocol that also encrypts data packets, in this case between a VPN client and a VPN server.

2. While VPNs might have legal consequences for various reasons not limited to privacy and data protection questions, including intellectual property concerns, our data does not show that.

Cybersecurity


This article belongs to Concepts of the digital society, a special section of Internet Policy Review guest-edited by Christian Katzenbach and Thomas Christian Bächle.

Introduction

Cybersecurity1 covers the broad range of technical, organisational and governance issues that must be considered to protect networked information systems against accidental and deliberate threats. It goes well beyond the details of encryption, firewalls, anti-virus software, and similar technical security tools. This breadth is captured in the widely used International Telecommunication Union (ITU) definition (ITU-T, 2008, p. 2):

Cybersecurity is the collection of tools, policies, security concepts, security safeguards, guidelines, risk management approaches, actions, training, best practices, assurance and technologies that can be used to protect the cyber environment and organization and user’s assets. Organization and user’s assets include connected computing devices, personnel, infrastructure, applications, services, telecommunications systems, and the totality of transmitted and/or stored information in the cyber environment. Cybersecurity strives to ensure the attainment and maintenance of the security properties of the organization and user’s assets against relevant security risks in the cyber environment

The importance of cybersecurity has increased as so many government, business, and day-to-day activities around the world have moved online. But especially in emerging economies, “[m]any organizations digitizing their activities lack organizational, technological and human resources, and other fundamental ingredients needed to secure their system, which is the key for the long-term success” (Kshetri, 2016, p. 3).

The more technically focused term information security is still in widespread use in computer science. But as these issues have become of much greater societal concern while “software is eating the world” (Andreessen, 2011), cybersecurity has become more frequently used, not only in the rhetoric of democratic governments, as in the 2000s, but also in the general academic literature (shown in Figure 1):

Figure 1: Academic articles with cybersecurity/cyber-security/cyber security versus information security, data security and computer security in title, keywords or abstract of Web of Science indexed publications over time. Small numbers of records exist for both information security and computer security in the database since 1969. Data from Web of Science.

Barely used in academic literature before 1990 (except in relation to the CDC CYBER 205 supercomputer of the late 1970s), cyber became ubiquitous as a prefix, adjective and even noun by the mid-1990s, with Google Scholar returning results across a broad range of disciplines with titles such as ‘Love, sex, & power on the cyber frontier’ (1995), ‘Surfing in Seattle: What cyber-patrons want’ (1995), ‘The cyber-road not taken’ (1994) and even the ‘Cyber Dada Manifesto’ (1991).

It evolved from Wiener’s cybernetics, a “field of control and communication theory, whether in the machine or in the animal” (1948)—derived from the Greek word for ‘steersman’—with an important intermediate point being the popular usage of cyborg, a contraction of cybernetic organism, alongside the Czech-derived robot (Clarke, 2005, section 2.4). The notion of a ‘governor’ of a machine goes back to the mid-19th century, with J. C. Maxwell (the physicist best known for electromagnetic theory) noting in 1868 that it is “a part of a machine by means of which the velocity of the machine is kept nearly uniform, notwithstanding variations in the driving-power or the resistance” (Maxwell, 1868, p. 270)—what Wiener called homeostasis.

The term cyberspace, referring to the electronic communications environment, was coined in William Gibson’s 1982 short story Burning Chrome (“widespread, interconnected digital technology”) and popularised by his 1984 science fiction novel Neuromancer (“a graphic representation of data abstracted from the banks of every computer in the human system […] lines of light ranged in the nonspace of mind, clusters and constellations of data […] a consensual hallucination experienced by millions”). Cyberspace’s arrival in legal and policy discussions was spearheaded by John Perry Barlow’s Declaration of the Independence of Cyberspace (1996). But by 2000, Gibson declared cyberspace was “evocative and essentially meaningless ... suggestive ... but with no real meaning” (Neale, 2000).

Despite its ubiquity in present-day national security and defence-related discussions, Wagner and Vieth found: “Cyber and cyberspace, however, are not synonymous words and have developed different meanings [...] Cyber is increasingly becoming a metaphor for threat scenarios and the necessary militarisation” (2016). Matwyshyn suggested the term is “the consequence of a cultural divide between the two [US] coasts: ‘cybersecurity’ is the Washington, D.C. legal rebranding for what Silicon Valley veterans have historically usually called ‘infosec’ or simply ‘security’” (2017, p. 1158). Cybersecurity issues have, to many whose interests are served by the interpretation, become national security issues (Clarke, 2016; Kemmerer, 2003; Nissenbaum, 2005).

A review by Craigen et al. (2014) found cybersecurity used in a range of literature and fields from 2003 onwards, including software engineering, international relations, crisis management and public safety. Social scientists interacting with policymakers, and academics generally applying for research and translation funding from government sources and interacting with the defence and signals intelligence/information security agencies that are the cybersecurity centres of expertise in many larger governments, have further popularised the term, 2 which appears in similar form in many languages, as shown in Appendix 1.

Looking beyond academia to literature more widely, Figure 2 shows computer security was most prevalent in the Google Books corpus from 1974, overtaken by information security in 1997, and cybersecurity in 2015 (with cyber security increasingly popular since 1996, but cyber-security negligible the entire period). Computer (Ware, 1970), system, and data (Denning, 1982) security were all frequently used as closely-related terms in the 1970s (Saltzer & Schroeder, 1975). 3

Figure 2: Google n-gram analysis (Lin et al., 2012) of the usage of variants of information security over time. Cybersecurity encompasses cybersecurity, cyber security and cyber-security. Retrieved using ngramr (Carmody, 2020).

This trend is unfortunate, since “using the term ‘cybersecurity’ seems to imply that information security issues are limited to code connected to the Internet [but] physical security of machines and human manipulability through social engineering are always key aspects of information security in both the private and public sector” (Matwyshyn, 2017, p. 1156).

Cybersecurity in early context

In computer science, attacks on the security of information systems are usually concerned with:

  • Breaching the confidentiality of systems, exposing data to unauthorised actors;
  • Undermining the integrity of systems, disrupting the accuracy, consistency or trustworthiness of the information being processed;
  • Affecting the availability of systems, rendering them offline, unusable or non-functional.

Together, confidentiality, integrity and availability are called the CIA triad, and have been the basis of information security since the late 1970s (Neumann et al., 1977, pp. 11–14). Echoing this history decades later, the Council of Europe’s 2001 Budapest Convention on Cybercrime set out in its first substantive section “Offences against the confidentiality, integrity and availability of computer data and systems”.
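
Purely as an illustration (the pairings below are our own shorthand, not drawn from the cited sources), the triad can be read as a checklist matching each property with a canonical attack against it:

    # Illustrative mapping of the CIA triad to canonical attacks on each property.
    CIA_TRIAD = {
        "confidentiality": "a data breach exposing records to unauthorised actors",
        "integrity": "tampering with stored or in-transit data",
        "availability": "a denial-of-service attack rendering a system unusable",
    }

    for security_property, example_attack in CIA_TRIAD.items():
        print(f"{security_property}: e.g., {example_attack}")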

Cybersecurity across disciplines

The study and practice of cybersecurity spans a range of disciplines and fields. In this article, we consider three of the main angles important to cybersecurity practice: technical aspects; human factors; and legal dimensions. This is necessarily an incomplete list—notably, the topic is also studied by those interested in, for example, how it reconfigures organisational structures (information systems), relationships between actors such as states (international relations), or the activities of significant non-state actors such as organised crime gangs (criminology).

Technical aspects

Many technical domains are of direct relevance to cybersecurity, but the field designed to synthesise technical knowledge in practical contexts has become known as security engineering: “building systems to remain dependable in the face of malice, error, or mischance” (Anderson, 2008, p. 3). It concerns the confluence of four aspects—policy (the security aim), mechanisms (technologies to implement the policy), assurance (the reliability of each mechanism) and incentives (of both attackers and defenders). Security engineers may be intellectually grounded in a specialised technical domain, but they require a range of bridging and boundary skills between other disciplines of research and practice.

A daunting (and worsening) challenge for security engineers is posed by the complexities of the sociotechnical environments in which they operate. Technological systems have always evolved and displayed interdependencies, but today infrastructures and individual devices are networked and co-dependent in ways which challenge any ability to unilaterally “engineer” a situation. Systems are increasingly servitised (e.g., through external APIs), with information flows not under the control of the system engineer, and code subject to constant ‘agile’ evolution and change which may undermine desired system properties (Kostova et al., 2020).

Human factors and social sciences

The field of human factors in cybersecurity grew from the observation that much of the time “hackers pay more attention to the human link in the security chain than security designers” (Adams & Sasse, 1999, p. 41), leaving many sensitive systems wide open to penetration by “social engineering” (Mitnick & Simon, 2002).

It is now very problematic to draw cybersecurity’s conceptual boundaries around an organisation’s IT department, software vendors and employer-managed hardware, as in practice networked technologies have permeated and reconfigured social interactions in all aspects of life. Users often adapt technologies in unexpected ways (Silverstone & Hirsch, 1992) and create their own new networked spaces (Cohen, 2012; Zittrain, 2006), reliant on often-incomprehensible security tools (Whitten & Tygar, 1999) that merely obstruct individuals in carrying out their intended tasks (Sasse et al., 2001). Networked spaces to be secured—the office, the university, the city, the electoral system—cannot be boxed-off and separated from technology in society more broadly. Communities often run their networked services, such as a website, messaging group, or social media pages, without dedicated cybersecurity support. Even in companies, or governments, individuals or groups with cybersecurity functions differ widely in location, autonomy, capabilities, and authority. The complexity of securing such a global assemblage, made up of billions of users as well as hundreds of millions of connected devices, has encouraged a wider cross-disciplinary focus on improving the security of these planetary-scale systems, with social sciences as an important component (Chang, 2012).

Research focussed on the interaction between cybersecurity and society has also expanded the relevant set of risks and actors involved. While the term cybersecurity is often used interchangeably with information security (and thus in terms of the CIA triad), this only represents a subset of cybersecurity risks.

Insofar as all security concerns the protection of certain assets from threats posed by attackers exploiting vulnerabilities, the assets at stake in a digital context need not just be information, but could, for example, be people (through cyberbullying, manipulation or intimate partner abuse) or critical infrastructures (von Solms & van Niekerk, 2013). Moreover, traditional threat models in both information and cybersecurity can be limited. For example, domestic abusers are rarely considered as a threat actor (Levy & Schneier, 2020) and systems are rarely designed to protect their intended users from the authenticated but adversarial users typical in intimate partner abuse (Freed et al., 2018).

The domain of cyber-physical security further captures the way in which cybersecurity threats interact with physically located sensors and actuators. A broader flavour of definition than has been previously typical is used in the recent EU Cybersecurity Act (Regulation 2019/881), which in Article 2(1) defines cybersecurity as “the activities necessary to protect network and information systems, the users of such systems, and other persons affected by cyber threats” [emphasis added]. The difficult interaction between information systems, societies and environments is rapidly gaining traction in the research literature.

Research at the intersection of human–computer interaction and cybersecurity has also pointed to challenges of usability and acceptability in deploying approaches developed in fields such as security engineering. Consider the encryption of information flowing across the internet using Transport Layer Security (TLS), a protocol which is able to cryptographically authenticate the endpoints and protect the confidentiality and integrity of transmitted data. TLS raises usability challenges in relation to developers’ and administrators’ understanding of how it works and thus how to correctly implement it (Krombholz et al., 2017, 2019) as well as challenges with communicating its properties—and what to do in its absence—to end users in their web browsers (Felt et al., 2015; Reeder et al., 2018). Focusing on the user experience of the web browser, Camp (2013) suggests principles of translucent security: high security defaults, single-click override, context-specific settings, personalised settings, and use-based settings.
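
A minimal sketch using only the Python standard library (our illustration; the hostname is arbitrary) makes visible what TLS quietly does for every such connection: authenticate the server’s certificate against trusted authorities and negotiate an encrypted channel, none of which is ordinarily surfaced to the end user.

    # Minimal sketch of establishing a TLS connection with the Python standard
    # library. create_default_context() loads the system's trusted certificate
    # authorities and enables both certificate verification and hostname checking.
    import socket
    import ssl

    hostname = "example.org"  # arbitrary illustrative host
    context = ssl.create_default_context()

    with socket.create_connection((hostname, 443)) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname=hostname) as tls_sock:
            print(tls_sock.version())              # negotiated protocol, e.g. 'TLSv1.3'
            certificate = tls_sock.getpeercert()   # the authenticated server certificate
            print(certificate.get("subject"))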

Related challenges faced by both users and developers or other specialists are found widely across the cybersecurity field, including passwords (e.g., Naiakshina et al., 2019) and encrypted email (Whitten & Tygar, 1999). The field of usable security seeks a fit between the security task and the humans expected to interact with it (Sasse et al., 2001). Without an understanding of issues such as these, the techniques used can bring at best a false sense of security, and at worst, entirely new threat vectors.

Legal dimensions

While few laws explicitly state they are governing cybersecurity, cybersecurity–related provisions are found in an extremely wide array of instruments. Law might incentivise or require certain cybersecurity practices or standards; apply civil or criminal sanctions, or apportion liability, for persons experiencing or taking action which leads to cybersecurity breaches; mandate practices (such as information sharing or interoperability) that themselves have cybersecurity implications; or create public advisory or enforcement bodies with cybersecurity responsibilities.

Data protection and privacy laws generally contain varied provisions with cybersecurity implications. They are, at the time of writing, present in 142 countries around the world (Greenleaf & Cottier, 2020) as well as promoted by the Council of Europe’s Convention 108+ and model laws from several international organisations, such as the Commonwealth (Brown et al., 2020). They often, although not always, span both the public and private sectors, with common stipulations including the creation of an independent supervisory authority; overarching obligations to secure ‘personal’ data or information, often defined by reference to its potential identifiability; data breach notification requirements; obligations to design in enforcement of data protection principles and appoint a data protection officer; and rights that can be triggered by individuals to access, manage and if they wish, erase identifiable data that relates to them.

Other specific laws also contain cybersecurity breach notification (to users and/or regulators) and incident requirements scoped beyond personal data, such as the European eIDAS Regulation (Regulation 910/2014, concerning identity and trust providers) and Network and Information Security Directive (Directive 2016/1148, concerning essential infrastructure, including national infrastructure such as electricity and water as well as ‘relevant digital service providers’, meaning search engines, online marketplaces and cloud computing). While lacking an omnibus federal data protection law, all 50 US states have some form of data breach law, although their precise requirements vary (Kosseff, 2020, Appendix B).

In the EU, the law that would seem the most likely candidate for a horizontal regime is the 2019 Cybersecurity Act (Regulation 2019/881). It, however, provides little of real substantive interest, mainly increasing the coordination and advisory mandates of ENISA, the EU’s cybersecurity agency, and laying the foundation for a state-supported but voluntary certification scheme.

A grab-bag of highly specific cybersecurity laws also exists, such as the California Internet of Things Cybersecurity Law, aimed mostly at forbidding devices from using generic passwords (Cal. Civ. Code § 1798.91.04). These reactive, ad-hoc instruments are often not technologically neutral: they may have clarity and legal certainty in the current situation, but may not be sustainable as technologies change, for example, away from passwords (Koops, 2006). On the other hand, generic laws have also, over time, morphed into cybersecurity laws. The Federal Trade Commission in the US penalises companies for exceptionally poor data security practices under the prohibition of “unfair or deceptive practices” in the FTC Act (15 U.S.C. § 45).

There are, however, limits to the ability of generic laws to morph into cybersecurity laws. Computer misuse laws emerged in legal regimes in part due to the limitations of existing frameworks in capturing digital crime. Before the mid-1980s, the main avenue to prosecuting computer misuse in the US was theft (Kerr, 2003), a rationale which proved strained and unpredictable. The UK saw unsuccessful attempts to repurpose the law of forgery against unauthorised password use (R v Gold [1988] AC 1063), leading to the passing of the Computer Misuse Act 1990.

The US has struggled with the concept of ‘unauthorised’ access in its law. Offences in the Computer Fraud and Abuse Act (CFAA) of 1984 typically occur when individuals enter systems without authorisation, or where they exceed authorised access, mimicking laws of trespass (Kerr, 2016). But the notion of authorisation in digital systems quickly becomes tricky. If a website is designed such that sensitive information is discoverable by typing in a long URL (a problematic “security through obscurity” approach), without any authentication mechanism, is there implicit authorisation? Is an address bar more like a password box, where guessing someone else’s entry is telling about your motive to access unauthorised material, or more like a telephone keypad or a map, where the user is simply exploring?

The CFAA has also created tensions based on its interaction with a site’s terms of service (ToS). This tension centres on whether authorisation is revoked based on statements in these long, legalistic documents that few people read. For example, such documents often preclude web scraping in broad, vague language (Fiesler et al., 2020), and despite over sixty legal opinions over the last two decades, the legal status of scraping remains “characterized as something just shy of unknowable, or a matter entirely left to the whims of courts” (Sellars, 2018, p. 377). This becomes highly problematic for firms, researchers or journalists, as computer misuse law may effectively turn potential civil liability for breach of contract into criminal liability under the CFAA.

As a consequence, scholars such as Orin Kerr have argued that only the bypassing of authentication requirements, such as stealing credentials or spoofing a log-in cookie, should be seen as creating a lack of authorisation under the CFAA (Kerr, 2016). This contrasts with messy existing case law, which includes prosecution on the basis that an IP address was changed (as IP addresses often are, by design) to avoid a simple numeric IP block. Contingent and subjective social aspects of cybersecurity law will remain, both in computer misuse and in other areas, even if this argument were accepted.

Legal instruments around cybercrime and cybersecurity more generally continue to develop—the Council of Europe’s Budapest Convention on Cybercrime was concluded in 2001, seeking to harmonise cybercrime legislation and facilitate international cooperation, and drawing on experiences and challenges of earlier cybersecurity and cybercrime law. It has been ratified/acceded to by 65 countries including the US, which has only ever ratified three Council of Europe treaties. However, the further development of legal certainty in areas of cybersecurity will require yet clearer shared norms of how computing systems, and in particular, the internet, should be used.

Cybersecurity’s broader impact

Here, we select and outline just two broader impacts of cybersecurity—its link to security-thinking in other domains of computing and society, and its effect on institutional structures.

(Cyber)securitisation

While computer security narrowly focussed on the CIA triad, the cybersecurity concept expanded towards national security, the use of computers for societally harmful activities (e.g., hatred and incitement to violence; terrorism; child sexual abuse), and attacks on critical infrastructures, including the internet itself (Nissenbaum, 2005). The privileged role of technical experts and discourse inside computer security has given technical blessing to this trend of securitisation (Hansen & Nissenbaum, 2009, p. 1167).

Security is not new to technification, as ‘Cold War rationality’ showed (Erickson et al., 2013). Yet not only have technical approaches arguably been able to take a more privileged position in cybersecurity than any other security sector (Hansen & Nissenbaum, 2009, p. 1168), their success in raising salience through securitisation has resonated widely across computing issues.

For example, privacy engineering has a dominant strand focussing on quantitative approaches to confidentiality, such as minimising theoretical information leakage (Gürses, 2014); while algorithmic fairness and anti-discrimination engineering has also emerged as a similar (and controversial) industry-favoured approach to issues of injustice (Friedler et al., 2019; see Gangadharan & Niklas, 2019). Gürses connects the engineering of security, privacy, dependability and usability—an ideal she claims “misleadingly suggests we can engineer social and legal concepts” (Gürses, 2014, p. 23).
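To make concrete what a 'quantitative approach to confidentiality' can look like, the sketch below adds calibrated Laplace noise to a count (a standard differential-privacy technique). It is offered purely as an illustration of the genre, not as a method proposed by Gürses (2014); the function names are our own.

    import numpy as np

    def dp_count(values, epsilon: float) -> float:
        """Release a count with Laplace noise; a counting query has sensitivity 1,
        so the noise scale is 1/epsilon. Smaller epsilon means more noise and less
        theoretical information leakage about any individual."""
        true_count = int(np.sum(np.asarray(values) > 0))
        noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
        return true_count + noise

    print(dp_count([1, 0, 1, 1, 0, 1], epsilon=0.5))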

These echoes may have their origins in the very human dimensions of these fast-changing areas, as organisations seek to apply or redeploy employees with security skill sets shaped by strong professional pressures to these recently salient problems (DiMaggio & Powell, 1983), as well as the hype-laden discourse of cybersecurity identified as fuelling a range of problems in the field (Lee & Rid, 2014). While these areas may not yet be able to be considered securitised, insofar as neither privacy nor discrimination is commonly politically positioned as an existential threat to an incumbent political community (Buzan et al., 1998; Cavelty, 2020; see Hansen & Nissenbaum, 2009), neither can they be said to be unaffected by the way cybersecurity and national security, and the forms of computing knowledge and practice considered legitimate in those domains, have co-developed over recent decades.

Institutions

Requirements of cybersecurity knowledge and practice have led states to create new institutions to meet perceived needs for expertise. The location of this capacity differs. In some countries, there may be significant public sector capacity and in-house experts. Universities may have relevant training pipelines and world-leading research groups. In others, cybersecurity might not be a generic national specialism. In these cases, cybersecurity expertise might lie in sector-specific organisations, such as telecommunications or financial services companies, which may or may not be in public hands.

Some governments have set up high-level organisations to co-ordinate cybersecurity capacity-building and assurance in public functions, such as the Australian Cyber Security Centre, the Canadian Centre for Cyber Security, the National Cyber Security Centre (UK and Ghana—soon to become an Authority) and the Cyber Security Agency (Singapore). A new Cybersecurity Competence Centre for the EU is set to be based in Bucharest. Relatedly, and sometimes independently or separately, countries often have cybersecurity strategy groups sitting under the executive (Brown et al., 2020).

Cybersecurity agencies can find themselves providing more general expertise than simply security. During the COVID-19 pandemic, for example, the first version of the UK’s National Health Service (NHS) contact tracing app for use in England had considerable broad technical input from the government’s signals intelligence agency GCHQ and its subsidiary body the National Cyber Security Centre, which was considered a data controller under UK data protection law (Levy, 2020). Relatedly, these agencies have also been called upon to give advice in various regimes to political parties who are not currently in power—a relationship that would be challenging in countries where peaceful transitions of power cannot be easily taken for granted, particularly given many of these institutions’ close links with national security agencies which may have politically-motivated intelligence operations (Brown et al., 2020).

National Computer Security Incident Response Teams (CSIRTs) are a relatively recent form of institution, which act as a coordinator and a point of contact for domestic and international stakeholders during an incident. Some of these have been established from scratch, while others have been elevated from existing areas of cybersecurity capacity within their countries (Maurer et al., 2015). These expert communities, trusted clearing houses of security information, are found in many countries, sectors and networks, with 109 national CSIRTs worldwide as of March 2019 (International Telecommunication Union, 2019).

CSIRTs can play important international roles, although as they are infrequently enshrined in or required by law, they often occupy a somewhat unusual quasi-diplomatic status (Tanczer et al., 2018). Under the EU’s Network and Information Security (NIS) Directive, however, all 27 member states must designate a national CSIRT, with ENISA playing a coordinating role.

Some researchers have expressed a more sceptical view of CSIRTs, with Roger Clarke telling the authors: “Regrettably, in contemporary Australia, at least, the concept has been co-opted and subverted into a spook sub-agency seeking ever more power to intrude into the architecture and infrastructure of telecommunications companies, and whatever other ‘critical infrastructure’ organisations take their fancy. Would you like a real-time feed of the number-plates going under toll-road gantries? Easily done!” (personal communication, September 2020).

Conclusion

Understanding cybersecurity is a moving target, just like understanding computing and society. Exactly what is being threatened, how, and by whom are all in flux.

While many may still look on with despair at the insecurities in modern systems, few computing concepts excite politicians more. It is hardly surprising to see the language of security permeate other computing policy concepts as a frame. Politicians talk of keeping the internet safe, dealing with privacy breaches, and defending democracies against information warfare. This makes cybersecurity an important concept for scholars to study and understand, and its legal and institutional adventures instructive for the development of neighbouring domains (although perhaps not always as the best template to follow). Its tools and methodological approach are also a useful training ground for interdisciplinary scholars to gain the skills required to connect and work across social, legal and technical domains.

In a 2014 review, three Canadian Communications Security Establishment science and learning advisers (Craigen et al., 2014) concluded cybersecurity is “used broadly and its definitions are highly variable, context-bound, often subjective, and, at times, uninformative”. In 2017, Matwyshyn noted “‘cyberized’ information security legal discourse makes the incommensurability problems of security worse. It exacerbates communication difficulty and social distance between the language of technical information security experts on the one hand, and legislators, policymakers and legal practitioners on the other” (Matwyshyn, 2017, p. 1150).

It is not clear the situation has since improved in this regard. Cybersecurity has become a catch-all term, attached to the prevention of a very wide range of societal harms seen to be related to computing and communications tools now omnipresent in advanced economies, and increasingly prevalent in emerging economies. There are concerns this has led to a militarisation (Wagner & Vieth, 2016) or securitisation of the concept, and hence of the measures taken by states as a result. (The UK Ministry of Defence trumpeted the launch of its “first cyber regiment” in 2020.) And the large-scale monitoring capabilities of many cybersecurity tools have led to serious concerns about their impact on human rights (Korff, 2019).

Meanwhile, many computer and social scientists publicly mock 4 the notion of cyber and cyberspace as a separate domain of human action (Graham, 2013). Rid (2016, chapter 9) noted even Wiener “would have disdained the idea and the jargon. The entire notion of a separate space, of cordoning off the virtual from the real, is getting a basic tenet of cybernetics wrong: the idea that information is part of reality, that input affects output and output affects input, that the line between system and environment is arbitrary”. Matwyshyn concluded “[s]ecurity experts fear that in lieu of rigorously addressing the formidable security challenges our nation faces, our legal and policy discussions have instead devolved into a self-referential, technically inaccurate, and destructively amorphous “cyber-speak,” a legalistic mutant called “cybersecurity”” (p. 1154).

We have now described how notions relating to the protection of information systems—and all the societal functions those systems now support—are increasingly significant in both academic literature and the broader public and policy discourse. The development of the “Internet of Things” will add billions of new devices over time to the internet, many with the potential to cause physical harm, which will further strengthen the need for security engineering for this overall system (Anderson, 2018).

There appears little likelihood of any clear distinctions developing at this late stage between information security and cybersecurity in practice. It may be that the former simply falls out of common usage in time, as computer security slowly has since 2010—although those with security capabilities (a.k.a. state hacking) still stick resolutely with cyber.

Anderson suggests the continued integration of software into safety-critical systems will require a much greater emphasis on safety engineering, and protection of the security properties of systems like medical devices (even body implants) and automotive vehicles for decades—in turn further strengthening political interest in the subject (2021, p. 2).

Martyn Thomas, a well-known expert in safety-critical system engineering, told us (personal communication, September 2020):

Rather than attackers increasingly finding new ways to attack systems, the greater threat is that developers increasingly release software that contains well-known vulnerabilities – either by incorporating COTS (commercial off-the-shelf) components and libraries with known errors, or because they use development practices that are well known to be unsafe (weakly typed languages, failure to check and sanitise input data, etc.). So, the volume of insecure software grows, and the pollution of cyberspace seems unstoppable.

Powerful states (particularly the US) have since at least the 1970s used their influence over the design and production of computing systems to introduce deliberate weaknesses in security-critical elements such as encryption protocols and libraries (Diffie & Landau, 2010), and even hardware (Snowden, 2019). The US CIA and NSA Special Collection Service “routinely intercepts equipment such as routers being exported from the USA, adds surveillance implants, repackages them with factory seals and sends them onward to customers” (Anderson, 2020, p. 40). It would be surprising if other states did not carry out similar activities.

In the long run, as with most technologies, we will surely take the cyber element of everyday life for granted, and simply focus on the safety and security (including reliability) of devices and systems that will become ever more critical to our health, economies, and societies.

Acknowledgements

The authors thank Roger Clarke, Alan Cox, Graham Greenleaf, Douwe Korff, Chris Marsden, Martyn Thomas and Ben Wagner for their helpful feedback, and all the native speakers who shared their linguistic knowledge.

References

Adams, A., & Sasse, M. A. (1999). Users are not the enemy. Communications of the ACM, 42(12), 40–46. https://doi.org/10.1145/322796.322806

Anderson, R. (2008). Security Engineering: A Guide to Building Dependable Distributed Systems (2nd ed.). Wiley.

Anderson, R. (2018). Making Security Sustainable. Communications of the ACM, 61(3), 24–26. https://doi.org/10.1145/3180485

Anderson, R. (2020). Security Engineering: A Guide to Building Dependable Distributed Systems (3rd ed.). Wiley.

Andreessen, M. (2011, August 20). Why Software Is Eating The World. The Wall Street Journal. https://www.wsj.com/articles/SB10001424053111903480904576512250915629460

Baran, P. (1960). Reliable Digital Communications Systems Using Unreliable Network Repeater Nodes (P-1995 Paper). The RAND Corporation. https://www.rand.org/pubs/papers/P1995.html

Barlow, J. P. (1996). A declaration of the independence of cyberspace. https://www.eff.org/cyberspace-independence

Bell, D. E., & LaPadula, L. J. (1973). Secure Computer Systems: Mathematical Foundations (Technical Report No. 2547). MITRE Corporation.

Biba, K. J. (1975). Integrity Considerations for Secure Computer Systems (Technical Report MTR-3153). MITRE Corporation.

Brown, I., Marsden, C. T., Lee, J., & Veale, M. (2020). Cybersecurity for elections: A Commonwealth guide on best practice. Commonwealth Secretariat. https://doi.org/10.31228/osf.io/tsdfb

Buzan, B., Wæver, O., & De Wilde, J. (1998). Security: A new framework for analysis. Lynne Rienner Publishers.

Camp, P. L. J. (2013). Beyond usability: Security Interactions as Risk Perceptions [Position paper]. https://core.ac.uk/display/23535917

Carmody, S. (2020). ngramr: Retrieve and Plot Google n-Gram Data (1.7.2) [Computer software]. https://CRAN.R-project.org/package=ngramr

Cavelty, M. D. (2020). Cybersecurity between hypersecuritization and technological routine. In E. Tikk & M. Kerttunen (Eds.), Routledge Handbook of International Cybersecurity (1st ed., pp. 11–21). Routledge. https://doi.org/10.4324/9781351038904-3

Chang, F. R. (2012). Guest Editor’s Column. The Next Wave, 19(4). https://www.nsa.gov/Portals/70/documents/resources/everyone/digital-media-center/publications/the-next-wave/TNW-19-4.pdf

Clark, D. D., & Wilson, D. R. (1987). A Comparison of Commercial and Military Computer Security Policies. Proceedings of the 1987 IEEE Symposium on Security and Privacy, 184–194. https://doi.org/10.1109/SP.1987.10001

Clarke, R. (2005, May 9). Human-Artefact Hybridisation: Forms and Consequences. Ars Electronica 2005 Symposium, Linz, Austria. http://www.rogerclarke.com/SOS/HAH0505.html

Clarke, R. (2016). Privacy Impact Assessments as a Control Mechanism for Australian National Security Initiatives. Computer Law & Security Review, 32(3), 403–418. https://doi.org/10.1016/j.clsr.2016.01.009

Clarke, R. (2017). Cyberspace, the Law, and our Future [Talk]. Issue Launch of Thematic Issue Cyberspace and the Law, UNSW Law Journal, Sydney. http://www.rogerclarke.com/II/UNSWLJ-CL17.pdf

Cohen, J. E. (2012). Configuring the Networked Self: Law, Code, and the Play of Everyday Practice. Yale University Press. http://juliecohen.com/configuring-the-networked-self

Craigen, D., Diakun-Thibault, N., & Purse, R. (2014). Defining Cybersecurity. Technology Innovation Management Review, 4(10), 13–21. https://doi.org/10.22215/timreview/835

Denning, D. E. R. (1982). Cryptography and data security. Addison-Wesley Longman Publishing Co., Inc.

Diffie, W., & Landau, S. (2010). Privacy on the Line: The Politics of Wiretapping and Encryption. MIT Press. https://library.oapen.org/handle/20.500.12657/26072

DiMaggio, P. J., & Powell, W. W. (1983). The Iron Cage Revisited: Institutional Isomorphism and Collective Rationality in Organizational Fields. American Sociological Review, 48(2), 147. https://doi.org/10.2307/2095101

Erickson, P., Klein, J. L., Daston, L., Lemov, R. M., Sturm, T., & Gordin, M. D. (2013). How Reason Almost Lost its Mind: The Strange Career of Cold War Rationality. The University of Chicago Press. https://doi.org/10.7208/chicago/9780226046778.001.0001

Felt, A. P., Ainslie, A., Reeder, R. W., Consolvo, S., Thyagaraja, S., Bettes, A., Harris, H., & Grimes, J. (2015). Improving SSL Warnings: Comprehension and Adherence. Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI '15), 2893–2902. https://doi.org/10.1145/2702123.2702442

Fiesler, C., Beard, N., & Keegan, B. C. (2020). No Robots, Spiders, or Scrapers: Legal and Ethical Regulation of Data Collection Methods in Social Media Terms of Service. Proceedings of the International AAAI Conference on Web and Social Media, 14(1), 187–196.

Freed, D., Palmer, J., Minchala, D., Levy, K., Ristenpart, T., & Dell, N. (2018). “A Stalker’s Paradise”: How Intimate Partner Abusers Exploit Technology. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI '18), Paper 667, 1–13. https://doi.org/10.1145/3173574.3174241

Friedler, S. A., Scheidegger, C., Venkatasubramanian, S., Choudhary, S., Hamilton, E. P., & Roth, D. (2019). A comparative study of fairness-enhancing interventions in machine learning. Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT* '19), 329–338. https://doi.org/10.1145/3287560.3287589

Gangadharan, S. P., & Niklas, J. (2019). Decentering technology in discourse on discrimination. Information, Communication & Society, 22(7), 882–899. https://doi.org/10.1080/1369118X.2019.1593484

Global Cyber Security Capacity Centre. (2016). Cybersecurity Capacity Maturity Model for Nations (CMM) Revised Edition. Global Cyber Security Capacity Centre, University of Oxford. https://doi.org/10.2139/ssrn.3657116

Graham, M. (2013). Geography/internet: Ethereal alternate dimensions of cyberspace or grounded augmented realities? The Geographical Journal, 179(2), 177–182. https://doi.org/10.1111/geoj.12009

Greenleaf, G., & Cottier, B. (2020). 2020 ends a decade of 62 new data privacy laws. Privacy Laws & Business International Report, 163, 24–26.

Grossman, W. (2017, June). Crossing the Streams: Lizzie Coles-Kemp. Research Institute for the Science of Cyber Security Blog.

Gürses, S. (2014). Can you engineer privacy? Communications of the ACM, 57(8), 20–23. https://doi.org/10.1145/2633029

Hansen, L., & Nissenbaum, H. (2009). Digital Disaster, Cyber Security, and the Copenhagen School. International Studies Quarterly, 53(4), 1155–1175. https://doi.org/10.1111/j.1468-2478.2009.00572.x

International Telecommunication Union. (2019, March). National CIRTs Worldwide [Perma.cc record]. https://perma.cc/MSL6-MSHZ

I.T.U.-T. (2008, April 18). X.1205: Overview of cybersecurity. https://www.itu.int/rec/T-REC-X.1205-200804-I

Kabanov, Y. (2014). Information (Cyber-) Security Discourses and Policies in the European Union and Russia: A Comparative Analysis (Working Paper WP 2014-01). Centre for German and European Studies (CGES). https://zdes.spbu.ru/images/working_papers/wp_2014/WP_2014_1–Kabanov.compressed.pdf

Kanwal, G. (2009). China’s Emerging Cyber War Doctrine. Journal of Defence Studies, 3(3).

Kemmerer, R. A. (2003). Cybersecurity. 25th International Conference on Software Engineering, 2003. Proceedings, 705–715. https://doi.org/10.1109/ICSE.2003.1201257

Kerr, O. S. (2003). Cybercrime’s Scope: Interpreting Access and Authorization in Computer Misuse Statutes. New York University Law Review, 78(5), 1596–1668.

Kerr, O. S. (2016). Norms of Computer Trespass. Columbia Law Review, 116, 1143–1184.

Koops, B.-J. (2006). Should ICT Regulation Be Technology-Neutral? In B.-J. Koops, C. Prins, M. Schellekens, & M. Lips (Eds.), Starting Points for ICT Regulation: Deconstructing Prevalent Policy One-liners (pp. 77–108). T.M.C. Asser Press.

Korff, D. (2019). First do no harm: The potential of harm being caused to fundamental rights and freedoms by state cybersecurity interventions. In Research Handbook on Human Rights and Digital Technology. Elgar.

Kosseff, J. (2020). Cybersecurity law (2nd ed.). Wiley. https://doi.org/10.1002/9781119517436

Kostova, B., Gürses, S., & Troncoso, C. (2020). Privacy Engineering Meets Software Engineering. On the Challenges of Engineering Privacy By Design. ArXiv. http://arxiv.org/abs/2007.08613

Krombholz, K., Busse, K., Pfeffer, K., Smith, M., & Zezschwitz, E. (2019). ‘If HTTPS Were Secure, I Wouldn’t Need 2FA’—End User and Administrator Mental Models of HTTPS. 2019 IEEE Symposium on Security and Privacy (SP), 246–263. https://doi.org/10.1109/sp.2019.00060

Krombholz, K., Mayer, W., Schmiedecker, M., & Weippl, E. (2017). ‘I Have No Idea What I’m Doing’—On the Usability of Deploying HTTPS. Proceedings of the 26th USENIX Security Symposium (USENIX Security '17), 1339–1356. https://www.usenix.org/conference/usenixsecurity17/technical-sessions/presentation/krombholz

Kshetri, N. (2016). Cybersecurity and Development. Markets, Globalization & Development Review, 1(2). https://doi.org/10.23860/MGDR-2016-01-02-03

Lee, R. M., & Rid, T. (2014). OMG Cyber! The RUSI Journal, 159(5), 4–12. https://doi.org/10.1080/03071847.2014.969932

Levy, I. (2020). High level privacy and security design for NHS COVID-19 Contact Tracing App. National Cyber Security Centre. https://www.ncsc.gov.uk/files/NHS-app-security-paper%20V0.1.pdf

Levy, K., & Schneier, B. (2020). Privacy threats in intimate relationships. Journal of Cybersecurity, 6(1). https://doi.org/10.1093/cybsec/tyaa006

Lin, Y., Michel, J.-B., Aiden, E. L., Orwant, J., Brockman, W., & Petrov, S. (2012). Syntactic annotations for the Google Books ngram corpus. Proceedings of the ACL 2012 System Demonstrations, 169–174.

Matwyshyn, A. M. (2017). CYBER! Brigham Young University Law Review, 2017(5), 1109. https://digitalcommons.law.byu.edu/lawreview/vol2017/iss5/6/

Maurer, T., Hohmann, M., Skierka, I., & Morgus, R. (2015). National CSIRTs and Their Role in Computer Security Incident Response [Policy Paper]. New America; Global Public Policy Institute. http://newamerica.org/cybersecurity-initiative/policy-papers/national-csirts-and-their-role-in-computer-security-incident-response/

Maxwell, J. C. (1867–1868). On Governors. Proceedings of the Royal Society of London, 16, 270–283.

Miller, B. (2010, March 1). CIA Triad [Blog post]. Electricfork. http://blog.electricfork.com/2010/03/cia-triad.html

Mitnick, K. D., & Simon, W. L. (2002). The Art of Deception: Controlling the Human Element of Security. Wiley.

Moyle, E. (2019). CSIRT vs. SOC: What’s the difference? In Ultimate guide to cybersecurity incident response [TechTarget SearchSecurity]. https://searchsecurity.techtarget.com/tip/CERT-vs-CSIRT-vs-SOC-Whats-the-difference

Naiakshina, A., Danilova, A., Gerlitz, E., Zezschwitz, E., & Smith, M. (2019). ‘If you want, I can store the encrypted password’: A Password-Storage Field Study with Freelance Developers. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI '19), 1–12. https://doi.org/10.1145/3290605.3300370

Neale, M. (2000, October 4). No Maps for These Territories [Documentary]. Mark Neale Productions.

Neumann, A. J., Statland, N., & Webb, R. D. (1977). Post-processing audit tools and techniques. In Z. G. Ruthberg (Ed.), Audit and evaluation of computer security (pp. 2–5). National Bureau of Standards. https://nvlpubs.nist.gov/nistpubs/Legacy/SP/nbsspecialpublication500-19.pdf

Nissenbaum, H. (2005). Where Computer Security Meets National Security. Ethics and Information Technology, 7(2), 61–73. https://doi.org/10.1007/s10676-005-4582-3

Reeder, R. W., Felt, A. P., Consolvo, S., Malkin, N., Thompson, C., & Egelman, S. (2018). An Experience Sampling Study of User Reactions to Browser Warnings in the Field. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI '18), 1–13. https://doi.org/10.1145/3173574.3174086

Rid, T. (2016). Rise of the Machines: The lost history of cybernetics. Scribe.

Saltzer, J. H., & Schroeder, M. D. (1975). The protection of information in computer systems. Proceedings of the IEEE, 63(9), 1278–1308. https://doi.org/10.1109/PROC.1975.9939

Sasse, M. A., Brostoff, S., & Weirich, D. (2001). Transforming the ‘Weakest Link’—A Human/Computer Interaction Approach to Usable and Effective Security. BT Technology Journal, 19(3), 122–131. https://doi.org/10.1023/a:1011902718709

Sellars, A. (2018). Twenty Years of Web Scraping and the Computer Fraud and Abuse Act. Boston University Journal of Science & Technology Law, 24(2), 372. https://scholarship.law.bu.edu/faculty_scholarship/465/

Silverstone, R., & Hirsch, E. (1992). Consuming Technologies: Media and Information in Domestic Spaces. Routledge. https://doi.org/10.4324/9780203401491

Snowden, E. (2019). Permanent Record. Pan Macmillan.

von Solms, R., & van Niekerk, J. (2013). From information security to cyber security. Computers & Security, 38, 97–102. https://doi.org/10.1016/j.cose.2013.04.004

Tanczer, L. M., Brass, I., & Carr, M. (2018). CSIRTs and Global Cybersecurity: How Technical Experts Support Science Diplomacy. Global Policy, 9(S3), 60–66. https://doi.org/10.1111/1758-5899.12625

Wagner, B., & Vieth, K. (2016). Was macht Cyber? Epistemologie und Funktionslogik von Cyber. Zeitschrift für Außen- und Sicherheitspolitik, 9(2), 213–222. https://doi.org/10.1007/s12399-016-0557-1

Ware, W. (1970). Security Controls for Computer Systems: Report of Defense Science Board Task Force on Computer Security (R609-1) [Report]. The RAND Corporation. https://doi.org/10.7249/R609-1

Whitten, A., & Tygar, J. D. (1999). Why Johnny can’t encrypt: A usability evaluation of PGP 5.0. Proceedings of the 8th Conference on USENIX Security Symposium, 8. https://www.usenix.org/legacy/events/sec99/full_papers/whitten/whitten.ps

Wiener, N. (1948). Cybernetics: Or Control and Communication in the Animal and the Machine. MIT Press.

Zittrain, J. L. (2006). The Generative Internet. Harvard Law Review, 119, 1974–2040. http://nrs.harvard.edu/urn-3:HUL.InstRepos:9385626

Appendix 1 – Cybersecurity in other languages

Table 1: Terms for cybersecurity 5 (via Google Translate on 14 September 2020, checked by native speakers).

Language | Term
Afrikaans | kubersekuriteit
Arabic | الأمن الإلكتروني
Bengali | সাইবার নিরাপত্তা
Bulgarian | киберсигурност
Chinese | 网络安全
Danish | computersikkerhed
Dutch | cyberbeveiliging
Finnish | Kyberturvallisuus
Farsi | امنیت شبکه (or امنیت سایبری/ امنیت رایانه)
French | la cyber-sécurité
German | Cybersicherheit (sometimes IT-sicherheit, Informationssicherheit, or Onlinesicherheit in Austria)
Greek | κυβερνασφάλεια
Hindi | साइबर सुरक्षा
Bahasa Indonesia | keamanan siber
Italian | sicurezza informatica
Japanese | サイバーセキュリティ
Portuguese | cíber segurança
Marathi | सायबर सुरक्षा
Romanian | securitate cibernetica
Russian | кибербезопасность
Spanish | ciberseguridad or (more popularly) seguridad informática
Swahili | usalama wa mtandao
Swedish | Cybersäkerhet (or, commonly, IT-säkerhet)
Urdu | سائبر سیکورٹی
Xhosa | ukhuseleko

One important difference between European languages is that some (such as English) differentiate security and safety, while others (such as Swedish and Danish) do not. One sociologist of security noted: “it does frame how you understand the concepts, particularly structure. When you're talking about access control in Swedish it's a different logic than when you talk about it in Anglo-Saxon languages […] In the Scandinavian view of the world there is always a much more socio-technical bent for thinking about security” (Grossman, 2017).

Footnotes

1. The authors use cybersecurity, not cyber security, throughout this text, as it is the one most in use in computer science, even in Britain.

2. The second author must admit he has not been immune to this.

3. Ware’s 1970 report begins: “Although this report contains no information not available in a well stocked technical library or not known to computer experts, and although there is little or nothing in it directly attributable to classified sources…”

4. See the Twitter hashtag #cybercyber and @cybercyber account, and Google search results for “cyber cyber cyber", for hundreds of thousands of further examples, and the “cyber song” and video Unsere Cyber Cyber Regierung - Jung & Naiv: Ultra Edition.

5. According to Google Translate, confirmed or updated by native speakers consulted by the authors, including the top-15 most spoken languages according to Wikipedia. With thanks to Eleftherios Chelioudakis, Francis Davey, Fukami, Andreas Grammenos, Hamed Haddadi, Werner Hülsmann, Douwe Korff, Sagwadi Mabunda, Bogdan Manolea, Matthias Marx, Veni Markovski, Grace Mutung'u, Yudhistira Nugraha, Jan Penfrat, Judith Rauhofer, Kaspar Rosager, Eric Skoglund, Anri van der Spuy and Mathias Vermeulen for many of these translations!

Personal information management systems: a user-centric privacy utopia?


1. Introduction

Online systems and services are driven by data. There are growing concerns regarding the scale of collection, computation and sharing of personal data, the lack of user control, individuals’ rights, and generally, who reaps the benefits of data processing (German Data Ethics Commission, 2019).

Currently, data processing largely entails the capture of individuals’ data by organisations, who use this data for various purposes, in a manner that is often opaque to those to whom the data relates. This general lack of transparency has meant that consent and other legal arrangements for the safe and responsible processing of personal data are considered rather ineffective (Blume, 2012; Cate & Mayer-Schönberger, 2013; Tolmie et al., 2016; German Data Ethics Commission, 2020).

Privacy Enhancing Technologies (PETs) are technologies that aim to help in addressing privacy concerns (The Royal Society, 2019). Personal data stores (PDSs), otherwise known as personal information management systems (PIMS), represent one class of such technology, focused on data management. In essence, a PDS equips an individual (user) with a technical system for managing their data (a ‘device’). Generally, a PDS device provides the user with technical means for mediating, monitoring and controlling: (i) the data captured, stored, passing through, or otherwise managed by their device; (ii) the computation that occurs over that data; and (iii) how and when the data, including the results of computation, is transferred externally (e.g., off-device, to third-parties).

Proponents of PDSs argue that the technology empowers users, by “put[ting] individuals in control of their data” (Crabtree et al., 2018). This is because PDSs provide means for ‘users to decide’ what happens to their data; in principle, third-parties cannot access, receive or analyse the data from a PDS without some user agreement or action. In this way, PDSs purport to offer a range of user benefits, from increased privacy and the ability to ‘transact’ (or otherwise monetise) their data, to better positioning users to gain insights from their own data (see subsection 2.3).

More broadly, PDSs seek to provide an alternative to today’s predominant form of data processing, where organisations collect, store and/or use the data of many individuals. As this often occurs within a single organisation’s technical infrastructure, there may be limited scope for individuals to uncover – let alone control – what happens with their data. The vision for PDSs is to decentralise data and computation away from organisations, so that processing happens with more user control.

PDS technology is nascent, but growing in prominence. Exemplar PDS platforms currently at various stages of development and availability include Hub of All Things & Dataswift (Dataswift) 1; Mydex, CitizenMe, Databox and Inrupt/Solid (Inrupt) 2 (which is led by Sir Tim Berners-Lee). As nascent technology, PDSs raise several areas for investigation by academia, policymakers, and industry alike. There is already work, for instance, on how PDSs might facilitate better accountability (Crabtree, 2018; Urquhart, 2019), and on the legal uncertainties surrounding the technology, particularly concerning data protection (Janssen et al., 2020; Chen et al., 2020).

This paper takes a broader view, questioning the extent to which PDS technology can actually empower individuals and address the concerns inherent in data processing ecosystems. After giving an overview of the technology, and its purported benefits in section 2, we examine, in section 3, some data protection implications of PDSs focusing on the user’s perspective: whether they support particular legal bases for processing personal data; the social nature of personal data captured by PDSs; and the relation of PDSs to data subject rights. In section 4, we argue that the broader information and power asymmetries inherent in current online ecosystems remain largely unchallenged by PDSs. Section 5 synthesises the discussion, indicating that many of the concerns regarding personal data are systemic, resulting from current data surveillance practices, and concluding that PDSs – as a measure that ultimately still requires individuals to ‘self-manage’ their privacy – only go so far. 3

2. Technology overview

PDSs represent a class of data management technologies that seek to localise data capture, storage and the computation over that data towards the individual. Generally, they entail equipping a user with their own device for managing their data. A device operates as a (conceptual) data ‘container’, in a non-technical sense of the word: a strictly managed technical environment in which data can be captured or stored or can pass through, and within which certain computation can occur. 4 Some devices are wholly virtual (e.g. Digi.me), hosted in the cloud, while others encompass particular physical equipment such as a box or hub (see e.g. Databox).

PDSs generally purport to empower users through their devices. Though offerings vary, generally PDSs provide technical functionality for:

  1. Local (within-device) capture and storage of user data. Mechanisms for users to populate their PDS with data from a range of sources, which may include their phones, wearables, online services, manual data entry, sensors, etc.
  2. Local (on-device) computation. Enabling computation to occur (software to execute) on the device, which generally entails some processing of data residing with the device.
  3. Mediated data transfers. Allowing control over the data transferred externally (off-device); including ‘raw’ user data, the results of computation, and other external interactions (e.g. calls to remote services).
  4. Transparency and control measures. Tooling for monitoring, configuring and managing the above. This includes governance measures for users to set preferences and constraints over data capture, transfer and processing; visualising and alerting of specific happenings within the device; etc.

The device’s technical environment (infrastructure) manages security aspects. This can include data encryption, managing and controlling user access to the device and its data, and providing means for isolating data and compute. Further, it also works to ensure adherence with any policies, preferences and constraints that are set (see #4 above). For instance, if a user specifies that particular data cannot be transferred to some party (or component), or should not be included in some computation, the device’s technical environment will ensure these constraints are respected.
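As a rough illustration of this enforcement role, the sketch below shows how a device environment might check a user's transfer constraints before any data leaves the device, logging every attempt for transparency. The class and field names are hypothetical and are not drawn from any particular PDS platform.

    from dataclasses import dataclass, field

    def send_off_device(payload: dict, recipient: str) -> None:
        # stand-in for the actual network transfer to an external party
        print(f"sending {list(payload)} to {recipient}")

    @dataclass
    class TransferPolicy:
        blocked_categories: set = field(default_factory=set)   # data that must stay local
        blocked_recipients: set = field(default_factory=set)   # parties the user has blocked

    class DeviceEnvironment:
        def __init__(self, policy: TransferPolicy):
            self.policy = policy
            self.log = []   # transparency: every attempted transfer is recorded

        def transfer(self, category: str, payload: dict, recipient: str) -> bool:
            allowed = (category not in self.policy.blocked_categories
                       and recipient not in self.policy.blocked_recipients)
            self.log.append({"category": category, "recipient": recipient, "allowed": allowed})
            if allowed:
                send_off_device(payload, recipient)
            return allowed

    policy = TransferPolicy(blocked_categories={"location"},
                            blocked_recipients={"ad-broker.example"})
    device = DeviceEnvironment(policy)
    device.transfer("step-count", {"steps": 8500}, "research-app.example")   # allowed
    device.transfer("location", {"lat": 51.5}, "research-app.example")       # refused by the device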

Core to many PDSs is allowing for computation (potentially any form of code execution, including analytics) to be ‘brought’ to the data. This occurs through an app: software that executes on a user’s device for processing that device’s data. 5 Some apps may provide the user with functionality without any external transfer of data, though apps will often transfer some data off-device (such as the results of computation). PDS proponents describe such functionality as of key industry interest, arguing that receiving only the results of computation (e.g. aggregated findings) avoids the sensitivities, overheads and resistance associated with receiving and managing granular and specific user data (see subsection 2.4). Apps operate subject to constraints: they must define what data sources they seek, the data they transfer, and other details; and users may put constraints on how apps behave, e.g. regarding the data that apps may access, process, and transfer. The device’s technical environment ensures adherence to these constraints. Legal mechanisms also operate to govern the behaviour and operation of PDS ecosystems (see subsection 2.2). 6
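The sketch below illustrates, with entirely hypothetical field names, the kind of declaration an app might make and the install-time check a device could perform against the user's constraints; real platforms define their own manifest formats.

    # The app declares the data sources it reads and what it will transfer off-device;
    # the device refuses installation if the declaration conflicts with user constraints.
    survey_app_manifest = {
        "name": "sleep-insights",
        "reads": ["sleep-sensor", "calendar"],
        "transfers_off_device": ["aggregated-sleep-score"],   # results only, no raw data
    }

    user_constraints = {
        "never_readable": {"messages", "location"},
        "never_transferable": {"sleep-sensor", "calendar"},   # raw sources must stay local
    }

    def can_install(manifest: dict, constraints: dict) -> bool:
        reads_ok = not set(manifest["reads"]) & constraints["never_readable"]
        transfers_ok = not set(manifest["transfers_off_device"]) & constraints["never_transferable"]
        return reads_ok and transfers_ok

    print(can_install(survey_app_manifest, user_constraints))   # True: only the aggregate leaves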

2.1 A multi-actor ecosystem

It is worth recognising that there are several actors within a PDS ecosystem. We now introduce those most pertinent to this discussion. While the focus is on users, this article is about empowerment and power, so the other actors also need to be introduced.

Users are those individuals who hold a device, leveraging the PDS functionality to manage their data.

Organisations are those interested in processing user data. Here, we describe organisations as app developers, as they build apps that process user data for installation on user devices. Again, apps will often transfer some data to the organisation, such as the results of computation. PDSs may also support the transfer of data to an organisation without a specific app. This process is managed through the direct data transfer mechanisms provided by the device (which may itself be a form of app, packaged with the device).

Platforms are the organisations that provide the PDS and/or manage the PDS ecosystem. There will be a range of platforms that differ in their offerings. Often a platform’s core offering is equipping a user with a device; though this could vary from merely providing the codebase for users to compile and self-manage the operation of their devices, to providing the entire operational infrastructure—perhaps including hardware, managed cloud services for backup, and so forth (Janssen et al., 2020). Moreover, some platforms envisage hosting ‘app stores’ or ‘data marketplaces’ that broker between users and the organisations seeking to process their data, while many platforms require adherence with ‘best practices’, have defined terms of service, and may even have contractual agreements with users and organisations. In this way, platforms vary in their level of involvement in the operation of the PDS ecosystem.

2.2 Governance regimes

In addition to technical aspects, PDS platforms often entail legal governance mechanisms. These operate to help ensure that app behaviour, and data usage more generally, is compliant with user preferences, and platform requirements. Some of these are encapsulated in a platform’s Terms of Service (ToS), which commonly define how the platform can be used, and the platform’s position on the allocation of responsibilities and liabilities. Platform ToS often require app developers to have appropriate measures in place to safeguard users against unlawful processing (e.g. Dataswift’s acceptable use policy), and to safeguard users against accidental data loss or destruction (idem) while requiring them to, for instance, safely keep their passwords or to regularly update their PDSs for security purposes (e.g. Dataswift’s terms for users). Platforms may also have contracts with app developers, which contain business specific terms and conditions, governing their interactions with user data, the functionality of their apps etc. ToS and contracts might stipulate, for example, that app developers must fully comply with platform policies and principles regarding user data processing, where failure to do so may result in the platform terminating their data processing activities (example from Mydex ToS).

2.3 Purported user benefits

PDSs generally purport to provide functionality to empower users. Some claimed benefits for users include:

  • Users having granular control over the data captured about them, and how that data is shared and used (Article 29 Data Protection Working Party 2014; Crabtree et al., 2018; Urquhart et al., 2019);
  • Better protecting personal data (including ‘sensitive’ personal data) from access by third parties, by way of the technical functionality provided (Crabtree et al., 2018; Lodge et al., 2018);
  • Better informed user consent, by giving more information about data processing. This may be through various means, including the device’s monitoring functionality; the app’s data usage specifications; platform features, such as app stores ranking and describing app data usage, requiring transparency best practices, etc. (Mydata);
  • Compartmentalised data storage and computation to prevent apps from interacting with data (and other apps) inappropriately, inadvertently and without user agreement/intervention (e.g. Crabtree et al., 2018);
  • Providing opportunities for users to gain more insights from their data (e.g., Mydex; Mydata);
  • Allowing users to transact with or monetise their personal data (Ng & Haddadi, 2018);
  • Generally incentivising developers towards more privacy friendly approaches (Crabtree et al., 2018).

PDSs have also caught the attention of policymakers; the European Commission recently stated that PDSs and similar tools have significant potential as “they will create greater oversight and transparency for individuals over the processing of their data […] a supportive environment to foster [their] development is necessary to realise [their] benefits” (European Commission, 2020). This suggests that the European Commission may in future define policy encouraging the development of such tools.

2.4 Purported organisational benefits

For organisations (app developers), the appeal of PDSs is the promise of access to more data—potentially in terms of volume, richness, velocity and variety—for processing. PDS enthusiasts argue that if users better understand how their data is being processed, and feel empowered by way of PDS’s control mechanisms, they may be less ‘resistant’ and harbour a greater ‘willingness’ for (managed) data sharing and processing (e.g., Control-Shift; Mydata; Digi.me; CitizenMe mention this in their descriptions). Similarly, given that PDSs will encapsulate a variety of user information, PDSs might offer app developers access to a broader range of data types than if they attempted to collect the data themselves (Mydata).

Though PDSs are typically described with reference to an individual, most aim to support ‘collective computation’, whereby the processing of data across many users or (particular) populations is enabled through apps operating on their devices (e.g., Mydata; Databox; CitizenMe; Digi.me). 7 Collective computations often entail some user or population profiling to support various organisational aims—customer insight, market research, details of product usage, or indeed, as is common in online services, a surveillance-driven advertising business model (as discussed in section 5). In this way, PDS platforms effectively provide a personal data processing architecture that operates at scale across a population. This is attractive for organisations, as PDS platforms with large user-bases offer access to a wider population and thus more data than the organisation would otherwise have access to itself. Importantly, this also comes without the costs, risks, and compliance overheads incurred in undertaking data collection, storage, and management ‘in-house’, using their own infrastructure (Crabtree et al., 2018).
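A minimal sketch of what such collective computation can look like, under the simplifying assumption that each device releases only a small summary which the organisation then combines across its user base; this is our illustration of the pattern, not any platform's actual protocol.

    def local_app(device_data: list) -> dict:
        """Runs inside each user's device; only this summary is transferred off-device."""
        return {"n": len(device_data), "total": sum(device_data)}

    def combine(summaries: list) -> float:
        """Runs at the organisation, over per-device summaries rather than raw data."""
        n = sum(s["n"] for s in summaries)
        total = sum(s["total"] for s in summaries)
        return total / n if n else 0.0

    # three users' devices, each holding raw daily screen-time figures that never leave them
    device_1 = [3.2, 4.1, 2.8]
    device_2 = [6.0, 5.5]
    device_3 = [1.0, 1.2, 0.9, 1.1]

    population_average = combine([local_app(d) for d in (device_1, device_2, device_3)])
    print(round(population_average, 2))   # aggregate insight without raw data leaving any device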

2.5 PDS platforms: the commercial landscape

Some predict that PDSs could generate substantial economic benefits for businesses and consumers alike (Control-Shift; Brochot et al., 2015; European Commission, 2020). Although the business models for organisations are likely similar to those that already exist, the business models for PDS platforms are unclear and remain under development (Bolychevsky & Worthington, 2018). A range of possible revenue streams for PDS platforms has been proposed. These include:

  • Platforms charging organisations fees for access to the PDS ecosystem (e.g., an annual fee, Mydex), charges for access to the platform’s app store, charges per user download of their app, etc.;
  • Platforms charging organisations per ‘data transaction’ with a PDS device, where the type of transaction (access, computation, and/or transfer of data, including raw data, see e.g. Mydex) and/or the type of data requested (e.g. queries, behavioural data) often determines the price (see e.g. CitizenMe);
  • Organisations sharing revenue with the platform through in-app purchases (e.g. Digi.me);
  • Platforms charging organisations for support services (e.g. Mydex);
  • Users paying a subscription fee, or to unlock additional functionality (Digi.me);
  • Platforms selling, renting or leasing PDS devices to users, which could include service or maintenance contracts (Crabtree et al., 2018); or
  • Platforms in the public interest (e.g. PDSs platforms for public health) might be ‘fee-free’, funded through, e.g. donations, and public funds (see e.g. BBC-Box).

As PDSs are a developing area, the business models of platforms are nascent. In practice, one expects that platforms will likely employ a range of monetisation mechanisms.

3. Data protection

A key aim of PDSs is to give users greater visibility and control over the processing of their personal data. PDS architectures concern issues regarding personal data, and therefore the General Data Protection Regulation (GDPR) must be considered. GDPR centres around three legal roles: controllers (acting alone or jointly with others as joint controllers; Arts. 4(7), 26 GDPR), processors (including sub-processors; Arts. 4(8), 28(4) GDPR), and data subjects (Art. 4(1) GDPR). The role of a particular actor as a controller or processor is generally a question of their factual influence over data processing; how an actor describes their role (for example in contract) may be indicative, but won’t be definitive (Article 29 Working Party, 2010).

GDPR tasks both controllers and processors with a range of responsibilities and obligations, the bulk of which fall on controllers, given their role in determining the nature of the data processing. Obligations for controllers include complying with the data protection principles (Art. 5(1) GDPR), demonstrating that compliance (Art. 5(2) GDPR), and predicating their processing of personal data on one of the GDPR’s lawful grounds (Art. 6(1) GDPR), to name a few. Typical rights afforded to data subjects (i.e. those whose personal data is being processed), which controllers are tasked with meeting, include the rights to object to data processing, to have their data erased, and to port data (subsection 3.3).

While PDS technologies and their governance models are still developing, many data protection issues remain unresolved. The assignment of roles and responsibilities in PDS systems is complex, given that such ecosystems are largely shaped by the collaboration of multiple parties, including the key actors mentioned here. This reality can be difficult to reconcile with the GDPR’s approach, which assumes controllers who ‘orchestrate’ the data processing in an entire system. In practice, a PDS ecosystem can take a number of forms, and the legal position of those involved will depend on the circumstances. Issues of roles and responsibilities under the GDPR in different PDS contexts are explored in detail by Chen et al. (2020) and Janssen et al. (2020). In this paper, we consider three key ‘user-facing’ data protection considerations: (1) how PDSs, in being oriented towards consent, relate to GDPR’s lawful grounds; (2) how personal data often relates to more persons than just the PDS user; and (3) the relationship between PDSs and data subject rights.

3.1 Lawful grounds for processing

GDPR requires that processing is predicated on one of its lawful bases as defined by Art. 6(1) GDPR. Controllers must determine which lawful ground is most appropriate in a given situation, depending on the specific purposes and contexts of use, the nature of the parties involved, their motivations and relationships, and of course, the requirements of the lawful basis on which they rely. However, due to the ePrivacy Directive, where the PDS entails a physical (hardware) device, consent will generally be required for app developers to process any data (Art. 5(3) ePrivacy Directive; Janssen et al., 2020). In this context, the only available basis for processing on such devices will be consent (Arts. 6(1)(a) & 7 GDPR; Recitals 32, 42, 43 GDPR), or explicit consent for special category data—particular classes of data deemed to require extra protections (Art. 9(1), Recitals 51-56 GDPR). For ‘virtual’ PDS devices, such as those hosted in the cloud (currently by far the most common), legal bases other than consent may be available (unless the data is special category data, in which case explicit consent is often the only option).

PDS devices are fundamentally oriented towards supporting the grounds of (user) consent and contract (where the processing is necessary for the performance of a contract to which the user is a party) as the legal bases for processing. Importantly, both consent and contract are grounds that require agreement by the data subject to render the processing lawful. PDS platforms are generally explicitly designed to support this, by requiring active user agreement regarding data processing (Crabtree et al., 2018; Urquhart, 2019). PDSs generally purport to provide functionality aimed at informing users, e.g. providing them with information about an app and its related data processing, and requiring the user to take positive actions, e.g. agreeing to terms upon installing the app, or configuring data usage preferences and policies, in order for that processing to occur.

There are also lawful grounds for processing, such as legal obligation, public interest or legitimate interest, which allow controllers—not the data subjects (users)—to decide whether processing can occur. That is, user consent is not required for certain public tasks (e.g. perhaps in taxation), or for legitimate controller interest (e.g. perhaps for the processing of certain data to detect fraud). The requirements vary by legal basis, and can include (depending on the ground) considerations like the necessity of that processing (Arts. 6(1)(b)—(f) GDPR), that controller interests are balanced with the fundamental rights of the data subject (Art. 6(1)(f) GDPR; Kamara & De Hert, 2018), and a foundation in compatible member state law (Arts. 6(1)(c) and (e) GDPR). These grounds for processing that are not based on specific and active user involvement or agreement are rarely considered in PDS architectures, and at present it is unclear how PDS architectures would support or reconcile with these grounds where they may apply (Janssen et al., 2020).

3.2 Social nature of personal data

Personal data is relational and social by nature; it often does not belong to one single individual, as much personal data is created through interactions with other people or services (Article 29 Working Party, 2017; Crabtree & Mortier, 2015).

In practice, a PDS device will likely capture data relating to multiple individuals other than the user—for example, through sensing data from other dwellers or visitors in and around someone’s home. This raises interesting questions regarding the mechanisms for one to control what is captured about them in someone else’s PDS. That is, there may be conflicting visions and preferences between the user and others regarding the use and processing of ‘joint’ data, and these others may also have data subject rights (see subsection 3.3). At present, PDSs generally give a device’s user greater control over the processing related to that device; functionality enabling the preferences and rights of others to be managed and respected has so far received little consideration. This is an area warranting further attention.

3.3 Supporting data subject rights

GDPR affords data subjects several rights regarding the processing of their personal data. These include the rights of access to their personal data (Art. 15), rectification of inaccurate personal data (Art. 16), erasure (Art. 17), to object (Art. 21), to restrict the processing of their data (Art. 18), to port their data to another controller in a commonly used machine-readable format (Art. 20 GDPR), and to not be subject to solely automated decision-making or profiling which produces legal or similarly significant effects (Art. 22 GDPR). Controllers are tasked with fulfilling these rights. Data subject rights are not absolute—GDPR imposes conditions on the exercise of some rights, and not all rights will apply in every situation.

Data subject rights have had little consideration in a PDS context. Again, to improve the transparency of processing, PDSs usually afford users some visibility over what occurs on-device and provide information on their device’s interactions (data exchanges) with organisations (Urquhart et al., 2018). They also generally offer certain controls to manage on-device processing. As such, some have suggested that PDSs may (at least for data within the PDS device) to some extent “negate” a user’s need to exercise certain data subject rights (Urquhart et al., 2018), as such mechanisms could potentially provide means for users themselves to restrict certain processing, and to erase, delete or port data, and so forth. However, current PDS tooling at best gives users visibility and the ability to take action only regarding processing happening on-device (see subsection 4.1). Data subject rights are broader, encompassing more than simply giving users visibility over on-device data processing. Users will, for instance, have interests in the behaviour of the organisations involved in processing.

GDPR requires controllers to account for data protection considerations, including those relating to rights, in their technological and organisational processes (data protection by design, Art. 25(1) GDPR). This has implications not only for app developers, but also for PDS platforms, who could provide mechanisms that specifically and more holistically facilitate users in exercising their rights. There may, though, be questions as to whether this is legally required—for instance, in light of the complexities regarding a platform’s roles and responsibilities, given that Art. 25(1) applies to controllers (see Chen et al., 2020; Janssen et al., 2020). Indeed, these considerations are exacerbated as some PDSs represent ‘open source’ projects, potentially involving a wide range of entities in the development, deployment and operation of the platform and/or device functionality. However, regardless of any legal obligation, any PDS platform should aim to better support users with regard to their data rights, given that this is wholly consistent with the stated aims of PDSs as ‘empowering users’.

Beyond PDS functionality that specifically aims at rights, there is potential for PDS transparency mechanisms to assist users with their rights more generally. For instance, PDSs might, by providing information, help users in detailing and targeting their rights requests. User observation of, or a notification by the platform indicating particular application behaviour, might encourage users to exercise their right ‘to find out more’, or perhaps encourage them to validate that their rights requests were properly actioned. This might help users to determine whether processing should continue, or help them confirm whether the information provided by the controller corresponds to the operations observed on-device.

The right to data portability grants users the right to receive copies of the data they provided to a controller in an electronic format, and to transfer that data or to have it transferred to another controller. This can only be invoked if the processing was based on the lawful grounds of consent or contract (Art. 20(1)(a) GDPR), and concerns only that data provided by data subjects themselves (Art. 20 (1) GDPR; Article 29 Working Party, 2016; Urquhart et al., 2017).

Portability is considered a key means for users to ‘populate’ their PDSs by bringing their data from an organisation’s databases to the PDS (Art. 20 GDPR; Article 29 Working Party, 2019). Indeed, some PDS platforms describe the right as enabling users to ‘reclaim’ their data from organisations (e.g. CitizenMe; Dataswift; Digi.me), and envisage offering users technical mechanisms that leverage portability rights for populating their devices (idem). Subject access requests (Art. 15(3) GDPR) may also assist in populating devices, particularly given they are less constrained in terms of when they can be used, and usually result in more information than would be received from a portability request. However, subject access requests do not require that the data be returned in a machine-readable format. Without agreed-upon interoperability standards, using subject access requests (and indeed, even portability requests to some degree) to populate PDSs will often be impractical and cumbersome.
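For illustration, a portability export from a PDS might look something like the JSON package sketched below. The schema is hypothetical; the absence of any agreed schema across controllers and platforms is precisely the interoperability gap noted above.

    import json
    from datetime import datetime, timezone

    def export_for_portability(records: list, user_id: str) -> str:
        """Package data 'provided by' the data subject in a machine-readable format."""
        package = {
            "subject": user_id,
            "exported_at": datetime.now(timezone.utc).isoformat(),
            "records": records,
        }
        return json.dumps(package, indent=2)

    heart_rate_records = [
        {"source": "wearable", "timestamp": "2020-09-14T08:00:00Z", "bpm": 62},
        {"source": "wearable", "timestamp": "2020-09-14T09:00:00Z", "bpm": 71},
    ]
    print(export_for_portability(heart_rate_records, user_id="user-123"))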

PDSs’ transparency mechanisms are also relevant here, as they can work to improve the user’s position. This is because such mechanisms can expose the on-device computations, possibly including the results of those computations, and potentially in a meaningful technical format. This is useful not only for portability considerations (e.g. in a PDS context, potentially moving the results of computations across apps), but also in generally providing users with more knowledge and insight into the nature of data processing occurring.

4. Information asymmetries

PDS platforms state that they empower users by providing them with means for increased transparency and control, enabling users to take better, more informed decisions about whether to engage or, indeed, disengage with particular processing. However, systemic information and power asymmetries are inherent in current digital ecosystems, whereby the highly complex and largely opaque nature of data processing amplifies the asymmetries between data subjects and the organisations processing their data (Mantelero, 2014). These asymmetries, stemming from an unequal distribution of opportunities in terms of understanding, knowledge, prediction, risk assessment, and so forth (Mantelero, 2014), make it difficult if not impossible for even knowledgeable users to properly evaluate and come to genuinely informed decisions about the processing of their data (Solove, 2013; Solove, 2020).

The opaque nature of data processing is largely systemic because users of digital services often lack (or are prevented from gaining) knowledge or understanding of: (1) the practices of organisations capturing and processing their data, including the details, reasons for and implications of holding particular data or performing particular computation; (2) the data sharing practices of those organisations with third parties and beyond; (3) the technical details of the systems involved; (4) the data-driven, and indeed, often surveillance-driven business models (see section 5); and (5) the insights and power that organisations can gain through having access to data, particularly where data is aggregated or computation occurs at scale (collective computation). Legal issues may further contribute to systemic problems—including information asymmetries—within digital ecosystems (Cohen, 2019); for example, copyright, trade secrecy, or documents or databases owned by large organisations might work to restrict the information that is available to the public. However, these restrictions are not absolute and do not apply to every stakeholder. Under certain conditions, courts or regulators can be given access to data relating to trade secrets or databases not generally available to the public (Art. 58(1)(e); Recital 63 GDPR).

Crucially, PDSs only partially respond to these issues and therefore only partially address the systemic nature of the information asymmetries of digital ecosystems. Providing a localised, user-centric containerisation of data and processing may assist users in gaining some knowledge of what happens with their personal information, but only to a limited extent. While users might gain some greater understanding over the data processing relating to their device, PDSs themselves are unlikely to solve these systemic information asymmetries. Fundamentally, PDSs are grounded in the mistaken idea that with enough information presented in the right way, individuals will be able to overcome barriers that are ultimately structural and systemic in nature (Nissenbaum, 2011).

4.1 Organisational data processing practices remain largely opaque

An organisation’s intentions, motivations and behaviours may not always be clear to users (Burrell, 2016). Attempting to address this, PDSs require app developers to provide some information about their organisational processes and intentions. Such information (often encapsulated in ‘app manifests’) might include details of the types of data an app will process; the app developer’s purposes for that processing; the risks of the app; or with whom the app developer may share data received from the PDS (Crabtree, 2018; Janssen et al., 2020). 8 However, less discussed in PDS proposals is conveying information about why that particular data is necessary (as opposed to other, perhaps less sensitive, data), why particular weights are attached to particular data in any analytics process, and, more broadly, why that particular processing needs to occur, and the possible data protection implications it may have. This is an area needing attention.
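
To make the notion of an app manifest more concrete, the sketch below shows the kind of declaration a platform might require from an app developer. It is purely illustrative: the field names and structure are assumptions of ours, not the actual schema of Databox, Dataswift or any other platform.

```python
# Illustrative only: a hypothetical app manifest expressed as a Python dict.
# All field names and values are assumptions for explanatory purposes.
example_manifest = {
    "app_id": "sleep-insights-demo",                    # hypothetical app identifier
    "developer": "ExampleHealth Ltd",                   # hypothetical developer
    "data_requested": ["heart_rate", "sleep_phases"],   # types of personal data the app will process
    "purposes": ["personalised sleep report"],          # developer's stated purposes
    "processing_location": "on_device",                 # whether computation stays on the PDS device
    "third_party_sharing": [],                          # recipients of any off-device transfers (none declared)
    "risk_statement": "Aggregated results only leave the device with explicit user approval.",
}
```

Notably, fields explaining why each data type is necessary, or how it is weighted in any analytics, are exactly what such declarations tend to lack.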

We now elaborate two additional aspects: (i) the lack of information available regarding data that flows beyond organisational boundaries, and (ii) how the opacity of app developers’ processes can hinder PDS platforms’ governance processes. Note, however, that even if PDSs could provide additional information on developers’ processing practices, the utility of this for users is unclear. Moreover, this risks creating a false sense of having adequately informed users while in actuality the problems caused by information asymmetries remain (this dimension is explored in subsection 4.2).

4.1.1 Transparency and control diminish as data moves across boundaries

Once data moves beyond a system or organisation’s boundaries, the visibility over that data typically diminishes, as does the ability to control any subsequent processing (Singh et al., 2017; Crabtree et al., 2018; Singh et al., 2019). So, while PDSs might provide users with insights into device-related processing, PDSs generally will not (at least at a technical level) provide users with information about – let alone access to – data that has moved to app developers (and, indeed, beyond). Even in a PDS context, users will (still) therefore have little meaningful information regarding the specifics of the data actually being shared between organisations and third parties. 9

The fact that some data usage is essentially out of sight raises various risks, including, for instance, around secondary uses of data that a user would not have agreed with, e.g. undisclosed monetisation (Silverman, 2019), or unexpected or undesired inferences or profiling, which could be used to influence, nudge or manipulate (Wachter & Mittelstadt, 2019). Moreover, as many online services entail a ‘systems supply-chain’ (Cobbe et al., 2020) – whereby services from various organisations are used to deliver functionality – there may be little visibility regarding the specific organisations involved in processing once the data moves ‘off-device’.

Though these issues are not typically the focus of PDSs, they relate to the technology’s broader aims. PDSs might potentially assist where technical mechanisms can improve the visibility over data processing and transfer from the device to the first recipient (one-hop), and legal means can govern such transfers (subsection 2.2). For instance, Mydex stipulates in its ToS that app developers may not transfer user data that is obtained through the platform’s service to third parties, except to the extent that this is expressly permitted in the relevant app developer notice (see, for another example, Dataswift). Through these measures, PDSs might better inform users of – and offer greater control over – what is initially transferred ‘off-device’. However, the ability to actually monitor, track and control data as it moves across technical and administrative boundaries is an area for research (e.g. see Singh et al., 2017; Singh et al., 2019; Pearson & Casassa-Mont, 2011).

4.1.2 Issues with opacity and non-compliance for PDS platforms

Many PDS platforms describe ToS and contractual arrangements with app developers, which define how app developers may process user data. However, organisational data processing opacities can also hinder platforms in uncovering and assessing the risks of non-compliant app and developer behaviour (Crabtree et al., 2018). Platforms’ monitoring and compliance measures might to some extent mitigate the implications of limited user understanding of app developers’ data processing practices, where non-compliance by a developer could result in termination of their processing, the app’s removal from the platform, payment of damages, etc (e.g. ToS of Mydex). This could entail log file analysis, app audits, and manual reviews, including ‘sandboxing’ (examining behaviour in a test environment), and reporting measures when non-compliance is detected on a device (comparable to software ‘crash reports’ in other contexts).
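
As a rough illustration of the kind of automated log analysis such monitoring could involve, the sketch below compares a device’s egress log against the data types declared in an app’s manifest and flags undeclared transfers. It is a minimal sketch under our own assumptions: the log format, manifest fields and function name are hypothetical and do not describe any platform’s actual tooling.

```python
def find_undeclared_transfers(egress_log, manifest):
    """Return egress events whose data type is not declared in the app's (hypothetical) manifest."""
    declared = set(manifest.get("data_requested", []))
    return [
        event for event in egress_log
        if event["app_id"] == manifest["app_id"] and event["data_type"] not in declared
    ]

# Hypothetical manifest and egress log, for illustration only.
manifest = {"app_id": "sleep-insights-demo", "data_requested": ["heart_rate", "sleep_phases"]}
egress_log = [
    {"app_id": "sleep-insights-demo", "data_type": "sleep_phases", "recipient": "local"},
    {"app_id": "sleep-insights-demo", "data_type": "location", "recipient": "api.example.com"},
]
print(find_undeclared_transfers(egress_log, manifest))  # flags the undeclared 'location' transfer
```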

However, there are questions around whether platforms themselves can effectively detect or otherwise uncover non-compliance by app developers. Platform operators generally position themselves to not have direct access to user devices (including data, processing and logs thereof), which limits their visibility over what is happening ‘on the ground’. Were platforms to become actively involved in device monitoring, gaining visibility over what happens on user devices, this would bring additional data protection considerations, and would effectively involve a device ‘backdoor’, with security implications that could undermine the PDS ecosystem. Questions of incentives also arise, e.g. regarding a provider’s propensity to take action against app developers where doing so has impacts on the platform’s income or business. These issues need further attention.

4.2 Users still require knowledge and expertise

PDSs are oriented towards data protection concerns, particularly regarding the difficulties in obtaining genuinely informed consent and offering users real control. But for this to be effective, users must also be able to understand the potential data protection implications of processing. This means PDS users will require some degree of data protection expertise and knowledge to enable them to comprehend the implications of certain computation and transfers. Though PDSs seek to provide users with more information about processing, and may offer some general guidance, it will not always be clear to users what the full implications of certain data processing or transfers are—not least given the risks are often contextual. A user might, for instance, allow an app developer to build a detailed profile, not realising that such profiles could subsequently be used to influence, nudge or manipulate themselves and others (Wachter & Mittelstadt, 2019).

Similarly, an app’s or platform’s explanations and visualisations of data flows, technical parameters, configuration and preference management mechanisms, and so forth, can also be complex and difficult to understand for non-experts (Anciaux et al., 2019). Moreover, identifying where app behaviour does not comply with user preferences or is unexpected can be challenging even for expert users, let alone the non-tech-savvy. Users will therefore also require some technical expertise and knowledge to meaningfully interrogate, control and interact with the functionality of the platform (Crabtree et al., 2018).

As a result, though PDSs seek to better inform users, simply providing them with more information may not produce substantially better informed and empowered users. That is, the information asymmetries currently inherent in digital ecosystems may remain largely unaddressed, and many users may remain largely unempowered and under-protected.

There is on-going research by the PDS community on how platforms can make their transparency and control measures more effective (Crabtree et al., 2018). Default policies or usage of ‘policy templates’ might enable third parties (civil society groups, fiduciaries, etc) to set a predefined range of preferences (in line with certain interests and values) which users can easily adopt. Generally, mechanisms facilitating the meaningful communication and management of data protection risks and implications are an important area of research, not just for PDSs, but for digital ecosystems as a whole.
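
One way to picture such policy templates is as predefined permission bundles that a user adopts wholesale and that the platform then evaluates before approving an app’s requests. The sketch below is a simplified illustration under our own assumptions (template contents and names are hypothetical), not a reference to any existing PDS implementation.

```python
# Hypothetical policy template, e.g. one curated by a civil society group.
PRIVACY_FOCUSED_TEMPLATE = {
    "allowed_data_types": {"step_count", "sleep_phases"},  # data a conforming app may access
    "allow_off_device_transfer": False,                    # no raw data may leave the device
}

def request_permitted(template, data_type, off_device):
    """Check a single app request against a predefined policy template."""
    if data_type not in template["allowed_data_types"]:
        return False
    if off_device and not template["allow_off_device_transfer"]:
        return False
    return True

# Example: an app asking to send 'location' off-device would be refused under this template.
print(request_permitted(PRIVACY_FOCUSED_TEMPLATE, "location", off_device=True))  # False
```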

4.3 App developers may still collect and process at scale

Many PDSs seek to support collective computations, allowing app developers to process user data at scale to generate insights from across a population (subsection 2.4). In practice, this likely contributes to further consolidating the information asymmetries between users and organisations. PDSs may help to redress these asymmetries to some extent, as they allow users to generate insights into the personal data in their own PDSs. However, the fact that app developers can operate across user PDSs—and are encouraged by platforms to do so—means that they can process the data from many users, and thus remain better informed than individual users can ever be. Although an individual’s data may be interesting to that individual, it is analysing data at scale that can provide the insights into user behaviour and preferences that are often truly valuable to organisations. It is unlikely that PDSs will address this systemic issue through any of their measures; indeed, by enabling and encouraging collective computations, PDSs are likely to contribute even further to these asymmetries.

As we will explore next, these asymmetries do not only exist with respect to individual users, but also society as a whole. This is because in the current digital environment, power resides with organisations who have the ability to access and process data. In facilitating collective computations, PDSs continue to support organisations to process data at scale.

5. Discussion: PDSs, privacy self-management and surveillance capitalism

A range of commercial business models are surveillance-oriented, where economic value is extracted by collecting and analysing extensive data about people’s behaviour, preferences, and interests (Andrejevic, 2011; Fuchs, 2011; Palmås, 2011; Zuboff, 2015). At present, this typically involves aggregating individual data, and analysing that aggregated data to identify patterns. The knowledge obtained through that analysis is used for various purposes. In the context of online services, where the issues are particularly pronounced, this includes algorithmic personalisation to keep users engaged with the service and to target advertising (Cobbe & Singh, 2019). Often this involves profiling, which poses threats to personal integrity, and online services frequently target user vulnerabilities for exploitation with addictive designs, dark patterns, and behavioural nudging (Yeung, 2017). Online service providers can also work towards vendor lock-in and systemic consumer exploitation. Given the central commercial and economic imperatives of most online services, nearly all data-driven business models involve (to some degree) the trading of data and insights for profit (German Data Ethics Commission, 2019). Note, however, that online service providers are not the only ones with surveillance orientations; PDSs themselves also encourage traditional offline business models to be augmented with some form of user surveillance, for example, to observe the nature of product usage in a home. The extensive processing of personal data in surveillance-oriented or surveillance-supported business models raises a range of concerns (Kerber, 2016; Christl, 2017; Myers West, 2017).

As discussed in section 2, PDSs seek to address these concerns by giving users greater ‘control’ over their data and its processing through more information and options regarding processing and then enforcing their choices (by bringing the data processing closer to the user and placing legal and technical constraints on it). In this way, as discussed in section 3, PDSs adopt an approach to privacy and data protection that is still centred on consent-based grounds for processing, working to achieve more effective ‘notice and consent’. Although the approach taken by PDSs may seem to empower users by giving them more ‘control’, (i) the problems with ‘notice and consent’ as a way of protecting users in digital ecosystems are well-established (Barocas & Nissenbaum, 2009; Sloan & Warner, 2013; Barth & De Jong, 2017; Bietti, 2020), and (ii) it does not fundamentally challenge the logic of those business models and surveillance practices. PDSs therefore remain firmly grounded in the logic of ‘privacy self-management’ (Solove, 2013; Solove, 2020), whereby individuals are expected to manage their own privacy and are themselves held responsible where they fail to adequately do so. This can be understood as part of a broader trend of ‘responsibilisation’ in Western societies (Hannah-Moffat, 2001; Ericson & Doyle, 2003; Brown, 2015): putting ever more responsibility on individuals to manage risks in various aspects of their lives, despite the existence of systemic issues beyond their control that can make doing so difficult if not impossible (such as the asymmetries described in section 4 that PDSs do not sufficiently alleviate).

Further, PDSs fail to deal with the realities of collective computations, whereby app developers process user data in aggregate and at scale (subsection 2.4), or with the social nature of personal data (subsection 3.3). Collective computations still exist in—indeed, largely result from—the often commercial drivers for PDS platforms and apps. Through these computations PDSs both allow and contribute to further consolidation of power and information asymmetries (subsection 4.3). However, concerns about collective computations go beyond commercial processing, such as where platforms or app developers pursue public policy or security ends (rather than, or in addition to, commercial gains). This is of significant concern, given the rich, detailed and highly personal nature of the information that a PDS device might capture. Moreover, the social nature of personal data means that individual-level controls are sometimes inappropriate (subsection 3.2)—processing may affect a number of people, only one of whom will have had an opportunity to intervene to permit or constrain it. In all, the individualist approach taken by PDSs, rooted firmly in self-management, does not and cannot capture these more collective, social dimensions of privacy and data protection.

The inability of PDSs to adequately address these concerns speaks to a more fundamental issue with PDSs as a concept: they put too much onus on the individual and not enough focus on the business models (or other incentives for data processing). The root cause of the appropriation of users’ personal data is generally not, in fact, the failure of individuals to exercise control over that data, but those surveillance-supported business models that demand the data in the first place. These business models operate at a systemic level, supported by information asymmetries, commercial considerations, legal arrangements (Cohen, 2019), network effects, and other structural factors, and are beyond the control of any individual user.

Indeed, the information asymmetries inherent in surveillance business models result in a significant asymmetry of power between users and app developers (Mantelero, 2014). As Lyon argues, through information asymmetries, surveillance “usually involves relations of power in which watchers are privileged” (Lyon, 2017, p. 15). This power asymmetry is at the core of how surveillance capitalism attempts to extract monetary value from individuals, by modifying their behaviour in pursuit of commercial interests (Zuboff, 2015). Yet, as discussed above, PDSs seek to ‘empower’ users without significantly dealing with those asymmetries. Nor do they address other systemic factors with structural causes that disempower users in favour of organisations. While PDSs seek to decentralise processing to users’ devices, then, it does not follow that power will also be decentralised to users themselves: decentralising processing does not necessarily imply decentralising power. Without a more systemic challenge to surveillance-based models for deriving value, shifting away from individualised forms of notice and consent and alleviating the effect of information asymmetries and other structural issues, the underlying power dynamic in those surveillance models—skewed heavily in favour of organisations rather than individuals—remains largely unchanged.

Relevant is what Fuchs describes as a form of academic ‘victimisation discourse’, where “privacy is strictly conceived as an individual phenomenon that can be protected if users behave in the correct way and do not disclose too much information” (Fuchs, 2011, p. 146), while issues related to the political economy of surveillance capitalism—advertising, capital accumulation, the appropriation of user data for economic ends—are largely ignored or unchallenged. Responses to these business models that are grounded in placing ever-greater responsibility onto users to actively manage their own privacy, in the face of systemic challenges such as endemic surveillance and data monetisation, are destined to fail. This is the case with PDSs as currently envisaged. Indeed, as previously noted, PDSs have even been described as a way of reducing user ‘resistance’ to data sharing, bringing about a greater ‘willingness’ to allow personal data to be processed (subsection 2.4). This not only explicitly accepts the logic of these business models, but appears to make them easier to pursue. In this way, PDSs following this approach might lull users into a false sense of security through the rhetoric of greater ‘choice’, ‘control’, and ‘empowerment’—despite the evidence that these are flawed concepts in light of the structural and systemic nature of the concerns—while in practice facilitating the very data extraction and monetisation practices that users may be trying to escape.

6. Concluding remarks

PDSs are nascent, but growing in prominence. Their proponents claim that PDSs will empower users to get more from their data, and to protect themselves against privacy harms by providing technical and legal mechanisms to enforce their choices around personal data processing. However, as we have detailed, their ability to deal with the broader challenges associated with current data processing ecosystems appears limited. Regarding data protection, platforms, regulators and lawyers might together work on the specific data issues raised by PDSs, including how best to deal with issues concerning the rights of data subjects. However, despite any such efforts, and regardless of the purported benefits of PDSs, most of the issues inherent to the systemic information asymmetries and challenges in the current ecosystems remain. While PDSs might offer some helpful user-oriented data management tools, they are fundamentally grounded in the mistaken idea that with enough information presented in the right way, individuals will be able to overcome barriers that are ultimately structural and systemic in nature.

References

Anciaux, N. (2019). Personal Data Management Systems: The security and functionality standpoint. Information Systems, 21, 13 – 35. https://doi.org/10.1016/j.is.2018.09.002

Andrejevic, M. (2011). Surveillance and Alienation in the Online Economy. Surveillance & Society, 8(3), 270 – 287. https://doi.org/10.24908/ss.v8i3.4164

Article 29 Data Protection Working Party. (2010). Opinion 1/2010 on the concepts of ‘controller’ and ‘processor’. (WP169 of 16 February 2010).

Article 29 Data Protection Working Party. (2014). Opinion 8/2014 on Recent Developments on the Internet of Things. (WP 223 of 16 September 2014).

Article 29 Data Protection Working Party. (2016). Guidelines on the right to data portability (WP242 rev.01 13 December 2016).

Barocas, S., & Nissenbaum, H. (2009). On Notice: The Trouble with ‘Notice and Consent’. Proceedings of the Engaging Data Forum: The First International Forum on the Application and Management of Personal Electronic Information.

Barth, S., & De Jong, M. (2017). The privacy paradox – Investigating discrepancies between expressed privacy concerns and actual online behavior – A systematic literature review. Telematics and Informatics, 34(7), 1038 – 1058. https://doi.org/10.1016/j.tele.2017.04.013

Bietti, E. (2020). Consent as a Free Pass: Platform Power and the Limits of the Informational Turn. Pace Law Review, 40, 317 – 398.

Binns, R. (2020). Human Judgement in Algorithmic Loops: Individual justice and automated decision-making. Regulation & Governance, 1 – 15. https://doi.org/10.1111/rego.12358

Blume, P. (2012). The inherent contradictions in data protection law. International Data Privacy Law, 2(1), 26 – 34. https://doi.org/10.1093/idpl/ipr020

Bolychevsky, I., & Worthington, S. (2018, October 8). Are Personal Data Stores about to become the NEXT BIG THING? [Blog post]. @shevski. https://medium.com/@shevski/are-personal-data-stores-about-to-become-the-next-big-thing-b767295ed842

Brochot, G. (2015). Personal Data Stores [Report]. Cambridge University. https://ec.europa.eu/digital-single-market/en/news/study-personal-data-stores-conducted-cambridge-university-judge-business-school

Brown, W. (2015). Undoing the Demos: Neoliberalism’s Stealth Revolution. Zone Books.

Burrell, J. (2016). How the machine “thinks”: Understanding opacity in machine learning algorithms. Big Data & Society, 3(1), 1–12. https://doi.org/10.1177/2053951715622512

Cate, F. H., & Mayer-Schönberger, V. (2013). Notice and consent in a world of Big data. International Data Privacy Law, 3(2), 67 – 73. https://doi.org/10.1093/idpl/ipt005

Chen, J. (2020). Who is responsible for data processing in smart homes? Reconsidering joint controllership and the household exemption. International Data Privacy Law. https://doi.org/10.1093/idpl/ipaa011

Christl, W. (2017). Corporate Surveillance in Everyday Life [Report]. Cracked Labs. https://crackedlabs.org/en/corporate-surveillance

Cobbe, J. (2020). What lies beneath: Transparency in online service supply chains. Journal of Cyber Policy, 5(1), 65 – 93. https://doi.org/10.1080/23738871.2020.1745860

Cobbe, J., & Singh, J. (2019). Regulating Recommending: Motivations, Considerations, and Principles. European Journal of Law and Technology, 10(3), 1 – 37. http://ejlt.org/index.php/ejlt/article/view/686

Cohen, J. E. (2019). Between Truth and Power: The Legal Constructions of Informational Capitalism. Oxford University Press. https://doi.org/10.1093/oso/9780190246693.001.0001

ControlShift. (2014). Personal Information Management Services – An analysis of an emerging market: Unleashing the power of trust [Report]. ControlShift.

Crabtree, A. (2018). Building Accountability into the Internet of Things: The IoT Databox Model. Journal of Reliable Intelligent Environments, 4, 39 – 55. https://doi.org/10.1007/s40860-018-0054-5

Crabtree, A., & Mortier, R. (2015). Human Data Interaction: Historical Lessons from Social Studies and CSCW. In N. Boulus-Rødje, G. Ellingsen, T. Bratteteig, M. Aanestad, & P. Bjørn (Eds.), ECSCW 2015: Proceedings of the 14th European Conference on Computer Supported Cooperative Work, 19-23 September 2015, Oslo, Norway (pp. 3–21). Springer International Publishing. https://doi.org/10.1007/978-3-319-20499-4_1

Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation), (2016).

E-Privacy Directive – Directive 2002/58/EC of the European Parliament and the Council of 12 July 2002 concerning the processing of personal data and the protection of privacy in the electronic communications sector, (2002). http://data.europa.eu/eli/dir/2002/58/2009-12-19

Ericson, R. V., & Doyle, A. (2003). Risk and Morality. University of Toronto Press.

European Commission. (2020). A European strategy for Data. European Union. https://eur-lex.europa.eu/legal-content/EN/TXT/?qid=1593073685620&uri=CELEX%3A52020DC0066

European Data Protection Board. (2019). Opinion 5/2019 on the interplay between the ePrivacy Directive and the GDPR, in particular regarding the competence, tasks and powers of data protection authorities (Opinion No. 5/2019; pp. 38 – 40). European Data Protection Board.

Fuchs, C. (2011). An Alternative view on the Privacy of Facebook. Information, 2(1), 140 – 165. https://doi.org/10.3390/info2010140

German Data Ethics Commission. (2019). Gutachten der Deutschen Datenethik Kommission [Expert opinion]. Datenethikkomission. https://datenethikkommission.de/wp-content/uploads/191015_DEK_Gutachten_screen.pdf

Hannah-Moffat, K. (2001). Punishment in Disguise: Penal Governance and Canadian Women’s Imprisonment. University of Toronto Press.

Janssen, H., Cobbe, J., Norval, C., & Singh, J. (2020). Decentralised Data Processing: Personal Data Stores and the GDPR [Forthcoming]. https://doi.org/10.2139/ssrn.3570895

Janssen, H., Cobbe, J., & Singh, J. (2019). Personal Data Stores and the GDPR’s lawful grounds for processing personal data. Data for Policy, King’s College London. https://doi.org/10.5281/zenodo.3234880

Kamara, I., & De Hert, P. (2018). Understanding the balancing act behind the legitimate interest of the controller ground: A pragmatic approach. (Working Paper No. 4/12; pp. 1 – 33). Brussels Privacy Hub.

Kerber, W. (2016). Digital Markets, data, and privacy: Competition law, consumer law and data protection. Journal of Intellectual Property Law & Practice, 11(11), 855 – 866. https://doi.org/10.1093/jiplp/jpw150

Lodge, T. (2018). Developing GDPR compliant apps for the edge. Proceedings of the 13th International Workshop on Data Privacy Management, 313 – 328. https://doi.org/10.1007/978-3-030-00305-0_22

Lyon, D. (2017). Surveillance Studies: An Overview. Polity Press.

Mantelero, A. (2014). Social Control, Transparency, and Participation in the Big Data World. Journal of Internet Law, 23 – 29. https://staff.polito.it/alessandro.mantelero/JIL_0414_Mantelero.pdf

Myers West, S. (2019). Data Capitalism: Redefining the Logics of Surveillance and Privacy. Business & Society, 58(1), 20–41. https://doi.org/10.1177/0007650317718185

Ng, I., & Haddadi, H. (2018, December 28). Decentralised AI has the potential to upend the online economy. Wired. https://www.wired.co.uk/article/decentralised-artificial-intelligence

Nissenbaum, H. (2011). A Contextual Approach to Privacy Online. Dædalus, 140(4), 32–48. https://doi.org/10.1162/DAED_a_00113

Palmås, K. (2011). Predicting What You’ll Do Tomorrow: Panspectric Surveillance and the Contemporary Corporation. Surveillance & Society, 8(3), 338 – 354. https://doi.org/10.24908/ss.v8i3.4168

Pearson, S., & Casassa-Mont, M. (2011). Sticky Policies: An Approach for managing Privacy across Multiple Parties. Computer, 44(9), 60 – 68. https://doi.org/10.1109/MC.2011.225

Poikola, A., Kuikkaniemi, K., & Honko, H. (2014). MyData – A Nordic Model for human-centered personal data management and processing [White Paper]. Open Knowledge Finland.

Selbst, A. D., & Powles, J. (2017). Meaningful information and the right to explanation. International Data Privacy Law, 7(4), 233 – 243. https://doi.org/10.1093/idpl/ipx022

Silverman, C. (2019, April 14). Popular Apps In Google’s Play Store Are Abusing Permissions And Committing Ad Fraud. Buzzfeed.

Singh, J. (2017). Big Ideas paper: Policy-driven middleware for a legally-compliant Internet of Things. Proceedings of the 17th ACM International Middleware Conference. https://doi.org/10.1145/2988336.2988349

Singh, J. (2019). Decision Provenance: Harnessing Data Flow for Accountable Systems. IEEE Access, 7, 6562 – 6574. https://doi.org/10.1109/ACCESS.2018.2887201

Sloan, R. H., & Warner, R. (2013). Beyond Notice and Choice: Privacy, Norms, and Consent (Research Paper No. 2013–16; pp. 1 – 34). Chicago-Kent College of Law. https://doi.org/10.2139/ssrn.2239099

Solove, D. (2013). Privacy Self-Management and the Consent Dilemma. Harvard Law Review, 126, 1888 – 1903. https://harvardlawreview.org/2013/05/introduction-privacy-self-management-and-the-consent-dilemma/

Solove, D. (2020, February 11). The Myth of the Privacy Paradox (Research Paper No. 2020–10; Law School Public Law and Legal Theory; Legal Studies). George Washington University. https://doi.org/10.2139/ssrn.3536265

The Royal Society. (2019). Protecting privacy in practice: The current use, development and limits of Privacy Enhancing Technologies in data analysis [Report]. The Royal Society. https://royalsociety.org/topics-policy/projects/privacy-enhancing-technologies/

Tolmie, P. (2016, February). This has to be the cats – personal data legibility in networked sensing systems. Proceedings of the 19th ACM Conference on Computer Supported Cooperative Work. https://doi.org/10.1145/2818048.2819992

Urquhart, L. (2018). Realising the Right to Data Portability for the Domestic Internet of Things. Personal and Ubiquitous Computing, 22, 317 – 332. https://doi.org/10.1007/s00779-017-1069-2

Urquhart, L. (2019). Demonstrably doing accountability in the Internet of Things. International Journal of Law and Information Technology, 2(1), 1 – 27. https://doi.org/10.1093/ijlit/eay015

Wachter, S., & Mittelstadt, B. (2019). A Right to Reasonable Inferences: Re-Thinking Data Protection Law in the Age of Big Data and AI. Columbia Business Law Review, 2, 494 – 620. https://doi.org/10.7916/cblr.v2019i2.3424

Wachter, S., Mittelstadt, B., & Floridi, L. (2017). Why a right to explanation of automated decision-making does not exist in the general data protection regulation. International Data Privacy Law, 7(2), 76–99. https://doi.org/10.1093/idpl/ipx005

Wagner, B. (2019). Liable, but Not in Control? Ensuring Meaningful Human Agency in Automated Decision-Making Systems. Policy & Internet, 11(1), 104 – 122. https://doi.org/10.1002/poi3.198

Yeung, K. (2017). 'Hypernudge’: Big Data as a mode of regulation by design. Information, Communication & Society, 20(1), 118–136. https://doi.org/10.1080/1369118X.2016.1186713

Zuboff, S. (2015). Big other: Surveillance capitalism and the prospects of an information civilization. Journal of Information Technology, 30, 75 – 89. https://doi.org/10.1057/jit.2015.5

Footnotes

1. Note that Hub-of-All-Things (HAT) recently changed its name to Dataswift Ltd; Dataswift Ltd is the commercial enterprise that grew from the university-led HAT research project, which was tasked with building the decentralised HAT infrastructure and its governance model. Where we refer in the text to Dataswift, both the HAT project and the commercial enterprise Dataswift are considered within our analysis.

2. Note that Solid offers the technical infrastructure, while Inrupt is the company offering services that are built on that infrastructure. Where we refer to Inrupt, both the technical infrastructure and the company services come within our analysis.

3. This article builds on our earlier comparative analysis of commercial PDS offerings and different PDS formulations, as focused on data protection concerns (Janssen et al., 2020).

4. Note that a 'device' is conceptual, and can be underpinned by a range of technical architectures. In describing the data and processing 'within' a device, we refer to that logically governed by the device. This means, for example, that the data and compute might not necessarily occur all within a single technical component, but could potentially occur in various locations, e.g. across a range of (managed) cloud services.

5. Note that the terminology varies by platform; not all platforms would describe processing as occurring through apps, though generally there is some conceptually similar construct.

6. Note that despite the similar terms (devices, apps, app stores), PDSs differ from mobile ecosystems, in that PDSs are governance oriented, with far richer and more granular controls. Moreover, the degree of resemblance will depend on the specific formulation of the PDS and its ecosystem – many different approaches are possible.

7. We use ‘collective computation’ simply to refer to computation that occurs across a range of user devices. There is potential for the methods facilitating such computation to employ privacy-enhancing mechanisms (e.g. The Royal Society, 2019).

8. Note that PDSs differ in what they require app developers to describe in their manifests. Databox envisages assessing the risk that an app developer intends to share the data with third parties, while other platforms might not envisage any risk assessment on this aspect (or their documentation does not make explicit that they do).

9. Databox envisages indicating to users, as part of its risk assessment, whether app developers intend to transfer user data beyond the EU (which entails high risks to that data), or whether an app developer transfers personal data to other recipients (which also entails high risks to user data).

Privacy self-management and the issue of privacy externalities: of thwarted expectations, and harmful exploitation

1. Introduction

This article examines the interdependent dimension of privacy and criticises the individualistic framework of notice and consent (hereafter ‘N&C’) through which one’s personal data is in practice protected. This framework is presented as problematic due to the way it obscures the role of data subjects other than the ‘main’ data subject (the user of the product or service, henceforth ‘the user’ of the ‘service’), and thereby prevents privacy externalities from being confronted and adequately addressed. ‘Externality’ is a term from the field of economics designating a cost or benefit arising as the by-product of an activity: it occurs when the production or consumption of a good or service by an agent imposes a cost or benefit on an unrelated third party (Tietenberg and Lewis, 2018, p. 26). A textbook example of the concept would be the activity of an industry which pollutes a water stream, generating profits for those actively engaged in the activity but also covertly impacting the health of locals. By extension, the concept of privacy externality refers to the inclusion of others’ personal data in the processing activity agreed to between the controller and the user, whereby costs are imposed on these third-party data subjects: the undermining of their privacy and of their right to data protection, as well as potential harm.

For example, we routinely upload pictures of others to proprietary platforms such as Facebook. We disclose the genetic data of our whole family, together with our own, when we get DNA testing kits from companies such as MyHeritage. The discussions we have with our friends fuel the training of Amazon’s AI when they enter our Alexa-equipped ‘smart home’. None of the aforementioned individuals in practice benefits from adequate privacy protection, because the means we too often primarily rely on to ensure the protection of data subjects’ personal data (such as contract-like Terms of Service between user and service provider) allow only the user to exercise data protection rights (Solove, 2013). This article thus shows that, independently of and in spite of one’s efforts to manage it, one’s privacy and right to data protection can be fundamentally undermined by the behaviour of others; further, that this disclosure can be (and often is) exploited by data-hungry organisations whose business model is the insatiable extraction, accumulation and monetisation of personal data (Shoshana Zuboff’s surveillance capitalism (2015, 2019); see also European Data Protection Supervisor (EDPS), 2020, p. 5).

The economic aspect of privacy externalities has hitherto often remained absent from the debate about the phenomenon. Indeed, the interdependent dimension of privacy, as well as the issue of privacy externalities, have been directly addressed in legal, policy and philosophical scholarship since at least the 2010s. Part of the contribution made by this article is the collection of relevant literature, which otherwise stands in isolated clusters and refers to a similar phenomenon using different concepts, such as: joint controllership and privacy infringements (Helberger and van Hoboken, 2010; van Alsenoy, 2015; Edwards et al., 2019) or infringements of data protection law and networked services (Mahieu et al., 2019); collective privacy (Squicciarini et al., 2009) and collective action problems in privacy law (Strahilevitz, 2010); multi-party privacy (Thomas et al., 2010); collateral damage and spillover (Hull et al., 2011; Symeonidis et al., 2016); interpersonal management of disclosure (Lampinen et al., 2011); networked privacy (boyd, 2011; Lampinen, 2015; Marwick and boyd, 2014); interdependent privacy (Biczók and Chia, 2013; Symeonidis et al., 2016; Pu and Grossklags, 2017; Kamleitner and Mitchell, 2019); peer privacy (Chen et al., 2015; Ozdemir et al., 2017); multiple subjects’ personal data (Gnesi et al., 2014); privacy leak factor, shadow profiles and online privacy as a collective phenomenon (Sarigol et al., 2014); privacy externalities (Laudon, 1996, pp. 14-16; MacCarthy, 2011; Humbert et al., 2015, 2020; Symeonidis et al., 2016; Choi et al., 2019), especially as compared to externalities in the context of environmental pollution (Hirsch, 2006, 2014; Hirsch and King, 2016; Froomkin, 2015; Nehf, 2003; Ben-Shahar, 2019); 1 genetic groups (Hallinan and De Hert, 2017); or sociogenetic risks (May, 2018). 2

While the phenomenon has thus been addressed in scholarly and policy settings already (although often with a different goal or scope), the present article frames it in a way which puts into light an important aspect hitherto mostly unaddressed. This aspect is the financial incentives and the exploitative dynamics behind these disclosures of others’ data; it is not only a major factor in making the phenomenon ethically problematic, it is also the very reason the phenomenon is perpetuated. These incentives and dynamics give competition and consumer-protection ramifications to this data protection issue, and failing to pick up on them has hindered scholars and authorities from adequately grasping and addressing the problematic phenomenon.

This concern about externalities is moreover different from more traditional data protection issues of inappropriate disclosure such as leaks and hacks: privacy externalities are not only about bad personal data management, but also about impossible personal data management. Privacy cannot adequately be managed alone, as it is in some aspects necessarily an interdependent matter. Whereas this is a neutral fact about the world, the way we (do not) deal with it is problematic, because individual users and controllers take advantage of it and allow costs to be imposed onto others, undermining their privacy. This is even more deeply problematic as the current data ecosystem (which generally harvests every bit of data for monetisation or exploitation) has been designed in a way that often amplifies the negative nature of privacy externalities. Framing the issue as one of privacy externalities and exploitation, instead of as the mere downside of certain technologies, is moreover important if we want to have an adequate philosophical, societal and juridical debate on the issue of privacy externalities, because it allows us to recognise the responsibilities upon which the relevant parties fail to act.

In this article, I begin by introducing the ideal of privacy self-management which, in an ecosystem that heavily relies on consent as the legal basis for data processing, is de facto commonly imposed onto data subjects through the notice and consent (N&C) framework; this self-management ideal is contrasted with the reality of the interdependent dimension of privacy (section 2). I argue that improperly taking this dimension into account allows for the creation of privacy externalities, whereby third parties inconspicuously and unfairly pay (part of) the price for others’ benefit; moreover, I argue that this is the term most appropriate for conceptualising and analysing the phenomenon (section 3). Building upon the concepts and concrete examples discussed in the existing body of literature collected, I then attempt to draw a systematic and comprehensive picture of the phenomenon, analysing the various forms it takes (section 3.1). Finally, I briefly explore two possible ways of addressing the issue of privacy externalities (section 3.2).

In terms of methodology, this article conducts a conceptual analysis of a concrete issue (privacy externalities), combining theoretical insights from the field of economics with knowledge of data protection legislation and real-life examples. This analysis responds to, and is informed by, relevant works in the existing literature.

2. Privacy self-management and interdependent privacy

The 2016/679 General Data Protection Regulation (GDPR) states (art. 5) that personal data shall be (a) processed lawfully, fairly and in a transparent manner in relation to the data subject, (b) collected for specified, explicit and legitimate purposes and not further processed in a manner that is incompatible with those purposes, and (c) adequate, relevant and limited to what is necessary in relation to the purposes for which they are processed. While additional principles are important in the European data protection regulation, these ‘lawfulness,’ ‘fairness,’ ‘transparency,’ ‘purpose-limitation’ and ‘data minimisation’ principles are its pillars.

To ensure the lawfulness of their processing, however, the majority of controllers in practice rely on only one of the multiple grounds available: consent. Consent, as an expression of individual autonomy, is accorded great value in Europe and particularly in the field of data protection, with the consequence that some controllers over-rely on it or use it to (erroneously attempt to) legitimise routine or even disproportionate data processing (for an in-depth analysis of this topic, see van Alsenoy et al., 2013, pp. 4-6). In consequence, the framework of N&C (especially through online privacy notices) has sprung forward as the de facto preferred means for controllers to ensure the transparency of their practices and to collect the consent of data subjects (see also Barocas and Nissenbaum, 2014; Hull, 2015; Mantelero, 2017, p. 72). In practice, this, together with a widespread business model relying on the collection and monetisation or exploitation of personal data (EDPS, 2020, p. 5; Holloway, 2019), has led to an individualistic system of personal data protection where the consent of individuals is repeatedly queried for a multitude of purposes, whereas in theory, a data subject would not necessarily have to micro-manage their privacy as much as they currently do.

This means that privacy management often takes the contractual form of two parties agreeing about the processing (the collection, use, disclosure, etc) of the data subject’s personal information (personal data), in exchange for a service offered by the controller. It is furthermore reflected in one of the currently dominant legal and philosophical definitions of privacy, which is: the relative control over the ways and the extent to which one selectively discloses (information about) oneself to others. 3

This (over-)reliance on consent means, in practice, that the privacy of individuals is protected only per individual, i.e., it is achieved in an individualistic fashion, where data subjects have to (and are expected to) micro-manage their privacy (Whitley, 2009; Solove, 2013; van Alsenoy et al., 2013; Mantelero, 2014; Taylor et al., 2017, p. 6). In addition to the burden of self-management it creates for individuals, it will become clear that this individualism is also problematic because it obscures the fact that, in many instances, the data subject’s choice to consent in fact impacts other data subjects, and thereby pre-empts these third parties’ own consent. Indeed, privacy has both a collective and an interdependent dimension to it.

To see this, one has to understand the scope of the GDPR’s definition of personal data, which is

any information relating to an identified or identifiable natural person (‘data subject’); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person. (GDPR, art. 4.1)

This definition is extensive, and protects data subjects whenever information about them is processed. Crucially for this article, the scope of this definition also entails that one’s personal data may also be another’s personal data. When I upload material on a website, it is related to me (and, therefore, is my personal data) in that it is uploaded by me, and about me—two relations of ‘identifiable relatedness’ arguably relevant for constituting personal data. Accordingly, when I upload content clearly about someone else (henceforth a ‘third-party subject’), it is both my personal data and theirs—as long as they are identifiable—because, although it is uploaded by me, it is about them. These relations can be referred to as ‘causal agency’ and ‘personal relevance.’ 4

Controllers rarely (if ever) provide N&C or other rights to data subjects who have only a ‘personal relevance’ relation with the material (data) processed, i.e., who do not also have a ‘causal agency’ relation to it. For instance, Facebook has a portal dedicated to the provision of their personal data to Facebook users; yet, this access is restricted to “information you’ve entered, uploaded or shared [yourself].” 5 This is incoherent when one realises that the range of our personal data processed (often knowingly) by Facebook exceeds the data we have provided ourselves. This also means that a narrow understanding of personal data is often applied, and therefore that many data subjects’ right to effective data protection is unfairly restrained.

The distinction made between the two kinds of ‘identifiability’ is important, because it allows me to identify and frame a major obstacle to privacy self-management: the interdependent dimension of privacy, i.e., the idea that in a networked world, one’s privacy depends at least partly on others (and on the choices they have themselves made regarding their own privacy). While I may decide what information about myself I give to the world (and to individual controllers), others may decide it for me as well; I am thus at least partly dependent on others for retaining my informational privacy. This interdependent dimension is an obstacle insofar as privacy is framed as an individualistic matter (through the N&C mechanism that is the favoured tool of many controllers to achieve appropriate data protection), an aspect of one’s life which is self-(sufficiently-)manageable.

3. Privacy externalities

As mentioned earlier, an externality is a cost or benefit imposed on a third party who did not choose to incur that cost or benefit, and which is the by-product of an activity (such as the production or consumption of a service). Externalities often occur when the equilibrium price (i.e., the price when supply and demand are balanced) for the production or consumption of a service does not reflect the true costs or benefits of that service for society as a whole (see Heath, 2007). In the context of informational privacy, this article argues that people’s decisions to use certain services, or to share their personal information, may allow the data controller to know more about them, but also about others. To the (limited) extent that people can be said to ‘pay’ for a service with their data, 6 part of the price is actually also other people’s data. That is, the full costs of the production or consumption of the service include the impact on others’ privacy and the (dis)utility resulting therefrom—a form of latent harm peculiar to the 21st century (Calo, 2011; Laudon, 1996, pp. 16-17; see also Article 29 Data Protection Working Party (WP29), 2014, p. 37; see also van Dijk et al., 2016, on “increased risks to rights”; see also Ben-Shahar, 2019, on how these externalities “undermine and degrade public goods and interests”); 7 the problem is that this is neither transparent nor accounted for in the transaction between user and service provider. Hence the term privacy externalities.
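
In textbook welfare-economics terms, the mismatch can be written as the marginal social cost of an activity exceeding the marginal private cost faced by the transacting parties by the marginal external cost imposed on third parties. The formulation below is that generic identity applied by analogy to privacy externalities; it is illustrative notation, not a quantity this article proposes to measure.

```latex
% Textbook externality identity, applied by analogy:
% MSC = marginal social cost, MPC = marginal private cost (borne by user and controller),
% MEC = marginal external cost (privacy costs borne by third-party data subjects).
MSC(q) = MPC(q) + MEC(q),
\qquad \text{while the equilibrium price reflects only } MPC(q).
```

On this reading, the ‘price’ the user and controller face tracks only MPC; the privacy costs carried by third-party data subjects sit in MEC and go unpriced.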

Referring to the phenomenon as privacy externalities allows me to capture a crucial aspect of the issue: the cost of services in the digital, hyperconnected era and, furthermore, the externalisation of these costs. While other terms used to refer to the phenomenon (see section 1) conceptualise it as a mere side-effect of certain digital practices, using the concept of externality brings to light the fact that this side-effect is not neutral, i.e., that users and/or controllers are not indifferent to it (whether they are conscious of it or not). On the one hand, by not investing as much as they should in the design of their service, and by not addressing all their obligations toward (third-party) data subjects, controllers can de facto dump costs and responsibilities onto the user (such as the duty to notify the user’s peers of the data processing, in the case of smart homes), 8 thereby saving resources. On the other hand, by not carefully choosing privacy-respecting services (when that is possible), and/or by not taking adequate precautions for others’ privacy when using these services, users may often themselves be dumping costs onto third-party subjects: the infringement of their privacy, increased risks to their rights, potential harm, as well as the time and energy required for taking the appropriate measures (when possible). 9 This means that privacy externalities can cause distortions in the production and consumption of social goods, by making the perceived price of a service lower than the actual total cost, and therefore more attractive than it should be.

Moreover, because some service providers’ business model relies on the accumulation and monetisation of as much data as possible (Zuboff, 2015; EDPS, 2020, p. 5; Holloway, 2019), privacy externalities are costs for third-party subjects not only in the sense that they expose the latter and undermine their rights to privacy and data protection, but also in the sense that they make way for profit-driven controllers to (illegally) exploit this data for their own benefit at the expense of the data subjects (see esp. Court of Justice of the European Union (CJEU), 2019, para 80, and Ben-Shahar, 2019, p. 115; see also Humbert et al., 2020, on service providers as “adversaries”). For instance, exploiting the externalities generated through the sharing of users’ contact data is part of Facebook’s massive targeted prediction and advertising endeavour, which is how the company makes most of its profit (Venkatadri et al., 2019). Similarly, direct-to-consumer genetic testing services which offer to predict medical risk factors and to reveal ancestry or genealogy actually make profit through reusing the data for medical research, profiling, and offering their services to law enforcement (EDPS, 2020, pp. 5, 25). Thus not only does the flawed price of the relevant services allow controllers to save resources, it also leads users to consume more of these services, feeding the controllers even more data to extract value from. Realising the potential residing in these troves of data, some rogue controllers may even intentionally design the structure of their services so as to encourage and capture such privacy externalities, leading powerless, careless or unaware users to provide the system with not only data about themselves, but also about others.

These passively and actively beneficial aspects of privacy externalities are thus, effectively, incentives for the perpetuation of the phenomenon. They are crucial to understanding and tackling it, and their absence from the existing literature on the topic is therefore regrettable. Moreover, in addition to not picking up on the economic aspect of the phenomenon, the authors of the works cited in section 1 often only addressed it in relation to a single context—such as social networks or databases. Similarly, when the issue was addressed at court- or policy-level, the kind of privacy externalities taken into account did not necessarily reflect the whole range of the phenomenon (see the categories discussed below). This substantially limited the scope of both their analysis and the solutions they sought, with, for instance, the CJEU and the WP29 focusing on the ‘disclosure to [a certain number] and [a certain kind] of peers’ as a criterion determining the wrongness and illegitimacy of the processing (CJEU, 2003; WP29, 2013, p. 2).

Further, and especially as these works are scattered in ‘clusters’ that do not necessarily refer to each other, 10 the absence of the broader perspective in these studies would lead one to believe, at first sight at least, that these works (or clusters) each address a different issue. By abstracting the contingencies from each case and putting them all under the umbrella of privacy externalities, however, one can connect the different clusters, and it becomes apparent that there actually is a whole body of literature on the phenomenon, rather than scattered studies of different phenomena.

Still in contrast to the works referenced above, the present article draws a clear distinction between two separate things that their authors often treat as inextricably interwoven, and unites their work as revolving around this distinction. This distinction is between the interdependent (or networked, interpersonal, collective, social) aspect of privacy—which is a necessary fact about the world—and the phenomenon of privacy externalities (or spillovers, collateral damages, disclosures, leaks)—which is partly contingent on controllers’ and users’ decisions. This distinction, and especially this contingency (i.e., the fact that the phenomenon depends on other factors, such as default privacy settings or the way a service is used), is important when (if) the responsibility of the various actors is addressed (section 3.2).

We may thus start to see the broader picture, and to focus on the cause of the problem instead of its symptoms. This article argues that privacy externalities are mostly the result of the necessarily interdependent dimension of individual privacy being coupled with economic incentives to externalise certain costs (and to exploit and monetise them further in complex and obscure ways when possible). The phenomenon is widespread and may produce or amplify future harm (including intrusive predictions and advertising), at least when the issue is systemic and the externalities accumulate. Moreover, the phenomenon violates individuals’ right to privacy and threatens the ideal of privacy self-management itself, independently of whether it produces concrete (latent, tangible or intangible) harm, and independently of whether it is exploited by the controller; it is yet another risk to the rights of data subjects. As Martin Tisne (2019, n.p.) succinctly puts it, “we are not only bound by other people’s data; we are bound by other people’s consent [… and] this effectively renders my denial of consent meaningless. […] The issue is systemic, it is not one where a lone individual can make a choice and opt out of the system” (see also Barocas and Nissenbaum, 2014). This practice, whereby one’s consent is overridden, should receive the attention it deserves, especially in light of the expectations of privacy self-management.

Disclosure of others’ personal data through one’s activities can be repetitive, commonplace, extensive and substantial, and is thus a serious issue. Building upon the concepts and examples discussed in the existing literature, I will now take a closer, systematic look at the various forms privacy externalities can take.

3.1. Four different kinds of disclosure

Privacy externalities can take multiple forms, each problematic in its own way. Once abstracted from their individual contingencies, they can be separated into the following four (possibly overlapping) categories:

  1. Direct disclosure: data is revealed about subject A when subject B discloses data about subject A.
  2. Indiscriminate sensing: data is revealed about subject A when subject B reveals data about subject B that was formed through an indiscriminate process of capture, and which therefore included data about subject A alongside the data of subject B.
  3. Fundamentally interpersonal data: data is revealed about subject A when subject B reveals data about subject B, which necessarily is also data about subject A.
  4. Predictive analytics: subject B discloses data about subject B, from which the data controller is able to infer or predict more data about subject B as well as about subject A.

The difference between categories (2) and (3) is that in the former, the interpersonal data (the term used here for data which is about more than one subject) is only contingently interpersonal, whereas in the latter it is necessarily so. In the former, the data could have been only about the user, if she had been cautious for instance; that is not an option in the latter category. The distinction becomes clearer with examples from each category:

  1. Direct disclosure: as long as it is digitally-recorded, any activity that consists in explicitly discussing someone counts as revealing that person’s personal data, and thus as an activity relevant to interpersonal privacy. This includes blogging about people (Solove, 2007, p. 24); talking about them and posting pictures of them on social networks (Wong, 2009, p. 143 et seq.; van Alsenoy et al., 2009, p. 70; Helberger and van Hoboken, 2010; College Bescherming Persoonsgegevens (‘CBP’), 2007, pp. 12-13; Belgisch Commissie voor de Bescherming van de Persoonlijke Levenssfeer, 2007, pp. 21-22); outing a sexual preference online, broadcasting a traumatic experience, public shaming or posting ‘revenge porn’ (van Alsenoy, 2015); or “tagging” others (see Privacy International, 2019 about the app ‘TrueCaller’). Besides this, category 1 also involves directly handing over other people’s data to the data controller, like when Facebook apps ask the user to access her friends’ list and their data (Besmer and Lipford, 2010; Hull et al., 2011; Biczók and Chia, 2013; Symeonidis et al., 2016; Facebook Inc., 2018). Moreover, embedding a Facebook “Like” button into one’s personal website (CJEU, 2019, paras 76-77) de facto means handing over the personal data of visitors to Facebook, and similarly for other buttons and third-party services allowing behavioural targeting and user analytics (Mahieu et al., 2019).
  2. Indiscriminate sensing: recording one’s voice or environment often also implies indiscriminately recording others. Sensors capture all the available data of a given category (e.g., sound or image) within a perimeter, and do not discriminate between consenting and non-consenting data subjects. Therefore, the following activities will also capture the personal data of other people who may neither be aware nor capable of resisting the invasion of their privacy: uploading pictures of crowded places on social media; using a drone or Google Glass (van Alsenoy, 2015; EDPS, 2014a); driving someone in one’s connected car (EDPS, 2019a, p. 3) or just driving a self-driving car around; ‘Netflix & Chilling’ in front of a smart TV; relying on a Ring doorbell (Herrman, 2020); using ‘voice assistants’ 11 or ‘smart’ speakers in one’s home (EDPS, 2019b, p. 3). Recording events in sound or image can be a sensitive practice, because many personal aspects of one’s and others’ lives can thus be made available to data controllers, including sensitive data like political opinions, religious beliefs, or health data (Vallet, 2019). This data can moreover be automatically ‘mined’ by image-processing, voice-processing, and facial-recognition software. This category is quite broad, and includes CCTV (ICO, 2017; CJEU, 2014b), Internet of Things objects, and smart homes (see Kitchin, 2014b).
  3. Fundamentally interpersonal data: there are some kinds of data which necessarily constitute or reveal personal data of multiple persons. A striking example is genetic data: granting a data controller the right to process your genetic data not only affects you and your privacy, but also potentially countless individuals to whom you are related—knowingly or unknowingly (Chadwick et al., 2014; Olejnik et al., 2014; Hallinan and De Hert, 2017; Taylor et al., 2017, p. 9; Erlich et al., 2018; May, 2018; Molteni, 2019). Because certain genetic traits are necessarily shared with family members, it suffices for a single person to undertake such an analysis to produce a kind of ‘family-wide sharing of personal data’ (i.e., a generational data breach). Other practices involving such interpersonal data include telecommunications (where the metadata reveals at least the identity of correspondents and the frequency of calls); the use of certain email providers (Dodds and Murphy, 2018; Ben-Shahar, 2019, p. 115); or the use of a shared system (such as smart grids, see McDaniel and McLaughlin, 2009). Finally, the category of fundamentally interpersonal data also includes relational data (Jernigan and Mistree, 2009; boyd, 2011; Backstrom and Kleinberg, 2014; see also the activity of address book sharing described in section 3.2.1), as well as data about groups (such as households or neighbourhoods) (Taylor et al., 2017).
  4. Predictive analytics: when enough people disclose ample information about themselves, data controllers (especially data brokers) are able to learn how a given trait relates to a specific characteristic. For example, there is a correlation between buying felt pads to prevent one’s furniture from scratching the floor, on the one hand, and paying one’s bills on time, on the other (Duhigg, 2009). When correlations like these have been found (through mining massive troves of data), the small, seemingly insignificant pieces of information that even prudent people disclose (willingly or not) will reveal more data about them, whether they like it or not (Barocas and Nissenbaum, 2014, the “tyranny of the minority”; Choi et al., 2019, p. 8; Wachter and Mittelstadt, 2019). This is the case with the ‘dynamic’ groups produced by profiling, ‘Big Data’ analytics, predictive analytics and recommendation systems (Vedder, 1997; boyd et al., 2014; Mantelero, 2014, 2017); a minimal sketch of this mechanism is given just after this list. 12
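
To make the mechanism behind category 4 concrete, the following sketch (illustrative only, using invented toy data rather than anything from the sources cited above) shows how a controller can attach an estimate of an undisclosed trait to a cautious individual on the basis of a single innocuous disclosure, once a correlation has been learned from the data of many other, more forthcoming users:

```python
# Toy purchase records "disclosed" by many users (entirely invented data)
records = [
    {"buys_felt_pads": True,  "pays_on_time": True},
    {"buys_felt_pads": True,  "pays_on_time": True},
    {"buys_felt_pads": True,  "pays_on_time": False},
    {"buys_felt_pads": False, "pays_on_time": False},
    {"buys_felt_pads": False, "pays_on_time": True},
    {"buys_felt_pads": False, "pays_on_time": False},
]

def conditional_rate(feature_value: bool) -> float:
    """Estimate P(pays_on_time | buys_felt_pads == feature_value) from the records."""
    subset = [r for r in records if r["buys_felt_pads"] == feature_value]
    return sum(r["pays_on_time"] for r in subset) / len(subset)

# A prudent third party discloses only one innocuous purchase...
new_customer_buys_felt_pads = True
# ...yet the controller can now attach a creditworthiness estimate to them.
print(f"Inferred P(pays on time): {conditional_rate(new_customer_buys_felt_pads):.2f}")
```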

These four categories and the examples provided show how numerous and diverse the cases are in which one’s behaviour can negatively impact (the privacy of) others, and thus that the issue at stake here is not a rare or minor one. Each of the non-sensitive pieces of data that are thereby processed may seem innocuous on its own; however, not only does their processing remain an encroachment on and increased risk to third-party subjects’ fundamental rights, but when the phenomenon is widespread, the aggregation of all its instances will worsen its potential to do harm. Furthermore, even the smallest disclosures are significant, due to the possibility of the data being exchanged with others (such as data brokers; see Symeonidis et al., 2016; Choi et al., 2019, p. 8). Finally, in some cases (such as with biometric or genetic data) the data can be very sensitive, and the harm brought by the disclosure can be lifelong.

A screenshot of the Facebook Messenger app's notification requesting the user to "continuously upload info about [their] contacts", in order to let friends find each other on the platform and to help Facebook "create a better experience for everyone". The options offered are "turn on", "not now" or "manage your contacts".
Figure 1: Notification from the Facebook Messenger app requesting access to the user’s contacts

3.2. Whose responsibility?

Different categories of privacy externalities will plausibly require different coping strategies; for instance, categories 1 and 4 seem to be unavoidable, to a certain extent, and would motivate a mitigating strategy rather than a prevention strategy. Solving the issue of privacy externalities is beyond the scope of this article; what it can do before closing, however, is briefly explore two promising paths.

The common denominator of the most problematic kinds of privacy externalities is the perpetuating force behind them, i.e., the passive and active benefits of externalities—respectively, dumping costs and (the potential for) exploiting the third-party subjects’ data. Tackling these incentives should be at the heart of any response to the phenomenon. However, it should be noted that while the active benefits are enjoyed by data controllers alone, the passive ones (cheaper prices, less effort required, etc.) are enjoyed by both the controllers and the users. While focusing on data controllers is therefore the logical place to start (and thus the first path examined), the role of users should not be overlooked.

3.2.1. Enforcing data protection by design and by default

The controller often plays an important role in the generation of externalities. For instance, some controllers offer services that request the personal data of the user’s peers even though they could function without it. The comparison between the messaging apps Facebook Messenger and Signal illustrates this well.

Messenger asks the user to (consent to) upload her contacts to Facebook’s servers, and to do so continuously (see figure 1). Facebook thus stores the contacts’ data internally, with the ensuing function creep Facebook is notorious for (Gebhart, 2018; Venkatadri et al., 2019). Signal, on the other hand, periodically sends the user’s contacts’ phone numbers to its servers in truncated, cryptographically-hashed form; it then identifies the overlap (i.e., the user’s contacts who also use Signal) and indicates this overlap on the user’s device exclusively, after which the server discards the information it received about the user’s contacts. 13
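
The general logic of this kind of hash-based contact discovery can be sketched as follows (an illustrative toy example with invented phone numbers, not Signal’s actual code or protocol, which involves further safeguards):

```python
import hashlib

def truncated_hash(phone_number: str, length: int = 10) -> str:
    """Hash a normalised phone number and keep only a short prefix of the digest."""
    digest = hashlib.sha256(phone_number.encode("utf-8")).hexdigest()
    return digest[:length]

# --- Client side (the user's device) ---
address_book = ["+15551230001", "+15551230002", "+15551230003"]
hashed_contacts = {truncated_hash(n): n for n in address_book}

# --- Server side (hypothetical discovery service) ---
registered_users = {"+15551230002", "+15559990000"}
registered_hashes = {truncated_hash(n) for n in registered_users}

def discover(client_hashes) -> set:
    """Return only the overlapping truncated hashes; the received hashes are then discarded."""
    return set(client_hashes) & registered_hashes

# --- Back on the client: map the matches to readable contacts ---
matches = discover(hashed_contacts.keys())
mutual_contacts = [hashed_contacts[h] for h in matches]
print(mutual_contacts)  # ['+15551230002']
```

The design choice illustrated here is that the plaintext address book never leaves the device and nothing about non-matching contacts is retained server-side, which is precisely what limits the externality.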

In general, even if only limited data, such as a nickname and a phone number, were disclosed for each contact in the user’s list, it would remain a potentially fruitful acquisition for the controller: the widespread disclosure by users of their contact lists would allow a controller as privacy-invasive as Facebook to identify the overlapping contacts in users’ phones, create network maps and start building ‘shadow profiles’ about non-users (WP29, 2009, p. 8; Sarigol et al., 2014; boyd et al., 2014; Levy, 2020, p. 222). Even solely knowing about this network of relations is valuable to the data controller, based on homophily—the tendency people have to interact with others who are similar to them. Homophily can be relied on to infer the “ethnicity, gender, income, political views and more” of people based on their communication networks (Caughlin et al., 2013, p. 1; see also Sarigol et al., 2014; Garcia, 2017; Jernigan and Mistree, 2009 (the “Gaydar”)). Thus, my ability to remain under Facebook’s (or others’) radar is heavily undermined by other individuals’ seemingly innocuous actions, which disclose information not only about them, but also (foreseeably) about me—even if I am not a Facebook user myself. This is not the case for data subjects using Signal.
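
A crude illustration of homophily-based inference is given below (a toy sketch with invented names and attributes; real inference pipelines are statistical and far more elaborate): once enough peers have disclosed an attribute, the most common value among a person’s known contacts already provides a workable guess about that person, even if she disclosed nothing herself.

```python
from collections import Counter
from typing import Optional

# Hypothetical contact graph reconstructed from uploaded address books
contacts = {
    "alice": ["bob", "carol", "dave"],
    "eve": ["bob", "carol"],
}

# Attribute values already known to the controller (e.g., self-disclosed by these users)
known_affiliation = {"bob": "party_X", "carol": "party_X", "dave": "party_Y"}

def infer_affiliation(person: str) -> Optional[str]:
    """Guess an undisclosed attribute as the most common value among the person's known peers."""
    votes = Counter(
        known_affiliation[peer]
        for peer in contacts.get(person, [])
        if peer in known_affiliation
    )
    return votes.most_common(1)[0][0] if votes else None

# 'alice' never disclosed an affiliation, yet a plausible guess can be made about her
print(infer_affiliation("alice"))  # party_X
```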

While Facebook in this case is invasive by design, Signal follows the approach of Data Protection by Design and by Default (DPbDD), which requires (GDPR art. 25) taking technical and organisational measures to (a) implement data-protection principles in an effective manner and (b) ensure that only personal data which are necessary for each specific purpose of the processing are processed. DPbDD forces complying controllers to take the necessary steps to prevent, contain and mitigate the privacy externalities that might result from (the way they offer) their services. As such, the strength of DPbDD is that it is a solution generic enough to be applied to privacy externalities beyond messaging services (i.e., to “handle various data types and adversaries” (Humbert et al., 2020, p. 33)). For instance, Facebook has incorporated some mechanisms to reduce privacy externalities on its platform, such as requiring the peers’ assent before a user can tag them in a picture or make them appear on her Facebook wall. Another example of useful DPbDD is found in clinics, where there are legal and other mechanisms governing conduct when genetic information about an inherited disease is relevant to the tested person’s relatives, or for cases where a diagnosis incidentally reveals misattributed paternity.
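
The peer-assent mechanism mentioned above can itself be sketched in a few lines (an illustrative toy model, not Facebook’s actual implementation): the privacy-protective default is that nothing is published about the peer until she has explicitly assented.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TagRequest:
    photo_id: str
    tagger: str
    tagged_person: str
    approved: bool = False

@dataclass
class PhotoService:
    pending: List[TagRequest] = field(default_factory=list)
    published_tags: List[TagRequest] = field(default_factory=list)

    def request_tag(self, photo_id: str, tagger: str, tagged_person: str) -> TagRequest:
        """Hold the tag back (privacy-protective default) until the tagged peer assents."""
        request = TagRequest(photo_id, tagger, tagged_person)
        self.pending.append(request)
        return request

    def respond(self, request: TagRequest, assent: bool) -> None:
        """Publish the tag only on explicit assent; otherwise discard it."""
        self.pending.remove(request)
        if assent:
            request.approved = True
            self.published_tags.append(request)

service = PhotoService()
request = service.request_tag("IMG_001", tagger="alice", tagged_person="bob")
service.respond(request, assent=True)
print([t.tagged_person for t in service.published_tags])  # ['bob']
```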

While DPbDD requirements are specified in the GDPR and are thus the remit of data protection authorities, privacy externalities have hitherto persisted nearly unchallenged (perhaps due to these authorities’ lack of adequate funding; see Ryan and Toner, 2020; Satariano, 2020). In light of the harm certain practices can cause to third-party subjects, it could be argued that other authorities, especially consumer-protection authorities, should give data protection issues more serious consideration (without prejudice to the data protection authorities' powers) (on this, see Rhoen, 2016; see also EDPS, 2014b). Further research is needed to explore the extent to which this is possible; either way, the idea is that the stricter enforcement of DPbDD requirements—especially for services that seem to be invasive by design (rather than merely not designed with privacy in mind (see Helberger and van Hoboken, 2010, p. 106; see also CJEU, 2019, para 80))—could efficiently address part of the privacy externalities (van Alsenoy, 2015, p. 32; Edwards et al., 2019), i.e., the part where controllers are otherwise incentivised to dump certain costs and obligations onto the user and third-party subjects.

One should be held accountable when one facilitates risks and harms for the peers of the users of one’s services; inaction is unacceptable, all the more so when one profits from it, and actively taking advantage of the issue should be out of the question. Yet the whole issue cannot be averted through the enforcement of these DPbDD requirements alone, because users often play an important part in the creation of externalities (through the way they use certain technologies, or the invasive practices they opt in to), and because the issue can sometimes be most effectively and cost-efficiently addressed by users themselves. 14 The question is: in the current state of affairs, how much can we rely on individuals to adequately internalise the costs of their behaviour? This question leads us to a second, arguably more intricate, way out: the framework of joint controllership.

3.2.2. Joint controllership

To illustrate the need for this complementary strategy, let us take the case of the smart home. A smart home is a data-driven environment which necessarily monitors all its occupants to provide its services, since its sensors most of the time cannot distinguish between the user and her relatives or visitors. In such a scenario, it is inevitable that the service will generate privacy externalities, and it is unclear whether thorough DPbDD would adequately prevent or mitigate them all.

In essence, when multiple natural or legal persons determine the purposes and means of the processing of personal data, under the GDPR they are joint controllers, and each is responsible for the part of the processing that it controls, to the degree it has control over it (see GDPR art. 26). Following European jurisprudence (CJEU, 2018) and guidance from the WP29 (2010, p. 18), it appears that “[i]nfluencing the processing (or agreeing to the processing and making it possible) [is] enough to qualify as determining both the purposes and the means of that processing operation” (Mahieu et al., 2019, p. 95).

If the user can indeed be considered a joint controller in such cases (we will see shortly that this claim may be contested), privacy externalities would be internalised (or their negative impact reduced) insofar as the user would be legally responsible for any inadequate processing of her peers’ personal data, and would hence be incentivised to take measures to avoid such unlawful processing—such as giving appropriate notice to visitors, or turning the smart devices off before they enter the premises. 15 However, important uncertainties remain regarding how this framework is to be applied, and these raise substantial doubts as to the extent to which joint controllership could form (part of) the solution to the issue of privacy externalities. They are briefly listed below:

1. A first issue is that “the framework for assigning responsibilities to different stages of processing and different degrees of responsibilities is underdeveloped; there are no guidelines for assigning specific responsibilities to specific ‘stages’, no clear principles to determine different ‘degrees of responsibility’, nor criteria to connect particular consequences (enforcement actions) to particular levels of responsibility” (Mahieu et al., 2019, p. 99). That is, joint controllership as an effective framework of governance might not be mature enough yet, for this specific context at least.

2. A second issue comes from the GDPR’s ‘household exemption’, which states (Recital 18) that the GDPR “does not apply to the processing of personal data by a natural person in the course of a purely personal or household activity and thus with no connection to a professional or commercial activity,” though it “applies to controllers or processors which provide the means for processing personal data for such personal or household activities”. 16 When it comes to its application to privacy externalities, recent judgements from the CJEU (2003, 2014a, 2014b, 2018, 2019) advance criteria for determining whether data subjects using certain services should (a) be considered joint controllers and (b) benefit from the household exemption. The criteria put forward in these judgements would exclude many of the privacy externalities discussed above, 17 but not all externalities would be dismissed: depending on the weight accorded to a criterion endorsed in the Fashion ID CJEU ruling (2019, para 80: that the “processing operations are performed in the economic interests of both” parties), all the externalities passively beneficial to both the user and the controller would be admissible. Furthermore, the active exploitation (in various forms) of third-party subjects’ data by controllers, which is a crucial component of the privacy externalities that are most problematic, may also mean that the household/personal activity actually often has an important connection to a commercial activity (or at least that the distinction between personal and commercial is blurred), and may thus not benefit from the exemption (see also WP29, 2017, p. 8). 18 For a more in-depth discussion of privacy externalities, joint controllership and the household exemption, see also De Conca, 2020.

3. A third issue to be considered comes from the burden of data protection and privacy (self-)management, and from the complexity of being a controller. The GDPR’s framework of joint controllership was primarily intended to adequately divide tasks and responsibilities between controllers of organisations—it was not intended to make private individuals take on the burden of being a data controller (see OECD, 2011, pp. 27-28). Being a controller entails legal duties and requires thorough understanding of both the legal landscape and the technicalities of data processing; the framework of joint controllership could hence plausibly be too burdensome in practice to be realistically applicable to private individuals (on this issue, see Helberger and van Hoboken, 2010, p. 104; van Alsenoy, 2015, pp. 6, 24, 28; Edwards et al., 2019). And let’s not even discuss the increased strain on data protection authorities’ limited resources that this solution would entail (see Ryan and Toner, 2020).

4. Finally, if, as the term ‘privacy externalities’ suggests (as well as the analysis of the incentives behind the phenomenon), this data protection issue can be linked to the context of externalities in environmental pollution, then there may be valuable policy lessons to learn from the latter field. This is what Omri Ben-Shahar (2019) does, as he frames privacy externalities as “data pollution” (see also Hirsch, 2006, 2014; Froomkin, 2015; Hirsch and King, 2016). However, a central element of his argument is that, just like in environmental protection, “[t]he optimism that contracts and behaviorally-informed choice architecture would help people make wise data sharing decisions and reduce data pollution is fundamentally misplaced, because private bilateral contracts are the wrong mechanisms to address the harm to third parties” (2019, p. 108). He adds that “[i]t is not people who need to be protected from a mesh of data-predatory contracts; but rather, it is the ecosystem that needs to be protected from the data sharing contracts that people endlessly enter into” (ibid). If this is right and the analogy with environmental protection holds, then joint controllership will be inadequate to solve privacy externalities, and DPbDD (as part of a wider data protection ex ante package, which Ben-Shahar includes within the promising solutions he analyses) is the way to go. 19

4. Conclusion

Many people remain to this day oblivious to the fact that ‘free’ services online only mean ‘without a monetary cost’, and that they actually ‘pay’ (to a limited and imperfect extent) with their data, i.e., by providing information (presumably about themselves) and agreeing that it be leveraged, in particular for intrusive advertising, prediction services or research. However, even those who realise this may not realise that it is not just their data they give away: it is often also the data of others. This ‘cost’ that is imposed on others, the article argued, is a form of disclosure most adequately conceptualised as privacy externalities.

This article has demonstrated, in line with existing literature, that one’s privacy is sometimes dependent on others—that is, that there is an interdependent aspect to individual informational privacy. This dimension makes it fundamentally impossible for a data subject to be fully in control of her personal data, despite such expectations. Part of the issue is that, notwithstanding other important elements and legal bases in the GDPR, the protection of personal data nowadays still largely relies on consent. This happens through an individualistic mechanism of N&C, whereby only the data subject in direct relation to the controller providing the service is consulted, even if she will foreseeably also provide the personal data of other data subjects as part of the service. This individualistic framework obscures the possibility that one’s peers might need to be consulted, or that measures should be taken to mitigate the collateral processing of their personal data, for example.

However, because the existing literature has often conceived of the issue precisely in this sense—that is, as collateral damage, a neutral side-effect—the important dynamics behind the phenomenon have hitherto been poorly highlighted, if at all. The advantage of talking of interdependent privacy, and of taking the economic lens of externalities, is that it allows us to uncover the unethical incentives perpetuating the phenomenon. These are, first and foremost, the passive benefits of dumping costs on others: data controllers on users, and users on their peers. The savings realised are the time, resources and energy that would otherwise be invested in: designing a product of appropriate quality; putting in place legal and other mechanisms governing appropriate conduct in case externalities are created; due diligence; or taking steps to mitigate the externality. As a result, the services offered by data controllers can be offered more cheaply than if the appropriate efforts had been made to ensure their quality—something which may distort the market by increasing the production and the consumption of these lower-grade services, at the expense of services of better quality (the price of which better reflects their true costs). The negative externalities resulting from the use of these cheaper services are the invisible price for these users’ peers. Concretely, these externalities are the unlawful processing of the peers’ personal data, the increased risks to their rights that result from it, as well as possible latent, tangible or intangible harm.

Notwithstanding the risks and harms that result from the ‘passive’ benefits of externalities, additional risks and harms arise when some data controllers also actively create and/or harvest privacy externalities. In a hyperconnected world marked by surveillance capitalism, privacy externalities are, to rogue data controllers, only a bonus—a bonus that further subsidises their cheap (or ‘free’) services. However, when the externalities become a feature rather than just a bug, their inexcusable exploitation further undermines the data protection rights of countless unaware data subjects. This is highly problematic, both ethically and legally, and should be addressed by data protection authorities, but perhaps also by competition and consumer-protection authorities. I briefly pointed toward two possible solutions, marking a preference for the path of better enforcement of data protection by design and by default.

This article has furthermore served the goal of drawing together and listing the abundant and diverse scholarly (and policy) works on the topic. Pertaining to different fields and jurisdictions, using different terms to conceptualise a similar phenomenon, or simply not referring to related publications, the existing literature can be found in clusters that do not make reference to each other. The two lists found in sections 1 and 3.1 can therefore be used to connect these works, learn from them, and avoid repeating what has already been said.

This article, however, does not undertake a literature analysis, comparison or evaluation of this existing body of work or of the solutions (if any) each of these works puts forward. What it does, besides framing the phenomenon in a particular way and scrutinising the elements revealed under this particular light, is use the different examples and conceptions of privacy externalities discussed in this body of work to draw a holistic picture of the phenomenon and of the four different forms it can take—something which had not been done before and which is indispensable to fully understand privacy externalities, and hence to appropriately address them.

References

Article 29 Data Protection Working Party (‘WP29’). (2009). Opinion 5/2009 on Online Social Networking (WP163). https://ec.europa.eu/justice/article-29/documentation/opinion-recommendation/files/2009/wp163_en.pdf

Article 29 Data Protection Working Party (‘WP29’). (2010). Opinion 1/2010 on the Concepts of ‘Controller’ and ‘Processor’ (WP169). https://ec.europa.eu/justice/article-29/documentation/opinion-recommendation/files/2010/wp169_en.pdf

Article 29 Data Protection Working Party (‘WP29’). (2013). Statement of the Working Party on Current Discussions Regarding the Data Protection Reform Package—Annex 2: Proposals for Amendments Regarding Exemption for Personal or Household Activities. https://ec.europa.eu/justice/article-29/documentation/other-document/files/2013/20130227_statement_dp_annex2_en.pdf

Article 29 Data Protection Working Party (‘WP29’). (2014). Opinion 06/2014 on the Notion of Legitimate Interests of the Data Controller under Article 7 of Directive 95/46/EC (WP217). https://ec.europa.eu/justice/article-29/documentation/opinion-recommendation/files/2014/wp217_en.pdf

Article 29 Data Protection Working Party (‘WP29’). (2017). Guidelines on Data Protection Impact Assessment (DPIA) and Determining Whether Processing Is ‘Likely to Result in a High Risk’. http://ec.europa.eu/newsroom/document.cfm?doc_id=47711

Barocas, S., & Nissenbaum, H. (2014). Big Data’s End Run around Procedural Privacy Protections. Communications of the ACM, 57(11), 31–33. https://doi.org/10.1145/2668897

Belgisch Commissie voor de Bescherming van de Persoonlijke Levenssfeer. (2007). Aanbeveling Uit Eigen Beweging Inzake de Verspreiding van Beeldmateriaal [Recommendation issued on its own initiative concerning the dissemination of images]. Belgisch Commissie voor de Bescherming van de Persoonlijke Levenssfeer (Belgian data protection authority).

Bennett, C. J., & Raab, C. D. (2006). The Governance of Privacy: Policy Instruments in Global Perspective (2nd and updated ed.). MIT Press. https://doi.org/10.1080/19331680801979039

Ben-Shahar, O. (2019). Data Pollution. Journal of Legal Analysis, 11, 104–159. https://doi.org/10.1093/jla/laz005

Besmer, A., & Lipford, H. R. (2010). Users’ (Mis)Conceptions of Social Applications. Proceedings of Graphics Interface. https://doi.org/10.1007/978-3-319-07509-9_2

Biczók, G., & Chia, P. H. (2013). Interdependent Privacy: Let Me Share Your Data. In Financial Cryptography and Data Security. Springer. https://doi.org/10.1007/978-3-642-39884-1_29

Bloustein, E. J. (1978). Individual and Group Privacy. Transaction Publishers.

boyd, d. (2011). Networked Privacy. Personal Democracy Forum. https://www.danah.org/papers/talks/2011/PDF2011.html

boyd, d., Levy, K., & Marwick, A. E. (2014). The Networked Nature of Algorithmic Discrimination. In S. P. Gangadharan, V. Eubanks, & S. Barocas (Eds.), Data and Discrimination: Collected Essays. Open Technology Institute and New America. http://www.newamerica.org/downloads/OTI-Data-an-Discrimination-FINAL-small.pdf

Bygrave, L. A. (2004). Privacy Protection in a Global Context. A Comparative Overview. Scandinavian Studies in Law, 47. https://www.uio.no/studier/emner/jus/jus/JUS5630/v13/undervisningsmateriale/privacy-and-data-protection-in-international-perspective.pdf

Calo, R. (2011). The Boundaries of Privacy Harm. Indiana Law Journal, 86(3). http://ilj.law.indiana.edu/articles/86/86_3_Calo.pdf

Caughlin, T. T., Ruktanonchai, N., Acevedo, M. A., Lopiano, K. K., Prosper, O., Eagle, N., & Tatem, A. J. (2013). Place-Based Attributes Predict Community Membership in a Mobile Phone Communication Network. Angel Sánchez. PLoS ONE 8, 2. https://doi.org/10.1371/journal.pone.0056057

Chadwick, R. F., Levitt, M., & Shickle, D. (Eds.). (2014). The Right to Know and the Right Not to Know: Genetic Privacy and Responsibility (2nd ed.). Cambridge University Press. https://doi.org/10.1017/CBO9781139875981

Chen, J., Ping, J. W., Xu, Y., & Tan, B. C. Y. (2015). Information privacy concern about peer disclosure in online social networks. IEEE Transactions on Engineering Management, 62(3), 311–324. https://doi.org/10.1109/TEM.2015.2432117

Choi, J. P., Jeon, D., & Kim, B. (2019). Privacy and Personal Data Collection with Information Externalities. Journal of Public Economics, 173, 113–124. https://doi.org/10.1016/j.jpubeco.2019.02.001

Cohen, J. (2000). Examined Lives: Informational Privacy and the Subject as Object. Stanford Law Review, 52, 1373–1438. https://doi.org/10.2307/1229517

College Bescherming Persoonsgegevens. (2007). Publicatie van Persoonsgegevens Op Internet [Publication of personal data on the internet; guidelines]. College Bescherming Persoonsgegevens (Dutch Data Protection Authority). https://autoriteitpersoonsgegevens.nl/sites/default/files/downloads/rs/rs_20071211_persoonsgegevens_op_internet_definitief.pdf

College Bescherming Persoonsgegevens. (2013). Investigation into the processing of personal data for the ‘whatsapp’ mobile application by WhatsApp Inc (Report No. Z2011-00987; Issue Z2011). College Bescherming Persoonsgegevens (Dutch Data Protection Authority). https://autoriteitpersoonsgegevens.nl/sites/default/files/downloads/mijn_privacy/rap_2013-whatsapp-dutchdpa-final-findings-en.pdf

Culnan, M. J. (2000). Protecting Privacy Online: Is Self-Regulation Working? Journal of Public Policy & Marketing, 19(1), 20–26. https://doi.org/10.1509/jppm.19.1.20.16944

Culnan, M. J., & Armstrong, P. K. (1999). Information Privacy Concerns, Procedural Fairness, and Impersonal Trust: An Empirical Investigation. Organization Science, 10(1), 104–115. https://doi.org/10.1287/orsc.10.1.104

De Conca, S. (2020). Between a rock and a hard place: Owners of smart speakers and joint control. SCRIPT-Ed, 17(2), 238–268. https://doi.org/10.2966/scrip.170220.238

De Hert, P. (2008). Identity Management of E-ID, Privacy and Security in Europe. A Human Rights View. Information Security Technical Report, 13(2), 71–75. https://doi.org/10.1016/j.istr.2008.07.001

Edwards, L., Finck, M., Veale, M., & Zingales, N. (2019). Data Subjects as Data Controllers: A Fashion(Able) Concept? Internet Policy Review. https://policyreview.info/articles/news/data-subjects-data-controllers-fashionable-concept/1400

Erlich, Y., Shor, T., Pe’er, I., & Carmi, S. (2018). Identity Inference of Genomic Data Using Long-Range Familial Searches. Science, 362(6415), 690–94. https://doi.org/10.1126/science.aau4832

European Commission. (2012). Proposal for a Regulation of the European Parliament and of the Council on the Protection of Individuals with Regard to the Processing of Personal Data and on the Free Movement of Such Data. https://www.europarl.europa.eu/registre/docs_autres_institutions/commission_europeenne/com/2012/0011/COM_COM(2012)0011_EN.pdf

European Data Protection Supervisor. (2014a). Opinion of the European Data Protection Supervisor on the Communication from the Commission to the European Parliament and the Council on ‘A New Era for Aviation Opening the Aviation Market to the Civil Use of Remotely Piloted Aircraft Systems in a Safe and Sustainable Manner’, COM(2014) 207 Final. https://edps.europa.eu/sites/edp/files/publication/14-11-26_opinion_rpas_en.pdf

European Data Protection Supervisor. (2014b). Preliminary Opinion of the European Data Protection Supervisor on Privacy and Competitiveness in the Age of Big Data. https://edps.europa.eu/sites/edp/files/publication/14-03-26_competitition_law_big_data_en.pdf

European Data Protection Supervisor. (2019a). Connected Cars (TechDispatch). Publications Office of the European Union. https://doi.org/10.2804/70098

European Data Protection Supervisor. (2019b). Smart Speakers and Virtual Assistants (TechDispatch). Publications Office of the European Union. https://doi.org/10.2804/755512

European Data Protection Supervisor. (2020). A Preliminary Opinion on Data Protection and Scientific Research. https://edps.europa.eu/sites/edp/files/publication/20-01-06_opinion_research_en.pdf

Fashion ID, C-40/17, EU:C:2019:629 (Court of Justice of the European Union 29 July 2019). https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:62017CA0040&qid=1590355470801&from=EN

Froomkin, M. (2015). Regulating Mass Surveillance as Privacy Pollution: Learning from Environmental Impact Statements. University of Illinois Law Review, 2015(5), 1713–1790. https://doi.org/10.2139/ssrn.2400736

Garcia, D. (2017). Leaking Privacy and Shadow Profiles in Online Social Networks. Science Advances, 3(8). https://doi.org/10.1126/sciadv.1701172

Garcia-Murillo, M., & MacInnes, I. (2018). Così Fan Tutte: A Better Approach than the Right to Be Forgotten. Telecommunications Policy, 42(3), 227–40. https://doi.org/10.1016/j.telpol.2017.12.003

Gnesi, S., Matteucci, I., Moiso, C., Mori, P., Petrocchi, M., & Vescovi, M. (2014). My Data, Your Data, Our Data: Managing Privacy Preferences in Multiple Subjects Personal Data. In B. Preneel & D. Ikonomou (Eds.), Privacy Technologies and Policy (Vol. 8450, pp. 154–171). Springer International Publishing. https://doi.org/10.1007/978-3-319-06749-0_11

Google Spain and Google, EU:C:2014:317 (Court of Justice of the European Union 13 May 2014). https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:62012CJ0131&qid=1590355288547&from=EN

Hallinan, D., & De Hert, P. (2017). Genetic Classes and Genetic Categories: Protecting Genetic Groups Through Data Protection Law. In L. Taylor, L. Floridi, & B. van der Sloot (Eds.), Group Privacy (pp. 175–196). Springer International Publishing. https://doi.org/10.1007/978-3-319-46608-8_10

Hann, I.-H., Hui, K.-L., Lee, T. S., & Png, I. (2002). Online Information Privacy: Measuring the Cost-Benefit Trade-Off. Proceedings of the International Conference on Information Systems (ICIS). https://aisel.aisnet.org/icis2002/1

Heath, J. (2007). An Adversarial Ethic for Business: Or When Sun-Tzu Met the Stakeholder. Journal of Business Ethics, 72(4), 359–374. https://doi.org/10.1007/s10551-006-9175-5

Helberger, N., & van Hoboken, J. (2010). Little Brother Is Tagging You—Legal and Policy Implications of Amateur Data Controllers. Computer Law International, 11(4), 101–109. https://hdl.handle.net/11245/1.337383

Hirsch, D. (2006). Protecting the Inner Environment: What Privacy Regulation Can Learn from Environmental Law. Georgia Law Review, 41(1), 1–63. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1021623

Hirsch, D. (2014). The Glass House Effect: Big Data, the New Oil, and the Power of Analogy. Maine Law Review, 66(2), 373–395. https://digitalcommons.mainelaw.maine.edu/mlr/vol66/iss2/3

Hirsch, D., & King, J. H. (2016). Big Data Sustainability: An Environmental Management Systems Analogy. Washington and Lee Law Review Online, 72(3), 406–419. https://scholarlycommons.law.wlu.edu/wlulr-online/vol72/iss3/4

Holloway, D. (2019). Surveillance capitalism and children’s data: The Internet of toys and things for children. Media International Australia, 170(1), 27–36. https://doi.org/10.1177/1329878X19828205

Hull, G. (2015). Successful Failure: What Foucault Can Teach Us about Privacy Self-Management in a World of Facebook and Big Data. Ethics and Information Technology, 17(2), 89–101. https://doi.org/10.1007/s10676-015-9363-z

Hull, G., Lipford, H. R., & Latulipe, C. (2011). Contextual Gaps: Privacy Issues on Facebook. Ethics and Information Technology, 13(4), 289–302. https://doi.org/10.1007/s10676-010-9224-8

Humbert, M., Ayday, E., Hubaux, J.-P., & Telenti, A. (2015). On Non-Cooperative Genomic Privacy. In R. Böhme & T. Okamoto (Eds.), Financial Cryptography and Data Security (pp. 407–426). Springer. https://doi.org/10.1007/978-3-662-47854-7_24

Humbert, M., Trubert, B., & Huguenin, K. (2020). A Survey on Interdependent Privacy. ACM Computing Surveys, 52(6). https://doi.org/10.1145/3360498

Facebook Inc. (2018). Facebook Post-Hearing Responses to Commerce Committee: “Facebook, Social Media Privacy, and the Use and Abuse of Data”. https://www.judiciary.senate.gov/imo/media/doc/Zuckerberg%20Responses%20to%20Commerce%20Committee%20QFRs1.pdf

Information Commissioner’s Office. (2017). In the Picture: A Data Protection Code of Practice for Surveillance Cameras and Personal Information [Report]. Information Commissioner’s Office. https://ico.org.uk/media/1542/cctv-code-of-practice.pdf

Introna, L. D. (1997). Privacy and the Computer: Why We Need Privacy in the Information Society. Metaphilosophy, 28(3), 259–75. https://doi.org/10.1111/1467-9973.00055

Jernigan, C., & Mistree, B. F. T. (2009). Gaydar: Facebook Friendships Expose Sexual Orientation. First Monday, 14(10). https://firstmonday.org/article/view/2611/2302

Jia, H., & Xu, H. (2016). Measuring Individuals’ Concerns over Collective Privacy on Social Networking Sites. Cyberpsychology: Journal of Psychosocial Research on Cyberspace, 10(1). https://doi.org/10.5817/CP2016-1-4

Kamleitner, B., & Mitchell, V. (2019). Your Data Is My Data: A Framework for Addressing Interdependent Privacy Infringements. Journal of Public Policy & Marketing, 38(4), 433–450. https://doi.org/10.1177/0743915619858924

Kitchin, R. (2014a). The Data Revolution: Big Data, Open Data, Data Infrastructures & Their Consequences. SAGE Publications.

Kitchin, R. (2014b). The Real-Time City? Big Data and Smart Urbanism. GeoJournal, 79(1), 1–14. https://doi.org/10.1007/s10708-013-9516-8

Kupfer, J. (1987). Privacy, Autonomy, and Self-Concept. American Philosophical Quarterly, 24(1), 81–89.

Lampinen, A. (2015). Networked Privacy Beyond the Individual: Four Perspectives to “Sharing”. Aarhus Series on Human Centered Computing, 1(1). https://doi.org/10.7146/aahcc.v1i1.21300

Lampinen, A., Lehtinen, V., Lehmuskallio, A., & Tamminen, S. (2011). We’re in It Together: Interpersonal Management of Disclosure in Social Network Services. Proceedings of the 29th International Conference on Human Factors in Computing Systems. https://doi.org/10.1145/1978942.1979420

Laudon, K. C. (1996). Markets and Privacy. Communications of the ACM, 39(9), 92–104. https://doi.org/10.1145/234215.234476

Le Borgne-Bachschmidt, F., Girieud, S., Leiba, M., Munck, S., Limonard, S., Poel, M., Kool, L., Helberger, N., Guibault, L., Janssen, E., Eijk, N., Angelopoulos, C., Hoboken, J., & Swart, E. (2008). User-Created-Content: Supporting a participative Information Society. https://www.ivir.nl/publicaties/download/User_created_content.pdf

Levy, S. (2020). Facebook: The inside Story. Dutton.

Lindqvist, C-101/01, EU:C:2003:596 (Court of Justice of the European Union 6 November 2003). https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:62001CJ0101&from=EN

MacCarthy, M. (2011). New Directions in Privacy: Disclosure, Unfairness and Externalities. I/S: A Journal of Law and Policy for the Information Society, 6(3), 425–512. https://kb.osu.edu/handle/1811/72971

Mahieu, R., van Hoboken, J., & Asghari, H. (2019). Responsibility for Data Protection in a Networked World: On the Question of the Controller, ‘Effective and Complete Protection’ and Its Application to Data Access Rights in Europe. JIPITEC, 10(1). https://nbn-resolving.org/urn:nbn:de:0009-29-48796

Mantelero, A. (2014). The Future of Consumer Data Protection in the EU: Rethinking the ‘Notice and Consent’ Paradigm in the New Era of Predictive Analytics. Computer Law & Security Review, 30(6), 643–660. https://doi.org/10.1016/j.clsr.2014.09.004

Mantelero, A. (2016). Personal Data for Decisional Purposes in the Age of Analytics: From an Individual to a Collective Dimension of Data Protection. Computer Law & Security Review, 32(2), 238–55. https://doi.org/10.1016/j.clsr.2016.01.014

Mantelero, A. (2017). Towards a Big Data Regulation Based on Social and Ethical Values. The Guidelines of the Council of Europe. Revista de Bioética y Derecho, 41, 67–84. http://hdl.handle.net/11583/2687425

Marwick, A. E., & boyd, d. (2014). Networked Privacy: How Teenagers Negotiate Context in Social Media. New Media & Society, 16(7), 1051–67. https://doi.org/10.1177/1461444814543995

May, T. (2018). Sociogenetic Risks—Ancestry DNA Testing, Third-Party Identity, and Protection of Privacy. New England Journal of Medicine, 379(5), 410–12. https://doi.org/10.1056/NEJMp1805870

McDaniel, P., & McLaughlin, S. (2009). Security and Privacy Challenges in the Smart Grid. IEEE Security & Privacy Magazine, 7(3), 75–77. https://doi.org/10.1109/MSP.2009.76

Moore, A. D. (2007). Toward Informational Privacy Rights. San Diego Law Review, 44(4), 809–846. https://digital.sandiego.edu/sdlr/vol44/iss4/8/

Nehf, J. P. (2003). Recognizing the Societal Value in Information Privacy. Washington Law Review, 78(1), 1–92. https://digitalcommons.law.uw.edu/wlr/vol78/iss1/2

Nissenbaum, H. F. (2004). Privacy as Contextual Integrity. Washington Law Review, 79(119), 101–139. https://crypto.stanford.edu/portia/papers/RevnissenbaumDTP31.pdf

Olejnik, L., Konkolewska, A., & Castelluccia, C. (2014). I’m 2.8% Neanderthal—The beginning of genetic exhibitionism? PETS Workshop on Genome Privacy. 14th Privacy Enhancing Technologies Symposium (PETS 2014). https://hal.inria.fr/hal-01087696

Organisation for Economic Co-operation and Development (OECD). (2013). Supplementary Explanatory Memorandum to the Revised Recommendation of the Council Concerning Guidelines Governing the Protection of Privacy and Transborder Flows of Personal Data. In OECD Privacy Guidelines 2013 (pp. 19–37). Organisation for Economic Co-operation and Development. https://www.oecd.org/sti/ieconomy/2013-oecd-privacy-guidelines.pdf

Organisation for Economic Co-operation and Development (OECD). (2011). The Evolving Privacy Landscape: 30 Years After the OECD Privacy Guidelines (No. 176). OECD Publishing. http://dx.doi.org/10.1787/5kgf09z90c31-en

Ozdemir, Z. D., Jeff Smith, H., & Benamati, J. H. (2017). Antecedents and outcomes of information privacy concerns in a peer context: An exploratory study. European Journal of Information Systems, 26(6), 642–660. https://doi.org/10.1057/s41303-017-0056-z

Privacy International. (2019). Betrayed by an App She Had Never Heard of—How TrueCaller Is Endangering Journalists [Case study]. Privacy International. https://www.privacyinternational.org/node/2997

Pu, Y., & Grossklags, J. (2017). Valuating Friends’ Privacy: Does Anonymity of Sharing Personal Data Matter? Proceedings of the Thirteenth Symposium on Usable Privacy and Security (SOUPS 2017). https://www.usenix.org/system/files/conference/soups2017/soups2017-pu.pdf

Purtova, N. (2018). The Law of Everything. Broad Concept of Personal Data and Future of EU Data Protection Law. Law, Innovation and Technology, 10(1), 40–81. https://doi.org/10.1080/17579961.2018.1452176

Rhoen, M. (2016). Beyond Consent: Improving Data Protection through Consumer Protection Law. Internet Policy Review, 5(1). https://doi.org/10.14763/2016.1.404

Roessler, B., & Mokrosinska, D. (2013). Privacy and Social Interaction. Philosophy & Social Criticism, 39(8), 771–91. https://doi.org/10.1177/0191453713494968

Ryan, J., & Toner, A. (2020). Europe’s governments are failing the GDPR: Brave’s 2020 report on the enforcement capacity of data protection authorities (Brave Insights) [Report]. Brave. https://brave.com/wp-content/uploads/2020/04/Brave-2020-DPA-Report.pdf

Ryneš, C-212/13, EU:C:2014:2428 (Court of Justice of the European Union 11 December 2014). https://eur-lex.europa.eu/legal-content/AUTO/?uri=CELEX:62013CA0212&qid=1590355384101&rid=3

Sarigol, E., Garcia, D., & Schweitzer, F. (2014). Online Privacy as a Collective Phenomenon. Proceedings of the Second ACM Conference on Online Social Networks (COSN ’14), 95–106. https://doi.org/10.1145/2660460.2660470

Solove, D. (2013). Privacy Self-Management and the Consent Dilemma. Harvard Law Review, 126, 1888 – 1903. https://harvardlawreview.org/2013/05/introduction-privacy-self-management-and-the-consent-dilemma/

Solove, D. J. (2002). Conceptualizing Privacy. California Law Review, 90(4), 1087–1155. https://doi.org/10.2307/3481326

Solove, D. J. (2007). The Future of Reputation: Gossip, Rumor, and Privacy on the Internet. Yale University Press. https://doi.org/10.24908/ss.v6i3.3300

Solove, D. J. (2008). Understanding Privacy. Harvard University Press.

Squicciarini, A. C., Shehab, M., & Paci, F. (2009). Collective privacy management in social networks. Proceedings of the 18th International Conference on World Wide Web - WWW, 521–531. https://doi.org/10.1145/1526709.1526780

Strahilevitz, L. J. (2010). Collective Privacy. In S. Levmore & M. C. Nussbaum (Eds.), The Offensive Internet: Speech, Privacy and Reputation (pp. 217–236). Harvard University Press. https://doi.org/10.2307/j.ctvjf9zc8

Symeonidis, I., Shirazi, F., Biczók, G., Pérez-Solà, C., & Preneel, B. (2016). Collateral Damage of Facebook Apps: Friends, Providers, and Privacy Interdependence. In J.-H. Hoepman & S. Katzenbeisser (Eds.), ICT Systems Security and Privacy Protection (Vol. 471, pp. 194–208). Springer International Publishing. https://doi.org/10.1007/978-3-319-33630-5_14

Taylor, L., Floridi, L., & van der Sloot, B. (Eds.). (2017). Group Privacy: New Challenges of Data Technologies. Springer International Publishing. https://doi.org/10.1007/978-3-319-46608-8

Thomas, K., Grier, C., & Nicol, D. M. (2010). unFriendly: Multi-party Privacy Risks in Social Networks. In M. J. Atallah & N. J. Hopper (Eds.), Privacy Enhancing Technologies (Vol. 6205, pp. 236–252). Springer. https://doi.org/10.1007/978-3-642-14527-8_14

Tietenberg, T. H., & Lewis, L. Y. (2018). Environmental and natural resource economics (11th edition, international student). Routledge.

Vallet, F. (2019, May 13). Les droits de la voix (1/2): Quelle écoute pour nos systèmes ? [The rights of the voice (1/2): What kind of listening for our systems?] [Blog post]. Laboratoire d’Innovation Numérique de La CNIL (LINC). https://linc.cnil.fr/fr/les-droits-de-la-voix-12-quelle-ecoute-pour-nos-systemes

Van Alsenoy, B. (2015). The Evolving Role of the Individual under EU Data Protection Law (Working Paper No. 23/2015). KU Leuven Centre for IT & IP Law (CiTiP). https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2641680

Van Alsenoy, B., Ballet, B., Kuczerawy, A., & Dumortier, J. (2009). Social Networks and Web 2.0: Are Users Also Bound by Data Protection Regulations? Identity in the Information Society, 2, 65–79. https://doi.org/10.1007/s12394-009-0017-3

Van Alsenoy, B., Kosta, E., & Dumortier, J. (2013). Privacy Notices versus Informational Self-Determination: Minding the Gap. International Review of Law, Computers & Technology, 28(2), 185–203. https://doi.org/10.1080/13600869.2013.812594

van Dijk, N., Gellert, R., & Rommetveit, K. (2016). A Risk to a Right? Beyond Data Protection Risk Assessments. Computer Law & Security Review, 32(2), 286–306. https://doi.org/10.1016/j.clsr.2015.12.017

Veale, M., Binns, R., & Ausloos, J. (2018). When Data Protection by Design and Data Subject Rights Clash. International Data Privacy Law, 8(2), 105–23. https://doi.org/10.1093/idpl/ipy002

Vedder, A. H. (1997). Privatization, Information Technology and Privacy: Reconsidering the Social Responsibilities of Private Organizations. In G. Moore (Ed.), Business Ethics: Principles and Practice. Business Education Publishers. https://doi.org/10.1177/1468018105053677

Venkatadri, G., Lucherini, E., Sapiezynski, P., & Mislove, A. (2019). Investigating Sources of PII Used in Facebook’s Targeted Advertising. Proceedings on Privacy Enhancing Technologies, 1, 227–44. https://doi.org/10.2478/popets-2019-0013

Wachter, S., & Mittelstadt, B. (2019). A Right to Reasonable Inferences: Re-Thinking Data Protection Law in the Age of Big Data and AI. Columbia Business Law Review, 2, 494 – 620. https://doi.org/10.7916/cblr.v2019i2.3424

Weinreb, L. L. (2000). The Right to Privacy. Social Philosophy and Policy, 17(2), 25–44. https://doi.org/10.1017/S0265052500002090

Westin, A. F. (1967). Privacy and Freedom. Atheneum Press.

Whitley, E. A. (2009). Informational Privacy, Consent and the “Control” of Personal Data. Information Security Technical Report, 14(3), 154–59. https://doi.org/10.1016/j.istr.2009.10.001

Whitman, J. (2003). The two Western cultures of privacy: Dignity versus liberty. Yale Law Journal, 113, 1151–1222. https://doi.org/10.2307/4135723

Wirtschaftsakademie, C-210/16, EU:C:2018:388 (Court of Justice of the European Union 5 June 2018). https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:62016CJ0210&qid=1590355426224&from=EN

Wong, R. (2009). Social Networking: A Conceptual Analysis of a Data Controller. Communications Law, 14(5). http://irep.ntu.ac.uk/id/eprint/18914/1/200497_6128%20Wong%20Publisher.pdf

Zuboff, S. (2015). Big other: Surveillance capitalism and the prospects of an information civilization. Journal of Information Technology, 30, 75 – 89. https://doi.org/10.1057/jit.2015.5

Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. Profile Books.

Footnotes

1. While these authors also use the term privacy externalities, they do not analyse the same dynamics that this paper addresses at length in section 3. Further, the scope of the phenomenon they examined is not exactly the same, and some even use the concept of externality in a very different (though relevant) sense.

2. In addition to this list of directly-related work, the topics of interdependent privacy or privacy externalities have been more tangentially or briefly touched upon by the following: Bloustein, 1978; Roessler and Mokrosinska, 2013 (the network effect); Kitchin, 2014a (data shadows); Hull, 2015; Jia and Xu, 2016 (collective privacy); Taylor et al., 2017 (some aspects of group privacy); Facebook Inc., 2018 (the sharing of one’s friends’ information with third-party apps in the Facebook-Cambridge Analytica scandal); Garcia-Murillo and MacInnes, 2018. See also section 3.1 below for authors who address particular cases of privacy externalities. This article’s topic itself is situated within a wider context of more theoretical critiques of individualistic notions being applied to the networked self; the article puts these aside to focus on concrete cases and dynamics.

3. ‘Relative’, because there is a continuum of degrees of control that fall under the concept of ‘having privacy’. It is difficult to specify what degree of control is required, especially as privacy is at least partly subjective and context-sensitive (Kupfer, 1987; Nissenbaum, 2004). For references to this understanding (i.e., definition) of privacy, see Westin, 1967, p. 7; Culnan and Armstrong, 1999; Culnan, 2000; Cohen, 2000; Weinreb, 2000; Hann et al., 2002; Whitman, 2004, p. 1161; Bygrave, 2004, pp. 324-5; Moore, 2007; Bennett and Raab, 2003 ch. 1; De Hert, 2008; Solove, 2008 ch. 2; Whitley, 2009; Mantelero, 2014, 2017 pp. 71-72. Although privacy may be defined in different ways (see Introna, 1999; Solove, 2002), it is unfortunately out of the scope of this article to discuss other conceptions.

4. An even broader perspective would be that “[i]nformation can ‘relate’ to an individual in content, purpose, or result” (Purtova, 2017, p. 54).

5. Facebook Inc. https://www.facebook.com/your_information/ (accessed 17/03/2020).

6. While this narrative is not particularly accurate (the reality is closer to users providing the controller with information about themselves, which can then be exploited, e.g., for advertising purposes), it is useful for conveying that the ‘price’ of a service can be shared, i.e., borne by all those whose data is provided. See specifically EDPS, 2014b, p. 37, and Zuboff, 2019, p. 94.

7. Privacy externalities can be both negative and positive for third parties; however, while the benefits are often appropriated and internalised by the controller, the costs remain orphaned, affecting groups too broad and dispersed, and causing injuries too abstract, for private remedies to be effective (Ben-Shahar, 2019, p. 115). Moreover, when considering harm, it is worth going beyond mere monetisation and noting that privacy externalities can also serve for the surveillance of specific individuals, as they amount to a form of (often unintended) lateral and decentralised surveillance, i.e., monitoring by one’s peers, the recordings of which can be consulted by the controller or requested by law enforcement (on this, see also Zuboff’s surveillance capitalism (2015)).

8. See Google’s statement that “[y]ou (and not Google) are responsible for ensuring that you comply with any applicable laws when you use Nest devices and services, including any video recording, biometric data or audio recording laws that require you to give notice or obtain consent from third parties” (Google, https://support.google.com/googlenest/answer/9327735?hl=en&ref_topic=7173611, accessed 28/03/2020).

9. To keep the same example, Google’s smart home system Nest has a voice recognition feature which allows guests to add their own account to the owner’s device, following which their interactions will be stored in their own communication history at myactivity.google.com. When unrecognised data subjects interact with the device, their communication history is stored in the activity history of the Google Account used to set up the device (i.e., the owner’s). Google therefore recommends that the user make sure any guests “understand that their interactions will be stored by Google in [the owner’s] Google Account and that [the owner] can view and delete that information”, and adds that the owner “may consider muting the microphone or unplugging and putting the device away” when there are guests (Google, https://support.google.com/googlenest/answer/7177221?hl=en, accessed 06/12/2020).

10. This was also demonstrated by Humbert et al. (2020) regarding interdependent privacy. However, my analysis of privacy externalities has uncovered yet more of these clusters, accentuating the point that “research on the topic has been conducted in isolation in different communities” (ibid., pp. 2, 4).

11. For a discussion of privacy issues and related risks brought about by voice assistants, see Veale et al., 2018. Their discussion of privacy harms, rights and data protection by design for Apple’s Siri is applicable to the risks and harms highlighted here for third-party subjects.

12. Category 4 represents a very important kind of privacy externality. However, it may be valuable to note that the category relates both to the collective dimension of privacy (Mantelero, 2016) and to its interdependent dimension. The two dimensions are distinct (neither necessarily implies the other), even though it is sometimes difficult to distinguish between them due to significant overlap. Treating these two dimensions of privacy as the same would limit the analysis, because an important element would be missing.

13. See “How does Signal know my contact is using Signal?” at https://support.signal.org/hc/en-us/articles/360007061452-Does-Signal-send-my-number-to-my-contacts- (accessed 18/03/2020). This approach became notorious as a ‘compare and forget’ system when WhatsApp came under scrutiny by the Dutch data protection authority (DPA) for the way it handled contact data (see CBP, 2013, p. 30).
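
Purely to illustrate the ‘compare and forget’ dynamic referred to in footnote 13, the following minimal Python sketch shows one way a hash-and-compare contact lookup could work. The function names and the simple hashing scheme are illustrative assumptions for exposition only; they do not describe Signal’s or WhatsApp’s actual protocols.

```python
import hashlib


def hash_number(phone_number):
    """Hash a phone number so that the raw number itself is not transmitted."""
    return hashlib.sha256(phone_number.encode("utf-8")).hexdigest()


def server_lookup(submitted_hashes, registered_hashes):
    """Hypothetical server side: report which submitted hashes belong to
    registered users. Nothing is persisted, which is the 'forget' part."""
    return submitted_hashes & registered_hashes


def discover_contacts(address_book, registered_hashes):
    """Hypothetical client side: hash every number in the user's address book
    and ask the server which ones are registered. The address book contains
    numbers of third parties who never consented to this lookup, which is
    precisely the privacy externality discussed in the main text."""
    hashed = {hash_number(number): number for number in address_book}
    matches = server_lookup(set(hashed), registered_hashes)
    return [hashed[h] for h in matches]
```

Because the space of valid phone numbers is small, hashes of this kind can generally be reversed by brute force; that weakness, together with the upload of third parties’ numbers without their consent, is part of why contact matching has attracted data protection scrutiny, and production systems are considerably more elaborate than this sketch.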

14. The discussion about the most appropriate way to address illegal content online may be relevant here (Le Borgne-Bachschmidt et al., 2008). See also Helberger and van Hoboken, 2010, p. 106.

15. A power imbalance between the smart home owner (the ‘user’) and their peers will, however, give rise to a higher risk of abuse. The user will often be in a position of greater control (e.g., by owning the smart home and by being able to ‘turn it off’ (or on) remotely), and there will always be a risk that the third-party subject (e.g., children or the plumber) is not even made aware of the privacy-invasive practice. These predictable power imbalances should prevent data controllers from fully allocating the responsibility for privacy externalities to the smart home user.

16. For a more detailed discussion of the household exemption with regards to data subjects, see Helberger and van Hoboken, 2010; van Alsenoy, 2015; Edwards et al., 2019; van Alsenoy et al., 2009.

17. The criteria include: the processing activity being carried out “in the course of private and family life,” and not being made accessible to an “indefinite number of people” (CJEU, 2003); the scale and frequency of the processing, the potential adverse impact on the fundamental rights and freedoms of others (CJEU, 2014b, para 29), or the processing being ‘directed outwards from the private setting of the person processing the data’ (ibid, para 33).

18. It is worth noting that, in its draft for the GDPR, the European Commission (2012, p. 20) restricted the criteria for the household exemption to the “processing of personal data by a natural person […] without any gainful interest and thus without any connection with a professional or commercial activity” (emphasis added). My framing of the issue of privacy externalities as an (at least partly) incentive-based and exploitable phenomenon means that the latter two criteria considered by the Commission would have been particularly relevant in assessing the scope of the household exemption for the issue at hand. Furthermore, discussing these criteria, the WP29 added (2013, p. 8) that “[t]hought should also be given as to whether non-commercial, non-personal activity […] also needs to be addressed”. This grey zone between the purely personal and the purely commercial could bring nuance to the debate (although it could arguably just as well muddy it) and would allow us to better grasp the issue of exploitative privacy externalities.

19. Other potential solutions I have not discussed include: reconsidering the framework of privacy protection (see Mantelero, 2014); publishing a list of guidelines for private uses of each relevant technology, and instituting generic provisions in civil and criminal codes (van Alsenoy, 2015, pp. 28, 32); generating a public debate by raising awareness on the privacy implications of relevant technologies, or assisting controllers with compliance (EDPS, 2014a, para 56 et seq.; OECD, 2013, p. 32); adapting social norms about privacy; or creating a “lite” framework for individuals who are de facto amateur controllers (WP29, 2013, p. 5). Furthermore, the methods to resolve externalities used in economics, i.e., regulation, subsidies and taxation, could also be applied to the issue of privacy externalities (Ben-Shahar, 2019).
