A critique of the balancing metaphor in privacy and security (Mitchener-Nissen, 2014)

Timothy Mitchener-Nissen, during his time at University College London as a Teaching Fellow in Sociology of Technology, published an article entitled Failure to collectively assess surveillance-oriented security technologies will inevitably lead to an absolute surveillance society (Surveillance & Society 12(1): 73-88). In this post I quote paragraphs from that article, both to share them and for my own reference.

First, here’s the abstract:

The arguments presented by this paper are built on two underlying assertions. The first is that the assessment of surveillance measures often entails a judgement of whether any loss in privacy is legitimised by a justifiable increase in security. However, one fundamental difference between privacy and security is that privacy has two attainable end-states (absolute privacy through to the absolute absence of privacy), whereas security has only one attainable end-state (while the absolute absence of security is attainable, absolute security is a desired yet unobtainable goal). The second assertion, which builds upon the first, holds that because absolute security is desirable, new security interventions will continuously be developed, each potentially trading a small measure of privacy for a small rise in security. When assessed individually, each intervention may constitute a justifiable trade-off. However, when combined, these interventions will ultimately reduce privacy to zero. To counter this outcome, when assessing the acceptability of any surveillance measure which impacts upon privacy (whether this constitutes a new technology or the novel application of existing technologies) we should do so by examining the combined effect of all surveillance measures currently employed within a society. This contrasts with the prevailing system whereby the impact of a new security technology is predominantly assessed on an individual basis, by a subjective balancing of the security benefits of that technology against any reductions in concomitant rights, such as privacy and liberty. I contend that by continuing to focus on the effects of individual technologies over the combined effects of all surveillance technologies within a society, the likelihood of sleepwalking into (or indeed waking up in) an absolute surveillance society moves from being a possible future to an inevitable one.

In the body of the paper, the author defines surveillance-oriented security technologies (SOSTs) as follows:

These are technologies intended to enhance the security of citizens via some inherent surveillance capability either operated by or accessible to the state. They facilitate the monitoring, screening, or threat assessment of individuals, groups, or situations, and are based on live-events, past events or the processing of data.

The author is skeptical about the effectiveness of Privacy Impact Assessments (PIAs), which have become mandatory for governments in various countries (including the Netherlands):

PIAs also employ balancing through cost/benefit analyses of different actions and values, the production of business cases justifying both privacy intrusions and the resulting implications, and analyses of the public acceptability of the proposed project [Wright et al. 2011: Precaution and privacy impact assessment as modes towards risk governance. In Towards Responsible Research and Innovation in the Information and Communication Technologies and Security Technologies Fields]. Again, the focus is on the specific project to hand. There is the possibility here to take a more collective view within such assessments; however, for our purposes this would require knowledge of the current state of SOSTs operating within a society so as to form a clear picture of the status quo. It is doubtful that those undertaking the PIA would have access to such information or the resources to acquire it. Recently the concept of a Surveillance Impact Assessment (SIA) has been developed, described as a ‘methodology for identifying, assessing and resolving risks, in consultation with stakeholders, posed by the development of surveillance systems’ [Wright and Raab 2012: Constructing a surveillance impact assessment. Computer Law & Security Review 28(6): 613-626]. The SIA concept seeks to increase stakeholder engagement, minimise the impact of surveillance technologies, and improve upstream development. However, this initial conceptualisation still appears to focus on the individual technology, and does not include the collective assessment of other existing SOSTs within its methodology. Whether this changes in practice remains to be seen.

The author then argues against the “ubiquitous balancing metaphor” that expresses privacy and security as a trade-off:

[The balancing metaphor] is arbitrary and subjective, lacking in meta-rules, and purports to successfully compare objects (such as privacy and security) which possess different intrinsic characteristics.

Furthermore:

Focusing and expanding upon this final point, one of the fundamental differences between privacy and security is that only one of them has two attainable end-states. Privacy (P) exists as a finite resource on a quantifiable spectrum with two attainable end-states: absolute privacy (P=1) at one end, through to the absolute absence of privacy (P=0) at the other. Security (S) likewise exists as a finite resource on a quantifiable spectrum, but with only one attainable end-state: the absolute absence of security (S=0). As discussed earlier, absolute security (S=1) can never be achieved and therefore must exist as a desirable yet ultimately unobtainable goal, always equating to something less than 100 per cent (S<1); hence the absence of a second attainable end-state.

The second assertion, which follows from and builds upon the first, holds that one consequence of absolute security being unobtainable yet desirable is that new SOSTs will continuously be developed in a futile search for this unobtainable goal. These technologies each potentially trade a small measure of privacy for a small rise in security. This production process is driven by a variety of internal and external sources beyond the conflicting internal characteristics of security and privacy. These include: the nature of fear and risk, pressure from politicians and police, the availability of funding, push from the security industry, and public support for these technologies. These factors operate together to ensure a fertile environment exists for the research and development processes of the security industry to thrive.
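To make the cumulative argument concrete, here is a toy formalization of my own (it does not appear in the paper). Assume each new SOST is judged individually acceptable because it trades away only a small fraction ε of the privacy that remains, in exchange for closing a small fraction δ of the remaining security gap. Starting from full privacy (P = 1) and some baseline security S₀, after n such interventions:

\[
P_n = (1 - \varepsilon)^n \longrightarrow 0 \quad (n \to \infty),
\qquad
S_n = 1 - (1 - S_0)(1 - \delta)^n < 1 \ \text{for all finite } n.
\]

Each individual step is a small, seemingly justifiable trade-off, yet in the limit privacy reaches its attainable end-state (P = 0) while security never reaches its unattainable one (S = 1), which is precisely the asymmetry the two assertions exploit.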

The author concludes his paper with an elaboration and several proposals. I quote it in full below; a short code sketch of the proposed two-step assessment follows the quotation.

6. Complementing individual with collective assessment

By collective assessment I refer to a process whereby the acceptability of a new SOST is determined by assessing its effects in combination with other SOSTs already operational, to determine both the necessity of this new entrant and the resultant quantum of proportionality were the new technology adopted [Footnote 13: Being the proportionality of all operating SOSTs, including the proposed technology being assessed, given the security they afford and the resultant infringement of privacy]. This collective approach is not intended as a replacement for existing assessment methodologies which determine the acceptability of each individual technology; rather, it would complement them by forming a secondary assessment step. Hence if the technology is individually unacceptable it would be rejected outright, without the need for collective assessment.

Any adoption of a collective assessment methodology for the purpose of retaining privacy would be premised on a number of requirements. Firstly, it requires that citizens (or at least the majority) do not actually want to live in a surveillance society where their physical, communication, location, and personal data are routinely collected and/or processed so as to maximise their individual and collective security. This position entails the concomitant acceptance of insecurity as the natural condition; i.e. the conscious understanding and acceptance that absolute security can never be achieved regardless of how many security measures are introduced. This also needs to be coupled with an appreciation of the value of other rights and freedoms (besides security) to temper the temptation to introduce ever more SOSTs. I must stress here that this desire by citizens to oppose a total surveillance society is by no means a given. Privacy and security are social constructs; the different weights assigned to them vary across societies, are contextual, and change over time [Solove, D. 2009: Understanding Privacy, Ch.3]. It is completely conceivable that a given society, at a given time and/or under given circumstances, may desire to live in a surveillance society. At that point it may still wish to adopt a collective assessment methodology, for the purpose of identifying areas of social existence requiring additional surveillance rather than for the purpose of preserving privacy.

Secondly, collective assessment requires a general acceptance that privacy has to be retained; that once privacy is reduced to a certain level, any further reductions cannot be justified regardless of the competing right. If this consensus does not exist (regardless of where this level is set) then the total surveillance society envisioned within my paper will occur. If there is nothing within the act of living within a society that most/many citizens believe should remain off-limits to surveillance, then this view represents tacit approval for a total surveillance society; if nothing is off-limits then everything becomes a valid target.

On the assumption however that a society wishes to preserve a certain level of privacy, this could conceivably be achieved through different methods and protections. I have set out three below which could operate either individually or in combination.

The first option is to designate certain objects as prima facie ‘off-limits’ to surveillance. This could include: certain geographical locations (individual homes, wider community spaces, streets, work-spaces, etc.), certain data (geographical, communication, internet, etc.), and/or certain physical actions (correspondence, physical interactions, etc.). In the event of reasonable suspicion that an individual is committing offences within one of these restricted areas, a surveillance warrant could be issued by a judge.

The second option is to ban certain actions by law enforcement agencies. This might include:

  • any form of stop-and-search without reasonable suspicion (and suspicion could not be inferred simply because somebody is physically present within a certain geographical location [Footnote 14: Thus requiring a repeal of Section 44 of the UK Terrorism Act 2000]);
  • any form of data-mining where it either equates to a fishing expedition, or where, if the data being sifted were not digitally available, a warrant would be required to gain access to it;
  • and prosecutions based on targeted surveillance where no prior reasonable suspicion existed justifying that surveillance.

A third option is to use citizen juries in conjunction with political and/or judicial bodies to monitor and limit the current surveillance options available to law enforcement agencies within a society. They would be afforded complete oversight such that only SOSTs and measures approved by these bodies would be lawful. No surveillance, or prosecution based on surveillance, outside of these designated limits would be permissible.

There are challenges with all these options, with each raising difficult questions. On the idea of setting surveillance limits, who would decide where these limits are set and how would they go about doing this? How easy would it be to modify these limits, and under what circumstances would this occur? On the option of fettering the activities of law enforcement agencies, how would this be policed and what would happen if officers discovered information pertaining to a crime whilst themselves engaging in illegal surveillance activities? And on the option of citizen juries, how would these be empanelled, who could sit on them, and what oversight would exist?

The presence of such challenges does not automatically negate the viability of any of these options. It is merely an acknowledgement that any method should be adopted with caution and considerable foresight. That said, the ideas set out above are achievable, for they reflect values and norms that are currently observable within UK society. Despite the preoccupation with security leading to the spread of SOSTs throughout society, both UK citizens and their government still protect certain areas from interference and consider certain actions unacceptable. The home and bedroom are still somewhat protected from intrusion, in that police are not (yet) allowed to randomly enter homes to search for evidence of crimes without prior suspicion or evidence of an offence. Written and oral communication between suspects or prisoners and their legal representatives is still largely protected, and the use of torture is thankfully still considered beyond the pale by the majority of citizens. And yet all of these actions have the potential to uncover evidence of criminal offences.

These examples show UK citizens are not yet willing to sacrifice every concomitant right on the altar of security, and while this holds true the opportunity remains to introduce measures for protecting privacy and scaling back the surveillance society. Collective assessment is a step down this path in that it makes explicit the current overall balance between security and privacy, thereby raising citizen awareness of the state of their society. Nevertheless, if privacy is valued at least as much as security, then this collective assessment needs to be backed up with protection measures such as those outlined above. Without these measures any such assessment is merely an exercise in collecting and collating information. It will tell us how far away we are from the oncoming train that is the absolute surveillance society, without affording us the wherewithal to change its direction before we find ourselves forever wedged firmly under its wheels.
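As promised above, here is the two-step procedure described in section 6, where a SOST that fails individual assessment is rejected outright, and only individually acceptable technologies proceed to a secondary, collective check against everything already deployed, sketched in code. This is a minimal illustration of my own, in Python, assuming a hypothetical SOST model and a hypothetical privacy_floor threshold; none of these names come from the paper:

    from dataclasses import dataclass

    @dataclass
    class SOST:
        """A surveillance-oriented security technology (hypothetical model)."""
        name: str
        security_gain: float  # security benefit it is claimed to deliver, 0..1
        privacy_cost: float   # fraction of total privacy it consumes, 0..1

    def acceptable_individually(sost: SOST) -> bool:
        # Stand-in for the existing per-technology balancing: here simply
        # "claimed benefits outweigh costs". The real judgement is subjective.
        return sost.security_gain > sost.privacy_cost

    def acceptable_collectively(sost: SOST, deployed: list[SOST],
                                privacy_floor: float = 0.2) -> bool:
        # Secondary, collective step: compute the privacy remaining once this
        # SOST operates alongside everything already deployed, and require
        # that it stay above an agreed floor.
        remaining = 1.0
        for s in deployed + [sost]:
            remaining -= s.privacy_cost
        return remaining >= privacy_floor

    def assess(sost: SOST, deployed: list[SOST]) -> bool:
        # Individually unacceptable technologies are rejected outright and
        # never reach the collective assessment.
        return acceptable_individually(sost) and acceptable_collectively(sost, deployed)

On this toy model a long series of SOSTs, each costing only a few per cent of privacy, would pass the individual check every time, yet the collective check starts rejecting new entrants once the agreed floor is reached; that is exactly the behaviour the author wants the secondary step to provide.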

Jeroen van der Ham (Twitter: @1sand0s), a former colleague of mine, shares his criticism of Mitchener-Nissen’s article (my translation, switching to American English):

In the definition Mitchener-Nissen uses for “privacy”, he attempts to make privacy expressible on a scale, which I think is not possible. His definition of privacy is also limited by looking only at how it relates to security; many security measures need not stand in the way of privacy, and it is one-sided not to take that into consideration. Furthermore, privacy is subjective, and bound to time and context. What we do and share on the internet today still feels sufficiently private, but 50 years ago everyone would have been out on the streets trying to stop it.

The approach he proposes does not follow completely from his arguments. In addition, I think the solutions he proposes are not very promising, or are redundant. Under the European Convention on Human Rights we already have certain things that are clearly off-limits; we do not need additional legislation for that. It also seems totally infeasible to me to designate certain things as completely off-limits. We are likely to always have an intelligence agency that can look at certain things, albeit with the right safeguards. The same applies to the police acting on a warrant granted by a court.

Lastly, I also doubt whether current assessments of new measures do not already take the context of existing measures into account. Perhaps the right knowledge and science already exists today to determine the real impact of various measures in combination. But I do not see him argue that.
