Month: October 2014

A critique of the balancing metaphor in privacy and security (Mitchener-Nissen, 2014)

Timothy Mitchener-Nissen, during his affiliation with University College London as a Teaching Fellow in Sociology of Technology, published an article entitled Failure to collectively assess surveillance-oriented security technologies will inevitably lead to an absolute surveillance society (Surveillance & Society Vol 12 Issue 1, p.73-88). In this post I quote paragraphs from that article, both for sharing and for my own reference.

First, here’s the abstract:

The arguments presented by this paper are built on two underlying assertions. The first is that the assessment of surveillance measures often entails a judgement of whether any loss in privacy is legitimised by a justifiable increase in security. However one fundamental difference between privacy and security is that privacy has two attainable end-states (absolute privacy through to the absolute absence of privacy), whereas security has only one attainable end-state (while the absolute absence of security is attainable, absolute security is a desired yet unobtainable goal). The second assertion, which builds upon the first, holds that because absolute security is desirable new security interventions will continuously be developed each potentially trading a small measure of privacy for a small rise in security. When assessed individually each intervention may constitute a justifiable trade-off. However when combined together these interventions will ultimately reduce privacy to zero. To counter this outcome, when assessing the acceptability of any surveillance measure which impacts upon privacy (whether this constitutes a new technology or the novel application of existing technologies) we should do so by examining the combined effect of all surveillance measures currently employed within a society. This contrasts with the prevailing system whereby the impact of a new security technology is predominantly assessed on an individual basis by a subjective balancing of the security benefits of that technology against any reductions in concomitant rights, such as privacy and liberty. I contend that by continuing to focus on the effects of individual technologies over the combined effects of all surveillance technologies within a society, the likelihood of sleepwalking into (or indeed waking-up in) an absolute surveillance society moves from being a possible to the inevitable future.

In the body of the paper, the author defines surveillance-oriented security technologies (SOSTs) as follows:

These are technologies intended to enhance the security of citizens via some inherent surveillance capability either operated by or accessible to the state. They facilitate the monitoring, screening, or threat assessment of individuals, groups, or situations, and are based on live-events, past events or the processing of data.

The author is skeptical about the effectiveness of Privacy Impact Assessments (PIAs) that have become mandatory for governments in various countries (including the Netherlands: see this):

PIAs also employ balancing through cost/benefit analyses of different actions and values, the production of business cases justifying both privacy intrusions and the resulting implications, and when public acceptability of the proposed project is analysed [Wright et al. 2011: Precaution and privacy impact assessment as modes towards risk governance. In Towards Responsible Research and Innovation in the Information and Communication Technologies and Security Technologies Fields]. Again, the focus is on the specific project to hand. There is the possibility here to take a more collective view within such assessments; however, for our purposes this would require knowledge of the current state of SOSTs operating within a society so as to form a clear picture of the status quo. It is doubtful those undertaking the PIA would have access to such information or the resources to acquire it. Recently the concept of Surveillance Impact Assessment (SIA) has been developed, described as a ‘methodology for identifying, assessing and resolving risks, in consultation with stakeholders, posed by the development of surveillance systems’ [Wright and Raab 2012: Constructing a surveillance impact assessment. Computer Law & Security Review 28(6): p.613-626]. The SIA concept seeks to increase stakeholder engagement, minimise the impact of surveillance technologies, and improve upstream development. However, this initial conceptualisation still appears to focus on the individual technology and not the collective assessment of other existing SOSTs within its methodology. Whether this changes in practice remains to be seen.

The author then argues against the “ubiquitous balancing metaphor” that expresses privacy and security as a trade-off:

[The balancing metaphor] is arbitrary and subjective, lacking in meta-rules, and purports to successfully compare objects (such as privacy and security) which possess different intrinsic characteristics.

Furthermore:

Focusing and expanding upon this final point, one of the fundamental differences between privacy and security is that only one of them has two attainable end-states. Privacy (P) exists as a finite resource on a quantifiable spectrum with two attainable end-states; that being absolute privacy (P=1) at one end through to the absolute absence of privacy (P=0) at the other. Whereas security (S) also exists as a finite resource but on a quantifiable spectrum with only one attainable end-state; that being the absolute absence of security (S=0). However, as discussed earlier, absolute security (S=1) can never be achieved and therefore must exist as a desirable yet ultimately unobtainable goal always equating to something less than 100 per cent (S=<1); hence the absence of a second attainable end-state.

The second assertion, which follows from and builds upon the first, holds that one consequence of absolute security being unobtainable yet desirable is that new SOSTs will continuously be developed in a futile search for this unobtainable goal. These technologies each potentially trade a small measure of privacy for a small rise in security. This production process is driven by a variety of internal and external sources beyond the conflicting internal characteristics of security and privacy. These include; the nature of fear and risk, pressure by politicians and police, the availability of funding, push by the security industry, and public support for these technologies. These factors operate together to ensure a fertile environment exists for the research and development processes of the security industry to thrive.
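The compounding dynamic in these two assertions lends itself to a back-of-the-envelope model. Below is a minimal sketch of that dynamic; it is my own illustration, not from the paper, and the per-measure privacy cost `eps` is an assumed toy parameter:

```python
# Toy model of the paper's compounding argument (my own illustration, not
# from the article): each new SOST is individually a "justifiable trade-off"
# that costs only a small fraction eps of the remaining privacy P, yet the
# combined effect of many such measures drives P toward zero, while security
# S asymptotically approaches but never reaches 1.

def privacy_after(n_measures, eps=0.05, p0=1.0):
    """Privacy left after n measures, each removing a fraction eps of what remains."""
    return p0 * (1 - eps) ** n_measures

if __name__ == "__main__":
    for n in (0, 10, 50, 100):
        print(f"after {n:3d} measures: P = {privacy_after(n):.4f}")
```

Each individual step reduces privacy by only 5 per cent of what remains, which could plausibly pass an individual trade-off assessment; after a hundred such steps, P has fallen below one per cent of its starting value.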

The author concludes his paper with an elaboration and several proposals. I quote it in its entirety, with bold emphasis added.

6. Complementing individual with collective assessment

By collective assessment I refer to a process whereby the acceptability of a new SOST is determined by assessing its effects in combination with other SOSTs already operational to determine both the necessity of this new entrant and the resultant quantum of proportionality if the new technology was adopted [Footnote 13: Being the proportionality of all operating SOSTs, including the proposed technology being assessed, given the security they afford and the resultant infringement of privacy]. This collective approach is not intended as a replacement for existing assessment methodologies which determine the acceptability of each individual technology, rather it would complement them by forming a secondary assessment step. Hence if the technology is individually unacceptable it would be rejected outright without the need for collective assessment.

Any adoption of a collective assessment methodology for the purpose of retaining privacy would be premised on a number of requirements. Firstly it requires citizens (or at least the majority) not wanting to actually live in a surveillance society where their physical, communication, location, and personal data is routinely collected and/or processed so as to maximise their individual and collective security. This position entails the concomitant acceptance of insecurity as the natural condition; i.e. the conscious understanding and acceptance that absolute security can never be achieved regardless of how many security measures are introduced. This also needs to be coupled with an appreciation of the value of other rights and freedoms (besides security) to temper the temptation to introduce ever more SOSTs. I must stress here that this desire by citizens to oppose a total surveillance society is by no means given. Privacy and security are social constructs; the different weights assigned to them exist differently across societies, are contextual, and change over time [Solove, D. 2009: Understanding Privacy, Ch.3]. It is completely conceivable that a given society at a given time and/or under given circumstances, may desire to live in a surveillance society. At this point they may still wish to adopt a collective assessment methodology for the purpose of identifying areas of social existence requiring additional surveillance as opposed to the purpose of preserving privacy.

Secondly, collective assessment requires a general acceptance that privacy has to be retained; that once privacy levels are reduced to a certain level, any further reductions cannot be justified regardless of the competing right. If this consensus does not exist (regardless of where these levels are set) then the total surveillance society envisioned within my paper will occur. If there is nothing within the act of living within a society that most/many citizens believe should remain off limits to surveillance, then this view represents tacit approval for a total surveillance society; if nothing is off-limits then everything becomes a valid target.

On the assumption however that a society wishes to preserve a certain level of privacy, this could conceivably be achieved through different methods and protections. I have set out three below which could operate either individually or in combination.

The first option is to designate certain objects as prima facie ‘off-limits’ to surveillance. This could include; certain geographical locations (individual homes, wider community spaces, streets, work-spaces, etc.), certain data (geographical, communication, internet, etc.), and/or certain physical actions (correspondence, physical interactions, etc.). In the event of reasonable suspicion that an individual is committing offences within one of these restricted areas a surveillance warrant could be issued by a judge.

The second option is to ban certain actions by law enforcement agencies. This might include:

  • any form of stop-and-search without reasonable suspicion (and suspicion could not be inferred simply because somebody is physically present within a certain geographical location [Footnote 14: Thus requiring a repeal of Section 44 UK Terrorism Act 2000]);
  • any form of data-mining where it either equates to a fishing expedition or where if the data being sifted was not digitally available a warrant would be required to gain access to it;
  • and prosecutions based on targeted surveillance where no prior reasonable suspicion existed justifying that surveillance.

A third option is to use citizen juries in conjunction with political and/or judicial bodies to monitor and limit the current surveillance options available to law enforcement agencies within a society. They would be afforded complete oversight such that only SOSTs and measures approved by these bodies would be lawful. No surveillance, or prosecution based on surveillance, outside of these designated limits would be permissible.

There are challenges with all these options, with each raising difficult questions. On the idea of setting surveillance limits, who would decide where these limits are set and how would they go about doing this? How easy would it be to modify these limits, and under what circumstances would this occur? On the option of fettering the activities of law enforcement agencies, how would this be policed and what would happen if officers discovered information pertaining to a crime whilst themselves engaging in illegal surveillance activities? And on the option of citizen juries, how would these be empanelled, who could sit on them, and what oversight would exist?

The presence of such challenges does not automatically negate the viability of any of these options. This is merely an acknowledgement that any method adopted should be done so with caution and considerable foresight. That said, the ideas set out above are achievable for they reflect values and norms that are currently observable within UK society. Despite the preoccupation with security leading to the spread of SOSTs throughout society, both UK citizens and their government still protect certain areas from interference and consider certain actions unacceptable. The home and bedroom are still somewhat protected from intrusion in that police are not (yet) allowed to randomly enter homes to search for evidence of crimes without prior suspicion or evidence of an offence. Written and oral communication between suspects or prisoners with their legal representatives is still largely protected, and the use of torture is thankfully still considered beyond the pale for the majority of citizens. And yet all of these actions have the potential to uncover evidence of criminal offences.

These examples show UK citizens are not yet willing to sacrifice every concomitant right on the altar of security, and while this holds true the opportunity remains to introduce measures for protecting privacy and scaling back the surveillance society. Collective assessment is a step down this path in that it makes explicit the current overall balance between security and privacy, thereby raising citizen awareness of the state of their society. Nevertheless, if privacy is valued at least as much as security is valued then this collective assessment needs to be backed up with protection measures such as those outlined above. Without these measures any such assessment is merely an exercise in collecting and collating information. It will tell us how far away we are from the oncoming train that is the absolute surveillance society without affording us the wherewithal to change its direction before we find ourselves forever wedged firmly under its wheels.
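The two-step procedure in section 6, individual assessment first and collective assessment second, can be sketched as a small decision function. This is my own hypothetical rendering, not the author's: the privacy floor `P_MIN` and the per-measure fields are invented parameters, and the individual test is a stand-in for whatever PIA-style assessment a society actually uses.

```python
# Hypothetical sketch of Mitchener-Nissen's two-step assessment (my own
# rendering, not from the paper). A new SOST must first pass an individual
# assessment; only then is it assessed collectively, against the combined
# privacy impact of all SOSTs already in operation and a socially agreed
# privacy floor P_MIN below which no further reduction is justified.

P_MIN = 0.3  # assumed: the privacy level society refuses to go below

def individually_acceptable(measure):
    # Stand-in for an existing per-technology assessment (e.g. a PIA).
    return measure["security_gain"] > measure["privacy_cost"]

def combined_privacy(measures, p0=1.0):
    # Each measure removes a fraction of the privacy that remains.
    p = p0
    for m in measures:
        p *= 1 - m["privacy_cost"]
    return p

def assess(measure, deployed):
    if not individually_acceptable(measure):
        return "rejected outright"  # fails step 1, no collective step needed
    if combined_privacy(deployed + [measure]) < P_MIN:
        return "rejected on collective grounds"
    return "accepted"
```

Under this sketch, a measure that is perfectly defensible on its own would still be rejected once the stock of already-operating SOSTs has pushed combined privacy close to the floor.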

Jeroen van der Ham (Twitter: @1sand0s), a former colleague of mine, shares his criticism of Mitchener-Nissen’s article (my translation, switching to American English):

In the definition Mitchener-Nissen uses for “privacy”, he attempts to make privacy expressible on a scale, where I think that is not possible. His definition of privacy is also limited by only looking at how it relates to security; many security measures need not stand in the way of privacy, and it is one-sided not to take that into consideration. Furthermore, privacy is subjective, and bound to time and context. What we do and share on the internet today still feels like sufficient privacy, but 50 years ago everyone would have been out on the streets trying to stop it.

The approach that he proposes does not completely follow from his arguments. In addition, I think the solutions he proposes are not very promising, or redundant. Under the European Convention on Human Rights we already have certain things that are clearly off-limits; we do not need additional legislation for that. It also seems totally infeasible to me to designate certain things as completely off-limits. We are likely to always have an intelligence agency that can look at certain things, but with the right safeguards. The same applies to police acting under a warrant granted by a court.

Lastly, I also doubt whether current assessments of new measures do not already take into account the context of existing measures. Perhaps the right knowledge and science currently exist to determine the real impact of various measures in conjunction, but I do not see him argue that.

EOF

New Dutch intelligence oversight report: the (il)legality of SIGINT carried out by the AIVD in 2012-2013

On October 7th 2014, the Dutch Review Committee on the Intelligence and Security Services (CTIVD) published a new oversight report (.pdf, in Dutch) concerning the use of intercept powers by the Dutch General Intelligence & Security Service (AIVD) between August 2012 and September 2013. The AIVD has two interception powers: first, Article 25 Wiv2002 permits the AIVD to specify an individual or organization and, after Ministerial approval, carry out targeted interception (e.g. internet tap, phone tap, microphone). Second, Article 26 Wiv2002 and Article 27 Wiv2002 permit the AIVD to intercept non-cablebound communications (such as satellite and HF radio) in bulk and, after Ministerial approval, select data from it using telecommunication characteristics (e.g. phone number, fax number, email address, IP address; called “selectors” hereafter). Article 26/27 Wiv2002 are informally referred to as “sigint powers”; Article 25 Wiv2002 is not, and neither is hacking ex Article 24 Wiv2002.

Similar to previous oversight reports concerning Article 25 Wiv2002 (targeted interception), the new oversight report confirms that the AIVD generally uses this power carefully (in a legal sense, as evaluated within the framework of Dutch law). In individual cases, issues exist that the CTIVD deems careless or illegal, for instance cases where privileged persons (lawyers, doctors) are tapped.

Similar to previous oversight reports concerning Article 27 Wiv2002 (sigint selection by keywords, by identity of person or organization, and/or by telecommunication characteristics), the new oversight report confirms that the AIVD often acts carelessly (in the legal sense), and that the legally required motivation for the use of the power (necessity, proportionality, subsidiarity) is often insufficient. Strangely, up to 2013, the CTIVD systematically refrained from judging such practices to be illegal, even though the practice obviously did not comply with the law: proper motivation is a legal requirement, and that requirement has largely not been met for years.

Because of this, in November 2013 I stated that sigint oversight in the Netherlands is broken. Taking into account the developments concerning a possible extension of the sigint powers such that the AIVD can also carry out sigint on cablebound communications (think of GCHQ’s Tempora, NSA’s DANCINGOASIS and BND’s cooperation in the Eikonal program), this is a very serious issue. And it is at least as important as the issues concerning the (il)legality of the acquisition of intelligence from social media in 2011-2014 through hacking, human sources and exchange with foreign agencies: at least in nearly all those cases the CTIVD found that the activities met the requirement of necessity (though not necessarily also the requirements of proportionality and subsidiarity). As of June 15th 2014, all hacking and sigint is formally carried out by the Joint Sigint Cyber Unit.

As of January 1st 2014, two of the three committee positions within the CTIVD are held by new persons. Harm Brouwer, former chairman of the Board of Procurators General, replaced Bert van Delden as chair, and former Rotterdam police chief Aad Meijboom replaced Eppo van Hoorn, who resigned in Q3/2013. The third member is former public prosecutor Liesbeth Horstink-Von Meyenfeld: she joined in 2009, and will be legally required to resign or be reappointed in 2015. (After an insanely complex selection process, a position is filled by Royal Decree for a six-year period, and members can be reappointed once.) As shown below, the “new” CTIVD has turned out to be willing to conclude that lack of legally required motivation constitutes illegality. This seemingly changed standpoint solves one part of the oversight problem. The next question is: does the practice change in reality? This remains to be seen; the CTIVD itself has no formal means to intervene in AIVD activities. That is left up to the Minister, who typically defends the AIVD and who probably doesn’t spend a lot of time critically assessing requests for permission to intercept, and to the Parliament, which has historically shown scant interest in intelligence; this has only slightly changed since Snowden.

Interestingly, the CTIVD decided to disclose statistics concerning the use of Article 25 Wiv2002 and Article 27 Wiv2002 in the report, and the Minister of the Interior chose to censor those statistics in the final publication. This basically amounts to an affront to the CTIVD. Dutch readers should read this and this. I suggest that the CTIVD in response initiates an investigation into the use of Article 27 Wiv2002 to select by keywords. No oversight report is yet available that addresses this at length. The Minister of the Interior annually approves a list of topics for sigint selection by keywords; the keywords themselves are then chosen by the AIVD. One always wonders about the dynamics of keyword-based surveillance: for instance, whether the keywords are limited to, say, export-controlled chemicals, or whether the AIVD also selects using general keywords (“bomb”); and also, what the thresholds and conditions are for someone (e.g. an activist) to become a person of interest to the AIVD’s a-task (National Security) or d-task (Foreign Intelligence).

The remainder of this post is a translation of the part of the new oversight report that specifically addresses sigint selection ex Article 27 Wiv2002:

WARNING: this is an unofficial translation

12 Usage of the power to select sigint

12.1 Introduction

A request for approval for the use of the power to select sigint consists of two parts. First, the motivation for using the power. This details the specific investigation within which the power is used, and needs to discuss aspects of necessity, proportionality and subsidiarity. Second, the motivation has an appendix that lists the telecommunication characteristics (hereafter: list of selectors). The list of selectors describes the telecommunication characteristics that will be used as selectors (for instance, name of person or organization, phone numbers, email addresses). In the list of selectors, a column is included with a (very) brief description of the reason the selector is included. This description can also be a reference to the AIVD’s internal information system. The list of selectors is sent to the Minister of the Interior for approval, together with a summary of the motivation. Similar to the use of the interception power [ex Article 25 Wiv2002], approval can be given for a period of up to three months. The CTIVD has examined both the motivation of the use of the power to select sigint, and the (justification for the) list of selectors.

12.2 Selection of sigint in previous oversight reports

In oversight report 19, the CTIVD found that the AIVD did not deal with sigint selection in a careful manner. Often it was not explained whom the numbers and other telecommunication characteristics belonged to, or why this telecommunication needed to be selected. The CTIVD concluded that it had insufficient knowledge about the motivation of the selection, and thereby could not judge whether the power was used legally in accordance with Article 27 Wiv2002. The CTIVD urgently recommended that requests for the use, or for the extension of the use, of the sigint selection power should include a specific motivation. The Minister of the Interior responded by stating he agreed with the CTIVD, but also expressed worries about the practical feasibility of that recommendation. The Minister agreed to further consult with the CTIVD on this matter.

In oversight report 26, concerning the AIVD’s foreign intelligence task, the CTIVD found that in the application of Article 27 Wiv2002 for foreign intelligence purposes, in many cases it still was not specified whom a characteristic belonged to and why it was important to select the information that could be acquired through this specific characteristic. The CTIVD did notice, however, that once sigint operations had run for a longer period, the AIVD was better able to explain whom telecommunication characteristics belonged to, and could better argue why the use of the power against these persons was justified. The CTIVD emphasized that the AIVD must seriously strive to better specify against which person or organization sigint is used.

In oversight report 28, concerning the use of sigint by the Military Intelligence & Security Service (MIVD), the CTIVD further elaborated the legal framework for the entire procedure of the processing of sigint. In that report the CTIVD once again concluded, this time concerning the MIVD, that it could not judge whether the use of sigint was legal in accordance with Article 27 Wiv2002, because the CTIVD had insufficient knowledge about the motivation. In oversight report 38, the CTIVD repeated its earlier findings.

In oversight report 35, the CTIVD examined one specific operation that involved selection of sigint, and judged aspects of it to be illegal.

The current investigation constitutes the first time since oversight report 19 that the CTIVD has assessed the legality of the use of sigint selection in general concerning the AIVD.

12.3 Search for the purpose of selection in previous oversight reports

In certain cases the AIVD uses its sigint search power [ex Article 26 Wiv2002] prior to sigint selection [ex Article 27 Wiv2002]. The AIVD hereby aims to identify whether a relevant person of interest is present within the communication intercepted in bulk. In this case, the AIVD attempts to establish the identity of the person, and whether a relation exists to the field of investigation. This constitutes search for the purpose of selection. The use of sigint search supports better targeted sigint selection.

In oversight report 28, the CTIVD described the practice of sigint search by the MIVD, and in oversight report 38 the CTIVD repeated its conclusions. The CTIVD distinguishes three forms of search aimed at selection that involve taking note of the contents of communication. In short, these are the following forms:

  1. Searching bulk intercepts to determine whether Ministerially approved selectors can in fact generate the desired information;
  2. Searching bulk intercepts to identify or describe potential ‘targets’;
  3. Searching bulk intercepts for data from which, in the context of an anticipated new area of investigation, future selectors can be derived.

In oversight report 28, the CTIVD found only the first form of sigint search for the purpose of sigint selection to be legal, because only that form safeguards the privacy infringement, namely through the prior approval of the Minister to use sigint against the person or organization involved. This use of sigint search supports the sigint selection for which permission was obtained. This is necessary because Article 13 of the Dutch Constitution requires authorization by a competent body prior to infringements of phone secrecy and telegraph secrecy. The CTIVD finds the second and third forms of search to be illegal, because they have no legal basis, and the privacy infringement is not safeguarded by the requirement of Ministerial permission prior to using sigint selection against a person or organization.

The CTIVD left it to the legislator to consider whether it is necessary that the MIVD (and AIVD) be granted the power to search for the purpose of selection, taking into account the right to privacy. In his response, the Minister of Defense stated that he would cooperate with the Minister of the Interior to establish a future-proof legal framework. During the General Meeting that addressed, among other things, oversight report 28, the Minister of Defense stated that he agrees with the CTIVD’s conclusion concerning the sigint search power, but that while awaiting the intended change of law, the current practice will, for reasons of national security, continue. In their response to oversight report 38, both Ministers stated that the practice that was found to be illegal will be taken into account in an intended change of law. During a plenary meeting on eavesdropping by the NSA, which addressed oversight report 38, the Minister of Defense stated that the third form of sigint search has stopped, and repeated that the MIVD continues to carry out the second form of search while awaiting a change of law. In oversight report 38, the CTIVD announced that it would address sigint search by the AIVD in the ongoing investigation into the use of the interception power [ex Article 25 Wiv2002] and the power to select sigint [ex Article 27 Wiv2002] by the AIVD.

12.4 Methods of AIVD concerning search for the purpose of selection

The CTIVD has taken notice of an internal method of the AIVD concerning the use of the sigint search and sigint selection powers. This method is set out in writing and approved by management. The method provides for the possibility that an operational team examines whether a telecommunication characteristic, such as a phone number or email address, for which no Ministerial permission is yet available, is relevant to the investigation. The method is aimed at establishing the identity of the communicating party associated with this telecommunication characteristic. The CTIVD noticed that the AIVD interprets the identification of the communicating party more broadly than only establishing the name of the involved party: the AIVD also assesses whether the party is relevant to the investigation carried out by the operational team. One can think of establishing that a person has a certain function, relevant to the investigation, within an organization.

The department that facilitates bulk interception can, at the request of and in cooperation with the operational team, carry out a metadata analysis to determine how the telecommunication characteristic relates to other persons and organizations that are included in the investigation (who/what has contact with whom, how long, how often, from what location, etc.). To establish the identity of the communicating parties and determine their relevance for the AIVD’s investigation, it can be necessary to also take note of the contents of communication. The possibility is then offered to examine the nature of the communication that has already been intercepted. This can involve stored bulk intercepts for which no Ministerial permission has yet been obtained to select from. It can also involve communication that has previously been selected from the bulk and is thus already accessible by the team, or information that is in the possession of the AIVD through other (special) powers, such as hacking or the acquisition of a database.

The team’s data processor is given the opportunity to briefly see (or hear) the contents of the communication that can be related to the telecommunication characteristic to determine whether the telecommunication characteristic is relevant. This allows the team to determine whether it is useful to obtain permission and include the telecommunication characteristic on the list of selectors so that the communications can be fully known to the team. The CTIVD understood that not all teams use this method equally.

According to the AIVD’s internal policy, it is not intended that the processor exploits the information obtained without obtaining permission from the Minister.

The CTIVD depicts this method as follows.

[Figure: AIVD sigint method in practice]

12.5 Assessment of the methods

The CTIVD considers the question whether the AIVD’s method is legal.

Insofar as the method is used for telecommunication characteristics for which Ministerial permission was obtained, the CTIVD finds the method to be legal, because the privacy infringement is safeguarded by the Minister’s permission. The CTIVD also considers this method legal where the MIVD is concerned.

The CTIVD finds that the AIVD’s method, insofar as it concerns telecommunication characteristics that the Minister has not yet approved, is equivalent to the second form of sigint search described in section 12.3. The CTIVD thus concludes that this method is illegal.

This conclusion will be upheld as long as the anticipated legislation does not yet exist.

The CTIVD notes that the Minister of Defense, announcing an intended change of law and in consultation with Parliament, stated that the MIVD’s practice will continue. Although the Minister of Defense only mentioned the MIVD, the CTIVD notes that the AIVD, which applies the same method, also continues this practice pending a change of law. Given that this is the current practice, the CTIVD considers it important to evaluate the compatibility of the practice with the right to privacy.

The CTIVD notes that the AIVD uses this method for the purpose of carefully establishing the list of selectors and avoiding requests for unnecessary permissions. The CTIVD recognizes that the described method can support this. Moreover, the CTIVD can conceive of the possibility that it also supports implementation of the previous recommendation to improve the motivation of sigint selection. The CTIVD also expects that this could result in fewer infringements on the right to privacy, because the pre-investigation allows a more targeted use of sigint.

The right to privacy requires that the following safeguards are present:

  1. The only purpose of briefly perusing the contents of communication can be the determination of the identity of the communicating parties and the relevance of the communication to the ongoing investigation. Other use is not permitted until permission has been obtained from the Minister. A requirement for briefly perusing the contents of communication is that an adequate separation of duties exists between the department that facilitates bulk interception and the operational team, in the sense that the operational team does not itself obtain access to the communication.
  2. To ensure this separation of duties, it is important to provide adequate written registration and reporting of the perusal of the communication. It must be recorded what communication has been seen/heard, and what the outcome was. The CTIVD considers this registration and reporting to be important for internal accountability and external control, as well as for carefulness.
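
To illustrate the second safeguard: a registration entry of a perusal could minimally capture fields like the following. This is a hypothetical sketch of mine, not a description of AIVD practice, and all field names are invented:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class PerusalRecord:
    """One entry in a register of briefly perused communications.
    Field names are illustrative only."""
    selector: str       # the telecommunication characteristic examined
    examined_by: str    # the data processor (not the operational team)
    timestamp: datetime
    what_was_seen: str  # description of the communication seen/heard
    outcome: str        # e.g. whether permission will be requested

def log_perusal(register, selector, examined_by, what_was_seen, outcome):
    """Append a perusal entry to the register and return it."""
    record = PerusalRecord(selector, examined_by,
                           datetime.now(timezone.utc), what_was_seen, outcome)
    register.append(record)
    return record

register = []
log_perusal(register, "example-selector", "processor-1",
            "voice call, unknown language", "irrelevant - not added to list")
```

The point of the safeguard is precisely that such a record exists for every perusal, so that internal accountability and external control (by the CTIVD) remain possible.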

The CTIVD notes that the second safeguard is currently insufficiently implemented by the AIVD: insufficient reporting is performed on what communication has been seen/heard and what the outcome was.

12.6 Usage of the sigint selection power

The CTIVD has examined the AIVD operations that involved the use of sigint selection between September 2012 and August 2013. Concerning the examined operations, the CTIVD has several remarks. Insofar as necessary, these are elaborated on in the classified appendix.

The CTIVD finds that sigint selection is used in varying ways by the various operational teams of the AIVD. Specifically, a difference is seen between the teams of the National Security unit and the teams of the Foreign Intelligence unit. The use of this power by Foreign Intelligence teams is generally broader, as can be observed in the size of the lists of selectors. The operations of the National Security teams are largely more targeted. Considering that the legal definition of the a-task [National Security] is focussed on individual persons and organizations, while the d-task [Foreign Intelligence] provides for carrying out investigations on countries, this is not surprising. The CTIVD finds that the lists of selectors vary in size from a few selectors to thousands of selectors.

The CTIVD notes that for each person or organization that emerges during the investigation, it must be motivated why sigint selection is necessary. It must be stated what the purpose of the sigint selection is within the context of the investigation, and what the grounds are for the expectation that the yields of selection will contribute to that purpose. There hence must be a link between the broader investigation that is carried out and the necessity of selecting communication from the specific person or organization. This is different for every person or organization.

In a request for extension, the yields of the selection and the added value to the investigation must be considered, not in general terms but specific to the person or organization. General statements that the use of the special power contributed to the intelligence need, or resulted in (unspecified) reports, or confirmed current beliefs, are insufficient. In addition to necessity, a request for permission must state in what way the requirements of proportionality and subsidiarity are met. The CTIVD notes that an absence of yields can be a result of the nature of sigint. Communication can possibly not be intercepted because of the range [sic] of the satellite dishes. The CTIVD is of the opinion that it is permissible to uphold [selection by] telecommunication characteristics as long as it is periodically reconsidered whether upholding the characteristics still complies with the legal requirements for the use of the power, and that this consideration is written down. Insofar as the selection of certain telecommunication characteristics yielded communication but this communication turned out to be irrelevant, the AIVD must remove the characteristics from the list of selectors for which permission is requested.

The CTIVD finds that the extent to which the motivation for use of the power to select sigint establishes a framework from which it can be foreseen which persons or organizations are within the scope of the operation varies from operation to operation. In certain operations it is made clear whom the AIVD is interested in, and the motivation clearly states which persons and organizations are within the scope of the operation. The CTIVD notes that it is important that the motivation provides sufficient clarity about which persons and organizations can be selected, under which conditions, and why. A direct and clear link must exist between the motivation and the persons and organizations that are included in the list of selectors. The CTIVD finds that in three operations this link is absent, or made insufficiently clear by the AIVD. The CTIVD finds this to be illegal. Moreover, the CTIVD finds that in one operation, non-limitative enumerations are used in the motivation. An example of this is the phrase “persons and institutions such as [.], et cetera”. The CTIVD finds this to be illegal.

The CTIVD finds that in two operations, persons or organizations with a special status (e.g. non-targets, sources, occupations bound by professional secrecy) were included on the list of selectors without specific attention being paid to this in the motivation. The CTIVD notes that special categories of persons or organizations must be explicitly mentioned in the motivation if they are included on the list of selectors, and that attention must be paid to the legal requirements of necessity, proportionality and subsidiarity in relation to these telecommunication characteristics. The CTIVD notes that it is not, or insufficiently, apparent what considerations were made concerning these requirements. Considering the special status of these persons or organizations, the CTIVD finds this to be illegal. A number of telecommunication characteristics relate to a person with whom the AIVD cooperates, and whose interception [ex Article 25 Wiv2002] by the AIVD was found to be illegal by the CTIVD. The CTIVD also finds the sigint selection against this person to be illegal.

The CTIVD finds that in two cases, the motivation on which the Minister based his approval paid no attention to the requirements of necessity, proportionality and subsidiarity. In the internal motivation, the AIVD does pay sufficient attention to this. The CTIVD finds this to be careless. Although the AIVD is not required to provide the Minister with an exhaustive motivation concerning necessity, proportionality and subsidiarity, the CTIVD notes that the permission request must provide sufficient clarity about the considerations to allow the Minister to assess the request.

The CTIVD finds that the AIVD in multiple cases incorrectly applies the requirements of proportionality and subsidiarity. Deliberating on subsidiarity, for instance, in one operation the AIVD stated that sigint selection “could possibly yield change for exchange with other foreign agencies”. In a different operation, it was stated that sigint selection allows the AIVD to “carry out investigations in a relatively simple and efficient manner, involving limited risks”. The CTIVD notes that this does not constitute a correct deliberation in terms of the requirements of proportionality and subsidiarity. A correct deliberation on proportionality implies, after all, a weighing of interests that explicitly involves the interests of the target. This also applies to the requirement of subsidiarity, on the basis of which the AIVD is required to use the means that least infringe on rights. The CTIVD notes that the outcome of this deliberation can mean that the AIVD has to use an inefficient and relatively complex means. The CTIVD did not find evidence indicating that the operations involved fail to meet these requirements. The CTIVD thereby finds that the motivation by the AIVD is lacking and thus careless, but not illegal.

The way in which the list of selectors is structured varies from team to team. The list of selectors includes a (very) brief motivation of why the characteristic is included. The CTIVD finds that this motivation is done in very different ways across operations. The CTIVD observed cases in which the list of selectors refers to an internal AIVD document that explains the relevance of the characteristic to the operation. In addition, the CTIVD observed cases in which a short explanation of the relevance is included. In one operation, the motivation comprises no more than a brief indication of the function of the person, or another indication of the characteristic. This indication can comprise a single word (e.g. biologist, toxicologist, phone number, fax). The CTIVD finds this method of motivation, which includes only a single word and no further explanation, to be illegal. The CTIVD notes that the latter cases are insufficiently traceable to the motivation of the request for permission, in which the framework must be included. The CTIVD notes that for every characteristic, at least its relevance must be (briefly) stated in the list of selectors, and where necessary, a clear reference must be included (to an internal document) on the basis of which the relevance of the characteristic can be further assessed.

The CTIVD finds that in a certain list of selectors, multiple telecommunication characteristics have been included with the remark that these “probably” belong to a person for whom the Minister has approved sigint selection. The CTIVD notes that the use of the first form of sigint search, as described in section 12.3, could help limit the characteristics included on the list of selectors to characteristics that the AIVD has determined could actually be related to the person of interest.

The CTIVD finds that the list of selectors in certain operations has substantially increased over time. In one operation, the CTIVD observed telecommunication characteristics that were obtained through legal use of a different special power (such as phone numbers or email addresses). The CTIVD finds that all these characteristics were used for sigint selection by the AIVD. In nearly all cases, it was not indicated to whom the number belongs or what its specific relevance is. In fact, the only link to the AIVD investigation was the circumstance that the persons associated with the characteristics had contacted a person of interest to the AIVD, without any indication of the nature of the contact or other relevant clues. The CTIVD notes that mere contact with a person of interest, without the relevance of this contact to the AIVD’s investigation, is insufficient justification for inclusion in the list of selectors. The CTIVD considers it likely that less infringing means could be used to determine which contacts are evidently irrelevant. The CTIVD thus finds the selection based on these telecommunication characteristics to be illegal.

EOF

Dutch Hosting Provider Association (DHPA) opposes Dutch govt’s sort-of-voluntary internet censorship plan

In August 2014, the Dutch government proposed a 38 step action plan (.pdf, in Dutch) to fight jihadism. As explained here, the proposal included voluntary cooperation-based internet censorship with the purpose of reducing jihadist use of the internet. Today, the Dutch Hosting Provider Association (DHPA / @stichtingDHPA) posted a press release explaining that it, representing its members, opposes the current proposal. Here is my translation of that press release:

Dutch Internet industry and Ministry of Justice collide over fight against jihadism

Law enforcement agencies increasingly force internet companies to remove radicalizing content without a court order. This leads to an impossible situation, says Michiel Steltman, director of the Dutch Hosting Provider Association (DHPA), on behalf of the internet sector. ‘Does the government want to force companies, for instance, to include jihadism in their general conditions? And how does a hoster decide what content is undesirable?’

According to the companies, the underlying problem is that the Public Prosecution Service does not judge many of the Ministry of Justice’s suspicions to be criminal, and for that reason refuses to prosecute, meaning that no judicial review takes place. If the Ministry of Justice still believes that the videos or documents must be removed, no option remains other than to pressure companies into complying with the request. But the companies say they cannot and will not judge whether something is a criminal offense.

Not a censorship agency

Steltman mentions the recent example of a group of men shooting around a campfire, shouting ‘allahu akbar’, with some lines in Arabic. ‘Did they just kill someone, are they angry because someone was killed, or are they having a party because a goat has just been slaughtered?’

In addition, Alex de Joode, company lawyer for Government Affairs at Leaseweb, the largest business hosting provider in the Netherlands, objects to the methods of the Ministry of Justice: ‘We are not an age verification or censorship agency. The government has a fine legal instrument to remove content, but chooses not to use it in cases of alleged jihadism’.

Pay damages

The sector is afraid of being held liable by potential victims. De Joode: ‘Suppose that we are wrong and illegally take down a site without a court order. That can cause us a lot of damage.’

More and more internet companies disclose the number of demands made by law enforcement agencies, including XS4ALL and Leaseweb. In the US, Google largely set this trend, followed by companies such as Microsoft and Twitter.

Responsibility of companies

Dick Schoof, National Coordinator Counterterrorism and Security (NCTV), considers it to be a responsibility of companies to, ‘on the basis of interpretation by the NCTV, assess the content of the website against their own general conditions. Hereby we appeal to the responsibility of the providers.’

Steltman emphasizes that internet companies are willing to establish better procedures in cooperation with the Ministry of Justice. Schoof describes the currently ongoing conversation with internet companies and social media companies as ‘very constructive’.

EOF

Dutch Data Protection Agency’s considerations on necessity and proportionality of the hacking power for LE proposed by the Dutch govt

In May 2013, the Dutch government proposed legislation — specifically this document (.pdf, in Dutch) — that would grant Dutch law enforcement the power to break into “automated works” (computers, smartphones, etc.), for instance using FinFisher. The Dutch intelligence agencies have had that power since 2002; the Dutch LE agencies do not. But lack of legal authority notwithstanding, some hacking by Dutch police has been seen in practice: for instance, to take down Bredolab (2010) and to fight child porn on Tor (2011). This is confirmed by the finding that the Dutch police currently has active FinFisher licenses, and by yesterday’s answers (in Dutch) to Parliamentary questions on this topic (h/t @rejozenger).

The proposed legislation is flawed, as is apparent from the contributions to the public consultation that closed in July 2013, and from this post by Bits of Freedom. The legislation also proposes granting law enforcement the authority to force suspects of certain crimes (such as terrorism and child porn) to decrypt their data, under penalty of three years’ imprisonment or a fine of the fourth category (some 20k euros). Prior to the proposal, professor Bert-Jaap Koops was commissioned by the Dutch govt to carry out a study (.pdf, 2012, in Dutch) of infringements on nemo tenetur (the right not to self-incriminate) in other countries. Koops outlined three possible ways forward; the Dutch govt chose the toughest of the three.

In February 2014, the Dutch Data Protection Authority (CBP) published (.pdf) a critical advice that addresses issues concerning the necessity and proportionality of the proposed hacking power. The CBP recommends that the government not submit the proposal to Parliament in its current form. The report contains a few considerations that are interesting even to readers unfamiliar with the proposal or with Dutch law.

Here is my translation of the interesting parts from the CBP’s advice (TL;DR: the proposal is insufficiently substantiated; there are flaws concerning necessity and proportionality):

1.4 Review of necessity, proportionality and subsidiarity

Necessity

With regard to necessity, the proposal argues on the basis of technological developments that existing investigative powers are insufficient and that new powers are needed. Although it is argued that law enforcement is in urgent need of this new power, and some scenarios are brought forward in which existing powers provide no solace, insufficient concrete evidence is provided to demonstrate an urgent need for society to introduce these infringing measures. The considerations underlying the proposed powers are indeed largely based on a number of concrete scenarios, but those do not in themselves sufficiently justify granting new powers. The urgency referred to in Article 8 of the ECHR also requires an independent contemplation and substantiation that transcends casuistry. The necessity (“pressing social need”) for the introduction of this new authority should be established conclusively in objective terms, and is currently insufficiently substantiated. The Dutch Data Protection Authority recommends including the missing considerations. Furthermore, the CBP considers the following.

Insufficient distinction is made between encryption of files and data by suspects, encryption of communication flows in transit, and the fact that people store data elsewhere, in the cloud. This distinction is essential to determine to what extent the power is necessary, and whether other means exist to achieve the same goal with a lesser infringement on the privacy of those involved. In the Netherlands, all providers of public electronic communication networks and services are required to provide decrypts of communication they themselves encrypt. In cases where an investigation requires urgent access to data managed by foreign providers such as Google, Skype or Facebook, it is insufficiently substantiated that these providers would not cooperate with legal requests. They have, or can provide, access to the decrypted content of email and files on their servers, and can be asked to cooperate with intercepting the communication of a specific suspect. In case the suspect has encrypted data himself using software such as PGP or TrueCrypt, the investigators could use existing authorities, or use the proposed authority to force the suspect to decrypt the data. The necessity of exercising the power of breaking into computers is insufficiently substantiated, considering the size and severity of the privacy infringement it produces. Considering the use of Tor networks to encrypt communication in transit, it needs to be substantiated why other often-used methods of fighting serious crime are not effective (the requirement of subsidiarity).

When fighting botnets, scenarios are conceivable in which command-and-control servers are located abroad, or in which their location cannot be determined. In those cases, the existing powers do not suffice, and making the systems inaccessible through remote intrusion of an automated work may offer a solution. Also in specific scenarios, for instance an ongoing DDoS attack on a bank or another essential service, it is conceivable that this combination of powers can offer a solution. Also in the case of bulletproof hosting providers, other means are insufficient. However, the reasoning that insufficient means are available in the case of bulletproof hosting providers does not warrant the conclusion that law enforcement needs to have access to all data stored in the cloud.

Proportionality

Concerning proportionality, the proposal ignores the size of the privacy infringement that will result from the introduction of this power. That infringement concerns, on the one hand, the large amount and the nature of the personal data, and on the other hand the large circle of persons whose right to privacy is infringed upon. The mandatory consideration of whether the severity of the privacy infringement is proportional to the objectives sought is missing from the proposal. According to the proposal, the power to carry out an investigation in an automated work can only be used for the objectives mentioned under a to e. The objective under a (establishing the presence of data or determining the identity or location of the automated work or the user) is characterized as non-far-reaching, but once access has been obtained, the result will be far-reaching: law enforcement has unlimited access to all available digital data. That also holds for the other objectives. After access is obtained through the use of spyware, that access cannot be restricted to the objectives stated in the warrant. This is not only disproportional, but also results in excessive processing of police data (Article 3, second paragraph, Police Data Act).

1.5 Safeguards

Given the extent of the power and the severity of the privacy infringement, the use of the power must be subject to strict safeguards. The proposal provides several safeguards, including a clause that restricts the use of the power to suspects of crimes of a certain severity, the clause that the power can only be used for specific objectives, the mandatory specification of the grounds for the warrant, and the requirement that the prosecutor obtain prior approval from the magistrate. In addition to these safeguards, the CBP considers the following safeguards to be essential.

Controls and logging

An important safeguard is the verifiability of the exercise of the power throughout the entire process, from requesting approval to using the power. Article 4, third paragraph, of the Police Data Act, which requires that adequate technical and organizational measures be taken, requires that a comprehensive auditing system be set up for accountability during the entire process. In addition, knowledge of, and insight into, the software used is necessary. Its quality and reliability, as well as possible hidden vulnerabilities, must be subject to constant evaluation. Besides the “regular” journaling and reporting, logging is important. Concerning logging, it is argued that it must at all times be possible to check what technical actions were taken, such that at a later moment there can be no doubt about the nature and consequences of the actions that have been taken [6].
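
A minimal illustration of such verifiable logging is a hash chain, in which each log entry commits to all previous entries, so that later alteration of any entry is detectable. This is my own sketch of the general technique, not a description of the software the police would actually use:

```python
import hashlib
import json

def append_entry(log, action):
    """Append an action to a hash-chained log. Each entry's hash covers
    the action and the previous entry's hash, chaining the whole log."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps({"action": action, "prev": prev_hash}, sort_keys=True)
    entry = {"action": action, "prev": prev_hash,
             "hash": hashlib.sha256(payload.encode()).hexdigest()}
    log.append(entry)
    return entry

def verify(log):
    """Recompute the chain from the start; False if any entry was altered."""
    prev_hash = "0" * 64
    for entry in log:
        payload = json.dumps({"action": entry["action"], "prev": prev_hash},
                             sort_keys=True)
        if entry["prev"] != prev_hash or \
           entry["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_entry(log, "deployed software on target system")
append_entry(log, "retrieved file listing")
```

Changing any earlier `action` breaks verification of the chain, which is exactly the property the CBP's safeguard asks for: no doubt afterwards about the nature and order of the actions taken.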

However, logging cannot as yet always show all relevant actions [7]. It also holds that useful logging requires that the precise way in which the software works is known, including the source code.

Legal protection; criminal system

This new power is placed in Title IV, which concerns special coercive measures. These coercive measures are characterized by a certain knowability of their application to the person involved. The proposed power, on the contrary, is characterized by covert application, and therefore undeniably has the character of a special investigatory power. The special investigatory powers have been placed in separate titles of the Code of Criminal Procedure since the introduction of this system in 2000 by the Special Investigation Powers Act. The basic principle of that law is that investigative powers that carry a large risk to the integrity and verifiability of the investigation, or that infringe upon fundamental rights of citizens, require a specific basis in the Code of Criminal Procedure. The interests and fundamental rights at stake require this. The title of general provisions that applies to all special investigatory powers contains specific safeguards that are, at least partially, withheld as a result of the proposed placement in the title of coercive measures.

Notification of individuals, oversight and review of effectiveness

Notification of the individual afterwards is, also considering the flawed current practice in which the mandatory notification often does not take place, a scant safeguard for the required accountability of the use of the power. Taking into account the implications of the exercise of the power, it is also recommended that the proposal provide a control instrument that allows direct and effective oversight of the way the power is used, among other things through a requirement to regularly provide statistics and overviews. In this regard, the inclusion of a sunset clause is indispensable.

It remains to be seen what the government will do with this advice. The government has submitted its proposal to the Dutch Council of State for (further) advice. After that, it may or may not be submitted to Parliament.

Some related topics: Europol recently published a report that warns of the risks of encryption and anonymity to law enforcement, and Bruce Schneier observed that the crypto wars are back.

EOF

Two short stories of real criminal use of computers in the 90s concerning Colombia, New Zealand and the UK

UPDATE 2014-10-16: turns out one of the stories mentioned in Aldrich’s book is a hoax. Thanks to Nick R for pointing me to this comment left at Bruce Schneier’s blog. I removed the story from the below post and apologize for spreading false information.

I’m reading Richard J. Aldrich’s book GCHQ – The Uncensored Story of Britain’s Most Secret Intelligence Agency (2010). Earlier I quoted a few of Aldrich’s paragraphs that discuss TEMPEST in the 1960s. I’m currently reading Chapter 24 (“The New Age of Ubiquitous Computing”) and think the following two examples by Aldrich of criminal use of computers in the 1990s are interesting to share here.

Aldrich’s first example is of a drug cartel using an IBM AS400 mainframe to analyze phone records to discover informants:

In the autumn of 1994, elite counter-drugs forces were searching a compound in an affluent neighbourhood of the Colombian city of Cali, home to some of the world’s major cocaine cartels. This time, instead of finding drugs, they uncovered a large computer centre, with six technicians slaving over an IBM AS400 mainframe around the clock. The presumption was that this had something to do with major underworld financial transactions, so the computer was dismantled and taken to the United States for analysis. In fact, the drug cartel had loaded all the office and home telephone numbers of US diplomats and counter-narcotics agents based in Colombia. They had then added the entire regional telephone log containing the call history of the last two years, purchased illegally from the commercial telephone company in Cali. This was being systematically analysed, using ‘data-mining’ software of the kind now commonly used by intelligence agencies, to identify all the people who had been calling the counter-narcotics officers on a regular basis. The drug barons were engaged in sophisticated sigint to uncover informants in their ranks. Chillingly, a dozen had already been assassinated, and this was the machine that had uncovered them. [Footnote 2: P. Kaihla, ‘The Technology Secrets of Cocaine Inc.’, Business2.com, July 2002]
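
Stripped of the hardware, the cartel's analysis amounts to a simple frequency count over call records: flag every subscriber who regularly called numbers known to belong to counter-narcotics agents. A toy sketch of that idea (my illustration; all numbers are made up):

```python
from collections import Counter

# Hypothetical: known office/home numbers of counter-narcotics agents
agent_numbers = {"555-0100", "555-0101"}

# Hypothetical slice of a regional call log: (caller, callee)
call_log = [
    ("555-0200", "555-0100"),
    ("555-0200", "555-0100"),
    ("555-0200", "555-0101"),
    ("555-0300", "555-0100"),
]

def frequent_callers(call_log, agent_numbers, threshold=2):
    """Count each subscriber's calls to any agent number and flag
    those at or above the threshold."""
    counts = Counter(caller for caller, callee in call_log
                     if callee in agent_numbers)
    return {caller for caller, n in counts.items() if n >= threshold}

suspects = frequent_callers(call_log, agent_numbers)
```

That two years of a city's call history plus a list of target numbers suffices for this is what makes the episode an early, chilling example of private-sector-grade sigint.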

Second, Aldrich describes cyber attacks on banks in the City of London by blackmailers:

In 1995 GCHQ also found itself investigating cyber attacks on banks in the City of London. Working with the Department of Trade and Industry and the Bank of England, it began to probe crimes which the banks were extremely anxious to hide. Outwardly, they claimed to be secure, but in fact they had paid out millions of pounds to blackmailers who had gained entry to their systems and threatened to wipe their computer databases. GCHQ was hampered by limited cooperation from the banks, which were reluctant to admit the extent to which they had been damaged, for fear of undermining the confidence of investors. Nevertheless, GCHQ was able to identify forty-six attacks that had taken place over a period of two years, including attacks on three British banks and one American investment house. One of the questions GCHQ was asking was how the blackmailers had gained access to ‘hacking’ technologies that had been developed by military scientists. [Footnote 4: Insight Team, ‘Secret DTI Inquiry Into Cyber Terror’, Sunday Times, 09.06.96.]

EOF