Author: mrkoot

Dutch IRS uses the Dutch police’s (nearly) nation-wide ANPR camera network for state tax collection

UPDATE 2017-02-24: NRC Handelsblad reports that the Dutch supreme court (“Hoge Raad”) granted an appeal holding that the Dutch IRS, for privacy reasons not further explained in the news report, cannot use the nation-wide ANPR camera network to check whether drivers of leased company vehicles drive more than 500km per year privately (conceivably, the mass processing of nation-wide ANPR data is claimed to be disproportionate for that purpose). Persons who drive more than 500km per year for private reasons in a leased company car (effectively) have to pay more tax. The final ruling by the supreme court is expected later this year.

UPDATE 2014-10-22 #2: build your own vehicle license plate recognition using the DTK ANPR SDK v2.0 (kudos to unnamed person for the tip).
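As a side note on what such SDKs produce: the recognition step yields a plate string, which downstream code then sanity-checks against known plate formats. A toy plausibility check for Dutch plate formats (“sidecodes”), my own illustration covering only a subset of the historical sidecodes (it is unrelated to the DTK SDK’s actual API), might look like this:

```python
import re

# A subset of Dutch license plate formats ("sidecodes"); illustrative, not exhaustive.
SIDECODES = [
    r"[A-Z]{2}-\d{2}-\d{2}",     # sidecode 1: XX-99-99
    r"\d{2}-\d{2}-[A-Z]{2}",     # sidecode 2: 99-99-XX
    r"\d{2}-[A-Z]{2}-\d{2}",     # sidecode 3: 99-XX-99
    r"[A-Z]{2}-\d{2}-[A-Z]{2}",  # sidecode 4: XX-99-XX
    r"[A-Z]{2}-[A-Z]{2}-\d{2}",  # sidecode 5: XX-XX-99
    r"\d{2}-[A-Z]{2}-[A-Z]{2}",  # sidecode 6: 99-XX-XX
]
PLATE_RE = re.compile(r"^(?:%s)$" % "|".join(SIDECODES))

def looks_like_dutch_plate(s: str) -> bool:
    """Cheap plausibility check on an OCR'd plate string."""
    return PLATE_RE.match(s.strip().upper()) is not None
```

A real ANPR pipeline would additionally handle OCR confusions (0/O, 1/I) and the newer sidecodes.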

UPDATE 2014-10-22: similar news was covered at length several weeks ago by Maurits Martijn (De Correspondent). His reports have the attention of national politics. In 2013, Wilmer Heck (NRC Handelsblad) first reported on the existence of a covenant (.pdf, in Dutch) between the police and the IRS on the use of ANPR, a cooperation that turned out to have existed since (at least?) 2011. Also see this document (.pdf, in Dutch).

Yesterday, Dutch news site GeenStijl reported (in Dutch) on information (.pdf, in Dutch) obtained through a FOIA request revealing that the Dutch police allows the Dutch IRS to use all Automatic Number Plate Recognition (ANPR) camera footage and data that is “fiscally relevant” to the collection of state taxes. ANPR cameras are installed throughout the Netherlands: it essentially is a (nearly) nation-wide network of traffic cameras. The camera footage will be used to enforce, among others, the following Dutch tax laws:

Since journalist Wilmer Heck’s report in 2013, it has been known that the IRS has used ANPR data since at least 2011. But up until last month, that “only” involved the 200-ish cameras on main traffic axes. This cooperation has now been extended so that the IRS may use data collected through all ANPR cameras. Here’s a translation of the report on GeenStijl (note: I rephrased bits to make it clearer for people not familiar with Dutch media & politics):

IRS will be fully watching ANPR camera footage

At the end of September, the Dutch Minister of Security & Justice, Ivo Opstelten, wrote that it is “technically and administratively feasible” to use the Dutch police’s ANPR cameras “more extensively” [in Dutch]. The letter was written in a way that suggests that expansion still had to take place. This was another creative view on the Hague reality of the senile old bastard: the month before, a covenant had already been signed between the police and the IRS. That covenant states that the IRS can “co-use” the “ANPR cameras that are in use by the police at main thoroughfares“, but also that the IRS can extend this co-use to “other ANPR cameras that belong to the police’s ANPR network”. So, ALL of them. It’s very friendly of the police to allow the IRS to browse through number plate data of all Dutch citizens. Especially taking into account that the police really cannot store & retain the data, according to a law they have been violating for years [in Dutch]. And as opposed to car drivers who exceeded the speed limit by three kilometers per hour, the IRS does not even have to pay administrative costs to the police for using their stasi-cams. More friendly collegiality, that enables the government to more easily see behind the car doors of the unwitting driver. We asked the police and IRS about the privacy implications. Their response, singing all together: “Privacy? Hahaha LOL!” Duly noted. (h/t)

The police and IRS are also legally allowed to carry out tasks on behalf of the Dutch intelligence & security services, such as the General Intelligence & Security Service (AIVD). If one can think of a plausible use of ANPR data for intelligence services, it can safely be assumed they use it as well (note: no specific evidence for that is known to me).

EOF

In early 2015, Dutch govt will ask parliament to grant hacking power to law enforcement

UPDATE 2017-10-30: the cybercrime bill (Computer Crime Act III, aka Wcc3) is still being handled by the Senate. Jan-Jaap Oerlemans posted a message on oversight on hacking & notice & takedown (NTD) in this bill. He also wrote an elaborate overview article (.pdf, in Dutch) of the bill.

UPDATE 2016-03-09: report on Wcc3 by the lower house committee on Security & Justice.

UPDATE 2015-12-22: and here they are: the new cybercrime bill and MoU (in Dutch) as submitted by the cabinet to the House. Notably, the cabinet cancelled compelled decryption because of the right not to self-incriminate (nemo tenetur principle). Thus, the final bill, which will be discussed in the House, does not contain a power for LE to compel suspects of certain “very serious criminal offenses” to decrypt their data under penalty of three years’ imprisonment or a fine of up to ~20k euro.

UPDATE 2015-11-27: the cabinet announced today that it submitted the cybercrime bill to the House of Representatives, as part of a series of bills relevant to counterterrorism. The bill should become available in the not-too-distant future; I’ll add the link here. The bill’s status has moved from “Raad van State” (Council of State) to “Tweede Kamer” (House of Representatives). NOS has a report (in Dutch).

UPDATE 2015-06-11: it is reported that the cabinet will submit the proposal after the parliamentary summer break, which ends on August 31st 2015.

In October 2012, the Dutch government announced its initiative to grant law enforcement the power to covertly and remotely access “automated works” (computers, phones, etc.), under certain circumstances. In 2013, draft legislation (Memorandum of Understanding) was published. The proposal concerning covert and remote access is part of a larger text — unofficial English summary available here — that criminalizes the trade in stolen (digital) data and that proposes the following powers:

  • Remote entry of automated works and the placement of technical means (such as software) for the purpose of investigation of severe forms of cybercrime. (Note 1: this applies to “serious criminal offenses”. Note 2: some hacking has already been carried out by Dutch police, for instance to take down Bredolab (2010) and to fight child porn on Tor (2011), under authorization of a magistrate.)
  • Remote search of data that is accessible from an automated work, regardless of the location of the automated work on which the data is stored and taking into consideration agreements and rules of international legal assistance;
  • Remotely making data inaccessible that is accessible from an automated work, regardless of the geographical location of the automated work on which the data is stored and taking into consideration agreements and rules of international legal assistance;
  • Compelling suspects of certain “very serious criminal offenses” to decrypt their data under penalty of three years’ imprisonment or a fine of up to ~20k euro (at odds with nemo tenetur).

All of the proposed powers require authorization from a magistrate. The proposal was covered on Slashdot and criticized by Bits of Freedom. In May 2013, the Dutch government submitted the proposal for public consultation (in Dutch). Bits of Freedom submitted criticism, as many others did, including me (in Dutch). The government also submitted the proposal to the Dutch Data Protection Agency (CBP), who in February 2014 expressed concerns relating to the requirements of necessity and proportionality imposed by the European Convention on Human Rights (ECHR). That same month, the government submitted its proposal to the Dutch Council of State for further consultation.

It is publicly known that the Dutch national police (KLPD) had, and still has, active licenses for FinSpy (trojan horse that runs on Windows, OS X and Linux) and FinSpy Mobile (that runs on Android, Blackberry, iOS and Windows Phone): this was observed in WikiLeaks’ SpyFiles 4. The use of such methods is confirmed through the answers (in Dutch) given on October 6th 2014 to Parliamentary questions on this topic (h/t @rejozenger).

On October 18th 2014, the Dutch Minister of Security & Justice answered (.pdf, in Dutch) Parliamentary questions by MP’s Berndsen-Jansen and Verhoeven (both affiliated with the D66 party) concerning this proposal. The last answer indicates that the govt will submit its proposal to the Dutch Parliament in early 2015. Here is a translation of all six questions and answers:

Question 1:
Are the reports correct that a large international investigation is ongoing into Blackshades, software that can be used, among other things, to create malware? [Footnote 1: http://www.nu.nl/weekend/3858563/huiszoeking-aanschaffen-omstreden-software.html]
Answer 1:
The reports are correct to the extent that the US and Canada have ongoing criminal investigations in various European countries against buyers, sellers, distributors and/or creators of software primarily designed to commit, in short, computer crime as meant in Articles 138ab (first section), 138b and 139c of the Penal Code.

Question 2:
Did the Public Prosecution, in the context of the investigation into Blackshades, commission the hacking of the Blackshades server? If so, can you explain the legal basis for that, and the grounds on which it is permissible?
Answer 2:
The Public Prosecution did not commission the accessing of the Blackshades server. Dutch law enforcement has, under the responsibility of the Public Prosecution, and after authorization of a magistrate, remotely accessed a server and searched this server to record data on the basis of Article 125i of the Code of Criminal Procedure.
Under certain circumstances, Article 125i, after authorization of a magistrate, permits remote access of a computer, for the sole purpose of searching the computer for predetermined data files and, if necessary, seizing those by recording them. This occurred in two criminal cases involving very serious offenses. I refer to the answers to the questions by MP Gesthuizen (Socialist Party) to the Minister of Security & Justice on the use of controversial spying software by Dutch law enforcement (2014Z13948, submitted August 11th 2014).

Question 3:
How often has the Public Prosecution so far commissioned the police to hack servers and computers in the context of an investigation, and what was the legal basis for the authority to hack?
Answer 3:
The police carries out investigations on the basis of the Code of Criminal Procedure. The term “hacking” is not present there. As mentioned in the previous answer, the police has, on the basis of Article 125i, only in several (exceptional) cases, with authorization from a magistrate, accessed an automated system and secured data from a server whose location and ownership were unknown. One of those investigations concerns Blackshades.

Question 4:
To what extent is the current Penal Code sufficient as a legal ground for the police to access servers and computers of suspects?
Question 5:
Is it true that your proposal to “Change the Penal Code and the Code of Criminal Procedure in relation to the improvement and strengthening of investigation and prosecution of computer crime (Computer Crime III)” aims to provide a legal basis for Justice to hack servers and computers for the purpose of an investigation? If so, how does the current practice of commissioning hacking for the purpose of an investigation relate to this proposal?
Answers 4 and 5:
As explained in answer 2, the current legislation must be supplemented, which the Computer Crime III proposal aims to do. The purpose of that legislative proposal is to tailor the legal framework for investigation and prosecution of cybercrime towards the investigation and prosecution of computer crime and new methods used by criminals. Today’s society and the fast changes of technology for communicating and sharing or storing information globally require that law enforcement keeps pace (also see my letter to Parliament of October 15th 2012 concerning legislation for fighting cybercrime).

Besides various changes and supplements, the legislative proposal provides a new power that allows an investigating officer, following an order of a prosecutor, to covertly and remotely access an automated work to exercise certain investigatory powers in that automated work. Accessing an automated work is a more infringing power than searching an automated work, and necessary for the investigation of many forms of internet crime.

Question 6:
When do you expect to submit the Computer Crime III proposal, that has been in consultation since May 2013, to Parliament?
Answer 6:
The legislative proposal will be submitted to Parliament in early 2015.

One important aspect will be to what extent the government addressed the concerns expressed by the Dutch Data Protection Authority (CBP). Notably, the CBP advised that logging of police actions through malware for the purpose of accountability requires that the precise way in which the software works must be known — including the source code (although that probably won’t fly IRL).

EOF

A critique of the balancing metaphor in privacy and security (Mitchener-Nissen, 2014)

Timothy Mitchener-Nissen, during his affiliation with the University College London as a Teaching Fellow in Sociology of Technology, published an article entitled Failure to collectively assess surveillance-oriented security technologies will inevitably lead to an absolute surveillance society (Surveillance & Society Vol 12 Issue 1, p.73-88). In this post I quote paragraphs from that article with the purpose of sharing, and for my own reference purposes.

First, here’s the abstract:

The arguments presented by this paper are built on two underlying assertions. The first is that the assessment of surveillance measures often entails a judgement of whether any loss in privacy is legitimised by a justifiable increase in security. However one fundamental difference between privacy and security is that privacy has two attainable end-states (absolute privacy through to the absolute absence of privacy), whereas security has only one attainable end-state (while the absolute absence of security is attainable, absolute security is a desired yet unobtainable goal). The second assertion, which builds upon the first, holds that because absolute security is desirable new security interventions will continuously be developed each potentially trading a small measure of privacy for a small rise in security. When assessed individually each intervention may constitute a justifiable trade-off. However when combined together these interventions will ultimately reduce privacy to zero. To counter this outcome, when assessing the acceptability of any surveillance measure which impacts upon privacy (whether this constitutes a new technology or the novel application of existing technologies) we should do so by examining the combined effect of all surveillance measures currently employed within a society. This contrasts with the prevailing system whereby the impact of a new security technology is predominantly assessed on an individual basis by a subjective balancing of the security benefits of that technology against any reductions in concomitant rights, such as privacy and liberty. I contend that by continuing to focus on the effects of individual technologies over the combined effects of all surveillance technologies within a society, the likelihood of sleepwalking into (or indeed waking-up in) an absolute surveillance society moves from being a possible to the inevitable future.

In the body of the paper, the author defines surveillance-oriented security technologies (SOSTs) as follows:

These are technologies intended to enhance the security of citizens via some inherent surveillance capability either operated by or accessible to the state. They facilitate the monitoring, screening, or threat assessment of individuals, groups, or situations, and are based on live-events, past events or the processing of data.

The author is skeptical about the effectiveness of Privacy Impact Assessments (PIAs) that have become mandatory for governments in various countries (including the Netherlands: see this):

PIAs also employ balancing through cost/benefit analyses of different actions and values, the production of business cases justifying both privacy intrusions and the resulting implications, and when public acceptability of the proposed project is analysed [Wright et al. 2011: Precaution and privacy impact assessment as modes towards risk governance. In Towards Responsible Research and Innovation in the Information and Communication Technologies and Security Technologies Fields]. Again, the focus is on the specific project to hand. There is the possibility here to take a more collective view within such assessments; however, for our purposes this would require knowledge of the current state of SOSTs operating within a society so as to form a clear picture of the status quo. It is doubtful those undertaking the PIA would have access to such information or the resources to acquire it. Recently the concept of Surveillance Impact Assessment (SIA) has been developed, described as a ‘methodology for identifying, assessing and resolving risks, in consultation with stakeholders, posed by the development of surveillance systems’ [Wright and Raab 2012: Constructing a surveillance impact assessment. Computer Law & Security Review 28(6): p.613-626]. The SIA concept seeks to increase stakeholder engagement, minimise the impact of surveillance technologies, and improve upstream development. However, this initial conceptualisation still appears to focus on the individual technology and not the collective assessment of other existing SOSTs within its methodology. Whether this changes in practice remains to be seen.

The author then argues against the “ubiquitous balancing metaphor” that expresses privacy and security as a trade-off:

[The balancing metaphor] is arbitrary and subjective, lacking in meta-rules, and purports to successfully compare objects (such as privacy and security) which possess different intrinsic characteristics.

Furthermore:

Focusing and expanding upon this final point, one of the fundamental differences between privacy and security is that only one of them has two attainable end-states. Privacy (P) exists as a finite resource on a quantifiable spectrum with two attainable end-states; that being absolute privacy (P=1) at one end through to the absolute absence of privacy (P=0) at the other. Whereas security (S) also exists as a finite resource but on a quantifiable spectrum with only one attainable end-state; that being the absolute absence of security (S=0). However, as discussed earlier, absolute security (S=1) can never be achieved and therefore must exist as a desirable yet ultimately unobtainable goal always equating to something less than 100 per cent (S=<1); hence the absence of a second attainable end-state.

The second assertion, which follows from and builds upon the first, holds that one consequence of absolute security being unobtainable yet desirable is that new SOSTs will continuously be developed in a futile search for this unobtainable goal. These technologies each potentially trade a small measure of privacy for a small rise in security. This production process is driven by a variety of internal and external sources beyond the conflicting internal characteristics of security and privacy. These include; the nature of fear and risk, pressure by politicians and police, the availability of funding, push by the security industry, and public support for these technologies. These factors operate together to ensure a fertile environment exists for the research and development processes of the security industry to thrive.
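The two assertions lend themselves to a toy calculation (my own illustration, not taken from the paper): model each new SOST as trading away a fixed fraction of the remaining privacy in exchange for closing a fraction of the remaining gap to absolute security. Privacy then decays towards its attainable end-state P=0, while security only asymptotically approaches, and never reaches, S=1:

```python
# Toy model of the paper's two assertions (my own illustration, not the author's):
# each new SOST removes 5% of the remaining privacy and closes 5% of the
# remaining gap to absolute security. Each trade-off looks "small" in isolation.
def deploy_sosts(n: int, p: float = 1.0, s: float = 0.0, rate: float = 0.05):
    for _ in range(n):
        p -= rate * p          # a small privacy cost per intervention
        s += rate * (1.0 - s)  # a small, ever-diminishing security gain
    return p, s

p, s = deploy_sosts(200)
# After many individually "justifiable" trade-offs, p is effectively zero,
# while s remains strictly below 1.
```

The numbers (5% per intervention) are arbitrary; the qualitative outcome, privacy reduced to zero while absolute security is never reached, holds for any fixed positive rate.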

The author concludes his paper with an elaboration and several proposals. I quote it entirely, and added bold emphasis.

6. Complementing individual with collective assessment

By collective assessment I refer to a process whereby the acceptability of a new SOST is determined by assessing its effects in combination with other SOSTs already operational, to determine both the necessity of this new entrant and the resultant quantum of proportionality if the new technology were adopted [Footnote 13: Being the proportionality of all operating SOSTs, including the proposed technology being assessed, given the security they afford and the resultant infringement of privacy]. This collective approach is not intended as a replacement for existing assessment methodologies which determine the acceptability of each individual technology; rather it would complement them by forming a secondary assessment step. Hence if the technology is individually unacceptable it would be rejected outright without the need for collective assessment.

Any adoption of a collective assessment methodology for the purpose of retaining privacy would be premised on a number of requirements. Firstly it requires citizens (or at least the majority) not wanting to actually live in a surveillance society where their physical, communication, location, and personal data is routinely collected and/or processed so as to maximise their individual and collective security. This position entails the concomitant acceptance of insecurity as the natural condition; i.e. the conscious understanding and acceptance that absolute security can never be achieved regardless of how many security measures are introduced. This also needs to be coupled with an appreciation of the value of other rights and freedoms (besides security) to temper the temptation to introduce ever more SOSTs. I must stress here that this desire by citizens to oppose a total surveillance society is by no means given. Privacy and security are social constructs; the different weights assigned to them exist differently across societies, are contextual, and change over time [Solove, D. 2009: Understanding Privacy, Ch.3]. It is completely conceivable that a given society at a given time and/or under given circumstances, may desire to live in a surveillance society. At this point they may still wish to adopt a collective assessment methodology for the purpose of identifying areas of social existence requiring additional surveillance as opposed to the purpose of preserving privacy.

Secondly, collective assessment requires a general acceptance that privacy has to be retained; that once privacy levels are reduced to a certain level, any further reductions cannot be justified regardless of the competing right. If this consensus does not exist (regardless of where these levels are set) then the total surveillance society envisioned within my paper will occur. If there is nothing within the act of living within a society that most/many citizens believe should remain off limits to surveillance, then this view represents tacit approval for a total surveillance society; if nothing is off-limits then everything becomes a valid target.

On the assumption however that a society wishes to preserve a certain level of privacy, this could conceivably be achieved through different methods and protections. I have set out three below which could operate either individually or in combination.

The first option is to designate certain objects as prima facie ‘off-limits’ to surveillance. This could include; certain geographical locations (individual homes, wider community spaces, streets, work-spaces, etc.), certain data (geographical, communication, internet, etc.), and/or certain physical actions (correspondence, physical interactions, etc.). In the event of reasonable suspicion that an individual is committing offences within one of these restricted areas a surveillance warrant could be issued by a judge.

The second option is to ban certain actions by law enforcement agencies. This might include:

  • any form of stop-and-search without reasonable suspicion (and suspicion could not be inferred simply because somebody is physically present within a certain geographical location [Footnote 14: Thus requiring a repeal of Section 44 UK Terrorism Act 2000]);
  • any form of data-mining where it either equates to a fishing expedition or where if the data being sifted was not digitally available a warrant would be required to gain access to it;
  • and prosecutions based on targeted surveillance where no prior reasonable suspicion existed justifying that surveillance.

A third option is to use citizen juries in conjunction with political and/or judicial bodies to monitor and limit the current surveillance options available to law enforcement agencies within a society. They would be afforded complete oversight such that only SOSTs and measures approved by these bodies would be lawful. No surveillance, or prosecution based on surveillance, outside of these designated limits would be permissible.

There are challenges with all these options, with each raising difficult questions. On the idea of setting surveillance limits, who would decide where these limits are set and how would they go about doing this? How easy would it be to modify these limits, and under what circumstances would this occur? On the option of fettering the activities of law enforcement agencies, how would this be policed and what would happen if officers discovered information pertaining to a crime whilst themselves engaging in illegal surveillance activities? And on the option of citizen juries, how would these be empanelled, who could sit on them, and what oversight would exist?

The presence of such challenges does not automatically negate the viability of any of these options. This is merely an acknowledgement that any method adopted should be done so with caution and considerable foresight. That said, the ideas set out above are achievable for they reflect values and norms that are currently observable within UK society. Despite the preoccupation with security leading to the spread of SOSTs throughout society, both UK citizens and their government still protect certain areas from interference and consider certain actions unacceptable. The home and bedroom are still somewhat protected from intrusion in that police are not (yet) allowed to randomly enter homes to search for evidence of crimes without prior suspicion or evidence of an offence. Written and oral communication between suspects or prisoners with their legal representatives is still largely protected, and the use of torture is thankfully still considered beyond the pale for the majority of citizens. And yet all of these actions have the potential to uncover evidence of criminal offences.

These examples show UK citizens are not yet willing to sacrifice every concomitant right on the altar of security, and while this holds true the opportunity remains to introduce measures for protecting privacy and scaling back the surveillance society. Collective assessment is a step down this path in that it makes explicit the current overall balance between security and privacy, thereby raising citizen awareness of the state of their society. Nevertheless, if privacy is valued at least as much as security is valued then this collective assessment needs to be backed up with protection measures such as those outlined above. Without these measures any such assessment is merely an exercise in collecting and collating information. It will tell us how far away we are from the oncoming train that is the absolute surveillance society without affording us the wherewithal to change its direction before we find ourselves forever wedged firmly under its wheels.

Jeroen van der Ham (Twitter: @1sand0s), a former colleague of mine, shares his criticism of Mitchener-Nissen’s article (my translation, switching to American English):

In the definition Mitchener-Nissen uses for “privacy”, he attempts to make privacy expressible on a scale, where I think that is not possible. His definition of privacy is also limited by only looking at how it relates to security, while many security measures do not have to stand in the way of privacy, and it is one-sided not to take that into consideration. Furthermore, privacy is subjective, and bound to time and context. What we do and share on the internet today still feels like sufficient privacy, but 50 years ago everyone would have been out on the streets trying to stop it.

The approach that he proposes does not completely follow from his arguments. In addition, I think the solutions he proposes are not very promising, or redundant. Through the European Convention on Human Rights we already have certain things that are clearly off-limits; we do not need additional legislation for that. To me it also seems totally infeasible to designate certain things as completely off-limits. We are likely to always have an intelligence agency that can look at certain things, but with the right safeguards. The same applies to the police acting under an injunction granted by a court.

Lastly, I also doubt whether current assessments of new measures do not already take the context of existing measures into account. Perhaps the right knowledge and science currently exist to determine the real impact of various measures in conjunction. But I do not see him argue that.

EOF

New Dutch intelligence oversight report: the (il)legality of SIGINT carried out by the AIVD in 2012-2013

On October 7th 2014, the Dutch Review Committee on the Intelligence and Security Services (CTIVD) published a new oversight report (.pdf, in Dutch) concerning the use of intercept powers by the Dutch General Intelligence & Security Service (AIVD) between August 2012 and September 2013. The AIVD has two interception powers: first, Article 25 Wiv2002 permits the AIVD to specify an individual or organization and, after Ministerial approval, carry out targeted interception (e.g. internet tap, phone tap, microphone). Second, Article 26 Wiv2002 and Article 27 Wiv2002 permit the AIVD to intercept non-cablebound communications (such as satellite and HF radio) in bulk and, after Ministerial approval, select data from it using telecommunication characteristics (e.g. phone number, fax number, email address, IP address; called “selectors” hereafter). Article 26/27 Wiv2002 are informally referred to as “sigint powers”; Article 25 Wiv2002 is not, and neither is hacking ex Article 24 Wiv2002.
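A toy sketch of the two-stage Article 26/27 regime described above (my own illustration; the record fields and selector values are hypothetical) shows how bulk interception and subsequent selection by approved selectors relate:

```python
from dataclasses import dataclass

@dataclass
class Record:
    selector: str  # e.g. a phone number or email address observed in the traffic
    payload: str

# Stage 1 (Article 26 Wiv2002): bulk interception of non-cablebound communications.
bulk = [
    Record("+31600000001", "call metadata"),
    Record("alice@example.org", "email"),
    Record("+31600000002", "call metadata"),
]

# Stage 2 (Article 27 Wiv2002): only records matching a selector that received
# Ministerial approval may be selected out of the bulk.
def select(records: list, approved_selectors: set) -> list:
    return [r for r in records if r.selector in approved_selectors]

hits = select(bulk, {"+31600000001"})
```

The point of the sketch is the structure described in the text: interception happens in bulk, while the Ministerial approval attaches to the telecommunication characteristics used in the selection step.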

Similar to previous oversight reports concerning Article 25 Wiv2002 (targeted interception), the new oversight report confirms that the AIVD generally uses Article 25 Wiv2002 carefully and heedfully (in a legal sense, as evaluated within the framework of Dutch law). In individual cases there are issues that the CTIVD deems careless or illegal; for instance, cases in which privileged persons (lawyers, doctors) are tapped.

Similar to previous oversight reports concerning Article 27 Wiv2002 (sigint selection by keywords, by identity of person or organization, and/or by telecommunication characteristics), the new oversight report confirms that the AIVD often acts carelessly (in the legal sense), and that the legally required motivation for the use of the power (necessity, proportionality, subsidiarity) is often insufficient. Strangely, up to 2013, the CTIVD systematically refrained from judging such practices to be illegal, even though the practice obviously did not comply with the law: proper motivation is a legal requirement, and that requirement has largely not been met for years.

Because of this, in November 2013 I stated that sigint oversight in the Netherlands is broken. Taking into account the developments concerning a possible extension of the sigint powers such that the AIVD can also carry out sigint on cablebound communications (think of GCHQ’s Tempora, NSA’s DANCINGOASIS and BND’s cooperation in the Eikonal program), this is a very serious issue. And it is at least as important as the issues concerning the (il)legality of the acquisition of intelligence from social media in 2011-2014 through hacking, human sources and exchange with foreign agencies: at least in nearly all those cases the CTIVD found that the activities met the requirement of necessity (though not necessarily also the requirements of proportionality and subsidiarity). As of June 15th 2014, all hacking and sigint is formally carried out by the Joint Sigint Cyber Unit.

As of January 1st 2014, two out of the three committee positions within the CTIVD are held by new persons. Harm Brouwer, former chairman of the Board of Procurators General, replaced Bert van Delden as chair, and former Rotterdam police chief Aad Meijboom replaced Eppo van Hoorn, who resigned in Q3/2013. The third person is former public prosecutor Liesbeth Horstink-Von Meyenfeld: she joined in 2009, and will be legally required to resign or be reappointed in 2015. (After an insanely complex selection process, a position is filled by Royal Decree for a six-year period, and members can be reappointed once.) As shown below, the “new” CTIVD has turned out to be willing to conclude that lack of legally required motivation constitutes illegality. This seemingly changed standpoint solves one part of the oversight problem. The next question is: does the practice change in reality? This remains to be seen; the CTIVD itself has no formal means to intervene in AIVD activities. That is left up to the Minister, who typically defends the AIVD and who probably doesn’t spend a lot of time critically assessing requests for permission to intercept, and to the Parliament, which has historically shown scant interest in intelligence — this has only slightly changed since Snowden.

Interestingly, the CTIVD decided to disclose statistics concerning the use of Article 25 Wiv2002 and Article 27 Wiv2002 in the report, and the Minister of the Interior chose to censor those statistics in the final publication. This is basically an affront to the CTIVD. Dutch readers should read this and this. I suggest that the CTIVD in response initiates an investigation into the use of Article 27 Wiv2002 to select by keywords. No oversight report is yet available that addresses this at length. The Minister of the Interior annually approves a list of topics for sigint selection by keywords; the keywords themselves are then chosen by the AIVD. One always wonders about the dynamics of keyword-based surveillance: for instance, whether the keywords are limited to, say, export-controlled chemicals, or whether the AIVD also selects using general keywords (“bomb”); and also, what the thresholds and conditions are for someone (e.g. an activist) to become a person of interest to the AIVD’s a-task (National Security) or d-task (Foreign Intelligence).

The remainder of this post is a translation of the part of the new oversight report that specifically addresses sigint selection ex Article 27 Wiv2002:

WARNING: this is an unofficial translation

12 Usage of the power to select sigint

12.1 Introduction

A request for approval for the use of the power to select sigint consists of two parts. First, the motivation for using the power. This details the specific investigation within which the power is used, and needs to discuss aspects of necessity, proportionality and subsidiarity. Second, the motivation has an appendix that lists the telecommunication characteristics (hereafter: list of selectors). The list of selectors describes the telecommunication characteristics that will be used as selectors (for instance, name of person or organization, phone numbers, email addresses). In the list of selectors, a column is included with a (very) brief description of the reason the selector is included. This description can also be a reference to the AIVD’s internal information system. The list of selectors is sent to the Minister of the Interior for approval, together with a summary of the motivation. Similar to the use of the interception power [ex Article 25 Wiv2002], approval can be given for a period of up to three months. The CTIVD has examined both the motivation of the use of the power to select sigint, and the (justification for the) list of selectors.

12.2 Selection of sigint in previous oversight reports

In oversight report 19, the CTIVD found that the AIVD did not deal with sigint selection in a careful manner. Often it was not explained whom the numbers and other telecommunication characteristics belong to, or why this telecommunication needed to be selected. The CTIVD concluded that it had insufficient knowledge about the motivation of the selection, and thereby could not judge whether the power was used in a legal manner in accordance with Article 27 Wiv2002. The CTIVD urgently recommended that requests for the use, or for the extension of the use, of the sigint selection power should include a specific motivation. The Minister of the Interior responded by stating he agreed with the CTIVD, but also expressed worries about the practical feasibility of that recommendation. The Minister agreed to further consult with the CTIVD on this matter.

In oversight report 26, concerning the AIVD’s foreign intelligence task, the CTIVD found that in the application of Article 27 Wiv2002 for foreign intelligence purposes, in many cases it still was not specified whom a characteristic belonged to and why it was important to select the information that can be acquired through this specific characteristic. The CTIVD did notice, however, that once sigint operations had run for a longer period, the AIVD was better able to explain whom telecommunication characteristics belong to, and could better argue why the use of the power was justified against these persons. The CTIVD emphasized that the AIVD must seriously strive to better specify against which person or organization sigint is used.

In oversight report 28, concerning the use of sigint by the Military Intelligence & Security Service (MIVD), the CTIVD further elaborated the legal framework for the entire procedure of the processing of sigint. In that report the CTIVD once again concluded, this time concerning the MIVD, that it could not judge whether the use of sigint was legal in accordance with Article 27 Wiv2002 because the CTIVD had insufficient knowledge about the motivation. In oversight report 38, the CTIVD repeated its earlier findings.

In oversight report 35, the CTIVD examined one specific operation that involved selection of sigint, and judged aspects of it to be illegal.

The current investigation constitutes the first time since oversight report 19 that the CTIVD has assessed the legality of the use of sigint selection in general concerning the AIVD.

12.3 Search for the purpose of selection in previous oversight reports

In certain cases the AIVD uses its sigint search power [ex Article 26 Wiv2002] prior to sigint selection [ex Article 27 Wiv2002]. The AIVD hereby aims to identify whether a relevant person of interest is present within the communication intercepted in bulk. In this case, the AIVD attempts to establish the identity of the person, and whether a relation exists to the field of investigation. This constitutes search for the purpose of selection. The use of sigint search supports better targeted sigint selection.

In oversight report 28, the CTIVD described the practice of sigint search by the MIVD, and in oversight report 38 the CTIVD repeated its conclusions. The CTIVD distinguishes three forms of search aimed at selection, that involve taking note of the contents of communication. In short, this involved the following forms:

  1. Searching bulk intercepts to determine whether Ministerially approved selectors can in fact generate the desired information;
  2. Searching bulk intercepts to identify or describe potential ‘targets’;
  3. Searching bulk intercepts for data from which, in the context of an anticipated new area of investigation, future selectors can be retrieved.

In oversight report 28, the CTIVD only found the first form of sigint search for the purpose of sigint selection to be legal, because only that form safeguards the privacy infringement, namely through the prior approval of the Minister to use sigint against the person or organization involved. This use of sigint search supports the sigint selection for which permission was obtained. This is necessary because Article 13 of the Dutch Constitution requires authorization by a competent body prior to infringements of phone secrecy and telegraph secrecy. The CTIVD finds the second and third forms of search to be illegal, because they have no legal basis, and the privacy infringement is not safeguarded by the requirement of Ministerial permission prior to using sigint selection against a person or organization.

The CTIVD left it to the legislator to consider whether it is necessary that the MIVD (and AIVD) be granted the power to search for the purpose of selection, taking into account the right to privacy. In his response, the Minister of Defense stated that he would cooperate with the Minister of the Interior to establish a future-proof legal framework. During the General Meeting that addressed, among others, oversight report 28, the Minister of Defense stated that he agrees with the CTIVD’s conclusion concerning the sigint search power, but that while awaiting the intended change of law, the current practice will continue for reasons of national security. In their response to oversight report 38, both Ministers stated that the practice that was found to be illegal will be taken into account in an intended change of law. During a plenary meeting on eavesdropping by the NSA, which addressed oversight report 38, the Minister of Defense stated that the third form of sigint search has stopped, and repeated that the MIVD continues to carry out the second form of search while awaiting a change of law. In oversight report 38, the CTIVD announced that it will address sigint search by the AIVD in the ongoing investigation into the use of the interception power [ex Article 25 Wiv2002] and the power to select sigint [ex Article 27 Wiv2002] by the AIVD.

12.4 Methods of AIVD concerning search for the purpose of selection

The CTIVD has taken notice of an internal method of the AIVD concerning the use of the sigint search and sigint selection powers. This method is set out in writing and approved by management. The method provides for the possibility that an operational team examines whether a telecommunication characteristic, such as a phone number or email address, for which no Ministerial permission is yet available, is relevant to the investigation. The method is aimed at establishing the identity of the communicating party associated with this telecommunication characteristic. The CTIVD noticed that the AIVD interprets the identification of the communicating party in a broader sense than only establishing the name of the involved party. The AIVD also assesses whether the party is relevant to the investigation carried out by the operational team. One can think of establishing that a person has a certain function, relevant to the investigation, within an organization.

The department that facilitates bulk interception can, at the request of, and in cooperation with, the operational team, carry out a metadata analysis to determine how the telecommunication characteristic relates to other persons and organizations that are included in the investigation (who/what has contact with whom, for how long, how often, from what location, etc.). To establish the identity of the communicating parties and determine their relevance to the AIVD’s investigation, it can be necessary to also take note of the contents of communication. The possibility is then offered to examine the nature of the communication that has already been intercepted. This can involve stored bulk intercepts for which no Ministerial permission has yet been obtained to select from. It can also involve communication that has been previously selected from the bulk, and is thus already accessible to the team; or information that is in the possession of the AIVD through other (special) powers, such as hacking or acquisition of a database.
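The metadata analysis described above (who contacts whom, how long, how often, from where) is, in essence, contact-graph analysis over communication records. A minimal illustration of the idea — all data, names and field layouts here are entirely hypothetical, invented for this sketch:

```python
from collections import Counter

# Hypothetical call-record metadata: (caller, callee, duration in seconds).
# In reality such records would come from bulk intercepts; these are made up.
records = [
    ("char-A", "target-1", 120),
    ("char-A", "target-1", 45),
    ("char-A", "unknown-7", 300),
    ("char-B", "target-1", 10),
]

def contact_profile(characteristic, records):
    """Summarize how often, and for how long in total, a given
    telecommunication characteristic was in contact with each counterpart."""
    counts, seconds = Counter(), Counter()
    for caller, callee, duration in records:
        if caller == characteristic:
            counts[callee] += 1
            seconds[callee] += duration
    return {peer: (counts[peer], seconds[peer]) for peer in counts}

profile = contact_profile("char-A", records)
# char-A contacted target-1 twice (165 s total) and unknown-7 once (300 s)
```

Even this toy version shows why the CTIVD's later finding matters: a contact profile alone establishes only *that* contact with a person of interest occurred, not the nature or relevance of that contact.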

The team’s data processor is given the opportunity to briefly see (or hear) the contents of the communication that can be related to the telecommunication characteristic to determine whether the telecommunication characteristic is relevant. This allows the team to determine whether it is useful to obtain permission and include the telecommunication characteristic on the list of selectors so that the communications can be fully known to the team. The CTIVD understood that not all teams use this method equally.

According to the AIVD’s internal policy, it is not intended that the processor exploits the information obtained without obtaining permission from the Minister.

The CTIVD depicts this method as follows.

[Figure: AIVD sigint method in practice]

12.5 Assessment of the methods

The CTIVD considers the question whether the AIVD’s method is legal.

Insofar as the method is used for telecommunication characteristics for which Ministerial permission was obtained, the CTIVD finds the method to be legal, because the privacy infringement is safeguarded by the Minister’s permission. The CTIVD also considers this method legal concerning the MIVD.

The CTIVD finds that the AIVD’s method, insofar as it concerns telecommunication characteristics that the Minister has not yet approved, is equivalent to the second form of sigint search described in section 12.3. The CTIVD thus concludes that this method is illegal.

This conclusion will be upheld as long as the anticipated legislation does not yet exist.

The CTIVD notes that the Minister of Defense, while announcing an intended change of law – in consultation with Parliament – stated that the MIVD’s practice will continue. Although the Minister of Defense only mentioned the MIVD, the CTIVD notes that the AIVD, which applies the same method, also continues this practice while awaiting a change of law. Now that this is the current practice, the CTIVD considers it important to evaluate the compatibility of the practice with the right to privacy.

The CTIVD notes that the AIVD uses this method for the purpose of carefully establishing the list of selectors, and of avoiding requests for unnecessary permissions. The CTIVD recognizes that the described method can support this. Moreover, the CTIVD can conceive of the possibility that this also supports implementation of its previous recommendation to improve the motivation of sigint selection. The CTIVD also expects that this could result in fewer infringements of the right to privacy, because the pre-investigation allows a more targeted use of sigint.

The right to privacy requires that the following safeguards are present:

  1. The only purpose of briefly perusing the contents of communication can be the determination of the identity of the communicating parties and the relevance of the communication to the ongoing investigation. Other use is not permitted until permission has been obtained from the Minister. A requirement for brief perusal of the contents of communication is that an adequate separation of duties exists between the department that facilitates bulk interception and the operational team, in the sense that the operational team does not itself obtain access to the communication.
  2. To ensure this separation of duties, it is important to provide an adequate written registration and reporting of having perused the communication. It must be recorded what communication has been seen/heard, and what the outcome was. The CTIVD considers this registration and reporting to be important for internal accountability and external control, as well as for carefulness.

The CTIVD notes that the second safeguard is currently insufficiently implemented by the AIVD. The CTIVD notes that insufficient reporting is performed on what communication has been seen/heard and what the outcome was.
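The second safeguard amounts to keeping an append-only audit log of every brief perusal: who looked at what, and with what outcome. A minimal sketch of such a registration mechanism — my own construction for illustration, not a description of the AIVD's actual systems:

```python
import json
import datetime

# Append-only register of perusals, as the second safeguard requires.
audit_log = []

def record_perusal(characteristic, item_id, outcome):
    """Record what communication was seen/heard and what the outcome was,
    so that internal accountability and external control are possible."""
    entry = {
        "when": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "characteristic": characteristic,   # the selector being assessed
        "item": item_id,                    # identifier of the intercept viewed
        "outcome": outcome,                 # e.g. "relevant: request permission"
    }
    # Serialize immediately; entries are appended and never modified.
    audit_log.append(json.dumps(entry))
    return entry

record_perusal("+31 6 0000 0000", "intercept-0001", "irrelevant")
```

The design point is that the log is written at the moment of perusal, not reconstructed afterwards — precisely the reporting the CTIVD found to be insufficient in practice.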

12.6 Usage of the sigint selection power

The CTIVD has examined the AIVD operations that involved the use of sigint selection between September 2012 and August 2013. Concerning the examined operations, the CTIVD has several remarks. Insofar as necessary, this is elaborated on in the classified appendix.

The CTIVD finds that sigint selection is used in varying ways by the various operational teams of the AIVD. Specifically, a difference is seen between the teams of the National Security unit and the teams of the Foreign Intelligence unit. The use of this power by Foreign Intelligence teams is generally broader, as can be observed in the size of the list of selectors. The operations of the National Security teams are largely more targeted. Considering that the legal definition of the a-task [National Security] is focussed on individual persons and organizations, and the d-task [Foreign Intelligence] provides for carrying out investigations on countries, this is not surprising. The CTIVD finds that the lists of selectors vary in size from a handful of selectors to thousands.

The CTIVD notes that for each person or organization that emerges during the investigation, it must be motivated why sigint selection is necessary. It must be stated what the purpose of the sigint selection is within the context of the investigation, and what the grounds are for the expectation that the yields of selection will contribute to that purpose. There hence must be a link between the broader investigation that is carried out and the necessity of the selection of communication from the specific person or organization. This is different for every person or organization.

In a request for extension, the yields of the selection and the added value to the investigation must be considered, not in general terms but specific to the person or organization. General statements that the use of the special power contributed to the intelligence need, or resulted in (unspecified) reports, or confirmed current beliefs, are insufficient. In addition to necessity, a request for permission must state in what way the requirements of proportionality and subsidiarity are met. The CTIVD notes that absence of yields can be a result of the nature of sigint. Communication can possibly not be intercepted because of the range [sic] of the satellite dishes. The CTIVD is of the opinion that it is permissible to uphold [selection by] telecommunication characteristics as long as it is periodically reconsidered whether upholding the characteristics still complies with the legal requirements for the use of the power, and this consideration is written down. Insofar as the selection of certain telecommunication characteristics yielded communication but this communication turned out to be irrelevant, the AIVD must remove the characteristics from the list of selectors for which permission is requested.

The CTIVD finds that the extent to which the motivation for the use of the power to select sigint establishes a framework from which it can be foreseen which persons or organizations fall within the scope of the operation, varies from operation to operation. In certain operations it is made clear whom the AIVD is interested in, and the motivation clearly states which persons and organizations are within the scope of the operation. The CTIVD notes that it is important that the motivation provides sufficient clarity about which persons and organizations can be selected, under which conditions, and why. A direct and clear link must exist between the motivation and the persons and organizations that are included in the list of selectors. The CTIVD finds that in three operations this link is absent, or made insufficiently clear by the AIVD. The CTIVD finds this to be illegal. Moreover, the CTIVD finds that in one operation, non-limitative enumerations are used in the motivation. An example of this is the phrase “persons and institutions such as [.], et cetera”. The CTIVD finds this to be illegal.

The CTIVD finds that in two operations, persons or organizations with a special status (e.g. non-targets, sources, occupations bound by professional secrecy) were included on the list of selectors without specific attention being paid to this in the motivation. The CTIVD notes that special categories of persons or organizations must be explicitly mentioned in the motivation if they are included on the list of selectors, and that attention must be paid to the legal requirements of necessity, proportionality and subsidiarity in relation to these telecommunication characteristics. The CTIVD notes that it is not, or insufficiently, apparent what considerations were made concerning these requirements. Considering the special status of these persons or organizations, the CTIVD finds this to be illegal. A number of telecommunication characteristics relate to a person with whom the AIVD cooperates, and whose interception [ex Article 25 Wiv2002] by the AIVD was found to be illegal by the CTIVD. The CTIVD also finds the sigint selection against this person to be illegal.

The CTIVD finds that in the motivation on which the Minister based his approval, in two cases no attention was paid to the requirements of necessity, proportionality and subsidiarity. In the internal motivation, the AIVD does pay sufficient attention to these. The CTIVD finds this to be careless. Although the AIVD is not required to provide the Minister with an exhaustive motivation concerning necessity, proportionality and subsidiarity, the CTIVD notes that the permission request must provide sufficient clarity about the considerations made to allow the Minister to assess the request.

The CTIVD finds that the AIVD in multiple cases incorrectly applies the requirements of proportionality and subsidiarity. Deliberating on subsidiarity, for instance, in one operation the AIVD stated that sigint selection “could possibly yield chances for exchange with foreign agencies”. In a different operation, it was stated that sigint selection allows the AIVD to “carry out investigations in a relatively simple and efficient manner, involving limited risks”. The CTIVD notes that this does not constitute a correct deliberation in terms of the requirements of proportionality and subsidiarity. A correct deliberation on proportionality implies, after all, a weighing of interests that explicitly involves the interests of the target. This also applies to the requirement of subsidiarity, on the basis of which the AIVD is required to use the means that least infringe on rights. The CTIVD notes that the outcome of this deliberation can result in the AIVD having to use an inefficient and relatively complex means. The CTIVD did not find evidence indicating that the operations involved do not meet these requirements. The CTIVD thereby finds that the motivation by the AIVD is lacking and thus careless, but not illegal.

The way in which the list of selectors is structured varies from team to team. The list of selectors includes a (very) brief motivation of why the characteristic is included. The CTIVD finds that this motivation is provided in very different ways across operations. The CTIVD observed cases in which the list of selectors refers to an internal document of the AIVD that explains the relevance of the characteristic to the operation. In addition, the CTIVD observed cases in which a short explanation of the relevance is included. In one operation, the motivation does not include more than a brief indication of the function of the person, or other indications of the characteristic. This indication can comprise a single word (e.g. biologist, toxicologist, phone number, fax). The CTIVD finds this method of motivation, consisting of only a single word and no further explanation, to be illegal. The CTIVD notes that the latter cases are insufficiently traceable to the motivation of the request for permission, in which that framework must be included. The CTIVD notes that for every characteristic, at least the relevance of the characteristic must be (briefly) stated in the list of selectors, and where necessary, a clear reference must be included (to an internal document) on the basis of which the relevance of the characteristic can be further assessed.

The CTIVD finds that in a certain list of selectors, multiple telecommunication characteristics have been included with the remark that these “probably” belong to a person for whom the Minister has approved sigint selection. The CTIVD notes that the use of the first form of sigint search, as described in section 12.3, could help limit the characteristics included on the list of selectors to characteristics that the AIVD has determined could actually be related to the person of interest.

The CTIVD finds that the list of selectors in certain operations has substantially increased over time. In one operation, the CTIVD observed telecommunication characteristics (such as phone numbers or email addresses) that were obtained through legal use of a different special power. The CTIVD finds that all these characteristics were used for sigint selection by the AIVD. In nearly all cases, it was not indicated whom the number belongs to or what its specific relevance is. In fact, the only link to the AIVD investigation was the circumstance that the persons associated with the characteristics had contacted a person of interest to the AIVD, without any indication of the nature of the contact or other relevant clues. The CTIVD notes that mere contact with a person of interest, without the relevance of this contact to the AIVD’s investigation, is insufficient justification for inclusion in the list of selectors. The CTIVD considers it likely that less infringing means can be used to determine which contacts are evidently irrelevant. The CTIVD thus finds the selection based on these telecommunication characteristics to be illegal.

EOF

Dutch Hosting Provider Association (DHPA) opposes Dutch govt’s sort-of-voluntary internet censorship plan

In August 2014, the Dutch government proposed a 38 step action plan (.pdf, in Dutch) to fight jihadism. As explained here, the proposal included voluntary cooperation-based internet censorship with the purpose of reducing jihadist use of the internet. Today, the Dutch Hosting Provider Association (DHPA / @stichtingDHPA) posted a press release explaining that it, representing its members, opposes the current proposal. Here is my translation of that press release:

Dutch Internet industry and Ministry of Justice collide over fight against jihadism

Law enforcement agencies increasingly force internet companies to remove radicalizing content without a court order. This leads to an impossible situation, says Michiel Steltman, director of the Dutch Hosting Provider Association (DHPA), on behalf of the internet sector. ‘Does the government want to force companies, for instance, to include jihadism in their general terms and conditions? And how does a hoster decide what content is undesirable?’

According to the companies, the underlying problem is that the Public Prosecution does not judge many of the suspicions of the Ministry of Justice to be criminal, and for that reason refuses to prosecute, meaning that no judicial review takes place. If the Ministry of Justice still believes that the videos or documents must be removed, no other option remains than to pressure companies into complying with the request. But the companies say they can’t and won’t judge whether something is a criminal offense.

Not a censorship agency

Steltman mentions the recent example of a group of men around a campfire, shooting and shouting ‘allahu akbar’, with some lines in Arabic. ‘Did they just kill someone, are they mad because someone was killed, or are they having a party and has a goat just been slaughtered?’

In addition, Alex de Joode, company lawyer for government affairs at Leaseweb, the largest business hosting provider in the Netherlands, does not like the methods of the Ministry of Justice: ‘We are not an age verification or censorship agency. The government has a fine legal instrument to remove content, but chooses not to use it in cases of alleged jihadism’.

Pay damages

The sector is afraid of being held liable by possible victims. De Joode: ‘Suppose that we are wrong and illegally take down a site without a court order. That can cause us a lot of damage.’

More and more internet companies disclose the number of demands by law enforcement agencies, including Xs4all and Leaseweb. In the US, it was mostly Google that set this trend, followed by companies such as Microsoft and Twitter.

Responsibility of companies

Dick Schoof, National Coordinator Counterterrorism and Security (NCTV), considers it to be a responsibility of companies to, ‘on the basis of interpretation by the NCTV, assess the content of the website against their own general conditions. Hereby we appeal to the responsibility of the providers.’

Steltman emphasizes that internet companies are willing to establish better procedures in cooperation with the Ministry of Justice. Schoof describes the currently ongoing conversation with internet companies and social media companies as ‘very constructive’.

EOF