Month: August 2014

Dutch govt announces plan to fight jihadist internet use through sort-of-voluntary censorship

UPDATE 2015-07-01: EU Observer reports: “The EU’s police agency Europol on Wednesday launched its web unit tasked to hunt down online extremist propaganda. Europol’s director Rob Wainwright said the unit is ‘aimed at reducing terrorist and extremist online propaganda.’ The unit consists of over a dozen Europol officials and experts from national authorities.” The name of the unit is the European Union Internet Referral Unit (IRU).

UPDATE 2015-06-02: EDRi reports that the European Commission is set to launch a related initiative in 2015.

UPDATE 2015-03-04: on March 12-13, the JHA ministers meet again, and according to the Dutch govt’s annotated agenda (.doc, in Dutch) for that meet, “cooperation is sought with the private sector and Europol to detect [Dutch: “opsporen”] content that propagates terrorism and violent extremism, and for developing strategic communication to provide a counter-narrative against terrorist ideologies and that promotes tolerance, non-discrimination and fundamental freedoms.”

UPDATE 2014-xx-xx/2015-xx-xx: relevant reading: Terrorism, Communication and New Media: Explaining Radicalization in the Digital Age (2015, Archetti, in Perspectives on Terrorism Vol 9, No 1) and The Taliban and Twitter: Tactical Reporting and Strategic Messaging (2014, Bernatis, in Perspectives on Terrorism Vol 8, No 6).

UPDATE 2015-01-30: a joint statement (.pdf, Jan 30) following an informal meeting on Jan 29/30 2015 of the EU Member States’ ministers for Justice & Home Affairs promotes the idea of “cooperat[ing] closely with the industry and to encourage them to remove terrorist and extremist content from their platforms”: “We reiterate the urgent need for further actions addressing radicalization to terrorism not only on EU but also on national and local level. The internet plays a significant role in radicalization. In this regard, we must strengthen our efforts to cooperate closely with the industry and to encourage them to remove terrorist and extremist content from their platforms. The further possibilities to detect and remove illegal content, in full respect of fundamental rights, fundamental freedoms and in full accordance to national legislation. The possible creation of effective counter-narratives, notably on social media, should also be explored. In this context, Internet referral capabilities, also through Check-the-web, could be developed within Europol to support efforts of Member States in detecting illegal content and improving exchange of information. The development of different preventive projects within the future RAN Centre of Excellence and the maximum use of the Syria Strategic Communication Advisory Team (SSCAT) should also be strengthened.”

UPDATE 2015-01-11: in a joint statement (.pdf) following the Charlie Hebdo attacks, the interior ministers of France, Germany, Latvia, Austria, Belgium, Denmark, Spain, Italy, the Netherlands, Poland, Sweden and the U.K. stated that, while the internet must remain “in scrupulous observance of fundamental freedoms, a forum for free expression, in full respect of the law,” ISPs need to help “create the conditions of a swift reporting of material that aims to incite hatred and terror and the condition of its removing, where appropriate/possible”. Seven of the twelve countries that signed the statement participated in the Clean IT Project: it is clear that the Dutch-chaired Clean IT Project produced the current ‘critical mass’ behind the policy objective of voluntary censorship. Recall that one of the topics internally scheduled for discussion during the Clean IT Project was whether non-compliant ISPs should be excluded from government contracts. It is not clear what happened to that idea; hopefully it was rejected at an early stage and will not be (re)introduced into the policy debate.

UPDATE 2014-12-20: Dutch readers may also want to browse http://www.polarisatie-radicalisering.nl (website made by the NCTV; contains a lot of information and reports).

UPDATE 2014-10-14: the Dutch Hosting Provider Association (DHPA), which represents Dutch ISPs, posted a press release (in Dutch) stating that it opposes the Dutch govt’s voluntary censorship plan. The NCTV responded (.pdf, in Dutch) by calling on ISPs to cooperate in fighting online radicalization within the available legal means (thus implying that the NCTV considers voluntary censorship to be within such means).

UPDATE 2014-09-01: Dutch readers: see Arnoud Engelfriet’s take on this: Kabinet wil notice/takedown van extremistische uitingen opvoeren.



Earlier this week I blogged about the 13 best practices to “reduce terrorist use of the internet” that were the outcome of the somewhat controversial Clean IT project (2011-2013). Today, the Dutch government announced a “strengthening” of their “integral approach” to cope with jihadism & radicalization, and part of that “strengthening” seems to be the implementation of several of the Clean IT best practices. Here’s my translation from Dutch of the relevant paragraph in today’s announcement:

Social media

The government wants to fight the distribution of radicalizing, hate-mongering, violent jihadist online information. Producers and distributors of online jihadist propaganda and the digital platforms they abuse will be identified. This information will be actively shared with the competent authorities and relevant service providers (including internet services).

An expert team of the National Police will focus on fighting the online distribution of those data. This team will inform the Public Prosecutor about potentially punishable expressions (under existing expression offenses). If application of the voluntary behavioral code does not lead to removal, a criminal warrant can follow. The team will also make agreements with internet companies about effective blocking.

Internet companies that persist in facilitating ‘listed’ terrorist organizations by distributing jihadist content will be addressed. Furthermore, an actualized list of online jihadist (social media) websites will be published. Communities, professionals and parents can use this list to warn their social environment.

Specifically, the 38 step action plan (.pdf, in Dutch) states the following under action #29 (I translated from Dutch):

  • 29. Fighting distribution of radical, hate-mongering jihadist content

    • Concerned citizens can report jihadist (terrorist, hate-mongering and violence-glorifying) content on the Internet and social media.
    • Producers and distributors of online jihadist propaganda and the digital platforms they abuse are identified.
    • This information is actively shared with the competent authorities and relevant service providers (including Internet services).
    • A specialist team from the National Police fights online jihadist content. This team informs the Public Prosecutor about potentially criminal statements (under existing offences of expression). If the voluntary code of conduct does not lead to removal, a criminal warrant may follow. The bill Cybercrime III is proposed to further improve this procedure (Notice and Takedown).
    • This team makes agreements with Internet companies on effective blocking and provides them referrals to evaluate the content against their own terms of use (Notice and Take Action).
    • Internet companies that persist (after being informed) in facilitating ‘listed’ terrorist organizations by distributing jihadist content, will be addressed either based on an adaptation of EU Regulation 2580/2001 (.pdf) in conjunction with the national sanctions regime against terrorism in 2002, or on the basis of national regulations to be determined.
    • The specialist team will monitor independently, but works closely together with the online citizen hotline.
    • An updated list of online jihadist (social media) websites will be published. This list can be used by, among others, communities, professionals and parents to warn their social environment.

So, the government essentially proposes a voluntary code of conduct for “internet companies” (29d) and, if they don’t comply, intends to enforce content removal through legislation that does not yet exist (29f). Note that a leaked draft (.pdf) of the Clean IT project listed, under “to be discussed”, the proposal to take internet companies’ compliance into account when awarding public contracts.

A key question is to what extent this plan will turn out to impose new (and possibly extrajudicial?) restrictions on freedom of expression, as internet companies will also have to assess content against their own Terms & Conditions. How exactly will the National Police and internet companies distinguish, in practice, terrorist content from non-terrorist content? As stated here and here by Rejo Zenger (Bits of Freedom), it should be the legality of content, not whether content is in good taste, that determines whether content is removed/blocked. The final, public deliverable (.pdf, January 2013; unofficial HTML version available here) of the Clean IT project contains ample claims concerning human rights and freedoms, as shown in the aforementioned previous post. Let’s see how those claims hold up.

As a reminder: the Clean IT project sought, through a “structured public-private dialogue” and consensus, methods to “reduce terrorist use of the internet”. The following governments participated in the Clean IT project:

  • Austria (Federal Ministry of the Interior);
  • Belgium (Coordination Unit for Threat Analysis);
  • Denmark;
  • Hungary (Counter-terrorism Centre);
  • Germany (Federal Ministry of the Interior);
  • Greece (Hellenic Police);
  • Netherlands (National Coordinator for Counterterrorism and Security);
  • Portugal (Polícia Judiciária);
  • Romania (Romanian Intelligence Service);
  • Spain (Centro Nacional de Coordinación Antiterrorista);
  • United Kingdom (Office for Security and Counter Terrorism).

I don’t know whether any other countries (participants or non-participants of the Clean IT project) have implemented, or are planning to implement, any policies resembling the 13 best practices recommended in the final deliverable of the Clean IT project.


EOF

13 Best Practices for “Reducing Terrorist Use of the Internet” (Clean IT Project, 2011-2013)

UPDATE 2015-07-01: EU Observer reports: “The EU’s police agency Europol on Wednesday launched its web unit tasked to hunt down online extremist propaganda. Europol’s director Rob Wainwright said the unit is ‘aimed at reducing terrorist and extremist online propaganda.’ The unit consists of over a dozen Europol officials and experts from national authorities.” The name of the unit is the European Union Internet Referral Unit (IRU).

UPDATE 2015-06-02: EDRi reports that the European Commission is set to launch a related initiative in 2015.

UPDATE 2015-04-07: following a FOIA request, more documents from the Clean IT project are now public, totaling some 150 pages: here (.pdf) and here (.pdf).

UPDATE 2014-08-29b: today the Dutch government announced a “strengthening” of their approach to cope with jihadism & radicalization, and that “strengthening” appears to include the implementation of several of the recommendations made by the Clean IT project. More about that here.

UPDATE 2014-08-29: relevant reading from EDRi: The slide from “self-regulation” to corporate censorship (.pdf, 2011). Thx @jmcest!

UPDATE 2014-08-28: for clarity of exposition, I moved the unofficial HTML version of the “Reducing terrorist use of the Internet” publication to a separate location.



Considering the recent media reports on the use of social media by extremists (Islamic State, etc.), now seems a good time to bring to attention, once more, the outcome of the EU-funded Clean IT project (2011-2013), i.e., 13 best practices for “reducing terrorist use of the internet”.

We should at all times remain vigilant and defend our rights and freedoms; that remains true within the context of the practices recommended by the partners of the Clean IT project, which essentially boil down to a framework for voluntary censorship (and “voluntary” is relative: peers could be pressured into cooperating). It would be far better to make societies resilient to extremist propaganda than to rely on removing or blocking content. As stated by Rejo Zenger (Bits of Freedom), it should be the legality of content, not whether we perceive it to be in good taste, that determines what content is removed/blocked. And legality is determined in court, not elsewhere. But considering that the recommendations exist and are likely to be (partially or fully) implemented and/or promoted by some of the governmental & private-sector partners of Clean IT, it is worth paying attention to them.

The 13 best practices were established through a “structured public-private dialogue” and consensus between partners of the Clean IT project. These were the initial governmental partners:

  • Belgium (Coordination Unit for Threat Analysis);
  • Germany (Federal Ministry of the Interior);
  • Netherlands (National Coordinator for Counterterrorism and Security);
  • Spain (Centro Nacional de Coordinación Antiterrorista);
  • United Kingdom (Office for Security and Counter Terrorism).

These were governmental partners that later joined:

  • Austria (Federal Ministry of the Interior);
  • Denmark;
  • Greece (Hellenic Police);
  • Hungary (Counter-terrorism Centre);
  • Portugal (Polícia Judiciária);
  • Romania (Romanian Intelligence Service).

The Clean IT project drew opposition concerning the role of the private partners involved and concerning various observations based on a leaked draft (.pdf) of the “detailed recommendations document”, dated August 2012. The final, public deliverable of the Clean IT Project was published (.pdf) in January 2013 by the Dutch National Coordinator for Security and Counterterrorism (NCTV). It should be noted, though, that the proposals that caused opposition were input for discussion, not its output: in the leaked draft, nearly all of them are listed under “to be discussed”. The final, public deliverable does not include any of the proposals that gave evident grounds for opposition, such as real-name policies (i.e., killing online anonymity), legalizing the filtering/surveillance of employees’ internet connections, and making consumers and companies liable for various behaviors, such as “knowingly” linking to “terrorist content” or failing to act on it. From the final, public deliverable:

p.9: “The EU has further identified the following offences as being linked to terrorist activities: ‘public provocation to commit a terrorist offence, recruitment for terrorism, and training for terrorism’ which can also be committed in the online environment (Framework Decision 2008/919/JHA [.pdf], 28 November 2008, amending the 2002 Framework Decision).”

p.9: “In this document ‘terrorist use of the Internet’ refers to the use of the Internet for terrorist purposes, which is illegal, including for public provocation (radicalisation, incitement, propaganda or glorification), recruitment, training (learning), planning and organizing terrorist activities.”

p.12: “It is often difficult to determine which content on the Internet is illegal, also because illegality depends on the context in which it is presented and can differ worldwide and even between EU Member States.”

p.12: “Illegal content itself does not always lead to radicalisation and terrorist acts, while content that does contribute to radicalisation is not always illegal.”

p.12: “Organizations choose on a voluntary basis to commit to the general principles, to join the dialogue that has started with the Clean IT project, and/or to implement the best practices described in this document.”

p.15: “Any action taken to reduce the terrorist use of the Internet, whether by governments or by private entities, must comply with national provisions, EU and other international legal instruments, and respect fundamental rights and civil liberties, including access to the Internet, freedoms of expression and assembly, the right to privacy and data protection.”

p.15: “This document does not suggest action which cannot be introduced by legislation for constitutional or human rights reasons.”

p.15: “Actions to reduce terrorist use of the Internet must be effective, proportionate and legitimate. (…)”

p.15: “In cases of suspected terrorist use of the Internet, when an activity is not deemed unequivocally illegal, but might be considered as harmful, organizations (i.e. competent authorities and ISPs) will first try to resolve the situation among themselves as quickly as possible within their respective legal obligations and competences. At all times, organizations have the right to refer the case to the competent court or seek other legal remedy which the applicable laws provide.”

p.17: “The partners in the Clean IT project consider the following best practices to be useful in reducing terrorist use of the Internet, provided they are implemented in compliance with the general principles. Therefore the best practices are to be conducted within the limits of applicable legislation and respect fundamental rights and civil liberties. Implementing best practices is voluntary and is the full responsibility of each individual organization.”

Of course, the road to hell is paved with good intentions, and we don’t know what happens behind closed doors; and judicial oversight seems optional rather than mandatory in the recommendations.

Here is an unofficial HTML version of the entire final, public deliverable of the Clean IT project.

Here are the 13 best practices:

  1. Best Practice 1: Legal framework
    • Challenge: Terrorist use of the Internet is not always clearly explained, while it can be difficult to apply existing legislation on unlawful terrorist activities to the technical reality of cyberspace. Every EU Member State uses its own sovereign powers to implement legislation, but these are not always tailored to the increased and cross-border terrorist use of the Internet. Differences between national legislation make it complicated for competent authorities and Internet companies to deal with terrorist use of the Internet.
    • Best practice: The legal framework to reduce the terrorist use of the Internet should be clearly explained to users, NGOs, competent authorities and Internet companies to make their work more effective. Increased efforts and international cooperation will help to reduce terrorist use of the Internet.
    • Explanatory note: All measures taken to reduce terrorist use of the Internet must be in accordance with human rights and fundamental rights and freedoms. All Member States’ regulation is based on the implementation of the EU Framework Decision of 13 June 2002 on combating terrorism and EU Framework Decision 2008/919/JHA [.pdf] of 28 November 2008. More analysis and explanation of differences in Member States’ legislation will help practitioners in reducing the international aspects of terrorist use of the Internet. Member States should have clear procedures in place to end terrorist use of the Internet. The legal framework to reduce terrorist use of the Internet must be clearly explained to users, NGOs, Internet companies and competent authorities to make their work more effective. Also youth protection legislation protects in some countries against terrorist use of the Internet. Governments should not put too much pressure on organizations when explaining legislation, e.g. by threatening to use (legitimate but) very invasive measures. Putting too much emphasis on terrorist use of the Internet could also have a chilling effect on freedom of expression. Explanation of legislation should be balanced and based on adequate analysis of relevant (national) legislation.
  2. Best Practice 2: Government policies

    • Challenge: Governments should take an active role in reducing terrorist use of the Internet. However, policies on reducing terrorist use of the Internet are not always comprehensive, clearly defined or explained, governments differ in being able to keep up with the rapidly developing Internet while for some governments, dialogue with Internet companies and NGOs is rather new. In addition, policies on reducing terrorist use of the Internet differ between governments, which limits synergy.
    • Best Practice: Governments that have a strategy in place are willing to lead in reducing terrorist use of the Internet and are well equipped to do so. Governments strive for efficient international cooperation and stimulate cooperation with Internet companies and NGOs to reduce terrorist use of the Internet.
    • Explanatory note: Many governments include reducing terrorist use of the Internet as an integral part of their security strategies and in foreign policy and stimulate international cooperation in this field. Governments should make sure competent authorities have enough capacity to deal effectively with the use of the Internet for all kinds of terrorist purposes. The time needed for international (legal) action against content in another country could be reduced. Some governments strive for good cooperation between competent authorities and Internet companies. Internet companies and competent authorities could be assisted by governments by sharing information on terrorist use of the Internet (see also the best practices “Awareness” and “Sharing abuse data“). Governments could stimulate self-regulation by Internet companies and organize programs to educate web moderators.
  3. Best Practice 3: Terms and conditions
    • Challenge: Not all Internet companies state clearly in their terms and conditions that they will not tolerate terrorist use of the Internet on their platforms, and how they define terrorism. This makes it more difficult to decide what to do when they are confronted with (potential) cases of terrorist incitement, recruitment and training on their platform.
    • Best Practice: Some Internet companies do explicitly include terrorist use of the Internet, with a definition or examples, in their terms and conditions, stating that such use is unacceptable. Some Internet companies effectively enforce this policy.
    • Explanatory note: This best practice does not require any EU standard. Internet companies can define and/or give examples of what is terrorist use of their services, and do so for legal, ethical or business reasons. In this best practice “Internet companies” does not refer to access providers. Access providers should refrain from including terrorist use of the Internet in their terms and conditions, as access-blocking is not a recommendable option. Terms and conditions do not create new legal rights for third parties, but solely govern and clarify the relationship between the respective Internet company and its customer. In contrast, the illegality of terrorist use of the Internet affects the relationship between the customer and the (Member State, representing the interests of the) general public. It is recommended that companies have sufficiently staffed and capable abuse departments and are consistent and transparent in how they deal with abuse of their networks and violations of their terms and conditions. Small and medium Internet companies might not have the capacity to maintain a well-staffed abuse department and understand the language in which potential terrorist use of their service takes place. In this case it would be best to forward possible cases of terrorist use of the Internet to hotlines, referral units or competent authorities.
  4. Best Practice 4: Awareness
    • Challenge: Terrorist use of the Internet is currently not widely known or understood. The public in general, but especially vulnerable groups like children, teenagers and young adults and the circle that surrounds them are largely unaware that they are being targeted by terrorists and terrorist groups for incitement and recruitment. Professionals like frontline workers should know what to do when they are confronted with terrorist content or someone who is radicalizing.
    • Best Practice: Cyber security awareness, education and information programs exist in a number of EU Member States, and some of them include terrorist use of the Internet.
    • Explanatory note: In increasing awareness, education and information on terrorist use of the Internet, best results might be expected if governments, competent authorities, Internet companies cooperate in and NGOs lead awareness programs. Increased awareness amongst Internet users will probably lead to more reports of terrorist use of the Internet. Governments have specific knowledge about terrorist activities and threats. It would help Internet companies in becoming more aware of terrorist use of the Internet if this kind of information is shared actively by governments. It is important to address Internet users in general, and vulnerable persons in particular, about the dangers of the Internet and how to recognize online signs of radicalisation. Awareness programs should be creative and appeal to the younger generation. This can be done by involving youth in developing programs, using the latest technology, involving former radicals and victims and implementing counter-narrative policies. Awareness programs should inform where to find help on or report terrorist use of the Internet, as well as provide examples of known cases of terrorist use of the Internet. Awareness projects should inform about the full extent of terrorist programs using the Internet, including radicalisation starting from hate speech, address psychological effects and group dynamics in radicalisation, and provide examples of this. Awareness programs should not violate the right to non-discrimination, while creating a culture of fear and stigmatisation must be avoided.
  5. Best Practice 5: Flagging mechanisms
    • Challenge: Internet users currently do not have enough easy ways of reporting terrorist use of social media. In addition, Internet users are not used to reporting what they believe is illegal. As a consequence, some terrorist use of the Internet is currently not brought to the attention of Internet companies and competent authorities.
    • Best Practice: Some websites with user-generated content offer simple and user-friendly flagging systems on their platforms, having a separate, specific category for terrorist use of their service.
    • Explanatory note: Flagging is a useful method of notifying Internet companies about potential terrorist use of the Internet. User-friendly flagging systems have a separate, specific category to flag cases of terrorism. The service providers should also explain to their users how these flagging systems work and otherwise stimulate its use. This practice is primarily meant for social media or websites that provide user-generated content, but it could be considered to make the technology more widely available where this is technologically possible. Flagged content means that possible illegal content is brought to the attention of the service provider, and from that point they are not excluded from liability for the information stored on their networks (E-commerce directive, 2000/31/EC of 8 June 2000, article 14e). Anonymous flagging should be possible and respected, while Internet companies can also extend a higher credibility status to trusted flagging organizations, like specialized NGOs. Higher credibility statuses should serve to prioritize handling reports. Individual users could also be provided higher credibility status based on their (calculated) reputation in successfully reporting abuse. Abuse of the flagging mechanism should be prevented as much as possible. Governments and competent authorities should primarily use formal ways of notifying Internet companies. In some Member States flagging is also regarded as a formal notification. If governments and competent authorities use flagging systems apart from their formal/legal procedures, they make clear this is exclusively meant as bringing the alleged terrorist use of the Internet to the actual knowledge of the provider (see also the best practice of notice and take action).
  6. Best Practice 6: End-user browser mechanism
    • Challenge: While content portals (like social networks, image or video portals) can offer “flagging” opportunities, other platforms (like hosted websites) often lack such a mechanism. Moreover, there is not one international, user-friendly reporting mechanism available to all Internet users, irrespective of which part of the Internet they are using at the moment they notice what they think is terrorist use of the Internet.
    • Best Practice: A browser-based reporting mechanism could be developed to allow end users to report terrorist use of the Internet.
    • Explanatory note: Webhosting companies and other content platform providing Internet companies that do not offer a single point for reporting to end-users, do not want terrorist use of their networks, but are technically, economically, legally and/or practically limited in detection of their clients’ content. If those clients do not have proper reporting mechanisms for abuse, Internet users cannot report the terrorist use of the Internet in a user-friendly way. In those cases, these Internet companies can play an intermediary role. In some countries Internet companies in coordination with competent authorities offer banners that clients can add to their website on a voluntary basis to report alleged illegal content. Similar mechanisms exist for phishing sites, spam and malware, as an option or add-on in browsers or mail client software. A more systematic approach to help Internet companies to be notified by Internet users about alleged terrorist use of the Internet is a reporting mechanism that is implemented in the standard distribution of a browser, or, as a fallback solution only, is offered as a plugin for browsers. This is a user-friendly notification tool to Internet companies that do not offer flagging tools or do not have effective abuse departments. This mechanism should also be considered for the browsers of mobile devices and their operating systems. While being developed, at for example the EU-level, the mechanism should have an open architecture, allowing non-EU organizations to start using it as well later on. The reporting mechanism will send an automated signal to the Internet company involved, which will allow them to contact their client (the content owner) so the client can take appropriate action. The client and/or the host can also contact the competent authorities if necessary (see the best practice of referral units). The system works only for known hosts: hosts that register to this service. As such a mechanism needs to be organized and developed, a pilot is recommended to experiment and evaluate the added value of this system, as well as to solve legal/jurisdictional, organizational, procedural and technical issues. Abuse of the reporting mechanism should be prevented as much as possible.
  7. Best Practice 7: Referral units and hotlines
    • Challenge: Internet companies do have potential cases of terrorist use of the Internet reported to them, but they often lack the required specialist knowledge about terrorism to determine whether it is illegal. Determining what is illegal is primarily a law enforcement role. In other cases, Internet companies lack the language skills they need to make a judgment on the meaning and therefore on the legality of the content or other terrorist activity. In addition, some existing industry operated hotlines do not (explicitly) include terrorist use of the Internet as one of the abuse areas that could be reported to them. As a consequence, a large number of potential cases of terrorist use of the Internet are not dealt with adequately.
    • Best Practice: Some governments and competent authorities maintain one or more referral organization(s), to which Internet companies, NGOs and end-users can report potential cases of terrorist activity on the Internet. The referral organization analyzes whether content is illegal and takes appropriate action if necessary. Some industry operated hotlines do explicitly aim to also handle terrorist use of the Internet.
    • Explanatory note: Well-organized referral units (public sector) and hotlines (private sector), with an appropriate team behind them possessing the needed competences and skills, will help Internet service providers handle notifications about terrorist use of the Internet more effectively and efficiently. This is especially the case if Internet service providers are not sure whether possible terrorist use of the Internet that is reported to them is illegal. The role of a public sector operated referral unit is to assess Internet content and, where it is deemed unlawful, to have the material removed and to coordinate any prosecutions for offences that may have been identified. For regional and local police units, the referral unit offers additional expertise in dealing with (potential) terrorist use of the Internet. Referral units need to be advertised and promoted. A referral unit’s authority must be grounded in legislation, and it must work according to local legislation and within its jurisdiction. As legislation differs between EU Member States, referral units’ activities might differ as well. Referral units should be in regular contact with relevant Internet companies, also to ensure that actions are coordinated as much as necessary. Internet companies should only refer cases in which it is reasonable to think there is terrorist use of the Internet, rather than forwarding everything that is reported to them. As cases of terrorist use of the Internet can be transnational, referral units need excellent working relations with referral units in other Member States and outside the EU. Private sector hotlines provide a mechanism for the public to report content or use of the Internet that they suspect to be illegal. Hotlines analyze the reports to determine whether the content is illegal under their national legislation, and if so, perform a “trace” on the web to identify where it appears to be located (source country).
With this data, the hotline then passes the information to the relevant stakeholders (Internet service provider or competent authority) for further action. Referral units and hotlines should technically be able to receive all kinds of terrorist use of the Internet (e.g. websites, videos, messages, e-mails, profiles), and if both exist within one Member State they should cooperate closely. Referral units and hotlines would benefit from a points-of-contact system as proposed in best practice 9. As a secondary task, referral units and hotlines can contribute to awareness, education and information efforts on terrorist use of the Internet. For example, governments and competent authorities could help Internet companies by sharing information on specific phenomena of illegal content and run programs to educate web moderators. Governments can also subsidize competent NGOs that substantially contribute to reducing terrorist use of the Internet and radicalizing content on the Internet.
  8. Best Practice 8: Notice and take action procedures
    • Challenge: When Internet companies are notified of probable cases of terrorist use of the Internet, the procedure to handle these reports is not always effective and efficient. Internet companies have obligations both to respond to the reporter and to protect the services they deliver to their users. Sometimes competent authorities notify Internet companies to bring terrorist use of the Internet to the actual knowledge of the provider. In their formal role, competent authorities can also order Internet companies to remove illegal content. The difference between these two actions is not always clear to Internet companies. In addition, when competent authorities issue an order to remove content, these orders sometimes suffer from being insufficiently specific or inappropriate to the company being approached.
    • Best Practice: Some individual Internet companies have their own effective and efficient notice and take action procedures and some have agreed to use a standard for notice and take action. In some EU Member States competent authorities apply a standard for take down orders.
    • Explanatory note: Legally there are two forms of reporting alleged illegal content:
      • (1) a notification brings the content to the actual knowledge of the Internet service provider.
      • (2) an order is a binding request to an Internet service provider by a competent authority.


      Notifications
      Notice and take action applies to any service that consists of the storage of information provided by a recipient of the service, for example: providers of chat boxes, e-mail services, file sharing, hosting, social networks, e-commerce sites, and web forums. Effective notice and take action procedures imply that unequivocally illegal content, once notified, is removed as fast as possible. This best practice will only work if the quality of notifications is sufficient. Notifications should be specific (unambiguously identify the material in question), proportionate (matching the offence and limiting collateral damage to other users) and appropriate to the service offered by the Internet company. In the case of terrorist use of the Internet, it is important to contextualize the terrorist content and to describe how it breaches (national) legislation. If competent authorities send an insufficient notification, the Internet company addressed should always reply as soon as possible and explain the insufficiency in detail. (Insufficient reports by other organizations or by individuals need only be answered with a standardized message stating that the report was insufficient.) If it is unclear whether the content is illegal, effective notice and take action procedures make clear that Internet service providers have an intermediary role to avoid status-quo situations. The ultimate goal is that every notification is handled carefully and that appropriate action is taken (which is not necessarily the take-down of the content). However, in some EU Member States it is possible for the service provider to ask the reporter and the content provider to settle the dispute and to wait for a final decision. This situation should always be limited in time. If the content is kept online in the meantime, the service provider can ask the reporter for a promise of indemnification.
Notice and take action procedures should be accompanied by clear procedures and guarantees for the right to free speech and the right to a fair trial.

      Orders
      From the perspective of Internet companies, the above-mentioned qualifications for notices should at least also apply to formal orders by competent authorities. Internet companies would like to be able to easily recognize the competent authorities, especially those based in other countries.

  9. Best Practice 9: Points of contact
    • Challenge: Governments, Internet companies, competent authorities and NGOs do not always know whom to contact on the issue of terrorist use of the Internet.
    • Best Practice: Some governments, competent authorities, Internet companies and NGOs have points of contact for terrorist use of the Internet.
    • Explanatory note: A network of trusted and listed points of contact facilitates cooperation between organizations committed to reducing the terrorist use of the Internet. Points of contact are experts able to represent their organization, preferably on a daily or even 24/7 basis. To establish a professional system of points of contacts, detailed working procedures and a central database, facilitated by an EU (level) organization, will be required. These points of contact should be identified by role and their contact details published. Where possible the people occupying these roles should remain in them for a reasonable period and develop relations with their most important counterparts in other organizations.
  10. Best Practice 10: Cooperation in investigations
    • Challenge: When competent authorities suspect illegal use of the Internet for terrorist purposes and contact Internet companies to assist in investigations of third parties, cooperation between the two is not always effective and efficient.
    • Best Practice: Some Internet companies and competent authorities have agreed on how to cooperate efficiently, effectively and lawfully in investigations of probable illegal terrorist activity on the Internet.
    • Explanatory note: The legal basis and purpose of competent authorities’ investigations should always be clarified. It should be made clear what the legal status of the request for cooperation is: whether it is mandatory, based on legislation, or voluntary, at the discretion of the Internet company to which the request is directed. Cooperation should be standardized, but human contact remains important. Internet companies have very different backgrounds and fields of activity, but have in common that they want to reduce terrorist use of the Internet. Exchanging knowledge can improve mutual understanding and lead to better cooperation in investigations. Competent authorities should respect the technical integrity of the company involved in the investigations (“do not pull the plug on servers, which might affect entities other than the ones targeted in the operations”). If an investigation requires additional efforts from Internet companies that have already taken reasonable precautions to reduce terrorist use of the Internet, it is reasonable for them to ask for standardized, adequate compensation from government.
  11. Best Practice 11: Sharing abuse information
    • Challenge: Most Internet companies have to deal with only a few cases of terrorist use of their platforms. When illegal content is removed, terrorists often try, and succeed, to re-post it on other Internet companies’ services.
    • Best Practice: Some Internet companies share information on other kinds of abuse of their network with each other, using a trusted intermediate partner organization. This private sector practice could be extended to include confirmed illegal terrorist use of the Internet.
    • Explanatory note: Systems that share known abuse information via a trusted third-party organization and its databases already exist. These systems often make use of e-mail, as it is reliable. Time-stamped data is exchanged using formats like X-ARF (http://x-arf.org), which allow the exchange of many kinds of abuse data, such as videos, pictures, IP addresses and e-mail addresses. Only data that is formally confirmed as terrorist use of the Internet, taking into account national legislation, including privacy and data protection legislation, should be added to these systems. Competent authorities and NGOs might help start such a sharing system by providing the information they possess on formally confirmed terrorist use of the Internet. Information for this system should be gathered on a case-by-case basis, from cases of confirmed terrorist use of the Internet. Matching the information in the system against information on the Internet companies’ services will enable a more efficient assessment of whether content is illegal. The legal status of the trusted third-party organization, its tasks and competencies, its procedures (for information gathering and qualification) and its guidelines must be clear and transparent.
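At its core, an X-ARF report is a machine-readable key/value body attached to an e-mail. The sketch below renders an illustrative report; the field names are only loosely modeled on X-ARF and should be treated as assumptions (the authoritative schemas are published at x-arf.org):

```python
from datetime import datetime, timezone

def render_xarf_like(source: str, category: str, report_id: str) -> str:
    """Render an illustrative X-ARF-style key/value report body.
    Field names are loosely modeled on X-ARF; see x-arf.org for the
    authoritative schemas."""
    fields = [
        ("Report-ID", report_id),
        ("Category", category),  # e.g. "abuse"
        ("Source", source),      # e.g. an offending IP address or URL
        ("Date", datetime.now(timezone.utc).isoformat()),
    ]
    return "\n".join(f"{key}: {value}" for key, value in fields)

print(render_xarf_like("203.0.113.7", "abuse", "report-0001"))
```

The value of a common format here is exactly what the note describes: the trusted third party can validate, de-duplicate and redistribute reports mechanically, regardless of which company or authority submitted them.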
  12. Best Practice 12: Voluntary end-user controlled services
    • Challenge: Various kinds of voluntary end-user controlled services exist to identify, log access to or block unwanted or illegal content. However, voluntary end-user controlled services rarely include terrorist use of the Internet. Technology for detecting, logging access to or blocking terrorist use of the Internet is often not mature enough to be precise and effective and risks blocking content that should be free to access.
    • Best Practice: Parental and other voluntary end-user controlled services that effectively address terrorist use of the Internet.
    • Explanatory note: In general, blocking and filtering are considered a “bad practice”, especially when applied at state level or otherwise forced on Internet users. Filtering and controlling access on private networks cannot stop illegal web use completely; it is predominantly a tool to prevent accidental and/or casual exposure to illegal content. Filtering by Internet access companies at the infrastructure level should not be promoted. Nevertheless, at a parental/end-user level, individuals should not be limited in their possibilities to protect themselves or their children from what they believe is inappropriate. Vendors have already categorized, and have the potential to control access to, some “terrorist, race-hate, extremism” content through the use of keywords, phrases and known website addresses. This can be a helpful tool, e.g. for parents who want to protect their children from radicalization attempts. All voluntary end-user controlled services must comply with EU and/or Member State regulations, e.g. on data protection and privacy. Developing accurate sources of information on what constitutes terrorist use of the Internet in each jurisdiction is considered a priority, to which the founding of an authoritative research and advisory organization, as described in the next best practice, could contribute. Any list of website addresses or other information on what should be regarded as terrorist use of the Internet should be treated as jurisdiction-specific and be optional for providers of these filtering services and end-users to implement. Content that is blocked on the basis of such information should come with an optional information page that displays the reason, the jurisdiction, and the contact information and redress method of the supplier of this information.
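The vendor-side mechanism described (keywords, phrases and known website addresses) amounts to matching a requested URL against jurisdiction-specific lists. A purely illustrative sketch; the lists and the verdict strings are made up, and real filtering products are far more sophisticated:

```python
from urllib.parse import urlparse

# Purely illustrative end-user filter: a hypothetical, jurisdiction-specific
# address list plus a keyword list. Both lists are made up for this example.
BLOCKED_HOSTS = {"bad.example.org"}        # hypothetical known website addresses
BLOCKED_KEYWORDS = {"recruitment-video"}   # hypothetical keywords/phrases

def check(url: str) -> str:
    host = urlparse(url).hostname or ""
    if host in BLOCKED_HOSTS or any(k in url for k in BLOCKED_KEYWORDS):
        # A real service should show an information page stating the reason,
        # jurisdiction and redress contact, as the explanatory note recommends.
        return "blocked"
    return "allowed"

print(check("http://bad.example.org/x"))   # prints "blocked"
print(check("http://example.com/news"))    # prints "allowed"
```

Even this toy version shows why the note flags maturity as a problem: naive keyword matching will block legitimate reporting about the same topics, which is exactly the over-blocking risk the challenge describes.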
  13. Best Practice 13: Research and advisory organization
    • Challenge: The understanding of what constitutes terrorist use of the Internet is the result of many individual public and private organizations studying the subject and sharing their expertise. There is no single coordinating, academically authoritative body on terrorist use of the Internet to which all organizations involved are likely to refer.
    • Best Practice: An academic network (on sub-national, national and/or international level) that is respected by all parties, to expand existing knowledge on terrorist use of the Internet, and how best to reduce it.
    • Explanatory note: An organization as proposed here should be part of a university and would be able to provide research and advice on terrorist use of the Internet throughout the EU and in each individual Member State. This organization should (among others) gather and combine the research that has been done on terrorist use of the Internet, and share this with others in its network. The organization should act independently, i.e. without political interference. The organization should (among others) facilitate meetings between and projects of academics and practitioners. Possible fields of work are:
      • Legislation, regulation and jurisprudence;
      • Academic work on the subject;
      • Known terrorist use of the Internet;
      • Information on the technologies used by terrorists.

EOF

[Translated from Dutch] List of data retained for six months under the Wet bewaarplicht

As a note to myself, here is the list of data that public electronic communications services must retain for six months under the Wet bewaarplicht telecommunicatiegegevens (explanatory memorandum), the Dutch implementation of the European Data Retention Directive. This list is an annex to Article 13.2a of the Telecommunicatiewet. As a reminder: the European Court of Justice invalidated that directive in April 2014. The Dutch government has yet to respond to that ruling (keep an eye on it here).

(Text as in force on: 25-08-2014)

Annex to Article 13.2a of the Telecommunicatiewet

In this annex:

  1. telephone service means: calls (including voice, voicemail, conference calls and call data), supplementary services (including call forwarding and call transfer), and messaging and multimedia services (including short message service (SMS), enhanced media service (EMS) and multimedia service (MMS));

  2. user ID means: a unique identifier assigned to a person when they subscribe to or register with an Internet access service or Internet communication service;

  3. cell identity (Cell ID) means: the unique code of the cell from which a mobile telephone call was started or ended.

In this annex, the following data are designated as the data referred to in Article 13.2a of the Act:

  1. For telephony over a mobile or fixed network:

    1. the telephone number of the caller and the telephone number(s) called and, in the case of supplementary services such as call forwarding or call transfer, the number(s) to which the call was routed;

    2. the names and addresses of the subscribers or registered users involved;

    3. the date and time of the start and end of the connection;

    4. the telephone service used;

    5. for mobile telephony:

      • the International Mobile Subscriber Identity (IMSI) of the calling and the called party;

      • the International Mobile Equipment Identity (IMEI) of the calling and the called party;

      • in the case of pre-paid anonymous services, the date and time of the initial activation of the service and the label (Cell ID) of the location from which the service was activated;

      • the location label at the start of the connection;

      • data identifying the geographic location of cells by reference to their location labels during the period for which communications data are retained.

  2. For Internet access, e-mail over the Internet and Internet telephony:

    1. the allocated user ID(s), and the user ID or telephone number of the intended recipient(s) of an Internet telephony call;

    2. the user ID and the telephone number allocated to any communication entering the public telephone network;

    3. the name and address of the subscriber or registered user to whom the IP address, user ID or telephone number was allocated at the time of the communication, and the name(s) and address(es) of the subscriber(s) or registered user(s) and the user ID of the intended recipient of the communication;

    4. the date and time of the log-in and log-off of the Internet session, based on a certain time zone, together with the IP address, whether static or dynamic, allocated to a communication by the Internet access service provider, and the user ID of the subscriber or registered user;

    5. the date and time of the log-in and log-off of the e-mail-over-the-Internet service or Internet telephony service, based on a certain time zone;

    6. the Internet service used;

    7. the calling number for dial-up access;

    8. the digital subscriber line (DSL) or other end point of the originator of the communication.

EOF

Practical online anonymity: Tor Browser works. Use it.

My response to the post “Privacy is almost impossible [to achieve on the internet]” by Arno Reuser on LinkedIn:

Used properly, Tor Browser probably provides sufficient unlinkability between your identity and the behavior that can be observed by Acxiom, BlueCava, Google, Facebook, etc. The HTML5 Canvas fingerprinting issue is irrelevant to the Tor Browser, because it asks user permission before using the Canvas API (simply deny it). User-Agent fingerprinting is also way less relevant than for normal browsers, because Tor Browser uses a generic User-Agent header that blends the user in with a larger crowd. Here’s a short test using Panopticlick (which involves much more than just the User-Agent header):

  • Tor browser w/JavaScript turned off: “one in 475 browsers have the same fingerprint as yours” (8.89 bits of entropy)
  • Tor browser w/JavaScript turned on: “one in 3,125 browsers have the same fingerprint as yours” (11.61 bits)
  • Firefox browser with various plug-ins: “Your browser fingerprint appears to be unique among the 4,459,482 tested so far.” (22.09 bits)

[Here, fewer bits of entropy = better anonymity.] Use the ‘Forbid Script Globally’ option in the built-in NoScript plug-in to disable JavaScript. Only enable JavaScript for specific websites, and only if you really need to and are sufficiently confident that the website is not a sinkhole or waterhole. Be careful about the preset domain whitelists: NoScript in its default configuration also whitelists subdomains, which can mean you’re vulnerable to XSS against those subdomains. For web search, visit Ixquick or DuckDuckGo from Tor. Don’t Google your own name from Tor, etc. As long as you don’t submit personal information or log in to personal accounts, you’ll probably be fine. [Addendum: also make sure you don’t post texts longer than a few hundred words, and change how you use language to protect against identification through stylometry. Don’t use online translators for that, as you must assume they’ll store both the input and output, and that the translated text can be traced back to the original text.]
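Panopticlick’s “one in N browsers” figure and its bits-of-entropy figure are two views of the same number: bits = log2(N). A quick check of the three results above:

```python
import math

# "one in N browsers share your fingerprint" corresponds to log2(N) bits
# of identifying information:
for one_in_n, label in [
    (475, "Tor Browser, JavaScript off"),
    (3125, "Tor Browser, JavaScript on"),
    (4459482, "plugin-laden Firefox, unique so far"),
]:
    print(f"1 in {one_in_n} ({label}): {math.log2(one_in_n):.2f} bits")
# prints 8.89, 11.61 and 22.09 bits respectively
```

So the plugin-laden Firefox leaks roughly 13 bits more than Tor Browser with JavaScript off: enough to be unique among the ~4.5 million browsers tested, versus blending in with hundreds of others.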

For additional protection, boot your system from non-writable storage containing TAILS [do read the warnings]. To protect anonymity in the face of possibly compromised entry nodes, run a Tor bridge or relay on the IP address you use Tor Browser from: if you can spare some bandwidth, that will give you cover traffic.
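Running a non-exit relay or bridge takes only a few lines of torrc. A minimal sketch, with example values for the nickname and port; check the Tor project’s relay documentation before actually running one:

```
# Minimal torrc sketch for a non-exit Tor relay (example values)
Nickname ExampleRelay
ORPort 9001
ExitPolicy reject *:*
# To run a bridge (harder to enumerate and block) instead of a public relay:
# BridgeRelay 1
```

A non-exit relay carries only encrypted middle-hop traffic, so it avoids the abuse complaints that exit operators deal with, while still providing the cover traffic mentioned above.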

In the end, it all depends on the threat model. At the individual level, anonymity is bound by three variables (and one might argue that any claim concerning anonymity that ignores any of these variables has little meaning):

  1. a subject (you);
  2. an adversary (whom you want to be anonymous from);
  3. an ‘Item of Interest‘ (IOI; the message or action you want to be unobservable, or at least unlinkable to, from the perspective of the adversary).

The adversary can be scoped anywhere between global and local, as passive (only listens) or active (listens and manipulates or blocks), and has a certain information position (which can change due to changes in infrastructure, interception laws, acquisitions, partnerships, bribery, etc.).

Unless you believe adversaries have (or are able and planning to obtain) remote or physical access to your computer, Tor Browser, used properly, will provide sufficiently strong anonymity.

Addendum: yes, bugs and new attack methods against Tor can be discovered, such as the (now-patched) exploitable vulnerability in Firefox ESR (Tor Browser is based on Firefox ESR) and, more recently, the “relay early” attack. Such attacks are (so far) not typically relevant when you’re using Tor to protect yourself from non-government actors. Tor’s threat model explicitly does not protect against an adversary who is able to intercept both the first link (between your computer and the Tor entry node) and the last link (between the Tor exit node and the website you visit), i.e., the global passive adversary (or a domestic adversary that discovers a method to manipulate Tor path selection so as to enforce a domestic entry node and a domestic exit node). Intelligence agencies may cooperate to achieve that. For instance: if the Dutch Intelligence & Security Act is changed as advised, the Dutch Joint SIGINT Cyber Unit (JSCU) may, legally speaking, intercept all traffic entering or exiting the Netherlands from known Tor nodes, and share it with NSA/GCHQ/etc.

Contribute to the Tor network: donate money to Tor exit node operators, donate bandwidth by running a Tor relay or donate money to the Tor project.


EOF

[Translated from Dutch] Roger Vleugels’ response to the internet consultation on the Wet voorkomen misbruik Wob

With his permission, I quote here the response of Wob guru Roger Vleugels to the internet consultation on the Wet voorkomen misbruik Wob (bill to prevent abuse of the Dutch FOIA), which closed on August 19 (hyperlinks are mine):

Utrecht, August 18, 2014

Response for the internet consultation on the Wet voorkoming misbruik Wob

Bringing this bill forward and submitting it lacks integrity. It lacks integrity in four respects.

1. The goal is already served by the bill for a new Wob

Work on a new Wob has been under way for years. That bill has already passed the Raad van State, and has been submitted to and is under consideration by the House for the second time. Next month there will be a hearing in the House.

That bill already decouples the Wob from the Wet dwangsom. The precise shape of its anti-abuse provision will be addressed in the parliamentary debate in the coming months.

For years this bill for a new Wob, supported by Minister Plasterk since he took office, has been the route by which the abuse was to be stopped.

It lacks integrity to now cut across this careful process with a bill mainly about abuse. Nor is there any necessity, because the pending route toward a new Wob is faster.

Until just before the recess, Minister Plasterk maintained that he wanted to tackle the abuse via the bill for a new Wob. Yet just before the recess came the bill that is the subject of this internet consultation.

It takes some nerve to frustrate a democratic process like this. Minister Plasterk’s course of action can only be explained by pressure from the VVD, which, via the Council of Ministers, more precisely via Minister Opstelten, has de facto placed him under guardianship as far as the Wob is concerned [more on this in the next section].

2. This bill is the death blow for the new Wob

One of the factors that gives the House some sense of urgency in considering the bill for a new Wob is that that bill arranges the decoupling from the Wet dwangsom.

By arranging this separately in the Wet voorkoming misbruik Wob, the bill for a new Wob loses much of its urgency.

From the start there were hesitations in the House about the need for a new Wob. One of the unifying factors was that that proposal tackled the abuse.

If that is now taken out, it is to be feared that the House will no longer give any urgency to the consideration of the bill for a new Wob.

The orchestration around abuse is very peculiar. At some point, more on this in the next section, attention arose for abuse of the Wob. Alleged abuse, because in that entire campaign, waged mainly by the VNG, a few municipalities, and the VVD and PvdA Wob spokespersons in the House, the scale of that abuse was never specified. The rule that making a claim obliges one to substantiate it apparently did not apply here.

Above I referred to the PvdA’s Wob spokesperson in the House. More precisely, that should be one of the two PvdA Wob spokespersons.

One of them, Manon Fokke, wants to tackle the abuse immediately and separately from the bill for a new Wob, and is thereby on the same line as the VVD.

The other, Astrid Oosenbrug, wants to do it via the bill for a new Wob, and is thereby on the line of the sponsors of the new Wob, GL and D66, and on the line of Minister Plasterk.

Or rather: was. Due to pressure from the VVD and the Fokke wing of the PvdA, the Council of Ministers decided just before the recess that there would be no more waiting for the bill for a new Wob.

Minister Plasterk de facto came under the guardianship of the Council of Ministers and had to abandon the track of tackling abuse via the bill for a new Wob, which resulted in the submission of the proposal this consultation concerns.

3. Abuse by requesters is a non-issue

By now there are several studies, commissioned by Minister Plasterk, showing that there is hardly any abuse of the Wob. Nowhere does it amount to more than a few cases per public body per year, with two exceptions: the police and the municipalities [or rather, a small subset of the municipalities].

At those two exceptions, abuse has moreover been decreasing sharply since mid-2013. At the police, mainly through the introduction of a different method of fine collection, tested years earlier, which by definition rules out the bulk of those Wob requests. At municipalities, because Wob requests are finally being handled more seriously there, which can be seen, among other things, in faster decisions.

Regarding that last remark, it is very significant that one of the studies BZK commissioned [the SEO study] made crystal clear that deciding the vast majority of Wob requests takes less than 10 working days.

This finding underpins how strange it is that the Netherlands, as one of the few countries in the world, nevertheless has a decision period of 2×28 days.

At the same time, that 10-day finding makes the complaints about penalties payable upon exceeding the deadline even more peculiar. After all, a notice of default after that first 2×28 days first allows 14 more days, after which the penalty accrues very slowly. It takes four months in total before the maximum penalty amount of 1,260 is reached. Four months for something that takes 10 days of work.

Labeling abuse an issue lacks integrity. Giving it urgency in a separate bill, apart from the bill for a new Wob, is bizarre. It moreover insults the parliament, which is very far along with the proposal for a new Wob.

4. Disproportionate and improper

It is questionable whether it is lawful for by far the biggest abuser of the Wob, the government, to take steps against requesters without at the same time also tackling the abuse by the government.

The government as abuser? In around 50% of all cases the statutory maximum period for the primary decision is exceeded [that is the globally uniquely slow 2×28 days]; refusal ground Article 10.2.g of the Wob is very often invoked unlawfully, as may be seen from the fact that it is struck down almost as often in objection and appeal proceedings; series of illegal refusal grounds are used; etc., etc.

It is also very much the question whether it is lawful for the party that implements a law in a very half-baked way (there is hardly any staffing capacity for Wob officials; there is no training for Wob officials) to say anything about abuse of that law by requesters.

Around the introduction of the Wob, the then minister of BZK, Rood, said that implementation would only be done with integrity if up to 50 Wob officials were appointed per department. Today not a single department has even 5.

Ministers Donner and Spies wrote to the House that the government would propose nothing that conflicts with the Tromsø Convention. Coming now with an obligation to hold informal consultations is exactly that.

Moreover: when requesters offer consultation, at the time of the request or later, public bodies hardly ever use it for clarification or informal settlement. If a body does engage, it is for one reason only: to try to narrow the request.

A first requirement would seem to be that the government learns to deal normally with offers of consultation, and that the government assists requesters in the ways the Awb prescribes. To speak of mandatory consultations before that moment is, again, not acting with integrity.

In summary

For my clients, mostly press organizations, I have conducted close to 5,000 Wob proceedings over the past 25 years. The decline of government transparency since the mid-2000s is appalling.

As a lecturer in freedom of information I teach in well over a dozen countries. Watching the Netherlands degenerate from more or less a front-runner into a laggard is cringeworthy.

I find it embarrassing to have to write this consultation response. Indeed, it is becoming ever more embarrassing to work on freedom of government information in the Netherlands at all.

One of the core goals of the Wob is to promote good governance. The course of events around this bill shows that that still requires a great deal of investment.

Roger Vleugels

Legal adviser, Wob
Lecturer, Wob
Legislative adviser, new Wob


EOF