Month: February 2011

Wiki Government in the Netherlands?

UPDATE 2012-01-18: related news, found via @LiberationTech: U.S. Congress May Soon Take Questions From The Great State Of Social Media

Disclaimer: I have not been educated in public policy or politics.
In Wiki Government (2009), author Beth Simone Noveck discusses how technology can benefit democracy when used for citizen participation, working toward a more open model of decision-making. The Dutch government is currently running a two-year pilot program at www.internetconsultatie.nl in which all ministries (are said to) consult the public about at least 10% of their legislative proposals (I don’t know how they measure the “10%”). Although I highly appreciate this pilot as a step toward a (more) participatory democracy, I hope its successors at the level of both Dutch ministries and Dutch municipalities will take into account the lessons learned that Noveck describes in chapter 8 (quoted below):

  1. Ask the right questions: The more specific the question, the better targeted and more relevant the responses will be. Open-ended “What do you think of x?” questions only lead to unmanageable and irrelevant feedback.
  2. Ask the right people: Creating opportunities for self-selection allows expertise to find the problem. Self-selection can be combined with baseline participation requirements.
  3. Design the process for the desired end: The choice of methodology and tools will depend on the results. But the process should be designed to achieve a goal. That goal should be communicated up front.
  4. Design for groups, not individuals: “Chunk” the work into smaller problems, which can easily be distributed to members of a team. Working in groups makes it easier to participate in short bursts of time and is demonstrated to produce more effective results.
  5. Use the screen to show the group back to itself: If people perceive themselves to be part of a minimovement, they will work more effectively together across a distance.
  6. Divide the work into roles and tasks: Collaboration requires parceling out assignments into smaller tasks. Visualizations can make it possible for people to perceive the available roles and choose their own. Wikipedia works because people know what to do.
  7. Harness the power of reputation: Organizations are increasingly using bubbling-up techniques to solicit information in response to specific questions and allowing people to rate the submissions.
  8. Make policies, not websites: Improved practices cannot be created through technology alone. Instead, look at the problem as a whole, focusing on how to redesign internal processes in response to opportunities for collaboration.
  9. Pilot new ideas: Use pilot programs, competitions, and prizes to generate innovation. 
  10. Focus on outcomes, not inputs: Design practices to achieve performance goals and metrics. Measure success.

In addition: could anything be learned from www.derdekamer.net and www.democratiespel.nl ? I’d be very happy to read your comments!

Cyberwar = Propaganda (isn’t it?)

Bill Blunden’s paper “Manufacturing Consent and Cyberwar” (.pdf), written for the Lockdown 2010 conference, deserves more attention and discussion, IMHO. Obviously referencing “Manufacturing Consent: The Political Economy of the Mass Media” (1988) by Edward S. Herman and Noam Chomsky, Blunden discusses the dangers of an “offense is the best defense” crisis mentality, the (mis)attribution of attacks, and weakly founded claims about (future) threats made by security firms and media. Altogether this may resemble the propaganda model if, updating it to the current realm of discourse, one replaces Herman/Chomsky’s “anti-communist” filter with perhaps a more generally apocalyptic “FUD” filter (better suggestions are welcome). The paper is well-written. Its abstract:

Over the past year, there have been numerous pieces that have appeared in the press alluding to the dire consequences of Cyberwar and the near existential threat that it represents to the United States. While these intimations of destruction can seem alarming at first glance, closer scrutiny reveals something else. Ultimately, the gilded hyperbole of Cyberwar being peddled to the public is dangerous because it distracts us from focusing on actual threats and constructive solutions. Pay no attention to the man behind the curtain says the ball of fire named Oz. In this presentation, I’ll pull back the curtain to expose the techniques being used to manipulate us and the underlying institutional dynamics that facilitate them.

Slides (.pdf) are available as well.

(PS: yes I see the irony of posting "Cyberwar = Propaganda" on "blog2.cyberwar.nl/")

The Sokal affair

In 1996, Alan Sokal gained notoriety for getting his (intentionally) bullshit paper “Transgressing the Boundaries: Toward a Transformative Hermeneutics of Quantum Gravity” published in the cultural studies journal Social Text. This has become known as the ‘Sokal affair’. In “Fashionable Nonsense: Postmodern Intellectuals’ Abuse of Science” (1997), physicists Alan Sokal and Jean Bricmont explain why Sokal’s parody paper is bullshit and identify the (probably undesirable) conditions in (mostly French) postmodernist thinking that (probably) made it possible for the parody paper to be accepted by a journal. Excellent book. The affair demonstrated the unwarranted disqualification of scientific methods that results from overly relativistic “anything goes” thinking, as well as the unjustified and often improper use of concepts from mathematics, physics and logic.

Lessons the authors suggest drawing (pages 185-189):

  1. It’s a good idea to know what one is talking about (don’t apply concepts you don’t understand);
  2. Not all that is obscure is necessarily profound (strive for easy-to-understand language);
  3. Science is not a “text” and can’t be analyzed in a purely verbal manner;
  4. Don’t ape the natural sciences or their “paradigm shifts” (e.g. between probabilist and determinist theories);
  5. Be wary of argument from authority (this can’t be repeated enough);
  6. Specific skepticism should not be confused with radical skepticism (“scientific theory X is bogus” versus “all scientific theories are bogus”);
  7. Be aware that ambiguity may be (ab)used as subterfuge.

How to Read a Scientific Paper

Questions to ask when reading/reviewing a scientific paper:

  1. What questions does the paper address?
  2. What are the main conclusions of the paper?
  3. What evidence supports those conclusions?
  4. Do the data actually support the conclusions?
  5. What is the quality of the evidence?
  6. Why are the conclusions important?

I suggest the following subquestions:

  • If the paper contains a hypothesis, is it falsifiable?
  • (How) Is the work reproducible? (what would you need to reproduce it?)
  • What does the paper contribute to the existing body of knowledge?
  • Are the applied methods explained, valid and reliable? (e.g. statistical tests)
  • Are the limitations of the work acknowledged?

Naturalistic Conception of Science

Several years ago I read the Dutch book “Wetenschap of Willekeur” (1985) by A.A. Derksen, which (still!) is an excellent introduction to the philosophy of science. Can you read Dutch? Buy a copy! 🙂 The book contains one diagram which IMHO summarizes the (?) naturalistic conception of science pretty well, and for educational purposes I reproduced and translated it:
 

P.S.: I expect this blogpost to be covered by the Dutch right of citation (‘citaatrecht‘), please inform me if I’m wrong.

Conditions for Considering Scientific Claims

In his book God: The Failed Hypothesis, Victor J. Stenger defined the following five “Conditions for Considering Extraordinary Claims”:

  1. The protocols of the study must be clear and impeccable so that all possibilities of error can be evaluated. The investigators, not the reviewers, carry the burden of identifying each possible source of error, explaining how it was minimized, and providing a quantitative estimate of the effect of each error. These errors can be systematic—attributable to biases in the experimental set up—or statistical—the result of chance fluctuations. No new effect can be claimed unless all the errors are small enough to make it highly unlikely that they are the source of the claimed effect.
  2. The hypotheses being tested must be established clearly and explicitly before data taking begins, and not changed midway through the process or after looking at the data. In particular, “data mining” in which hypotheses are later changed to agree with some interesting but unanticipated results showing up in the data is unacceptable. This may be likened to painting a bull’s-eye around wherever an arrow has struck. That is not to say that certain kinds of exploratory observations, in astronomy, for example, may not be examined for anomalous phenomena. But they are not used in hypothesis testing. They may lead to new hypotheses, but these hypotheses must then be independently tested according to the protocols I have outlined.
  3. The people performing the study, that is, those taking and analyzing the data, must do so without any prejudgment of how the results should come out. This is perhaps the most difficult condition to follow to the letter, since most investigators start out with the hope of making a remarkable discovery that will bring them fame and fortune. They are often naturally reluctant to accept the negative results that more typically characterize much of research. Investigators may then revert to data mining, continuing to look until they convince themselves they have found what they were looking for. To enforce this condition and avoid such biases, certain techniques such as “blinding” may be included in the protocol, where neither the investigators nor the data takers and analyzers know what sample of data they are dealing with. For example, in doing a study on the efficacy of prayer, the investigators should not know who is being prayed for or who is doing the praying until all the data are in and ready to be analyzed.
  4. The hypothesis being tested must be one that contains the seeds of its own destruction. Those making the hypothesis have the burden of providing examples of possible experimental results that would falsify the hypothesis. They must demonstrate that such a falsification has not occurred. A hypothesis that cannot be falsified is a hypothesis that has no value.
  5. Even after passing the above criteria, reported results must be of such a nature that they can be independently replicated. Not until they are repeated under similar conditions by different (preferably skeptical) investigators will they be finally accepted into the ranks of scientific knowledge.

These conditions are desirable in any claim of knowledge; there is a time for unrestricted creativity (preceding ‘the’ scientific method) and there is a time for rigor (practicing ‘the’ scientific method). Would it be a good idea, a bad idea, or simply impossible to expect such conditions to be met by any scientific discipline? Which claims or disciplines are unable to meet these conditions, and (how) do they provide reliable knowledge? Although I tend to agree with Paul Feyerabend’s statement in “Against Method” (1975) that “The idea that science can, and should, be run according to fixed and universal rules, is both unrealistic and pernicious” (page 295), I can’t imagine how to achieve reliable knowledge without at least falsification (condition 4) and reproducibility (condition 5).

Surveillance and Democracy

I bought a copy of “Surveillance and Democracy”, 2010, eds. Haggerty and Samatas, printed by Routledge, and today I was blown away by the eloquent and accurate way in which the following text passages from the Introduction (!) chapter reflect (and clarified) my own concerns regarding surveillance (bold emphasis and typos are mine):

The first point to note is that today many surveillance practices are technological. Groundbreaking surveillance initiatives emerge out of laboratories with each new iteration of computer software or hardware. These augmented technological capacities are only rarely seen as necessitating explicit policy decisions, and as such disperse into society with little or no official political discussion. Or, alternatively, the comparatively slow timelines of electoral politics often ensure that any formal scrutiny of the dangers or desirability of surveillance technologies only occurs long after the expansion of the surveillance measure is effectively a fait accompli.

By default, then, many of the far-reaching questions about how surveillance systems will be configured occur in organizational back regions amongst designers and engineers, and therefore do not benefit from the input of a wider range of representative constituencies. Sclove (1995) has drawn attention to this technological democratic deficit, and calls for greater public input at the earliest stages of system design (see also Monahan in this volume). And while this is a laudable ambition, the prospect of bringing citizens into the design process confronts a host of pragmatic difficulties, not the least of which are established understandings of what constitutes relevant expertise in a technologized society.

Even when surveillance measures have been introduced by representative bodies this is no guarantee that these initiatives reflect the will of an informed and reasoned electorate. One of the more important dynamics in this regard concerns the long history whereby fundamental changes in surveillance practice and infrastructure have been initiated in times of national crisis. The most recent and telling example of this process occurred after 9/11 when many Western governments, the United States most prominently, passed omnibus legislation that introduced dramatic new surveillance measures justified as a means to enhance national security (Ball and Webster, 2003; Haggerty and Gazso, 2005; Lyon, 2003). This legislation received almost no political debate, and was presented to the public in such a way that it was impossible to appreciate the full implications of the proposed changes. This, however, was just the latest in the longstanding practice of politicians embracing surveillance at times of heightened fear. At such junctures one is more apt to encounter nationalist jingoism than measured debate about the merits and dangers of turning the state’s surveillance infrastructure on suspect populations.

The example of 9/11 accentuates the issue of state secrets, which can also limit the democratic oversight of surveillance. While few would dispute the need for state secrets, particularly in matters of national security, their existence raises serious issues insofar as the public is precluded from accessing the information needed to judge the actions of its leaders. In terms of surveillance, this can include limiting access to information about the operational dynamics of established surveillance systems, or even simply denying the existence of specific surveillance schemes. Citizens are asked (or simply expected) to trust that their leaders will use this veil of secrecy to undertake actions that the public would approve of if they were privy to the specific details. Unfortunately, history has demonstrated time and again that this trust is often abused, and knowledge of past misconduct feeds a political climate infused with populist conspiracy theories (Fenster, 2008). Indeed, one need not be paranoid to contemplate the prospect that, as surveillance measures are increasingly justified in terms of national security, a shadow “security state” is emerging — one empowered by surveillance, driven by a profit motive, cloaked in secrecy and unaccountable to traditional forms of democratic oversight (see Hayes in this volume).

(…)

Mitrou’s chapter also explores the possible anti-democratic implications of measures that make the average citizen more transparent. She analyzes new European measures designed to retain information about a citizen’s electronic communications. While some see this development as innocuous given that they do not store actual communication content, Mitrou accentuates how much potentially sensitive information can be derived from the data that is collected. As the public becomes more aware of such measures there is a risk that this will produce an anti-democratic chilling effect, as individuals, wary of how their information might be used in the future, start to self-censor any communications that could be construed as having political implications. Mitrou interrogates how these measures can limit the democratic rights to privacy, expression and freedom of movement.

Whereas most analysts of surveillance typically concentrate on the implications of one unique practice or technology, Hayes presents a disturbing vision of the overall direction of how assorted surveillance measures are being aligned in the ostensible service of securing the European Union. Rather than surveillance expanding in an ad-hoc fashion, he details an explicit agenda being pushed by non-representative agencies with strong ties to large international military and security firms. Their aim is to establish a form of domestic “full spectrum dominance” that relies on new information technologies to create a form of largely unaccountable control over all risks that different groups are imagined to pose.

The words “shadow security state” and “full spectrum dominance” made me frown a bit, but perhaps that language is justifiable; I’ll submit a separate blogpost reflecting on those passages after I’ve read the respective chapters.

References mentioned in these quotations (hyperlinks are mine):

Ball, Kirstie and Frank Webster. 2003. The Intensification of Surveillance. London: Pluto.
Fenster, Mark. 2008. Conspiracy Theories: Secrecy and Power in American Culture. Minnesota: University of Minnesota Press.
Haggerty, Kevin D. and Amber Gazso. 2005. “Seeing Beyond the Ruins: Surveillance as a Response to Terrorist Threats.” Canadian Journal of Sociology 30:169-87.
Lyon, David. 2003. Surveillance After September 11. London: Polity.
Sclove, Richard. 1995. Democracy and Technology. New York: Guilford.