
Conditions for Considering Scientific Claims

In his book God: The Failed Hypothesis, Victor J. Stenger defined the following five “Conditions for Considering Extraordinary Claims”:

  1. The protocols of the study must be clear and impeccable so that all possibilities of error can be evaluated. The investigators, not the reviewers, carry the burden of identifying each possible source of error, explaining how it was minimized, and providing a quantitative estimate of the effect of each error. These errors can be systematic—attributable to biases in the experimental set up—or statistical—the result of chance fluctuations. No new effect can be claimed unless all the errors are small enough to make it highly unlikely that they are the source of the claimed effect.
  2. The hypotheses being tested must be established clearly and explicitly before data taking begins, and not changed midway through the process or after looking at the data. In particular, “data mining” in which hypotheses are later changed to agree with some interesting but unanticipated results showing up in the data is unacceptable. This may be likened to painting a bull’s-eye around wherever an arrow has struck. That is not to say that certain kinds of exploratory observations, in astronomy, for example, may not be examined for anomalous phenomena. But they are not used in hypothesis testing. They may lead to new hypotheses, but these hypotheses must then be independently tested according to the protocols I have outlined.
  3. The people performing the study, that is, those taking and analyzing the data, must do so without any prejudgment of how the results should come out. This is perhaps the most difficult condition to follow to the letter, since most investigators start out with the hope of making a remarkable discovery that will bring them fame and fortune. They are often naturally reluctant to accept the negative results that more typically characterize much of research. Investigators may then revert to data mining, continuing to look until they convince themselves they have found what they were looking for. To enforce this condition and avoid such biases, certain techniques such as “blinding” may be included in the protocol, where neither the investigators nor the data takers and analyzers know what sample of data they are dealing with. For example, in doing a study on the efficacy of prayer, the investigators should not know who is being prayed for or who is doing the praying until all the data are in and ready to be analyzed.
  4. The hypothesis being tested must be one that contains the seeds of its own destruction. Those making the hypothesis have the burden of providing examples of possible experimental results that would falsify the hypothesis. They must demonstrate that such a falsification has not occurred. A hypothesis that cannot be falsified is a hypothesis that has no value.
  5. Even after passing the above criteria, reported results must be of such a nature that they can be independently replicated. Not until they are repeated under similar conditions by different (preferably skeptical) investigators will they be finally accepted into the ranks of scientific knowledge.

These conditions are desirable in any claim of knowledge; there is a time for unrestricted creativity (preceding ‘the’ scientific method) and there is a time for rigor (practicing ‘the’ scientific method). Would it be a good idea, a bad idea, or simply impossible to expect such conditions to be met by every scientific discipline? Which claims or disciplines are unable to meet these conditions, and (how) do they provide reliable knowledge? Although I tend to agree with Paul Feyerabend’s statement in “Against Method” (1975) that “The idea that science can, and should, be run according to fixed and universal rules, is both unrealistic and pernicious” (page 295), I can’t imagine how to achieve reliable knowledge without at least falsification (condition 4) and reproducibility (condition 5).
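
To make the “bull’s-eye” problem of condition 2 concrete, here is a minimal simulation. This is my own sketch, not anything from Stenger’s book, and every name and parameter in it (TRIALS, VARIABLES, and so on) is an illustrative choice of mine. It generates pure noise and compares a pre-registered test of one fixed outcome against “mining” the most impressive-looking of twenty outcomes; it needs only the Python standard library.

    import math
    import random

    random.seed(42)

    TRIALS = 2000      # number of simulated studies
    VARIABLES = 20     # outcomes available for post-hoc "mining"
    SAMPLES = 100      # observations per outcome
    ALPHA = 0.05       # conventional significance threshold

    def p_value(observations):
        # Two-sided z-test for "the mean differs from zero"; the population
        # standard deviation is 1 by construction, so a z-test is adequate.
        n = len(observations)
        mean = sum(observations) / n
        z = mean * math.sqrt(n)
        return math.erfc(abs(z) / math.sqrt(2))

    prereg_hits = 0  # false positives with one hypothesis fixed in advance
    mined_hits = 0   # false positives when we pick the best-looking outcome

    for _ in range(TRIALS):
        # Pure noise: no real effect exists anywhere in the data.
        data = [[random.gauss(0, 1) for _ in range(SAMPLES)]
                for _ in range(VARIABLES)]
        p_values = [p_value(column) for column in data]
        if p_values[0] < ALPHA:    # pre-registered: test outcome 0, and only outcome 0
            prereg_hits += 1
        if min(p_values) < ALPHA:  # data mining: paint the bull's-eye afterwards
            mined_hits += 1

    print("pre-registered false-positive rate:", prereg_hits / TRIALS)
    print("data-mined false-positive rate:    ", mined_hits / TRIALS)

With twenty independent outcomes of pure noise, cherry-picking the best one should yield a spurious “discovery” in roughly 1 − 0.95^20 ≈ 64% of studies, while the pre-registered test stays near the nominal 5%. That is exactly why condition 2 insists that hypotheses be fixed before anyone looks at the data.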

Surveillance and Democracy

I bought a copy of “Surveillance and Democracy” (2010, eds. Haggerty and Samatas, published by Routledge), and today I was blown away by how eloquently and accurately the following passages from the Introduction (!) chapter reflect (and clarified) my own concerns regarding surveillance (bold emphasis and typos are mine):

The first point to note is that today many surveillance innovations are technological. Groundbreaking surveillance initiatives emerge out of laboratories with each new iteration of computer software or hardware. These augmented technological capacities are only rarely seen as necessitating explicit policy decisions, and as such disperse into society with little or no official political discussion. Or, alternatively, the comparatively slow timelines of electoral politics often ensure that any formal scrutiny of the dangers or desirability of surveillance technologies only occurs long after the expansion of the surveillance measure is effectively a fait accompli.

By default, then, many of the far-reaching questions about how surveillance systems will be configured occur in organizational back regions amongst designers and engineers, and therefore do not benefit from the input of a wider range of representative constituencies. Sclove (1995) has drawn attention to this technological democratic deficit, and calls for greater public input at the earliest stages of system design (see also Monahan in this volume). And while this is a laudable ambition, the prospect of bringing citizens into the design process confronts a host of pragmatic difficulties, not the least of which are established understandings of what constitutes relevant expertise in a technologized society.

Even when surveillance measures have been introduced by representative bodies this is no guarantee that these initiatives reflect the will of an informed and reasoned electorate. One of the more important dynamics in this regard concerns the long history whereby fundamental changes in surveillance practice and infrastructure have been initiated in times of national crisis. The most recent and telling example of this process occurred after 9/11 when many Western governments, the United States most prominently, passed omnibus legislation that introduced dramatic new surveillance measures justified as a means to enhance national security (Ball and Webster, 2003; Haggerty and Gazso, 2005; Lyon, 2003). This legislation received almost no political debate, and was presented to the public in such a way that it was impossible to appreciate the full implications of the proposed changes. This, however, was just the latest in the longstanding practice of politicians embracing surveillance at times of heightened fear. At such junctures one is more apt to encounter nationalist jingoism than measured debate about the merits and dangers of turning the state’s surveillance infrastructure on suspect populations.

The example of 9/11 accentuates the issue of state secrets, which can also limit the democratic oversight of surveillance. While few would dispute the need for state secrets, particularly in matters of national security, their existence raises serious issues insofar as the public is precluded from accessing the information needed to judge the actions of its leaders. In terms of surveillance, this can include limiting access to information about the operational dynamics of established surveillance systems, or even simply denying the existence of specific surveillance schemes. Citizens are asked (or simply expected) to trust that their leaders will use this veil of secrecy to undertake actions that the public would approve of if they were privy to the specific details. Unfortunately, history has demonstrated time and again that this trust is often abused, and knowledge of past misconduct feeds a political climate infused with populist conspiracy theories (Fenster, 2008). Indeed, one need not be paranoid to contemplate the prospect that, as surveillance measures are increasingly justified in terms of national security, a shadow “security state” is emerging — one empowered by surveillance, driven by a profit motive, cloaked in secrecy and unaccountable to traditional forms of democratic oversight (see Hayes in this volume).

(…)

Mitrou’s chapter also explores the possible anti-democratic implications of measures that make the average citizen more transparent. She analyzes new European measures designed to retain information about a citizen’s electronic communications. While some see this development as innocuous given that the measures do not store actual communication content, Mitrou accentuates how much potentially sensitive information can be derived from the data that is collected. As the public becomes more aware of such measures there is a risk that this will produce an anti-democratic chilling effect, as individuals wary of how their information might be used in the future start to self-censor any communications that could be construed as having political implications. Mitrou interrogates how these measures can limit the democratic rights to privacy, expression and freedom of movement.

Whereas most analysts of surveillance typically concentrate on the implications of one unique practice or technology, Hayes presents a disturbing vision of the overall direction of how assorted surveillance measures are being aligned in the ostensible service of securing the European Union. Rather than surveillance expanding in an ad-hoc fashion, he details an explicit agenda being pushed by non-representative agencies with strong ties to large international military and security firms. Their aim is to establish a form of domestic “full spectrum dominance” that relies on new information technologies to create a form of largely unaccountable control over all risks that different groups are imagined to pose.

The words “shadow security state” and “full spectrum dominance” made me frown a bit, but perhaps that language is justifiable; I’ll write a separate blog post reflecting on those passages after I’ve read them.

References mentioned in the quoted passages:

Ball, Kirstie and Frank Webster. 2003. The Intensification of Surveillance. London: Pluto.
Fenster, Mark. 2008. Conspiracy Theories: Secrecy and Power in American Culture. Minneapolis: University of Minnesota Press.
Haggerty, Kevin D. and Amber Gazso. 2005. “Seeing Beyond the Ruins: Surveillance as a Response to Terrorist Threats.” Canadian Journal of Sociology 30:169-87.
Lyon, David. 2003. Surveillance After September 11. London: Polity.
Sclove, Richard. 1995. Democracy and Technology. New York: Guilford.