Author: mrkoot

Cyberwar = Propaganda (isn’t it?)

Bill Blunden’s paper “Manufacturing Consent and Cyberwar” (.pdf), written for the Lockdown 2010 conference, deserves more attention and discussion, IMHO. Obviously referencing “Manufacturing Consent: The Political Economy of the Mass Media” (1988) by Edward S. Herman and Noam Chomsky, Blunden discusses the dangers of an “offense is the best defense” crisis mentality, the (mis)attribution of attacks, and weakly founded claims about (future) threats by security firms and the media; taken together, these may resemble the propaganda model if, updating it to the current realm of discourse, one replaces Herman/Chomsky’s “anti-communist” filter with a perhaps more generally apocalyptic “FUD” filter (better suggestions are welcome). The paper is well-written. Its abstract:

Over the past year, there have been numerous pieces that have appeared in the press alluding to the dire consequences of Cyberwar and the near existential threat that it represents to the United States. While these intimations of destruction can seem alarming at first glance, closer scrutiny reveals something else. Ultimately, the gilded hyperbole of Cyberwar being peddled to the public is dangerous because it distracts us from focusing on actual threats and constructive solutions. Pay no attention to the man behind the curtain says the ball of fire named Oz. In this presentation, I’ll pull back the curtain to expose the techniques being used to manipulate us and the underlying institutional dynamics that facilitate them.

Slides (.pdf) are available as well.

(PS: yes I see the irony of posting “Cyberwar = Propaganda” on “blog.cyberwar.nl”)

The Sokal affair

In 1996, Alan Sokal gained notoriety for getting his (intentionally) bullshit paper “Transgressing the Boundaries: Toward a Transformative Hermeneutics of Quantum Gravity” published in the cultural studies journal Social Text. This has become known as the ‘Sokal affair’. In “Fashionable Nonsense: Postmodern Intellectuals’ Abuse of Science” (1997), physicists Alan Sokal and Jean Bricmont explain why Sokal’s parody paper is bullshit and identify the (probably undesirable) conditions in (mostly French) postmodernist thinking that (probably) made it possible for the parody to be accepted by a journal. Excellent book. The affair demonstrated the unwarranted disqualification of scientific methods as a result of overly relativistic “anything goes” thinking, and the unjustified and often improper use of concepts from mathematics, physics and logic.

The lessons Sokal and Bricmont suggest drawing (pages 185-189):

  1. It’s a good idea to know what one is talking about (don’t apply concepts you don’t understand);
  2. Not all that is obscure is necessarily profound (strive for easy-to-understand language);
  3. Science is not a “text” and can’t be analyzed in a purely verbal manner;
  4. Don’t ape the natural sciences or their “paradigm shifts” (e.g. between probabilist and determinist theories);
  5. Be wary of argument from authority (this can’t be repeated enough);
  6. Specific skepticism should not be confused with radical skepticism (“scientific theory X is bogus” versus “all scientific theories are bogus”);
  7. Be aware that ambiguity may be (ab)used as subterfuge.

How to Read a Scientific Paper

Questions to ask when reading/reviewing a scientific paper:

  1. What questions does the paper address?
  2. What are the main conclusions of the paper?
  3. What evidence supports those conclusions?
  4. Do the data actually support the conclusions?
  5. What is the quality of the evidence?
  6. Why are the conclusions important?

I suggest the following subquestions:

  • If the paper contains a hypothesis, is it falsifiable?
  • (How) Is the work reproducible? (What would you need to reproduce it?)
  • What does the paper contribute to the existing body of knowledge?
  • Are the applied methods explained, valid and reliable? (e.g. statistical tests)
  • Are the limitations of the work acknowledged?

Naturalistic Conception of Science

Several years ago I read the Dutch book “Wetenschap of Willekeur” (1985) by A.A. Derksen, which (still!) is an excellent introduction to the philosophy of science. Can you read Dutch? Buy a copy! 🙂 The book contains one diagram which IMHO summarizes the (?) naturalistic conception of science pretty well, and for educational purposes I reproduced and translated it:

[Diagram: translated reproduction of Derksen’s schema of the naturalistic conception of science]

P.S.: I expect this blogpost to be covered by the Dutch right of citation (‘citaatrecht’), please inform me if I’m wrong.

Conditions for Considering Scientific Claims

In his book “God: The Failed Hypothesis”, Victor J. Stenger defined the following five “Conditions for Considering Extraordinary Claims”:

  1. The protocols of the study must be clear and impeccable so that all possibilities of error can be evaluated. The investigators, not the reviewers, carry the burden of identifying each possible source of error, explaining how it was minimized, and providing a quantitative estimate of the effect of each error. These errors can be systematic—attributable to biases in the experimental set up—or statistical—the result of chance fluctuations. No new effect can be claimed unless all the errors are small enough to make it highly unlikely that they are the source of the claimed effect.
  2. The hypotheses being tested must be established clearly and explicitly before data taking begins, and not changed midway through the process or after looking at the data. In particular, “data mining” in which hypotheses are later changed to agree with some interesting but unanticipated results showing up in the data is unacceptable. This may be likened to painting a bull’s-eye around wherever an arrow has struck. That is not to say that certain kinds of exploratory observations, in astronomy, for example, may not be examined for anomalous phenomena. But they are not used in hypothesis testing. They may lead to new hypotheses, but these hypotheses must then be independently tested according to the protocols I have outlined.
  3. The people performing the study, that is, those taking and analyzing the data, must do so without any prejudgment of how the results should come out. This is perhaps the most difficult condition to follow to the letter, since most investigators start out with the hope of making a remarkable discovery that will bring them fame and fortune. They are often naturally reluctant to accept the negative results that more typically characterize much of research. Investigators may then revert to data mining, continuing to look until they convince themselves they have found what they were looking for. To enforce this condition and avoid such biases, certain techniques such as “blinding” may be included in the protocol, where neither the investigators nor the data takers and analyzers know what sample of data they are dealing with. For example, in doing a study on the efficacy of prayer, the investigators should not know who is being prayed for or who is doing the praying until all the data are in and ready to be analyzed.
  4. The hypothesis being tested must be one that contains the seeds of its own destruction. Those making the hypothesis have the burden of providing examples of possible experimental results that would falsify the hypothesis. They must demonstrate that such a falsification has not occurred. A hypothesis that cannot be falsified is a hypothesis that has no value.
  5. Even after passing the above criteria, reported results must be of such a nature that they can be independently replicated. Not until they are repeated under similar conditions by different (preferably skeptical) investigators will they be finally accepted into the ranks of scientific knowledge.
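Condition 1’s requirement that no effect be claimed unless it clearly exceeds all quantified errors can be made concrete with a small sketch (my own illustration, not from Stenger’s book): assuming independent errors, statistical and systematic uncertainties are commonly combined in quadrature, and the measured effect is then compared against a significance threshold (the 5-sigma convention from particle physics is assumed here; function and parameter names are hypothetical).

```python
import math

def effect_is_claimable(effect, stat_err, sys_errs, threshold_sigma=5.0):
    """Check whether a measured effect exceeds its combined uncertainty.

    Statistical and systematic errors are combined in quadrature
    (assuming they are independent); the effect is only 'claimable'
    when it exceeds threshold_sigma times the total error.
    """
    total_err = math.sqrt(stat_err ** 2 + sum(e ** 2 for e in sys_errs))
    return abs(effect) > threshold_sigma * total_err

# A 10-unit effect with a statistical error of 1.5 and two systematic
# errors of 0.8 and 0.5 (total error ~1.79, so 5 sigma ~8.9):
print(effect_is_claimable(10.0, 1.5, [0.8, 0.5]))  # True
print(effect_is_claimable(3.0, 1.5, [0.8, 0.5]))   # False
```

The choice of threshold and the assumption of independence are themselves part of the protocol that, per condition 1, the investigators must justify.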

These conditions are desirable in any claim of knowledge; there is a time for unrestricted creativity (preceding ‘the’ scientific method) and there is a time for rigor (practicing ‘the’ scientific method). Would it be a good idea, a bad idea, or simply impossible to expect such conditions to be met by any scientific discipline? Which claims or disciplines are unable to meet these conditions, and (how) do they provide reliable knowledge? Although I tend to agree with Paul Feyerabend’s statement in “Against Method” (1975) that “The idea that science can, and should, be run according to fixed and universal rules, is both unrealistic and pernicious” (page 295), I can’t imagine how to achieve reliable knowledge without at least falsification (condition 4) and reproducibility (condition 5).