I was recently sent a link to yet another of Dembski’s wretched writings about specified complexity, titled “Specification: The Pattern That Signifies Intelligence”.
While reading this, I came across a statement that actually changes my opinion of Dembski. Before reading this, I thought that Dembski was just a liar. I thought that he was a reasonably competent mathematician who was willing to misuse his knowledge in order to prop up his religious beliefs with pseudo-intellectual rigor. I no longer think that. I’ve now become convinced that he’s just an idiot who’s able to throw around mathematical jargon without understanding it.
In this paper, as usual, he spends rather a lot of time avoiding defining specification. Purportedly, he’s doing a survey of the mathematical techniques that can be used to define specification. Of course, he manages never to actually say just what the hell specification is; he just rambles on with various discussions of what it could be.
Most of which are wrong.
“But wait”, I can hear objectors saying. “It’s his theory! How can his own definitions of his own theory be wrong? Sure, his theory can be wrong, but how can his own definition of his theory be wrong?” Allow me to head off that objection before I continue.
Dembski’s theory of specified complexity as a discriminator for identifying intelligent design relies on the idea that there are two distinct quantifiable properties: specification and complexity. He argues that if you can find systems that possess sufficient quantities of both specification and complexity, then those systems cannot have arisen except by intelligent intervention.
But what if Dembski defines specification and complexity as the same thing? Then his definitions are wrong: he requires them to be distinct concepts, but he defines them as being the same thing.
Throughout this paper, he pretty much ignores complexity in order to focus on specification. He’s pretty careful never to say “specification is this”, but rather “specification can be this”. If you actually read what he does say about specification, and you go back and compare it to some of his other writings about complexity, you’ll find a positively amazing resemblance.
But onwards. Here’s the part that really blew my mind.
One of the methods that he purports to use to discuss specification is based on Kolmogorov-Chaitin algorithmic information theory. And in his explanation, he demonstrates a profound lack of comprehension of anything about KC theory.
First, he purports to discuss K-C theory within the framework of probability theory. K-C theory has nothing to do with probability theory. K-C theory is about what it means to quantify information; the central question of K-C theory is: how much information is in a given string? It defines the answer to that question in terms of computation: the information content of a string is measured by the size of the shortest program that can generate it.
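To make that concrete, here’s a minimal sketch of my own (nothing like it appears in Dembski’s paper). The “programs” are just Python source strings: a hundred repeated 1s can be generated by a tiny program, while a patternless string, like the coin-flip sequence Dembski quotes below, can’t be generated by anything much shorter than a program carrying the string itself as a literal.

```python
# Toy illustration of the Kolmogorov-Chaitin idea: the information content
# of a string is (roughly) the length of the shortest program that prints it.

# A short program suffices for a highly structured string of 100 ones:
prog_structured = "print('1' * 100)"

# For a patternless string (Dembski's 100 coin flips, quoted below), we know
# of no program much shorter than one embedding the whole string as a literal:
flips = ("11000011010110001101111111010001100011011001110111"
         "00011001000010111101110110011111010010100101011110")
prog_random = f"print('{flips}')"

print(len(prog_structured))  # 16 characters
print(len(prog_random))      # 109 characters, dominated by the data itself
```

On this measure, the all-ones string carries almost no information, while the coin-flip string carries roughly as much information as its own length.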
Now, the quotes that blew my mind:
Consider a concrete case. If we flip a fair coin and note the occurrences of heads and tails in order, denoting heads by 1 and tails by 0, then a sequence of 100 coin flips looks as follows:

(R) 11000011010110001101111111010001100011011001110111
00011001000010111101110110011111010010100101011110.

This is in fact a sequence I obtained by flipping a coin 100 times. The problem algorithmic information theory seeks to resolve is this: Given probability theory and its usual way of calculating probabilities for coin tosses, how is it possible to distinguish these sequences in terms of their degree of randomness? Probability theory alone is not enough. For instance, instead of flipping (R) I might just as well have flipped the following sequence:

(N) 11111111111111111111111111111111111111111111111111
11111111111111111111111111111111111111111111111111.

Sequences (R) and (N) have been labeled suggestively, R for “random,” N for “nonrandom.” Chaitin, Kolmogorov, and Solomonoff wanted to say that (R) was “more random” than (N). But given the usual way of computing probabilities, all one could say was that each of these sequences had the same small probability of occurring, namely, 1 in 2^100, or approximately 1 in 10^30. Indeed, every sequence of 100 coin tosses has exactly this same small probability of occurring.

To get around this difficulty Chaitin, Kolmogorov, and Solomonoff supplemented conventional probability theory with some ideas from recursion theory, a subfield of mathematical logic that provides the theoretical underpinnings for computer science and generally is considered quite far removed from probability theory.
It would be difficult to find a more misrepresentative description of K-C theory than this. This has nothing to do with the original motivation of K-C theory; it has nothing to do with the practice of K-C theory; and it has pretty much nothing to do with the actual value of K-C theory. This is, to put it mildly, a pile of nonsense spewed from the keyboard of an idiot who thinks that he knows something that he doesn’t.
But it gets worse.
Since one can always describe a sequence in terms of itself, (R) has the description

copy '11000011010110001101111111010001100011011001110111
00011001000010111101110110011111010010100101011110'.

Because (R) was constructed by flipping a coin, it is very likely that this is the shortest description of (R). It is a combinatorial fact that the vast majority of sequences of 0s and 1s have as their shortest description just the sequence itself. In other words, most sequences are random in the sense of being algorithmically incompressible. It follows that the collection of nonrandom sequences has small probability among the totality of sequences so that observing a nonrandom sequence is reason to look for explanations other than chance.
This is so very wrong that it demonstrates a total lack of comprehension of what K-C theory is about, how it measures information, or what it says about anything. No one who actually understands K-C theory would ever make a statement like Dembski’s quote above. No one.
But to make matters worse, this statement explicitly invalidates the entire concept of specified complexity. What this statement means, what it explicitly says if you understand the math, is that specification is the opposite of complexity. Anything which possesses the property of specification by definition does not possess the property of complexity.
In information-theoretic terms, complexity is incompressibility. But according to Dembski, specification is compressibility. Something that possesses “specified complexity” is therefore something which is simultaneously compressible and incompressible.
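You can watch the contradiction happen with a quick experiment. A real compressor only gives an upper bound on Kolmogorov complexity, but as a rough, computable stand-in it makes the point. This is my own sketch, not anything from Dembski’s paper; I’m using zlib purely as an approximation of compressibility:

```python
import random
import zlib

def compressed_size(data: bytes) -> int:
    # zlib output length is only an upper bound on Kolmogorov complexity,
    # but it's a serviceable computable stand-in for compressibility.
    return len(zlib.compress(data, 9))

n = 100_000
random.seed(0)
coin_flips = bytes(random.getrandbits(8) for _ in range(n))  # like (R), scaled up
repetition = bytes(n)                                        # like (N): one symbol repeated

print(compressed_size(repetition))  # at most a few hundred bytes: highly compressible
print(compressed_size(coin_flips))  # about n bytes: essentially incompressible
```

The repetitive sequence collapses to almost nothing; the random one doesn’t shrink at all. If specification means compressible and complexity means incompressible, then no string can score high on both, and “specified complexity” names an empty set.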
The only thing that saves Dembski is that he hedges everything that he says. He’s not saying that this is what specification means. He’s saying that this could be what specification means. But he also offers a half-dozen other alternative definitions – with similar problems. Anytime you point out what’s wrong with any of them, he can always say “No, that’s not specification. It’s one of the others.” Even if you go through the whole list of possible definitions, and show why every single one is no good – he can still say “But I didn’t say any of those were the definition”.
But the fact that he would even say this – that he would present this as even a possibility for the definition of specification – shows that Dembski quite simply does not get it. He believes that he gets it – he believes that he gets it well enough to use it in his arguments. But there is absolutely no way that he understands it. He is an ignorant jackass pretending to know things so that he can trick people into accepting his religious beliefs.