Dishonest Dembski: the Universal Probability Bound
One of the dishonest things Dembski frequently does, and that really bugs me, is to take bogus arguments and dress them up in mathematical terminology and verbosity to make them look more credible.
An example of this is Dembski’s *universal probability bound*. Dembski’s definition of the UPB from the [ISCID online encyclopedia][upb-icsid] is:
>A degree of improbability below which a specified event of that probability
>cannot reasonably be attributed to chance regardless of whatever
>probabilistic resources from the known universe are factored in. Universal
>probability bounds have been estimated anywhere between 10^-50 (Emile Borel)
>and 10^-150 (William Dembski).
He’s quantified it in several different ways. I’ve found three different versions of the calculation of the UPB: two of them from Wikipedia, and one from a message thread at ISCID whose author claims it’s a quote from one of Dembski’s books.
Let’s look at Dembski’s own words first:
>Specifically, within the known physical universe there are estimated to be no
>more than 10^80 elementary particles. Moreover, the properties of matter are
>such that transitions from one state to another cannot occur at a rate faster
>than 10^45 times per second. Finally, the universe itself is about a billion
>times younger than 10^25 seconds (assuming the universe is around 10 to 20
>billion years old). …these cosmological constraints imply that the total
>number of specified events throughout cosmic history cannot exceed
>10^80 × 10^45 × 10^25 = 10^150.
He goes on to assert that this is the “maximum number of trials” that could have occurred since the beginning of the universe, and that if an observed event is less probable than 1 in 10^150, it is not reasonable to attribute it to chance.
Wikipedia presents this definition, along with a more recent one which lowers the UPB; but since they don’t provide all of the details of that newer calculation, I’ll skip it for now. Wikipedia’s explanation of the original form of the UPB is:
>Dembski’s original value for the universal probability bound is 1 in 10^150,
>derived as the inverse of the product of the following approximate
>quantities:
>
> * 10^80, the number of elementary particles in the observable
> universe.
> * 10^45, the maximum rate per second at which transitions in
> physical states can occur (i.e., the inverse of the Planck time).
> * 10^25, a billion times longer than the typical estimated age of
> the universe in seconds.
>
>Thus, 10^150 = 10^80 × 10^45 × 10^25.
>Hence, this value corresponds to an upper limit on the number of physical
>events that could possibly have occurred since the big bang.
Here’s the fundamental dishonesty: None of those numbers have *anything* to do with what he’s supposedly trying to prove. He’s trying to create a formal-sounding version of the big-number problem by throwing together a bunch of fancy-sounding numbers, multiplying them together, and claiming that they somehow suddenly have meaning.
But they don’t.
It’s actually remarkably easy to show what utter nonsense this is. I’ll do a fancy one first, and a trivial one second.
Let’s create an incredibly simplified model of a region of space. Let’s say we have a cube of space, 1 kilometer on a side. Further, let’s suppose that this space contains 1000 particles, and that they are all electrons. And further, let’s suppose that each one-meter cube within this cubic kilometer can contain at most one electron, so there are 10^9 possible positions.
This is a model which is so much simpler than reality that it’s downright silly. But everything about the real world would make it more complex, and it’s sufficient for our purposes.
Now: consider the probability of any *configuration* of the electrons in the region of space. A configuration is a selection of the set of one-meter cubes that contain electrons. The number of different configurations of this region of space is (10^9)! / ((1000!) × (10^9 - 1000)!). That works out to (10^9 × (10^9 - 1) × (10^9 - 2) × … × (10^9 - 999)) / (1000!).
1000! is roughly 4×10^2567 according to my scheme interpreter. We’ll be generous, and use 1×10^2568, to make things easier. To estimate the numerator, we can treat it as (10^9) × ((10^8)^999), which will be much smaller than the true value. That’s 10^8001. So the probability of any particular configuration within that cube is smaller than 1 in 10^5433.
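If you’d rather not trust my scheme interpreter, here’s a quick cross-check (a throwaway sketch in Python, not part of the original argument; the log-gamma trick just avoids building the gigantic integers, and the numbers are the ones from the setup above):

```python
from math import lgamma, log

def log10_factorial(n):
    """log10(n!) computed via the log-gamma function, so no huge integers are built."""
    return lgamma(n + 1) / log(10)

def log10_binomial(n, k):
    """log10 of the binomial coefficient C(n, k) = n! / (k! * (n-k)!)."""
    return log10_factorial(n) - log10_factorial(k) - log10_factorial(n - k)

cells = 10**9       # one-meter cells in a cubic kilometer
electrons = 1000

print(log10_factorial(electrons))        # ~2567.6, i.e. 1000! is about 4 x 10^2567
print(log10_binomial(cells, electrons))  # ~6432: about 10^6432 possible configurations
# So the exact probability of any one configuration is about 1 in 10^6432:
# even smaller than the rough 1-in-10^5433 bound above, and absurdly far
# beyond Dembski's 1-in-10^150 UPB.
```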
So any state of particles within that cube is an event with probability considerably smaller than 1 in 10^5433. So what Dembski is saying is that *every* possible configuration of matter in space in the entire universe is impossible without intelligent intervention.
And the trivial one? Grab two decks of distinguishable cards. Shuffle them together, and lay them out for a game of spider solitaire. What’s the probability of that particular lay of cards? 104!, or, very roughly, something larger than 1×10^166. Is god personally arranging my cards every time I play spider?
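That one is small enough to compute exactly; here’s the one-liner in Python (again just a sanity check, not part of the argument above):

```python
from math import factorial, log10

# Number of orderings of 104 distinguishable cards (two full decks shuffled together).
orderings = factorial(104)
print(log10(orderings))   # ~166.01, so 104! is a little over 1 x 10^166
```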
Anyone who’s ever taken any class on probability *knows* this stuff. One college-level intro course, and you know that routine daily events can have incredibly small probabilities – far smaller than his alleged UPB. But Dembski calls himself a mathematician, and he writes about probability quite frequently. As much as I’ve come to believe that he’s an idiot, things like this just don’t fit: he *must* know that this is wrong, but he continues to publish it anyway.
[upb-icsid]: http://www.iscid.org/encyclopedia/Universal_Probability_Bound
Mark, you need to correct the typo in your quoting of his definition: you lost the “-” before the “50” in 10E-50. It’s jarring.
Bob:
Good catch, thanks. It’s fixed now.
Thank you for doing this! When I first heard of the UPB (and the Explanatory Filter that it goes with), I kept wondering what I was missing. What I understood it to be was so ridiculous it would be laughed out of an undergraduate course in probability. Nice to know that it’s not me, but Dembski, who can’t handle lower-division math.
It seems that what Dembski is trying to calculate is the total number of conceivable histories of the universe, which fundamentally has nothing to do with probability.
There’s a sharp distinction to be made between counting the number of possible states and assigning a probability distribution over those states.
Leaving that aside, where he goes wrong is the point where he directly inserts the number of elementary particles — what he should have used instead is the number of conceivable configurations of those 10^80 particles. The other two factors seem reasonable enough, in principle.
Good stuff. I never stopped to think about this calculation before – other than a vague feeling it was unrealistic. Now I am trying to work out what was in Dembski’s mind. I think he has attempted to calculate the total number of physical states the known universe has been in since it began. As you point out this is much smaller than the number of possible states.
I am going to direct a couple of IDists to your post to see what the reaction is.
I know this is nitpicking, but hey that is what I learned in school.
Probability is always between 0 and 1, both included, so the probability of the lay of cards must be 1 in 104!, not 104!, which makes it smaller than 1 in 1×10^166.
An even smaller nitpick: why 1×10^166, why not simply 10^166?
I’ll go back to lurking now
What are the chances that I would end up here saying “Jeez Louise, even I know better than that”? Astronomically small, I’m sure.
I don’t know if it’s dishonesty or stupidity, but whatever Dembski has, I doubt it has any universal bounds.
>So any state of particles within that cube is an event with probability considerably smaller than 1 in 10^5433. So what Dembski is saying is that every possible configuration of matter in space in the entire universe is impossible without intelligent intervention.
I think one of us has misread him somewhat. I read it as referring to the probabilistic resources required to achieve a specified goal. So a configuration would only present a problem for Dembski if it happened to be specified. Specified events are, by definition, not supposed to crop up much, so such a situation would rarely occur.
Of course, this presents two further problems. Firstly, as you’ve mentioned before, the whole concept of specification is complete bollocks. AFAICT it basically boils down to how cool something looks – if it makes Dembski go “whoa…” it’s specified.
Secondly, the whole concept of probabilistic resources is completely borked. It assumes a discrete universe. We do not live in a discrete universe. It assumes that every configuration has effectively the same chance of appearing. This is blatant bull.
So I’d say he’s wrong, just not in precisely the way you describe.
Is it just me, or is Mark’s cubic-kilometre of negatively-charged space a superb vacuum by human standards?
I agree with Canuckistani’s comment. Dembski’s not trying to bound the number of possible states of the universe, but something like the maximum number of trials of a given probabilistic experiment that could conceivably occur in the lifetime of the universe.
My main objection, as always, is that this has nothing to do with evolution. More below.
My secondary objection is that his formula doesn’t exactly work and that casting it as a “UPB” is just pompous and silly.
Another objection that I’m not quite sure about and is most likely irrelevant to evolution is that it ignores quantum superposition. I think that even interactions of a few particles can imply exponentially many possible quantum states that exist simultaneously. (But this is a point that I don’t fully understand.)
The main point I would make about evolution is that it does not involve any extremely low probabilities if you group the outcomes properly.
It’s true that there is a negligible probability of getting humans exactly like us through an evolutionary process, even if you started with a eukaryotic cell under conditions similar to those of the first such cell on earth. But the probability of getting some kind of intelligent life is almost certainly much higher (hard to be more specific given available information).
For another example, we can make a better guess at the probability of evolving an eye with a lens (that old creationist bugbear). Vertebrates and cephalopods both have them, but their common ancestor did not. So while the probability of evolving something exactly like the human eye is vanishingly small, it looks like the evolution of something equivalent was close to inevitable once earth’s ecosystem was capable of supporting multicellular organisms.
To use a vague card analogy, creationist probability arguments are invariably fixated on the low probability of drawing a straight royal flush, while the relevant number is the reasonable probability of improving your hand by replacing your weakest card with one drawn at random.
Finally, you could define something like Dembski wants to define if the universe consisted of a finite string of discrete bits (but it doesn’t, as far as I know). For instance, you could have 1000 bits that each change state uniformly at random 1000 times over the lifetime of that “universe”. Then you could ask if a given bit string of a certain length was likely to appear as a substring in the course of that experiment. It’s very simple to see that a 1-bit string will appear many times (about half the bits will have that value), and a 10-bit string will appear less often, but will still be common (since there are just 1024 of them). Any given 100-bit string will have negligibly low probability of appearing at all. There’s nothing wrong with this kind of analysis, but it is just meaningless to apply it to evolutionary processes, which have nothing to do with picking complete, complex objects out of uniform distributions.
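PaulC’s toy universe is concrete enough to simulate. Here’s a minimal sketch in Python of one way to read it (my own assumptions: each “change of state” re-randomizes the whole 1000-bit string, giving 1000 uniform snapshots, and the specific 10-bit pattern below is just an arbitrary example):

```python
import random

random.seed(0)
WIDTH, STEPS = 1000, 1000   # 1000 bits, re-randomized 1000 times

def count_occurrences(pattern):
    """Count how often `pattern` appears as a substring across all snapshots."""
    total = 0
    for _ in range(STEPS):
        snapshot = ''.join(random.choice('01') for _ in range(WIDTH))
        total += sum(snapshot.startswith(pattern, i)
                     for i in range(WIDTH - len(pattern) + 1))
    return total

print(count_occurrences('1'))            # ~500,000: half the bits, at every step
print(count_occurrences('1011011101'))   # ~970: a fixed 10-bit pattern is still routine
# A fixed 100-bit pattern has an expected count of STEPS * (WIDTH - 99) * 2**-100,
# about 7e-25, so in practice it never appears at all, just as described above.
```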
PaulC, your comments are often of (considerably) higher quality than the original posts on otherwise excellent blogs. Thanks.
“Another objection that I’m not quite sure about and is most likely irrelevant to evolution is that it ignores quantum superposition. I think that even interactions of a few particles can imply exponentially many possible quantum states that exist simultaneously. (But this is a point that I don’t fully understand.)”
Entropy sets another bound on the possible number of states irregardless of quantum mechanics. “entropy is envisioned as a measure of the number of microscopic configurations that are capable of yielding the observed macroscopic description of the thermodynamic system” ( http://en.wikipedia.org/wiki/Entropy ). Entropy is classically bounded in the observable universe.
Which gets me to my own objections.
1) “probabilistic resources from the known universe” makes no sense.
2) We don’t know how large the universe is, only how large the observable universe is. A big bang universe may be infinite, and that is the most parsimonious assumption. At least the time direction seems to be infinite. A conservative UPB should be zero.
3) A UPB should not vary depending on the currently observed volume and time. The UPB that fulfills that requirement is 0.
“…irregardless…”
Ow.
We now return to your comment thread, already in progress. Sorry for the interruption.
Nitpick: Unless the ace of spades from deck 1 is significantly different from the ace of spades from deck 2, 104! is not the number of *different* configurations. I think it would be 104!/(2^52).
One very common error of large-number bad math involves not realizing that you are counting the same thing multiple times.
Torbjorn’s point is also interesting: perhaps the time and space defined by the observable universe is not representative of the entire universe, but we see the part where we are because it’s a part which contains someone capable of seeing. (This applies even more strongly to multiple-universe interpretations.) Thus, calculations based on the size of the observable universe underestimate the population from which the event was drawn.
Not that it makes any difference – he only makes up the UPB in order to abuse it by stringing together non-independent events and probabilities pulled out of his ass.
Chris:
One sort of obvious objection to the finality of low probability arguments is the effect of the many-worlds interpretation of quantum events–the notion that every time a quantum event appears to choose randomly between multiple outcomes, there are actually multiple timelines (or whatever) spawned in which each one happens.
The main problem I have with this idea is that it’s unfalsifiable. Conceivably, there could be universes getting spawned 10^45 times/second, but if you cannot detect them, then there is not much that science can say about them. So I don’t take this too seriously. I believe Dembski himself has explicitly denounced such models, but the main reason in his case is that they render his methodology completely worthless.
That is, if you assume that some possible series of quantum outcomes, however unlikely, could lead to the existence of intelligent life then there would be no big surprise that we happen to find ourselves in one of the timelines in which intelligent life exists. The process of observation restricts our sample space to those timelines in which there is an observer.
The above strikes me as a little silly and clearly unfalsifiable, but not obviously sillier than positing a designer. I mean, if all you need is a lot more dumb stuff going on than you thought there was, that is still a more parsimonious explanation than some mysteriously super-intelligent entity.
Finally, I don’t think that you need any such exotica. Self-organization occurs in very ordinary dynamic systems. It is not a low probability event, but often a near certainty. And to go back to the example of the human and octopus eyes evolving independently, that cannot be explained by restricting our sample space to timelines with an observer; rather, it suggests that such events are highly probable in an ecosystem such as ours. So all of these variants of the monkeys-at-a-typewriter scenario are amusing thought experiments but largely irrelevant to the process of evolution.
Of course Dembski knows he’s doing doubletalk. More importantly, he knows the DaveScots and Salvador Cordovas of the world can’t tell the difference.
Steve s,
I agree wholeheartedly. Dembski has to know better. He isn’t, contrary to some passionate assertions, actually stupid. He’s well-educated. That just makes him a less moral creature than most of us for his deception.
The midsummer holiday was long and eventful, apparently so was this thread.
“”…irregardless…”
Ow.”
Sorry, my bad. I was probably tired, thus contaminating regardless and irrespective. My english part of the brain hasn’t the acute spelling alarms of my native part…
I should note that not only is the infinite big bang universe the most parsimonious assumption, inflationary flatness also argues for a large universe, since a smaller inflated universe would still be curved.
Multiverses of several kinds are presented, for example, on Max Tegmark’s web page. I agree with PaulC that choosing the many-worlds QM interpretation is not especially warranted – the other interpretations are just as useful, given the lack of experimental evidence for choosing between them. Many-worlds avoids some problems that the others have, but it has problems of its own.
However, multiverses are nice generalisations of universality. Especially Hawking’s no-boundary multiverse, since each universe has its own time, instead of the shared infinite time of the endless-inflation multiverse. That a part of a theory lacks falsifiability shouldn’t be a concern as long as the theory makes some other new predictions. Hopefully this will happen with multiverses, if they are the correct cosmologies.
Hi Mark:
There is a great cartoon in Doonesbury Sunday 2 JUL about evolution / i-design.
http://news.yahoo.com/comics/uclickcomics/20060702/cx_db_uc/db20060702
I have used a similar argument with fundies before. Something like: “This post contains 112 characters. Assuming randomness and 50 alphanumeric characters, the chance of my typing this exactly is 1 in 50^112.” That is impossible. Since I can do the impossible, I am God. WORSHIP ME, FOUL WORTHLESS SCUM!!
Sigh! Again I forgot Typekey and got nameless and stuck in a queue. It was me who had the long holiday and anonymous post.
Oooh! But when you play solitaire an intelligent agency is laying out the cards. Oooh!
Yes, but what’s the probability of a certain pattern of marbles on the floor after falling from a table being shaken by an earthquake? Or did God cause the earthquake? Oh, but intelligent people had to make the marbles and the table. Oh, and God must have made the glass and the wood. See? Wood is alive and therefore intelligently designed. ID wins again!