A while ago, I wrote about Dembski’s definition of specified complexity, arguing that it was a nonsensical pile of rubbish, because “specified complexity” likes to present itself as a combination of two distinct concepts: specification and complexity. In various places, Dembski has been fairly clear that his complexity is equivalent to Kolmogorov-Chaitin information complexity, meaning that a complex entity has *high* K-C information content; and in [my debunking of a paper where Dembski tried to define specification][debunk-spec], I argue that his definition of specification is basically an entity with *low* K-C information content. Put those two together, and an entity with specified complexity is “an entity with simultaneously high and low K-C information content.”
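As a quick reminder of the K-C idea: the complexity of a string is the length of the shortest program that generates it. Here’s a minimal intuition pump in Scheme (my own illustration, nothing to do with Dembski’s formalism):

    ;; This tiny generator is a complete description of a 10,000-character
    ;; string, so that string has *low* K-C complexity. A random string of
    ;; the same length has no generator much shorter than itself, so it
    ;; has *high* K-C complexity.
    (define (repeat-string s n)
      (if (= n 0)
          ""
          (string-append s (repeat-string s (- n 1)))))

    ;; (repeat-string "ab" 5000) => "abab...ab" (10,000 characters)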
In the comments on that post, and in some rather abusive emails, some Dembski backers took me to task, alleging that I was misrepresenting the IDists’ view of specified complexity: that the definition of specification used by IDists was *not* low K-C complexity, and that therefore the definition of specified complexity was *not* self-contradictory.
Well, I was doing a web search to try to find an article where Dembski makes his case for a fourth law of thermodynamics, and in the search results, I came across a [very interesting discussion thread][iscid-thread] at ISCID (the “International Center for Complexity, Information, and Design”, an alleged professional society of which William Dembski is a fellow in mathematics). In the thread, Salvador Cordova tries to argue that Dembski’s “Fourth Law” actually subsumes the second law of thermodynamics, and in the course of it, he attempts to define “specified complexity”. The thread started back in the spring of 2005, and continues to this day. Here’s Sal:
>The definition of Specified Complexity you gave is closer to Irreducible Complexity.
>
>Specified Complexity has this thing that is called “Probabilistic Complexity” which means simply
>that it’s improbable.
>
>These definitions are understandably confusing at first, but surmountable.
>
>We have many complexities involved, and seriously each one should be explored, but I’ll have to go
>into the details later:
>
>* Probabilistic Complexity
>* Specificational Complexity
>* Specified Complexity
>* Irreducible Complexity
>* Kolmogorov Complexity
>
>All of these are in Dembski’s book, and should be treated with care, lest one becomes totally
>confused. The diagram addresses 3 of the 5 complexities listed above explicitly.
>
>Probabilistic and Specificational Complexity require a separate discussion.
>
>Specified complexity has these features (per Design Revolution, page 84)
>
>1. Low Specificational Complexity
>2. High Probabilistic Complexity
Sal attempts to wiggle around it, but by his own words, an entity with low specificational complexity is one whose description has *low* K-C complexity. “Probabilistic complexity” is, as far as I know, Sal’s addition. In Dembski’s writings, he’s been fairly clear that complexity means complexity in the information-theory sense – see the Dembski paper linked above, which quotes him explaining why “probabilistic complexity” isn’t sufficient and introducing K-C complexity to fix it.
So – one of Dembski’s associates, in an extensive message thread quoting Dembski’s books, says that specification is *low* K-C complexity, and tries to wiggle around the fact that the complexity part of “specified complexity”, examined in terms of information theory, means that the K-C complexity is *high*. Ergo, specified complexity as described by Dembski *is* the property of simultaneously having both high and low K-C complexity.
Sad, isn’t it?
[debunk-spec]: http://scienceblogs.com/goodmath/2006/06/dembskis_profound_lack_of_comp.php
[iscid-thread]: http://www.iscid.org/boards/ubb-get_topic-f-6-t-000562.html
Dimensions and Topology
Back in the early days of Good Math/Bad Math, when it was still at blogger, one of the most widely linked posts was one about the idea of dimension. At the time, I said that the easiest way to describe a dimension was as a direction. If you’ve got a point in a plane, and you want to say where it is, you can do it with two numbers – one for each of the fundamental directions in the plane. If you’ve set an origin, “(5,-2)” is enough to uniquely identify exactly one point. You can reach any point on the plane by moving in two directions: up/down and left/right.
If you’ve got a cube, you can’t uniquely specify a point using its distance in two directions. Up three and left two doesn’t give you one point – there are lots of points that are up three and left two. You need a third direction, forward/back, for depth. That’s the third dimension – a direction that could not be formed by any combination of the two directions you had in the plane.
Topology has its own sense of dimension – in fact, it has several. They’re interesting because, as happens so often in topology, they start with the intuition that we get from simple metric spaces like ℜⁿ, and work it down to its bare essentials by figuring out what it means when you apply it to an arbitrary topological space – that is, an arbitrary structure formed from open sets.
More Fractran: A Trivial Interpreter
For your amusement and edification, the following is a very simple interpreter for Fractran programs which, in addition to running the program to generate its result, also generates a trace showing how the program executed.

    ;; A Trivial Fractran Interpreter
    ;;
    ;; Copyright 2006 Mark C. Chu-Carroll
    ;; http://scienceblogs.com/goodmath
    ;;
    ;; You're welcome to do anything you want with this code as long
    ;; as you keep a copy of this header to identify the original source.
    ;;
    ;; This program runs a fractran program. A fractran program is a list
    ;; of fractions. The fractions are represented by a list of two integers:
    ;; the numerator, and the denominator. For example, the classic fractran
    ;; multiplication program could be written:
    ;;    ((385 13) (13 21) (1 7) (3 11) (7 2) (1 3))
    ;; or, using quasiquote so that the products are evaluated:
    ;;    `((,(* 7 11 5) 13) (13 ,(* 3 7)) (1 7) (3 11) (7 2) (1 3))
    ;;
    ;; To run a program until it terminates, call (run-fractran program input).
    ;; This will return a list; the car of the list will be the result of
    ;; running the program, and the cdr will be a trace of the execution in
    ;; the form of a list of the fractions that ran at each step.
    ;;
    ;; To run a program for a specific maximum number of steps, call
    ;;    (run-fractran-bounded program input maxsteps)

    ;; Attempt a single step: if some fraction in fract applies to int,
    ;; return a two-element list (new-int fraction); if no fraction
    ;; applies (the program halts), return int itself.
    (define (step-fractran fract int)
      (if (null? fract)
          int
          (let ((fr (car fract)))
            (if (= (remainder int (cadr fr)) 0)
                (cons (/ (* int (car fr)) (cadr fr))
                      (list fr))
                (step-fractran (cdr fract) int)))))

    ;; Run until no fraction applies. Returns a list of (value fraction)
    ;; pairs, one per step, with the final value as the last element.
    (define (run-fractran fract int)
      (let ((step-result (step-fractran fract int)))
        (if (list? step-result)
            (let ((new-int (car step-result))
                  (last-step (cadr step-result)))
              (cons step-result (run-fractran fract new-int)))
            (list int))))

    ;; Like run-fractran, but runs for at most bound steps.
    (define (run-fractran-bounded fract int bound)
      (if (> bound 0)
          (let ((step-result (step-fractran fract int)))
            (if (list? step-result)
                (let ((new-int (car step-result))
                      (last-step (cadr step-result)))
                  (cons step-result
                        (run-fractran-bounded fract new-int (- bound 1))))
                (list int)))
          (list int)))

    ;; The mult program.
    (define mult '((385 13) (13 21) (1 7) (3 11) (7 2) (1 3)))
    ;;
    ;; (run-fractran mult 432)

    ;; The primes program.
    (define primes '((17 91) (78 85) (19 51) (23 38) (29 33) (77 29) (95 23)
                     (77 19) (1 17) (11 13) (13 11) (15 2) (1 7) (55 1)))
    ;; (run-fractran-bounded primes 2 1000)

----
Commenter Pseudonym has kindly provided a Haskell version in the comments, which was mangled by MT’s comment formatting, so I’m adding a properly formatted version here. I think it’s a really interesting comparison to the Scheme code above. The Haskell code is very nice; cleaner than my rather slapdash Scheme version. Overall, it gives you a good sense of what the same basic code looks like in the two languages. Personally, I think the Haskell is clearer than the Scheme, even though the Scheme is my own code.

    module Fractran where

    import Ratio                 -- rational arithmetic: (%), numerator, denominator
    import Data.Maybe
    import Control.Monad.Fix     -- fix, used to build an infinite list of units

    -- A fractran program is just a list of fractions.
    type Program = [Rational]

    -- Run a program, using a list of units as "fuel": one unit is consumed
    -- per step, so an infinite list gives an unbounded run.
    runFractran :: [()] -> Program -> Integer -> [Integer]
    runFractran bound prog l
        = step bound prog l
        where
          step _ [] l = []             -- no fraction applied: halt
          step [] (f:fs) l
              = []                     -- out of fuel: stop
          step (_:bound) (f:fs) l
              = let p = f * fromIntegral l
                in case denominator p of
                     1 -> let pi = numerator p        -- f applies: emit the
                          in pi : step bound prog pi  -- new value, start over
                     _ -> step bound fs l             -- f doesn't apply: try next

    fractran :: Program -> Integer -> [Integer]
    fractran prog l
        = runFractran (fix (():)) prog l    -- fix (():) is an infinite list of units

    fractranBounded :: Int -> Program -> Integer -> [Integer]
    fractranBounded b prog l
        = runFractran (take b $ fix (():)) prog l

    mult = [385%13, 13%21, 1%7, 3%11, 7%2, 1%3]

    primes = [17%91, 78%85, 19%51, 23%38, 29%33, 77%29, 95%23,
              77%19, 1%17, 11%13, 13%11, 15%2, 1%7, 55%1]

    -- fractran mult (2^4 * 3^3)
    -- fractranBounded 1000 primes 2
Friday Random Ten, Oct 27
It’s Friday, so it’s time for more of my highly warped taste in music.
1. **Tempest, “Turn of the Wheel”**. Tempest is a really cool band. They’re a cross between an electrified folk band and a neo-progressive rock band. Strong Irish and Swedish influences on the folky side, and a vaguely ELP-ish sound on the rock side.
2. **Mel Brooks, “In Old Bavaria” from the Producers**.
3. **The National, “Baby We’ll Be Fine”**. Probably my favorite track from this album by the National.
4. **Hamster Theatre, “The Quasi Day Room Ceremonial Quadrille”**. HT is an offshoot of Thinking Plague. Every bit as strange as TP, but with their own early-music influenced sound. Great stuff.
5. **Kate Bush, “King of the Mountain”**. Kate’s first new album in a very long time, and it’s really fantastic. Very mellow, but with a lot of depth.
6. **Dream Theater, “Take the Time”**. Dream Theater is an amazing neo-progressive/metal band. This is off of an album of theirs that isn’t my favorite, but *anything* by Dream Theater is at least pretty good.
7. **Bela Fleck, “See Rock City”**. An amazing piece of newgrass virtuosity by Bela Fleck. Honestly, I wish he’d dump the Flecktones and get back to doing more newgrass. The last couple of Flecktones albums have been pretty mediocre, and when I’ve seen him live, he seems much more engaged doing the newgrass-type stuff with folks like Edgar Meyer, Sam Bush, and Mark O’Connor than he does with the Flecktones.
8. **Spock’s Beard, “NWC”**. I’m sure I’ve mentioned Spock’s Beard before. They’re a great neo-progressive band. They started off sounding like a cross between old Genesis, Rush, and Gentle Giant, but they’ve evolved into their own very distinctive sound. This is a fantastic instrumental track. (And I just got email from them that they’ve got a new album coming out towards the end of November!)
9. **Shirim, “Nokh A Gleyzl Vayn”**. Great bouncy track from the greatest modern jazz-influenced Klezmer band around.
10. **The Clogs, “Lantern”**. The only vocal track on the newest album from one of my favorite post-rock ensembles. Just fantastic. The Clogs never cease to amaze me.
Prime Number Pathology: Fractran
Today’s pathological language is based on a piece of work called Fractran by John Conway of game theory fame. It’s a really fascinating bugger; absolutely insanely difficult to program in, but based on one of the most bizarrely elegant concepts of computation that I’ve ever seen. It’s amazing that this is Turing complete. It’s not a real programming language in the sense of being able to write practical programs; it’s more of a simple theoretical computational model which has been implemented as a language.
It’s based on the idea of numbers as products of prime factors. As you should remember from elementary school, every positive integer can be represented by a unique collection of prime numbers that, multiplied together, produce the number. For a few examples:
- 24 = 2×2×2×3 = 2³×3¹
- 291 = 3×97
- 1800 = 5×5×3×3×2×2×2 = 5²×3²×2³
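If you want to play with factorizations yourself, here’s a minimal sketch of prime factorization by trial division; `prime-factors` is just my own helper for illustration:

    ;; Factor n by trial division, returning the prime factors in ascending
    ;; order. Dividing out each factor as soon as it's found guarantees that
    ;; every divisor the loop finds is prime.
    (define (prime-factors n)
      (let loop ((n n) (d 2) (factors '()))
        (cond ((= n 1) (reverse factors))
              ((= (remainder n d) 0) (loop (/ n d) d (cons d factors)))
              (else (loop n (+ d 1) factors)))))

    ;; (prime-factors 1800) => (2 2 2 3 3 5 5)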
Conway figured out that, using something based on that concept, you can express any computable function using nothing but a list of positive fractions. The execution rule is almost absurdly simple: given a current integer *n*, find the *first* fraction *f* in the list such that *n*×*f* is an integer; replace *n* with *n*×*f*; and repeat until no fraction in the list produces an integer, at which point the program halts.
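To make that concrete, here’s a minimal sketch of a single step, assuming a Scheme with exact rational literals (as in Racket or Guile); `fractran-step` is my own name for it, not Conway’s:

    ;; Find the first fraction f such that n*f is an integer, and return
    ;; n*f; return #f if no fraction applies, meaning the program halts.
    (define (fractran-step fractions n)
      (cond ((null? fractions) #f)
            ((integer? (* n (car fractions))) (* n (car fractions)))
            (else (fractran-step (cdr fractions) n))))

    ;; One step of Conway's multiplication program on 432 = 2^4 * 3^3:
    ;;   (fractran-step '(385/13 13/21 1/7 3/11 7/2 1/3) 432) => 1512
    ;; because 7/2 is the first fraction yielding an integer: 432 * 7/2 = 1512.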
Republican New Math
Yesterday, Karl Rove was interviewed by Robert Siegel on NPR. I just about passed out from shock when I heard the following exchange: (transcript via [raw story](http://www.rawstory.com/news/2006/Rove_dukes_it_out_with_NPR_1025.html))
>MR. SIEGEL: We’re in the home stretch, though. And many might consider you on the optimistic end of realism about —
>
>MR. ROVE: Not that you would be exhibiting a bias or anything like that. You’re just making a comment.
>
>MR. SIEGEL: I’m looking at all the same polls that you’re looking at every day.
>
>MR. ROVE: No you’re not. No you’re not!
>
>MR. SIEGEL: No, I’m not —
>
>MR. ROVE: I’m looking at 68 polls a week. You may be looking at four or five public polls a week that talk about attitudes nationally, but that do not impact the outcome —
>
>MR. SIEGEL: — name races between — certainly Senate race
>
>MR. ROVE: Well, like the polls today showing that Corker’s ahead in Tennessee; or the race — polls showing that Allen is pulling away in the Virginia Senate race.
>
>MR. SIEGEL: Leading Webb in Virginia. Yes.
>
>MR. ROVE: Yeah, exactly.
>
>MR. SIEGEL: Have you seen the DeWine race and the Santorum race and — I don’t want to —
>
>MR. ROVE: Yeah. Look, I’m looking at all these Robert and adding them up. And I add up to a Republican Senate and a Republican House. You may end up with a different math, but you’re entitled to your math. I’m entitled to “the” math.
>
>MR. SIEGEL: I don’t know if you’re entitled to a different math, but you’re certainly entitled to —
>
>MR. ROVE: I said you were entitled to yours.
Yes indeed, the same folks who sneered at the “reality based community” saying that they don’t need to *study* reality because they can just create their own reality apparently feel the same way about math. All of us lowly peons out here are entitled to our own silly little math, but Karl Rove and the Republicans are the only ones who have the *real* math, which seems to say whatever they want it to.
Building Manifolds with Products
Time to get back to some topology, with the new computer. Short post this morning, but at least it’s something. (I had a few posts queued up, just needing diagrams, but they got burned with the old computer. I had my work stuff backed up, but I don’t let my personal stuff get into the company backups; I like to keep them clearly separated. And I didn’t run my backups the way I should have for a few weeks.)
Last time, I started to explain a bit of patchwork: building manifolds from other manifolds using *gluing*. I’ll have more to say about patchwork on manifolds, but first, I want to look at another way of building interesting manifolds.
At heart, I’m really an algebraist, and some of the really interesting manifolds can be defined algebraically in terms of topological product. You see, if you’ve got two manifolds **S** and **T**, then their product space **S×T** is also a manifold. Since we already talked about topological product – both in classic topological terms and in categorical terms – I’m not going to go back and repeat the definition. But I will walk through a couple of examples of interesting manifolds that you can build using the product.
The easiest example is to just take lines. A simple, basic line is a one-dimensional manifold. What’s the product of two lines? Hopefully, you can easily guess that: it’s a plane. The standard cartesian metric spaces are all topological products of lines: ℜⁿ is the product of *n* lines.
To be a bit more interesting, take a circle – the basic, simple circle on a cartesian plane. Not the *contents* of the circle, but the closed line of the circle itself. In topological terms, that’s a 1-sphere, and it’s also a very simple manifold with no boundary. Now take a line, which is also a simple manifold.
What happens when you take the product of the line and the circle? You get a hollow cylinder.
What about if you take the product of the circle with *itself*? Think about the definition of product: from any point *p* in the product **S×T**, you should be able to *project* an image of **S** and an image of **T**. What’s the shape where you can make that work right? The torus.
In fact, the torus is a member of a family of topological spaces called the toroids. For any dimensionality *n*, there is an *n*-toroid which is the product of *n* circles. The 1-toroid is a circle; the 2-toroid is our familiar torus; the 3-toroid is a mess. (Beyond the 2-toroid, our ability to visualize them falls apart: what kind of figure can be *sliced* to produce a torus and a circle? The *concept* isn’t too difficult, but the *image* is almost impossible.)
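To see how concrete the product structure is, here’s a minimal sketch in Scheme of the standard picture of the torus sitting in 3-space (the function name and parameters are my own, just for illustration): a point of S¹×S¹ is nothing but a pair of angles, one per factor circle, and projecting back to either factor just means keeping one of the two angles.

    ;; Map a point (theta, phi) of the product of two circles to the usual
    ;; embedding of the torus in 3-space. R is the radius of the first
    ;; circle (around the hole), r the radius of the second (the tube);
    ;; we want R > r > 0 so the surface doesn't intersect itself.
    (define (torus-point theta phi R r)
      (list (* (+ R (* r (cos phi))) (cos theta))
            (* (+ R (* r (cos phi))) (sin theta))
            (* r (sin phi))))

    ;; (torus-point 0 0 2 1) => (3 0 0)  (possibly as inexact reals)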
Haskell and Scheme: Which One and Why?
While I was waiting for stuff to install on my new machine, I was doing some browsing around the web, and came across an interesting article at a blog called “The Only Winning Move”, titled [Scheme Death Knell?](http://theonlywinningmove.blogspot.com/2006/10/scheme-death-knell.html). It’s not a bad article at all, but I disagree with its conclusions, and I thought it would be interesting to
discuss why.
The article was brought on by *another* blog post, this one at [Lambda the Ultimate](http://lambda-the-ultimate.org/) suggesting that someone translate Douglas Hofstadter’s columns introducing Scheme to replace the Scheme with Haskell.
Josh over at TOWM was slightly irritated by this idea, and concluded that the only reason people are suggesting it is that Scheme is getting too old to be cool and trendy, so they want to switch to something newer.
I disagree.
Haskell is a *very* different language from Scheme, and I think that there are some genuinely good
reasons for teaching Haskell instead of Scheme. I *also* think there are some genuinely good reasons for using Scheme instead of Haskell. It’s not really clear to me which is better, but it’s worth contrasting the two to help understand the tradeoff.
Dead Computer Delays
Sorry for the sudden silence on the blog. My computer died on me yesterday, and so I’ve been rather cut off. I’m in the process of setting up my gorgeous brand new MacBook Pro, and things should be getting back to normal pretty quickly, except that I lost a couple of prepared posts in the crash, so this week might be a bit slow around here.
A Metalanguage for Pathological Programming: Cellular Automata in Alpaca
Today’s entry isn’t so much a pathological language itself as it is a delightful toy which can be used to *produce* pathological languages. I’m looking at Chris Pressey’s wonderful language [ALPACA](http://catseye.mine.nu:8080/projects/alpaca/), which is a meta-language for describing different kinds of cellular automata.
Frankly, I’m very jealous of Alpaca. I started writing a cellular automata language something like this on my own; I spent about two weeks of my free time working on it, and got it roughly 90% finished before I found out that he’d already done it, the bastard!
Alpaca definitely qualifies as a real language, in that it is capable of generating Turing-complete automata. We can show this in two different ways. First, Conway’s Life is a trivial program in Alpaca, and you can build a Turing machine simulator using Life. And second, Alpaca includes an implementation of a Brainfuck clone as a cellular automaton.
Alpaca’s a very simple language, but it lets you define pretty much any cellular automaton that works in a two-dimensional rectangular grid. And it comes with some very fun, and some very pathological examples.
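Just to give a flavor of the kind of rule Alpaca lets you express, here’s the local rule of Conway’s Life written as a plain Scheme function (my own sketch, not Alpaca syntax): each cell’s next state depends only on its own state and the number of live cells among its eight neighbors.

    ;; The Life rule: a live cell survives with exactly 2 or 3 live
    ;; neighbors; a dead cell becomes live with exactly 3 live neighbors;
    ;; everything else dies or stays dead.
    (define (life-rule alive? live-neighbors)
      (if alive?
          (or (= live-neighbors 2) (= live-neighbors 3))
          (= live-neighbors 3)))

    ;; (life-rule #t 2) => #t   ; survival
    ;; (life-rule #f 3) => #t   ; birth
    ;; (life-rule #t 4) => #f   ; overcrowding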