Issue number 58 of [the Tangled Bank][tb] is now live at Salto Sobrius. Head on over, take a look, and plan to spend some time reading some of the net’s best science blogging from the last two weeks.
[tb]: http://saltosobrius.blogspot.com/2006/07/tangled-bank-58.html
Linear Logic
[Monday][yesterday], I said that I needed to introduce the sequent calculus, because it would be useful for describing things like linear logic. Today we’re going to take a quick look at linear logic – in particular, at *propositional* linear logic; you can expand to predicate linear logic in a way very similar to the way we went from propositional logic to first order predicate logic.
So first: what the heck *is* linear logic?
The answer is, basically, logic where statements are treated as *resources*. So using a statement in an inference step in linear logic *consumes* the resource. This is a strange notion if you’re coming from a regular predicate logic. For example, in regular predicate logic, if we have the statements: “A”, “A ⇒ B”, and “A ⇒ C”, we know that we can conclude “B ∧ C”. In linear logic, that’s not true: using either implication statement would *consume* the “A”. So we could infer “B”, or we could infer “C”, but we could *not* infer both.
When people talk about linear logic, and why it makes sense, they almost always use a vending machine analogy. Suppose I walk up to a vending machine, and I want to buy a soda and a candy bar. I’ve got 8 quarters in my pocket; the soda costs $1.50; the candy bar costs $.75.
In linear logic, I’d say something like the following (the syntax is wrong, but we’ll get to syntax later): (Q,Q,Q,Q,Q,Q,Q,Q), (Q,Q,Q,Q,Q,Q) ⇒ Soda, (Q,Q,Q) ⇒ Candy.
Using the rules, I can buy a soda by “spending” 6 of my Qs. I wind up with “(Q,Q) ∧ Soda” and “(Q,Q,Q) ⇒ Candy”. I’ve consumed 6 Qs, and I’ve consumed the “(Q,Q,Q,Q,Q,Q) ⇒ Soda” implication. I can’t do anything else; I don’t have enough Qs left.
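To make the resource-consumption idea concrete, here’s a tiny Python sketch of the vending machine. This is purely my own illustration (no standard linear-logic library involved): the context is a multiset of resources, and an implication is a rule that consumes its left-hand side and produces its right-hand side.

```python
from collections import Counter

def apply_rule(resources, consumes, produces):
    """Fire a linear implication: spend `consumes`, gain `produces`.

    Returns the new resource multiset, or None if we can't pay.
    The key point is linearity: once a resource is spent, it's gone."""
    resources = Counter(resources)
    if any(resources[r] < n for r, n in consumes.items()):
        return None                     # not enough resources left
    resources.subtract(consumes)
    resources.update(produces)
    return +resources                   # drop zero-count entries

pocket = Counter({"Q": 8})                            # eight quarters
pocket = apply_rule(pocket, {"Q": 6}, {"Soda": 1})    # (Q,Q,Q,Q,Q,Q) => Soda
print(pocket)                                         # Counter({'Q': 2, 'Soda': 1})
print(apply_rule(pocket, {"Q": 3}, {"Candy": 1}))     # None: can't afford the candy
```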
The basic statements in linear logic, with their intuitive meanings, are:
1. A ⊗ B. This is called *multiplicative conjunction*, also known as *simultaneous occurrence*. This means that I definitely have both A and B. This has an *identity unit* called “1”, such that A ⊗ 1 ≡ 1 ⊗ A ≡ A. 1 represents the idea of the absence of any resource.
2. A & B : *additive conjunction*, aka *internal choice*. I can have either A *or* B, and I get to pick which one. The unit is ⊤, pronounced “top”, and represents a sort of “I don’t care” value.
3. A ⊕ B. This is called *additive disjunction*, also known as *external choice*. It means that I get either A or B, but I don’t get to pick which one. The unit here is 0, and represents the lack of an outcome.
4. A ⅋ B : *multiplicative disjunction*, aka *parallel occurrence*; I *need to have* both A and B at the same time. The unit for this is ⊥, pronounced “bottom”, and represents the absence of a goal. In the vending machine metaphor, think of it as the “cancel” button: I decided I don’t want any, so I’m not going to spend my resources.
5. A -o B : Linear implication. Consume resource A to *produce* resource B. The normal symbol for this looks like an arrow with a circle instead of an arrowhead; this operator is often called “lolly” because of what the normal symbol looks like. I’m stuck writing it as “-o”, because there’s no HTML entity for the symbol.
6. !A : Positive exponentiation, pronounced “Of course A”. This *produces* an arbitrary number of As. Equivalent to A ⊗ !A.
7. ?A : Negative exponentiation, pronounced “Why not A?”. This *consumes* As.
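Before going further, it can help to see the connectives as plain syntax. Here’s a tiny Python sketch of an AST for propositional linear logic formulas; the encoding is my own illustration (not any standard library), just enough structure to write down formulas like the ones in this post.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Atom:
    name: str

@dataclass(frozen=True)
class Bin:
    op: str        # one of "⊗", "&", "⊕", "⅋", "-o"
    left: object   # Atom | Bin | Exp
    right: object

@dataclass(frozen=True)
class Exp:
    op: str        # "!" or "?"
    body: object

A, B = Atom("A"), Atom("B")
internal_choice = Bin("&", A, B)    # A & B: I get to pick which one
external_choice = Bin("⊕", A, B)    # A ⊕ B: I don't get to pick
lolly           = Bin("-o", A, B)   # A -o B: consume an A, produce a B
bang            = Exp("!", A)       # !A: as many As as you need
```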
Ok. So, suppose I want to talk about buying lunch. I’ve got 10 dollars to buy lunch. I’ll be full if I have a salad, a coke, and a tuna sandwich. Write “I’ve got a dollar” as “D”, “I have a salad” as “S”, “I have a coke” as “C”, “I have a tuna sandwich” as “T”, and finally “I’m full” as “F”. Then:
* I can write “I have 10 dollars” in LL as: “(D ⊗ D ⊗ D ⊗ D ⊗ D ⊗ D ⊗ D ⊗ D ⊗ D ⊗ D)”.
* I can write “Tuna sandwich and salad and coke” as a group of things that I want to have all of as: “T ⅋ S ⅋ C”.
* I can say that I’ll be full if I have lunch as “T ⅋ S ⅋ C -o F”.
If I want to talk about buying lunch, I can describe the prices of the things I want using implication:
* A coke costs one dollar: “D -o C”; I can spend one dollar, and in return I get one coke.
* A salad costs 3 dollars: “(D ⊗ D ⊗ D) -o S”
* A tuna sandwich also costs three dollars: “(D ⊗ D ⊗ D) -o T”
Now, I can do some reasoning with these.
* By taking 1 of the dollars, I can get one C. That leaves me with “D ⊗ D ⊗ D ⊗ D ⊗ D ⊗ D ⊗ D ⊗ D ⊗ D ⊗ C”.
* By taking 3 D, I can get one S. “D ⊗ D ⊗ D ⊗ D ⊗ D ⊗ D ⊗ C ⊗ S”.
* By taking 3 D, I can get one T. “D ⊗ D ⊗ D ⊗ C ⊗ S ⊗ T”.
* Now I’ve got my lunch. I can eat it and be full, with three dollars left: “D ⊗ D ⊗ D ⊗ F”.
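That derivation runs through the same little resource engine from the vending machine sketch above (this assumes `apply_rule` and `Counter` from that sketch are in scope):

```python
wallet = Counter({"D": 10})                                  # ten dollars
wallet = apply_rule(wallet, {"D": 1}, {"C": 1})              # D -o C
wallet = apply_rule(wallet, {"D": 3}, {"S": 1})              # (D ⊗ D ⊗ D) -o S
wallet = apply_rule(wallet, {"D": 3}, {"T": 1})              # (D ⊗ D ⊗ D) -o T
wallet = apply_rule(wallet, {"T": 1, "S": 1, "C": 1}, {"F": 1})   # T ⅋ S ⅋ C -o F
print(wallet)    # Counter({'D': 3, 'F': 1}): full, with three dollars left
```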
Just from this trivial example, you should be able to see why linear logic is cool: the idea of being able to talk about *how* resources are used in an inference process or a computation is really valuable, and linear logic gives you the ability to really work with the concept of resource in a solid, formal way. If you think of it in terms of the Curry-Howard isomorphism (the [types-as-proofs concept from the simply typed lambda calculus][types-as-proofs]), you can imagine using linear logic for the types of values that are *consumed* by a computation – i.e., they’re no longer available once they’ve been used.
I’m going to adopt a slightly different format for the sequents for working in linear logic. The way that I produced the center bars in yesterday’s post was really painful to write, and didn’t even end up looking particularly good. So, the way that I’m going to write the sequents in this post is to wrap the “top” and “bottom” of the sequent in curly braces, and separate them by a “⇒”, as in:
{ GivenContext :- GivenEntailment } ⇒ { InferredContext :- InferredEntailment }
Now, let’s take a look at the sequent rules for propositional linear logic. I’m using the version of these rules from [Patrick Lincoln’s SIGACT ’92 paper][sigact92]. Yeah, I know that’s a bit of a big load there. Don’t worry about it too much; the important part is the concept described up above. The sequents are useful to look at when you have a hard time figuring out what some operator means in inference: for example, you can see the difference between & and ⊕ (which I found confusing at first) by looking at their sequents, to see what they do. One note on notation: where a rule needs two premises, I separate them with a semicolon.
1. **Identity**: { } ⇒ { A :- A }
2. **Cut**: { Γ1 :- A, Σ1 ; Γ2, A :- Σ2 } ⇒ { Γ1, Γ2 :- Σ1, Σ2 }
3. **Exchange Left**: { Γ1, A, B, Γ2 :- Σ } ⇒ { Γ1, B, A, Γ2 :- Σ }
4. **Exchange Right**: { Γ :- Σ1, A, B, Σ2 } ⇒ { Γ :- Σ1, B, A, Σ2}
5. **⊗ Left**: {Γ, A, B :- Σ} ⇒ { Γ, A ⊗ B :- Σ }
6. **⊗ Right**: { Γ1 :- A, Σ1 ; Γ2 :- B, Σ2 } ⇒ { Γ1, Γ2 :- (A ⊗ B), Σ1, Σ2 }
7. **-o Left**: { Γ1 :- A, Σ1 ; Γ2, B :- Σ2 } ⇒ { Γ1, Γ2, (A -o B) :- Σ1, Σ2 }
8. **-o Right**: { Γ, A :- B, Σ} ⇒ { Γ :- A -o B, Σ}
9. **⅋ Left**: { Γ1, A :- Σ1 ; Γ2, B :- Σ2 } ⇒ { Γ1, Γ2, (A ⅋ B) :- Σ1, Σ2 }
10. **⅋ Right**: { Γ :- A, B, Σ} ⇒ { Γ :- A ⅋ B, Σ}
11. **& Left**: { Γ, A :- Σ } ⇒ { Γ, A & B :- Σ } / { Γ, B :- Σ } ⇒ { Γ, A & B :- Σ }
12. **& Right**: { Γ :- A, Σ ; Γ :- B, Σ } ⇒ { Γ :- (A & B), Σ }
13. **⊕ Left**: { Γ, A :- Σ ; Γ, B :- Σ } ⇒ { Γ, A ⊕ B :- Σ }
14. **⊕ Right**: { Γ :- A, Σ } ⇒ { Γ :- A ⊕ B, Σ } / { Γ :- B, Σ } ⇒ { Γ :- A ⊕ B, Σ }
15. **!W**: {Γ :- Σ} ⇒ {Γ,!A :- Σ}
16. **!C**: {Γ,!A,!A :- Σ} ⇒ { Γ,!A :- Σ }
17. **!D**: { Γ, A :- Σ} ⇒ { Γ,!A :- Σ}
18. **!S**: { !Γ :- A, ?Σ} ⇒ { !Γ :- !A, ?Σ}
19. **?W**: {Γ :- Σ} ⇒ {Γ:- ?A, Σ}
20. **?C**: {Γ :- ?A,?A, Σ} ⇒ { Γ :- ?A, Σ }
21. **?D**: { Γ :- A, Σ} ⇒ { Γ :- ?A, Σ}
22. **?S**: { !Γ, A :- ?Σ} ⇒ { !Γ,?A :- ?Σ}
23. **⊥ Exp Left**: { Γ :- A, Σ } ⇒ { Γ, A^⊥ :- Σ }
24. **⊥ Exp Right**: { Γ, A :- Σ } ⇒ { Γ :- A^⊥, Σ }
25. **0 Left**: { Γ, 0 :- Σ } *(axiom: a 0 on the left proves anything, with no premises needed)*
26. **⊤ Right**: { Γ :- ⊤,Σ}
27. **⊥ Left**: { ⊥ :- }
28. **⊥ Right**: {Γ :- Σ} ⇒ {Γ :- ⊥,Σ}
29. **1 Left**: { Γ :- Σ} ⇒ { Γ,1 :- Σ}
30. **1 Right**: { :- 1}
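If the curly-brace encoding makes your eyes cross, it can help to see a couple of these in the traditional fraction form. Here are rules 6 (⊗ Right) and 7 (-o Left) rewritten in LaTeX; `\multimap` is the usual “lolly” symbol:

```latex
% ⊗ Right and -o Left, typeset as ordinary inference rules
\frac{\Gamma_1 \vdash A, \Sigma_1 \qquad \Gamma_2 \vdash B, \Sigma_2}
     {\Gamma_1, \Gamma_2 \vdash (A \otimes B), \Sigma_1, \Sigma_2}
\qquad\qquad
\frac{\Gamma_1 \vdash A, \Sigma_1 \qquad \Gamma_2, B \vdash \Sigma_2}
     {\Gamma_1, \Gamma_2, (A \multimap B) \vdash \Sigma_1, \Sigma_2}
```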
This is long enough that I’m not going to get into how this hooks into category theory today, except to point out that if you look carefully at the multiplicatives and the exponentials, they might seem a tad familiar.
[types-as-proofs]: http://goodmath.blogspot.com/2006/06/finally-modeling-lambda-calculus.html
[sigact92]: http://www.csl.sri.com/~lincoln/papers/sigact92.ps
[yesterday]: http://scienceblogs.com/goodmath/2006/07/a_brief_diversion_sequent_calc.php
Rehashing Conservative Liars: Did Edwards tell the truth about poverty?
You might remember my post last week about [conservatives who can’t subtract][subtract]: in particular, about how a conservative blogger who goes by “Captain Ed” attacked John Edwards for saying there are 37 million people in poverty in the US. It turned out that good ol’ Ed wasn’t capable of doing simple subtraction.
You might also remember a post about [lying with statistics][liar], discussing an article by Tim Worstall, who quoted a newspaper piece about abortion rates, and tried to misuse the statistics to argue something about sexual education in the UK.
Well, Tim (the target of the second piece) was pretty ticked off at my criticism, and now he’s back – not with a defense of his own piece (he tried that already), but with [a response to the criticism of the first piece][liar-subtracts]. Of course, he tries to defend our good captain not by defending his math – that is, not by claiming that *the point that he made* was correct – but by moving the goalposts: he pretends that the *real* point of the piece wasn’t to call Edwards a liar, but to make an economic argument about whether people below the poverty line are correctly described as poor – because, lucky duckies that they are, they get some money from the earned income tax credit! So obviously they’re not *really* poor!
>”Thirty-seven million of our people, worried about feeding and clothing their
>children,” he said to his audience. “Aren’t we better than that?”
>
>and the link is to this table at the US Census Bureau which indeed states that
>there are some 37 million or so below the poverty line.
>
>Right, so that must mean that there really are 37 million poor people in the
>USA, right? So what’s Tim bitchin’ about? Well, how about the fact that those
>figures which show 37 million below the poverty line do not in fact show that
>there are 37 million poor people? Weird thought I know but nope, it ain’t true.
>
>For this reason:
>
>The official poverty definition uses money income before taxes and does not
>include capital gains or noncash benefits (such as public housing, Medicaid,
>and food stamps).
>
>What is being measured in the first definition of poverty is how many people
>there are below the poverty line before we try to do anything about it.
This is what those of us who know anything about logic refer to as a “non sequitur”. That is, it’s a conclusion that has nothing to do with what came before it. It’s one of the oldest and sloppiest rhetorical tactics in the book. (Literally – I’ve got a book on classical rhetoric on my bookshelf, and it’s cited in there.)
Edwards was talking about the division of wealth in the US: we have people like the CEOs of big companies taking home unbelievable amounts of money, while at the same time, the income of the people in the middle class is declining slightly in real terms, and the income of the people at the bottom isn’t even approaching what they need to get by. There are 37 million people below the poverty line in this country in terms of their income. Some portion (not specified in the only source Tim and the captain cite) of those people are working, and despite working, are still not making enough money to get by. This is indisputable: there are many people in this country who are working, but who still require government assistance just to pay their bills. That’s what Edwards said.
What does that have to do with whether or not the government gives them some token assistance? The point is that our economic policies quite deliberately *refuse* to do anything to help the people on the bottom of the economic ladder become self-sufficient. Witness the recent refusal to even allow an open debate in Congress on increasing the minimum wage, even while the members of Congress gave themselves a raise. A person with a family, working full time for the minimum wage, is left *below* the poverty line. But it’s not considered an important issue by the people currently running our government.
[subtract]: http://scienceblogs.com/goodmath/2006/07/subtraction_math_too_hard_for.php
[liar]: http://scienceblogs.com/goodmath/2006/07/lying_with_statistics_abortion.php
[liar-subtracts]: http://timworstall.typepad.com/timworstall/2006/07/good_maths_and_.html
A Brief Diversion: Sequent Calculus
*(This post has been modified to correct some errors and add some clarifications in response to comments from alert readers. Thanks for the corrections!)*
Today, we’re going to take a brief diversion from category theory to play with
some logic. There are some really neat connections between variant logics and category theory. I’m planning on showing a bit about the connections between category theory and one of those, called *linear logic*. But the easiest way to present things like linear logic is using a mechanism based on sequent calculus.
Sequent calculus is a deduction system for performing reasoning in first order predicate logic. But its notation and general principles are useful for all sorts of reasoning systems, including many different logics, all sorts of type theories, etc. The specific sequent calculus that I’m going to talk about is sometimes called system LK; the general category of systems that use this basic kind of rule is called Gentzen systems.
The sequent calculus consists of a set of inference rules, each of which is normally written like a fraction: the top of the fraction is what you know before applying the rule; the bottom is what you can conclude from it. The statements that the rules manipulate are called *sequents*, and they’re always of the form:
CONTEXTS, Predicates :- CONTEXTS, Predicates
The “CONTEXTS” are sets of predicates that you already know are true. The “:-” is read “entails”; it means that the *conjunction* of the statements and contexts to the left of it can prove the *disjunction* of the statements to the right of it. In predicate logic, the conjunction is logical and, and disjunction is logical or, so you can read the statements as if “,” were “∧” on the left of the “:-”, and “∨” on the right. *(Note: this paragraph was modified to correct a dumb error that I made that was pointed out by commenter Canuckistani.)*
Contexts are generally written using capital Greek letters; predicates are generally written using uppercase English letters. We often put the name of an inference rule to the right of the separator line for the sequent.
For example, look at the following sequent:
Γ :- Δ
————— Weakening-Left
Γ,A :- Δ
This rule is named Weakening-Left; the top says that “Given Γ, everything in Δ can be proved”; and
the bottom says “Using Γ plus the fact that A is true, everything in Δ can be proved”. The full rule basically says: if Δ is provable given Γ, then it will still be provable when A is added to Γ; in other words, adding a true fact won’t invalidate any proofs that were valid before the addition of A. *(Note: this paragraph was modified to correct an error pointed out by a commenter.)*
The sequent calculus is nothing but a complete set of rules that you can use to perform any inference in predicate calculus. A few quick syntactic notes, and I’ll show you the full set of rules.
1. Uppercase Greek letters are contexts.
2. Uppercase English letters are *statements*.
3. Lowercase English letters are *terms*; that is, the objects that predicates
can reason about, or variables representing objects.
4. A[b] is a statement A that contains the term b in some way.
5. A[b/c] means A with the term “b” replaced by the term “c”.
——-
First, two very basic rules:
1.
———— (Identity)
A :- A
2. Γ :- A, Δ Σ, A :- Π
—————————————— (Cut)
Γ,Σ :- Δ, Π
Now, there’s a bunch of rules that have right and left counterparts. They’re duals of each other – move terms across the “:-” and switch from ∧ to ∨ or vice-versa.
3. Γ, A :- Δ
————————— (Left And 1)
Γ, A ∧ B :- Δ
4. Γ :- A, Δ
——————— ——— (Right Or 1)
Γ :- A ∨ B, Δ
5. Γ, B :- Δ
——————— ——(Left And 2)
Γ,A ∧ B :- Δ
6. Γ :- B, Δ
——————— ——— (Right Or 2)
Γ :- A ∨ B, Δ
7. Γ, A :- Δ Σ,B :- Π
————————————— (Left Or)
Γ,Σ, A ∨ B :- Δ,Π
8. Γ :- A,Δ Σ :- B,Π
—————————— ——(Right And)
Γ,Σ :- A ∧ B, Δ,Π
9. Γ :- A,Δ
————— —— (Left Not)
Γ, ¬A :- Δ
10. Γ,A :- Δ
——————— (Right Not)
Γ :- ¬A, Δ
11. Γ :- A,Δ Σ,B :- Π
————————————— (Left Implies)
Γ, Σ, A → B :- Δ,Π
12. Γ,A[y] :- Δ *(y bound)*
————————————— (Left Forall)
Γ,∀x A[x/y] :- Δ
13. Γ :- A[y],Δ *(y free)*
————————————— (Right Forall)
Γ :- ∀x A[x/y],Δ
14. Γ, A[y] :- Δ *(y free)*
———————————— (Left Exists)
Γ,∃x A[x/y] :- Δ
15. Γ :- A[y], Δ *(y bound)*
————————————(Right Exists)
Γ :- ∃x A[x/y], Δ
16. Γ :- Δ
—————— (Left Weakening)
Γ, A :- Δ
17. Γ :- Δ
—————— (Right Weakening)
Γ :- A, Δ
18. Γ, A, A :- Δ
——————— (Left Contraction)
Γ,A :- Δ
19. Γ :- A, A, Δ
——————— (Right Contraction)
Γ :- A, Δ
20. Γ, A, B, Δ :- Σ
————————— (Left Permutation)
Γ,B, A, Δ :- Σ
21. Γ :- Δ, A, B, Σ
————————— (Right Permutation)
Γ :- Δ, B, A, Σ
Here’s an example of how we can use sequents to derive A ∨ ¬ A:
1. Context empty. Apply Identity.
2. A :- A. Apply Right Not.
3. empty :- ¬ A, A. Apply Right Or 2.
4. empty :- A ∨ ¬A, A. Apply Permute Right.
5. empty :- A, A ∨ ¬ A. Apply Right Or 1.
6. empty :- A ∨ ¬ A, A ∨ ¬ A. Right Contraction.
7. empty :- A ∨ ¬ A
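Here’s a tiny Python sketch that replays those seven steps mechanically. The encoding is my own and covers only what this one derivation needs: a sequent is a pair of tuples (left of “:-”, right of “:-”), and each function implements one rule.

```python
# A toy replay of the derivation of A ∨ ¬A -- not a general LK prover.

def identity(a):         return ((a,), (a,))
def right_not(seq):      (l, r) = seq; return (l[:-1], ("¬" + l[-1],) + r)
def right_or1(seq, b):   (l, r) = seq; return (l, (f"{r[0]} ∨ {b}",) + r[1:])
def right_or2(seq, a):   (l, r) = seq; return (l, (f"{a} ∨ {r[0]}",) + r[1:])
def permute_right(seq):  (l, r) = seq; return (l, (r[1], r[0]) + r[2:])
def contract_right(seq): (l, r) = seq; assert r[0] == r[1]; return (l, r[1:])

s = identity("A")          # A :- A
s = right_not(s)           # :- ¬A, A
s = right_or2(s, "A")      # :- A ∨ ¬A, A
s = permute_right(s)       # :- A, A ∨ ¬A
s = right_or1(s, "¬A")     # :- A ∨ ¬A, A ∨ ¬A
s = contract_right(s)      # :- A ∨ ¬A
print(s)                   # ((), ('A ∨ ¬A',))
```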
If you look *carefully* at the rules, they actually make a lot of sense. The only ones that look a bit strange are the “forall” rules; and for those, you need to remember that the variable is *free* on the top of the sequent.
A lot of logics can be described using Gentzen systems: type theories, temporal logics, and all sorts of other systems. They’re a very powerful tool for describing all manner of inference systems.
Mathematicians and Evolution: My Two Cents
There’s been a bunch of discussion here at ScienceBlogs about whether or not mathematicians are qualified to talk about evolution, triggered by [an article by ID-guy Casey Luskin][luskin]. So far, [Razib at Gene Expression][gnxp], Jason at EvolutionBlog ([here][evblog1] and [here][evblog2]), and [John at Stranger Fruit][sf] have all commented on the subject. So I thought it was about time for me to toss in my two cents as well, given that I’m a math geek who’s done rather a lot of writing about evolution here at this blog.
I don’t want to spend a lot of time rehashing what’s already been said by others. So I’ll start off by just saying that I absolutely agree that just being a mathematician gives you absolutely *no* qualifications to talk about evolution, and that an argument about evolution should *not* be considered any more credible because it comes from a PhD in mathematics rather than a plumber. That’s not to say that there is no role for mathematics in the discussion of evolution – just that being a mathematician doesn’t give you any automatic expertise or credibility about the subject. A mathematician who wants to study the mathematics of evolution needs to *study evolution* – and it’s the knowledge of evolution that they gain from studying it that gives them credibility about the topic, not their background in mathematics. Luskin’s argument is nothing but an attempt to cover up for the fact that the ID “scientists petition” has a glaring lack of signatories who actually have any qualifications to really discuss evolution.
What I would like to add to the discussion is something about what I do here on this blog with respect to writing about evolution. As I’ve said plenty of times, I’m a computer scientist. I certainly have no qualifications to talk about evolution: I’ve never done any formal academic study of evolution; I’ve certainly never done any professional work involving evolution; I can barely follow [work done by qualified mathematicians who *do* study evolution][gm-good-ev].
But if you look at my writing on this blog, what I’ve mainly done is critique the IDists and creationists who attempt to argue against evolution. And here’s the important thing: the math that they do – the kind of arguments coming from the people that Luskin claims are uniquely well suited to argue about evolution – is so utterly, appallingly horrible that it doesn’t take a background in evolution to be able to tear it to ribbons.
To give an extreme example, remember the [infamous Woodmorappe paper][woodie] about Noah’s ark? You don’t need to be a statistician to know that using the *median* is wrong. It’s such a shallow and obvious error that anyone who knows any math at all should be able to knock it right down. *Every* mathematical argument that I’ve seen from IDists and/or creationists has exactly those kinds of problems: errors so fundamental and so obvious that, even without getting into the detailed study of evolution, anyone who takes the time to actually *look at the math* can see why it’s wrong. It’s not always as bad as Woodie, but just look at things like [Dembski’s specified complexity][dembski-sc]: anyone who knows information theory can see that it’s a self-contradicting definition; you don’t need to be an expert in mathematical biology to see the problem – the problem is obvious in the math itself.
That fact in itself should be enough to utterly discredit Luskin’s argument: the so-called mathematicians that he’s so proud to have on his side aren’t even capable of putting together remotely competent mathematical arguments about evolution.
[luskin]: http://www.evolutionnews.org/2006/07/mathematicians_and_evolution.html
[gnxp]: http://scienceblogs.com/gnxp/2006/07/math_and_creation.php
[evblog1]: http://scienceblogs.com/evolutionblog/2006/07/are_mathematicians_qualified_t.php
[evblog2]: http://scienceblogs.com/evolutionblog/2006/07/are_mathematicians_qualified_t_1.php
[sf]: http://scienceblogs.com/strangerfruit/2006/07/more_on_mathematicians_1.php
[gm-good-ev]: http://scienceblogs.com/goodmath/2006/07/using_good_math_to_study_evolu.php
[woodie]: http://goodmath.blogspot.com/2006/06/more-aig-lying-with-statistics-john.html
[dembski-sc]: http://scienceblogs.com/goodmath/2006/06/dembskis_profound_lack_of_comp.php
Unofficial "Ask a ScienceBlogger": Childrens Books (UPDATED)
Over at fellow SBer [World’s Fair][worldsfair], they’ve put up an unofficial “Ask a ScienceBlogger” question, about children’s books:
Are there any children’s books that are dear to you, either as a child or a parent, and especially ones that perhaps strike a chord with those from a science sensibility? Just curious really. And it doesn’t have to be a picture book, doesn’t even have to be a children’s book – just a book that, for whatever reason, worked for you.
I’ve got two kids, a girl who’s almost six, and a boy who’s three. And they’re both showing serious signs of being pre-geeks. Whenever we go to a new place, the first thing they do is head for the bookshelves to see if there are any books they haven’t seen yet. My daughter’s school had a book fair last year, and we ended up spending a little over $100 on books for a kindergartener, and another $30 or so for the (then) 2yo. So obviously, I end up spending a lot of time reading children’s books!
There are a few books that really stand out in my mind as being *special*:
1. “Giraffes Can’t Dance”, by Giles Andreae, illustrated by Guy Parker-Rees. This isn’t a science book at all, but it’s simply one of the most wonderful children’s books I’ve seen. The story is wonderful, the rhythm and the rhyme structure are fantastic, and the art is bright and beautiful in a cartoonish sort of way.
2. “Our Family Tree: An Evolution Story”, by Lisa Westberg Peters, illustrated by Lauren Stringer. We bought this one for my daughter last December, after PZ recommended it on his blog. It’s a beautiful book – great art, and it’s actually really *compelling* for a child. Most kids’ science books have a kind of dull style to the writing; my daughter will generally want to read them once or twice, but once she understands what’s in them, she doesn’t want to read them again. But this one, she’s either read it or had it read to her at least fifty different times.
3. “Rumble in the Jungle, Commotion in the Ocean, et al”, by Giles Andreae, illustrated by David Wojtowycz. We started getting this series when my daughter was threeish, because it’s by the same author as “Giraffes”, and liked it so much that we have continued to get it for my son. Each book is about some environment and the animals that live in it. Each animal gets a little rhyme and a picture. The art is bright and colorful, and the rhymes are clever and very amusing to the kids.
UPDATE: I realized that I forgot one of *my* favorite books from my childhood: “The Lorax” by Dr. Seuss. In general, I’m not actually a huge Dr. Seuss fan: so many of his books are just rhyming nonsense. But “The Lorax” was one of my favorite books as a child; it turned me into a mini-environmentalist at the age of four. My son doesn’t quite get the book yet; my daughter definitely does. No list of science-ish kids’ books would be complete without it.
[worldsfair]: http://scienceblogs.com/worldsfair/2006/07/childrens_book_roundup_and_a_q.php
Using Good Math to Study Evolution Using Fitness Landscapes
Via [Migrations][migrations], I’ve found out about a really beautiful computational biology paper that very elegantly demonstrates how, contrary to the [assertions of bozos like Dembski][dembski-nfl], an evolutionary process can adapt to a fitness landscape. The paper was published in the PLoS journal “Computational Biology”, and it’s titled [“Evolutionary Potential of a Duplicated Repressor-Operator Pair: Simulating Pathways Using Mutation Data”][plos].
Here’s their synopsis of the paper:
>The evolution of a new trait critically depends on the existence of a path of
>viable intermediates. Generally speaking, fitness decreasing steps in this path
>hamper evolution, whereas fitness increasing steps accelerate it.
>Unfortunately, intermediates are hard to catch in action since they occur only
>transiently, which is why they have largely been neglected in evolutionary
>studies.
>
>The novelty of this study is that intermediate phenotypes can be predicted
>using published measurements of Escherichia coli mutants. Using this approach,
>the evolution of a small genetic network is simulated by computer. Following
>the duplication of one of its components, a new protein-DNA interaction
>develops via the accumulation of point mutations and selection. The resulting
>paths reveal a high potential to obtain a new regulatory interaction, in which
>neutral drift plays an almost negligible role. This study provides a
>mechanistic rationale for why such rapid divergence can occur and under which
>minimal selective conditions. In addition it yields a quantitative prediction
>for the minimum number of essential mutations.
And one more snippet, just to show where they’re going, and to try to encourage you to make the effort to get through the paper. This isn’t an easy read, but it’s well worth the effort.
>Here we reason that many characteristics of the adaptation of real protein-DNA
>contacts are hidden in the extensive body of mutational data that has been
>accumulated over many years (e.g., [12-14] for the Escherichia coli lac
>system). These measured repression values can be used as fitness landscapes, in
>which pathways can be explored by computing consecutive rounds of single base
>pair substitutions and selection. Here we develop this approach to study the
>divergence of duplicate repressors and their binding sites. More specifically,
>we focus on the creation of a new and unique protein-DNA recognition, starting
>from two identical repressors and two identical operators. We consider
>selective conditions that favor the evolution toward independent regulation.
>Interestingly, such regulatory divergence is inherently a coevolutionary
>process, where repressors and operators must be optimized in a coordinated
>fashion.
This is a gorgeous paper, and it shows how to do *good* math in the area of search-based modeling of evolution. Instead of the empty refrain of “it can’t work”, this paper presents a real model of a process, shows what it can do, and *makes predictions* that can be empirically verified to match observations. This, folks, is how it *should* be done.
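To get a feel for the general technique the paper describes (consecutive rounds of single base pair substitution plus selection over a fitness landscape), here’s a toy Python sketch. The fitness function below is invented purely for illustration; the paper uses published repression measurements as its landscape instead.

```python
import random

BASES = "ACGT"
TARGET = "ACCGTA"   # a pretend "ideal operator" site, for illustration only

def fitness(seq):
    # Toy landscape: count of positions matching the target site.
    return sum(a == b for a, b in zip(seq, TARGET))

def evolve(seq, generations=50):
    for _ in range(generations):
        i = random.randrange(len(seq))                  # single base substitution
        mutant = seq[:i] + random.choice(BASES) + seq[i+1:]
        if fitness(mutant) >= fitness(seq):             # selection step
            seq = mutant
    return seq

random.seed(1)
start = "TTTTTT"
end = evolve(start)
print(start, fitness(start), "->", end, fitness(end))
```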
[migrations]: http://migration.wordpress.com/2006/07/12/duplication_and_coevolutionary_modeling/
[dembski-nfl]: http://scienceblogs.com/goodmath/2006/06/dembski_and_no_free_lunch_with_2.php
[plos]: http://compbiol.plosjournals.org/perlserv/?request=get-document&doi=10.1371/journal.pcbi.0020058
GM/BM Friday: Pathological Programming Languages
In real life, I’m not a mathematician; I’m a computer scientist. Still a math geek, mind you, but what I really do is very much in the realm of applied math, researching how to build systems to help people program.
One of my pathological obsessions is programming languages. Since I first got exposed to TRS-80 Model 1 BASIC back in middle school, I’ve been absolutely nuts about programming languages. Last time I counted, I’d learned about 130 different languages; and I’ve picked up more since then. I’ve written programs in most of them. Like I said, I’m nuts.
Anyway, I decided that it would be amusing to inflict my obsession on you, my readers, with a new feature: the Friday pathological programming language. You see, there are plenty of *crazy* people out there; and many of them like to invent programming languages. Some very small number of them try to design good languages and succeed; a much larger number try to design good languages and fail; and *then* there are the folks who design the languages I’m going to talk about. They’re the ones who set out to design *bizarre* programming languages, and succeed brilliantly. They call them “esoteric” programming languages. I call them evil.
Today, the beautiful grand-daddy of the esoteric language family: the one, the only, the truly and deservedly infamous: [Brainfuck!][bf], designed by Urban Müller. (There are a number of different implementations available; just follow the link.)
Only 8 commands – including input and output – all written using symbols. And yet Turing complete; and not just Turing complete, but actually based on a *real* [formal theoretical design][pprimeprime]. And it’s even been implemented [*in hardware*!][bf-hard]
BrainFuck is based on something very much like a twisted cross between a [Turing machine][turing] and a [Minsky machine][minsky]. It’s got the idea of a tape, like the Turing machine. But unlike the Turing machine, each cell of the tape stores a number, which can be incremented or decremented, like a Minsky machine. And like a Minsky machine, the only control flow is a test for zero.
The 8 instructions:

1. **>**: move the tape head one cell forward.
2. **<**: move the tape head one cell backward.
3. **+**: increment the value of the current tape cell.
4. **-**: decrement the value of the current tape cell.
5. **.**: output the value of the current tape cell as a character.
6. **,**: read a character from input, and store its value in the current tape cell.
7. **[**: if the value of the current tape cell is zero, jump forward to the instruction after the matching “]”.
8. **]**: jump back to the matching “[”.

So, here’s “hello world” in BF:

++++++++[>+++++++++<-]>.<+++++[>++++++<-]>-.+++++++..+++.>++++[>++++++++<-]>.<+++++++++[>++++++++++<-]>---.--------.+++.------.--------.
Let’s pull that apart just a bit so that we can hope to understand.
* “++++++++”: store the number “8” in the current tape cell. We’re going to use that as a loop index, so the loop is going to repeat 8 times.
* “[>+++++++++<-]>.”: each pass through the loop adds nine to the cell after the loop index, and decrements the index; after the eight passes, that cell holds 72. Then we move to it and output it as a character: “H”.
* “<+++++[>++++++<-]>-.”: another little loop adds 30 to that cell, then we subtract one and output. That’s 101, or “e”.
It continues in pretty much the same vein, using a couple of tape cells, and running loops to generate the values of the characters. Beautiful, eh?
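BF is so minimal that writing an interpreter for it is almost easier than writing programs in it. Here’s a minimal sketch of one in Python – my own toy, not one of the implementations linked above:

```python
import sys

def bf(program, tape_len=30000):
    """A minimal Brainfuck interpreter: a byte tape, a data pointer,
    and eight commands. Any other character is treated as a comment."""
    # Precompute the matching positions of [ and ] pairs.
    stack, match = [], {}
    for i, c in enumerate(program):
        if c == '[':
            stack.append(i)
        elif c == ']':
            j = stack.pop()
            match[i], match[j] = j, i
    tape = [0] * tape_len
    ptr = pc = 0
    while pc < len(program):
        c = program[pc]
        if   c == '>': ptr += 1
        elif c == '<': ptr -= 1
        elif c == '+': tape[ptr] = (tape[ptr] + 1) % 256
        elif c == '-': tape[ptr] = (tape[ptr] - 1) % 256
        elif c == '.': sys.stdout.write(chr(tape[ptr]))
        elif c == ',':
            ch = sys.stdin.read(1)
            tape[ptr] = ord(ch) if ch else 0
        elif c == '[' and tape[ptr] == 0: pc = match[pc]
        elif c == ']' and tape[ptr] != 0: pc = match[pc]
        pc += 1

bf("++++++++[>+++++++++<-]>.")   # prints "H"
```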
If that didn’t seem impressive enough, [here][bf-fib] is a really gorgeous implementation of a Fibonacci sequence generator, with documentation. The BF compiler used to write this ignores any character other than the 8 commands, so the comments don’t need to be marked in any way; they just need to be really careful not to use punctuation.
+++++++++++ number of digits to output
> #1
+ initial number
>>>> #5
++++++++++++++++++++++++++++++++++++++++++++ (comma)
> #6
++++++++++++++++++++++++++++++++ (space)
<<<<< #1
copy #1 to #7
[>>>>>>+>+<<<<<<>>>>>>[<<<<<<>>>>>>-]
++++++++++ set the divisor #8
[ subtract from the dividend and divisor
->+>+<<>>[<<>>-]
set #10 + if #9 clear #10
[-][<>>+<<>[-]]
jump back to #8 (divisor possition)
<>> #11
copy to #13
[>>+>+<<>>[<<>>-]
set #14 + if #13 clear #14
[-][<>[-]]
<<<<<<>>>> #12
if #12 output value plus offset to ascii 0
[++++++++++++++++++++++++++++++++++++++++++++++++.[-]]
subtract #11 from 10
++++++++++ #12 is now 10 - #12
output #12 even if it's zero
++++++++++++++++++++++++++++++++++++++++++++++++.[-]
<<<<<<<<<<< #1
check for final number
copy #0 to #3
>>+>+<<<>>>[<<<>>>-]
>.>.<<<[-]]
<>+>+<<>>[<<>>-]<<[-]>[-]<<<-
]
[bf]: http://www.muppetlabs.com/~breadbox/bf/
[bf-fib]: http://esoteric.sange.fi/brainfuck/bf-source/prog/fibonacci.txt
[turing]: http://goodmath.blogspot.com/2006/03/playing-with-mathematical-machines.html
[minsky]: http://goodmath.blogspot.com/2006/05/minsky-machine.html
[bf-hard]: http://www.robos.org/?bfcomp
[pprimeprime]: http://en.wikipedia.org/wiki/P_prime_prime
Why I Hate Religious Bayesians
Last night, a reader sent me a link to yet another wretched attempt to argue for the existence of God using Bayesian probability. I really hate that. Over the years, I’ve learned to dread Bayesian arguments, because so many of them are things like this, where someone cobbles together a pile of nonsense, dressing it up with a gloss of mathematics by using Bayesian methods. Of course, it’s always based on nonsense data; but even in the face of a lack of data, you can cobble together a Bayesian argument by pretending to analyze things in order to come up with estimates.
You know, if you want to believe in God, go ahead. Religion is ultimately a matter of personal faith and spirituality. Arguments about the existence of God always ultimately come down to that. Why is there this obsessive need to justify your beliefs? Why must science and mathematics be continually misused in order to prop up your belief?
Anyway… Enough of my whining. Let’s get to the article. It’s by a guy named Robin Collins, and it’s called “God, Design, and Fine-Tuning“.
Let’s start right with the beginning.
>Suppose we went on a mission to Mars, and found a domed structure in which everything was set up just right for life to exist. The temperature, for example, was set around 70° F and the humidity was at 50%; moreover, there was an oxygen recycling system, an energy gathering system, and a whole system for the production of food. Put simply, the domed structure appeared to be a fully functioning biosphere. What conclusion would we draw from finding this structure? Would we draw the conclusion that it just happened to form by chance? Certainly not. Instead, we would unanimously conclude that it was designed by some intelligent being. Why would we draw this conclusion? Because an intelligent designer appears to be the only plausible explanation for the existence of the structure. That is, the only alternative explanation we can think of–that the structure was formed by some natural process–seems extremely unlikely. Of course, it is possible that, for example, through some volcanic eruption various metals and other compounds could have formed, and then separated out in just the right way to produce the “biosphere,” but such a scenario strikes us as extraordinarily unlikely, thus making this alternative explanation unbelievable.
>
>The universe is analogous to such a “biosphere,” according to recent findings in physics. Almost everything about the basic structure of the universe–for example, the fundamental laws and parameters of physics and the initial distribution of matter and energy–is balanced on a razor’s edge for life to occur. As eminent Princeton physicist Freeman Dyson notes, “There are many . . . lucky accidents in physics. Without such accidents, water could not exist as liquid, chains of carbon atoms could not form complex organic molecules, and hydrogen atoms could not form breakable bridges between molecules” (1979, p.251)–in short, life as we know it would be impossible.
Yes, it’s the good old ID argument about “It looks designed, so it must be”. That’s the basic argument all the way through; they just dress it up later. And as usual, it’s wrapped up in one incredibly important assumption, which they cannot and do not address: that we understand what it would mean to change the fundamental structure of the universe.
What would it mean to change, say, the ratio of the strengths of the electromagnetic force and gravity? What would matter look like if we did? Would stars be able to exist? Would matter be able to form itself into the kinds of complex structures necessary for life?
We don’t know. In fact, we don’t even really have a clue. And not knowing that, we cannot meaningfully make any argument about how likely it is for the universe to support life.
They do pretend to address this:
>Various calculations show that the strength of each of the forces of nature must fall into a very small life-permitting region for intelligent life to exist. As our first example, consider gravity. If we increased the strength of gravity on earth a billionfold, for instance, the force of gravity would be so great that any land-based organism anywhere near the size of human beings would be crushed. (The strength of materials depends on the electromagnetic force via the fine-structure constant, which would not be affected by a change in gravity.) As astrophysicist Martin Rees notes, “In an imaginary strong gravity world, even insects would need thick legs to support them, and no animals could get much larger.” (Rees, 2000, p. 30). Now, the above argument assumes that the size of the planet on which life formed would be an earth-sized planet. Could life forms of comparable intelligence to ourselves develop on a much smaller planet in such a strong-gravity world? The answer is no. A planet with a gravitational pull of a thousand times that of earth – which would make the existence of organisms of our size very improbable – would have a diameter of about 40 feet or 12 meters, once again not large enough to sustain the sort of large-scale ecosystem necessary for organisms like us to evolve. Of course, a billion-fold increase in the strength of gravity is a lot, but compared to the total range of strengths of the forces in nature (which span a range of 10^40 as we saw above), this still amounts to a fine-tuning of one part in 10^31. (Indeed, other calculations show that stars with life-times of more than a billion years, as compared to our sun’s life-time of ten billion years, could not exist if gravity were increased by more than a factor of 3000. This would have significant intelligent life-inhibiting consequences.) (3)
Does this really address the problem? No. How would matter be different if gravity were a billion times stronger, and EM didn’t change? We don’t know. For the sake of this argument, they pretend that mucking about with those ratios wouldn’t alter the nature of matter at all. That’s what they’re going to build their argument on: the universe must support life exactly like us: it’s got to be carbon-based life on a planetary surface that behaves exactly like matter does in our universe. In other words: if you assume that everything has to be exactly as it is in our universe, then only our universe is suitable.
They babble on about this for quite some time; let’s skip forwards a bit, to where they actually get to the Bayesian stuff. What they want to do is use the likelihood principle to argue for design. (Of course, they need to obfuscate, so they cite it under three different names, and finally use the term “the prime principle of confirmation” – after all, it sounds much more convincing than “the likelihood principle”!)
The likelihood principle is a variant of Bayes’ theorem, applied to experimental systems. The basic idea of it is to take the Bayesian principle of modifying an event probability based on a prior observation, and to apply it backwards to allow you to reason about the probability of two possible priors given a final observation. In other words, take the usual Bayesian approach of asking: “Given that Y has already occurred, what’s the probability of X occurring?”; turn it around, and say “X occurred. For it to have occurred, either Y or Z must have occurred as a prior. Given X, what are the relative probabilities for Y and Z as priors?”
There is some controversy over when the likelihood principle is applicable. But let’s ignore that for now.
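To make the machinery concrete, here’s the arithmetic in a trivial Python sketch. The probabilities below are invented placeholders, which is exactly the trouble with arguments like this one: a likelihood-ratio argument is only as good as the numbers you feed it.

```python
# Likelihood-ratio arithmetic for an observation X and two candidate
# hypotheses Y and Z. All numbers here are made up for illustration.
p_x_given_y = 0.9     # P(X | Y)
p_x_given_z = 0.001   # P(X | Z)

likelihood_ratio = p_x_given_y / p_x_given_z
print(likelihood_ratio)    # 900.0 -- "X strongly favors Y over Z"

# But turning that into P(Y | X) requires priors on Y and Z, and with a
# small enough prior on Y the posterior odds flip entirely:
p_y, p_z = 1e-6, 0.999
posterior_odds = (p_x_given_y * p_y) / (p_x_given_z * p_z)
print(posterior_odds)      # ~0.0009 -- Y is still the far less likely hypothesis
```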
>To further develop the core version of the fine-tuning argument, we will summarize the argument by explicitly listing its two premises and its conclusion:
>
>Premise 1. The existence of the fine-tuning is not improbable under theism.
>
>Premise 2. The existence of the fine-tuning is very improbable under the atheistic single-universe hypothesis. (8)
>
>Conclusion: From premises (1) and (2) and the prime principle of confirmation, it follows that the fine-tuning data provides strong evidence to favor of the design hypothesis over the atheistic single-universe hypothesis.
>
>At this point, we should pause to note two features of this argument. First, the argument does not say that the fine-tuning evidence proves that the universe was designed, or even that it is likely that the universe was designed. Indeed, of itself it does not even show that we are epistemically warranted in believing in theism over the atheistic single-universe hypothesis. In order to justify these sorts of claims, we would have to look at the full range of evidence both for and against the design hypothesis, something we are not doing in this paper. Rather, the argument merely concludes that the fine-tuning strongly supports theism over the atheistic single-universe hypothesis.
That’s pretty much their entire argument. That’s as mathematical as it gets. Doesn’t stop them from arguing that they’ve mathematically demonstrated that theism is a better hypothesis than atheism, but that’s really their whole argument.
Here’s how they argue for their premises:
>Support for Premise (1).
>
>Premise (1) is easy to support and fairly uncontroversial. The argument in support of it can be simply stated as follows: since God is an all good being, and it is good for intelligent, conscious beings to exist, it not surprising or improbable that God would create a world that could support intelligent life. Thus, the fine-tuning is not improbable under theism, as premise (1) asserts.
Classic creationist gibberish: pretty much the same stunt that Swinburne pulled. They pretend that there are only two possibilities. Either (a) there’s exactly one God which has exactly the properties that Christianity attributes to it; or (b) there are no gods of any kind.
They’ve got to stick to that – because if they admitted more than two possibilities, they’d have to actually consider why their deity is more likely that any of the other possibilities. They can’t come up with an argument that Christianity is better than atheism if they acknowledge that there are thousands of possibilities as likely as theirs.
>Support for Premise (2).
>
>Upon looking at the data, many people find it very obvious that the fine-tuning is highly improbable under the atheistic single-universe hypothesis. And it is easy to see why when we think of the fine-tuning in terms of the analogies offered earlier. In the dart-board analogy, for example, the initial conditions of the universe and the fundamental constants of physics can be thought of as a dart-board that fills the whole galaxy, and the conditions necessary for life to exist as a small one-foot wide target. Accordingly, from this analogy it seems obvious that it would be highly improbable for the fine-tuning to occur under the atheistic single-universe hypothesis–that is, for the dart to hit the board by chance.
Yeah, that’s pretty much it. That’s the whole argument for why fine-tuning is less probable in a universe without a deity than in a universe with one: because “many people find it obvious”, and because they’ve got a clever dartboard analogy.
They make a sort of token effort to address the obvious problems with this, but they’re really all nothing but more empty hand-waving. I’ll just quote one of them as an example; you can follow the link to the article to see the others if you feel like giving yourself a headache.
>Another objection people commonly raise against the fine-tuning argument is that as far as we know, other forms of life could exist even if the constants of physics were different. So, it is claimed, the fine-tuning argument ends up presupposing that all forms of intelligent life must be like us. One answer to this objection is that many cases of fine-tuning do not make this presupposition. Consider, for instance, the cosmological constant. If the cosmological constant were much larger than it is, matter would disperse so rapidly that no planets, and indeed no stars could exist. Without stars, however, there would exist no stable energy sources for complex material systems of any sort to evolve. So, all the fine-tuning argument presupposes in this case is that the evolution of life forms of comparable intelligence to ourselves requires some stable energy source. This is certainly a very reasonable assumption.
>
>Of course, if the laws and constants of nature were changed enough, other forms of embodied intelligent life might be able to exist of which we cannot even conceive. But this is irrelevant to the fine-tuning argument since the judgement of improbability of fine-tuning under the atheistic single-universe hypothesis only requires that, given our current laws of nature, the life-permitting range for the values of the constants of physics (such as gravity) is small compared to the surrounding range of non-life-permitting values.
Like I said at the beginning: the argument comes down to a hand-wave that if the universe didn’t turn out exactly like ours, it must be no good. Why does a lack of hydrogen fusion stars like we have in our universe imply that there can be no other stable energy source? Why is it reasonable to constrain the life-permitting properties of the universe to be narrow based on the observed properties of the laws of nature as observed in our universe?
Their argument? Just because.
Protecting the Homeland: the Terrorists' Target List
Longtime readers of GM/BM will remember [this post][homeland], where I discussed the formula used by the Department of Homeland Security for allocating anti-terrorism funds. At the time, I explained:
>It turns out that the allocation method was remarkably simple. In their
>applications for funding, cities listed assets that they needed to protect.
>What DHS did was take the number of listed assets from all of the cities that
>were going to be recipients of funds, and give each city an amount of funding
>proportional to the number of assets they listed.
>
>So, the Empire State building is equal to the neighborhood bank in Omaha. The
>stock exchange on Wall Street is equal to the memorial park in Anchorage,
>Alaska. Mount Sinai hospital is equal to the county hospital in the suburbs of
>Toledo, Ohio. The New York subway system (18.5 billion passenger-miles per
>year) is equal to the Minneapolis transit system (283 million passenger-miles
>per year). The Brooklyn Bridge is equal the George Street bridge in New
>Brunswick, NJ.
Well, according to the [New York Times][nyt] (login required), it appears that I gave *too much credit* to the DHS. They weren’t saying that, for example, Wall Street was equivalent to the memorial park in Anchorage. What they were saying is that the Wall Street stock exchange is equivalent to the Mule Day Parade in Columbia, Tennessee; Mt. Sinai hospital is equivalent to an unnamed donut shop; and the Macy’s Thanksgiving parade is equivalent to the Bean Fest in Mountain View, Arkansas.
Questioned about the foolishness of this insane list, a DHS spokesperson responded “We don’t find it embarrassing, the list is a valuable tool.”
Don’t you feel safer now that you know how the government is using what they keep stressing is *your money* to protect you?
[homeland]: http://goodmath.blogspot.com/2006/06/astoundingly-stupid-math-bullshit-of_02.html
[nyt]: http://www.nytimes.com/2006/07/12/washington/12assets.html?_r=1&oref=login