As I said in the last post, Church came up with λ-calculus, which looks like a great formal model of computation. But there was a problem: Church struggled to find a model for it. What’s a model, and why would that matter? That’s the point of this post: to give you a quick sense of what a model is, and why it matters.
A model is basically a mapping from the symbols of a logical system to some set of objects, such that all statements that you can prove in the logical system will be true about the corresponding objects. Note that when I say object here, I don’t necessarily mean real-world physical objects – they’re just something that we can work with, which is well-defined and consistent.
Why does it matter? Because the whole point of a system like λ-calculus is that we want to use it for reasoning. When you have a logical system like λ-calculus, you’ve built this system with its rules for a reason: you want to use it as a tool for understanding something. The model provides you with a way of saying that the conclusions you derive using the system are meaningful. If there’s no valid model, if the system contains any kind of inconsistency, then your system is completely meaningless: it can be used to derive anything.
So the search for a model for λ-calculus is really important. If there’s a valid model for it, then it’s wonderful. If there isn’t, then we’re just wasting our time looking for one.
So, now, let’s take a quick look at a simple model, to see how a problem can creep in. I’m going to build a logic for talking about the natural numbers – that is, integers greater than or equal to zero. Then I’ll show you how invalid results can be inferred using it; and finally show you how it fails by using the model.
One quick thing, to make the notation easier to read: I’m going to use a simple notion of types. A type is a set of atoms for which some particular one-parameter predicate is true. For example, if P(x) is true, I’ll say that x is a member of type P. In a quantifier, I’ll say things like ∀x ∈ P : … to mean ∀x : P(x) ⇒ …. Used this way, we can say that P is a type predicate.
How do we define natural numbers using logic?
First, we need an infinite set of atoms, each of which represents one number. We pick one of them, and call it zero. To represent the fact that they’re natural numbers, we define a predicate N(x), which is true if and only if x is one of the atoms that represents a natural number.
Now, we need to start using predicates to define the fundamental properties of numbers. The most important property of natural numbers is that they are a sequence. We define that idea using a predicate, Successor(x, y), where Successor(x, y) is true if and only if x = y + 1. To use that to define the ordering of the naturals, we can say: ∀x ∈ N : ∃y ∈ N : Successor(y, x).
Or in English: every natural number has a successor – you can always add one to a natural number and get another natural number.
We can also define predecessor similarly, with two statements:

∀x ∈ N : ∃y ∈ N : Predecessor(y, x)

∀x, y ∈ N : Predecessor(y, x) ⇔ Successor(x, y)
So every number has a predecessor, and every number has a successor, and x is the predecessor of y if y is the successor of x.
To be able to define things like addition and subtraction, we can use successor. Let’s define addition using a predicate Sum(x, y, z) which means “z = x + y”:

∀x, y ∈ N : ∃z ∈ N : Sum(x, y, z)

∀x ∈ N : Sum(x, 0, x)

∀x, y, z ∈ N : Sum(x, y, z) ⇔ ∃x′, y′ ∈ N : Successor(x′, x) ∧ Predecessor(y′, y) ∧ Sum(x′, y′, z)

Again, in English: for any two natural numbers, there is a natural number that is their sum; x + 0 always equals x; and for any natural numbers, x + y = z is true if (x + 1) + (y − 1) = z.
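The axioms above can be animated as a toy check over ordinary Python integers. This is just a sketch, and the predicate names are my own choices for illustration; a real logical system would derive these facts with inference rules, not by evaluating them:

```python
def successor(x, y):
    """Successor(x, y): x is the successor of y, i.e. x = y + 1."""
    return x == y + 1

def sum_pred(x, y, z):
    """Sum(x, y, z): z = x + y, checked via the recursive axiom
    Sum(x, y, z) iff Sum(x + 1, y - 1, z), with Sum(x, 0, x) as base."""
    if y == 0:
        return z == x
    return sum_pred(x + 1, y - 1, z)

assert successor(5, 4)        # 5 = 4 + 1
assert sum_pred(3, 4, 7)      # 3 + 4 = 7
assert not sum_pred(3, 4, 8)  # 3 + 4 != 8
```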
There’s one kind of crank that I haven’t really paid much attention to on this blog, and that’s the real number cranks. I’ve touched on real number crankery in my little encounter with John Gabriel, and back in the old 0.999…=1 post, but I’ve never really given them the attention that they deserve.
There are a huge number of people who hate the logical implications of our definition of the real numbers, and who insist that those unpleasant complications mean that our concept of real numbers is based on a faulty definition, or even that the whole concept of real numbers is ill-defined.
This is an underlying theme of a lot of Cantor crankery, but it goes well beyond that. And the basic problem underlies a lot of bad mathematical arguments. The root of this particular problem comes from a confusion between the representation of a number, and that number itself. “1/2” isn’t a number: it’s a notation that we understand refers to the number that you get by dividing one by two.
There’s a similar form of looniness that you get from people who dislike the set-theoretic construction of numbers. In classic set theory, you can construct the set of integers by starting with the empty set, which is used as the representation of 0. Then the set containing the empty set is the value 1 – so 1 is represented as { 0 }. Then 2 is represented as { 1, 0 }; 3 as { 2, 1, 0 }; and so on. (There are several variations of this, but this is the basic idea.) You’ll see arguments from people who dislike this saying things like “This isn’t a construction of the natural numbers, because you can take the intersection of 8 and 3, and set intersection is meaningless on numbers.” The problem with that is the same as the problem with the notational crankery: the set theoretic construction doesn’t say “the empty set is the value 0”, it says “in a set theoretic construction, the empty set can be used as a representation of the number 0.”
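The construction is concrete enough to run. Here’s a small sketch using Python frozensets, encoding n+1 as n ∪ {n}, which yields exactly the { 0 }, { 1, 0 }, { 2, 1, 0 }… representations described above (the function name is mine):

```python
def nat(n):
    """The set-theoretic representation of n: 0 is {}, and n+1 is n U {n}."""
    result = frozenset()
    for _ in range(n):
        result = result | frozenset([result])
    return result

assert nat(0) == frozenset()
assert len(nat(3)) == 3        # 3 is represented by the 3-element set {2, 1, 0}
assert nat(2) in nat(3)        # 2 is literally a member of 3 in this encoding
# The "meaningless" intersection of 8 and 3 is perfectly well-defined on
# the representations -- it's just the representation of 3:
assert nat(8) & nat(3) == nat(3)
```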
The particular version of this crankery that I’m going to focus on today is somewhat related to the inverse-19 loonies. If you recall their monument, the plaque talks about how their work was praised by a math professor by the name of Edgar Escultura. Well, it turns out that Escultura himself is a bit of a crank.
The specific manifestation of his crankery is this representational issue. But the root of it is really related to the discomfort that many people feel at some of the conclusions of modern math.
A lot of what we’ve learned about math has turned out to be non-intuitive. There’s Cantor, and Gödel, of course: there are lots of different sizes of infinities; and there are mathematical statements that are neither true nor false. And there are all sorts of related things – for example, the whole idea of undescribable numbers. Undescribable numbers drive people nuts. An undescribable number is a number which has the property that there’s absolutely no way that you can write it down, ever. Not that you can’t write it in, say, base-10 decimals, but that you can’t ever write down anything, in any form, that uniquely describes it. And, it turns out, the vast majority of numbers are undescribable.
This leads to the representational issue. Many people insist that if you can’t represent a number, that number doesn’t really exist: it’s nothing but an artifact of a flawed definition. Therefore, by this argument, those numbers don’t exist; the only reason that we think that they do is because the real numbers are ill-defined.
This kind of crackpottery isn’t limited to stupid people. Professor Escultura isn’t a moron – but he is a crackpot. What he’s done is take the representational argument, and run with it. According to him, the only real numbers are numbers that are representable. What he proposes is very nearly a theory of computable numbers – but he tangles it up in the representational issue. And in a fascinatingly ironic turn-around, he takes the artifacts of representational limitations, and insists that they represent real mathematical phenomena – resulting in an ill-defined number theory as a way of correcting what he alleges is an ill-defined number theory.
In the New Real Numbers, which he notates as R*, the decimal notation is fundamental. The set of new real numbers consists exactly of the set of numbers with finite representations in decimal form. This leads to some astonishingly bizarre things. From his paper:
3) Then the inverse operation to multiplication called division; the result of dividing a decimal by another if it exists is called quotient provided the divisor is not zero. Only when the integral part of the devisor is not prime other than 2 or 5 is the quotient well defined. For example, 2/7 is ill defined because the quotient is not a terminating decimal (we interpret a fraction as division).
So 2/7ths is not a new real number: it’s ill-defined. 1/3 isn’t a real number: it’s ill-defined.
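His “well-defined” quotients are exactly the fractions with terminating decimal expansions, and that’s easy to test: reduce the fraction, then check whether the denominator has any prime factor other than 2 or 5. A quick sketch (the function name is mine):

```python
from math import gcd

def has_terminating_decimal(p, q):
    """True iff p/q can be written as a finite decimal: after reducing,
    the denominator's only prime factors are 2 and 5."""
    q //= gcd(p, q)            # reduce the fraction first
    for factor in (2, 5):
        while q % factor == 0:
            q //= factor
    return q == 1

assert has_terminating_decimal(1, 2)       # 0.5
assert has_terminating_decimal(3, 40)      # 0.075
assert not has_terminating_decimal(2, 7)   # 0.285714285714... repeats
assert not has_terminating_decimal(1, 3)   # 0.333... repeats
```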
4) Since a decimal is determined or well-defined by its digits, nonterminating decimals are ambiguous or ill-defined. Consequently, the notion irrational is ill-defined since we cannot cheeckd all its digits and verify if the digits of a nonterminaing decimal are periodic or nonperiodic.
After that last one, this isn’t too surprising. But it’s still absolutely amazing. The square root of two? Ill-defined: it doesn’t really exist. e? Ill-defined: it doesn’t exist. π? Ill-defined: it doesn’t really exist. All of those triangles, circles, everything that depends on π? They’re all bullshit according to Escultura. Because if he can’t write them down on a piece of paper in decimal notation in a finite amount of time, they don’t exist.
Of course, this is entirely too ridiculous, so he backtracks a bit, and defines a non-terminating decimal number. His definition is quite peculiar; I can’t say that I really follow it. I think this may be a language issue – Escultura isn’t a native English speaker. I’m not sure which parts of this are crackpottery, which are linguistic struggles, and which are notational difficulties in reading math rendered as plain text.
5) Consider the sequence of decimals,
(d)^na_1a_2…a_k, n = 1, 2, …, (1)
where d is any of the decimals, 0.1, 0.2, 0.3, …, 0.9, a_1, …, a_k, basic integers (not all 0 simultaneously). We call the nonstandard sequence (1) d-sequence and its nth term nth d-term. For fixed combination of d and the a_j’s, j = 1, …, k, in (1) the nth term is a terminating decimal and as n increases indefinitely it traces the tail digits of some nonterminating decimal and becomes smaller and smaller until we cannot see it anymore and indistinguishable from the tail digits of the other decimals (note that the nth d-term recedes to the right with increasing n by one decimal digit at a time). The sequence (1) is called nonstandard d-sequence since the nth term is not standard g-term; while it has standard limit (in the standard norm) which is 0 it is not a g-limit since it is not a decimal but it exists because it is well-defined by its nonstandard d-sequence. We call its nonstandard g-limit dark number and denote by d. Then we call its norm d-norm (standard distance from 0) which is d > 0. Moreover, while the nth term becomes smaller and smaller with indefinitely increasing n it is greater than 0 no matter how large n is so that if x is a decimal, 0 < d < x.
I think that what he’s trying to say there is that a non-terminating decimal is a sequence of finite representations that approach a limit. So there’s still no real infinite representations – instead, you’ve got an infinite sequence of finite representations, where each finite representation in the sequence can be generated from the previous one. This bit is why I said that this is nearly a theory of the computable numbers. Obviously, undescribable numbers can’t exist in this theory, because you can’t generate this sequence.
Where this really goes totally off the rails is that throughout this, he’s working on the assumption that there’s a one-to-one relationship between representations and numbers. That’s what that “dark number” stuff is about. You see, in Escultura’s system, 0.999999… is not equal to one. It’s not a representational artifact. In Escultura’s system, there are no representational artifacts: the representations are the numbers. The “dark number”, which he notates as d*, is (1 − 0.99999999…), and is the smallest number greater than 0. And you can generate a complete ordered enumeration of all of the new real numbers.
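The “dark number” intuition is easy to probe with exact rational arithmetic. In the ordinary reals, the gap 1 − 0.999…9 (n nines) is exactly 10⁻ⁿ, which drops below any positive number you name, so there’s no smallest positive remainder left over for d* to be. A small sketch (the function name is mine):

```python
from fractions import Fraction

def gap(n):
    """The exact value of 1 - 0.999...9 (n nines)."""
    nines = Fraction(10**n - 1, 10**n)   # 0.999...9 with n nines
    return 1 - nines

assert gap(3) == Fraction(1, 1000)       # 1 - 0.999 = 0.001 exactly

# Whatever tiny positive candidate you propose for a "smallest number
# greater than 0", some finite truncation's gap is already smaller:
candidate = Fraction(1, 10**100)
assert any(gap(n) < candidate for n in range(1, 200))
```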
Reading Escultura, every once in a while, you might think he’s joking. For example, he claims to have disproven Fermat’s last theorem. Fermat’s last theorem says that for n > 2, there are no positive integer solutions to the equation x^n + y^n = z^n. Escultura says he’s disproven this:
The exact solutions of Fermat’s equation, which are the counterexamples to FLT, are given by the triples (x,y,z) = ((0.99…)10^T,d*,10^T), T = 1, 2, …, that clearly satisfies Fermat’s equation,
x^n + y^n = z^n, (4)
for n = NT > 2. Moreover, for k = 1, 2, …, the triple (kx,ky,kz) also satisfies Fermat’s equation. They are the countably infinite counterexamples to FLT that prove the conjecture false. One counterexample is, of course, sufficient to disprove a conjecture.
Even if you accept the reality of the notational artifact d*, this makes no sense: the point of Fermat’s last theorem is that there are no integer solutions; (0.99…)10^T is not an integer; d* is not an integer. Surely he’s not that stupid. Surely he can’t possibly believe that he’s disproven Fermat using non-integer solutions? I mean, how is this different from just claiming that you can use (2, 3, 35^(1/3)) as a counterexample for n = 3?
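To see why a non-integer triple proves nothing, here’s that rhetorical n = 3 example checked in floating point (within rounding error):

```python
# 2^3 + 3^3 = 35, so z = 35^(1/3) "satisfies" x^3 + y^3 = z^3 --
# but z is not an integer, so it says nothing at all about Fermat's
# Last Theorem, which quantifies over integers only.
x, y = 2, 3
z = (x**3 + y**3) ** (1 / 3)             # cube root of 35, about 3.271

assert abs(x**3 + y**3 - z**3) < 1e-9    # the equation holds numerically
assert z != round(z)                      # ...but z is not an integer
```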
But… he’s serious. He’s serious enough that he’s published a real paper making the claim (albeit in crackpot journals, which are the only places that would accept this rubbish).
Anyway, jumping back for a moment… You can create a theory of numbers around this rubbish. The problem is, it’s not a particularly useful theory. Why? Because it breaks some of the fundamental properties that we expect numbers to have. The real numbers define a structure called a field, and a huge amount of what we really do with numbers is built on the fundamental properties of the field structure. One of the necessary properties of a field is that it has unique identity elements for addition and multiplication. If you don’t have unique identities, then everything collapses.
So… Take 1/9. That’s the multiplicative inverse of 9. So, by definition, 1/9 × 9 = 1, the multiplicative identity.
In Escultura’s theory, 1/9 is a shorthand for the number that has a representation of 0.1111…. So, 0.1111… × 9 = 0.9999…. So 0.9999… is also a multiplicative identity. By a similar process, you can show that d* itself must be the additive identity. So either 0.9999… = 1, or else you’ve lost the field structure, and with it, pretty much all of real number theory.
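You can watch the identity argument play out with exact rational arithmetic. A sketch using Python’s fractions module (the 20-digit truncation is an arbitrary choice of mine):

```python
from fractions import Fraction

# In ordinary arithmetic, 1/9 is a number, and multiplying by 9 gives
# back the multiplicative identity exactly:
assert Fraction(1, 9) * 9 == 1

# But if numbers are identified with finite decimal strings, then "1/9"
# can only ever be a truncation 0.111...1, and 9 times that truncation
# is 0.999...9, which falls short of 1 by exactly 10^-n:
n = 20
truncated_ninth = Fraction(10**n // 9, 10**n)     # 0.111...1 (20 digits)
assert truncated_ninth * 9 != 1
assert 1 - truncated_ninth * 9 == Fraction(1, 10**n)
```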
One of the things that’s endlessly fascinating to me about math and
science is the way that, no matter how much we know, we’re constantly
discovering more things that we don’t know. Even in simple, fundamental
areas, there’s always a surprise waiting just around the corner.
A great example of this is something called the Ulam spiral,
named after Stanislaw Ulam, who first noticed it. Take a sheet of graph paper.
Put “1” in some square. Then, spiral out from there, putting one number in
each square. Then circle each of the prime numbers. Like the following:
If you do that for a while – and zoom out, so that you can’t see the numbers, but just dots for each circled number – what you’ll get will look something like
this:
That’s the Ulam spiral filling a 200×200 grid. Look at how many diagonal
line segments you get! And look how many diagonal line segments occur along
the same lines! Why do the prime numbers tend to occur in clusters
along the diagonals of this spiral? I don’t have a clue. Nor, to my knowledge,
does anyone else!
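The spiral construction is easy enough to sketch in code. This is a rough illustration, not anything from the post itself (the grid walk and the trial-division primality test are my own choices): it lays out n = 1…40000 on a square spiral and counts how many primes land exactly on the two main diagonals.

```python
def spiral_coords(limit):
    """Map n = 1..limit to (x, y) positions on a square spiral,
    starting with 1 at the origin and walking outward."""
    coords = {}
    x = y = 0
    n = 1
    step = 1
    directions = [(1, 0), (0, 1), (-1, 0), (0, -1)]  # right, up, left, down
    d = 0
    while n <= limit:
        for _ in range(2):            # each step length is used twice
            dx, dy = directions[d % 4]
            for _ in range(step):
                if n > limit:
                    return coords
                coords[n] = (x, y)
                x, y = x + dx, y + dy
                n += 1
            d += 1
        step += 1
    return coords

def is_prime(n):
    """Simple trial-division primality test."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

coords = spiral_coords(200 * 200)
on_diagonal = sum(1 for n, (x, y) in coords.items()
                  if is_prime(n) and abs(x) == abs(y))
print("primes on the two main diagonals:", on_diagonal)
```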
And it gets even a bit more surprising: you don’t need to start
the spiral with one. You can start it with one hundred, or seventeen thousand. If
you draw the spiral, you’ll find primes along diagonals.
Intuitions about it are almost certainly wrong. For example, when I first
thought about it, I tried to find a numerical pattern around the diagonals.
There are lots of patterns. For example, one of the simplest ones is
that an awful lot of primes occur along the set of lines
f(n) = 4n² + bn + c, for a variety of values of b and c. But what does
that tell you? Alas, not much. Why do so many primes occur along
those families of lines?
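You can count primes along these quadratic families directly. A sketch (the particular (b, c) pairs are my own picks for illustration; b = 2, c = 41 gives (2n)² + (2n) + 41, i.e. Euler’s famously prime-rich polynomial n² + n + 41 sampled at even n):

```python
def is_prime(n):
    """Simple trial-division primality test."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def prime_count(b, c, terms=100):
    """How many of f(0)..f(terms-1) are prime, for f(n) = 4n^2 + bn + c."""
    return sum(1 for n in range(terms) if is_prime(4 * n * n + b * n + c))

# The first twenty values of 4n^2 + 2n + 41 are all prime:
assert prime_count(2, 41, terms=20) == 20
print(prime_count(2, 41), "of the first 100 values of 4n^2 + 2n + 41 are prime")
```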
You can make the effect even more prominent by making the spiral
a bit more regular. Instead of graph paper, draw an archimedean spiral – that
is, the classic circular spiral path. Each revolution around the circle, evenly
distribute the numbers up to the next perfect square. So the first spiral will have 2, 3, 4;
the next will have 5, 6, 7, 8, 9. And so on. What you’ll wind up with is
called the Sacks spiral, which looks like this:
This has been cited by some religious folks as being a proof of the
existence of God. Personally, I think that that’s silly; my personal
belief is that even a deity can’t change the way the numbers work: the
nature of the numbers and how they behave is inescapable. Even a deity who
could create the universe couldn’t make 4 a prime number.
Even just working with simple integers, and as simple a concept as
the prime numbers, there are still surprises waiting for us.
I was planning on ignoring this one, but tons of readers have been writing
to me about the latest inanity spouting from the keyboard of Discovery
Institute’s flunky, Denyse O’Leary.
Here’s what she had to say:
Even though I am not a creationist by any reasonable definition,
I sometimes get pegged as the local gap tooth creationist moron. (But then I
don’t have gaps in my teeth either. Check unretouched photos.)
As the best gap tooth they could come up with, a local TV station interviewed
me about “superstition” the other day.
The issue turned out to be superstition related to numbers. Were they hoping
I’d fall in?
The skinny: Some local people want their house numbers changed because they
feel the current number assignment is “unlucky.”
Look, guys, numbers here are assigned on a strict directional rota. If the
number bugs you so much, move.
Don’t mess up the street directory for everyone else. Paramedics, fire chiefs,
police chiefs, et cetera, might need a directory they can make sense of. You
might be glad for that yourself one day.
Anyway, I didn’t get a chance to say this on the program so I will now: No
numbers are evil or unlucky. All numbers are – in my view – created by God to
march in a strict series or else a discoverable* series, and that is what
makes mathematics possible. And mathematics is evidence for design, not
superstition.
The interview may never have aired. I tend to flub the gap-tooth creationist
moron role, so interviews with me are often not aired.
* I am thinking here of numbers like pi, that just go on and on and never
shut up, but you can work with them anyway.(You just decide where you want
to cut the mike.)
In my Dembski rant, I used a metaphor involving the undescribable numbers. An interesting confusion came up in the comments about just what that meant. Instead of answering it with a comment, I decided that it justified a post of its own. It’s a fascinating topic which is incredibly counter-intuitive. To me, it’s one of the great examples of how utterly wrong our
intuitions can be.
Numbers are, obviously, very important. And so, over the ages, we’ve invented lots of notations that allow us to write those numbers down: the familiar Arabic notation, Roman numerals, fractions, decimals, continued fractions, algebraic series, etc. I could easily spend months on this blog just writing about different notations that we use to write numbers, and the benefits and weaknesses of each notation.
But the fact is, the vast, overwhelming majority of numbers cannot be written
down in any form.
That statement seems bizarre at best, but it does actually make sense. For it to make sense, though, we have to start at the very beginning: what does it mean for a number to be describable?
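Half of the answer can even be sketched in code. Whatever “describable” ends up meaning precisely, a description is a finite string over some finite alphabet, and all such strings can be enumerated shortest-first, so there are only countably many possible descriptions; since Cantor showed the reals are uncountable, almost all reals get no description at all. A sketch of the enumeration (the two-letter alphabet is an arbitrary choice):

```python
from itertools import count, product

def all_strings(alphabet="ab"):
    """Yield every finite string over the alphabet, shortest first."""
    for length in count(1):
        for chars in product(alphabet, repeat=length):
            yield "".join(chars)

gen = all_strings()
first_six = [next(gen) for _ in range(6)]
assert first_six == ["a", "b", "aa", "ab", "ba", "bb"]
# Every possible description appears at some finite position in this
# list -- which is exactly what it means for them to be countable.
```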
After my post the other day about rounding errors, I got a ton of
requests to explain the idea of significant figures. That’s
actually a very interesting topic.
The idea of significant figures is that when you’re doing
experimental work, you’re taking measurements – and measurements
always have a limited precision. The fact that your measurements – the
inputs to any calculation or analysis that you do – have limited
precision, means that the results of your calculations likewise have
limited precision. Significant figures (or significant digits, or just “sigfigs” for short) are a method of tracking measurement
precision, in a way that allows you to propagate your precision limits
throughout your calculation.
Before getting to the rules for sigfigs, it’s helpful to show why
they matter. Suppose that you’re measuring the radius of a circle, in
order to compute its area. You take a ruler, and eyeball it, and end
up with the circle’s radius as about 6.2 centimeters. Now you go to
compute the area: π=3.141592653589793… So what’s the area of the
circle? If you do it the straightforward way, you’ll end up with a
result of 120.76282160399165 cm².
The problem is, your original measurement of the radius was
far too crude to produce a result of that precision. The real
area of the circle could easily be as high as 128, or as low as
113, assuming typical measurement errors. So claiming that your
measurements produced an area calculated to 17 digits of precision is
just ridiculous.
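The range quoted above follows directly from propagating the measurement error. A sketch (the ±0.2 cm uncertainty is my assumption about what eyeballing a ruler gives you):

```python
from math import pi

r, err = 6.2, 0.2              # measured radius, assumed eyeball error
low = pi * (r - err) ** 2      # area if the radius was really 6.0 cm
high = pi * (r + err) ** 2     # area if the radius was really 6.4 cm

assert 113 < low < 114         # roughly 113.1 cm^2
assert 128 < high < 129        # roughly 128.7 cm^2
# Reporting the area to two significant figures -- about 120 cm^2 --
# honestly reflects the precision of the measurement.
```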
Another alert reader sent me a link to a YouTube video which is moderately interesting.
The video itself is really a deliberate joke, but it does demonstrate a worthwhile point. It’s about rounding.
I’m away on vacation this week, taking my kids to Disney World. Since I’m not likely to
have time to write while I’m away, I’m taking the opportunity to re-run an old classic series
of posts on numbers, which were first posted in the summer of 2006. These posts are mildly
revised.
Ω is my own personal favorite transcendental number. Ω isn’t really a specific number, but rather a family of related numbers with bizarre properties. It’s the one real transcendental number that I know of that comes from the theory of computation, that is important, and that expresses meaningful fundamental mathematical properties. It’s also deeply non-computable, meaning that not only is it non-computable, but even computing meta-information about it is non-computable. And yet, it’s almost computable. It’s just all around awfully cool.
One of the annoying things about how we write numbers is the fact that we generally write things one of two ways: as fractions, or as decimals.
You might want to ask, “Why is that annoying?” (And in fact, that’s what I want you to ask, or else there’s no point in my writing the rest of this!)
It’s annoying because fractions and decimals can both only describe rational numbers – that is, numbers that are a perfect ratio of two integers. The problem with that is that most numbers aren’t rational. Our standard notations are incapable of representing the precise values of the overwhelming majority of numbers!
But it’s even more annoying than that: if you use decimals, then there are lots of rational numbers that you can’t represent exactly (e.g., 1/3); and if you use fractions, then it’s hard to express the idea that the fraction isn’t exact. (How do you write π as a fraction? 22/7 is a standard fractional approximation, but how do you say π, which is close to, but not exactly, 22/7?)
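Both complaints are easy to demonstrate with Python’s exact-rational type (a sketch; note that Fraction(math.pi) just captures the float approximation of π, since no fraction can hold π itself):

```python
import math
from fractions import Fraction

# A fraction represents 1/3 exactly, where no finite decimal can:
third = Fraction(1, 3)
assert third * 3 == 1                    # exact
assert 0.3333333333 * 3 != 1             # a truncated decimal falls short

# But a fraction like 22/7 can only ever approximate pi:
assert abs(Fraction(22, 7) - Fraction(math.pi)) > 0
assert abs(22 / 7 - math.pi) < 0.002     # close, yet never equal
```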
Anyway. Today’s number is e, aka Euler’s number, aka the natural log base. e is a very odd number, but very fundamental. It shows up constantly, in all sorts of strange places where you wouldn’t expect it.
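Two of the classic places e shows up can be checked in a couple of lines (a sketch; the cutoffs of 10⁶ and 20 terms are arbitrary choices of mine):

```python
from math import e, factorial

# e as the limit of (1 + 1/n)^n for large n:
limit_approx = (1 + 1 / 1_000_000) ** 1_000_000
assert abs(limit_approx - e) < 1e-5

# e as the sum of the series 1/0! + 1/1! + 1/2! + ...:
series_approx = sum(1 / factorial(k) for k in range(20))
assert abs(series_approx - e) < 1e-12
```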