Today’s dose of programming pathology is a monstrosity called *Linguine*. Linguine is a language that (obviously) carries spaghetti code to an extreme. You can see the [language specification][linguine-spec], or download an [implementation with examples][linguine-impl].
It’s really a remarkably simple language. There are only 10 commands, including input and output:
1. “`x=y`”: assign the value of y to the variable x.
2. “`x+y`”: assign the value of x+y to the variable x.
3. “`x-y`”: assign the value of x-y to the variable x.
4. “`x|y`”: assign the value of x NAND y to the variable x.
5. “`x>y`”: assign the value of shifting x by y bits to x. If y is positive, then it shifts y bits right; if y is negative, then it shifts -y bits left.
6. “`x?`”: Input one character and store it in the variable x.
7. “`x$`”: Output the content of register x as a character.
8. “`x#`”: Output the content of register x as an integer.
9. “`x<y:loc`”: if x < y then jump to the program line labeled “loc”.
10. “`x~y:loc`”: if x = y then jump to the program line labeled “loc”.
Commands operate on either integer literals, or on the contents of an unlimited set of numbered registers, each of which contains an integer. A valid operand is either an integer (which identifies a register number if it appears on the left, or a literal value on the right), or an operand preceded by “*”, which performs an indirection on the value it precedes. So for example, “*3” is the contents of register 3. “**3” is the contents of the register whose number is stored in register 3, etc. Any of the operands may take any of these forms, so for example, “1~*3:**4” is a valid command.
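To make the operand rules concrete, here’s a minimal Python sketch of how an interpreter might resolve an operand. The function and variable names here are mine, not part of the spec; this is just an illustration of the indirection rule:

```python
from collections import defaultdict

# Registers are a mapping from integer register numbers to integer
# values; unset registers read as 0.
registers = defaultdict(int)

def resolve(operand):
    """Resolve an operand like '3', '*3', or '**3'.

    Each leading '*' is one level of indirection.  The result is the
    operand's value when it appears on the right of a command, or the
    target register's number when it appears on the left.
    """
    stars = len(operand) - len(operand.lstrip('*'))
    n = int(operand[stars:])
    for _ in range(stars):
        n = registers[n]   # one level of indirection per '*'
    return n

registers[3] = 7
registers[7] = 42
print(resolve('3'))    # literal 3 (or register number 3, on the left)
print(resolve('*3'))   # contents of register 3 -> 7
print(resolve('**3'))  # contents of register 7 -> 42
```

The only asymmetry between the two sides of a command is where the dereferencing stops: a bare integer on the left names a register, while on the right it’s already a value.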
A linguine statement consists of three parts: a line label, a list of commands, and a branch target, written “label[commands]target”. When executed, the line evaluates each command in the list in sequence. If it reaches the end of the list without having branched, then it will branch to the statement labeled by the branch target.
The lines of the program may appear in any order. The first statement executed will be the statement whose label is smallest. Branching to 0 ends program execution.
Comments start with a “'”, and extend to the end of the line.
All quite simple, right?
As usual, we’ll start with hello world.
1[0=72,0$,0+29,0$,0+7,0$,0$,0+3,0$,1=32,1$,0-24,0$,0+24,0$,0+3,0$,0-6,0$,0-8,0$,1+1,1$,1-23,1$]0
That’s perfectly clear, isn’t it? Actually, it’s basically the same as the hello world program we saw in brainfuck. We’ll piece it apart.
* 0=72,0$
: 72 is the ascii code for “H”. So store that in register 0. Then output register 0.
* 0+29,0$
: 101 is the ascii code for “e”. So add 29 to the contents of register 0, which will make the new value of register 0 be 101. Then output that.
* etc.
How about something a tad more interesting? Let’s look at generating the
Fibonacci sequence:
1[0=32,2=1,1#,0$,2#]2
2[1+*2,3=*1,1=*2,2=*3,0$,2#]2
* This starts with line 1:
* “`0=32`”: store the ascii code for a space in register 0.
* “`2=1`”: store the value 1 in register 2.
* “`1#`”: output the value of register 1. (Since registers contain 0 when the program starts, that prints a zero.)
* “`0$`”: output the contents of register 0 as a character, which prints a space.
* “`2#`”: print the contents of register 2 (1).
* Finally, branch to statement 2.
* Statement 2: registers 1 and 2 contain the previous two fibonacci numbers. So this adds them together to get the next one, and then rotates the values of the registers so that the newest fibonacci number is in register 2, and the one before that is in register 1.
* “`1+*2`”: add the value of register 2 to register 1.
* “`3=*1`”: store the value of register 1 in register 3.
* “`1=*2`”: store the value of register 2 in register 1.
* “`2=*3`”: store the value of register 3 in register 2.
* “`0$,2#`”: output a space, and then output the value of register 2.
* Finally, the branch instruction repeats statement 2, to keep generating fibonacci numbers.
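The register shuffle in statement 2 is just the standard Fibonacci update; a paraphrase in Python (the names are mine, and this imitates only the arithmetic, not Linguine’s semantics) makes it explicit:

```python
# Registers 1 and 2 hold consecutive Fibonacci numbers; statement 2's
# command list 1+*2, 3=*1, 1=*2, 2=*3 is exactly this update.
def fib_line(count):
    r1, r2 = 0, 1          # the values printed by statement 1
    out = [r1, r2]
    for _ in range(count):
        r1 = r1 + r2       # 1+*2
        r3 = r1            # 3=*1
        r1 = r2            # 1=*2
        r2 = r3            # 2=*3
        out.append(r2)     # 0$,2# prints a space, then register 2
    return out

print(fib_line(8))  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```

Register 3 plays the role of the temporary in the classic swap; without it, the `1=*2` assignment would clobber the sum before it could be saved.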
Here’s a nice little multiply program, which demonstrates how to write subroutines. Note that recursion doesn’t work here, because we need to use fixed register numbers. (You can set up recursion using additional levels of indirection, it’s just messy as heck.)
‘Multiply: -2 *= *-3
‘Programmed by Jeffry Johnston, 2005
‘-1=return jump, -4=temp, -5=temp
100[-4=*-3,-5=*-2,-2=0,-4<0:101]102
101[-4~0:*-1,-2-*-5,-4+1]101 '-3 is negative
102[-4~0:*-1,-2+*-5,-4-1]102 '-3 is positive
This takes the address of the instruction to return to in register -1; it takes the left operand of the multiply (which is also the target of the result) in register -2, and the right operand in register -3. Registers -4 and -5 are used for temporary storage.
* Line 100: set up, and determine whether the right operand is positive or negative.
* "`-4=*-3,-5=*-2`": Copy the contents of register -3 to register -4, and the contents of register -2 to register -5.
* "`-2=0`": Store 0 in register -2.
* "`-4<0:101`": If the contents of register -4 is less than 0, then branch to statement 101.
* If you reach the end of this line, then the contents of register -4 is non-negative, and you branch to statement 102.
* Line 101: do multiplication by repeated subtraction
* "`-4~0:*-1`": If the contents of register -4 is zero, then we're done, so return by branching to the line labeled by the contents of register -1.
* "`-2-*-5`": Subtract the contents of register -5 from the contents of register -2 (the result register).
* "`-4+1`": increment the contents of register -4 (bringing it one step closer to zero), to show that you've done one subtraction.
* Repeat the line.
* Line 102: basically exactly the same as 101, except that it does a repeated add rather than a subtract, and counts the counter down with "`-4-1`".
An example of calling this would be:
1[-1=2,-2=36,-3=13]100
2[-2#]0
This stores 36 and 13 as the parameters to the multiply subroutine, and then invokes it by branching to statement 100. The "return address" is set to statement 2 by storing 2 in register -1. When control returns to statement 2, the result of the multiplication is printed out.
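The control flow of statements 100–102 translates directly into Python. This is a paraphrase of the subroutine with my own names, not a Linguine interpreter:

```python
def multiply(left, right):
    """Mirror of the Linguine subroutine: left/right stand in for
    registers -2 and -3; r4 and r5 are the temporaries -4 and -5."""
    r4 = right          # -4=*-3
    r5 = left           # -5=*-2
    r2 = 0              # -2=0
    if r4 < 0:          # -4<0:101 -- right operand negative
        while r4 != 0:  # -4~0:*-1 returns when the counter hits 0
            r2 -= r5    # -2-*-5: repeated subtraction
            r4 += 1
    else:               # fall through to line 102 -- repeated addition
        while r4 != 0:
            r2 += r5    # -2+*-5
            r4 -= 1
    return r2

print(multiply(36, 13))   # 468
print(multiply(36, -13))  # -468
```

Note how the sign test picks the subtraction loop, exactly as `-4<0:101` does: subtracting a negative count of copies is what makes the result come out with the right sign.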
One last example, which you can puzzle out on your own. A brainfuck interpreter in 21 lines of Linguine:
'BF interpreter in Linguine
'Programmed by Jeffry Johnston, 2005
1[*-2?,*-2~33:2,-2+1]1
2[*-2=-1,-2+1]3
3[-3=**-1,-3~-1:0,-3~43:4,-3~44:6,-3~45:7,-3~46:9,-3~60:10,-3~62:11,-3~91:12,-3~93:14]15
4[*-2+1,*-2~256:5]15 '+
5[*-2=0]15
6[*-2?,*-2~-1:5]15 ',
7[*-2-1,*-2~-1:8]15 '-
8[*-2=255]15
9[*-2$]15 '.
10[-2-1]15 '<
11[-2+1]15 '>
12[*-2~0:13]15 '[
13[-3=1]16
14[-3=-1]16 ']
15[-1+1]3
16[-4=1]17
17[-1+*-3,*-1~91:18,*-1~93:19]20
18[-4+*-3]20
19[-4-*-3]20
20[-4~0:21]17
21[-3~1:15]3
[linguine-spec]: http://kidsquid.com/files/linguine/README
[linguine-impl]: http://kidsquid.com/files/linguine/linguine.tar.gz
Topological Spaces
Yesterday, I introduced the idea of a *metric space*, and then used it to define *open* and *closed* sets in the space. (And of course, being a bozo, I managed to include a typo that made the definition of open sets equivalent to the definition of closed sets. It’s been corrected, but if you’re not familiar with this stuff, you might want to go back and take a look at the corrected version. It’s just replacing a ≤ with a <, but that makes a *big* difference in meaning!)
Today I’m going to explain what a *topological space* is, and what *continuity* means in topology. (For another take on continuity, head on over to [Antopology][antopology] where Ofer has posted his explanation.)
A *topological space* is a set **X** and a collection **T** of subsets of **X**, where the following conditions hold:
1. ∅ ∈ **T** and **X** ∈ **T**.
2. ∀ C ∈ ℘(**T**) : ∪(c ∈ C) c ∈ **T**. (That is, the union of any collection of members of **T** is an element of **T**.)
3. ∀ s, t ∈ **T** : s ∩ t ∈ **T**. (The intersection of any two members of **T** is also in **T**.)
The collection **T** is called a *topology* on **X**. The *members* of **T** are the *open sets* of the topology. The *closed sets* are the set complements of the members of **T**. Finally, the *elements* of the topological space **X** are called *points*.
The connection to metric spaces should be pretty obvious. The way we built up open and closed sets over a metric space can be used to produce topologies. The properties we worked out for the open and closed sets are exactly the properties that are *required* of the open and closed sets of the topology. There are many ways to build a topology other than starting with a metric space, but that’s definitely the easiest way.
One of the most important ideas in topology is the notion of *continuity*. In some sense, it’s *the* fundamental abstraction of topology. We can now define it.
A *function* f from topological space **X** to topological space **U** is *continuous* if/f for every open set C of **U**, the *inverse image* of f on C is an open set of **X**. The inverse image is the set of points x in **X** where f(x) ∈ C.
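Writing **T**_X and **T**_U for the topologies of the two spaces, the definition in symbols is:

```latex
f : X \to U \text{ is continuous} \iff
  \forall C \in \mathbf{T}_U : f^{-1}(C) \in \mathbf{T}_X,
\qquad \text{where } f^{-1}(C) = \{\, x \in X \mid f(x) \in C \,\}.
```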
That’s a bit difficult to grasp. What it’s really capturing is that there are no *gaps* in the function. If there were a gap, then the open spaces would no longer be open. Think of the metric spaces idea of open sets. Imagine an open set with a cube cut out of the middle. It’s definitely not continuous. If you took a function on that open set, and its inverse image was the set with the cube cut out, then the function is not smoothly mapping from the open set to the other topological space. It’s mapping *part of* the open set, leaving a big ugly gap.
Now, here’s where it gets kind of nifty. The set of topological spaces and continuous functions forms a *category*, with the spaces as objects and the functions as morphisms. We call this category **Top**. It’s often easiest to talk about topological spaces using the constructs of category theory.
So, for example, one of the most important ideas in topology is *homeomorphism*. A homeomorphism is a bicontinuous bijection (a bicontinuous function is a continuous function with a continuous inverse; a bijection is a bidirectional total function between sets). A homeomorphism between topological spaces is an *isomorphism* in **Top**.
From the perspective of topology, any two topological spaces with a homeomorphism between them are *identical*. (Which ends up corresponding exactly to how we defined the idea of *equality* in category theory.)
[antopology]: http://antopology.blogspot.com/2006/08/continuity-introduced.html
Metric Spaces
Topology usually starts with the idea of a metric space. A metric space is a set of values with some concept of distance. We need to define that first, before we can get into anything really interesting.
Metric Spaces and Distance
What does distance mean?
Let’s look at a set, S, consisting of elements s₁, s₂, s₃, …, sₙ. What does it mean to measure a distance from sᵢ to sⱼ?
We’ll start by looking at a simple number-line, with the set of real numbers. What’s the distance between two numbers x and y? It’s a measure of how far over the number-line you have to go to get from x to y. But that’s really circular; we’ve defined distance as “how far you have to go”, which is defined by distance. Let’s try again. Take a blank ruler, and put it next to the number-line, so that the left edge of the ruler is on x, and then draw a mark on the ruler where it touches y. The length of the ruler up to that mark is the distance from x to y. The reason that this one isn’t circular is because now, you can take that ruler, and use it to answer the question: is the number v farther from the number w than x is from y? Because the ruler gives you a *metric* that you can use *that is separate from* the number-line itself.
A metric over S is a function that takes two elements sᵢ and sⱼ, and returns a *real number* which measures the distance between those two elements. To be a proper metric, it needs to have a set of required properties. To be formal, a function d : S × S → ℜ is a *metric function over S* if/f:
- ∀ sᵢ, sⱼ ∈ S: d(sᵢ,sⱼ) = 0 if/f i=j. (the identity property)
- ∀ sᵢ, sⱼ ∈ S: d(sᵢ,sⱼ) = d(sⱼ,sᵢ) (the symmetry property)
- ∀ sᵢ, sⱼ, sₖ ∈ S: d(sᵢ,sₖ) ≤ d(sᵢ,sⱼ) + d(sⱼ,sₖ) (the triangle inequality property)
Some people also add a fourth property, called the non-negativity property; I prefer to leave it out, because it can be inferred from the others. But for completeness, here it is: ∀ sᵢ, sⱼ ∈ S: d(sᵢ,sⱼ) ≥ 0.
A metric space is just the pair (S,d) of a set S, and a metric function d over the set.
For example:
- The real numbers are a metric space with the ruler-metric function. You can easily verify that the properties of a metric function all work with the ruler-metric; in fact, they are all things that you can easily check with a ruler and a number-line, to see that they work. The function that you’re creating with the ruler is d(x,y) = |x-y| (the absolute value of x - y). So the ruler-metric distance from 1 to 3 is 2.
- A cartesian plane is a metric space whose distance function is the euclidean distance: d((ax,ay), (bx,by)) = ((ax-bx)² + (ay-by)²)^(1/2).
- In fact, for every n, euclidean n-space is a metric space using the euclidean distance.
- A checkerboard is a metric space if you use the number of kings moves as the distance function.
- The Manhattan street grid is a metric space where the distance function between two intersections is the sum of the number of horizontal blocks and the number of vertical blocks between them.
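A quick way to build intuition for these examples is to spot-check the three metric properties numerically. Here’s a sketch in Python (the helper names are mine; sampling a finite set of points is evidence, not a proof):

```python
import itertools
import math

def is_metric(d, points):
    """Spot-check the metric axioms on a finite sample of points."""
    for x, y, z in itertools.product(points, repeat=3):
        if (d(x, y) == 0) != (x == y):      # identity property
            return False
        if d(x, y) != d(y, x):              # symmetry property
            return False
        if d(x, z) > d(x, y) + d(y, z):     # triangle inequality
            return False
    return True

ruler     = lambda x, y: abs(x - y)
euclidean = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
manhattan = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])
kings     = lambda a, b: max(abs(a[0] - b[0]), abs(a[1] - b[1]))  # checkerboard

grid = [(0, 0), (1, 2), (3, 1), (2, 2)]
print(is_metric(ruler, [-2, 0, 1, 3.5]))                               # True
print(all(is_metric(d, grid) for d in (euclidean, manhattan, kings)))  # True
```

The kings-move distance is the max of the horizontal and vertical offsets, since a king covers one of each per move; the Manhattan distance is their sum, since you can’t move diagonally through a city block.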
Open and Closed Sets in Metric Spaces
You can start moving from metric spaces to topological spaces by looking at open sets. Take a metric space, (S,d), and a point p∈S. An open ball B(p,r) (a ball of radius r around point p) in S is the set of points x such that d(p,x) < r. An *open set* in the metric space is a set O where every point p∈O has some r > 0 such that B(p,r)⊆O. A point p is *adherent* to a subset T of S if/f for all r > 0, B(p,r)∩T≠∅. The closure of T (usually written as T with a horizontal line over it; sometimes written as T* by computer scientists, because that’s the closure notation in many CS subjects) is the set of all points adherent to T. *(Note: a typo was corrected in this paragraph. Thanks to the commenter who caught it!)*
A subset T of S is called a closed subset if/f T = T̄, that is, if T equals its own closure. Intuitively, T is closed if it *contains* the surface that forms its boundary. So in 3-space, a solid sphere is a closed space. The contents of the sphere (think of the shape formed by the air in a spherical balloon) is not a closed space; it’s bounded by a surface, but that surface is not part of the space.
Introducing Topology
Back when GM/BM first moved to ScienceBlogs, we were in the middle of a poll about the next goodmath topic for me to write about. At the time, the vote was narrowly in favor of topology, with graph theory as a very close second.
We’re pretty much done with category theory, so it’s topology time!
So what’s topology about? In some sense, it’s about the fundamental abstraction of *continuity*: if I have a bunch of points that form a continuous line or surface, what does that really mean? In particular, what does it mean *from within* the continuous surface?
Another way of looking at it is as the study of what kinds of *structures* are formed from continuous sets of points. This viewpoint makes much of topology look a lot like category theory: a study of mathematical structures, what they mean, and how we can build them and create mappings between them.
Let’s take a quick look at an example. There’s a famous joke about topologists: you can always recognize a topologist at breakfast, because they’re the people who can’t tell the difference between their coffee mug and their donut.
It’s not just a joke; there’s a real example hidden in there. From the viewpoint of topology, the coffee mug and the donut *are the same shape*. They’re both toruses. In topology, the exact shape doesn’t matter: what matters is the basic continuities of the surface: what is *connected* to what, and *how* they are connected. In the following diagram, all three shapes are *topologically* identical:
If you turn the coffee mug into clay, you can remold it from mug-shape to donut-shape *without tearing or breaking*. Just squishing and stretching. So in topology, they *are* the same shape. On the other hand, a sphere is different: you can’t turn a donut into a sphere without tearing a hole in it. If you’ve got a sphere and you want to turn it into a torus, you can either flatten it and punch a hole in the middle; or you can roll it into a cylinder, punch holes in the ends to create a tube, and then close the tube into a circle. And you can’t turn a torus into a sphere without tearing it: you need to break the circle of the torus and then close the ends to create a sphere. In either case, you’re tearing at least one hole in what was formerly a continuous surface.
Topology was one of the hottest mathematical topics of the 20th century, and as a result, it naturally has a lot of subfields. A few examples include:
1. **Metric topology**: the study of *distance* in different spaces. The measure of distance and related concepts like angles in different topologies.
2. **Algebraic topology**: the study of topologies using the tools of abstract algebra. In particular, studies of things like how to construct a complex space from simpler ones. Category theory is largely based on concepts that originated in algebraic topology.
3. **Geometric topology**: the study of manifolds and their embeddings. In general, geometric topology looks at lower-dimensional structures, mostly either two or three dimensional. (A manifold is an abstract space where every point is in a region that appears to be euclidean if you only look at the local neighborhood. But on a larger scale, the euclidean properties may disappear.)
4. **Network topology**: topology in the realm of discrete math. Network topologies are graphs (in the graph theory sense) consisting of nodes and edges.
5. **Differential Topology**: the study of differential equations in topological spaces that have the properties necessary to make calculus work.
Personally, I find metric topology rather dull, and differential topology incomprehensible. Network topology more properly belongs in a discussion of graph theory, which is something I want to write about sometime. So I’ll give you a passing glance at metric topology to see what it’s all about, and algebraic topology is where I’ll spend most of my time.
One of the GM/BM readers, Ofer Ron (aka ParanoidMarvin) is starting a new blog, called [Antopology][antopology] where he’ll be discussing topology, and we’re going to be tag-teaming our way through the introductions. Ofer specializes in geometric topology (knot theory in particular, if I’m not mistaken), so you can get your dose of geometric topology from him.
[antopology]: http://antopology.blogspot.com/
A Stunning Demonstration of Why Good Science Needs Good Math
Everyone in scientific circles is abuzz with the big news: there’s proof that dark matter exists! The paper from the scientists who made the discovery is [here][dark-matter-paper]; and Sean Carroll (no relation) has [a very good explanation on his blog, Cosmic Variance][cv]. This discovery happens to work as a great example of just why good science needs good math.
As I always say, one of the ways to recognize a crackpot theory in physics is by the lack of math. For an example, you can look at the [electric universe][electric] folks. They have a theory, and they make predictions: but because there’s *no* math, the predictions are vague, and there’s no good way of *really* testing them, because there’s no quantifiable way of making a precise prediction – because there’s no math. So they can make predictions like “the stardust experiment will get bigger particles than they expect”; but they can’t tell you *how* big.
The dark matter result is a beautiful example of how to use good math in science.
Here’s the basic idea. The theory says that there are two kinds of matter: “dark” matter, and “light” matter. Dark matter only interacts with light matter via gravity; it does not interact with light matter via other forces. But dark matter and light matter generally clump in the same places – because gravity pulls them together. So it’s very difficult to really prove that dark matter exists – because you can’t see it directly, and it normally only appears with light matter, so you can’t really prove that the dark matter is there: any observed effect *might* be caused by the light matter behaving in a way different than our current theories claim it should.
But what if you could somehow sweep the light matter away?
What the scientists who did this work found is a collision of two galactic clusters. When these clusters collided, the light matter, mostly in the form of gas, interacted very strongly with one another, creating a shock wave pattern. But the *dark* matter passed through without interacting – the “collision” between the gas clouds didn’t affect the dark matter. So the gas clouds were swept back, while the dark matter continued moving. There’s a [great animation][animate] illustrating this; in the animation, the blue is the dark matter; the red is the light matter. As the two clusters pass through each other, the light matter is swept away by the electromagnetic interactions between the gas clouds; the dark matter passes right through.
Here’s where the math comes in.
They used a combination of optical and X-ray telescopes to produce maps of the gravitational fields of the clusters. This was done by computing the gravitational lensing effect distorting the images of other, more distant galaxies visible *behind* the collided clusters. By carefully computing the distortion caused by gravitational lensing, they were able to determine the distribution of *mass* in the collided clusters. And what they found was that the bulk of the mass was *not* in the light matter. It was in the places where the centers of gravity of the clusters would have been *without* the shock-wave effects of the collision. So the bulk of the mass of these two clusters does not appear on our telescope images; but it behaves exactly as the math predicts it would if it were dark matter.
The prediction and the result are both based on very careful computations based on the *mathematical* predictions of gravity and relativity. They were able to predict precisely what they would expect from the interaction, using a mathematical model of how the gas clouds would interact to be swept away, and of how the dark matter would interact gravitationally, to predict where the dark matter masses should be. Then they were able, via a *separate* computation, to determine how much mass was in what location based on gravitational lensing. And finally, they were able to compare the two *separately computed* results to see if the reality matched the prediction.
Now *that* is both good math and good science! And the science could *not* have been done without the math.
[dark-matter-paper]: http://chandra.harvard.edu/photo/2006/1e0657/media/paper.pdf
[cv]: http://cosmicvariance.com/2006/08/21/dark-matter-exists/
[electric]: http://www.holoscience.com/
[animate]: http://chandra.harvard.edu/photo/2006/1e0657/media/bullet.mpg
A Glance at Hyperreal Numbers
Since we talked about the surreals, I thought it would be interesting to take a *very* brief look at an alternative system that also provides a way of looking at infinites and infinitesimals: the *hyperreal* numbers.
The hyperreal numbers are not a construction like the surreals; instead they’re defined by axiom. The basic idea is that for the normal real numbers, there are a set of basic statements that we can make – statements of first order logic; and there is a basic structure of the set: it’s an *ordered field*.
Hyperreals add the “number” ω, the size of the set of natural numbers, so that you can construct numbers using ω, like ω+1, 1/ω, etc.; but they are constrained by axiom so that the set of hyperreals is *still* an ordered field, and all statements that are true in first-order predicate logic over the reals are true in first-order predicate logic over the hyperreals.
For notation, we write the real field ℜ, and the hyperreal field ℜ*.
The Structure of Reals and Hyperreals: What is an ordered field?
——————————————————————-
If you’ve been a long-time reader of GM/BM, you’ll remember the discussion of group theory. If not, you might want to take a look at it; there’s a link in the sidebar.
A field is a commutative ring, where the multiplicative identity and the additive identity are not equal; where all numbers have an additive inverse, and all numbers except 0 have a multiplicative inverse.
Of course, for most people, that statement is completely worthless.
In abstract algebra, we study things about the basic structure of the sets where algebra works. The most basic structure is a *group*. A group is basically a set of values with a single operation, “×”, called the *group operator*. The “×” operation is *closed* over the set, meaning that for any values x and y in the set, x × y produces a value that is also in the set. The group operator also must be associative – that is, for any values x, y, and z, x×(y×z) = (x×y)×z. The set contains an *identity element* for the group, generally written “1”, which has the property that for every value x in the group, “x×1=x”. And finally, for any value x in the set, there must be a value x⁻¹ such that x×x⁻¹=1. We often write a group as (G,×) where G is the set of values, and × is the group operator.
So, for example, the integers with the “+” operation form a group, (Z,+). The real numbers *almost* form a group with multiplication, except that “0” has no inverse. If you take the real numbers without 0, then you get a group.
If the group operator is also commutative (x×y = y×x for all x and y), then it’s called an *abelian group*. The integers with “+” form an abelian group.
A *ring* (R,+,×) is a set with two operations, where (R,+) is an abelian group, × is associative and has an identity element, and × distributes over +. If × is commutative, then the ring is called a *commutative* ring.
A *field* (F,+,×) is a commutative ring with two operators “+” and “×”; where the identity value for “+” is written 0, and the identity for “×” is written 1; all values have additive inverses, all values except 0 have multiplicative inverses; and 0 ≠ 1. A *subfield* (S,+,×) of a field (F,+,×) is a field with the same operations as F, and where its set of values is a subset of the values of F.
*Finally*, an *ordered* field is a field with a total order “≤”: for any two values x and y, either “x ≤ y” or “y ≤ x”, and if x ≤ y ∧ y ≤ x then x=y. The total order must also respect the two operations: if a ≤ b, then a + x ≤ b + x; and if 0 ≤ a and 0 ≤ b then 0 ≤ a×b.
The real numbers are the canonical example of an ordered field.
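These axioms are easy to spot-check mechanically over the rationals (an ordered subfield of the reals). Here’s a sketch in Python using exact arithmetic; the helper name is mine, and checking a finite sample is of course evidence, not a proof:

```python
from fractions import Fraction
from itertools import product

def check_ordered_field(values):
    """Spot-check the ordered-field axioms on a finite sample."""
    for a, b, c in product(values, repeat=3):
        assert a + b == b + a and a * b == b * a   # commutativity
        assert (a + b) + c == a + (b + c)          # associativity of +
        assert a * (b + c) == a * b + a * c        # distributivity
        assert a + (-a) == 0                       # additive inverse
        if a != 0:
            assert a * (1 / a) == 1                # multiplicative inverse
        if a <= b:
            assert a + c <= b + c                  # order respects +
        if 0 <= a and 0 <= b:
            assert 0 <= a * b                      # order respects ×
    return True

sample = [Fraction(n, d) for n, d in [(-3, 2), (-1, 1), (0, 1), (2, 3), (5, 1)]]
print(check_ordered_field(sample))  # True
```

Using `Fraction` rather than floats matters here: floating-point addition isn’t even associative, so floats don’t actually satisfy the field axioms exactly.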
*(The definitions above were corrected to remove several errors pointed out in the comments by readers “Dave Glasser” and “billb”. As usual, thanks for the corrections!)*
One of the things we need to ensure for the hyperreal numbers to work is that they form an ordered field; and that the real numbers are an ordered subfield of the hyperreals.
The Transfer Principle
————————
To do the axiomatic definition of the hyperreal numbers, we need something called *the transfer principle*. I’m not going to go into the full details of the transfer principle, because it’s not a simple thing to fully define it, and prove that it works. But the intuitive idea of it isn’t hard.
What the transfer principle says is: *For any **first order** statement L that’s true for the ordered field of real numbers, L is also true for the ordered field of hyperreal numbers*.
So for example: ∀ x ∈ ℜ, ∃ y ∈ ℜ : x ≤ y. Therefore, for any hyperreal number x ∈ ℜ*, ∃ y ∈ ℜ* : x ≤ y.
Defining the Hyperreal Numbers
——————————–
To define the hyperreal numbers so that they do form an ordered field with the right properties, we need to do two things:
1. Define at least one hyperreal number that is not a real number.
2. Show that the *transfer principle* applies.
So we define ω as a hyperreal number such that ∀ r ∈ ℜ, r < ω.
What we *should* do next is prove that the transfer principle applies. But that’s well beyond the scope of this post.
What we end up with is very similar to what we have with the surreal numbers. We have infinitely large numbers. And because of the transfer principle, since there’s a successor for any real number, that means that there’s a successor for ω, so there is an ω+1. Since multiplication works (by transfer), there is a number 2×ω. Since the hyperreals are a field, ω has a multiplicative inverse, the infinitesimal 1/ω, and an additive inverse, -ω.
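Assuming the transfer principle holds, the ordering facts carry over from the reals; for example:

```latex
\forall r \in \Re :\; r < \omega < \omega + 1 < 2\times\omega,
\qquad 0 < \tfrac{1}{\omega} < s \ \text{ for every positive } s \in \Re .
```

The first chain follows because ω exceeds every real and ω > 1; the second because ω > 1/s for any positive real s, and the order respects multiplication.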
There is, of course, a catch. Not quite everything can transfer from ℜ to ℜ*. We are constrained to *first order* statements. What that means is that we are limited to simple direct statements; we can’t make statements that are quantified over other statements.
So for example, we can say that for any real number N, the series 1, 1+1, 1+1+1, 1+1+1+1, … will eventually reach a point where every element after that point is larger than N.
But that’s not a first order statement. The *series* 1, 1+1, 1+1+1, … is a *second order* statement: it isn’t talking about a simple single statement. It’s talking about a *series* of statements. So the transfer principle fails.
That does end up being a fairly serious limit. There are a lot of things that you can’t say using first-order statements. But in exchange for that limitation, you get the ability to talk about infinitely large and infinitely small values, which can make some problems *much* easier to understand.
Arithmetic with Surreal Numbers
Last Thursday, I introduced the construction of John Conway’s beautiful surreal numbers. Today I’m going to show you how to do arithmetic using surreals. It’s really quite amazing how it all works! If you haven’t read the original post introducing surreals, you’ll definitely want to [go back and read that][surreals] before looking at this post!
Random Quotes
I figured it was time I did the latest random thing to be wandering its way around Scienceblogs. [Janet has introduced the “random quotes” meme][janet], in which we’re supposed to go wandering through the [quotes here][quotes], and pick the first five that reflect “who you are or what you believe”.
1. “Human beings are perhaps never more frightening than when they are convinced beyond doubt that they are right.”, Laurens Van der Post, The Lost World of the Kalahari (1958). Could any quote possibly be more true?
2. “He that would make his own liberty secure, must guard even his enemy from oppression; for if he violates this duty, he establishes a precedent that will reach to himself.”, Thomas Paine (1737 – 1809). Given recent events in this country, this is particularly apropos.
3. “We are here to change the world with small acts of thoughtfulness done daily rather than with one great breakthrough.”, Rabbi Harold Kushner. I’ve actually taken a class with Rabbi Kushner, and it was wonderful; this quote to me sums up a big part of how I try to live my life.
4. “Science is facts; just as houses are made of stones, so is science made of facts; but a pile of stones is not a house and a collection of facts is not necessarily science.”, Henri Poincare (1854 – 1912). I couldn’t possibly pick quotes for this blog without quoting a mathematician. Considering what I do on this blog, this one is quite appropriate for me. So many of the creationist screeds that I criticize are based on collections of actual facts put together in stupid ways that turn them into garbage instead of science.
5. “The art of dining well is no slight art, the pleasure not a slight pleasure.”, Michel de Montaigne (1533 – 1592). Yes, I’m a foodie. Maybe someday I’ll post a recipe or two. I used to have a website with the recipes from my Y2K New Years Eve party, but it got lost when I switched ISPs and forgot to copy it.
[janet]: http://scienceblogs.com/ethicsandscience/2006/08/random_quotations_meme.php
[quotes]: http://www.quotationspage.com/random.php3
A Crank Responds: Georgie-Boy and his "Scientific Proof of God"
Remember my post several weeks ago about [“The First Scientific Proof of God”?][georgie] The author, Georgie-boy Shollenberger popped up [in the comments yesterday][georgie-comments], and posted [a response][georgie-responds] on his blog.
This is how he describes this blog:
>This website is an example of how some math teachers are thinking and teaching
>your children. In general, this website is a Good Math, Bad Math web. On this
>web, debunking creationism is listed under the bad math category. So, your
>children are most likely taught by atheists. Is this what parents want?
If this blog is indeed an example of how math teachers are thinking and teaching, then I’m very flattered. I’m not a teacher, but I would very much *like* to be one; if my writing works well for teaching math, then that means I’m doing a good job.
But the second part: does “debunking creationism” imply atheism? For a guy who purports to have made the greatest discovery in the history of the world, and who claims to show why both math and logic need to be reexamined from their very roots, this is a pathetic argument.
First: Creationism is *not* the only possible theistic belief system.
Second: Creationism is *bad* theism. It consists of throwing away math, logic, science, and reason all in an effort to support a bizarre and unreasonable interpretation of one poorly translated religious text.
Third: I’m not an atheist.
He follows that intro with an interesting non sequitur:
>Mathematicians review my suggestion to restudy mathematics. First, they do not
>believe that humans might be living on other planets. You might agree with them
>but my scientific proof requires other planets to maintain human life
>eternally. Apparently, the reviewers believe that the evening stars are merely
>lights as the ancients thought. How mindless. When seeking the effects of a
>proven God, planet earth is not the first planet that has humans and will not
>be the last planet that has humans.
Fascinating thought process there, huh? I criticize him for sloppy mathematical arguments, and therefore “I do not believe that humans might be living on other planets”, and I “believe that the evening stars are merely lights”.
As a matter of fact, I *don’t* believe that there are humans living on other planets. But how can one conclude from my criticism of his math that I think “evening stars are merely lights”? (Or that I believe that humans don’t live on other planets, for that matter? Just because I *do* believe that humans don’t live on other planets doesn’t mean you can conclude that from my criticism of his sloppy math!)
>… But, the author gripes because my book must
>be purchased to determine what I say. Yet, mathematicians make and sell books
>regularly.
Yes, mathematicians make and sell books. But I’ve yet to see a major mathematical discovery that you could *only* read if you were willing to pay the author.
For example, the other day, I wrote about Grigory Perelman’s proof of the Poincare conjecture. It’s available online for anyone who wants it:
* The entropy formula for the Ricci flow and its geometric applications
* Ricci flow with surgery on three-manifolds
* Finite extinction time for the solutions to the Ricci flow on certain three-manifolds
Or Conway’s surreal numbers? Yes, he wrote an [excellent book][onag] on them. He also made information on them widely available to all sorts of people. He showed them to Don Knuth, who wrote [the first book][knuth-book] on them. There’ve been articles on them all over – from Martin Gardner in Scientific American to random people on personal websites. He didn’t demand that everyone give him money to see his work.
How about Einstein? He published relativity in a physics journal called “[Annalen der Physik][annalen]”. At the time, there was nothing like the internet, and scientists pretty much always published in journals (as they continue to do today). Annalen does *not* pay the authors of papers; it’s available in *every* major university library; and you are permitted to make personal copies of articles from it for academic use.
Mathematicians and scientists publish articles and books – but we don’t expect (or even particularly want) to be able to restrict access to our ideas *only* to people willing to give us money to see them.
Georgie-boy doesn’t do anything like that. If you want to see his wonderful, world-changing proof, you have to pay him for it.
Finally, he gets around to addressing my criticism of his *math*.
>The author focuses on the concept of infinite, but does not seem to understand
>the great mathematician, Georg Cantor, who discovered the transfinite numbers.
>Instead, the author (1) plays with Aristotle’s potential infinity, which Cantor
>calls the mathematical or bad infinity, (2) plays with ‘infinity by division,’
>which is a verb that defined the atom for the ancients atomists, (3) plays with
>’infinity by addition,’ which applies to Cantor’s transfinite numbers, and (4)
>plays with surreal numbers in which infinity becomes a real number. I would
>throw John Conway’s surreal numbers into the circle file. Then, the author
>charges me with saying that God is a number infinity. At no time have I ever
>gave God a number because. God is not a number. God’s oneness opposes the
>universes’ manyness and thus precedes all finite and infinite numbers that will
>ever be found in the universe.
Why did I talk about Aristotle’s potential infinity? Because Georgie-boy *specifically claims that mathematicians use Aristotle’s infinity*. Infinity by addition and infinity by division are the two forms of infinity discussed by Aristotle. The horror! The horror! I actually *criticized* Georgie-boy by *addressing his arguments*! Oh, will my unfairness and unreasonability never cease?!
Oh, and why would he throw Conway’s surreals in the trash? Who knows? It’s particularly interesting the way that he juxtaposes Cantor and the transfinite numbers in defense of his ideas, while tossing Conway and the surreals into the trash. Because, you see, the surreals are based *on the same concept of ordinals* as the transfinite numbers. *(Note: the previous paragraph originally had a typo; where it currently says “transfinite numbers”, I originally repeated “surreal numbers”. Thanks to commenter “Noodle” for the catch.)*
>My suggestion to restudy mathematics is a serious matter because I discovered
>the first scientific proof of God. I conclude that this discovery has vast
>potentials in mathematics and all sciences. With this proof, big changes can be
>expected.
Yes, his theory has *vast potential*. It’s going to change the world! It’s going to revolutionize all of math and science! And all you need to do to learn about it is: **buy his book**! Because he won’t tell you about it otherwise.
>… For instance, Cantor’s transfinite numbers must be developed by our
>mathematicians so we can understand the universe’s atoms, the cosmology of God,
>and the cells of all the living bodies. Unfortunately, the atheistic
>mathematicianc believe that we live only in world of numbers. The theory of God
>will not go away during the life of any person. Today’s mathematicians have a
>choice to work with 85% of the people in the USA who believe in God. On the
>other hand, they can live privately ‘in their box of finites.’ I hope to
>convince ‘the majority’ in the USA that the field of mathematics is falling
>apart and must thus be reformed but also expanded considerably.
Yeah, we need to start studying transfinite numbers, because *nobody* has been studying anything like that. (Except, of course, for thousands of number theorists.)
And we need to stop being atheists (even when we aren’t), because the existence of god means, ummm, well, ummm…. Not very much in terms of math?
And mathematics is falling apart! Sure, we’ve managed to accomplish trivial little things like proving the Poincare conjecture and Fermat’s last theorem, and characterizing the fundamental limits of mathematics; but silly things like that mean *nothing*. Mathematics is falling apart! Who can save us?!
Why, nothing can save us except Georgie-boy!
As long as we send him some cash.
[georgie]: http://scienceblogs.com/goodmath/2006/07/restudying_math_in_light_of_th.php
[georgie-comments]:http://scienceblogs.com/goodmath/2006/07/restudying_math_in_light_of_th.php#comment-194071
[georgie-responds]: http://georgeshollenberger.blogspot.com/2006/08/what-mathematicians-are-teaching-your.html
[onag]: http://rcm.amazon.com/e/cm?t=goodmathbadma-20&o=1&p=8&l=as1&asins=1568811276&fc1=000000&IS2=1<1=_blank&lc1=0000ff&bc1=000000&bg1=ffffff&f=ifr
[knuth-book]: http://rcm.amazon.com/e/cm?t=goodmathbadma-20&o=1&p=8&l=as1&asins=0201038129&fc1=000000&IS2=1<1=_blank&lc1=0000ff&bc1=000000&bg1=ffffff&f=ifr
[annalen]: http://www.wiley-vch.de/publish/en/journals/alphabeticIndex/2257/
Beautiful Insanity: Pathological Programming in SNUSP
Today’s programming language insanity is a real winner. It’s a language called SNUSP. You can find the language specification [here][snuspspec], a [compiler][snuspcomp], and [an interpreter embedded in a web page][snuspinterp]. It’s sort of like a cross between [Befunge][befunge] and [Brainfuck][brainfuck], except that it also allows subroutines. (And in a variant, *threads*!) The real beauty of SNUSP is its beauty: that is, programs in SNUSP are actually really quite pretty, and watching them run can be positively entrancing.
SNUSP keeps its data on a tape, like Brainfuck. The basic instructions are very Brainfuck-like:
1. “<” move the data tape pointer one cell to the left.
2. “>” move the data tape pointer one cell to the right.
3. “+” add one to the value in the current data tape cell.
4. “-” subtract one from the value of the current data tape cell.
5. “,” read a byte from stdin to the current tape cell.
6. “.” write a byte from the current tape cell to stdout.
7. “!” skip the next instruction.
8. “?” skip the next instruction if the current tape cell contains zero.
Then there’s the two-dimensional control flow. There aren’t many instructions here.
1. “/” bounce the current control flow direction as if the “/” were a mirror: if the program is flowing up, switch to right; if it’s flowing down, switch to left; if it’s flowing left, switch to down; and if it’s flowing right, switch to up.
2. “\” bounce the other way; also just like a mirror.
3. “=” noop for drawing a path flowing left/right.
4. “|” noop for drawing a path flowing up/down.
So far, we’ve pretty much got a straightforward mix between Brainfuck and Befunge. Here’s where it becomes particularly twisted. It also has two more
instructions for handling subroutines. There’s a call stack which records pairs of location and direction, and the two instructions work with that:
1. “@” means push the current program location and the current direction onto the stack.
2. “#” means pop the top of the stack, set the location and direction, and *skip* one cell. If there is nothing on the stack, then exit (end program).
Finally, the program execution starts out wherever there is a “$”, moving to the right.
So, for example, here’s a program that reads a number and then prints it out twice:
```
          /==!/===.===#
          |   |
$====,===@/==@/====#
```
So, it starts at the “$” flowing right. It then gets to the “,”, and reads a value
into the current tape cell. It hits the first “@”, records the location and direction on the stack. Then it hits the “/” mirror, and goes up until it hits the “/” mirror, and turns right. It gets to the “!” and skips over the “/” mirror, then the “.” prints, and the “#” pops the stack. So it returns to the
first “@”, skips over the “/” mirror, and gets to the second “@”, which pushes the stack, etc.
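To make the execution model concrete, here’s a rough interpreter for the instruction set above, written as a Python sketch. The code and all its names are mine, it reflects my reading of the spec, it ignores the threaded variant, and it treats running off the playfield as program exit; the `ECHO_TWICE` constant is the read-and-print-twice program from above.

```python
from collections import defaultdict

# Directions as (row, col) deltas.
UP, DOWN, LEFT, RIGHT = (-1, 0), (1, 0), (0, -1), (0, 1)

# How the two mirrors bend each direction of travel.
SLASH = {RIGHT: UP, UP: RIGHT, LEFT: DOWN, DOWN: LEFT}      # "/"
BACKSLASH = {RIGHT: DOWN, DOWN: RIGHT, LEFT: UP, UP: LEFT}  # "\"

def run_snusp(source, input_chars=""):
    """Run a SNUSP program, feeding it input_chars; return its output."""
    grid = source.splitlines()
    inp = list(input_chars)
    out = []
    tape = defaultdict(int)
    ptr = 0
    stack = []  # call stack of (position, direction) pairs

    # Execution starts at "$" (or the top-left corner), flowing right.
    pos, dirn = (0, 0), RIGHT
    for r, line in enumerate(grid):
        c = line.find("$")
        if c >= 0:
            pos = (r, c)
            break

    def cell(p):
        r, c = p
        if 0 <= r < len(grid) and 0 <= c < len(grid[r]):
            return grid[r][c]
        return None  # fell off the playfield

    def step(p, d):
        return (p[0] + d[0], p[1] + d[1])

    while True:
        ch = cell(pos)
        if ch is None:
            break  # running off the grid ends the program
        skip = False
        if ch == ">":    ptr += 1
        elif ch == "<":  ptr -= 1
        elif ch == "+":  tape[ptr] += 1
        elif ch == "-":  tape[ptr] -= 1
        elif ch == ",":  tape[ptr] = ord(inp.pop(0)) if inp else 0
        elif ch == ".":  out.append(chr(tape[ptr] % 256))
        elif ch == "!":  skip = True
        elif ch == "?":  skip = tape[ptr] == 0
        elif ch == "/":  dirn = SLASH[dirn]
        elif ch == "\\": dirn = BACKSLASH[dirn]
        elif ch == "@":  stack.append((pos, dirn))
        elif ch == "#":
            if not stack:
                break  # "#" with an empty stack ends the program
            pos, dirn = stack.pop()
            skip = True  # returning skips one cell past the "@"
        pos = step(pos, dirn)
        if skip:
            pos = step(pos, dirn)
    return "".join(out)

# The read-one-character, print-it-twice example from above:
ECHO_TWICE = (
    "          /==!/===.===#\n"
    "          |   |\n"
    "$====,===@/==@/====#"
)
```

With this sketch, `run_snusp(ECHO_TWICE, "A")` returns `"AA"`, which matches the walkthrough of the example.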
Here’s a simple subroutine for adding 48 to a cell:
```
=++++++++++++++++++++++++++++++++++++++++++++++++#
```
Or a slight variant:
```
/=+++++++++++++++++++++++++\
|  #+++++++++++++++++++++++/
|
$
```
Or (copying from the language manual), how about this one? This one starts to give you an idea of what I like about this bugger; the programs just *look* cool. Writing a program in SNUSP can be as much art as it is programming.
```
#//
$===!++++
/++++/
/=++++
!///
```
One last +48 subroutine,
```
  123 4
/=@@@+@+++++#
|
$
```
This last one is very clever, so I’ll walk through it. The “123 4” on the top is a comment; any character that isn’t an instruction is ignored. The digits are there to label the “@”s for my explanation.
The program flows right, and hits @1: push the location and direction (stack: “@1right”). Then @2: push (stack: “@1right,@2right”). Then @3: push (stack: “@1right,@2right,@3right”). Then add one to the cell (+1). Then @4: push (stack: “@1right,@2right,@3right,@4right”). Then 5 “+”s, so we’ve added 6. Then we hit “#”: pop, return to @4, skip one cell, so the last 4 “+”s execute again (+10). Pop again (stack: “@1right,@2right”), return to @3, skip one cell, which lands us on @4: push (stack: “@1right,@2right,@4right”), add five (+15), pop back to @4, skip, add four (+19). Pop (stack: “@1right”), return to @2, skip one cell, which jumps over @3 and lands on the lone “+” (+20). Then @4 again: push, add five (+25), pop, skip, add four (+29). Pop (stack empty), return to @1, skip one cell, which jumps over @2 and lands on @3: push (stack: “@3right”), add one (+30), then the whole @4 dance again, adding nine (+39). Pop back to @3, skip one cell, landing on @4: push, add five (+44), pop, skip, add four (+48). The final “#” finds the stack empty, so we’re done, having added 48.
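That bookkeeping is easy to double-check mechanically. Here’s a little Python sketch of my own that ignores the 2-D geometry, treats the line as a flat string, and simulates just the “@”/“#” stack discipline, counting how many “+”s actually execute:

```python
def count_increments(prog):
    """Simulate the @/# call stack over a one-dimensional SNUSP line,
    returning how many "+" instructions execute in total."""
    stack, count, i = [], 0, 0
    while i < len(prog):
        ch = prog[i]
        if ch == "@":
            stack.append(i)       # remember where we were called from
        elif ch == "+":
            count += 1
        elif ch == "#":
            if not stack:
                break             # empty stack: the subroutine returns
            i = stack.pop() + 2   # return past the "@", skipping one cell
            continue
        i += 1
    return count
```

Running `count_increments("=@@@+@+++++#")` on the body of the subroutine confirms the total of 48.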
Here’s a real beauty: Multiplication, with documentation. If you look at it carefully, it’s actually reasonably clear how it works! Given this instruction set, that’s *truly* an amazing feat.
```
read two characters ,>,== * /=================== ATOI ----------
convert to integers /=/@</@=/ * // /===== ITOA ++++++++++ /----------/
multiply @ =!=========/ // /++++++++++/ ----------
convert back !/@!============/ ++++++++++ /----------/
and print the result / .# * /++++++++++/ --------#
/====================/ * ++++++++#
|
| /- #/?=<<<>>> />>+<+>>>+/ // /======== BSPL2 !======?/#
| /->+< /===|=========== FMOV4 =/ // /<+>-
| #?===/! FMOV1 =|===|============== /====/ /====== FSPL2 !======?/#
| /==|===|==============|==|=======/
| * * *|* | * | * * * * * * *|* | * * * /+@/>@/>>=== /====>>@<@=?/@==<#<<<== !<<<>>>-?/ * // /-
* \ /@========|======!/?>=!/?>!/=======?<<<#
| -/>>+<<+-?+>?/
\!=</</
```
Finally, there’s an implementation of Ackermann’s function:

```
A(x, y) = y + 1                   if x = 0
          A(x - 1, 1)             if x > 0 and y = 0
          A(x - 1, A(x, y - 1))   if x > 0 and y > 0
```

The value of Ackermann’s function grows like nothing else. A(4, 2) is about 2×10^19728.
Here’s Ackermann’s function in SNUSP:
```
/==!/==atoi=@@@-@-----#
|   |
|   | /=========!==!==== ** recursion **
$,@/>,@/==ack=!? j+1
 j i -@/# | | A(i,0) -> A(i-1,1)
@>@->@/@ A(i-1,A(i,j-1))
# # | | |
/-<>!=/ =====|==@>>>@< 0) ? ? | | |
>>+<>+>+<<>>+<<<-/
```
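Incidentally, the growth claim above is easy to sanity-check in Python. The recursion below is the textbook definition, usable only for tiny arguments; for A(4, 2) itself we lean on the standard closed form A(4, 2) = 2**65536 - 3, which Python’s bignums handle directly:

```python
# Textbook recursive Ackermann; only usable for tiny arguments.
def ack(x, y):
    if x == 0:
        return y + 1
    if y == 0:
        return ack(x - 1, 1)
    return ack(x - 1, ack(x, y - 1))

# A(4, 2) is hopeless to compute by this recursion, but its known
# closed form is 2**65536 - 3:
a_4_2 = 2**65536 - 3
digits = len(str(a_4_2))  # 19729 digits, i.e. roughly 2 x 10^19728
```

The small cases (A(1, 1) = 3, A(2, 2) = 7, A(3, 3) = 61) come straight out of the recursion, while the digit count of the closed form bears out the “about 2×10^19728” figure.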
[snuspspec]: http://www.deepwood.net/~drlion/snusp/snusp-1.0-spec-wd1.pdf
[snuspcomp]: http://www.baumanfamily.com/john/esoteric.html
[snuspinterp]: http://www.quirkster.com/snusp/snusp-js.html
[brainfuck]: http://scienceblogs.com/goodmath/2006/07/gmbm_friday_pathological_progr.php
[befunge]: http://scienceblogs.com/goodmath/2006/07/friday_pathological_programmin.php