Category Archives: Good Math

Building Manifolds with Products

Time to get back to some topology, with the new computer. Short post this morning, but at least it’s something. (I had a few posts queued up, just needing diagrams, but they got burned with the old computer. I had my work stuff backed up, but I don’t let my personal stuff get into the company backups; I like to keep them clearly separated. And I didn’t run my backups the way I should have for a few weeks.)
Last time, I started to explain a bit of patchwork: building manifolds from other manifolds using *gluing*. I’ll have more to say about patchwork on manifolds, but first, I want to look at another way of building interesting manifolds.
At heart, I’m really an algebraist, and some of the really interesting manifolds can be defined algebraically in terms of the topological product. You see, if you’ve got two manifolds **S** and **T**, then their product space **S×T** is also a manifold. Since we already talked about the topological product – both in classic topological terms and in categorical terms – I’m not going to go back and repeat the definition. But I will walk through a couple of examples of interesting manifolds that you can build using the product.
The easiest example starts with a line. Just a simple, basic line – that’s a 1-dimensional manifold. What’s the product of two lines? Hopefully, you can easily guess that: it’s a plane. The standard cartesian metric spaces are all topological products of lines: ℜⁿ is the product of *n* lines.
To be a bit more interesting, take a circle – the basic, simple circle on a cartesian plane. Not the *contents* of the circle, but the closed line of the circle itself. In topological terms, that’s a 1-sphere, and it’s also a very simple manifold with no boundary. Now take a line, which is also a simple manifold.
What happens when you take the product of the line and the circle? You get a hollow cylinder.
cylinder.jpg
What about if you take the product of the circle with *itself*? Thing about the definition of product: from any point *p* in the product **S×T**, you should be able to *project* an image of
**S** and an image of **T**. What’s the shape where you can make that work right? The torus.
cylinder.jpg
What about if you take the product of the circle with *itself*? Think about the definition of product: from any point *p* in the product **S×T**, you should be able to *project* an image of **S** and an image of **T**. What’s the shape where you can make that work right? The torus.

torus.jpg

In fact, the torus is a member of a family of topological spaces called the toroids. For any dimensionality *n*, there is an *n*-toroid, which is the product of *n* circles. The 1-toroid is a circle; the 2-toroid is our familiar torus; the 3-toroid is a mess. (Beyond the 2-toroid, our ability to visualize them falls apart: what kind of figure can be *sliced* to produce a torus and a circle? The *concept* isn’t too difficult, but the *image* is almost impossible.)
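To make the product concrete, here’s a small Python sketch (my own illustration – the function names are made up): a point on the *n*-toroid is just an *n*-tuple of angles, one per circle factor; projection onto a factor simply reads off one coordinate; and the familiar doughnut picture of the 2-toroid is the standard embedding into ℜ³.

```python
import math

# A point on the n-toroid (the product of n circles) is an n-tuple
# of angles, one angle per circle factor.
def project(point, k):
    """Project a point of the n-toroid onto its k-th circle factor."""
    return point[k]

def torus_embed(theta, phi, R=2.0, r=1.0):
    """Embed a point of the 2-toroid into R^3 using the standard
    parametrization: R is the distance from the tube's center circle
    to the torus's axis, r is the tube radius."""
    x = (R + r * math.cos(phi)) * math.cos(theta)
    y = (R + r * math.cos(phi)) * math.sin(theta)
    z = r * math.sin(phi)
    return (x, y, z)
```

The projections are what make this a *product*: each one recovers a circle factor, exactly as the definition demands.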

Haskell and Scheme: Which One and Why?

While I was waiting for stuff to install on my new machine, I was doing some browsing around the web, and came across an interesting article at a blog called “The Only Winning Move”, titled [Scheme Death Knell?](http://theonlywinningmove.blogspot.com/2006/10/scheme-death-knell.html). It’s not a bad article at all, but I disagree with its conclusions, and I thought it would be interesting to discuss why.
The article was brought on by *another* blog post, this one at [Lambda the Ultimate](http://lambda-the-ultimate.org/) suggesting that someone translate Douglas Hofstadter’s columns introducing Scheme to replace the Scheme with Haskell.
Josh over at TOWM was slightly irritated by this idea, and concludes that the only reason people are suggesting it is that Scheme is getting too old to be cool and trendy, and so they want to switch to something newer.
I disagree.
Haskell is a *very* different language from Scheme, and I think that there are some genuinely good reasons for teaching Haskell instead of Scheme. I *also* think there are some genuinely good reasons for using Scheme instead of Haskell. It’s not really clear to me which is better, but it’s worth contrasting the two to help understand the tradeoff.

Continue reading

A Metalanguage for Pathological Programming: Cellular Automata in Alpaca

Today’s entry isn’t so much a pathological language itself as it is a delightful toy which can be used to *produce* pathological languages. I’m looking at Chris Pressey’s wonderful language [ALPACA](http://catseye.mine.nu:8080/projects/alpaca/), which is a meta-language for describing different kinds of cellular automata.
Frankly, I’m very jealous of Alpaca. I started writing a cellular automata language something like this on my own; I spent about two weeks of my free time working on it, and got it roughly 90% finished before I found out that he’d already done it, the bastard!
Alpaca definitely qualifies as a real language, in that it is capable of generating Turing-complete automata. We can show this in two different ways. First, Conway’s Life is a trivial program in Alpaca, and you can build a Turing machine simulator using Life. And second, Alpaca includes an implementation of a Brainfuck clone as a cellular automaton.
Alpaca’s a very simple language, but it lets you define pretty much any cellular automaton that works in a two-dimensional rectangular grid. And it comes with some very fun, and some very pathological examples.

Continue reading

Manifolds and Glue

So, after the last topology post, we know what a manifold is – it’s a structure where the neighborhoods of points are *locally* homeomorphic to open spheres in some ℜⁿ.
We also talked a bit about the idea of *gluing*, which I’ll talk about more today. Any manifold can be formed by *gluing together* subsets of ℜⁿ. But what does *gluing together* mean?
Let’s start with a very common example. The surface of a sphere is a simple manifold. We can build it by gluing together *two* closed disks from ℜ² (a plane). We can think of that as taking each disk, and stretching it over a bowl until it’s shaped like a hemisphere. Then we glue the two hemispheres together so that the *boundary* points of the hemispheres overlap.
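As an informal illustration (a little Python sketch of my own, not part of the formal construction), here are the two “stretched” disks as maps from the closed unit disk into ℜ³. On the boundary circle, where x²+y²=1, the two maps agree – and that agreement is exactly where the gluing happens:

```python
import math

def upper(x, y):
    """Map a point of the closed unit disk to the upper hemisphere."""
    # max(...) guards against tiny negative values from rounding.
    return (x, y, math.sqrt(max(0.0, 1.0 - x*x - y*y)))

def lower(x, y):
    """Map a point of the closed unit disk to the lower hemisphere."""
    return (x, y, -math.sqrt(max(0.0, 1.0 - x*x - y*y)))

# On the boundary circle x^2 + y^2 = 1, both maps give z = 0, so the
# two hemispheres agree exactly where their boundary points are glued.
```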
Now, how can we say that formally?

Continue reading

A Bit About Number Bases

After my binary fingermath stuff, a few people wrote to me to ask about just how binary really works. For someone who does the kinds of crazy stuff that I do, the idea of different number bases is so fundamental that it’s easy to forget that most people don’t really understand it.
To start off with, I need to explain how our regular number system works. Most people understand how our numbers work, but don’t necessarily understand *why* they’re set up that way.
Our number system is *positional* – that is, what a given digit of a number means is dependent on its position in the number. The positions each represent one power of ten, increasing as you go from right to left, which is why our number system is called base-10. You can do positional numbers using any number as the base – each digit will correspond to a power of the base.
So suppose we have a number like 729328. The first position from the right, containing the digit “8”, is called position zero, because it tells us how many 10⁰=1s we have. The next position is position 1, or the 10¹=10s column, because it says how many tens there are (2). And then we have the digit 3 in position 2, meaning that there are 3 10²s = 3 100s, etc. So the number really means 7×10⁵ + 2×10⁴ + 9×10³ + 3×10² + 2×10¹ + 8×10⁰ = 7×100,000 + 2×10,000 + 9×1,000 + 3×100 + 2×10 + 8×1. Now let’s look at the same kind of thing in base 3, using the number 102120. That’s 1×3⁵ + 0×3⁴ + 2×3³ + 1×3² + 2×3¹ + 0×3⁰ = 1×243 + 0×81 + 2×27 + 1×9 + 2×3 + 0×1 = 243 + 54 + 9 + 6 = 312 in base 10.
radix.jpg
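The positional rule works the same way in any base, so it’s easy to sketch in a few lines of Python (my own illustrative snippet; the function name is made up):

```python
def from_base(digits, base):
    """Interpret a list of digits (most significant first) as a
    number in the given base, using the positional rule: each step
    shifts the accumulated value one column left and adds a digit."""
    value = 0
    for d in digits:
        value = value * base + d
    return value
```

For example, `from_base([1, 0, 2, 1, 2, 0], 3)` works through exactly the base-3 sum above.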
Normally, to show that we’re using a different base, we’d write the numbers with a subscript showing the base – so we’d normally write the binary for 11 as 1011₂.
To write numbers in bases larger than 10, we have one very small problem: we only have symbols for the digits 0 through 9, but base-16 needs digits from 0 to 15. The way that we work around that is by using letters of the alphabet: A=10, B=11, C=12, D=13, E=14, F=15, and so on. So base-16 uses 0-9 and A-F as digits.
Base-8 (octal) and base-16 (hexadecimal, or hex for short) are special to a lot of computer people. The reason for that is that because 8 and 16 are powers of 2, each digit in octal or hexadecimal corresponds exactly to a specific number of binary digits: three for octal, four for hex. Every digit in hexadecimal is something between 0 and 15, which corresponds to exactly 4 binary digits. Base-10 doesn’t have this property – a digit in base ten takes *between* three and four binary digits – and that means that you *can’t* divide up a string of binary into parts that correspond to decimal digits without looking at the number. With hexadecimal, every hex digit is exactly 4 binary digits, which is remarkably convenient.
Why would that make a difference? Well, we store data in memory in 8-bit groups called bytes. How big a number can you store in one byte? 255. That’s a strange-looking number. But what about in hexadecimal? It’s the largest number you can write in 2 digits: FF₁₆. It gets worse as you get bigger. If you’ve got a computer that has bytes of memory numbered using 4-byte addresses, what’s the largest memory address? In hex, it’s easy to write: FFFFFFFF₁₆. In decimal? 4294967295. The hex numbers are a lot more natural to work with when you’re doing something close to the hardware. In modern programming, we rarely use hex anymore, but a few years ago, it was absolutely ubiquitous. I used to be able to do arithmetic in hex as naturally as I can in decimal. Not anymore; I’ve gotten rusty.
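The digit-regrouping trick is easy to see in code. Here’s a tiny Python sketch of my own (the function name is made up) that expands each hex digit into its 4 bits:

```python
# Each hex digit corresponds to exactly 4 bits, so converting from
# hex to binary is just a matter of expanding digits one at a time.
def hex_to_bits(hex_string):
    """Expand each hexadecimal digit into its 4-bit binary form."""
    return ''.join(format(int(d, 16), '04b') for d in hex_string)
```

So `hex_to_bits('FF')` gives the eight 1-bits of a full byte – no carrying or division needed, unlike converting to decimal.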
Among computer geeks, there’s an old argument that number systems based on powers of two are more natural than others, but I’m not going to get into *that* basket of worms. If you’re interested, take a look at the [Intuitor Hex Headquarters](http://www.intuitor.com/hex/).

Pathological Programming: Ignorance is Bliss, or at least control.

Today’s dose of pathology is another masterpiece from the mangled mind of Chris Pressey. It’s called “[Version](http://catseye.mine.nu:8080/projects/version/)”, for no particularly good reason.
It’s a wonderfully simple language: there’s only *one* actual kind of statement: the labelled assignment. Every statement is of the form “*label*: *var* = *value*”. But like last week’s monstrosity, Smith, Version is a language that doesn’t really have any flow control. Instead of copying instructions the way Smith does, in Version the program is toroidal: it executes all statements in sequence, and when it reaches the end, it goes back to the beginning.
The way that you manage to control your program is that one of the variables you can assign values to is special: it’s called IGNORE, and its value is the *ignorance space*. The value of the ignorance space is an *irregular* expression (basically, a weak wildcard subset of regexps); if a statement’s label fits the ignorance space, then the statement is ignored rather than executed. The program will keep executing as long as there are any statements that are not part of the ignorance space.
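To make the execution model concrete, here’s a toy Python sketch of my own – *not* Pressey’s implementation, and with the irregular expressions replaced by Python’s `fnmatch` wildcards, which is a simplifying assumption:

```python
import fnmatch

def run_version(program, max_cycles=100):
    """Toy sketch of Version's execution model.

    program: a list of (label, var, value_fn) statements, where
    value_fn computes the new value from the current environment.
    The real language's 'irregular expressions' are approximated
    here by fnmatch-style wildcard patterns.
    """
    env = {'IGNORE': ''}
    for _ in range(max_cycles):
        live = [s for s in program
                if not fnmatch.fnmatch(s[0], env['IGNORE'])]
        if not live:          # every statement is ignored: the program halts
            return env
        for label, var, value_fn in program:
            if fnmatch.fnmatch(label, env['IGNORE']):
                continue      # statement falls inside the ignorance space
            env[var] = value_fn(env)
    return env

# A two-statement program: count up to 3, then ignore everything.
prog = [
    ('count', 'N', lambda e: e.get('N', 0) + 1),
    ('stop', 'IGNORE', lambda e: '*' if e.get('N', 0) >= 3 else ''),
]
```

Running `run_version(prog)` loops through both statements until N reaches 3, at which point the `stop` statement sets the ignorance space to match every label, and the program halts.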

Continue reading

Meet the Manifolds

So far, we’ve been talking about topologies in the most general sense: point-set topology. As we’ve seen, there are a lot of really fascinating things that you can do using just the bare structure of topologies as families of open sets.

But most of the things that are commonly associated with topology aren’t just abstract point-sets: they’re *shapes* and *surfaces* – in topological terms, they’re things called *manifolds*.

Informally, a manifold is a set of points forming a surface that *appears to be* euclidean if you look at small sections. Manifolds include euclidean surfaces – like the standard topology on a plane; but they also include many non-euclidean surfaces, like the surface of a sphere or a torus.

Continue reading

Connected Topologies and Fixed Points

If you’ve got a connected topology, there are some neat things you can show about it. One of the interesting ones involves *fixed points*. Today I’m going to show you a few of the relatively simple fixed point properties of basic connected topologies.
To give you a taste of what’s coming: imagine that you have two sheets of graph paper, with the edges numbered with a coordinate system. So you can easily identify any point on the sheet of paper. Take one sheet, and lay it flat on the table. Take the *second* sheet, and crumple it up into a little ball. No matter how you crumple the paper into a ball, no matter where you put it down on the uncrumpled sheet, there will be at least one point on the crumpled ball of paper which is directly above the point with the same coordinate on the flat sheet.
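The crumpled-paper claim is a fixed-point theorem (Brouwer’s), and its usual proofs are famously non-constructive: they tell you a fixed point exists without finding it. But for the related case of a *contraction* mapping (the Banach fixed-point principle, a different but neighboring result), you can actually find the fixed point by simple iteration. A quick Python illustration of my own:

```python
import math

def fixed_point(f, x0, tol=1e-10, max_iter=10000):
    """Find a fixed point of f by repeated application, starting
    from x0.  Guaranteed to converge when f is a contraction."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx - x) < tol:
            return fx
        x = fx
    raise RuntimeError("iteration did not converge")
```

For example, iterating `math.cos` from any starting point converges to the unique solution of cos(x) = x, near 0.739.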
As a quick aside: today is Yom Kippur, which means that this post is scheduled, and I’m not anywhere near my computer. So I won’t be able to make any corrections or answer any comments until late this evening.

Continue reading

Programming without Control: Smith

Joy of joys, [Cat’s Eye Technologies](http://catseye.mine.nu:8080/projects/), the home of Chris Pressey, one of the most prolific designers of thoroughly bizarre languages, is back up! And so, today, we’re going to take a look at one of his masterpieces, the language *Smith*. This is one of my personal all-time favorite pathological languages; it’s positively brilliantly twisted.
Smith is, mostly, a pretty typical memory-oriented machine language. So what makes it pathological? It’s got *no* jumps, *no* branches, *no* control flow whatsoever. The program counter starts at zero, and every step, it increments by one *and only one*.
So, you might quite sensibly ask, how on earth could this beast do anything useful, much less be Turing complete? The answer is: self-modifying code; in particular, the program can *copy* its own instructions.

Continue reading

Connectedness

Next stop on our tour of topology is the idea of *connectedness*. It’s an important concept that defines a lot of useful and interesting properties of topological spaces.
The basic idea of connectedness is very simple and intuitive. If you think of a topology on a metric space like ℜ³, what connectedness means is, quite literally, connectedness in the physical sense: a space is connected if it doesn’t consist of two or more pieces that never touch.
Being more formal, there are several equivalent definitions:
* The most common one is the definition in terms of open and closed sets. It’s precise, concise, and formal; but it doesn’t have a huge amount of intuitive value. A topological space **T** is connected if/f the only two sets in **T** that are both open *and* closed are **T** and ∅.
* The most intuitive one is the simplest set-based definition: a topological space **T** is connected if/f **T** is *not* the union of two disjoint non-empty closed sets.
* One that’s clever, in that it’s got both formality and intuition: **T** is connected if the only sets in **T** with empty boundaries are **T** and ∅.
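For a *finite* space, you can check the first definition directly. Here’s a small Python sketch of my own that tests connectedness by hunting for a nontrivial clopen set:

```python
def is_connected(points, open_sets):
    """Connectedness check for a finite topological space, using the
    clopen-set definition: T is connected if/f the only sets that are
    both open and closed are T and the empty set."""
    space = frozenset(points)
    opens = {frozenset(s) for s in open_sets}
    for s in opens:
        is_closed = (space - s) in opens        # closed = complement is open
        nontrivial = s not in (space, frozenset())
        if is_closed and nontrivial:
            return False                        # found a nontrivial clopen set
    return True
```

The two-point space with only one singleton open (the Sierpinski space) comes out connected; the discrete two-point space, where both singletons are open, does not.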
Closely related to the idea of connectedness is separation. A topological space is *separated* if/f it’s not connected. (Profound, huh?)
Separateness becomes important when we talk about *subspaces*, because it’s much easier to define when subspaces are *separated*; and they’re connected if they’re not separated.
If A and B are subspaces of a topological space **T**, then they’re *separated in **T*** if and only if each is disjoint from the other’s closure. An important thing to understand here is that we are *not* saying that their *closures* are disjoint. We’re saying that A and B* are disjoint, and B and A* are disjoint – not that A* and B* are disjoint.
The distinction is much clearer with an example. Let’s look at the topological space ℜ². We can let *A* and *B* be *open* disks. Let’s say that *A* is the open disk centered on (-1,0) with radius 1: every point whose distance from (-1,0) is *less than* 1. And let’s say that *B* is the open disk centered on (1,0), also with radius 1. So the two sets are what you see in the image below. *A* is the shaded part of the green circle. The outline is the *boundary* of *A*, which is part of *A**, but not part of *A* itself. *B* is the shaded part of the red circle; the outline is its boundary. Neither *A* nor *B* includes the point (0,0). But both the *closure* of *A* and the *closure* of *B* contain (0,0). So *A* is disjoint from *B**, and *B* is disjoint from *A**. But *A** is *not* disjoint from *B**: they overlap at (0,0). *A* and *B* are separated, even though their closures overlap.
circles.jpg
There’s also an even stronger notion of connectedness for a topological space (or for subspaces): *path*-connectedness. A space **T** is *path-connected* if/f for any two points x,y ∈ **T**, there is a *continuous path* between x and y. Of course, we need to be a bit more formal than that: what’s a continuous path?
There is a continuous path from x to y in **T** if there is a continuous function *f* from the closed interval [0,1] to **T**, where *f(0)=x*, and *f(1)=y*.
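In ℜⁿ, for instance, the straight line segment does the job. A quick Python sketch of my own (names made up for illustration):

```python
def straight_path(x, y):
    """The straight-line path from x to y in R^n: a continuous map
    f : [0,1] -> R^n with f(0) = x and f(1) = y."""
    def f(t):
        return tuple(a + t * (b - a) for a, b in zip(x, y))
    return f
```

Any space where every pair of points admits such an `f` – straight or not – is path-connected, and path-connectedness implies the ordinary connectedness defined above.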