Category Archives: Good Math

Two-Dimensional Pathology: Befunge

I’m currently away on a family vacation, and as soon as vacation is
over, I’m off on a business trip for a week. And along the way, I’ve got some
deadlines for my book. So to fill in, I’m recycling some old posts. I decided
that it’s been entirely too long since there was any pathological programming
’round these parts, so I’m going to repost some of my favorites.

Today, we’re going to take a look at a brilliant language called Befunge.
Befunge is the work of an evil genius named Chris Pressey.

Normal programming languages are based on a basically one-dimensional
syntax; the program is a string, a sequence of characters, and it’s processed
by reading that string in a straight-ahead fashion. But that’s not Befunge!
It’s got a two-dimensional syntax. Befunge is something like a two-dimensional
Turing machine: the program is written on a two-dimensional torus called the playfield. Each
instruction in Befunge is a single character, and where it’s located on the
torus is crucial – control is going to move either up/down or left/right
on the torus. All of the control flow is expressed by just changing the direction that
the head moves over the torus.

(In case you’re not familiar with a torus, it’s what you get
if you take a very flexible sheet of paper, and roll it so that you connect
the top edge to the bottom, and then roll that tube so that you connect the
left edge to the right. You get a donut shape where moving up from what used
to be the top of the page puts you on the bottom of the page; moving left from
the left edge of the page puts you on the right.)
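
To give a concrete feel for that movement model, here’s a small sketch of my own – not a real Befunge interpreter – that handles only the direction-changing instructions (>, <, ^, v) and the halt instruction (@), and shows the instruction pointer wrapping around the edges of the playfield. The little two-line program is just something I made up to trace.

```python
# A minimal sketch of Befunge-style control flow (not a full interpreter):
# the instruction pointer lives on a 2-D grid treated as a torus, and the
# only instructions handled here are the four direction changes and halt.

program = [
    ">   v",
    "    <",
]

rows = len(program)
cols = max(len(line) for line in program)
grid = [list(line.ljust(cols)) for line in program]   # pad to a rectangle

x, y = 0, 0       # instruction pointer position (column, row)
dx, dy = 1, 0     # current direction: start moving right

for _ in range(20):               # just trace a few steps
    ch = grid[y][x]
    if ch == '>':
        dx, dy = 1, 0
    elif ch == '<':
        dx, dy = -1, 0
    elif ch == '^':
        dx, dy = 0, -1
    elif ch == 'v':
        dx, dy = 0, 1
    elif ch == '@':
        break                     # halt
    print(f"at ({x},{y}) saw {ch!r}, heading ({dx},{dy})")
    # move, wrapping around both edges: the playfield is a torus
    x = (x + dx) % cols
    y = (y + dy) % rows
```

Once the pointer hits the `<` on the bottom row, it just keeps walking left forever, falling off the left edge and reappearing on the right – which is exactly the torus behavior described above.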

Continue reading

The One… The Only… Brainf*ck

I’m currently away on a family vacation, and as soon as vacation is over, I’m off on a business trip for a week. And along the way, I’ve got some deadlines for my book. So to fill in, I’m recycling some old posts. I decided that it’s been entirely too long since there was any pathological programming ’round these parts, so I’m going to repost some of my favorites.

As long-time readers know by now, in real life, I’m not a mathematician; I’m a computer scientist. I’m still a math geek, mind you, but what I really do is very much in the realm of applied math, working on building systems to help people build programs.

One of my pathological obsessions is programming languages. Since I first got exposed to TRS-80 Model 1 BASIC back in middle school, I’ve been absolutely nuts about programming languages. Last time I counted, I’d learned about 150 different languages; and I’ve picked up more since then. I’ve written programs in most of them. Like I said, I’m nuts.

These pathological programming language posts are my way of inflicting my obsession on you in a (hopefully) amusing way. You see, in this screwed up world of ours, there are lots and lots of thoroughly crazy people out there. In the world of geekery, many of those crazy people like to invent programming languages. Some very small number of them try to design good languages and succeed; a much larger number try to design good languages and fail; and then there are ones whose work I’m writing about. The ones who deliberately set out to design strange, warped, twisted, and nearly unusable languages, and succeed brilliantly. Most of the people who design them call them “esoteric” programming languages. I call them evil.

Today, the beautiful grand-daddy of the esoteric language family: the one, the only, the truly and deservedly infamous: Brainfuck, designed by Urban Müller. (There are a number of different implementations available; just follow the link.)

Only 8 commands – including input and output – all written using symbols. And yet it’s Turing complete. In fact, it’s one-up on just being Turing complete – it’s actually been formally specified with a complete formal theoretical design, called P′′. And it’s even been implemented in hardware!
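
To make the “only 8 commands” point concrete, here’s a rough sketch of a minimal Brainfuck interpreter – my own simplified illustration, not Müller’s implementation. The eight single-character commands handled in the loop are the entire language; the 30,000-cell tape is just the conventional size, and the example program at the end is mine.

```python
# A minimal sketch of a Brainfuck interpreter.  The whole language is the
# eight characters handled in the loop below; everything else is ignored.

def run(code, input_bytes=b""):
    tape = [0] * 30000        # the conventional 30,000-cell tape
    ptr = 0                   # data pointer
    pc = 0                    # program counter
    inp = list(input_bytes)
    out = []

    # pre-compute matching brackets for [ and ]
    match, stack = {}, []
    for i, ch in enumerate(code):
        if ch == '[':
            stack.append(i)
        elif ch == ']':
            j = stack.pop()
            match[i], match[j] = j, i

    while pc < len(code):
        ch = code[pc]
        if ch == '>':   ptr += 1
        elif ch == '<': ptr -= 1
        elif ch == '+': tape[ptr] = (tape[ptr] + 1) % 256
        elif ch == '-': tape[ptr] = (tape[ptr] - 1) % 256
        elif ch == '.': out.append(tape[ptr])
        elif ch == ',': tape[ptr] = inp.pop(0) if inp else 0
        elif ch == '[' and tape[ptr] == 0:
            pc = match[pc]    # skip past the matching ]
        elif ch == ']' and tape[ptr] != 0:
            pc = match[pc]    # jump back to the matching [
        pc += 1
    return bytes(out)

# A tiny example: add 2 and 3 by draining one cell into another.
print(run("++>+++[<+>-]<.")[0])   # prints 5
```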

Continue reading

Pathological Programming with Primes (REPOST)

I’m currently away on a family vacation, and as soon as vacation is over, I’m off on a business trip for a week. And along the way, I’ve got some deadlines for my book. So to fill in, I’m recycling some old posts. I decided that it’s been entirely too long since there was any pathological programming ’round these parts, so I’m going to repost some of my favorites.

Today’s pathological language is my personal all-time favorite pathological monstrosity. It’s an incredible combination of perfect simplicity and complete incomprehensibility. It’s based on a piece of work called Fractran by John Conway of game theory fame. It’s a really fascinating bugger; absolutely insanely difficult to program in, but based on one of the most bizarrely elegant concepts of computation that I’ve ever seen. It’s amazing that this is Turing complete. It’s not a real programming language in the sense of being able to write practical programs; it’s more of a simple theoretical computational model which has been implemented as a
language.
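
Since the language in question is built on Fractran, here’s a hedged sketch of the underlying Fractran model itself – my own illustration, not the language the post describes. A program is an ordered list of fractions, the state is a single positive integer, and each step multiplies the state by the first fraction that gives an integer result, halting when none does.

```python
# A minimal sketch of Conway's Fractran model of computation.
# A program is an ordered list of fractions; the machine state is one
# positive integer n.  Each step: find the first fraction f such that n*f
# is an integer, and replace n with n*f.  Halt when no fraction applies.

from fractions import Fraction

def run_fractran(fraction_pairs, n, max_steps=10_000):
    program = [Fraction(p, q) for p, q in fraction_pairs]
    for _ in range(max_steps):
        for f in program:
            m = n * f
            if m.denominator == 1:   # n*f is an integer: take the step
                n = int(m)
                break
        else:
            return n                 # no fraction applied: halt
    return n

# Example: the one-fraction program [3/2] is an adder.  Encode the inputs
# a and b as exponents of 2 and 3; when it halts, the answer is the
# exponent of 3.  (a and b here are arbitrary example values.)
a, b = 3, 4
result = run_fractran([(3, 2)], 2**a * 3**b)
print(result == 3**(a + b))   # True: it computed a + b = 7
```

Conway’s famous PRIMEGAME program works exactly the same way, just with fourteen carefully chosen fractions; started from 2, the powers of 2 that show up in its run are 2², 2³, 2⁵, 2⁷, … – every prime, in order.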

Continue reading

Chaotic Systems and Escape

[Image: an N-body gravitational simulation, by Jean-Francois Colonna]

One of the things that confused me when I started reading about chaos is easy to explain using what we’ve covered about attractors. (The image to the side was created by Jean-Francois Colonna, and is part of his slide-show here.)

Here’s the problem: We know that things like N-body gravitational systems are chaotic – and a common example of that is how a gravity-based orbital system that appears stable for a long time can suddenly go through a transition where one body is violently ejected, with enough velocity to permanently escape the orbital system.

But when we look at the definition of chaos, we see the requirement for dense periodic orbits. If the ejection of a body from a gravitational system is a demonstration of chaos, how can that system have periodic orbits?

The answer relates to something I mentioned in the last post. A system doesn’t have to be chaotic at all points in its phase space. It can be chaotic under some conditions – that is, chaotic in some parts of the phase space. Speaking loosely, when a phase space has chaotic regions, we tend to call it a chaotic phase space.

In the gravitational system example, you do have a region of dense periodic orbits. You can create an N-body gravitational system in which the bodies will orbit forever, never actually repeating a configuration, but also never completely breaking down. By the Poincaré recurrence theorem, given any configuration in its phase space, the system must eventually come arbitrarily close to repeating that configuration. But that doesn’t mean that it’s really repeating: it’s chaotic, so even those infinitesimal differences will result in divergence from the past – it will follow a different path forward.

An attractor of a chaotic system shows you a region of the phase space where the system behaves chaotically. But it’s not the entire phase space. If the attractor covered the entire space, it wouldn’t be particularly interesting or revealing. What makes it interesting is that it captures a region where you get chaotic behavior. The attractor isn’t the whole story of a chaotic system’s phase space – it’s just one interesting region with useful analytic properties.

So to return to the N-body gravitational problem: the phase space of an N-body gravitational system does contain an attractor full of dense orbits. It’s definitely very sensitive to initial conditions. There are definitely phase spaces for N-body systems that are topologically mixing. None of that precludes the possibility that you can create N-body gravitational systems that break up and allow escape. The escape property isn’t a good example of the chaotic nature of the system, because it encourages people to focus on the wrong properties of the system. The system isn’t chaotic because you can create gravitational systems where a body will escape from what seemed to be a stable system. It’s chaotic because you can create systems that don’t break down, which are stable, but which are thoroughly unpredictable, and will never repeat a configuration.

Strange Attractors and the Structure of Chaos

[Image: the Lorenz attractor, generated with Sage]

Sorry for the slowness of the blog; I fell behind in writing my book, which is on a rather strict schedule, and until I got close to catching up, I didn’t have time to do the research necessary to write the next chaos article. (And no one has sent me any particularly interesting bad math, so I haven’t had anything to use for a quick rip.)

Anyway… Where we left off last was talking about attractors. The natural question is, why do we really care about attractors when we’re talking about chaos? That’s a question which has two different answers.

First, attractors provide an interesting way of looking at chaos. If you look at a chaotic system with an attractor, it gives you a way of understanding the chaos. If you start with a point in the attractor basin of the system, and then plot it over time, you’ll get a trace that shows you the shape of the attractor – and by doing that, you get a nice view of the structure of the system.

Second, chaotic attractors are strange. In fact, that’s their name: strange attractors. A strange attractor is an attractor whose structure has fractal dimension, and most chaotic systems have fractal-dimension attractors.

Let’s go back to the first answer, to look at it in a bit more depth. Why do we want to look in the basin in order to find the structure of the chaotic system?

If you pick a point in the attractor itself, there’s no guarantee of what it’s going to do. It might jump around inside the attractor randomly; it might be a fixed point which just sits in one place and never moves. But there’s no straightforward way of figuring out what the attractor looks like starting from a point inside of it. To return to (and strain horribly) the metaphor I used in the last post, the attractor is the part of the black hole past the event horizon: nothing inside of it can tell you anything about what it looks like from the outside. What happens inside of a black hole? How are the things that were dragged into it moving relative to one another – or are they moving at all? We can’t really tell from the outside.

But the basin is a different matter. If you start at a point in the attractor basin, you’ve got something that’s basically orbital. You know that every path starting from a point in the basin will, over time, get arbitrarily close to the attractor. It will circle and cycle around. It’s never going to escape from that area around the attractor – it’s doomed to approach it. So if you start at a point in the basin around a strange attractor, you’ll get a path that tells you something about the attractor.

Attractors can also vividly demonstrate something else about chaotic systems: they’re not necessarily chaotic everywhere. Lots of systems have the potential for chaos: that is, they’ve got sub-regions of their phase-space where they behave chaotically, but they also have regions where they don’t. Gravitational dynamics is a pretty good example of that: there are plenty of N-body systems that are pretty much stable. We can computationally roll back the history of the major bodies in our solar system for hundreds of millions of years, and still have extremely accurate descriptions of where things were. But there are regions of the phase space of an N-body system where it’s chaotic. And those regions are the attractors and attractor basins of strange attractors in the phase space.

A beautiful example of this is the first well-studied strange attractor. The guy who invented chaos theory as we know it was named Edward Lorenz. He was a meteorologist who was studying weather using computational fluid flow. He’d implemented a simulation, and by accident – while trying to reproduce a computation, he entered less precise values for the starting conditions – he got dramatically different results. Puzzling out why, he laid the foundations of chaos theory. In the course of studying it, he took the particular equations that he was using in the original simulation, and tried to simplify them to get the simplest system that he could that still showed the non-linear behavior.

The result is one of the most well-known images of modern math: the Lorenz attractor. It’s sort of a bent figure-eight. Its dimensionality isn’t (to my knowledge) known precisely – but it’s a hair above two (the best estimate I could find in a quick search was in the 2.08 range). It’s not a particularly complex system – but it’s fascinating. If you look at the paths in the Lorenz attractor, you’ll see that things follow an orbital path – but there’s no good way to tell when two paths that are very close together will suddenly diverge, and one will pass on the far inside of the attractor basin, and the other will fly to the outer edge. You can’t watch a simulation for long without seeing that happen.
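
If you want to see that divergence numerically, here’s a rough sketch of mine – not the Sage program mentioned below – that integrates the Lorenz equations with the classic parameters σ=10, ρ=28, β=8/3 using crude Euler steps, and tracks two trajectories that start a hair’s breadth apart. The starting points and step size are arbitrary illustrative choices.

```python
# A rough sketch of the Lorenz system, integrated with naive Euler steps.
# Parameters are the classic sigma=10, rho=28, beta=8/3.

SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0
DT = 0.001

def lorenz_step(p):
    x, y, z = p
    dx = SIGMA * (y - x)
    dy = x * (RHO - z) - y
    dz = x * y - BETA * z
    return (x + dx * DT, y + dy * DT, z + dz * DT)

a = (1.0, 1.0, 1.0)
b = (1.0, 1.0, 1.0 + 1e-8)   # a second trajectory, nudged by one part in 10^8

for step in range(40001):
    if step % 10000 == 0:
        dist = sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
        print(f"t={step * DT:5.1f}  separation={dist:.3e}")
    a, b = lorenz_step(a), lorenz_step(b)
```

The separation starts at 10⁻⁸ and, after a while, grows to the size of the whole attractor: the two paths are simply in different lobes of the figure-eight.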

While searching for information about this kind of stuff, I came across a wonderful demo, which relates to something else that I promised to write about. There’s a fantastic open-source mathematical software system called sage. Sage is sort of like Mathematica, but open-source and based on Python. It’s a really wonderful system, which I really will write about at some point. On the Sage blog, they posted a simple Sage program for drawing the Lorenz attractor. Follow that link, and you can see the code, and experiment with different parameters. It’s a wonderful way to get a real sense of it. The image at the top of this post was generated by that Sage program, with tweaked parameters.

Defining Dynamical Systems

In my first chaos post, I kept talking about dynamical systems without bothering to define them. Most people who read this blog probably have at least an informal idea of what a dynamical system is. But today I’m going to do a quick walkthrough of what a dynamical system is, and what the basic relation of dynamical systems is to chaos theory.

The formal definitions of dynamical systems are dependent on the notion of phase space. But before going all formal, we can walk through the basic concept informally.

The basic idea is pretty simple. A dynamical system is a system that changes
over time, and whose behavior can (in theory) be described by a function that takes
time as a parameter. So, for example, if you have a gravitational system which
has three bodies interacting gravitationally, that’s a dynamical system. If you
know the initial masses, positions, and velocities of the bodies, then the positions of all three bodies at any future point in time are a function of the time.
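
Here’s a bare-bones sketch of that idea: a toy three-body system advanced with naive Euler steps. The masses, positions, velocities, and step size are made-up illustrative values, and a real simulation would use a much better integrator; the point is just that the state at any time comes from applying the evolution rule to the initial conditions.

```python
# A minimal sketch of a dynamical system: three bodies moving under
# Newtonian gravity, advanced with naive Euler time steps.

G = 1.0  # gravitational constant, arbitrary units

# the state of the system: a list of (mass, position, velocity) triples
bodies = [
    (1.0, [0.0, 0.0], [0.0, -0.1]),
    (0.5, [1.0, 0.0], [0.0, 0.9]),
    (0.5, [-1.0, 0.0], [0.0, -0.9]),
]

def step(bodies, dt):
    """Advance the whole system by one small time step (naive Euler)."""
    accels = []
    for i, (_, pi, _) in enumerate(bodies):
        ax = ay = 0.0
        for j, (mj, pj, _) in enumerate(bodies):
            if i == j:
                continue
            dx, dy = pj[0] - pi[0], pj[1] - pi[1]
            r3 = (dx * dx + dy * dy) ** 1.5
            ax += G * mj * dx / r3
            ay += G * mj * dy / r3
        accels.append((ax, ay))
    return [(m, [x + vx * dt, y + vy * dt], [vx + ax * dt, vy + ay * dt])
            for (m, (x, y), (vx, vy)), (ax, ay) in zip(bodies, accels)]

# "the positions at any future time are a function of time":
# here, that function is just repeated application of the step rule.
state = bodies
for _ in range(10_000):
    state = step(state, 0.001)
print([pos for _, pos, _ in state])
```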

Continue reading

Chaos

One mathematical topic that I find fascinating, but which I’ve never had a chance to study formally is chaos. I’ve been sort of non-motivated about blog-writing lately due to so many demands on my time, which has left me feeling somewhat guilty towards those of you who follow this blog. So I decided to take this topic about which I know very little, and use the blog as an excuse to learn something about it. That gives you something interesting to read, and it gives me something to motivate me to write.

I’ll start off with a non-mathematical reason for why it interests me. Chaos is a very simple idea with very complex implications. The simplicity of the concept makes it incredibly ripe for idiots like Michael Crichton to believe that he understands it, even though he doesn’t have a clue. There’s an astonishingly huge quantity of totally bogus rubbish out there, where the authors are clueless folks who sincerely believe that their stuff is based on chaos theory – because they’ve heard the basic idea, and believed that they understood it. It’s a wonderful example of my old mantra: the worst math is no math. If you take a simple mathematical concept, and render it into informal non-mathematical words, and then try to reason from the informal stuff, what you get is garbage.

So, speaking mathematically, what is chaos?

To paraphrase something my book-editor recently mentioned: in math, the vast majority of everything is bad. Most functions are not continuous. Most continuous functions are not differentiable. Most numbers are irrational. Most irrational numbers are indescribable. And most complex systems are completely unstable.

Modern math students have, to a great degree, internalized this basic idea. We pretty much expect badness, so the implications of badness don’t surprise us. We’ve grown up mathematically knowing that there are many, many interesting things that we would really like to be able to do, but that can’t be done. That realization, that there are things that we can’t even hope to do, is a huge change from the historical attitudes of mathematicians and scientists – and it’s a very recent one. A hundred years ago, people believed that we could work out simple, predictable, closed-form solutions to all interesting mathematical problems. They expected that it might be very difficult to find a solution; and they expected that it might take a very long time, but they believed that it was always possible.

For one example that has a major influence on the study of chaos: John Von Neumann believed that he could build a nearly perfect weather prediction computer: it was just a matter of collecting enough data, and figuring out the right equations. In fact, he expected to be able to do more than that: he expected to be able to control the weather. He thought that the weather was likely to be a system where there were unstable points, and that by introducing small changes at the unstable points, that weather managers would be able to push the weather in a desired direction.

Of course, Von Neumann knew that you could never gather enough data to perfectly predict the weather. But most systems that people had studied could be approximated. If you could get measurements that were correct to within, say, 0.1%, you could use those measurements to make predictions that would be extremely close to correct – within some multiple of the precision of the basic measurements. Small measurement errors would mean small changes in the results of a prediction. So using reasonably precise but far from exact or complete measurements, you could make very accurate predictions.

For example, people studied the motion of the planets. Using the kinds of measurements that we can make using fairly crude instruments, people have been able to predict solar eclipses with great precision for hundreds of years. With better precision, measuring only the positions of the planets, we can predict all of the eclipses and alignments for the next thousand years – even though the computations will leave out the effects of everything but the 8 main planets and the sun.

Mathematicians largely assumed that most real systems would be similar: once you worked out what was involved, what equations described the system you wanted to study, you could predict that system with arbitrary precision, provided you could collect enough data.

Unfortunately, reality isn’t anywhere near that simple.

Our universe is effectively finite – so many of the places where things break seem like they shouldn’t affect us. There are no irrational numbers in real experience. Nothing that we can observe has a property whose value is an indescribable number. But even simple things break down.

Many complex systems have the property that they’re easy to describe – but where small changes have large effects. That’s the basic idea of chaos theory: that in complex dynamical systems, making a minute change to a measurement will produce huge, dramatic changes after a relatively short period of time.

For example, we compute weather predictions with the Navier-Stokes equations. N-S are a relatively simple set of equations that describe how fluids move and interact. We don’t have a closed-form solution to the N-S equations – meaning that given a particular point in a system, we can’t compute the way fluid will flow around it without also computing separately how fluid will flow around the points close to it, and we can’t compute those without computing the points around them, and so on.

So when we make weather predictions, we create a massive grid of points, and use the N-S equations to predict flows at every single point. Then we use the aggregate of that to make weather predictions. Using this, short-term predictions are generally pretty good towards the center of the grid.

But if you try to extend the predictions out in time, what you find is that they become unstable. Make a tiny, tiny change – alter the flow at one point by 1% – and suddenly, the prediction for the weather a week later is dramatically different. A difference of one percent in one out of a million cells can, over the space of a month, be responsible for the difference between a beautiful calm summer day and a cat-5 hurricane.

That basic property is called sensitivity to initial conditions, and it’s a big part of what defines chaos – but it’s only a part. And that’s where the crackpots go wrong. Just understanding the sensitivity and divergence isn’t enough to define chaos – but to people like Crichton, understanding that piece is understanding the whole thing.
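
To make the sensitivity part concrete, here’s a tiny sketch using the logistic map x → 4x(1−x), a standard toy example of a chaotic system – it stands in for the weather model here, and the starting values are arbitrary. Two starting points that differ by one part in a billion track each other for a while, and then end up in completely different places.

```python
# A minimal illustration of sensitivity to initial conditions, using the
# logistic map x -> 4*x*(1-x) as a stand-in for a chaotic system.

def logistic(x):
    return 4.0 * x * (1.0 - x)

a, b = 0.3, 0.3 + 1e-9    # two starting states, one part in a billion apart
for step in range(61):
    if step % 10 == 0:
        print(f"step {step:2d}: a={a:.6f}  b={b:.6f}  |a-b|={abs(a - b):.2e}")
    a, b = logistic(a), logistic(b)
```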

To really get the full picture, you need to dive in to topology. Chaotic systems have a property called topological mixing. Topological mixing is an idea which isn’t too complex informally, but which can take a lot of effort to describe and explain formally. The basic idea of it is that no matter where you start in a space, given enough time, you can wind up anywhere at all.

To get that notion formally, you need to look at the phase space of the system. You can define a dynamical system using a topological space called the phase space of the system. Speaking very loosely, the phase space P of a system is a topological space of points where each point p∈P corresponds to one possible state of the system, and the topological neighborhoods of p are the system states reachable on some path from p.

So – imagine that you have a neighborhood of points, G, in a phase space. From each point in G, you traverse all possible forward paths through the phase space. At any given moment t, G will have evolved to form a new neighborhood of points, G_t. For the phase space to be chaotic, it has to have the property that for any arbitrary pair of neighborhoods G and H in the space, no matter how small they are, no matter how far apart they are, there will be a time t such that G_t and H_t will overlap.
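
Here’s a rough numerical illustration of that property, again using the logistic map from the earlier sketch as the system – my own toy example, not a proof. Start with a very small neighborhood of states and evolve every point in it forward; after a few dozen steps, the evolved neighborhood has smeared out over essentially the whole interval, so it overlaps any other neighborhood you could have picked.

```python
# A toy illustration of topological mixing with the logistic map.
# G starts as a tiny interval of states (width 0.0001); iterating every
# point forward, the evolved set spreads across nearly all of (0, 1).

def logistic(x):
    return 4.0 * x * (1.0 - x)

G = [0.33 + i * 1e-7 for i in range(1000)]   # a very small neighborhood

for t in range(41):
    if t % 10 == 0:
        print(f"t={t:2d}: evolved neighborhood spans [{min(G):.4f}, {max(G):.4f}]")
    G = [logistic(x) for x in G]
```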

But sensitivity to initial conditions and topological mixing together still aren’t sufficient to define chaos.

Chaotic systems must also have a property called dense periodic orbits. What that means is that chaotic systems are approximately cyclic in a particular way. The phase space has the property that if the system passes through a point p, then after some finite period of time, the system will pass through another point arbitrarily close to p. That’s not to say that it will repeat exactly: if it did, then you would have a repeating system, which would not be chaotic! But it will come arbitrarily close to repeating. And that almost-repetition has to have a specific property: the union of the set of all of those almost-cyclic paths must be equivalent to the entire phase space itself. (We say, again speaking very loosely, that the set of almost-cycles is dense in the phase space.)

That’s complicated stuff. Don’t worry if you don’t understand it yet. It’ll take a lot of posts to even get close to making that comprehensible. But that’s what chaos really is: a dynamical system with all three properties: sensitivity to initial conditions, overlaps in neighborhood evolution, and dense periodic orbits.

In subsequent posts, I’ll spend some more time talking about each of the three key properties, and showing you examples of interesting chaotic systems.

Finger Tree Update: I forgot something

As an alert commenter pointed out, I left out one really important thing in
my earlier post about finger trees. That’s what I get for trying to write when
I’m sick :-). Seriously, this is a point that’s implied by the post as it stands, but never explicitly stated – and since it’s really important, it
should be stated explicitly.

The monoid-annotated tree structure doesn’t replace the original
data structure: it’s superimposed on it.

So, as I said: cons-cell style lists are ubiquitous and beloved by
functional and non-functional programmers. In finger trees, you’re
not getting rid of them. The point of finger trees is to let
you keep the convenient data structure, with its basic operations
and properties intact, and to augment it with a tree that lets you search it efficiently.

To illustrate: here’s the number list example from yesterday’s post. The
original list is at the bottom, with green pointers representing the
original cons-list next-pointers. The monoid-annotated tree is on top,
with red pointers. The combination of the original list with the
monoid annotated tree is a finger tree.

[Image: the cons list (green next-pointers) with the monoid-annotated tree (red pointers) superimposed on it]

The point of this is that you’ve still got your cons list. All of the
beautiful recursive/iterative algorithms that walk the list continue to
work exactly the way that they would in the traditional cons-list: for code that walks the list, the fact that there are finger-tree pointers
sitting on top of the list is irrelevant – and, in fact, completely invisible. For algorithms that want to search the list, the tree structure is there,
and allows the searches to be performed much more quickly than they could be on
the traditional list. The superposition of those two structures is the
genius of the finger tree.
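
To make the superposition concrete, here’s a small sketch in Python – my own simplified illustration of the annotation idea, using element counts as the monoid, and not the actual finger tree structure from the papers. The cons cells are left completely untouched; the tree on top just points at them and records sizes, so that positional lookup can skip over whole chunks of the list.

```python
# A simplified sketch of the superposition idea: keep the cons list exactly
# as it is, and build a tree on top whose nodes are annotated with a monoid
# value -- here, the count of elements underneath -- so that positional
# lookup can skip over whole chunks of the list.

class Cons:
    def __init__(self, head, tail=None):
        self.head, self.tail = head, tail

class Node:
    """An internal tree node annotated with the size of its subtree."""
    def __init__(self, left, right):
        self.left, self.right = left, right
        self.size = left.size + right.size

class Leaf:
    """A leaf pointing at one existing cons cell; it does not copy it."""
    def __init__(self, cell):
        self.cell = cell
        self.size = 1

def build_index(cells):
    """Build the size-annotated tree over the existing cons cells."""
    level = [Leaf(c) for c in cells]
    while len(level) > 1:
        level = [Node(level[i], level[i + 1]) if i + 1 < len(level) else level[i]
                 for i in range(0, len(level), 2)]
    return level[0]

def lookup(tree, i):
    """Find the i-th cons cell in O(log n) by following size annotations."""
    while isinstance(tree, Node):
        if i < tree.left.size:
            tree = tree.left
        else:
            i -= tree.left.size
            tree = tree.right
    return tree.cell

# build an ordinary cons list 0..9, then the index tree on top of it
cells = [Cons(i) for i in range(10)]
for a, b in zip(cells, cells[1:]):
    a.tail = b
root = build_index(cells)
print(lookup(root, 7).head)   # 7, without chasing 7 next-pointers
```

Code that only cares about the list still just follows the `tail` pointers; the index tree is invisible to it, which is the whole point.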

Finally: Finger Trees!

For ages, I’ve been promising to write about finger trees. Finger trees are
an incredibly elegant and simple structure for implementing sequence-based
data structures. They’re primarily used in functional languages, but there’s nothing
stopping an imperative-language programmer from using them as well.

In functional programming languages, lists are incredibly common. They show up everywhere; they’re just so easy to use for recursive iteration that they’re ubiquitous. I can’t think of
any non-trivial program that I’ve written in Haskell, Lisp, or OCaml that doesn’t use lists. (Heck, I’ve wound up using cons-cell libraries a couple of times in C++.)

Of course, lists aren’t perfect. In fact, for some things, they absolutely suck. Suppose I’ve got a really long list – like a list of names and phone numbers from a telephone book. My local phone book has 600 pages of names; and the pages are incredibly dense – there’ve got to be at least a couple of hundred names per page. So in a phone book for a suburban region of New York City, the list of phone numbers will have 120,000 names. Now imagine that I want to look
up one name in that list – on average, I’m going to have to chase through 60,000 pointers. 60,000 isn’t a huge number for a computer, but still, chasing 60,000 pointers is a pretty big deal. If I stored them in an array, it would be a lot less convenient for some tasks, but I could use
binary search to find names, and I could find any name in no more than 17 comparisons – and the whole shebang would be contiguous, so I wouldn’t even be chasing pointers.
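
As a quick back-of-the-envelope check on those numbers, using the 120,000-name figure from above:

```python
# Rough cost comparison for the 120,000-entry phone book example.
import math

n = 120_000
print(n // 2)                        # ~60,000 next-pointers chased, on average, in a cons list
print(math.ceil(math.log2(n + 1)))   # 17 comparisons, worst case, for binary search on an array
```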

What finger trees do is give me a way of representing a list that has both the convenience
of the traditional cons list, and the search efficiency of the array based method.

Continue reading

Cloud Computing

In general, I try to keep the content of this blog away from my work. I don’t do
that because it would get me in trouble, but rather because I spend enough time on work, and blogging is my hobby. But sometimes there’s an overlap.

One thing that’s come up in a lot of conversations and a lot of emails is the idea of cloud computing. A lot of people are interested in it, but they’re not really sure of what it is, or what it means.

So what do we mean when we talk about “cloud computing”? What’s the cloud? How’s it different from good old-fashioned client/server computing?

Continue reading