Category Archives: Good Math

Types in Haskell: Types are Propositions, Programs are Proofs

(This is a revised repost of an earlier part of my Haskell tutorial.)

Haskell is a strongly typed language. In fact, the type system in Haskell
is both stricter and more expressive than any type system I’ve seen for any
non-functional language. The moment we get beyond writing trivial
integer-based functions, the type system inevitably becomes visible, so we
need to take the time now to talk about it a little bit, in order to
understand how it works.
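
(To give a tiny taste of what the title means – this is a sketch of my own,
not code from the tutorial itself – read a type as a logical proposition,
and read a program with that type as a proof of that proposition:)

    -- Reading "->" as implication, this type is the proposition
    -- "((A implies B) and A) implies B"; the function body is its proof.
    modusPonens :: (a -> b) -> a -> b
    modusPonens f x = f x

    -- Pairs act as logical "and": from a proof of (A and B), we can prove A.
    fstProof :: (a, b) -> a
    fstProof (x, _) = x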

Continue reading

Writing Basic Functions in Haskell (edited repost)

(This is a heavily edited repost of the first article in my original
Haskell tutorial.)

(I’ve attempted to write this as a literate Haskell program. What that
means is that if you just cut-and-paste the text of this post from your
browser into a file whose name ends with “.lhs”, you should be able to run it
through a Haskell compiler: only lines that start with “>” are treated as
code. The nice thing about this is that this blog post is itself a
compilable, loadable Haskell source file – so I’ve compiled and tested
all of the code in here in exactly this context.)
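
(For example, here is a tiny literate Haskell file of my own devising, just
to illustrate the format; the module name and file name are made up. Only
the lines that start with “>” are compiled, and everything else is treated
as commentary:)

    This sentence is ordinary commentary, and the compiler ignores it.

    > module Double where
    >
    > double :: Int -> Int
    > double n = 2 * n

Saved as “Double.lhs”, that whole block – commentary and all – loads in GHCi
with :load Double.lhs, and double 21 evaluates to 42.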

Continue reading

The Go I Forgot: Concurrency and Go-Routines

A couple of people pointed out that in my Wednesday post about Go, I completely left out the concurrency stuff! That’s what I get for rushing the post – I managed to leave out one of the most interesting subjects! Go provides very strong support for communicating processes.

I haven’t done a lot of hacking with the concurrency stuff yet – so my impressions of it are still very preliminary. But my early impressions are very good.

Continue reading

Google's New Language: Go

I’ve been peppered with questions about Go, the new programming language just released as open-source by Google. Yes, I know about it. And yes, I’ve used it. And yes, I’ve got some strong opinions about it.

Go is an interesting language. I think that there are many fantastic things about it. I also think that there are some really dreadful things about it.

A warning before I go on: this post is definitely a bit of a rush job. I wanted to get something out before my mailbox explodes :-). I’ll probably try to do a couple of more polished posts about Go later. But this should give you a first taste.

Continue reading

Philosophizing about Programming; or "Why I'm learning to love functional programming"

Way back, about three years ago, I started writing a Haskell tutorial as a series of posts on this blog. After getting to monads, I moved on to other things. But based on some recent philosophizing, I think I’m going to come back to it. I’ll start by explaining why, and then over the next few days, I’ll re-run revised versions of old tutorial posts, and then start new material dealing with the more advanced topics that I didn’t get to before.

To start with, why am I coming back to Haskell? What changed since the last time I wrote about it?

Continue reading

Orbits, Periodic Orbits, and Dense Orbits – Oh My!

Another one of the fundamental properties of a chaotic system is dense periodic orbits. It’s a bit of an odd one: a chaotic system doesn’t have to have periodic orbits at all. But if it does, then they have to be dense.

The dense periodic orbit rule is, in many ways, very similar to sensitivity to initial conditions. But personally, I find it a rather more interesting way of describing the key concept. The idea is an odd one: a dense periodic orbit is a repeating system, which will cycle through the same behavior, over and over again. But when you look at a state of the system, you can’t tell which fixed path it’s on. In fact, minuscule differences in the position, differences so small that you can’t measure them, can put you onto dramatically different paths. There’s the similarity with the initial conditions rule: you’ve got the same basic idea of tiny changes producing dramatic results.

In order to understand this, we need to step back, and look at some basics: what’s an orbit? What’s a periodic orbit? And what are dense orbits?

To begin with, what’s an orbit?

If you’ve got a dynamical system, you can usually identify certain patterns in it. In fact, you can (at least in theory) take its phase space and partition it into a collection of sub-spaces which have the property that if at any point in time, the system is in a state in one partition, it will never enter a state in any other partition. Those partitions are called orbits.

Looking at that naively, with the background that most of us have associated with the word “orbit”, you’re probably thinking of orbits as being something very much like planetary orbits. And that’s not entirely a bad connection to make: planetary orbits are orbits in the dynamical system sense. But an orbit in a dynamical system is more like the real orbits that the planets follow than like the idealized ellipses that we usually think of. Planets don’t really travel around the sun in smooth elliptical paths – they wobble. They’re pulled a little bit this way, a little bit that way by their own moons, and by other bodies also orbiting the sun. In a complex gravitational system like the solar system, the orbits are complex paths. They might never repeat – but they’re still orbits: a state where Jupiter was orbiting 25% closer to the sun than it is now would never be on an orbital path that intersects with the current state of the solar system. The intuitive notion of “orbit” is closer to what dynamical systems call a periodic orbit: that is, an orbit that repeats its path.

A periodic orbit is an orbit that repeats over time. That is, if the system is described as a function f(t), then a periodic orbit is a set of points Q where ∃Δt : ∀q∈Q: if f(t)=q, then f(t+Δt)=q.
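
For a discrete-time system, it’s easy to play with these definitions directly. Here is a small sketch of my own (not anything from a real dynamics library): it builds the forward orbit of a state under a one-step update function, and checks whether that state lies on a periodic orbit by looking for a return to where it started.

    -- The forward orbit of a state x under a one-step update function f:
    -- the sequence x, f x, f (f x), ...
    orbit :: (a -> a) -> a -> [a]
    orbit f x = iterate f x

    -- A state is on a periodic orbit if some later step returns exactly to it.
    -- We only search a bounded number of steps, so this is a sketch, not a
    -- decision procedure.
    isPeriodic :: Eq a => Int -> (a -> a) -> a -> Bool
    isPeriodic maxSteps f x = x `elem` take maxSteps (drop 1 (orbit f x))

    -- Example: stepping around a four-state cycle is periodic:
    -- isPeriodic 10 (\n -> (n + 1) `mod` 4) 0  ==>  True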

[Figure: the periodic orbit of a pendulum in phase space]

Lots of non-chaotic things have periodic orbits. A really simple dynamical system with a periodic orbit is a pendulum. It’s got a period, and it loops round and round through a fixed cycle of states from its phase space. You can see it as something very much like a planetary orbit, as shown in the figure to the right.

On the other hand, in general, the real orbits of the planets in the solar system are not periodic. The solar system never passes through exactly the same state twice. There’s no point in time at which everything will be exactly the same.

But the solar system (and, I think, most chaotic systems) is, if not periodic, then nearly periodic. The exact same state will never occur twice – but it will come arbitrarily close. You have a system of orbits that look almost periodic.

But then you get to the density issues. A dynamical system with dense orbits is one where you have lots of different orbits which are all closely tangled up. Making even the tiniest change in the state of the system will shift the system into an entirely different orbit, one which may be dramatically different.

Again, think of a pendulum. In a typical pendulum, if you give the pendulum a little nudge, you’ve changed its swing: you either increased or decreased the amplitude of its swing. If it were an ideal pendulum, your tiny nudge would permanently change the orbit: even the tiniest perturbation creates a permanent change. But it’s not a particularly dramatic change.

On the other hand, think of a system of planetary orbits. Give one of the planets a nudge. It might do almost nothing. Or it might result in a total breakdown of the stability of the system. There’s a very small difference between a path where a satellite is captured into gravitational orbit around a large body, and a path where the satellite is ejected in a slingshot.

Or for another example, think of a damped driven pendulum. That’s one of the classic examples of a chaotic system. It’s a pendulum that has some force that acts to reduce the swing when it gets too high; and it’s got another force that ensures that it keeps swinging. Under the right conditions, you can get very unpredictable behavior. The damped driven pendulum produces a set of orbits that really demonstrate this, as shown to the right. Tiny changes in the state of the pendulum put you in different parts of the phase space very quickly.

[Figure: a damped driven chaotic pendulum showing double-period behavior]
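
To get a feel for it, here is a minimal numerical sketch of my own (the damping, drive strength, drive frequency, and time step below are values I picked for illustration, not anything taken from the figure). It steps the damped, driven pendulum equation theta'' = -sin(theta) - b*theta' + a*cos(w*t) forward with a naive fixed-step Euler integrator and prints the first few (angle, velocity) states.

    -- Damped driven pendulum: theta'' = -sin(theta) - b*theta' + a*cos(w*t),
    -- integrated with a crude fixed-step Euler scheme.
    type State = (Double, Double)   -- (angle, angular velocity)

    stepPendulum :: Double -> Double -> Double -> Double -> Double -> State -> State
    stepPendulum b a w dt t (theta, omega) =
      ( theta + dt * omega
      , omega + dt * (negate (sin theta) - b * omega + a * cos (w * t)) )

    -- The trajectory, sampled at times 0, dt, 2*dt, ...
    trajectory :: Double -> Double -> Double -> Double -> State -> [State]
    trajectory b a w dt = go 0
      where
        go t s = s : go (t + dt) (stepPendulum b a w dt t s)

    main :: IO ()
    main = mapM_ print (take 20 (trajectory 0.5 1.2 (2 / 3) 0.01 (0.2, 0.0)))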

In terms of chaos, you can think of the orbits as organized around an attractor. Remember, an attractor is like a black hole in the phase space of a system, surrounded by a basin. Within the basin, you’re basically trapped: you’ll circle around the attractor forever, unable to escape, inevitably trapped in a system of periodic or nearly periodic orbits. But even the tiniest change can push you into an entirely different orbit, because the orbits are densely tangled up around the attractor.

Chaos and Initial Conditions

One thing that I wanted to do when writing about Chaos is take a bit of time to really home in on each of the basic properties of chaos, and take a more detailed look at what they mean.

To refresh your memory, for a dynamical system to be chaotic, it needs to have three basic properties:

  1. Sensitivity to initial conditions,
  2. Dense periodic orbits, and
  3. Topological mixing.

The phrase “sensitivity to initial conditions” is actually a fairly poor description of what we really want to say about chaotic systems. Lots of things are sensitive to initial conditions, but are definitely not chaotic.

Before I get into it, I want to explain why I’m obsessing over this condition. It is, in many ways, the least important condition of chaos! But here I am obsessing over it.

As I said in the first post in the series, it’s the most widely known property of chaos. But I hate the way that it’s usually described. It’s just wrong. What chaos means by sensitivity to initial conditions is really quite different from the more general concept of sensitivity to initial conditions.

To illustrate, I need to get a bit formal, and really define “sensitivity to initial conditions”.

To start, we’ve got a dynamical system, which we’ll call f. To give us a way of talking about “differences”, we’ll establish a measure on f. Without going into full detail, a measure is a function M(x) which maps each point x in the phase space of f to a real number, and which has the property that points that are close together in f have measure values which are close together.

Given two points x and y in the phase space of f, the distance between those points is the absolute value of the difference of their measures, |M(x) - M(y)|.

So, we’ve got our dynamical system, with a measure over it for defining distances. One more bit of notation, and we’ll be ready to get to the important part. When we start our system f at an initial point x, we’ll write it f_x.

What sensitivity to initial conditions means is that no matter how close together two initial points x and y are, if you run the system for long enough starting at each point, the results will be separated by as large a value as you want. Phrased informally, that’s actually confusing; but when you formalize it, it actually gets simpler to understand:

Take the system f with measure M. Then f is sensitive to initial conditions if and only if:

  • Select any two distinct points x and y in the phase space.
  • Let diff(t) = |M(f_x(t)) - M(f_y(t))|. (That is, let diff(t) be the distance between f_x and f_y at time t.)
  • ∀G, ∃T: diff(T) > G. (No matter what value you choose for G, at some point in time T, diff(T) will be larger than G.)

Now – reading that, a naive understanding would be that diff(T) increases monotonically as T increases – that is, that for any two times t_i and t_j with t_i < t_j, diff(t_i) ≤ diff(t_j). But that is not what sensitivity to initial conditions means in a chaotic system. To see the difference, let’s compare two systems. For our chaotic system, we’ll use the logistic curve, f(t) = R×f(t-1)×(1-f(t-1)), with measure M(f(t)) = 1/f(t). And for our non-chaotic system, we’ll use g(t) = g(t-1)^2, with M(g(t)) = g(t).

Think about arbitrarily small differences in starting values. In the quadratic system, even if you start off with a minuscule difference – starting at v0=1.00001 and v1=1.00002 – you’ll get a divergence. They’ll start off very close together – after 10 steps, they only differ by about 0.01. But they rapidly start to diverge. After 15 steps, they differ by about 0.5. By 16 steps, they differ by about 1.8; by 20 steps, they differ by about 1.2×10^9! That’s clearly a huge sensitivity to initial conditions – an initial difference of 1×10^-5, and in just 20 steps, their difference is measured in billions. Pick any arbitrarily large number that you want, and if you scan far enough out, you’ll get a difference bigger than it. But there’s nothing chaotic about it – it’s just an incredibly rapidly growing curve!

In contrast, the logistic curve is amazing. Look far enough out, and you can find a point in time where the difference in measure between starting at 0.00001 and 0.00002 is as large as you could possibly want; but also, look far enough out past that divergence point, and you’ll find a point in time where the difference is as small as you could possibly want! The measure values of systems starting at x and y will sometimes be close together, and sometimes far apart. They’ll continually vary – sometimes getting closer together, sometimes getting farther apart. At some point in time, they’ll be arbitrarily far apart. At other times, they’ll be arbitrarily close together.
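
Here is a small sketch of that comparison (my own code, not the post’s; I’ve picked R = 3.9 for the logistic system, a value in its chaotic regime, since no particular value is fixed above). It computes diff(t) for the quadratic system started at 1.00001 and 1.00002, and for the logistic system started at 0.00001 and 0.00002, using the measures given above.

    -- Non-chaotic comparison system: g(t) = g(t-1)^2, with measure M(x) = x.
    quadStep :: Double -> Double
    quadStep x = x * x

    -- Logistic system f(t) = r * f(t-1) * (1 - f(t-1)), with measure M(x) = 1/x.
    -- r = 3.9 is an assumed value in the chaotic regime.
    logisticStep :: Double -> Double
    logisticStep x = 3.9 * x * (1 - x)

    -- diff(t): the distance, after t steps, between the trajectories
    -- started at x0 and y0, under a given measure.
    diffAt :: (Double -> Double) -> (Double -> Double) -> Double -> Double -> Int -> Double
    diffAt step measure x0 y0 t =
      abs (measure (iterate step x0 !! t) - measure (iterate step y0 !! t))

    main :: IO ()
    main = do
      -- The quadratic system just runs away: diff(t) grows and keeps growing.
      mapM_ (print . diffAt quadStep id 1.00001 1.00002) [10, 15, 16, 20]
      -- The logistic system wanders: diff(t) is sometimes huge, sometimes tiny.
      mapM_ (print . diffAt logisticStep (1 /) 0.00001 0.00002) [10 .. 30]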

That’s a major hallmark of chaos. It’s not just that given arbitrarily close together starting points, they’ll eventually be far apart. That’s not chaotic. It’s that they’ll be far apart at some times, and close together at other times.

Chaos encompasses the so-called butterfly effect: a butterfly flapping its wings in the Amazon could cause an ice age a thousand years later. But it also encompasses the sterile elephant effect: a herd of a million rampaging giant elephants crushing a forest could end up having virtually no effect at all a thousand years later.

That’s the fascination of chaotic systems. They’re completely deterministic, and yet completely unpredictable. What makes them so amazing is how they’re a combination of incredible simplicity and incredible complexity. How many systems can you think of that are really much simpler to define than the logistic map? But how many have outcomes that are harder to predict?

Chaos: Bifurcation and Predictable Unpredictability

[Figure: bifurcation diagram of the logistic map, from Wikipedia]

Let’s look at one of the classic chaos examples, which demonstrates just how simple a chaotic system can be. It really doesn’t take much at all to push a system from being nice and smoothly predictable to being completely crazy.

This example comes from mathematical biology, and it generates a graph commonly known as the logistic map. The question behind the graph is, how can I predict what the stable population of a particular species will be over time?

If there was an unlimited amount of food, and there were no predators, then it would be pretty easy. You’d have a pretty straightforward exponential growth curve. You’d have a constant, R, which is the growth rate. R would be determined by two factors: the rate of reproduction, and the rate of death from old age. With that number, you could put together a simple exponential curve – and presto, you’d have an accurate description of the population over time.

But reality isn’t that simple. There’s a finite amount of resources – that is, a finite amount of food for your population to consume. So there’s a maximum number of individuals that could possibly survive – if you get more than that, some will die until the population shrinks below that maximum threshold. Plus, there are factors like predators and disease, which reduce the available population of reproducing individuals. The growth rate only considers “How many children will be generated per member of the population?”; predators cull the population, which effectively reduces the growth rate. But it’s not a straightforward relationship: the number of individuals that will be consumed by predators and disease is related to the size of the population!

Modeling this reasonably well turns out to be really simple. You take the maximum population based on resources, Pmax. You then describe the population at any given point in time as a population ratio: a fraction of Pmax. So if your environment could sustain one million individuals, and the population is really 500,000, then you’d describe the population ratio as 1/2.

Now, you can describe the population at time T with a recurrence relation:

P(t+1)= R × P(t) × (1-P(t))

That simple equation isn’t perfect, but its results are impressively close to accurate. It’s good enough to be very useful for studying population growth.
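
As a quick sketch (mine, not the post’s), iterating that recurrence is a one-liner; with R = 2.5 and an initial population ratio of 0.1, the ratio settles toward a single stable value:

    -- Population ratio over time under P(t+1) = r * P(t) * (1 - P(t)).
    populations :: Double -> Double -> [Double]
    populations r p0 = iterate (\p -> r * p * (1 - p)) p0

    -- take 20 (populations 2.5 0.1) settles toward 0.6, i.e. 1 - 1/r.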

So, what happens when you look at the behavior of that function as you vary R? You find that below a certain threshold value, the population falls to zero. Cross that threshold, and you get a nice increasing curve, which is roughly what you’d expect. Up until you hit R=3. Then it splits, and you get an oscillation between two different values. If you keep increasing R, it will split again – your population will oscillate between 4 different values. A bit farther, and it will split again, to eight values. And then things start getting really wacky – because the curves converge on one another, and even start to overlap: you’ve reached chaos territory. At that point, the graph of the function becomes a black blur, and things become almost completely unpredictable. It looks like the beautiful diagram at the top of this post that I copied from Wikipedia (it’s much more detailed than anything I could create on my own).
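
Here is a rough way to watch that happen (my own sketch, not the post’s code): run the recurrence long enough for transients to die out, then look at the distinct values the population keeps cycling through. At R = 2.8 there is a single value; at R = 3.2 there are two; at R = 3.5, four; and by R = 3.9 the “cycle” has smeared out into a chaotic blur of values.

    import Data.List (nub)

    logistic :: Double -> Double -> Double
    logistic r p = r * p * (1 - p)

    -- Discard a long transient, then collect the values (rounded to four
    -- decimal places) that the settled orbit keeps revisiting.
    settledCycle :: Double -> [Double]
    settledCycle r = nub (map roundTo (take 64 (drop 1000 (iterate (logistic r) 0.5))))
      where roundTo x = fromIntegral (round (x * 10000) :: Integer) / 10000

    main :: IO ()
    main = mapM_ (\r -> print (r, settledCycle r)) [2.8, 3.2, 3.5, 3.9]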

But here’s where it gets really amazing.

Take a look at that graph. You can see that it looks fractal. With a graph like that, we can look for something called a self-similarity scaling factor. The idea of an SS-scaling factor is that we’ve got a system with strong self-similarity. If we scale the graph up or down, what’s the scaling factor at which a scaled version of the graph will exactly overlap with the un-scaled graph?

For this population curve, the SSSF turns out to be about 4.669.

What’s the SSSF for the Mandelbrot set? 4.669.

In fact, the SSSF for nearly all bifurcating systems that we see, and their related fractals, is virtually always exactly 4.669. There’s a basic structure which underlies all systems of this sort.

What’s this sort? Basically, it’s a dynamical system with a quadratic maximum. In other words, if you look at the recurrence relation for the dynamical system, it’s got a quadratic factor, and it’s got a maximum value. The equation for our population system can be written P(t+1) = R×P(t) - R×P(t)^2, which is obviously quadratic; and it will always produce a value between zero and one, so it’s got a fixed maximum value. Pick any chaotic dynamical system with a quadratic maximum, and you’ll find this constant in it. Any dynamical system with those properties will have a recurrence structure with a scaling factor of 4.669.

That number, 4.669, is called the Feigenbaum constant, after Mitchell Feigenbaum, who first discovered it. Most people believe that it’s a transcendental number, but no one is sure! We’re not really sure of quite where the number comes from, which makes it difficult to determine whether or not it’s really transcendental!

But it’s damned useful. By knowing that a system is subject to period-doubling at a rate determined by Feigenbaum’s constant, we know exactly when that system will become chaotic. We don’t need to continue to observe it as it scales up to see when the system will go chaotic – we can predict exactly when it will happen just by virtue of the structure of the system. Feigenbaum’s constant predictably tells us when a system will become unpredictable.
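
As a rough numerical check (my own sketch; the bifurcation points below are the standard published approximations for the logistic map, not something computed in this post), the ratios of the gaps between successive period-doubling values of R already close in on 4.669:

    -- R values at which the logistic map's attractor doubles its period
    -- (1 -> 2 -> 4 -> 8 -> 16), using standard published approximations.
    bifurcations :: [Double]
    bifurcations = [3.0, 3.449490, 3.544090, 3.564407, 3.568759]

    -- Ratio of each bifurcation gap to the next; these approach the
    -- Feigenbaum constant, roughly 4.669.
    feigenbaumRatios :: [Double]
    feigenbaumRatios = zipWith (/) gaps (drop 1 gaps)
      where gaps = zipWith (-) (drop 1 bifurcations) bifurcations

    -- feigenbaumRatios  ==>  approximately [4.75, 4.66, 4.67]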

Sorry, Denise – but God didn't make numbers

I was planning on ignoring this one, but tons of readers have been writing
to me about the latest inanity spouting from the keyboard of Discovery
Institute’s flunky, Denise O’Leary.

Here’s what she had to say:

Even though I am not a creationist by any reasonable definition,
I sometimes get pegged as the local gap tooth creationist moron. (But then I
don’t have gaps in my teeth either. Check unretouched photos.)

As the best gap tooth they could come up with, a local TV station interviewed
me about “superstition” the other day.

The issue turned out to be superstition related to numbers. Were they hoping
I’d fall in?

The skinny: Some local people want their house numbers changed because they
feel the current number assignment is “unlucky.”

Look, guys, numbers here are assigned on a strict directional rota. If the
number bugs you so much, move.

Don’t mess up the street directory for everyone else. Paramedics, fire chiefs,
police chiefs, et cetera, might need a directory they can make sense of. You
might be glad for that yourself one day.

Anyway, I didn’t get a chance to say this on the program so I will now: No
numbers are evil or unlucky. All numbers are – in my view – created by God to
march in a strict series or else a discoverable* series, and that is what
makes mathematics possible. And mathematics is evidence for design, not
superstition.

The interview may never have aired. I tend to flub the gap-tooth creationist
moron role, so interviews with me are often not aired.

* I am thinking here of numbers like pi, that just go on and on and never
shut up, but you can work with them anyway.(You just decide where you want
to cut the mike.)

Continue reading

Two Dimensional Pathological Beauty: SNUSP

I’m currently away on a family vacation, and as soon as vacation is over, I’m off on a business trip for a week. And along the way, I’ve got some deadlines for my book. So to fill in, I’m recycling some old posts. I decided that it’s been entirely too long since there was any pathological programming ’round these parts, so I’m going to repost some of my favorites.

Today’s programming language insanity is a real delight – it’s one of my
all-time favorites. It’s a language called SNUSP. You can find the language
specification here, along with a compiler and an interpreter embedded in a
web page. It’s sort of like a cross between Befunge and Brainfuck, except
that it also allows subroutines. (And in a variant, threads!) The real
beauty of SNUSP is exactly that: its beauty. Programs in SNUSP are actually
really quite pretty, and watching them run can be positively entrancing.

Continue reading