Maximum Flow and Minimum Cut

I got started on this series about graph theory by writing a post debunking something foolish that George Gilder said about network theory. Today I’m going to talk about one of the classic problems in graph theory, which happens to be a network problem. It’s called the maximum flow problem.

The idea is that you have a weighted graph modeling a bunch of places connected by pipes of different sizes. The sizes of the pipes are represented by the weight labels on the edges, which are called capacities. Pick two nodes in the graph, called the source and the sink: how much can you pump from source to sink without exceeding the capacities of the pipes? That value is called the maximum flow of the graph.
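Just to make the problem concrete, here's a rough Python sketch of one classic approach – the Edmonds-Karp flavor of augmenting paths (my illustration, not code from the post): repeatedly find a shortest source-to-sink path with leftover capacity, and push as much flow through it as it allows.

```python
from collections import defaultdict, deque

def max_flow(edges, source, sink):
    """Edmonds-Karp sketch: repeatedly find a shortest augmenting path
    with BFS and push the bottleneck capacity along it."""
    # residual[u][v] is the remaining capacity of the edge u -> v
    residual = defaultdict(lambda: defaultdict(int))
    for u, v, cap in edges:
        residual[u][v] += cap
    total = 0
    while True:
        # BFS for a shortest path from source to sink in the residual graph
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return total  # no augmenting path left: the flow is maximal
        # find the bottleneck: the smallest remaining capacity on the path
        bottleneck, v = float("inf"), sink
        while parent[v] is not None:
            bottleneck = min(bottleneck, residual[parent[v]][v])
            v = parent[v]
        # push the flow: shrink forward edges, grow reverse edges
        v = sink
        while parent[v] is not None:
            u = parent[v]
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
            v = u
        total += bottleneck

# Two routes from s to t, through a and b:
print(max_flow([("s", "a", 3), ("s", "b", 2), ("a", "t", 2), ("b", "t", 2)], "s", "t"))  # 4
```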

Continue reading

The Sierpinski Gasket by Affine

[Image: sier-shear.png]

So, in my last post, I promised to explain how the Sierpinski triangle is an attractor for the chaos game. It’s actually pretty simple. First, though, we’ll introduce the idea of an affine transformation. Affine transformations aren’t strictly necessary for understanding the chaos game, but understanding the chaos game in terms of affine transformations makes it easier to understand other attractors.
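Since the chaos game is what ties all of this together, here’s a quick Python sketch of it (my illustration – the post itself develops the affine-map view):

```python
import random

# The Sierpinski IFS: each map shrinks the plane by half toward one corner
# of an equilateral triangle.
CORNERS = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.866)]

def chaos_game(steps=50_000):
    """From an arbitrary starting point, repeatedly jump halfway toward a
    randomly chosen corner; the visited points pile up on the attractor."""
    x, y = random.random(), random.random()
    points = []
    for i in range(steps):
        cx, cy = random.choice(CORNERS)
        x, y = (x + cx) / 2, (y + cy) / 2  # one affine map: scale 1/2 toward a corner
        if i > 20:  # discard the first few points, before they settle onto the triangle
            points.append((x, y))
    return points  # plot these, and you'll see the Sierpinski triangle
```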

Continue reading

Howling at the Moon: Bad Gravitational Math

[Image: gray wolf howling at the moon]

There’s one piece of bad math that I’ve encountered relatively frequently in conversations. It’s incredibly frustrating to me, because it’s just so crazy – but the way we teach math and physics, far too many people just don’t have enough of a clue to see how foolish it really is.

This comes up in conversations with lay-people whenever a new space probe is launched. It’s generally presented in the form of a question, something like: “That TV announcer said something about a point between the earth and the moon where gravity cancels, so there’s no gravitational pull towards either the earth or the moon. How can the moon cause tides if its gravity is cancelled all the way out there?”

I’ve never found a form of this that was sufficiently mockable – in general, people who ask the question know that there’s something wrong with it; they know that it’s stupid, but they don’t know why. I don’t like to make fun of that: people who ask a question because they know that they’re ignorant about something, and they’re trying to fix that patch of ignorance – they don’t deserve to be mocked. So I’ve avoided this. Until now: I’ve found the perfect mockable presentation of this problem. And wait till you see the wonderfully insane form I found it in!
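For what it’s worth, the balance point itself is easy to compute, and so is the reason it’s irrelevant to tides. A rough back-of-the-envelope sketch (my arithmetic, using the standard mass ratio M_m/M_e ≈ 0.0123):

```latex
% The pulls cancel at distance d from the Earth, with D the Earth-Moon distance:
\frac{G M_e}{d^2} = \frac{G M_m}{(D - d)^2}
\quad\Longrightarrow\quad
\frac{d}{D} = \frac{1}{1 + \sqrt{M_m / M_e}} \approx \frac{1}{1 + 0.111} \approx 0.90
```

So the cancellation point sits about 90% of the way to the moon – and it’s a single point, not some wall that gravity stops at. More to the point, tides don’t depend on the net pull at all: they come from the difference in the moon’s pull across the earth (a tidal gradient of roughly 2GM_mR/D³, where R is the earth’s radius), and that difference is nonzero everywhere.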

Continue reading

Iterated Function Systems and Attractors

Most of the fractals that I’ve written about so far – including all of the L-system fractals – are
examples of something called iterated function systems. Speaking informally, an iterated function
system is one where you have a transformation function which you apply repeatedly. Most iterated
function systems work in a contracting mode, where the function is repeatedly applied to smaller
and smaller pieces of the original set.

There’s another very interesting way of describing these fractals, and I find it very surprising
that it’s equivalent. It’s the idea of an attractor. An attractor is a shape that a dynamical system, no matter what its starting point, will always evolve towards. Even if you perturb the dynamical system, up to some point, the perturbation will fade away over time, and the system will continue to evolve toward the same target.
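One way to see what “evolving toward a shape” means is a deterministic sketch (mine, using the familiar three-map Sierpinski system as the example): apply every map of an iterated function system to the whole current point set, over and over. Whatever set you start from, the iteration converges to the same attractor.

```python
def iterate_ifs(maps, points, rounds):
    """Apply every map of the IFS to every current point, repeatedly.
    maps: functions (x, y) -> (x, y); points: any starting set."""
    for _ in range(rounds):
        points = {f(x, y) for (x, y) in points for f in maps}
    return points

# Three half-scale contractions toward the corners of a triangle:
sierpinski = [
    lambda x, y: (x / 2, y / 2),
    lambda x, y: (x / 2 + 0.5, y / 2),
    lambda x, y: (x / 2 + 0.25, y / 2 + 0.433),
]
# Starting from a single arbitrary point, 8 rounds yield 3^8 = 6561 points
# lying (approximately) on the Sierpinski triangle.
attractor = iterate_ifs(sierpinski, {(0.3, 0.7)}, rounds=8)
```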

Continue reading

Geometric L-systems

[Image: T-Square fractal, 8 iterations]

As I alluded to yesterday, there’s an analogue of L-systems for things more complicated than curves. In fact, there are a variety of them. I’m going to show you one simple example, called a geometric L-system, which is useful for generating a certain kind of iterated function fractal; other variants work in a similar way.

Continue reading

Weighted Graphs and the Minimum Spanning Tree

Moving on from simple graphs, one of the things you can do to make things more interesting
is to associate numeric values with the edges, called the weights of those edges. This variation
of a graph is called a weighted graph.

Weighted graphs are extremely useful buggers: many real-world optimization problems ultimately
reduce to some kind of weighted graph problem. A few examples include:

  • The traveling salesman problem (TSP): given a set of cities, and information about travel-times
    between pairs of cities, find the shortest route that visits all cities. This is actually an
    incredibly common problem in practice; for example, in large data centers which manage backups,
    a set of backup-restore requests is served by computing the seek time between pairs of requests
    (where seek time includes finding the correct tape, loading it into a drive, and scanning to the
    correct point on the tape), and then finding the optimum order in which to fulfill the requests
    using a TSP algorithm.
  • The shortest path problem: a close relative of the TSP; given a set of cities and information
    about travel-time between cities with direct transportation routes, find the shortest path
    between two cities.
  • The minimum spanning tree problem: given a weighted graph, find the spanning tree with minimum
    total edge-weight (a sketch of one classic algorithm for this appears after this list). Airlines
    use minimum spanning trees to work out their basic route systems: flights between hubs are
    low-cost; other flights have varying prices depending on how many people fly them, what
    airplanes the airports can support, fuel transport costs, etc. The best set of routes is the
    minimum spanning tree.
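As promised, here’s a minimal sketch of one classic algorithm for the minimum spanning tree problem – Kruskal’s (my illustration, not from the original post): sort the edges by weight, and greedily keep every edge that connects two components that aren’t connected yet.

```python
def kruskal(nodes, edges):
    """Build a minimum spanning tree from (u, v, weight) edges."""
    parent = {n: n for n in nodes}  # union-find forest

    def find(n):
        """Follow parent links to the representative of n's component."""
        while parent[n] != n:
            parent[n] = parent[parent[n]]  # path-halving, for speed
            n = parent[n]
        return n

    tree = []
    for weight, u, v in sorted((w, u, v) for u, v, w in edges):
        ru, rv = find(u), find(v)
        if ru != rv:         # u and v are in different components:
            parent[ru] = rv  # merge them and keep the edge
            tree.append((u, v, weight))
    return tree

# A small route map, with weights as costs:
routes = [("A", "B", 4), ("A", "C", 1), ("B", "C", 2), ("B", "D", 5), ("C", "D", 8)]
print(kruskal(["A", "B", "C", "D"], routes))
# -> [('A', 'C', 1), ('B', 'C', 2), ('B', 'D', 5)]
```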

Continue reading

L-System Fractals

[Image: fb.gif]

In the post about Koch curves, I talked about how a grammar-rewrite system could be used to describe fractals. There’s a bit more to the grammar idea than I originally suggested. There’s something called an L-system (short for Lindenmayer system, after Aristid Lindenmayer, who invented it for describing the growth patterns of plants), a variant of the Thue grammar which is extremely useful for generating a wide range of interesting fractals describing plant growth, turbulence patterns, and lots of other things.
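The rewriting mechanism itself fits in a few lines of Python (my sketch, using the Koch curve’s rule as the example): every symbol of the current string is rewritten in parallel, once per generation.

```python
def expand(axiom, rules, generations):
    """Expand an L-system: rewrite every symbol in parallel each
    generation; symbols without a rule are copied unchanged."""
    s = axiom
    for _ in range(generations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# The Koch curve: F means "draw a segment", + and - mean "turn 60 degrees".
print(expand("F", {"F": "F+F--F+F"}, 2))
# -> F+F--F+F+F+F--F+F--F+F--F+F+F+F--F+F
```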

Continue reading

Fractal Dust and Noise

[Image: cantor-dust-tri.png]

While reading Mandelbrot’s text on fractals, I found something that surprised me: a relationship between Shannon’s information theory and fractals. Thinking about it a bit, it’s not really that surprising; in fact, it’s more surprising that I’ve managed to read so much about information theory without encountering the fractal nature of noise in more than a cursory way. But noise in a communication channel is fractal – and relates to one of the earliest pathological fractal sets: Cantor’s set, which Mandelbrot elegantly terms “Cantor’s dust”. Since I find that a wonderful, almost poetic way of describing it, I’ll adopt Mandelbrot’s terminology.

Cantor’s dust is a pathological set. It’s caused no small amount of consternation among mathematicians and physicists who find it too strange, too bizarre to accept as anything more than an extreme artifact of logic in the realm of pure math. But it’s a very simple thing. To make it, you start with a line segment. Cut it into three identical parts. Erase the middle one. Then repeat the cutting into thirds and erasing the middle on each of the two remaining segments – and then on the segments remaining from that, and so on. The diagram below shows a few steps in the construction of the Cantor dust.

[Image: a few steps in the construction of the Cantor dust]

Why should it be called a dust? Because geometrically, in the limit, it’s got to be a set of completely disconnected points – a scattering of dust across the original line-segment.
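The construction is easy to mirror in code – here’s a small Python sketch (mine, not Mandelbrot’s), using exact rational arithmetic so nothing gets lost to rounding:

```python
from fractions import Fraction

def cantor_step(segments):
    """One construction step: cut each segment into thirds, erase the middle."""
    out = []
    for lo, hi in segments:
        third = (hi - lo) / 3
        out.append((lo, lo + third))  # keep the left third
        out.append((hi - third, hi))  # keep the right third
    return out

segments = [(Fraction(0), Fraction(1))]
for _ in range(3):
    segments = cantor_step(segments)
print(len(segments))                        # 2^3 = 8 segments...
print(sum(hi - lo for lo, hi in segments))  # ...with total length (2/3)^3 = 8/27
```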

It’s so simple – what is pathological about it? Well, it’s clearly 0-dimensional in the pure sense – as I said above, it’s just a collection of points. Topologically, it’s a set of points with empty neighborhoods. And yet – look at it. It’s clearly not zero-dimensional. It’s got a 1-dimensional geometric structure. But naive topology insists that it doesn’t. And there’s worse to it. It’s a similar problem to what we saw in the space-filling curves. Clearly, the dust disappears into nothingness. Every part of it has zero length – it seems like it must converge to something very close to the empty set. And yet it doesn’t: the set of points in the Cantor dust has the same cardinality as the set of points in the original line.
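One standard way to make that “between dimensions” intuition precise – my addition here, not part of the original argument – is the similarity dimension: a set built out of N copies of itself, each scaled down by a factor of r, has dimension

```latex
d = \frac{\ln N}{\ln(1/r)}
\qquad\text{Cantor dust: } N = 2,\; r = \tfrac{1}{3}
\;\Longrightarrow\;
d = \frac{\ln 2}{\ln 3} \approx 0.6309
```

strictly more than a scatter of isolated points, strictly less than a line – exactly the in-between status that naive topology can’t express.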

For those of us who came of age as math geeks in the late 20th century, this doesn’t really seem that strange at all. But you’ve got to remember: the 20th century was a time of great turmoil in math. At the beginning of the century, the great work was solving mathematics: turning all of math into a glorious, perfect, clean, rational, elegant edifice. The common belief of the time was that math was beautiful and perfect – that while the real world might be ugly, might have all sorts of imperfections and irrationalities, those real-world flaws could never touch the realm of pure math: math was, in the words of one famous mathematician, “the perfect mind of God”. And then came the crash: the ramifications of Cantor’s set theory, Gödel’s incompleteness, Church and Turing’s uncomputability, fractals, Chaitin’s strange numbers… The edifice collapsed; math was as flawed, imperfect, and incomplete as anything else in the world. It was hugely traumatic, and there was (and in some circles still is) a great deal of resistance to the idea that so much irregularity and even irrationality was a part of the world of math – that the part of math that we can really grasp and use is just an infinitesimal part of the monstrous world of what really exists in our abstractions.

But getting back to the point at hand: what does Cantor’s dust have to do with information and noise?

Imagine that you’re listening to sound through a telephone wire with incredibly precise recording equipment. You’re sending a perfectly clear sine-wave over the line, in order to see how much noise there is.

You start pretty high – you only want to record noises that exceed, say, 20% of the amplitude of the basic sine-wave. You wind up with a pattern of bursts of noise, scattered around temporally. Now, mark off every time period of greater than 5 minutes where there is no noise. Those are gaps in the noise – the largest gaps that you’re going to look at. Now look at the bursts of noise – that is, the periods of time where there was no gap in the noise longer than 5 minutes – and look for periods of 1 minute where there wasn’t any noise. In between the 5 minute gaps, you’ll get a collection of smaller 1 minute gaps, separated by smaller bursts of noise. Then look into those smaller bursts for 10 second periods with no noise – and you’ll break the bursts of noise up further, into bursts longer than 10 seconds, but shorter than a minute. Keep doing that, and eventually, you’ll run out of noise. But turn down your noise threshold so that you can hear noise of a smaller amplitude, and you’ll find more noise, and more gaps breaking up the bursts.

If you look at the distribution of noise, one thing you’ll notice is that the levels are independent: the length of the longest gaps has no relation to the frequency of the smaller gaps between them. The other thing you’ll notice is that the frequency of gaps is self-similar: the distribution of long gaps over long sections of the recording is the same as the distribution of short gaps over shorter sections of the recording. The noise distribution is fractal! In fact, it’s pretty much a slightly randomized version of Cantor’s dust.
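Just to make the nesting concrete, here’s a rough Python sketch (my own, with arbitrary thresholds) that splits a list of noise-event times into bursts at successively smaller gap scales, mirroring the 5-minute/1-minute/10-second procedure above:

```python
def burst_structure(events, gap_scales):
    """Split a sorted list of noise-event times into nested bursts:
    each level of nesting corresponds to one gap threshold."""
    if not gap_scales:
        return events
    scale, rest = gap_scales[0], gap_scales[1:]
    bursts, current = [], [events[0]]
    for t in events[1:]:
        if t - current[-1] >= scale:  # a gap at this scale ends the burst
            bursts.append(burst_structure(current, rest))
            current = []
        current.append(t)
    bursts.append(burst_structure(current, rest))
    return bursts

# Event times in seconds, with gap thresholds of 5 minutes, 1 minute, 10 seconds:
print(burst_structure([0, 2, 50, 400, 401, 3000], [300, 60, 10]))
```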

Understanding the structure of noise isn’t just interesting in the abstract: it’s a piece of knowledge that communication engineers use regularly to determine the properties a communication channel needs in order to ensure proper transmission and storage of information. Recognizing the fractal nature of noise makes it possible to better predict the properties of that channel, to determine how much information we can safely pump through it, and how much redundancy we need to add to prevent data loss.

Fractal Pathology: Peano’s Space Filling Curve

[Image: colored Hilbert curve]

One of the strangest things in fractals, at least to me, is the idea of space filling curves. A space filling curve is a curve constructed using a Koch-like replacement method, but instead of being self-avoiding, it eventually contacts itself at every point.

What’s so strange about these things is that each one starts out as a non-self-contacting curve. Through further steps in the construction process, it gets closer and closer to self-contact, without touching. But in the limit, when the construction process is complete, you have a filled square.

Why is that so odd? Because you’ve basically taken a one-dimensional thing – a line – with no width at all – and by bending it enough times, you’ve wound up with a two-dimensional figure. This isn’t just odd to me – this was considered a crisis by many mathematicians – it seems to break some of the fundamental assumptions of geometry: how did we get width from something with no width? It’s nonsensical!
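The Hilbert curve (pictured above) is the classic example, and its approximants are easy to generate with an L-system (my sketch, using the standard two-symbol rule set):

```python
def hilbert_points(order):
    """Vertices of the order-n Hilbert curve approximant. In the L-system,
    F draws a unit segment, + and - turn 90 degrees, A and B are silent."""
    rules = {"A": "-BF+AFA+FB-", "B": "+AF-BFB-FA+"}
    s = "A"
    for _ in range(order):
        s = "".join(rules.get(ch, ch) for ch in s)
    x, y, dx, dy = 0, 0, 1, 0
    points = [(x, y)]
    for ch in s:
        if ch == "F":
            x, y = x + dx, y + dy
            points.append((x, y))
        elif ch == "+":
            dx, dy = -dy, dx  # turn left
        elif ch == "-":
            dx, dy = dy, -dx  # turn right
    return points

# The order-2 approximant visits all 16 vertices of a 4x4 grid, and every
# finite approximant is self-avoiding; only the limit fills the square.
print(len(hilbert_points(2)))  # -> 16
```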

Continue reading

Powers and Products of Graphs

[Image: hyperproduct.jpg]

Often, we use graphs as a structured representation of some kind of information. Many interesting problems involving the things we’re representing can then be described in terms of operations over graphs. Today, I’m going to talk about some of the basic operations that we can do on graphs, and in later posts, we’ll see how those operations are used to solve graph-based problems.
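As a small taste (my sketch, not code from the post), here’s one of those operations – the Cartesian product of two graphs, the operation that builds hypercubes out of single edges:

```python
def cartesian_product(g, h):
    """Cartesian product of graphs given as adjacency dicts: (u, v) is
    adjacent to (u', v') when one coordinate is equal and the other
    coordinate's vertices are adjacent."""
    product = {}
    for u in g:
        for v in h:
            product[(u, v)] = (
                {(u2, v) for u2 in g[u]}    # step along an edge of g...
                | {(u, v2) for v2 in h[v]}  # ...or along an edge of h
            )
    return product

# K2 x K2 is the 4-cycle; the n-fold product of K2 with itself is the
# n-dimensional hypercube.
k2 = {"0": {"1"}, "1": {"0"}}
square = cartesian_product(k2, k2)
print(square[("0", "0")])  # -> {('1', '0'), ('0', '1')}
```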

Continue reading