Despite having written about it before, I still get a lot of questions about William Dembski’s “No Free Lunch”
(NFL) theorems. One message recently contained the question in a particularly interesting form, so I thought I’d take
the opportunity to answer it with a post.
Here’s the question I received:
Is the NFL theorem itself bad math?
If the theorem itself is sound, what’s wrong with how it’s being applied? Is it a breadth issue
or a depth issue?
Another really fun and fundamental weighted graph problem is the shortest path problem. The
question is: given a connected weighted graph G, what is the shortest path between two vertices
v and w in G?
Once upon a time, I wrote about a jackass who was criticizing his college math instructor, because the instructor couldn’t explain what made the calculus class christian, or why it was different from what would be taught in a math class at a secular college.
That kind of thinking is quite strong in certain segments of the conservative christian community, and that disgusts me. Let me show you an example, and then I’ll explain why it annoys me so much. A reader
sent me a link to the math curriculum for a Baptist high school, and it seriously bugs me.
One of the most fundamental properties of fractals that we’ve mostly avoided so far is the idea of dimension. I mentioned that one of the basic properties of fractals is that their Hausdorff dimension is
larger than their simple topological dimension. But so far, I haven’t explained how to figure out the
Hausdorff dimension of a fractal.
When we’re talking about fractals, the notion of dimension is tricky. There are a variety of different
ways of defining the dimension of a fractal: there’s the Hausdorff dimension; the box-counting dimension; the correlation dimension; and a variety of others. I’m going to talk about the fractal dimension, which is
a simplification of the Hausdorff dimension. If you want to see the full technical definition of
the Hausdorff dimension, I wrote about it in one of my topology posts.
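For a strictly self-similar fractal, the fractal dimension comes out of a simple formula: if the shape is built from N copies of itself, each scaled down by a factor r, its dimension is log(N)/log(1/r). Here's a minimal sketch of that computation (the function name is mine, not standard terminology):

```python
import math

def similarity_dimension(num_copies: int, scale_factor: float) -> float:
    """Fractal (similarity) dimension of a self-similar set built
    from num_copies pieces, each scaled down by scale_factor."""
    return math.log(num_copies) / math.log(1 / scale_factor)

# Sierpinski triangle: 3 copies, each half the size of the whole.
print(similarity_dimension(3, 0.5))    # ~1.585: between a line and a plane
# Koch curve: 4 copies, each one third the size.
print(similarity_dimension(4, 1 / 3))  # ~1.262
```

Note how both values land strictly between the topological dimension 1 of a curve and the dimension 2 of the plane, which is exactly the property described above.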
I got started on this series about graph theory by writing a post debunking something foolish that George Gilder said about network theory. Today I’m going to talk about one of the classic problems in graph theory, which happens to be a network problem. It’s called the maximum flow problem.
The idea is that you have a weighted graph, which is modeling a bunch of places connected by pipes of different sizes. The sizes of the pipes are represented by the weight labels on the edges, which
are called capacities. Pick two nodes in the graph, called the source and the sink: how much can you pump from the source to the sink without exceeding the capacities of the pipes? That value is called the maximum flow of the graph.
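One standard way to compute a maximum flow is the Edmonds-Karp algorithm: repeatedly find a shortest augmenting path from source to sink with breadth-first search, and push as much flow along it as the remaining capacities allow. Here's a rough sketch (the pipe network at the bottom is a made-up example, not one from any post):

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp: BFS for augmenting paths in the residual graph.
    capacity: dict of dicts, capacity[u][v] = pipe size from u to v."""
    # Build a mutable residual graph, with zero-capacity reverse edges.
    residual = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u in list(residual):
        for v in list(residual[u]):
            residual.setdefault(v, {}).setdefault(u, 0)
    total = 0
    while True:
        # BFS for a path from source to sink with remaining capacity.
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return total  # no augmenting path left: the flow is maximal
        # Walk back along the path, find its bottleneck, and push flow.
        path = []
        v = sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:
            residual[u][v] -= bottleneck  # use up forward capacity
            residual[v][u] += bottleneck  # allow flow to be undone later
        total += bottleneck

pipes = {
    's': {'a': 10, 'b': 5},
    'a': {'b': 15, 't': 10},
    'b': {'t': 10},
    't': {},
}
print(max_flow(pipes, 's', 't'))  # 15: 10 via a, 5 via b
```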
So, in my last post, I promised to explain how the chaos game generates the Sierpinski triangle as an attractor. It’s actually pretty simple. First, though, we’ll introduce the idea of an affine transformation. Affine transformations aren’t strictly necessary for understanding the Chaos game, but describing the Chaos game in terms of affine maps makes it easier to understand
other attractors.
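For concreteness, here's a minimal sketch of the chaos game for the Sierpinski triangle. Each move is an affine map: scale the current point's distance to a randomly chosen corner by 1/2. The corner coordinates and step counts are my own choices for illustration:

```python
import random

# Corners of the triangle; each corner defines one affine map
# (move halfway from the current point toward that corner).
CORNERS = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]

def chaos_game(steps, seed=None):
    """Play the chaos game: from an arbitrary start, repeatedly apply
    a randomly chosen map. The orbit settles onto the attractor."""
    rng = random.Random(seed)
    x, y = rng.random(), rng.random()  # any starting point works
    points = []
    for i in range(steps):
        cx, cy = rng.choice(CORNERS)
        x, y = (x + cx) / 2, (y + cy) / 2  # affine: scale by 1/2, translate
        if i >= 20:  # discard a few transient points near the start
            points.append((x, y))
    return points

pts = chaos_game(10_000, seed=1)
# Plotting pts would draw the familiar Sierpinski triangle.
```

The key point is that the starting location doesn't matter: after a handful of steps, the orbit is indistinguishable from points on the triangle itself.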
There’s one piece of bad math that I’ve encountered relatively frequently in conversations. It’s
incredibly frustrating to me, because it’s just so crazy – but the way we teach math and physics, far too many people just don’t have enough of a clue to see how foolish it really is.
This comes up in conversations with lay-people whenever a new space probe is launched. It’s generally presented in the form of a question; something like “That TV announcer said something about a point between the earth and the moon where gravity cancels, so there’s no gravitational pull towards either the earth or the moon. How can the moon cause tides if its gravity is cancelled all the way out there?”
I’ve never found a form of this that was sufficiently mockable – in general, people who ask the question know that there’s something wrong with the question; they know that it’s stupid, but they don’t know why. I don’t like to make fun of that: people who ask a question because they know that they’re ignorant about something, and they’re trying to fix that patch of ignorance – they don’t deserve to be mocked. So I’ve avoided this. Until now: I’ve found the perfect mockable presentation of this problem. And wait till you see the wonderfully insane form I found it in!
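For what it's worth, finding the "point where gravity cancels" is just a matter of setting the two inverse-square pulls equal. A quick back-of-the-envelope sketch (using approximate masses and the mean Earth-Moon distance, and ignoring orbital motion, so this is the static balance point, not the true L1 Lagrange point):

```python
import math

# Approximate values, good enough for a rough answer.
M_EARTH = 5.972e24   # kg
M_MOON = 7.342e22    # kg
DISTANCE = 384_400   # km, mean Earth-Moon center-to-center distance

# The pulls balance where M_e / r^2 == M_m / (d - r)^2,
# which rearranges to r / (d - r) = sqrt(M_e / M_m).
ratio = math.sqrt(M_EARTH / M_MOON)
r = DISTANCE * ratio / (1 + ratio)

print(f"balance point: about {r:,.0f} km from Earth's center, "
      f"{100 * r / DISTANCE:.0f}% of the way to the Moon")
```

The point sits roughly 90% of the way to the Moon, which of course says nothing at all about the Moon's pull on the Earth's oceans back here.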
Most of the fractals that I’ve written about so far – including all of the L-system fractals – are
examples of something called iterated function systems. Speaking informally, an iterated function
system is one where you have a transformation function which you apply repeatedly. Most iterated
function systems work in a contracting mode, where the function is repeatedly applied to smaller
and smaller pieces of the original set.
There’s another very interesting way of describing these fractals, and I find it very surprising
that it’s equivalent. It’s the idea of an attractor. An attractor is a dynamical system which, no matter what its starting point, will always evolve towards a particular shape. Even if you
perturb the dynamical system, up to some point, the perturbation will fade away over time, and the system
will continue to evolve toward the same target.
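The contracting mode described above can be sketched directly: apply every map of the system to every point of the current set, and take the union as the next approximation. No matter what set you start from, repeating this converges toward the attractor. A minimal sketch, using the Sierpinski triangle's three maps (corner placement is my own choice):

```python
def apply_ifs(points, maps):
    """One contraction step: apply every map to every point.
    The union of the images is the next approximation of the attractor."""
    return {f(p) for p in points for f in maps}

# The Sierpinski triangle's three affine maps:
# halve the point, then translate toward one of three corners.
maps = [
    lambda p: (p[0] / 2, p[1] / 2),
    lambda p: (p[0] / 2 + 0.5, p[1] / 2),
    lambda p: (p[0] / 2 + 0.25, p[1] / 2 + 0.5),
]

# Start from a single arbitrary point and iterate the whole system.
approx = {(0.3, 0.7)}
for _ in range(6):
    approx = apply_ifs(approx, maps)
print(len(approx))  # 3^6 = 729 points approximating the attractor
```

Starting from a square, a line, or a single point all lead to the same picture, which is exactly what it means for the triangle to be the system's attractor.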
As I alluded to yesterday, there’s an analogue of L-systems for things more complicated than curves. In fact, there are a variety of them. I’m going to show you one simple example, called a geometric L-system, which is useful for generating a certain kind of iterated function fractal; other variants work in a similar way.
Moving on from simple graphs, one of the things you can do to make things more interesting
is to associate numeric values with the edges, called the weights of those edges. This variation
of a graph is called a weighted graph.
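In code, one common way to represent a weighted graph is a map from each vertex to its neighbors and their edge weights. A tiny sketch (the vertex names and weights here are made up for illustration):

```python
# Adjacency-map representation of an undirected weighted graph:
# each vertex maps to a dict of {neighbor: edge weight}.
weighted_graph = {
    'A': {'B': 2.0, 'C': 5.5},
    'B': {'A': 2.0, 'C': 1.5},
    'C': {'A': 5.5, 'B': 1.5},
}

def edge_weight(graph, u, v):
    """Weight of the edge u-v, or None if there is no such edge."""
    return graph.get(u, {}).get(v)

print(edge_weight(weighted_graph, 'A', 'C'))  # 5.5
```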
Weighted graphs are extremely useful buggers: many real-world optimization problems ultimately
reduce to some kind of weighted graph problem. A few examples include:
The traveling salesman problem (TSP): given a set of cities, and information about travel-times
between pairs of cities, find the shortest route that visits all cities. This is actually an
extremely common problem in practice; for example, in large data centers which manage backups,
a set of backup-restore requests is served by computing the seek time between pairs
of requests (where seek time includes finding the correct tape, loading it into a drive, and
scanning to the correct point on tape), and then finding the optimum order in which to fulfill
the requests by using a TSP algorithm.
Shortest path problem: a simpler relative of the TSP; given a set of cities and information about
travel-time between cities with direct transportation routes, find the shortest path between
two cities.
The minimum spanning tree problem: given a weighted graph, find the spanning tree
with minimum total edge-weight. Airlines use minimum spanning trees to work out their basic
route system: flights between hubs are low-cost; other flights have varying prices depending on how many people fly them, what airplanes the airports can support, fuel transport costs, etc. The best
set of routes is the minimum spanning tree.
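The shortest-path problem in the list above is usually solved with Dijkstra's algorithm, which works whenever the weights are nonnegative (travel times always are). Here's a rough sketch; the city names and travel times are invented for the example:

```python
import heapq

def dijkstra(graph, start, goal):
    """Shortest travel time from start to goal via Dijkstra's algorithm.
    graph[u] is a dict of {neighbor: nonnegative edge weight}."""
    # Priority queue of (distance so far, vertex, path taken).
    queue = [(0, start, [start])]
    visited = set()
    while queue:
        dist, u, path = heapq.heappop(queue)
        if u == goal:
            return dist, path
        if u in visited:
            continue  # already settled with a shorter distance
        visited.add(u)
        for v, w in graph.get(u, {}).items():
            if v not in visited:
                heapq.heappush(queue, (dist + w, v, path + [v]))
    return float('inf'), []  # goal unreachable from start

cities = {
    'NYC': {'Chicago': 13, 'DC': 4},
    'DC': {'Chicago': 11},
    'Chicago': {'Denver': 15},
    'Denver': {},
}
dist, route = dijkstra(cities, 'NYC', 'Denver')
print(dist, route)  # 28 ['NYC', 'Chicago', 'Denver']
```

The greedy trick is that the first time a vertex is popped from the queue, its distance is final, so the search never has to revisit it.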