Category Archives: Good Math

The Mandelbrot Set

[Image: the Mandelbrot set with colored environment]

The most well-known of the fractals is the infamous Mandelbrot set. It’s one of the first things that was really studied as a fractal. It was discovered by Benoit Mandelbrot during his early study of fractals, in the context of the complex dynamics of quadratic polynomials, around 1980, and studied in greater detail by Douady and Hubbard in the early to mid-1980s.

It’s a beautiful example of what makes fractals so attractive to us: it’s got an extremely simple definition; an incredibly complex structure; and it’s a rich source of amazing, beautiful images. It’s also been glommed onto by an amazing number of woo-meisters, who babble on about how it represents “fractal energies” – “fractal” has become a woo-term almost as prevalent as “quantum”, and every woo-site that babbles about fractals invariably uses an image of the Mandelbrot set. It’s also become a magnet for artists – the beauty of its structure, coming from a simple bit of math, captures the interest of quite a lot of folks. Two musical examples are Jonathan Coulton and the post-rock band “Mandelbrot Set”. (If you like post-rock, I definitely recommend checking out MS; and a player for Coulton’s brilliant Mandelbrot Set song is embedded below.)

So what is the Mandelbrot set?

[Image: an overall view of the Mandelbrot set, members in black]

Take the set of functions f_C(x) = x^2 + C, where for each f_C, C is a complex constant. That gives an infinite set of simple functions over the complex numbers. For each possible complex number C, you look at the recurrence relation generated by repeatedly applying f_C, starting with x=0:

  1. m(0,C)=f_C(0)
  2. m(i+1,C)=f_C(m(i, C))

If m(i,C) doesn’t diverge (escape) towards infinity as i gets larger, then the complex number C is a member of the Mandelbrot set. That’s it – that simple definition – repeatedly applying f_C(x)=x^2 + C over the complex numbers – produces the astonishing complexity of the Mandelbrot set.
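That definition translates almost directly into code. Here’s a minimal Python sketch of the membership test; since true divergence can’t be checked in finitely many steps, it uses the two standard practical approximations: a bailout radius (once |m| > 2, the orbit is guaranteed to escape) and a cap on the iteration count.

```python
def in_mandelbrot(c, max_iter=1000):
    """Return True if c appears to be in the Mandelbrot set."""
    m = 0 + 0j
    for _ in range(max_iter):
        m = m * m + c          # m(i+1, C) = f_C(m(i, C))
        if abs(m) > 2:         # once |m| > 2, divergence is guaranteed
            return False
    return True                # didn't escape within max_iter iterations

print(in_mandelbrot(0))        # True: the orbit of C=0 stays at 0
print(in_mandelbrot(1))        # False: 1, 2, 5, ... escapes quickly
```

The iteration cap means the test can only ever be an approximation: a point very close to the boundary may take more than max_iter steps to escape.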

If we use that definition of the Mandelbrot set, and draw the members of the set in black, we get an image like the one above. That’s nice, but it’s probably not what you expected. We’re all used to the beautiful colored bands and auras around that basic pointy black blob. Those colored regions are not really part of the set.

[Image: an escape-interval colored rendering of the Mandelbrot set]

The way we get the colored bands is by considering *how long* it takes for the points to start to diverge. Each color band is an escape interval – that is, some measure of how many iterations it takes for the repeated application of f(x) to diverge. Images like the ones to the right and below are generated using various variants of escape-interval colorings.
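Here’s a sketch of how those escape counts are computed, plus a crude ASCII “rendering” standing in for a real color map (the character ramp and the sampled region are arbitrary choices of mine):

```python
def escape_count(c, max_iter=100):
    """Number of iterations before the orbit of 0 under f_C escapes.
    Returns max_iter for points that never escape (treated as members)."""
    m = 0 + 0j
    for i in range(max_iter):
        if abs(m) > 2:
            return i
        m = m * m + c
    return max_iter

# Map escape counts to a ramp of characters -- each character here
# plays the role of one color band.
chars = " .:-=+*#%@"
for im in range(11):
    row = ""
    for re in range(41):
        c = complex(-2 + re * 0.0625, 1 - im * 0.2)
        n = escape_count(c, 50)
        row += chars[min(n * len(chars) // 50, len(chars) - 1)]
    print(row)
```

Real renderings use the same escape counts, just mapped to smooth color gradients instead of characters.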

[Images: several escape-interval colorings of the Mandelbrot set]

My personal favorite rendering of the Mandelbrot set is an image called the Buddhabrot. In the Buddhabrot, what you do is look at values of C which *aren’t* in the Mandelbrot set. For each point m(i,C) in the orbit before it escapes, you plot a point. That gives you the escape path for the value C. If you take a large number of escape paths for randomly selected values of C, and you plot them so that the brightness of a pixel is determined by the number of escape paths that cross that pixel, you get the Buddhabrot. It’s fascinating because it reveals the structure in a particularly amazing way. If you look at a simple unzoomed image of the Mandelbrot set, what you see is a spiky black blob; the actual complexity of the structure isn’t obvious until you spend some time looking at it. The Buddhabrot is more obvious – you can see the astonishing complexity much more easily.
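A rough Python sketch of that procedure (the image size, sample count, iteration cap, and sampled region are all arbitrary choices of mine):

```python
import random

def buddhabrot_counts(samples=20000, max_iter=50, size=64):
    """Accumulate escape paths of randomly sampled C values that are
    NOT in the set; each cell counts how many paths crossed it, which
    becomes the brightness of the corresponding pixel."""
    grid = [[0] * size for _ in range(size)]
    for _ in range(samples):
        c = complex(random.uniform(-2, 1), random.uniform(-1.5, 1.5))
        m, path = 0 + 0j, []
        for _ in range(max_iter):
            m = m * m + c
            path.append(m)
            if abs(m) > 2:
                # C escaped, so it's not in the set: plot its whole path
                for p in path:
                    x = int((p.real + 2) / 3 * size)
                    y = int((p.imag + 1.5) / 3 * size)
                    if 0 <= x < size and 0 <= y < size:
                        grid[y][x] += 1
                break
    return grid
```

Published Buddhabrot images use vastly more samples and iterations than this sketch, which is what brings out the ghostly fine structure.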

[Image: a deep rendering of the Buddhabrot]

An Unsolved Simple Graph Problem

One of the things that I find fascinating about graph theory is that it’s so simple, and
yet, it’s got so much depth. Even when we’re dealing with the simplest form of a graph – undirected graphs with no 1-cycles – there are questions that *seem* like they should be obvious, but which we don’t know the answer to.
For example, there’s something called the *reconstruction conjecture*. We strongly suspect that it’s really a theorem, but it remains unproven. What it says is a very precise formal version of the idea that a graph is fully determined by a canonical collection of its subgraphs.

Continue reading

An Introduction to Fractals

[Image: a fractal gasket]

I thought in addition to the graph theory (which I’m enjoying writing, but which doesn’t seem to be all that popular), I’d also try doing some writing about fractals. I know pretty much nothing about fractals, but I’ve wanted to learn about them for a while, and one of the advantages of having this blog is that it gives me an excuse to learn about things that interest me so that I can write about them.

Fractals are amazing things. They can be beautiful: everyone has seen beautiful fractal images – like the ones posted by my fellow SBer Karmen. And they’re also useful: there are a lot of phenomena in nature that seem to involve fractal structures.

But what is a fractal?

The word is a contraction of fractional dimension. The idea is that there are several different ways of measuring the dimensionality of a structure using topology. The structures that we call fractals are things that have a kind of fine structure that gives them a strange kind of dimensionality: their conventional topological dimension is smaller than their Hausdorff dimension. (You can look up the details of what topological dimension and Hausdorff dimension mean in one of my topology articles.) The details aren’t all that important here: the key thing to understand is that a fractal is a structure that breaks the usual concept of dimension: its shape has aspects that suggest a higher dimension. The Sierpinski carpet, for example, is topologically one-dimensional. But if you look at it, you have a clear sense of a two-dimensional figure.

[Image: a finite approximation of the Sierpinski carpet]

That’s all frightfully abstract. Let’s take a look at one of the simplest fractals. This is called Sierpinski’s carpet. There’s a picture of a finite approximation of it over to the right. The way that you generate this fractal is to take a square. Divide the square into 9 sub-squares, and remove the center one. Then take each of the 8 squares around the edges, and do the same thing to them: break them into 9, remove the center, then repeat on the even smaller squares. Do that an infinite number of times.
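That construction translates directly into code. Here’s a minimal Python sketch that builds a finite approximation of the carpet as a grid of booleans – the recursion level stands in for the “infinite number of times”:

```python
def carpet(level):
    """Return a level-n approximation of the Sierpinski carpet as a
    grid of booleans (True = filled), with side length 3**level."""
    if level == 0:
        return [[True]]
    sub = carpet(level - 1)
    n = len(sub)
    grid = [[False] * (3 * n) for _ in range(3 * n)]
    for br in range(3):          # which of the 3x3 blocks
        for bc in range(3):
            if br == 1 and bc == 1:
                continue         # remove the center square
            for r in range(n):   # copy a smaller carpet into the block
                for c in range(n):
                    grid[br * n + r][bc * n + c] = sub[r][c]
    return grid

# Print a level-2 approximation as ASCII art.
for row in carpet(2):
    print("".join("#" if cell else " " for cell in row))
```

Each level multiplies the number of filled squares by 8 while shrinking them by a factor of 3 on a side – which is exactly the ratio behind the fractional dimension discussed below.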

When you look at the carpet, you probably think it looks two dimensional. But topologically, it is a one-dimensional space. The “edges” of the resulting figure are infinitely narrow – they have no width that needs a second dimension to describe. The whole thing is an infinitely complicated structure of lines: the total area covered by the carpet is 0! Since it’s just lines, topologically, it’s one-dimensional.

In fact, it is more than just a one dimensional shape; what it is is a kind of canonical one dimensional shape: any one-dimensional space is topologically equivalent (homeomorphic) to a subset of the carpet.

But when we look at it, we can see that it has a clear structure in two dimensions. In fact, it’s a structure which really can’t be described as one-dimensional – we defined it by cutting finite-sized pieces from a square, which is a two-dimensional figure. It isn’t really two-dimensional; it isn’t really one-dimensional. The best way of describing it is by its Hausdorff dimension, which is 1.89. So it’s almost, but not quite, two-dimensional.
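For a strictly self-similar figure like the carpet, that Hausdorff dimension can actually be computed directly: if the figure consists of N copies of itself, each scaled down by a factor of s, the dimension is log(N)/log(s). The carpet is 8 copies at one-third scale:

```python
from math import log

# Similarity dimension of the Sierpinski carpet:
# 8 self-similar copies, each scaled down by a factor of 3.
dim = log(8) / log(3)
print(dim)   # ≈ 1.8928
```

So the “1.89” above is really log 8 / log 3, rounded.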

Sierpinski’s carpet is a very typical fractal; it’s got the traits that we use to identify fractals, which are the following:

  1. Self-similarity: a fractal has a structure that repeats itself on ever smaller scales. In the case of the carpet, you can take any non-blank square, and it’s exactly the same as a smaller version of the entire carpet.
  2. Fine structure: a fractal has a fine structure at arbitrarily small scales. In the case of the carpet, no matter how small you get, it’s always got even smaller subdivisions.
  3. Fractional dimension: its Hausdorff dimension is not an integer. Its Hausdorff dimension is also usually larger than its topological dimension. Again looking at the carpet, its topological dimension is 1; its Hausdorff dimension is 1.89.

Graph Contraction and Minors

Another useful concept in simple graph theory is *contraction* and its result, *minors*
of graphs. The idea is that there are several ways of simplifying a graph in order to study
its properties: cutting edges, removing vertices, and decomposing a graph are all methods we’ve seen before. Contraction is a different technique that works by *merging* vertices, rather than removing them.

Continue reading

Graph Decomposition and Turning Cycles

One thing that we often want to do is break a graph into pieces in a way that preserves
the structural relations between the vertices in any part. Doing that is called
*decomposing* the graph. Decomposition is a useful technique because many ways
of studying the structure of a graph, and many algorithms over graphs can work by
decomposing the graph, studying the elements of the decomposition, and then combining
the results.
To be formal: a graph G can be decomposed into a set of subgraphs {G1, G2, G3, …}, where the edge sets of the Gi are *disjoint* subsets of the edges of G. So in a decomposition of G, *vertices* can be shared between elements of the decomposition, but *edges* cannot.
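That edge-disjointness condition is easy to check mechanically. Here’s a minimal Python sketch; the graph and decomposition in the example are toy data of my own, not from the post:

```python
def is_decomposition(graph_edges, parts):
    """Check whether `parts` (a list of edge sets) decomposes the graph:
    the parts' edge sets must be pairwise disjoint and together cover
    every edge of the graph."""
    seen = set()
    for part in parts:
        if seen & part:          # some edge appears in two parts
            return False
        seen |= part
    return seen == graph_edges   # every edge of G is covered exactly once

# Hypothetical example: decompose a 4-cycle a-b-c-d-a into two paths.
G = {("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")}
print(is_decomposition(G, [{("a", "b"), ("b", "c")},
                           {("c", "d"), ("d", "a")}]))  # True
```

Note that the two paths share the vertices a and c, which is fine – only the edges have to be disjoint.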

Continue reading

Edge Coloring and Graph Turning

In addition to doing vertex and face colorings of a graph, you can also do edge colorings. In an edge coloring, no two edges which are incident on the same vertex can share the same color. In general, edge coloring doesn’t get as much attention as vertex coloring or face coloring, but it can be an interesting subject. Today I’m going to show you an example of a really clever visual proof technique called *graph turning* to prove a statement about the edge colorings of complete graphs.
Just like a graph has a chromatic number for its vertex colorings, it’s got a chromatic
index for its edge colorings. The edge chromatic index of a graph G is the minimum number of colors in any edge-coloring of G. The theorem that I’m going to prove for you is about the edge chromatic index of complete graphs with 2n vertices for some integer n:
**The edge-chromatic index of a complete graph K2n is 2n-1.**
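The theorem has a well-known constructive side: the classic “rotation” (round-robin scheduling) construction produces a proper (2n-1)-edge-coloring of K2n. This Python sketch isn’t the graph-turning proof itself – just a way to see the bound actually achieved:

```python
def edge_coloring_K2n(n):
    """Build a proper (2n-1)-edge-coloring of the complete graph K_{2n}
    by the classic rotation construction: fix one vertex (2n-1) at the
    center, place the other 2n-1 vertices on a circle, and rotate."""
    m = 2 * n - 1
    coloring = {}                                # frozenset edge -> color
    for k in range(m):                           # one color per rotation
        coloring[frozenset((k, m))] = k          # spoke to the center
        for i in range(1, n):
            u, v = (k - i) % m, (k + i) % m      # chord symmetric about k
            coloring[frozenset((u, v))] = k
    return coloring

# Sanity check for K_6 (n = 3): 15 edges, 5 colors.
col = edge_coloring_K2n(3)
print(len(col), len(set(col.values())))  # 15 5
```

Each color class is a perfect matching: the spoke plus the symmetric chords touch every one of the 2n vertices exactly once.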

Continue reading

A Taxonomy of Some Basic Graphs

Naming Some Special Graphs
When we talk about graph theory – particularly when we get to some of the
interesting theorems – we end up referencing certain common graphs or types of graphs
by name. In my last post, I had to work in the definition of a snark, and struggled
to avoid mentioning another named graph, so it seems like as good a time as any to run through
some of the basics. This won’t be an exciting post, but you’ve got to do the definitions sometime. And there’s a bunch of pretty pictures, and even an interesting simple proof or two.

Continue reading

Coloring Planar Graphs

Coloring general graphs is, as I said in my last post, quite difficult. But if you can restrict the structure of the graph, you can change the problem in a way that makes it *much* easier. The classic example of this is *planar graphs*.
[Image: a map with its corresponding planar graph overlaid]
A planar graph is a graph which can be drawn on a plane with no edges crossing. Every planar graph is structurally equivalent to a map, where the vertices corresponding to two countries on the map are adjacent if and only if the countries share a border. (A border here has non-zero length, so two countries that only meet at a corner don’t share a border.) You can see an example of a map with its corresponding planar graph overlaid to the right.

Continue reading

Graph Coloring Algorithms

Graph coloring is deceptively simple. The idea of coloring a graph is very straightforward, and it seems as if it should be relatively easy to find a coloring. It turns out not to be – in fact, it’s extremely difficult. A simple algorithm for graph coloring is easy to describe, but potentially extremely expensive to run.
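As a concrete illustration of a simple-but-expensive algorithm, here’s a Python sketch of brute-force backtracking search for an N-coloring. (This is just one example of such an algorithm, not necessarily the one the post goes on to describe.) The worst-case running time is exponential in the number of vertices:

```python
def color_graph(adj, n_colors):
    """Backtracking search for an n_colors-coloring of a graph.
    `adj` maps each vertex to the set of its neighbors.  Returns a
    vertex -> color dict, or None if no such coloring exists.
    Simple to state -- but exponential time in the worst case."""
    vertices = list(adj)
    coloring = {}

    def extend(i):
        if i == len(vertices):
            return True                       # all vertices colored
        v = vertices[i]
        for c in range(n_colors):
            # try color c if no already-colored neighbor uses it
            if all(coloring.get(u) != c for u in adj[v]):
                coloring[v] = c
                if extend(i + 1):
                    return True
                del coloring[v]               # backtrack
        return False

    return coloring if extend(0) else None

# A triangle needs 3 colors: no 2-coloring exists.
triangle = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b"}}
print(color_graph(triangle, 2))              # None
print(color_graph(triangle, 3) is not None)  # True
```

The exponential blowup comes from the backtracking: in the worst case the search has to try nearly every assignment of colors to vertices before giving up.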

Continue reading

Get out your crayons: it's graph coloring time!

One kind of graph problem that’s extremely widely used in computer science is called graph coloring. There are two versions of it, *vertex coloring* and *face coloring*.
Vertex coloring is the one that’s more widely used. Vertex coloring problems are all variations on the following: given a graph *G=(V,E)*, find a mapping of colors to vertices, such that if two vertices are adjacent, they’re assigned different colors.
The variants of the vertex coloring problem are things like:
* What’s the *minimum* number of colors (aka the *chromatic number* of the graph)
that can be used to color a graph?
* Given an integer N, can you find an N-coloring of the graph?
The face coloring problem is more complicated to describe. Suppose you have a *planar* graph. (Remember that that means a graph which can be drawn on a plane with no edges crossing.) Suppose further that the graph has the property that every vertex v lies on a cycle – that is, there is a path through the graph from v back to v. Then that graph is called a *map*. For a map drawn on a plane, every edge is part of at least one cycle. The smallest possible cycles – that is, the cycles whose interiors are not divided by any other edges of the graph – are called *countries* or *faces* of the graph. The face coloring problem assigns colors to the *faces* of a graph, so that no two faces that share an edge are the same color. There’s an extremely fascinating proof that any planar map can be face-colored with no more than four colors. We’ll talk about it later – it’s the first computer-assisted proof that I’m aware of.

Continue reading