Category Archives: Good Math

Homotopy

I’ve been working on a couple of articles about homology, which is an interesting (but difficult) topic in algebraic topology. While I was writing, I used a metaphor based on a technique from homotopy, and realized that while I’ve referred to it obliquely, I’ve never actually talked about homotopy.

When we talked about homeomorphisms, we talked about how two spaces are homeomorphic (aka topologically equivalent) if and only if one can be continuously deformed into the other – that is, roughly speaking, transformed by bending, twisting, stretching, or squashing, so long as nothing gets torn.

Homotopy is a formal equivalent of homeomorphism for functions between topological spaces, rather than between the spaces themselves. Two continuous functions f and g are homotopic if and only if f can be continuously transformed into g.

The neat thing about the formal definition of homotopy is that it finally gives us a strong handle on what this continuous deformation stuff means in strictly formal terms.

So, let’s dive in and hit the formalism.

Suppose we’ve got two topological spaces, S and T, and two continuous functions f,g : S → T. A homotopy between f and g is a continuous function h that associates every value in the unit interval [0,1] with a function from S to T. So we can treat h as a continuous function from S×[0,1] → T, where ∀x: h(x,0)=f(x) and h(x,1)=g(x). For any given value x, then, h(x,·) is a path in T from f(x) to g(x).
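For a simple concrete example: take any two continuous functions f,g : S → ℝⁿ. The “straight-line” homotopy h(x,t) = (1−t)·f(x) + t·g(x) is continuous, equals f at t=0 and g at t=1, and just slides each point f(x) along the line segment to g(x). So any two continuous maps into ℝⁿ are homotopic.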

Thus – expressed simply, the homotopy is a function that precisely describes the transformation between the two homotopic functions. Homotopy defines an equivalence relation on continuous functions: two continuous functions between the same topological spaces are equivalent in this sense – homotopic – if and only if there is a homotopy between them. (This paragraph originally included an extremely confusing typo – in the first sentence, I repeatedly wrote “homology” where I meant “homotopy”. Thanks to commenter elspi for the catch!)

We can also define a kind of homotopy equivalence between topological spaces. Suppose again that we have two topological spaces S and T. S and T are homotopically equivalent if there are continuous functions f : S → T and g : T → S where g∘f is homotopic to the identity function on S, 1S, and f∘g is homotopic to the identity function on T, 1T. The functions f and g are called homotopy equivalences.

This gives us a nice way of really formalizing the idea of continuous deformation of spaces that we used for homeomorphism – every homeomorphism is also a homotopy equivalence. But it doesn’t go both ways – there are homotopy equivalences that are not homeomorphisms.

The reason why is interesting: if you look at our homotopy definition, the equivalence is based on continuous deformations – including contractions. So, for example, a ball is not homeomorphic to a point – but it is homotopically equivalent to one. The contraction all the way from the ball down to the point doesn’t violate anything about the homotopy equivalence. In fact, there’s a special name for the topological spaces that are homotopically equivalent to a single point: they’re called contractible spaces. (Originally, I erroneously wrote “sphere” instead of “ball” in this paragraph. I can’t even blame it on a typo – I just screwed up. Thanks to commenter John Armstrong for the catch.)
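To see that contraction explicitly: take the closed unit ball B = {x ∈ ℝ³ : |x| ≤ 1}, let f : B → {0} send every point to the origin, and let g : {0} → B be the inclusion. Then f∘g is literally the identity on the point, and h(x,t) = (1−t)·x is a homotopy from the identity on B (at t=0) to g∘f (at t=1, the constant map to the origin). So B and the single point are homotopically equivalent, even though they’re obviously not homeomorphic.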

Addendum: Commenter elspi mentioned another wonderful example of a homotopy equivalence that isn’t a homeomorphism, and I thought it was a good enough example that I wish I’d included it in the original post, so I’m promoting it here. The Möbius band is homotopically equivalent to a circle – squash the band down to its central circle, and the twist “disappears” and you’ve got a circle. But it’s pretty obvious that the Möbius band is not homeomorphic to a circle! Thanks again, elspi – great example!

Clear Object-Oriented Programming? Not in Glass

Today’s bit of programming insanity is a bit of a novelty: it’s an object-oriented programming language called Glass, with an interpreter available here. So far in all of my Friday Pathological Programming columns, I haven’t written about a single object-oriented language. But Glass is something special. It’s actually a sort of warped cross between Smalltalk and Forth – two things that should never have gotten together; in the words of the language designer, Gregor Richards, “No other language is implemented like this, because it would be idiotic to do so.”

Continue reading

Basics: Discrete vs Continuous

One thing that I frequently touch on casually as I’m writing this blog is the distinction between continuous mathematics, and discrete mathematics. As people who’ve been watching some of my mistakes in the topology posts can attest, I’m much more comfortable with discrete math than continuous.

Continue reading

Not Quite Basics: Sorting Algorithms

Multiple people have written to me, after seeing yesterday’s algorithms basics post, asking me to say more about sorting algorithms. I have to say that it’s not my favorite topic – sorting is one of those old bugaboos that you can’t avoid, but which gets really dull after a while. But there is a kernel of interest to it – sorting can be used to demonstrate a lot of interesting ideas about computational complexity.
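To give a flavor of the kind of complexity contrast I mean, here’s a minimal Haskell sketch (my own illustration, not code from the post) of two classic sorts: insertion sort, which does O(n²) comparisons in the worst case, and merge sort, which does O(n log n).

    -- Insertion sort: insert each element into an already-sorted list.
    -- Worst case: O(n^2) comparisons.
    insertionSort :: Ord a => [a] -> [a]
    insertionSort = foldr insert []
      where
        insert x [] = [x]
        insert x (y:ys)
          | x <= y    = x : y : ys
          | otherwise = y : insert x ys

    -- Merge sort: split the list in half, sort each half, then merge.
    -- Worst case: O(n log n) comparisons.
    mergeSort :: Ord a => [a] -> [a]
    mergeSort []  = []
    mergeSort [x] = [x]
    mergeSort xs  = merge (mergeSort left) (mergeSort right)
      where
        (left, right) = splitAt (length xs `div` 2) xs
        merge [] ys = ys
        merge xs' [] = xs'
        merge (x:xs') (y:ys')
          | x <= y    = x : merge xs' (y:ys')
          | otherwise = y : merge (x:xs') ys'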

Continue reading

Basics: Algorithm

A kind reader pointed out that I frequently mention algorithms, but that I haven’t defined them in the basics posts. To me, they’re so fundamental to the stuff I do for a living that I completely forgot that they’re an interesting idea.

It’s really pretty simple: an algorithm is a description of a mechanical procedure for performing a mathematical task. But that’s actually a fairly sophisticated idea – the general concept of things that describe a procedure, and which can be discussed, described, and reasoned about as entities in their own right, is something quite different from nearly anything that came before it.
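To make that concrete, here’s probably the oldest famous example, Euclid’s algorithm for the greatest common divisor, written as a tiny Haskell sketch (my own illustration, not part of the post): a purely mechanical procedure, precise enough to carry out by hand and to reason about as an object in its own right.

    -- Euclid's algorithm: a mechanical procedure for computing the
    -- greatest common divisor of two non-negative integers.
    euclid :: Integer -> Integer -> Integer
    euclid a 0 = a
    euclid a b = euclid b (a `mod` b)

    -- For example: euclid 48 36 == 12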

Continue reading

The Most Pathological Machine I've Ever Seen: Tag

What we have here is a truly warped language.

Back in the very early days of what eventually became computer science, many of the people working in the field invented all sorts of automata and computing formalisms. The one that I’ve always found the most confounding is the tag machine, invented by Emil Post.

The tag machine is simple to the point of triviality. The machine has a queue of characters (with one character designated as “Halt”), and a set of rules. Each rule is selected by a different character, and carries a string of characters. At each step, the machine looks at the first character of the queue, and selects the rule associated with that character. If the character is “Halt”, the machine just stops, and whatever is left on the queue is the result of the computation. Otherwise, it appends the selected rule’s string of characters to the end of the queue, and then removes a fixed number of characters from the front of the queue. The machines are called “n-tag” machines, where n is the number of characters dropped each step.

That’s it. Look at the front of the queue, use it to pick a string of characters to append, and then remove and discard the first n characters from the queue.

For any n≥2, a Post n-tag machine is Turing equivalent.
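To make the mechanics concrete, here’s a minimal Haskell sketch of a tag machine interpreter (my own illustration – the halt symbol, drop count, and rule table are whatever you choose when you build the machine):

    import qualified Data.Map as Map

    -- A tag machine: a halt symbol, a drop count n, and a rule table
    -- mapping the head character to the string appended to the queue.
    data TagMachine = TagMachine
      { haltSym   :: Char
      , dropCount :: Int
      , rules     :: Map.Map Char String
      }

    -- One step: look at the head of the queue. If it's the halt symbol
    -- (or there's no rule for it), stop. Otherwise append the rule's
    -- string to the end, then drop the first n characters.
    step :: TagMachine -> String -> Maybe String
    step _ [] = Nothing
    step m queue@(c:_)
      | c == haltSym m = Nothing
      | otherwise = case Map.lookup c (rules m) of
          Nothing         -> Nothing
          Just production -> Just (drop (dropCount m) (queue ++ production))

    -- Run until the machine halts; whatever is left on the queue is the result.
    run :: TagMachine -> String -> String
    run m queue = maybe queue (run m) (step m queue)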

Like I said above – I’ve always found the Post tag machine to be thoroughly confounding. I can grasp why it’s Turing equivalent, but for the life of me, I’ve never been able to figure out how to actually implement anything interesting on one. So, I figured, there are tons of esoteric/pathological language fans out there who love nothing more than the challenge of writing interesting programs for bizarre languages; I’ll write a Post-tag language, throw it to the wolves, and see what happens!

Continue reading

Using Monads for Control: Maybe it's worth a look?

So, after our last installment, describing the theory of monads, and the previous posts, which focused on representing things like state and I/O, I thought it was worth taking a moment to look at a different kind of thing that can be done with monads. We so often think of them as being state wrappers; and yet, that’s only really a part of what we can get from them. Monads are ways of tying together almost anything that involves sequences.
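To give a taste of what I mean by using a monad for control rather than state, here’s a tiny Haskell sketch using the Maybe monad (my own illustration, not code from the post): each step in the sequence can fail, and the monad threads the “did anything fail yet?” decision through the whole chain for you.

    -- Division that can fail: dividing by zero produces Nothing.
    safeDiv :: Double -> Double -> Maybe Double
    safeDiv _ 0 = Nothing
    safeDiv x y = Just (x / y)

    -- The Maybe monad sequences the steps; if any step yields Nothing,
    -- the rest of the chain is skipped and the whole result is Nothing.
    compute :: Double -> Double -> Double -> Maybe Double
    compute a b c = do
      x <- safeDiv a b
      y <- safeDiv x c
      return (y + 1)

    -- compute 10 2 5 == Just 2.0
    -- compute 10 0 5 == Nothing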

Continue reading

Building Towards Homology: Vector Spaces and Modules

One of the more advanced topics in topology that I’d like to get to is homology. Homology is a major topic that goes beyond just algebraic topology, and it’s really very interesting. But to understand it, it’s useful to have some understanding of a few basics that I’ve never written about. In particular, homology uses chains of modules. Modules, in turn, are a generalization of the idea of a vector space. I’ve said a little bit about vector spaces when I was writing about the gluing axiom, but I wasn’t complete or formal in my description of them. (Not to mention the amount of confusion that I caused by sloppy writing in those posts!) So I think it’s a good idea to cover the idea in a fresh setting here.

So, what’s a vector space? It’s yet another kind of abstract algebra. In this case, it’s an algebra built on top of a field (like the real numbers), whose values form a set of objects with two operations: addition of two vectors, and scaling a vector by a value from the field.

To define a vector space, we start by taking something like the real numbers: a set whose values form a field. We’ll call that basic field F, and the elements of F we’ll call scalars. We can then define a vector space over F as a set V whose members are called vectors, and which has two operations:

Vector Addition
An operation mapping two vectors to a third vector, + : V×V → V
Scalar Multiplication
An operation mapping a scalar and a vector to another vector: * : F×V → V

Vector addition makes (V,+) an abelian group, and scalar multiplication distributes over vector addition and over addition in the scalar field, and is compatible with multiplication in the field. To be complete, this means that the following properties hold:

  • (V,+) is an abelian group
    • Vector addition is associative: ∀a,b,c∈V: a+(b+c)=(a+b)+c.
    • Vector addition has an identity element 0: ∀a∈V: a+0=0+a=a.
    • Every vector has an additive inverse: ∀a∈V: ∃b∈V: a+b=0. The additive inverse of a vector a is normally written -a. (Up to this point, this makes (V,+) a group.)
    • Vector addition is commutative: ∀a,b∈V: a+b=b+a. (The addition of this commutative rule is what makes it an abelian group.)
  • Scalar multiplication is compatible with the field structure
    • Scalar multiplication is distributive over vector addition: ∀a∈F, ∀b,c∈V: a*(b+c)=a*b+a*c.
    • Scalar multiplication is distributive over addition in F: ∀a,b∈F, ∀c∈V: (a+b)*c = (a*c)+(b*c).
    • Scalar multiplication is associative with multiplication in F: ∀a,b∈F, ∀c∈V: (a*b)*c = a*(b*c).
    • The multiplicative identity of F is also the identity for scalar multiplication: ∀a∈V: 1*a=a.

So what does all of this mean? It really means that a vector space is a structure over a field where the elements can be added (vector addition) or scaled (scalar multiplication). Hey, isn’t that exactly what I said at the beginning?

One obvious example of a vector space is a Euclidean space. Vectors are arrows from the origin to points in the space – so they can be represented as ordered tuples. For example, ℝ³ is three-dimensional Euclidean space; points (x,y,z) are vectors. Vector addition is (a,b,c)+(d,e,f)=(a+d,b+e,c+f), and scalar multiplication is x*(a,b,c)=(xa,xb,xc).
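If you prefer to see those two operations as code, here’s a tiny Haskell sketch (my own illustration, with triples of Doubles standing in for vectors in ℝ³):

    type Vec3 = (Double, Double, Double)

    -- Vector addition: add componentwise.
    add :: Vec3 -> Vec3 -> Vec3
    add (a, b, c) (d, e, f) = (a + d, b + e, c + f)

    -- Scalar multiplication: scale every component by the field element.
    scale :: Double -> Vec3 -> Vec3
    scale x (a, b, c) = (x * a, x * b, x * c)

    -- For example: add (1,2,3) (4,5,6) == (5,7,9), and scale 2 (1,2,3) == (2,4,6)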

Following the same basic idea as the Euclidean spaces, the set of matrices of a particular size also forms a vector space. There are also ways of creating vector spaces using polynomials, various kinds of functions, differential equations, etc.

In homology, we’ll actually be interested in modules. A module is just a generalization of the idea of a vector space. But instead of drawing the scalars from a field the way you do in a vector space, in a module the scalars come from a general ring, which is a less constrained structure: a field is a commutative ring in which every value except 0 has a multiplicative inverse, and the additive and multiplicative identities are distinct. So a module does not require multiplicative inverses for the scalars, nor does it require multiplication of scalars to be commutative.
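For a familiar example of a module that isn’t a vector space: any abelian group is a module over the ring ℤ of integers, where the scalar product n*a just means “add a to itself n times” (and take the additive inverse for negative n). Since most integers have no multiplicative inverse in ℤ, this is a module but not a vector space.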

Basics: Optimization

Yet another term that we frequently hear, but which is often not properly understood, is the concept of optimization. What is optimization? And how does it work?

The idea of optimization is quite simple. You have some complex situation, where some variable of interest (called the target) depends, through some complex relationship, on a set of other variables. Optimization is the process of trying to find an assignment of values to those other variables (called parameters) that produces a maximum or minimum value of the target variable, called the optimum or optimal value.

The practice of optimization is quite a bit harder. It depends greatly on the nature of the relationships between the variables. The general process of finding an optimum is called programming – not in the sense of programming a computer; the term predates the invention of computers.
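Just to pin down the vocabulary, here’s a bare-bones Haskell sketch (my own illustration – real optimization methods are far cleverer than this brute-force search): the target is a function of one parameter, and we look for the parameter value that minimizes it over a grid of candidate values.

    -- The target: a function of one parameter, to be minimized.
    target :: Double -> Double
    target x = (x - 3) ^ 2 + 1

    -- Brute-force optimization: try every candidate parameter value,
    -- and keep the one that gives the smallest target value.
    optimize :: (Double -> Double) -> [Double] -> Double
    optimize f candidates = snd (minimum [ (f x, x) | x <- candidates ])

    -- optimize target [0, 0.5 .. 6] == 3.0 (the optimum); target 3 == 1.0 (the optimal value)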

Continue reading

Crazy Stack Games: Programming in Kipple

Insane Stacking

Today’s pathology is playing with stacks. Lots and lots of stacks. Stacks for data. Stacks for control. Stacks out the wazoo. It’s called Kipple, for no particularly good reason that I know of.

Kipple happens to be one of the pathological languages that I highly recommend trying to write some programs in. It’s crazy enough to be a challenge, but there’s a basic logic to how you program in it – which makes figuring out how to write programs rewarding rather than just frustrating.

Continue reading