One of the things that I find niftiest about category theory is category diagrams. A lot of things that normally turn into complex equations or long-winded logical statements can be expressed in diagrams by capturing the things that you’re talking about in a category, and then using category diagrams to express the idea that you want to get across.
A category diagram is a directed graph, where the nodes are objects from a category, and the edges are morphisms. Category theorists say that a graph commutes if, for any two paths through arrows in the diagram from node A to node B, the composition of all edges from the first path is equal to the composition of all edges from the second path.
As usual, an example will make that clearer.
This diagram is a way of expressing the associativity property of morphisms: f º (g º h) = (f º g) º h. The way that the diagram illustrates this is: (g º h) is the morphism from A to C. When we compose that with f, we wind up at D. Alternatively, (f º g) is the arrow from B to D; if we compose that with h, we wind up at D. The two paths, f º (A → C) and (B → D) º h, are both paths from A to D, therefore if the diagram commutes, they must be equal.
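As a rough illustration (not part of the diagram itself), here’s a small Haskell sketch where types stand in for the objects A, B, C, D and ordinary functions stand in for h, g, and f; the specific types and functions are arbitrary choices, just there so the two paths can actually be computed and compared:

```haskell
-- Illustrative sketch: types play the role of objects, functions of morphisms.
h :: Int -> Integer      -- h : A -> B
h = toInteger

g :: Integer -> String   -- g : B -> C
g = show

f :: String -> Int       -- f : C -> D
f = length

pathOne :: Int -> Int    -- f º (g º h)
pathOne = f . (g . h)

pathTwo :: Int -> Int    -- (f º g) º h
pathTwo = (f . g) . h

main :: IO ()
main = print (pathOne 12345 == pathTwo 12345)   -- True: the two paths agree
```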
Let’s look at one more diagram, which we’ll use to define an interesting concept, the principal morphism between two objects. The principal morphism is a single arrow from A to B, and any composition of morphisms that goes from A to B will end up being equivalent to it.
In diagram form, a morphism m is principal if (∀ x : A → A) (∀ y : A → B), the following diagram commutes.
In words, this says that f is a principal morphism if for every endomorphic arrow x, and for every arrow y from A to B, f is the result of composing x and y. There’s also something interesting about this diagram that you should notice: A appears twice in the diagram! It’s the same object; we just draw it in two places to make the commutation pattern easier to see. A single object can appear in a diagram as many times as you want to make the pattern of commutation easy to see. When you’re looking at a diagram, you need to be a bit careful to read the labels to make sure you know what it means. (This paragraph was corrected after a commenter pointed out a really silly error; I originally said “any identity arrow”, not “any endomorphic arrow”.)
One more definition by diagram: x and y are a retraction pair, and A is a retract of B (written A < B) if the following diagram commutes:
That is, x : A → B, and y : B → A are a retraction pair if y º x = 1A.
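As a hedged Haskell sketch of a retraction pair (the types Bool and Int and the two concrete functions below are purely illustrative choices, not anything canonical), x embeds A into B and y maps B back onto A so that y º x is the identity on A:

```haskell
-- Illustrative retraction pair: Bool plays the role of A, Int plays B.
x :: Bool -> Int      -- x : A -> B
x False = 0
x True  = 1

y :: Int -> Bool      -- y : B -> A
y 0 = False
y _ = True

-- y . x is the identity on Bool, so Bool is a retract of Int.
retractionHolds :: Bool
retractionHolds = all (\b -> (y . x) b == b) [False, True]   -- True
```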
Just a minor correction — as you’ve defined things, x is an endomorphism, not necessarily the identity.
A single programming language?
I’ve been reading Good Math, Bad Math for a while now, and it has a fair number of interesting posts…
Hi Mark,
Ever since the first post I read on Turing machines I’ve been a loyal visitor to your site. Kudos and keep up the good work.
I would consider myself a math novice and most of my formal education in math doesn’t go beyond high school. However, I’ve still kept a keen interest in math on a broad range of topics; mainly statistics & discrete math. I wanted to give you some background about myself first so that it could help you understand where my questions were coming from and how you could approach answering them.
“Category theorists say that a graph commutes if, for any two paths through arrows in the diagram from node A to node B, the composition of all edges from the first path is equal to the composition of all edges from the second path.”
Commuting seems oddly similar to the hypotenuse theorem. So does a morphism carry some intrinsic value (distance)?
“There’s also something interesting about this diagram that you should notice: A appears twice in the diagram! It’s the same object; we just draw it in two places to make the commutation pattern easier to see.”
So you’re implying that a diagram is drawn in 3D space? Then any three points that commute also create a “plane” or vector. Multiple vectors create a polygon, which I know in CS are the building blocks for graphical objects. Which is why you’re writing about cat. theory before your topology papers. Which leads me to my next question: when the vectors create a polygon, what is in between the polygons? Empty space? I ask because how many polygons would be needed to create our visual world? Which partly goes hand in hand with: how small of a “polygon” can our eye detect?
I know the last few deal more with science than math, but I was hoping maybe you could help answer them with your topology papers or at least direct me to some resources that could.
I want to apologize for the long and hefty post…along with my general ignorance, apparent with my questions =x…but I couldn’t find your email addy on your site and figured this was the only means of contacting you.
-Mike
Mike:
Don’t try to read too much into the diagrams. There is no concept of distance; there is no deep meaning to putting a single object in two places in a diagram.
A diagram is nothing but a way of drawing arrows that represent morphisms. The positioning of objects does not have any meaning; the length of the arrows does not have any meaning; the number of times that a particular object appears in the graph doesn’t have any meaning; and the geometry of how the arrows appear doesn’t mean anything. What has meaning is arrows between objects – and the only meaning to that is that there is a morphism between the objects.
The point of the diagrams is to provide a visual way of seeing the morphisms – nothing more.
The only meaning that you’re looking for when you read a category diagram is what we call a diagram chase: follow the paths formed by composing arrows. The meaning of the diagram – and the meaning of “commutation” – is that if two paths start in the same place, and end in the same place, they are in some sense equivalent.
Don’t try to use geometric intuition on a category diagram. The commutation notion really has nothing to do with anything like the hypotenuse of a triangle. It just happens that in the diagram I used you see triangles, so it looks like you’re playing with an equivalence between two right triangles – but that’s just because it’s a very simple example where we’re looking at some very short paths that I happened to draw in that shape.
If I had a category with objects A, B, C, D, and F; and I had morphisms f : A → B, g : B → C, h : C → D, i : D → F, j : A → C, k : C → F, then the paths i º h º g º f and k º j commute – both are paths of morphism compositions from A to F. I tried to upload a diagram like that, but I can’t seem to get the diagram to show in the comments. (I’m still learning our software here at SB.)
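For what it’s worth, here’s an illustrative Haskell rendering of that example: types stand in for the objects A, B, C, D, F, the concrete functions are made up, and j and k are defined as composites so that the two paths from A to F agree by construction:

```haskell
-- Illustrative only: types stand in for the objects A, B, C, D, F.
f :: Int -> Integer        -- f : A -> B
f = toInteger

g :: Integer -> String     -- g : B -> C
g = show

h :: String -> Int         -- h : C -> D
h = length

i :: Int -> Bool           -- i : D -> F
i = even

j :: Int -> String         -- j : A -> C, chosen so the diagram commutes
j = g . f

k :: String -> Bool        -- k : C -> F, chosen so the diagram commutes
k = i . h

pathsAgree :: Int -> Bool  -- i º h º g º f  versus  k º j
pathsAgree a = (i . h . g . f) a == (k . j) a   -- True for every a
```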
Of course, your diagrams are missing some morphisms, for instance the other diagonal on the first diagram – not that its existence isn’t implied, heh.
When did retracts suddenly become cooler than sections? Sections were, like, totally in vogue back when I was learning this stuff!
There are, of course, the salient questions: “why is a retract called a retract?”, or, as I asked myself: “why is a section called a section?”. (I confess that I actually don’t know the answer to the first one).
(Of course, not all diagrams commute : ) There are two very natural ways to get commutative diagrams from non-commutative ones. They arise because diagrams correspond to systems of equations, and commutative diagrams correspond to solutions to systems of equations, and there are two basic ways to “solve” systems of equations.)
As usual, all these names come from topology. There, a retract is a map of a space to a subspace that is the identity on that subspace. The name is sort of obvious, then.
It probably just depends on your field. I still think sections are far cooler than retracts. 🙂
Probably one of the number one annoyances in my field (algebraic geometry) is the fact that hardly anyone ever shows a given diagram commutes. I’ve been reading through Brian Conrad’s Grothendieck Duality and Base Change, which was written a few years ago because a major result from the 60s relies on the commutativity of a certain diagram. Of course, no one proved commutativity of the diagram at the time, and it turned out to be highly nontrivial.
First off, you jump from using the symbol “m” for a principal morphism to “f” above. It’s a little confusing.
I’m still firmly in the “So what?” corner on category theory – your diagram above certainly seems more obscure than the (simple!) statement: f º (g º h) = (f º g) º h.
Mark probably has to cover a bit more category theory before you can get into the meat of why we care about it. However, I’ll point out that, even though we talk about morphisms, the generality of the definition allows them to be almost anything.
Here’s a cute example: a group G is the same thing as a category having only one object, where every endomorphism is an isomorphism (there’s one endomorphism for each element of G). This seems like a needless abstraction, until you want to consider groupoids — categories where every morphism is an isomorphism. This definition only comes across as natural when you’re working in the realm of category theory.
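As a purely illustrative Haskell sketch of that idea (using the integers under addition as the group, which is my own choice here, not part of the example): the arrows of the one-object category are the group elements, composition is the group operation, 0 is the identity arrow, and every arrow has an inverse, so every morphism is an isomorphism.

```haskell
-- Illustrative: a one-object category whose arrows are the integers under addition.
newtype Arrow = Arrow Integer deriving (Eq, Show)   -- an arrow from the single object to itself

identityArrow :: Arrow
identityArrow = Arrow 0

compose :: Arrow -> Arrow -> Arrow
compose (Arrow m) (Arrow n) = Arrow (m + n)

inverseOf :: Arrow -> Arrow
inverseOf (Arrow n) = Arrow (negate n)

-- Every arrow composes with its inverse to the identity on both sides,
-- so every morphism in this category is an isomorphism.
isIso :: Arrow -> Bool
isIso a = compose a (inverseOf a) == identityArrow
       && compose (inverseOf a) a == identityArrow
```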
That should read “even though we talk about morphisms as though they’re functions…”
>Why would we use this 2D notation,
Because all of the basic notions for categories can be expressed in 2D. Of course, if you have a look at John Baez’s “this week’s finds” articles (google him), you’ll find that there is quite a lot of cool stuff that looks at higher-dimensional diagrams, via so-called n-categories.
>especially when (as noted in a previous comment) it encourages people to assume that some diagrams commute even when we don’t actually know that?
You can look at it that way. But you can also look at it this way: “looking at things with a categorial vocabulary gives us a lot of hints that certain things might hold” : ) But yes, sometimes people take its suggestiveness for granted. But that happens in a lot of fields of maths : ) Anyway, I mean to say that on the whole, once you get used to them, diagrams can totally become part of your vernacular language.
Yet another thing — using language like “f º (g º h) = (f º g) º h” becomes unintelligible when you start working with large diagrams with lots of maps floating around (look up the Snake Lemma if you want a good example). Using diagrams makes life so much easier.
Just to illustrate Davis’s last point, check out the diagram on p. 38 of this. (Of course, I’m cheating a bit because that’s really a 2-category diagram, but whatever.)
I am a Smalltalk programmer by profession and a lover of functional languages. Ever since I learned about Haskell I wanted to learn about CT. Seems that your blog, which I found just this week, is exactly what I was looking for. Thank you for it!
I was puzzled by your correction notes in the text. Can you give me an example of a situation where an endomorphic arrow is not an identity arrow? You say that “an endomorphism is an arrow f : a → b such that a = b”. On the other hand, “1b is the identity morphism for the object b: the unique morphism such that for all other arrows f, if f : a → b, then 1b º f = f.” Does this mean that there can be any number of arrows a → a but only one of them is the identity arrow? What am I missing here?
Again, thank you for the great work. Peter
Peter:
What I think you’re missing is the idea that an object is *not* really an atomic thing.
So, for example, any function f : Z → Z (where Z is the set of integers) is an endomorphic arrow for the object Z in the category of sets; only f(x) = x is the identity morphism; f(x) = x² is an endomorphism, but not an identity.
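In Haskell terms (purely as an illustration, with Integer standing in for Z), both of the following are endomorphisms of Integer, but only the first is the identity morphism:

```haskell
-- Both are arrows Integer -> Integer, i.e. endomorphisms of Integer.
identityZ :: Integer -> Integer   -- the identity morphism on Integer
identityZ n = n

square :: Integer -> Integer      -- an endomorphism that is not the identity
square n = n * n
```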
Yes, this is exactly the point. I got confused by the uniform use of lower case characters, where “b” is an object in Cat parlance in the definition of endomorphism whereas it is an atomic object in the definition of identity. Thank you for your help!
I’m confused by the second diagram. Given the description in the following paragraph I’d expect the diagram to be like this:
http://www.tothepowerofdisco.com/downloads/diagram.png
The y and m labels seem misplaced in the current one. Am I missing something?
I find the comment explaining the difference between endomorphism and identity most revealing.
In my readings on CT so far I have often found myself on the “so what?” side, simply because subtle but important points (for a beginner at least) were often overlooked in introductory texts.
I recall that a morphism would be principal if for every y there exists an x that makes your diagram commute…
I could be wrong of course, but the link below agrees with me. And the way you put it would give quite a strong notion…
http://www.ling.ohio-state.edu/~plummer/courses/winter09/ling681/asperti-longo/1Cat.pdf