One of the more advanced topics in topology that I’d like to get to is homology. Homology is a major topic that goes beyond just algebraic topology, and it’s really very interesting. But to understand it, it’s useful to have an understanding of some basics that I’ve never written about. In particular, homology uses chains of modules. Modules, in turn, are a generalization of the idea of a vector space. I said a little bit about vector spaces when I was writing about the gluing axiom, but I wasn’t complete or formal in my description of them. (Not to mention the amount of confusion that I caused by sloppy writing in those posts!) So I think it’s a good idea to cover the idea in a fresh setting here.
So, what’s a vector space? It’s yet another kind of abstract algebra. In this case, it’s an algebra built on top of a field (like the real numbers), whose values form a set of objects with two operations: addition of two vectors, and scaling of a vector by a value from the field.
To define a vector space, we start by taking something like the real numbers: a set whose values form a field. We’ll call that basic field F, and the elements of F we’ll call scalars. We can then define a vector space over F as a set V whose members are called vectors, and which has two operations:
- Vector Addition
- An operation mapping two vectors to a third vector, +:V×V→V
- Scalar Multiplication
- An operation mapping a scalar and a vector to another vector: *:F×V→V
Vector addition forms an abelian group over V, and scalar multiplication distributes over both vector addition and addition in the scalar field, and is compatible with multiplication in the field. To be complete, this means that the following properties hold:
- (V,+) is an abelian group
- Vector addition is associative: ∀a,b,c∈V: a+(b+c)=(a+b)+c
- Vector addition has an identity element 0: ∀a∈V: a+0=0+a=a.
- Vector addition has inverse elements: ∀a∈V:(∃b∈V:a+b=0). The additive inverse of a vector a is normally written -a. (Up to this point, this defines (V,+) as a group.)
- Vector addition is commutative: ∀a,b∈V: a+b=b+a. (The addition of this commutative rule is what makes it an abelian group.)
- Scalar Multiplication is Distributive
- Scalar multiplication is distributive over vector addition: ∀a∈F,∀b,c∈V, a*(b+c)=a*b+a*c
- Scalar multiplication is distributive over addition in F: ∀a,b∈F,∀c∈V: (a+b)*c = (a*c) + (b*c).
- Scalar multiplication is associative with multiplication in F: ∀a,b∈F,∀c∈V: (a*b)*c = a*(b*c).
- The multiplicative identity for multiplication in F is also the identity element for scalar multiplication: ∀a∈V: 1*a=a.
So what does all of this mean? It really means that a vector space is a structure over a field where the elements can be added (vector addition) or scaled (scalar multiplication). Hey, isn’t that exactly what I said at the beginning?
One obvious example of a vector space is a Euclidean space. Vectors are arrows from the origin to some point in the space, and so they can be represented as ordered tuples. So for example, ℝ³ is the three-dimensional Euclidean space; points (x,y,z) are vectors. Adding two vectors: (a,b,c)+(d,e,f)=(a+d,b+e,c+f); and scalar multiplication: x(a,b,c)=(xa,xb,xc).
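To make that concrete, here’s a minimal Python sketch of ℝ³ as a vector space, with a couple of the axioms spot-checked numerically (the names vadd and smul are just my own labels for the two operations):

```python
# A minimal sketch of the vector space R^3: vectors as 3-tuples of floats.
def vadd(u, v):
    # vector addition: componentwise sum
    return tuple(a + b for a, b in zip(u, v))

def smul(c, v):
    # scalar multiplication: scale each component by c
    return tuple(c * a for a in v)

u, v = (1.0, 2.0, 3.0), (4.0, 5.0, 6.0)
print(vadd(u, v))    # (5.0, 7.0, 9.0)
print(smul(2.0, u))  # (2.0, 4.0, 6.0)

# Spot-check two of the axioms on these particular vectors:
assert vadd(u, v) == vadd(v, u)                                   # commutativity
assert smul(2.0, vadd(u, v)) == vadd(smul(2.0, u), smul(2.0, v))  # distributivity
```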
Following the same basic idea as the Euclidean spaces, we can generalize to matrices: the set of all matrices of a fixed size over F forms a vector space. There are also ways of creating vector spaces using polynomials, various kinds of functions, differential equations, etc.
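For instance, polynomials with real coefficients form a vector space under coefficient-wise addition and scaling. A quick sketch, representing a polynomial as its list of coefficients (lowest degree first):

```python
# Polynomials over R as a vector space, represented as coefficient lists
# [a0, a1, a2, ...] (lowest-degree coefficient first).
from itertools import zip_longest

def poly_add(p, q):
    # vector addition: add coefficients term by term
    return [a + b for a, b in zip_longest(p, q, fillvalue=0.0)]

def poly_scale(c, p):
    # scalar multiplication: scale every coefficient
    return [c * a for a in p]

# (1 + 2x) + (3x + 4x^2) = 1 + 5x + 4x^2
print(poly_add([1.0, 2.0], [0.0, 3.0, 4.0]))  # [1.0, 5.0, 4.0]
print(poly_scale(2.0, [1.0, 2.0]))            # [2.0, 4.0]
```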
In homology, we’ll actually be interested in modules. A module is just a generalization of the idea of a vector space. But instead of taking the scalars from a field the way you do in a vector space, in a module the scalars come from a general ring, which is less constrained: a field is a commutative ring with multiplicative inverses for all values except 0, and with distinct additive and multiplicative identities. So a module does not require multiplicative inverses for the scalars, nor does it require multiplication of scalars to be commutative.
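One of the simplest examples of a module that isn’t a vector space: any abelian group is a module over the ring of integers, where n*a just means adding a to itself n times. Here’s a rough sketch using ℤ/6ℤ (my own choice of example, just for illustration):

```python
# The abelian group Z/6Z viewed as a module over the ring Z.
# The scalars (integers) mostly have no multiplicative inverses,
# so this cannot be a vector space.
N = 6

def madd(a, b):
    # addition in Z/6Z
    return (a + b) % N

def zscale(n, a):
    # scalar multiplication by an integer n: a added to itself n times
    return (n * a) % N

print(zscale(4, 5))  # 4*5 = 20 = 2 (mod 6)
# There is no integer n with n*2 = 1 (mod 6), so the scalar 2 can't be
# inverted, which could never happen for a nonzero scalar in a vector space.
```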
There’s a typo in the def’n of scalar multiplication; “another scalar” should be “another vector.”
A comment I found in the Wikipedia article about Euclidean spaces:
“A final wrinkle is that Euclidean space is not technically a vector space but rather an affine space, on which a vector space acts. Intuitively, the distinction just says that there is no canonical choice of where the origin should go in the space, because it can be translated anywhere.”
Hummmmm…..
Very nice discussion! I’m looking forward to more about homology–I’ve never heard of modules, but I’m familiar with vector spaces. One thing though–since a “basis” is a technical term when talking about vector spaces, could you not use the term to describe the field the vector space is over?
Not having encountered modules before, I was going to ask why we would want to weaken such a ubiquitous concept as a scalar field to a ring. But then I peeked into homology. So a chain is a “connected” sequence; cute.
Torbjörn: There are many reasons why we’d want to be able to look into modules instead of only vector spaces. The most fundamental is that we keep running into them:
1) Ideals are submodules. And by looking at modules for ring theory instead of ideals, suddenly things get a little bit neater.
2) Consider the polynomial ring over the integers. This is a ring, not a field. Consider … say … square matrices of some fixed size with integer entries. This forms a module over the polynomial ring with (aₙxⁿ + … + a₀)M = aₙMⁿ + … + a₀E (with E the identity matrix; see the sketch after this comment).
3) Consider … say … the set of all differentiable functions in n variables, and take the ring of differential operators in n variables. Then that set ends up being a module over that ring, without any fields hanging around.
There are examples aplenty; and the net effect is that we want to weaken the assumption, because there’s a lot we can do by weakening it.
(by the way: group representation theory is all about modules over the group algebra; so that’s “just” module theory as well…)
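To make example (2) above concrete, here’s a rough numpy sketch (the name poly_act is my own, just for illustration) of the polynomial ring acting on square integer matrices:

```python
# Sketch of 2x2 integer matrices as a module over the polynomial ring Z[x],
# with the action (a_n x^n + ... + a_0)*M = a_n M^n + ... + a_0 E.
import numpy as np

def poly_act(coeffs, M):
    # coeffs = [a0, a1, ..., an] represents a0 + a1*x + ... + an*x^n
    result = np.zeros_like(M)
    power = np.eye(M.shape[0], dtype=M.dtype)  # starts at E = M^0
    for a in coeffs:
        result = result + a * power
        power = power @ M
    return result

M = np.array([[1, 1],
              [0, 1]])
print(poly_act([2, 3], M))  # (2 + 3x)*M = 2E + 3M = [[5, 3], [0, 5]]
```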
It seems to me that this:
∀a∈V: 1*a=a
is a theorem once you have this:
∀a,b∈F,∀c∈V: (a*b)*c = a*(b*c)
For any v∈V and nonzero a∈F, you have v=(a*a⁻¹)*v = a*w (with w=a⁻¹*v∈V).
Then 1*v = 1*(a*w) = (1*a)*w = a*w = v.
v=(a*a⁻¹)*v
Right here you’re begging the question.
Doh! And it took me embarrassingly long to see it even after I had it pointed out to me…
My intuition is still screaming at me that there has to be some way that one of those follows from the other, but everything I’m coming up with has the same problem.
Sorry, but no chance. It is consistent with every rule except the last one if you define a*v=0 ∀a∈F,∀v∈V.
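To see that concretely, here’s a quick Python check of the counterexample: defining the scalar action to send everything to the zero vector satisfies every scalar-multiplication axiom except the identity one.

```python
# Counterexample: scalar multiplication defined as a*v = (0,0) for all a, v
# (vectors here are pairs of floats). Every scalar-multiplication axiom
# holds trivially, except 1*v = v.
def zero_smul(a, v):
    return (0.0, 0.0)

def vadd(u, v):
    return (u[0] + v[0], u[1] + v[1])

u, v = (1.0, 2.0), (3.0, 4.0)
a, b = 2.0, 5.0
assert zero_smul(a, vadd(u, v)) == vadd(zero_smul(a, u), zero_smul(a, v))
assert zero_smul(a + b, u) == vadd(zero_smul(a, u), zero_smul(b, u))
assert zero_smul(a * b, u) == zero_smul(a, zero_smul(b, u))
assert zero_smul(1.0, u) != u  # only the identity axiom fails
```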
Ahh. I suppose the axioms that intuitively look like they should be theorems are generally the most interesting kind…
I think what I was actually basing it on was that for any vector v there should be a scalar a and vector w such that v=a*w, but it looks like you need to make the existence of a scalar identity for scalar-vector multiplication an axiom to guarantee that.