It’s been quite a while since I did any meaningful writing on type theory. I spent a lot of time introducing the basic concepts, and trying to explain the intuition behind them. I also spent some time describing what I think of as the platform: stuff like arity theory, which we need for talking about type construction. But once we moved beyond the basic concepts, I ran into a bit of a barrier – not so much in understanding type theory, but in finding ways of presenting it that are approachable.
I’ve been struggling to figure out how to move forward in my exploration of type theory. The logical next step is working through the basics of intuitionistic logic with type theory semantics. The problem is, that’s pretty dry material. I’ve tried to put together a couple of approaches that skip over this, but it’s really necessary.
For someone like me, coming from a programming language background, type theory really hits its stride when we look at type systems and particularly type inference. But you can’t understand type inference without understanding the basic logic. In fact, you can’t describe the algorithm for type inference without referencing the basic inference rules of the underlying logic. Type inference is nothing but building type theoretic proofs based on a program.
So here we are: we need to spend some time looking at the basic logic of type theory. We’ve looked at the basic concepts that underlie the syntax and semantics, so what we need to do next is learn the basic rules that we use to build logical statements in the realm of type theory. (If you’re interested in more detail, this is material from chapter 5 of “Programming in Martin-Löf’s Type Theory”, which is the text I’m using to learn this material.)
Martin-Löf’s type theory is a standard intuitionistic predicate logic, so we’ll go through the rules using standard sequent notation. Each rule is written as a sequent, which looks sort-of like a long fraction. The “numerator” section is a collection of things which we already know are true; the “denominator” is something that we can infer given those truths. Each statement has the form $A\ [B]$, where $A$ is a statement, and $B$ is a set of assumptions. For example, $A \ \mathrm{true} \ [x \in A]$ means that $A$ is true, provided we’re in a context that includes the assumption $x \in A$.
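Schematically, a rule with premises $J_1, \ldots, J_n$ and conclusion $J$ (the $J$s here are just placeholder judgements) looks roughly like this:

$$\frac{J_1 \qquad J_2 \qquad \cdots \qquad J_n}{J}$$

and a judgement made under assumptions is written as something like $A \ [x_1 \in A_1, \ldots, x_n \in A_n]$, with the assumptions listed inside the brackets.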
Personally, I find that this stuff feels very abstract until you take the logical statements, and interpret them in terms of programming. So throughout this post, I’ll do that for each of the rules.
With that introduction out of the way, let’s dive into the first set of rules.
Simple Introductions
We’ll start off with a couple of really easy rules, which allow us to introduce a variable given a type, or a type given a variable.
Introducing Elements
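In rough sequent form (the name $x$ is just an arbitrary fresh variable), the rule is:

$$\frac{A \ \mathrm{type}}{x \in A \ [x \in A]}$$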
This is an easy one. It says that if we know that $A$ is a type, then we can introduce the statement that $x \in A$, and add that as an assumption in our context. What this means is also simple: since our definition of type says that it’s only a type if it has an element, then if we know that $A$ is a type, we know that there must be an element of $A$, and so we can write statements using it.
If you think of this in programming terms, the statement $A \ \mathrm{type}$ is saying that $A$ is a type. To be a valid type, there must be at least one value that belongs to the type. So you’re allowed to introduce a variable that can be assigned a value of the type.
Introducing Propositions as Types
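As a rule, it looks something like:

$$\frac{a \in A}{A \ \mathrm{true}}$$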
This is almost the mirror image of the previous. A type and a true proposition are the same thing in our type theory: a proposition is just a type, which is a set with at least one member. So if we know that there’s a member of the set A, then A is both a type and a true proposition.
Equality Rules
We start with the three basic rules of equality: equality is reflexive, symmetric, and transitive.
Reflexivity
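Roughly, there’s one reflexivity rule for elements and one for types:

$$\frac{a \in A}{a = a \in A} \qquad\qquad \frac{A \ \mathrm{type}}{A = A}$$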
If $a$ is an element of a type $A$, then $a$ is equal to itself in type $A$; and if $A$ is a type, then $A$ is equal to itself.
The only confusing thing about this is just that when we talk about an object in a type, we make reference to the type that it’s a part of. This makes sense if you think in terms of programming: you need to declare the type of your variables. “3: Int” doesn’t necessarily mean the same object as “3: Real”; you need the type to disambiguate the statement. So within type theory, we always talk about values with reference to the type that they’re a part of.
Symmetry
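Again roughly, one rule for elements and one for types:

$$\frac{a = b \in A}{b = a \in A} \qquad\qquad \frac{A = B}{B = A}$$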
No surprises here – standard symmetry.
Transitivity
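The transitivity rules follow the same pattern; roughly:

$$\frac{a = b \in A \qquad b = c \in A}{a = c \in A} \qquad\qquad \frac{A = B \qquad B = C}{A = C}$$

If $a$ equals $b$ and $b$ equals $c$ in a type $A$, then $a$ equals $c$ in $A$; and similarly for equality of types.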
Type Equality
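These rules look something like:

$$\frac{a \in A \qquad A = B}{a \in B} \qquad\qquad \frac{a = b \in A \qquad A = B}{a = b \in B}$$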
These are pretty simple, and follow from the basic equality rules. If we know that $a$ is a member of the type $A$, and we know that the type $A$ equals the type $B$, then obviously $a$ is also a member of $B$. Along the same lines, if we know that $a = b$ in type $A$, and $A$ equals $B$, then $a = b$ in the type $B$.
Substitution Rules
We’ve got some basic rules about how to formulate some simple meaningful statements in the logic of our type theory. We still can’t do any interesting reasoning; we haven’t built up enough inference rules. In particular, we’ve only been looking at simple, atomic statements using parameterless predicates.
We can use those basic rules to start building upwards, to get to parametric statements, by using substitution rules that allow us to take a parametric statement and reason with it using the non-parametric rules we talked about above.
For example, a parametric statement can be something like $C(x) \ \mathrm{type} \ [x \in A]$, which says that applying $C$ to a value which is a member of type $A$ produces a value which is a type. We can use that to produce new inference rules like the ones below.
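The first one, roughly:

$$\frac{C(x) \ \mathrm{type} \ [x \in A] \qquad a \in A}{C(a) \ \mathrm{type}}$$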
This says that if we know that, given an $x$ of type $A$, $C(x)$ will produce a type, and we know that the value $a$ is of type $A$, then $C(a)$ will be a type. In logical terms, it’s pretty straightforward; in programming terms it’s even clearer: if $C$ is a function on type $A$, and we pass it a value of type $A$, it will produce a result. In other words, $C(x)$ is defined for all values of type $A$.
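The second one works on equal elements; roughly:

$$\frac{C(x) \ \mathrm{type} \ [x \in A] \qquad a = b \in A}{C(a) = C(b)}$$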
This is even simpler: if $C$ is a function on type $A$, then given two values that are equal in type $A$, $C$ will produce the same result for those values.
Of course, I’m lying a bit. In this stuff, $C$ isn’t really a function: it’s a logical statement. $C(x)$ isn’t quite a function application: it’s a logical statement which includes the symbol $x$; when we say $C(a)$, what we mean is the logical statement $C$, with the object $a$ substituted for the symbol $x$. But I think the programming metaphors help clarify what it means.
Using those two, we can generate more:
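For example, something like:

$$\frac{c(x) \in C(x) \ [x \in A] \qquad a \in A}{c(a) \in C(a)}$$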
This one becomes interesting. $C(x)$ is a proposition which is parametric in $x$. Then $c(x)$ is a proof-element: it’s an instance of $C(x)$ which proves that $C(x)$ is a type, and we can see $c$ as a computation which, given an element $a$ of $A$, produces an instance of $C(a)$. Then what this judgement says is that given an instance $a$ of type $A$, we know that $c(a)$ is an instance of type $C(a)$. This will become very important later on, when we really get into type inference and quantification and parametric types.
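And its equality counterpart, roughly:

$$\frac{c(x) \in C(x) \ [x \in A] \qquad a = b \in A}{c(a) = c(b) \in C(a)}$$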
This is just a straightforward application of equality to proof objects.
There are more of these kinds of rules, but I’m going to stop here. My goal isn’t to get you to know every single judgement in the intuitionistic logic of type theory, but to give you a sense of what they mean.
That brings us to the end of the basic inference rules. The next thing we’ll need to cover is ways of constructing new types from existing ones. The two main tools for that are enumeration types (basically, types consisting of a group of ordered values), and cartesian products of multiple types. With those, we’ll be able to find ways of constructing most of the types we’ll want to use in programming languages.