Lambda calculus started off with the simple, untyped lambda calculus that we’ve been talking about so far. But one of the great open questions about lambda calculus was: was it sound? Did it have a valid model?
Church found that it was easy to produce some strange and nonsensical expressions using the simple lambda calculus. In order to work around those problems and end up with a consistent system, Church introduced the concept of types, producing the simply typed lambda calculus. Once types hit the scene, things really went wild; the type systems for lambda calculi have never stopped developing: people are still finding new things to do by extending the LC type system today! Most lambda-calculus-based programming languages are based on the Hindley-Milner type system, which is a restriction of one of the standard sophisticated typed lambda calculi, called System F. There's even a Lambda Cube which categorizes the different type abstractions for lambda calculus (but alas, despite its name, it's not related to the time cube). Once people really started to understand types, they realized that the untyped lambda calculus was really just a pathologically simple instance of the simply typed lambda calculus: a typed LC with only one base type.
The semantics of lambda calculus are easiest to talk about in a typed version. For now, I'll talk about the simplest typed LC, known as the simply typed lambda calculus. One of the really amazing things about this, which I'll show below, is that the simply typed lambda calculus is completely semantically equivalent to an intuitionistic propositional logic: each type in the program is a proposition in the logic; each β reduction corresponds to an inference step; and each complete function corresponds to a proof!
Types
The main thing that typed lambda calculus adds to the mix is a concept called base types. In a typed lambda calculus, you have some universe of atomic values which you can manipulate; those values are partitioned into the *base types*. Base types are usually named by single lower-case Greek letters: so, for example, we could have a type "σ", which consists of the set of natural numbers; a type "τ", which corresponds to boolean true/false values; and a type "γ", which corresponds to strings.
Once we have base types, we can talk about the type of a function. A function maps from a value of one type (the type of the parameter) to a value of a second type (the type of the return value). For a function that takes a parameter of type "γ" and returns a value of type "δ", we write its type as "γ → δ". "→" is called the _function type constructor_; it associates to the right, so "γ → δ → ε" parses as "γ → (δ → ε)".
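As a concrete illustration (a Haskell sketch, not part of the calculus itself; the names are mine), Haskell's arrow type constructor associates to the right in exactly the same way, which is what makes currying work:

```haskell
-- A two-argument function; its type parses as Integer -> (Integer -> Integer).
add :: Integer -> Integer -> Integer
add x y = x + y

-- Partial application uses the right-associated reading: "add 3" has the
-- "inner" function type Integer -> Integer.
addThree :: Integer -> Integer
addThree = add 3

main :: IO ()
main = print (addThree 4)  -- prints 7
```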
To apply types to the lambda calculus, we do a couple of things. First, we need a syntax update so that we can include type information in lambda terms. And second, we need to add a set of rules to show what it means for a typed program to be valid.
The syntax part is easy. We add a “:” to the notation; the colon has an expression or variable binding on its left, and a type specification on its right. It asserts that whatever is on the left side of the colon has the type specified on the right side. A few examples:
- "λ x : ν . x + 3": This asserts that the parameter, x, has type ν, which we'll use as the type name for the natural numbers. (In case it's hard to tell, that's a Greek letter "nu" for natural.) There is no assertion of the type of the result of the function; but since we know that "+" is a function with type ν → ν → ν, we can infer that the result type of this function will be ν.
- "(λ x . x + 3) : ν → ν": This is the same as the previous, but with the type declaration moved out, so that it asserts the type for the lambda expression as a whole. This time we can infer that "x : ν", because the function type is ν → ν, which means that the function parameter has type ν.
- "λ x : ν, y : δ . if y then x * x else x": This is a two-parameter function; the first parameter has type ν, and the second has type δ. We can infer the return type, which is ν. So the type of the full function is ν → δ → ν. This may seem surprising at first; but remember that lambda calculus really works in terms of single-parameter functions; multi-parameter functions are a shorthand for currying. So really, this function is λ x : ν . (λ y : δ . if y then x * x else x); the inner lambda is type δ → ν; the outer lambda is type ν → (δ → ν). (There's a rough Haskell rendering of these three examples just after this list.)
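Rendered in Haskell (just a sketch to make the annotations concrete: I'm using Integer for ν and Bool for δ, and the function names are my own):

```haskell
-- λ x : ν . x + 3, with the parameter type declared.
ex1 :: Integer -> Integer
ex1 x = x + 3

-- (λ x . x + 3) : ν → ν, with the type asserted for the whole expression.
ex2 :: Integer -> Integer
ex2 = \x -> x + 3

-- λ x : ν, y : δ . if y then x * x else x, which is really the curried
-- chain of one-parameter functions ν → (δ → ν).
ex3 :: Integer -> Bool -> Integer
ex3 x y = if y then x * x else x
```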
To talk about whether a program is valid with respect to types (aka well-typed), we need to introduce a set of rules for type inference. Then we can verify that the program is type-consistent.
Type inference works in terms of judgements: when we can infer the type of an expression using an inference rule, we call that inference a type judgement. Type inference and judgements allow us to reason about the types in a lambda expression; and if any part of an expression winds up with an inconsistent type judgement, then the expression is invalid. (When Church started doing typed LC, one of his motivations was to distinguish between values representing "atoms" and values representing "predicates"; he was trying to avoid the Gödel-esque paradoxes by using types to ensure that predicates couldn't operate on predicates.)
Type judgements are usually written in a sequent-based notation, which looks like a fraction where the numerator consists of statements that we know to be true, and the denominator is what we can infer from the numerator. In the numerator, we normally have statements using a context, which is a set of type judgements that we already know; it's usually written as an uppercase Greek letter, like Γ. If a type context Γ includes the judgement that a : γ, I'll write that as Γ ⊢ a : γ.
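For concreteness, here's the general shape of such a rule in that fraction notation (a standard rendering; Γ is the context, and Γ ⊢ a : γ reads "in context Γ, a has type γ"):

$$
\frac{\text{judgements we already know (premises)}}{\text{judgement we can infer (conclusion)}}
$$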
Rule 1: Type Identity
This is the simplest rule: if we have no other information except a declaration of the type of a variable, then we know that that variable has the type it was declared with.
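In sequent form, the standard way to write this rule is (the premise above the line is empty, because the declaration in the context is all we need):

$$
\frac{\;}{\Gamma,\, x : \alpha \;\vdash\; x : \alpha}
$$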
Rule 2: Type Invariance
This rule is a statement of non-interference. If we know that x : α, then inferring a type judgement about any other term cannot change our type judgement for x.
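The standard sequent rendering of this (often called weakening) is:

$$
\frac{\Gamma \;\vdash\; x : \alpha}{\Gamma,\, y : \beta \;\vdash\; x : \alpha} \qquad (x \neq y)
$$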
Rule 3: Function Type Inference
This statement allows us to infer function types given parameter types. If we know that the type of the parameter to a function is α, and if, with our knowledge of the parameter type, we know that the type of the term that makes up the body of the function is β, then we know that the type of the function is α → β.
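In the standard sequent form, that's:

$$
\frac{\Gamma,\, x : \alpha \;\vdash\; e : \beta}{\Gamma \;\vdash\; (\lambda x : \alpha .\, e) : \alpha \rightarrow \beta}
$$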
Rule 4: Function Application Inference
This one is easy: if we know that we have a function that takes a parameter of type α and returns a value of type β, then if we apply that function to a value of type α, we'll get a value of type β.
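Written as a sequent rule in the standard way:

$$
\frac{\Gamma \;\vdash\; f : \alpha \rightarrow \beta \qquad \Gamma \;\vdash\; x : \alpha}{\Gamma \;\vdash\; (f\; x) : \beta}
$$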
These four rules are it. If we can take a lambda expression, and come up with a consistent set of type judgements for every term in the expression, then the expression is well-typed. If not, then the expression is invalid.
So let's try taking a look at a simple lambda calculus expression, "λ x y . y x", and seeing how inference works on it.
Without any type declarations or parameters, we don't know the exact type. But we do know that "x" has some type; we'll call that "α"; and we know that "y" is a function that will be applied with "x" as a parameter, so it's got parameter type α, but its result type is unknown. So using type variables, we can say "x:α, y:α→β". We can figure out what "α" and "β" are by looking at a complete expression. So, let's work out the typing of it with x = "3" and y = "λ a:ν . a*a". We'll assume that our type context already includes "*" as a function of type "ν→ν→ν", where ν is the type of natural numbers. (There's a Haskell version of this example, where the compiler does the same inference, after the list below.)
- "(λ x y . y x) 3 (λ a:ν . a * a)": Since 3 is a literal integer, we know its type: 3:ν.
- Since "a:ν" and "*:ν→ν→ν", by rule 4 we can infer that the type of the expression "a*a" is "ν"; therefore, by rule 3, the lambda expression has type "ν→ν". So with type labelling, our expression is now: "(λ x y . y x) (3:ν) ((λ a:ν . (a*a):ν) : ν→ν)".
- So now we know that the parameter "x" of the first lambda must be "ν", and "y" must be "ν→ν"; by rule 4, the type of the application expression "y x" must be "ν"; and then by rule 3, the first lambda has type "ν→(ν→ν)→ν".
- So, for this one, both α and β end up being “ν”, the type of natural numbers.
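Here's that example in Haskell, where GHC runs essentially the same inference (a sketch; the Integer signature just pins α and β down to our natural-number type ν):

```haskell
-- (λ x y . y x) 3 (λ a . a * a): GHC infers x : ν and y : ν → ν,
-- matching the derivation above.
example :: Integer
example = (\x y -> y x) 3 (\a -> a * a)

main :: IO ()
main = print example  -- prints 9
```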
So, now we have a simply typed lambda calculus. The reason that it's simply typed is because the type treatment here is minimal: the only way of building new types is through the unavoidable "→" function type constructor. Other typed lambda calculi include the ability to define parametric types, which are types expressed as functions ranging over types.
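For contrast, here's roughly what a parametric type looks like in Haskell (not something the simply typed calculus can express; the names here are my own illustration):

```haskell
-- 'Pair' is a type-level function: it takes two types and produces a type.
data Pair a b = Pair a b

-- A polymorphic function whose type ranges over all types a and b.
swap :: Pair a b -> Pair b a
swap (Pair x y) = Pair y x
```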
Programs are Proofs
Here's where it gets really fun. Think about the types in the simply typed lambda calculus. Anything which can be formed from the following grammar is a lambda calculus type (a small sketch of this grammar as a data type follows the list):
- type ::= primitive | function | ( type )
- primitive ::= α | β | δ | …
- function ::= type→type
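Here's that grammar written out as a Haskell data type (the constructor names are my own):

```haskell
-- A simply typed lambda calculus type: either a primitive (base) type,
-- or a function type built with the arrow constructor.
data Ty
  = Prim String    -- α, β, δ, …
  | Fun Ty Ty      -- type → type
  deriving (Show, Eq)

-- Example: (α → β) → α
exampleTy :: Ty
exampleTy = Fun (Fun (Prim "alpha") (Prim "beta")) (Prim "alpha")
```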
The catch with that grammar is that you can create type expressions which are valid type definitions, but for which you can't write a single, complete, closed expression that will actually have that type. (A closed expression is one with no free variables.) When there is an expression that has a type, we say that the expression inhabits the type, and that the type is an inhabited type. If there is no expression that can inhabit a type, we say it's uninhabitable. Any expression which either can't be typed using the inference rules, or which is typed to an uninhabitable type, is a type error.
So what's the difference between an inhabitable type and an uninhabitable type?
The answer comes from something called the Curry-Howard isomorphism. For a typed lambda calculus, there is a corresponding intuitionistic logic. A type expression is inhabitable if and only if the type is a provable theorem in the corresponding logic.
The type inference rules in lambda calculus are, in fact, the same as logical inference rules in intuitionistic logic. A type α → β can be seen either as a statement that this is a function that maps from a value of type α to a value of type β, or as a logical statement that if we're given a fact α, we could use that to infer the truth of a fact β.
If there's a logical inference chain from an axiom (a given type assignment) to an inferred statement, then the inferred statement is an inhabitable type. If we have the type α → α, then given an inhabited type α, we know that α → α is inhabitable, because if α is a fact, then α → α is also a fact.
On the other hand, think of a different case: α → β. That's not a theorem, unless there's some other context that proves it. As a function type, that's the type of a function which, without including any context of any kind, can take a parameter of type α and return a value of a different type β. You can't do that: there's got to be some context which provides a value of type β, and to access the context, there's got to be something to allow the function to access its context: a free variable. It's the same thing in the logic and the lambda calculus: you need some kind of context to establish "α→β" as a theorem (in the logic) or as an inhabitable type (in the lambda calculus).
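In Haskell terms (a rough illustration only, since Haskell isn't the simply typed calculus and lets you cheat with non-termination):

```haskell
-- α → α is inhabited: the identity function is a closed term of that type.
identity :: a -> a
identity x = x

-- α → β has no honest closed inhabitant: with no context at all, there's
-- no way to conjure a b out of an a. The only way to give a definition
-- this type is to cheat by never returning:
absurd :: a -> b
absurd x = absurd x  -- loops forever; not a real inhabitant
```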
It gets better. If there is a function whose type is a theorem in the corresponding intuitionistic logic, then the program that has that type is a proof of the theorem. Each β reduction is equivalent to an inference step in the logic. This is what programming-language geeks like me mean when we say "the program is the proof": a well-typed program is, literally, a proof of the proposition that its type represents.
To connect back to the discussion about models: the typed lambda calculus and its corresponding intuitionistic logic are, in a deep sense, just different reflections of the same thing. We know that intuitionistic logic has a valid model. And that, in turn, means that the lambda calculus has a valid model as well. When we show that something is true using the lambda calculus, we can trust that it's not an artifact of an inconsistent system.
Having a B.S. in math rather than computer science, I don’t know much lambda calculus. What I do know now is a result of getting interested in theorem assistants.
For a few main reasons, I’ve settled on trying to learn Isabelle:
http://www.cl.cam.ac.uk/research/hvg/isabelle/
However, something that looks promising is this:
http://proofpeer.appspot.com/
The author of ProofPeer actually comes from the Isabelle group, and he did some work implementing ZF sets in Isabelle, though for ProofPeer, he talks about HOL-ST, which is terminology that comes from HOL4: http://hol.sourceforge.net/ , where HOL-ST was an implementation of ZF sets in HOL4.
The Isabelle group, led by a Brit and a bunch of Germans, is a computer science group. I was initially super excited to learn about theorem assistants, and initially thought badly of the mathematics community for making me discover them on my own, and for not making them an integral part of mathematics education. Partially, I now understand why the math world hasn’t jumped on board the theorem assistant band-wagon. They’re not something you can jump into and start proving math with right away.
Lambda calculus and theorem assistants are heavily related. From my research, it seems that theorem assistant/prover users have primarily chosen HOL (higher-order logic) as their logical foundation.
For proving math, I need ZF sets, but it appears that for implementing math in a more concrete and practical way, HOL wins out.
If you have any secret knowledge of great theorem assistants that are available, please reveal it. I’ve done massive searches and keep coming back to Isabelle.
Thank you for taking the time to write this. I cannot say I am intellectually capable of following in "real-time" (I look forward to rereading and rereading), but I am enjoying the challenge.
What really blows me away is the Girard-Reynolds isomorphism: the idea that as long as all types remain polymorphic (i.e., using Haskell syntax for lack of formatting, a -> b rather than Int -> Bool), the proof can be mechanically derived from the theorem. There is exactly one function with the type a -> a, and that is λx.x (a.k.a. I or id); the same with a -> b -> a, which is λxy.x (a.k.a. K); and even (a -> b -> c) -> (a -> b) -> a -> c, which is λxyz.xz(yz) (a.k.a. S). These functions can be derived mechanically, and there is a program called Djinn (http://www.augustsson.net/Darcs/Djinn/) which will do it for you.
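In Haskell, those three terms look like this (a quick sketch; by the argument above, each is the unique total function at its type):

```haskell
-- I (id): the only total function of type a -> a.
i :: a -> a
i x = x

-- K (const): the only total function of type a -> b -> a.
k :: a -> b -> a
k x _ = x

-- S: the only total function of type (a -> b -> c) -> (a -> b) -> a -> c.
s :: (a -> b -> c) -> (a -> b) -> a -> c
s x y z = x z (y z)
```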
See http://www.youtube.com/watch?v=h0OkptwfX4g (a Google Tech Talk 30 min) for a great explanation.
“the untyped lambda calculus was really just a pathologically simple instance of the simply typed lambda calculus: a typed LC with only one base type.”
I don’t understand how this is true. How does one express the fixed point combinator Y in a simple typed lambda calculus?
Work it out, Benson; just because there’s more than one type around doesn’t mean you have to use more than one.
Hint: there’s not just one Y-combinator.
Well, I was referring to what I think is usually understood as Y, that is, λf. (λx. f (x x)) (λx. f (x x)). In order to express Y in any lambda calculus, one must be allowed to apply terms to themselves, which is not allowed in the simply typed lambda calculus.
“In order to express Y in any lambda calculus, one must be allowed to apply terms to themselves, which is not allowed in the simply typed lambda calculus.”
It’s been a while since I looked at the STLC, but I take it you have a proof that there’s no object X in a CCC on one generator that is isomorphic to X^X?
A slightly less fancy way of saying the same thing: “who says you can’t apply terms to themselves!?” Sure, the most familiar types are not self-applicable, but that doesn’t mean there are no self-applicable types.
That said: Java’s type system is a fair approximation of a simply typed lambda calculus. You have basic, “primitive” types; given two types you can form the product type; given two types you can form a function type. Sure, the language isn’t set up to make it syntactically easy, but you can do it. And I assure you it’s absolutely possible to write Y-combinator equivalents in Java that find fixed-points for functions with the same input and output type, since I’ve done it myself.
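Here's the same trick sketched in Haskell rather than Java: a recursive newtype is what makes the self-application typecheck (the names are just illustrative):

```haskell
-- A type whose values can be applied to values of the same type.
newtype Self a = Self { unSelf :: Self a -> a }

-- A fixed-point combinator with the same shape as λf.(λx. f (x x)) (λx. f (x x)).
fix' :: (a -> a) -> a
fix' f = g (Self g)
  where
    g x = f (unSelf x x)

-- Example: factorial defined as a fixed point.
factorial :: Integer -> Integer
factorial = fix' (\rec n -> if n <= 1 then 1 else n * rec (n - 1))

main :: IO ()
main = print (factorial 5)  -- prints 120
```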
“It’s been a while since I looked at the STLC”
You might start with the Wikipedia article "Simply typed lambda calculus", in particular, the section titled "General Observations". Note that of course, practically anything could be equivalent in some sense to the simply typed lambda calculus if you are allowed to add extra arbitrary functions and types. But you have to add those in addition to the axiomatization itself, whereas the axiomatization of the untyped lambda calculus already provides a Turing-universal language (precisely because of its untyped nature, which allows Y to be expressed in it without any additional axioms).
“For proving math, I need ZF sets, but it appears that for implementing math in a more concrete and practical way, HOL wins out.”
Why do you need ZF for “proving math”? Unless it is ZF specifically you are interested in proving things about, most math can be expressed in plenty of other base formalisms, or even axiomatized independently (and most of it doesn’t even need the power of ZF: second order arithmetic is often enough). But if you want to reduce your math to set theory, specifically ZF set theory, you can look at Mizar, which uses an extension of ZF.
“If you have any secret knowledge of great theorem assistants that are available, please reveal it. I’ve done massive searches and keep coming back to Isabelle.”
Isabelle is a large complex system. I would recommend looking at HOL Light, which is based on essentially the same HOL logic as most of the other HOL systems, but which is very simple, the work of mostly one guy, and the actual code for it is quite readable.
It tries to be minimal in its assumptions, partly for verifiability, but I think partly for philosophical reasons. The net result, I think, might be considered mixed, since for example you might like having ZF-Infinity and ZF, where ZF-Infinity is still pretty powerful (equivalent to PA). But take the axiom of infinity out and you just get a basic Church theory-of-types-style logic, which is not nearly so powerful. So there are no stepping stones to stronger and stronger systems the way reverse mathematicians might like. Also there are some strange things like proving the law of excluded middle from the axiom of choice, in the name of economy. Well, it is more economical, but perhaps not in terms of comprehension…
I need ZF sets because that's what rules. Actually, what rules is naive sets, but ZF is a close approximation, as opposed to NBG sets.
It's not about what's the best way to do math. It's about trying to duplicate the mathematics that's already out there, so that I give the mathematical world (mathematicians) fewer reasons to reject what I try to give them. It's about catering to a specific audience.
Mizar has been very useful to lots of people, but the interface is archaic, and there’s not much documentation. Even if there was documentation, most people wouldn’t be interested in using it because of the interface.
I looked at the HOL Light web site briefly. I didn’t see much documentation or tutorials. The syntax is too much like a programming language, I guess. I didn’t download it and install it because I didn’t see that there was a decent learning path for me.
Isabelle keeps winning out for these reasons, among others:
1) High level tools and user interface.
2) Readability: syntax that’s mathematical (by means of ISAR).
3) The ability to use different logics, in particular, either ZF or HOL.
4) A well developed library of functions.
5) Logics that are based on natural deduction.
6) Proof methods that flow in a conventional, mathematical manner.
Yeah, it's complex, and it's a bummer that the programmers haven't made things simple enough that I can just jump in and start proving, but it's a complex world.
Thanks for the remark.
Yes, HOL is lacking in many ways since it is so small and simple, although it has sufficed for proving an awful lot of mathematics. I'd like to see a system that could be used by "outsiders" rather than hard-core users that came up from the inside, like you seem to be talking about, and get them up to speed very quickly. So I'm curious to know more about what you are planning to do. If you can talk in more detail, can you email me, benson dot bear at gmail dot com? I'm trying to use HOL Light pedagogically right now, and developing a way to interactively browse proofs. This requires changing the code to record the proof that a tactic develops and trying to guess at what parts a user would want to see as they browse. I believe Isabelle and all the others also do not try to provide any user information about what a tactic has done, which is unfortunate. HOL Light has an option to record the entire low-level proof, but no one wants to look at that (even beta reduction is not a primitive rule of inference but derived!)
My plan is to prove old math until I get to a point of maybe proving something of a research nature.
I only discovered theorem assistants about 7 months ago. I find it somewhat ridiculous that proof review is still almost 100% based on peer review through journals. That’s “somewhat” because I now see it’ll take me a massive amount of work to come up to speed, at least for how I want to do things. And any math I learn looking at examples of math implemented by others in a theorem assistant, I could learn 10 times faster studying a normal math textbook.
The allure is still there because I like the idea of being independent, and not having to beg to get someone to look at a proof of mine. Begging doesn’t get me anything anyway.
But I had to scale back what I wanted to learn. I wanted to specialize in number theory, but due to the current state of theorem assistants, I need foundational knowledge, so I’ve decided to stay at the logic and set theory level, with some real analysis as an application for the real numbers.
I’ve recently thought about how Zermelo-Fraenkel-Choice axioms would be enforced by the theorem assistant, if that was the logic being used. In that context, this statement, from An Introduction to Mathematical Logic and Type Theory by Andrews, doesn’t make me feel good:
One solution to not having classes in ZFC might be to add an axiom about an inaccessible cardinal. I don’t know anything about that right now.
All this together has given me a new mission. Hopefully, it’ll have a happy ending.
Okay will reply by email.