Even though this post seems to be shifting back to axiomatic set theory, don’t go thinking that we’re
done with type theory yet. Type theory will make its triumphant return before too long. But before
that, I want to take a bit of time to go through some basic constructions using set theory.
We’ve seen, roughly, how to create natural numbers using nothing but sets – that’s basically what
the ordinal and cardinal number stuff is about. Even doing that much is tricky – witness my gaffe about
ordinals and cardinals and countability. (What I was thinking of is the difference between the ε series in the ordinals, and the ω series in the cardinals, not the ordinals and cardinals themselves.) But if we restrict ourselves to sets of finite numbers (note: sets of finite numbers, not finite sets of numbers!), we’re pretty safe.
Of course, we haven’t defined arithmetic – we’ve just defined numbers. You might think it would be
pretty important to define arithmetic on the numbers. If you thought that, you’d be absolutely
correct. So that’s what I’m going to do next. First, I’m going to define addition and subtraction; multiplication can be defined in terms of addition, and division in terms of multiplication
and subtraction – but I’m going to hold off on defining division until we get to the rational numbers.
If we want to define arithmetic, first we need to define what it is. What is natural number addition?
It’s a total function from a pair of natural numbers to a single natural number.
One problem: what’s a function? We haven’t really defined functions yet. We know about
sets, we know about logical predicates, and we’ve constructed the natural numbers. But we haven’t said anything about functions. So before we can define addition, we need to define functions.
In set theory, a function is a special kind of relation. A relation, in turn, is a set of
ordered pairs. For a relation r, r = {(x,y) : R(x,y)}. In other words, a relation
is really a name for a predicate over the set of all ordered pairs that describes a relationship
between x and y – a two-parameter predicate. A function is a relation where, for any x, there is
no more than one pair with any given value of x in its first position. If f is a function,
and (x,y)∈f, then we can also write y=f(x). This is called application syntax.
That sounds a tad confusing, but just look at a simple example. Say that we want a function
from the members of the set {1, 2, 3, 4} to itself. We could write one such function as f = {(1,2), (2,3), (3,4), (4,1)}. Then f(2) just means the second member of the pair in f that starts with 2: f(2)=3.
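To make that concrete, here’s a minimal sketch in Python, treating a function as nothing but a set of pairs (the helper name apply_fn is mine, just for illustration):

```python
# A function is literally a set of ordered pairs; "application" is just
# finding the pair whose first slot matches the argument.
f = {(1, 2), (2, 3), (3, 4), (4, 1)}

def apply_fn(fn, x):
    # For a genuine function, at most one pair starts with x.
    images = [y for (a, y) in fn if a == x]
    if len(images) != 1:
        raise ValueError("not a function at %r" % x)
    return images[0]

print(apply_fn(f, 2))  # prints 3, i.e. f(2) = 3
```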
We’ve still got a problem, though. What’s an ordered pair? We’ve only built sets and numbers; we’ve defined functions in terms of these “ordered pair” things, but we can’t use that definition until we’ve got pairs.
The naive thing to do is to say that a pair is a set: (x,y) = {x,y}. But that doesn’t work: (x,y)≠(y,x), but {x,y}={y,x}. There’s no notion of ordering in sets, but an ordered pair is ordered – that’s why it’s called an ordered pair! So we need some way of distinguishing between the first and second
values in an ordered pair, without any notion of ordering. It turns out to be pretty easy: (x,y) = {x,{x,y}}. There’s a set with two values, and one of them is an element of the other. So we can recognize the first element, because it’s the value that’s a member of the other; and then we can look at that other set, and see what value is inside it besides the first one.
Let’s look at a quick example. Suppose we wanted the ordered pair (2,1).
In set form, 2 = {∅,{∅}}, and 1 = {∅}. So the ordered pair (2,1) would be the set
{2,{1,2}}. (Or expanded, {{∅,{∅}}, {{∅}, {∅,{∅}}}}). So if we were given
the set {{1,2},2}, and we knew it was an ordered pair, then we could look at it, and say there are two
members, a and b. a = {1,2}; b = 2. b∈a, so b=2 is the first element of the pair. a = {1,2}, so
the second element of our ordered pair is the member of a that is different from b: the second element of the pair is 1. What if the pair is something like (1,1)? Then the set form is {1,{1}}: 1 is a member of {1}, so 1 is the first element; and since {1} contains nothing besides 1, the second element is also 1 – the two elements of the pair are equal.
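For the curious, here’s a minimal sketch of this encoding in Python, using frozensets as stand-ins for pure sets and the a ∪ {a} construction for the numbers. All the helper names are mine, and (as a commenter notes below) the decoding leans on well-foundedness:

```python
# Build von Neumann naturals and the pair encoding (x,y) = {x, {x,y}}.
def vn(n):
    s = frozenset()
    for _ in range(n):
        s = s | frozenset([s])  # successor(a) = a U {a}
    return s

def pair(x, y):
    return frozenset([x, frozenset([x, y])])

def unpair(p):
    a, b = tuple(p)
    # The first element is the member that belongs to the other member.
    first, rest = (a, b) if a in b else (b, a)
    leftover = rest - frozenset([first])
    # If rest was the singleton {x}, the pair is (x, x).
    second = next(iter(leftover)) if leftover else first
    return first, second

two, one = vn(2), vn(1)
print(unpair(pair(two, one)) == (two, one))  # True
print(unpair(pair(one, one)) == (one, one))  # True
```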
That gives us ordered pairs – and therefore relations, and therefore functions.
Now we’ve got the hardware we need. To define arithmetic, we’re going to define
functions for the operations. We don’t need to say how to do them: we just need
to define their meaning. You’ll see what I mean by that in a minute.
If A is a natural number, then by our construction of the natural numbers, we know
how to increment it. We’ll define our functions using that.
Addition: Addition is a function from a pair of numbers to a number. So it’s
a set of ordered pairs, each of whose first elements is itself an ordered pair. So it’s a set of
pairs of the form ((x,y),z), where:
- if x=0, then z = y
- if y=0, then z=x
- otherwise, let x’ = successor(x) and y’ = predecessor(y); then z = Add(x’,y’).
That last line is a simple example of what I was talking about. We don’t worry
about how to do it; we just define things in terms of logical relations. We know
that every number has a successor. Because every number has a successor, every number
except 0 has a predecessor. We don’t need to say how to get it – we just know that
it exists, and for the purposes of defining what addition means, that’s good enough.
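Incidentally, the three rules also happen to read as a terminating recursion – rule 3 pushes y down toward the y=0 exit (as a commenter observes below). So a sketch can compute with them; here’s a minimal version in Python, with ordinary ints standing in for the set-encoded naturals:

```python
def add(x, y):
    if x == 0:
        return y                  # rule 1
    if y == 0:
        return x                  # rule 2
    return add(x + 1, y - 1)      # rule 3: z = Add(successor(x), predecessor(y))

print(add(3, 4))  # 7
```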
Subtraction is a better example of that. x-y is the set {((x,y),z) | ((z,y),x)∈Add} –
that is, x-y=z if and only if z+y=x. That doesn’t tell us anything about how to subtract,
but it does define what subtraction means.
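One way to see the difference: the subtraction relation computes nothing, but for finite naturals you can recover a (slow) procedure by just searching for the z that the relation pairs with (x, y). A sketch, reusing the add above:

```python
def subtract(x, y):
    # x - y = z exactly when ((z, y), x) is in the Add relation, i.e. z + y = x.
    for z in range(x + 1):
        if add(z, y) == x:
            return z
    return None  # no natural z works (e.g. 2 - 5): the relation has no pair here

print(subtract(7, 3))  # 4
print(subtract(2, 5))  # None -- subtraction is partial on the naturals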
Multiplication: multiplication is very similar, and relies on addition. Multiply is
a set of ordered pairs, just like addition: Mult = {((x,y),z)} where:
- If x = 0 or y = 0 then z = 0.
- If y = 1 then z = x
- Otherwise, z = Add(y, Mult(predecessor(x),y))
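Read as a recursion on x, those rules compute too. Continuing the same sketch:

```python
def mult(x, y):
    if x == 0 or y == 0:
        return 0                   # rule 1
    if y == 1:
        return x                   # rule 2
    return add(y, mult(x - 1, y))  # rule 3: z = Add(y, Mult(predecessor(x), y))

print(mult(3, 4))  # 12
```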
[Subtraction is a better example of that. x-y is the set {((x,y),z) | ((z,y),x)∈Add} – that is, x-y=z if and only if z+y=x.]
No doubt you can define an operation you call “subtraction” in this way, and it surely has its uses, but make no mistake, the concept works out as broader – or at least some people sure do appear to think so. Let’s say we subtract intervals, such as [3, 4]-[1, 2]. Interval arithmetic has the rule that
[a1, a2]-[b1, b2]=[a1-b2, a2-b1]. So,
[3, 4]-[1, 2]=[1, 3], while [1, 3]+[1, 2]=[2, 5].
One may, of course, define ‘interval subtraction’ as a different operator than ‘point subtraction’, and write something like [a1, a2]-*[b1, b2]=[a1-b2, a2-b1]. But, for the sets you’ve addressed, interval subtraction behaves just like point subtraction: if we just subtract two points, the interval notation yields
[a1, a1]-[b1, b1]=[a1-b1, a1-b1]=a1-b1, or if you prefer
[a1, a1]-*[b1, b1]=[a1-b1, a1-b1]=a1-b1. So, for the sets you’ve talked about, interval subtraction and point subtraction come out as equivalent and mathematically indistinguishable even if you define them differently. I prefer to think of interval subtraction as more along the lines of the concept of subtraction than point subtraction, although perhaps too many mathematicians come as more small-minded in their conceptualizations of subtraction than I believe.
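A minimal sketch of that rule in Python (isub is my name for it), with the point-interval case collapsing to ordinary subtraction:

```python
def isub(a, b):
    # [a1, a2] - [b1, b2] = [a1 - b2, a2 - b1]
    (a1, a2), (b1, b2) = a, b
    return (a1 - b2, a2 - b1)

print(isub((3, 4), (1, 2)))  # (1, 3)
print(isub((5, 5), (2, 2)))  # (3, 3): a point interval behaves like 5 - 2
```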
Doug:
Just let it go. You’re just droning on with the same kind of cluelessness that has dominated the comments on other posts. You’re *not* going to accept that this is a perfectly valid model of arithmetic no matter what anyone says.
As far as your “interval arithmetic” thing: standard numeric arithmetic does not work on intervals. It works on numbers. Interval arithmetic is defined in a way that makes it equivalent. An interval isn’t a number in a pure numeric model; it isn’t a number in a logical model; and it isn’t a number in a set model. Interval arithmetic is defined for intervals; numeric arithmetic works on numbers. In a set theoretic model, you define them differently, and you don’t use a set that represents a number where you want to use a set that represents an interval. You can’t use the natural number function for subtraction on an interval – because it’s only defined for numbers.
Okay, so you’re using that definition of ordered pairs. I think it’s the most common one used when defining ordered pairs in set theory; it’s certainly the one I learned.
Now, though, if I might ask for a tiny preview – how do we define ordered pairs in type theory? The definition we have in set theory doesn’t quite translate, because types can apparently only contain types of one level down.
So do we say that the ordered pair (x, y) (where x and y are both types of order n) is the type {{x},{x,y}} of order n+2 ? What about when x=y; in that case, is it just the order n+2 type {{x}} ? Doing that would seem to make binary relations on types of order n types of order n+3, which fits with some of the references I found while trying to follow up on your previous type theory post.
And actually, that’s the most coherent comment Doug’s posted in a while. His MOD(n) comment on the last type theory post would have been hard to match, though.
Okay, this is what puzzles me. You say “We don’t need to say how to do them: we just need to define their meaning.” But doesn’t this neglect the possibility that you have defined it in a logically incoherent fashion?
For example, when you used predecessor(y) in the definition for addition, you had a note that every number except 0 has a predecessor, and it’s clear from the construction that this guarantees y has a predecessor. But surely you agree that if you had omitted rule 2 of the definition of addition that you would have been left with nonsense. Don’t we need, in addition to the definition, a proof that such a thing exists?
You define subtraction as “x-y is the set {((x,y),z) | ((z,y),x)∈Add}”, but how do I know that this is a function?
Can someone just get Doug his own blog, instead of filling up this one?
Michael, correct: to be rigorous, we do need to prove that functions satisfying the desired properties exist. What is more, we also need to show that they are in fact unique (i.e. that there isn’t more than one function satisfying the properties we have given above). These can be done by straightforward but tedious induction arguments.
Michael:
You are correct – I did skip a step. It’s not really enough to just define the operations – we need to prove that they are valid definitions; that they are consistent and unique; and that they have all of the properties (commutativity, associativity, etc.) that we expect from them. I didn’t do that in the post. I *could*, but as Joshua pointed out, it’s a rather tedious process. In a later post, I might do it for one or two properties of the operations on real numbers, but I’ll see when I get to it.
Are statements 1 of the Add definition and 2 of the multiply definition redundant (i.e., provable from the other two)? Also, the special case in Multiply.1 for y=0 is not really needed either, correct?
I believe I’ve seen rule 3 for Add expressed with the successor operation applied to the Add, rather than to x. The clearer parallel to the multiply definition is appealing to me, but is there any real advantage one way or the other?
Peternee:
Yes, the definitions above do have redundancies; I think that they’re easier to understand at a first viewing with the redundant statements.
And you’re right – you can do addition using successor(Add(…)); I always remember the definitions in terms of how you write them in first order predicate logic,
where you can’t use function syntax. So I was translating from the FOPL form of the definitions, which is less clear than the form you’re mentioning; shame I didn’t think of it while I was writing.
I believe I’ve seen a definition of addition before that looked something like this (where Sx represents the successor of x):
Add(x,0) = x
Add(x,Sy) = Add(Sx,y)
I’ve sort of always liked the simplicity of that definition. Of course, proving the standard properties of addition (Add(x,y) = Add(y,x), and Add(x,Add(y,z)) = Add(Add(x,y),z)) are going to be tricky, but they always are.
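A minimal sketch of that definition in Python, plus a brute-force finite check of those two properties – evidence, not a proof; the real proofs are the tedious inductions mentioned above:

```python
def add2(x, y):
    return x if y == 0 else add2(x + 1, y - 1)  # Add(x,0)=x; Add(x,Sy)=Add(Sx,y)

# Finite spot-checks of commutativity and associativity:
assert all(add2(x, y) == add2(y, x)
           for x in range(20) for y in range(20))
assert all(add2(x, add2(y, z)) == add2(add2(x, y), z)
           for x in range(10) for y in range(10) for z in range(10))
print("checks passed")
```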
Hi Mark,
There seem to be at least two Dougs commenting on your blog.
I readily admit to probably all Doug comments beginning with Hi Mark.
My first comment on your blog dealt with the helix and music with you responding about a relative not realizing that he was performing mathematical determinants in his use of music.
At a later time you recommended a book by GJC on OMEGA with LISP programming.
I have most often commented on game theory and Petri Nets.
I readily admit to comments #8,#11 on your ‘Tiptoeing into Type Theory’ post.
I have no idea who is commenting as Doug in #1 of this post.
If the user is using my e-mail address, please advise me of such.
I can then report the user to Google. Perhaps this person is also responsible for sock-puppets on other blogs and for an increase in spam to my e-mail.
Thanks,
Doug, now Doug_old
Your add definition technically does describe how to add. It’s just a tail-recursive algorithm. 😉
[You’re *not* going to accept that this is a perfectly valid model of arithmetic no matter what anyone says.]
I accept and accepted it as valid. I see no logical errors, and I think you can deduce regular arithmetic from it. I thought so before I wrote my first comment. However, the concepts that we use, even in grade-school arithmetic, I maintain come as more complex and subtle. So, even though the model works out as valid, it doesn’t really indicate our thinking process… it merely indicates the behavior of numbers as we usually use them.
[An interval isn’t a number in a pure numeric model; it isn’t a number in a logical model; and it isn’t a number in a set model.]
Try doing a simple google search for ‘interval number’ and see what you come up with. Just the papers and articles you’ll find there suggest something amiss in at least one of these hypotheses. Second, it works as completely correct even conceptually speaking, so far as I can see, to consider the concept of ‘interval numbers’ PURELY within crisp set theory. So, there exists a disconnect in saying that I haven’t “let it go.”
[Interval arithmetic is defined for intervals; numeric arithmetic works on numbers.]
An interval number [a, b] where a=b qualifies as a regular number. An interval [c, d] where c=d qualifies as a point. Consequently, interval arithmetic gets defined for BOTH interval numbers and point numbers.
[In a set theoretic model, you define them differently, and you don’t use a set that represents a number where you want to use a set that represents an interval. You can’t use the natural number function for subtraction on an interval – because it’s only defined for numbers.]
I find both statements correct. Now basically switch terms and we have that we can use a set that represents an interval where we want to use a set that represents a number, since if a=b, [a, b]=[a, a]=a, a regular number. We can use the interval number function for subtraction on a number, because interval numbers do get defined for point numbers.
[And actually, that’s the most coherent comment Doug’s posted in a while. His MOD(n) comment on the last type theory post would have been hard to match, though.]
Yeah… that wasn’t me.
Stephen Wells,
[Can someone just get Doug his own blog, instead of filling up this one?]
My blog is http://www.xanga.com/spoonwood, and you’re welcome to comment on it. I don’t get many comments, so I look for stimulation elsewhere. Although, judging by your response as to how you don’t appreciate feedback and neither does Mark, perhaps I’ll just cut-and-paste comments from here and talk about them on my blog. The last set of comments, basically between myself and one other person, *inspired* by Mark’s post and comments on Mark’s “basics” posts, led to a discussion of how proof-by-contradiction doesn’t really rely on the excluded middle property and really works much like proof by case exhaustion, except it works by elimination.
So, I guess that’s a good-bye.
When I read the first comment on this thread I thought what we need is a Universal Doug Post (UDP) that could be posted as the first comment on all MarkCC’s maths threads, to save Doug the trouble of all that typing and ourselves the trouble of all that reading.
Ladies and Gentlemen I present for your delectation the UDP:
Things Go Better With FUZZ!
Your definition (x,y)={x,{x,y}} seems to require a bit of heavy machinery to work: One might conceivably have x={u,v} with v={x,y}, in which case (x,y)=(u,v) – except that would of course violate the well-foundedness axiom of set theory. If you define (x,y)={{x},{x,y}} instead, the identification of x and y from the pair is much more immediate, and only depends on the axiom that says that a set is determined by its members: Given a pair P, then P=(x,y) where x is the unique set with {x}∈P, and y is the unique set with {x,y}∈P.
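A sketch of that decoding in Python, with frozensets standing in for pure sets and ints standing in for the encoded numbers (kpair and kunpair are my names):

```python
def kpair(x, y):
    return frozenset([frozenset([x]), frozenset([x, y])])

def kunpair(p):
    smallest = min(p, key=len)            # {x}; when x = y, p is just {{x}}
    x = next(iter(smallest))
    rest = max(p, key=len) - frozenset([x])
    y = next(iter(rest)) if rest else x
    return x, y

print(kunpair(kpair(2, 1)))  # (2, 1)
print(kunpair(kpair(1, 1)))  # (1, 1) -- here p = {{1}}, a singleton
```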
If you define the natural numbers as the closure of the empty set under a -> a U {a} and then proceed to define ordered pairs and arithmetic in the way Mark has done in his excellent post, you clearly get a set theoretical construction that has all the properties of arithmetic on the natural numbers.
However, it seems to me that such a construction cannot really be addition on the natural numbers, because it has many more unnecessary properties.
For example, the empty set is not a member of the number “2”, at least in my naive understanding of what “2” really is. So while we might have a good set-theoretical model of “2”, I really can’t accept that when I talk about 2, I am really talking about the set described above. The same goes for ordered pairs.
What are your thoughts, Mark – or is this question too dumb or philosophical for this forum?
Charles:
The point of a construction like this isn’t to say “The number 2 is the set {∅,{∅}}.” The number 2 is the number 2.
The point of a construction like this is to create a sound logical basis for reasoning about numbers. To do that, you need to have some axiomatic basis for your reasoning. The reason that set theory is so attractive is that it’s got a relatively simple set of axioms, and it’s easy to construct other systems on top of them.
To do reasoning, you need an axiomatic basis that is sound, consistent, complete (or rather as complete as any system can be). If your system doesn’t have those properties – if it contains even the slightest error, then every proof you build in the system is totally invalid and worthless. To show that a system has these properties requires an elaborate and rather painful proof. So ideally, you want to do it once, and only once.
You can prove that the axioms of ZF set theory, combined with first order logic provides a complete, consistent system. And you can construct pretty much any set of mathematical objects and operations that you want in terms of set theoretic definitions using that axiomatic system. So set theory gives you the ability to work with all sorts of mathematical systems, without having to do the fundamental consistency proofs, because you know you’re building on a sound basis.
So this construction of numbers doesn’t tell us anything about the numbers – but it gives us a sound logical framework in which we can build theorems and proofs that do tell us about the numbers.
Pardon my suspicions, but why did Doug post a goodbye-cruel world, followed by Charles Tye making essentially the same argument that Doug does? Hmmmmm.
[Ladies and Gentlemen I present for your delectation the UDP:
Things Go Better With FUZZ!]
That has a nice sound to it, but it’s basically wrong about my posts. The last set of posts on “proof by contradiction” did use “fuzz” (because from studying “fuzz”, I realized such… but to say that “fuzz” comes as the cause commits the genetic fallacy), but only coincidentally. I could well have talked about such JUST in the realm of multi-valued logic, as I showed how some non-fuzzy, multi-valued logical systems worked out such that the property of excluded middle (as well as the property of contradiction) held, but one needs to rethink the reductio ad absurdum for it to work. In this post I mentioned NOTHING which *necessarily* works out as fuzzy, even in terms of its conceptualization. Seriously, Thony C., one can use the indicator function to describe membership for interval numbers, e.g. for the interval number [3, 4], the indicator function yields 1 for x belonging to [3, 4], while it yields 0 for any x belonging to (−∞, 3) or (4, +∞). That’s crisp.
*Shakes head*
Harald Hanche-Olsen,
[One might conceivably have x={u,v} with v={x,y}, in which case (x,y)=(u,v) – except that would of course violate the well-foundedness axiom of set theory.]
As I understand it, one can develop much of set theory without the axiom of foundation. “Bounded set theory” comes as a term used. Lakoff and Nunez’s _Where Mathematics Comes From_ discusses this a bit, though not too much or in a technical sense, and Keith Devlin’s _The Joy of Sets_ discusses it in some detail.
Mark,
[If your system doesn’t have those properties – if it contains even the slightest error, then every proof you build in the system is totally invalid and worthless.]
I don’t understand why you claim this, unless you mean to use something like relevant logic. If I have an axiom with a truth value of false, and I deduce a true proposition, I’ve reasoned validly in classical logic. I don’t know why you claim it needs to qualify as ‘sound’, since I don’t see how the premises have to work out as true… unless you claim to use something like relevant logic… which, of course, classical set theory doesn’t use. It really sounds like you mean to argue that we need material axiomatics. But, as comes as well-known to even some middle school mathematics students, neither Euclidean nor non-Euclidean axioms need have a truth value to them for the proofs to yield demonstrations. We just need to know what will follow given that we would assume the assumptions as true and the logic in use.
[So set theory gives you the ability to work with all sorts of mathematical systems, without having to do the fundamental consistency proofs, because you know you’re building on a sound basis.]
Excuse me, but you first said “You can prove that the axioms of ZF set theory, combined with first order logic provides a complete, consistent system.” Where does a proof of ‘soundness’ exist?
[So this construction of numbers doesn’t tell us anything about the numbers – but it gives us a sound logical framework in which we can build theorems and proofs that do tell us about the numbers.]
I call bull.
Doug:
If you don’t understand why an invalid set of axioms invalidates proofs derived from those axioms, then there’s just no point arguing with you. That’s just such a fundamental part of any reasonable discussion of mathematical logic and proofs that if you don’t accept that, discussion isn’t possible.
If you want to see proofs about the soundness, completeness, and consistency of set theory+FOPL as an axiomatic reasoning system, I’d suggest the Quine textbook. It’s not exactly easy reading, but it’s thorough.
As for your “I call bull” line: there’s no way to argue it if you don’t even accept that without valid axioms, you can’t have a valid proof. Because that’s what’s at the base of that statement: we need a valid axiomatic system to do proofs; set theory plus constructions gives us an axiomatic system in which to build definitions of numbers which we can use for proofs.
Doug Spoonwood:
Mark is correct in saying that a logical system is useless if it is incorrect, if it contains the slightest logical inconsistency. If your logical system is inconsistent, then you can, by using proof by contradiction, prove anything at all. It’s really quite trivial, and should be taught in beginning logic classes. I know this is true in FOPL and any system that admits Proof By Contradiction as valid. I give it good odds of even being true in some systems without PBC (if they have some other form of proof that works out to be equivalent for this purpose). I can lay this out for you in an explicit proof, if you wish. It’s quite easy and short.
Doesn’t the proof go something like this:
From (P and not-P) prove anything (A).
P is true.
therefore (P or A) is true.
But not-P is true.
And if (P or A) is true, and (not-P) is true, then A is true.
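That’s the standard argument. For what it’s worth, here’s a sketch of it in Lean 4 (assuming a recent Lean), mirroring each step:

```lean
-- From P and ¬P, any A follows.
example (P A : Prop) (hP : P) (hnP : ¬P) : A := by
  have hPA : P ∨ A := Or.inl hP    -- P is true, therefore (P or A) is true
  cases hPA with
  | inl hp => exact absurd hp hnP  -- but not-P is true, so this branch is absurd
  | inr hA => exact hA             -- hence A is true
```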
“You can prove that the axioms of ZF set theory, combined with first order logic provides a complete, consistent system.”
This isn’t correct. Better read some Godel.
johnshade:
If you just look two paragraphs up, you’ll see that I qualify the mention of completeness. Set theory + FOPL is consistent, sound, and as complete as a system can be while remaining sound and consistent.
[If you don’t understand why an invalid set of axioms invalidates proofs derived from those axioms, then there’s just no point arguing with you.]
I didn’t. As I understand it, validity doesn’t apply to axioms or assumptions, it applies to inferences or implications. If you check what I wrote, I talked about soundness.
[As for your “I call bull” line: there’s no way to argue it if you don’t even accept that without valid axioms, you can’t have a valid proof.]
I don’t get your term ‘valid axioms’, and currently I consider it oxymoronic.
Xanthir,
[Mark is correct in saying that a logical system is useless if it is incorrect, if it contains the slightest logical inconsistency. If your logical system is inconsistent, then you can, by using proof by contradiction, prove anything at all.]
First off he did NOT just say that. He said “The point of a construction like this is to create a sound logical basis for reasoning about numbers. To do that, you need to have some axiomatic basis for your reasoning. The reason that set theory is so attractive is that it’s got a relatively simple set of axioms, and it’s easy to construct other systems on top of them.” and so on. A two-valued logical system with the slightest logical inconsistency works out as useless, since i(a, c(a))=F, AND F->T, always in two-valued logic. This says nothing about other logical systems where i(a, c(a)) doesn’t equal F, as well as other logical systems where f->t (where f belongs to [F, T], and t belongs to [F, T] also) doesn’t always hold. *Partial* inconsistency works as permissible in non two-valued logic.
I define (almost always partial) inconsistency, in an equational way, as any violation of i(a, c(a))=0. Does
i(a, c(a))>0 imply all statements as provable, or
i(a, c(a))=U in a logic on {T, U, F} imply all statements as provable? Suppose regular truth table properties hold, and a=U, c(a)=U, i(U, U)=U, (U->T)=U. Then we can’t prove all T from i(a, c(a)) not equalling F.
Lastly, your statement about “partial inconsistency” comes as different than what Mark said (though perhaps not in intention) “To do reasoning, you need an axiomatic basis that is sound, consistent, complete (or rather as complete as any system can be). If your system doesn’t have those properties – if it contains even the slightest error, then every proof you build in the system is totally invalid and worthless.”
I don’t see how an instance where i(a, c(a)), or *partial* logical inconsistency, does not equal F comes out as an “error”.
[It’s really quite trivial, and should be taught in beginning logic classes.]
It DOES get taught in beginning logic classes, and I learned it in one. But I ALSO learned that, like all mathematical as well as logical theorems, such an idea has conditions on it. What conditions? We have {T, F} as our truth set, the rules for intersection, union, and complement work according to standard truth tables, and so on. If those conditions don’t hold, then the premises of the syllogism
i(a, c(a))=F,
F-> any derivable T,
Therefore, i(a, c(a))-> all derivable T
Or the premises of a similar syllogism don’t hold. A necessary condition B of a premise A means that A->B. But, if A doesn’t hold, then we simply can’t say that B works out as necessary. Logical inconsistencies, taken as instances where i(a, c(a)) does not equal F, so far as I can tell, say NOTHING about soundness.
[From (P and not-P) prove anything (A).
P is true.
therefore (P or A) is true.
But not-P is true.
And if (P or A) is true, and (not-P) is true, then A is true.]
Nice proof. Now, let’s say I write it like this
P=T
Therefore, u(P, A)=T
c(P)=T… I’d stop right here. The axioms of my system, which come as equivalent to truth tables, say that
c(T)=F, and c(F)=T. Well substituting T for P, I have
c(T)=T… whoops… I violated the truth-table axioms.
Does the proof mean that “the slightest inconsistency” will imply all truth statements? I suppose it depends on what one means by “*slightest* inconsistency”. If we mean by “inconsistency” any instance where A and not A – where i(a, c(a)) comes out greater than 0, equal to a negative infinitesimal, or equal to anything other than 0 – then “the slightest inconsistency” won’t imply all truth statements. I suppose you could define inconsistency in a different way, but you’d really first need to specify such. After all, contradiction does get defined in different ways in logic (a contradiction consists of any instance where all values of the truth tables come out false, versus a contradiction comes as any instance of i(a, c(a))).
What’s the title of Quine’s book? From Wikipedia I’d guess _Set Theory and Logic_, but looking there I find many other titles on mathematical logic, and from my experience some logic texts cover set theory also.
There is no sense in which ZF can be said to be “as complete as can be”. One can consistently add all sorts of axioms that are not contradicted by the others and get closer to completeness without ever attaining it. The most blatant example is adding a Goedel sentence, which is independent, to the axioms, then iterating. But large cardinal axioms, for example, enrich set theory non-trivially.
A remark regarding the derivation of elementary arithmetic seems in order. Technically, to define, say, the addition function, you must show that the appropriate set of ordered pairs exists in accordance with the axioms of set theory. This is trickier than it sounds, because the axioms of set theory are not particularly helpful for defining functions—functions are not primitive, but a derived notion, and hence have to be derived from the fundamental axioms. The point is that the ZF axioms are not pragmatically very useful. Most modern set theories and type theories take functions as a starting point, and provide for comprehension principles that are directly useful in doing mathematics. Topos theory is one such axiomatization of set theory—and it is essentially Church’s Type Theory presented in algebraic form. Constructive type theories admit recursion equations as means of defining functions without further justification, again much more useful in practice than the homely axioms of set theory.
[From (P and not-P) prove anything (A).
P is true.
therefore (P or A) is true.
But not-P is true.
And if (P or A) is true, and (not-P) is true, then A is true.]
There exists more here than I previously thought. If we have i(P, c(P))=T, then by the linguistic meaning of intersection, we have P=T, as well as c(P)=T. Well, if P=T, then we also have c(P)=c(T)=F. From basic properties of equality, we can then write
c(P)=T implies T=c(P)=c(T)=F, which implies T=F. In other words, the truth values come out equivalent. Assuming our truth set works as a set as opposed to a bag or multiset, we then have
{T, F}={T, T}={F, F}={F}={T}. In other words, our truth set collapses to a singleton. This implies that, if we assume we have a truth set for the above propositions, we have a *one* valued logic at work. A contradiction allows one to derive everything in a *one* valued logic, and not in two-valued logic, as, so far as I can tell, one simply can’t assume the existence of a contradiction (i(a, c(a)) not equal to F) in two-valued logic. One might instead have appealed to Leibniz’s principle of the identity of indiscernibles, and we’d still have a one-valued logic at work.
I suppose one could suppose we have a bag or multiset, but I would suspect most logicians would say we have a truth set or accept something like the identity of indiscernibles at this level. Also, if we suppose that we have a bag or multiset, then I don’t see how idempotency – that i(A, A)=A and u(A, A)=A – works out as a theorem. Nor do I see that we have a structural analogy between the truth (multi)set and classical logic in terms of basic properties. Consequently, I conclude the proposition that “from a contradiction everything follows” doesn’t even really hold in classical logic (it’s not even false in classical logic, one might say); it only holds for a *one* valued logic. I don’t know of anyone really using a *one* valued logic. Although maybe a multiset approach would work and I’ve just missed it.
Mark, thank you for taking the time to make such a detailed reply to my question.
I have been reading up a bit on this and I think the idea goes as follows (non-expert disclaimer).
Define (A,f) as a function f on a set A such that f has the usual “successor properties”: f is one-to-one from A to itself, there is some member of A not in f(A), and A is the closure of that member under f.
Then you add a new primitive s to the first-order language, with the set omega as its domain, and a new axiom that (omega,s) has these successor properties.
It can then be shown that all such algebras (A,f) are isomorphic to (omega,s), which I find a more natural definition of the natural numbers – and it doesn’t have the extra properties I mentioned in my earlier post.
As you point out, formally, you gain nothing from this as any choice of (A,f) will do.
Stephen: I regularly enjoy reading this blog, but the last comment was only my second one. I assure you that I am not a sock for Doug Spoonwood.
Pardon me; my troll filter overloaded after visiting Pharyngula recently, and only resets slowly 🙂 As it happens your initial comment was very close to DS’s in an earlier thread.
“If you just look two paragraphs up, you’ll see that I qualify the mention of completeness. Set theory + FOPL is consistent, sound, and as complete as a system can be while remaining sound and consistent.”
Check Godel’s second theorem.
Note that for natural numbers in ZF we have n = {0,1,2,…,n-1}, so arithmetic on natural numbers *is* some kind of “interval arithmetic” (although I am not sure if it has anything to do with what Doug is saying).
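A self-contained check of that fact in Python, with frozensets standing in for pure sets (vn is my name for the numeral builder):

```python
def vn(n):
    s = frozenset()
    for _ in range(n):
        s = s | frozenset([s])  # successor(a) = a U {a}
    return s

print(vn(3) == frozenset([vn(0), vn(1), vn(2)]))  # True: 3 = {0, 1, 2}
```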
Charles,
[I assure you that I am not a sock for Doug Spoonwood.]
At least someone thinks I’m not a smelly foot.
John Shade,
[“If you just look two paragraphs up, you’ll see that I qualify the mention of completeness. Set theory + FOPL is consistent, sound, and as complete as a system can be while remaining sound and consistent.”
Check Godel’s second theorem.]
I have to agree. I don’t mean to contradict Mark in saying that set theory has great utility, but even if his statement gets qualified with emphasis, the notion of “complete *as a system can be* [emphasis added] while remaining sound and consistent” sounds like it produces some sort of optimum in terms of soundness and consistency. I don’t see any way to evaluate this like the optima of a function, except on personal intuitions.
Slawekk,
[Note that for natural numbers in ZF we have n = {0,1,2,…,n-1}, so arithmetic on natural numbers *is* some kind of “interval arithmetic” (although I am not sure if it has anything to do with what Doug is saying).]
I don’t see what you’re saying, so I’ll work through an example.
2+1={0, 1}+1={0, {0}}+{0}. I see a set with two members, and a singleton.
3={0, 1, 2}={0, {0}, {0, 1}}={0, {0}, {0, {0}}}. I see a set with three members. How’s that an interval number??? An interval [a, b], where a does not equal b, has an uncountable infinity of members. You’ve simply lost me (if you were serious)… how does your definition produce an interval number?
[Note that for natural numbers in ZF we have n = {0,1,2,…,n-1}, so arithmetic on natural numbers *is* some kind of “interval arithmetic” (although I am not sure if it has anything to do with what Doug is saying).]
Oh wait, I see now. One can define n as the set
n = {x : 0 ≤ x < n} – an interval of natural numbers.
Thanks for showing folks how some basic things can be defined. As usual you get a lot of quibbles for your trouble, so my little comment is just for fun. You say
“(note: sets of finite numbers, not finite sets of numbers!)”
Indeed, there are infinitely many positive integers, and every one of them is finite.