An Open Letter to Glenn Beck from a non-Orthodox Jew

Hey, Glenn.

Look, I know we don’t get along. We don’t agree on much of anything. But still, we really need to talk.

The other day, you said some really stupid, really offensive, and really ignorant things about Jews. I know you’re insulted – after all, four hundred Rabbis from across the spectrum came together to call you out for being an antisemitic asshole, and that’s gotta hurt.

But that’s no excuse for being a pig-ignorant jackass.


Another Crank comes to visit: The Cognitive Theoretic Model of the Universe

When an author of one of the pieces that I mock shows up, I try to bump them up to the top of the queue. No matter how crackpotty they are, I think that if they’ve gone to the trouble to come and defend their theories, they deserve a modicum of respect, and giving them a fair chance to get people to see their defense is the least I can do.

A couple of years ago, I wrote about the Cognitive Theoretic Model of the Universe. Yesterday, the author of that piece showed up in the comments. It’s a two-year-old post, which was originally written back at ScienceBlogs – so a discussion in the comments there isn’t going to get noticed by anyone. So I’m reposting it here, with some revisions.

Stripped down to its basics, the CTMU is just yet another postmodern “perception defines the universe” idea. Nothing unusual about it on that level. What makes it interesting is that it tries to take a set-theoretic approach to doing it. (Although, to be a tiny bit fair, he claims that he’s not taking a set theoretic approach, but rather demonstrating why a set theoretic approach won’t work. Either way, I’d argue that it’s more of a word-game than a real theory, but whatever…)

The real universe has always been theoretically treated as an object, and specifically as the composite type of object known as a set. But an object or set exists in space and time, and reality does not. Because the real universe by definition contains all that is real, there is no “external reality” (or space, or time) in which it can exist or have been “created”. We can talk about lesser regions of the real universe in such a light, but not about the real universe as a whole. Nor, for identical reasons, can we think of the universe as the sum of its parts, for these parts exist solely within a spacetime manifold identified with the whole and cannot explain the manifold itself. This rules out pluralistic explanations of reality, forcing us to seek an explanation at once monic (because nonpluralistic) and holistic (because the basic conditions for existence are embodied in the manifold, which equals the whole). Obviously, the first step towards such an explanation is to bring monism and holism into coincidence.


E. E. Escultura and the Field Axioms

As you may have noticed, E. E. Escultura has shown up in the comments to this blog. In one comment, he made an interesting (but unsupported) claim, and I thought it was worth promoting up to a proper discussion of its own, rather than letting it rage in the comments of an unrelated post.

What he said was:

You really have no choice friends. The real number system is ill-defined, does not exist, because its field axioms are inconsistent!!!

This is a really bizarre claim. The field axioms are inconsistent?

I’ll run through a quick review, because I know that many/most people don’t have the field axioms memorized. But the field axioms are, basically, an extremely simple set of rules describing the behavior of an algebraic structure. The real numbers are the canonical example of a field, but you can define other fields; for example, the rational numbers form a field; if you allow the values to be a class rather than a set, the surreal numbers form a field.

So: a field is a collection of values F with two operations, “+” and “*”, such that:

  1. Closure: ∀ a, b ∈ F: a + b ∈ F ∧ a * b ∈ F
  2. Associativity: ∀ a, b, c ∈ F: a + (b + c) = (a + b) + c ∧ a * (b * c) = (a * b) * c
  3. Commutativity: ∀ a, b ∈ F: a + b = b + a ∧ a * b = b * a
  4. Identity: there exist distinct elements 0 and 1 in F such that ∀ a ∈ F: a + 0 = a ∧ a * 1 = a
  5. Additive inverses: ∀ a ∈ F, there exists an additive inverse -a ∈ F such that a + -a = 0.
  6. Multiplicative inverses: ∀ a ∈ F where a ≠ 0, there exists a multiplicative inverse a⁻¹ ∈ F such that a * a⁻¹ = 1.
  7. Distributivity: ∀ a, b, c ∈ F: a * (b+c) = (a*b) + (a*c)

So, our friend Professor Escultura claims that this set of axioms is inconsistent, and that therefore the real numbers are ill-defined. One of the things that makes the field axioms so beautiful is how simple they are. They’re a nice, minimal illustration of how we expect numbers to behave.
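
To make the axioms concrete, here's a quick Python sketch (my illustration, not anything from the post's argument) that brute-force checks all seven of them for the integers mod 5, which form a small finite field:

```python
# A brute-force check of the field axioms for the integers mod 5.
F = range(5)

def add(a, b):
    return (a + b) % 5

def mul(a, b):
    return (a * b) % 5

for a in F:
    assert add(a, 0) == a and mul(a, 1) == a           # identities
    assert any(add(a, b) == 0 for b in F)              # additive inverse
    if a != 0:
        assert any(mul(a, b) == 1 for b in F)          # multiplicative inverse
    for b in F:
        assert add(a, b) in F and mul(a, b) in F       # closure
        assert add(a, b) == add(b, a) and mul(a, b) == mul(b, a)   # commutativity
        for c in F:
            assert add(a, add(b, c)) == add(add(a, b), c)          # associativity
            assert mul(a, mul(b, c)) == mul(mul(a, b), c)
            assert mul(a, add(b, c)) == add(mul(a, b), mul(a, c))  # distributivity

print("the integers mod 5 satisfy all seven field axioms")
```

If the axioms were genuinely inconsistent, no structure at all could satisfy them; even one tiny model like this is a demonstration of consistency.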

So, Professor Escultura: to claim that the field axioms are inconsistent is to claim that this set of axioms leads to an inevitable contradiction. So, what exactly about the field axioms is inconsistent? Where’s the contradiction?

Computability

I just recently realized that I only wrote about computability back in the earliest days of this blog. Those posts have never been re-run, and they only exist back on the original blogger site. When I wrote them, I was very new to blogging – looking back, I think I can do a much better job now. So I’m going to re-do that topic. This isn’t just going to be a re-post of those early articles, but a complete rewrite.

The way that I’m going to cover this is loosely based on the way that it was first taught to me by a wonderful professor, Dr. Eric Allender at Rutgers, where I went to college. Dr. Allender was a really tremendous professor: he managed to take an area of computer science that could seem hopelessly abstract and abstruse, and turned it into something fun and even exciting to learn about.

Computability is the most basic and fundamental sub-field of theoretical computer science. It’s the study of what a mechanical computing device can do. Not just what a specific mechanical computing device can do, but what can any mechanical computing device do? What are the limits of what you can do mechanically? And once we know the limits, what can we discover about the nature of computation?


Fuzzy Logic vs Probability

In the comments on my last post, a few people asked me to explain the difference between fuzzy logic and probability theory. It’s a very good question.

The two are very closely related. As we’ll see when we start looking at fuzzy logic, the basic connectives in fuzzy logic are defined in almost the same way as the corresponding operations in probability theory.

The key difference is meaning.

There are two major schools of thought in probability theory, and they each assign a very different meaning to probability. I’m going to vastly oversimplify, but the two schools are the frequentists and the Bayesians.

First, there are the frequentists. To the frequentists, probability is defined by experiment. If you say that an event E has a probability of, say, 60%, what that means to the frequentists is that if you could repeat an experiment observing the occurrence or non-occurrence of E an infinite number of times, then 60% of the time, E would have occurred. That, in turn, is taken to mean that the event E has an intrinsic probability of 60%.
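
A quick simulation (mine, purely for illustration) shows the frequentist reading in action: simulate an event with an intrinsic probability of 60%, repeat it many times, and watch the observed frequency approach 0.6.

```python
# Simulating the frequentist reading of "E has probability 60%": repeat
# the experiment many times and watch the observed frequency approach 0.6.
import random

random.seed(42)
trials = 100_000
occurrences = sum(random.random() < 0.6 for _ in range(trials))
frequency = occurrences / trials
print(f"observed frequency over {trials} trials: {frequency:.3f}")
```

The more trials you run, the closer the observed frequency gets to the intrinsic probability; in the limit of infinitely many trials, the two coincide.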

The alternative is the Bayesians. To a Bayesian, the idea of an event having an intrinsic probability is ridiculous. You’re interested in a specific occurrence of the event – and it will either occur, or it will not. So there’s a flu going around; either I’ll catch it, or I won’t. Ultimately, there’s no probability about it: it’s either yes or no – I’ll catch it or I won’t. Bayesians say that probability is an assessment of our state of knowledge. To say that I have a 60% chance of catching the flu is just a way of saying that given the current state of our knowledge, I can say with 60% certainty that I will catch it.

In either case, we’re ultimately talking about events, not facts. And those events will either occur, or not occur. There is nothing fuzzy about it. We can talk about the probability of my catching the flu, and depending on whether we pick a frequentist or Bayesian interpretation, that means something different – but in either case, the ultimate truth is not fuzzy.

In fuzzy logic, we’re trying to capture the essential property of vagueness. If I say that a person whose height is 2.5 meters is tall, that’s a true statement. If I say that another person whose height is only 2 meters is tall, that’s still true – but it’s not as true as it was for the person 2.5 meters tall. I’m not saying that in a repeatable experiment, the first person would be tall more often than the second. And I’m not saying that given the current state of my knowledge, it’s more likely that the first person is tall than the second. I’m saying that both people possess the property tall – but in different degrees.

Fuzzy logic is using pretty much the same tools as probability theory. But it’s using them to try to capture a very different idea. Fuzzy logic is all about degrees of truth – about fuzziness and partial or relative truths. Probability theory is interested in trying to make predictions about events from a state of partial knowledge. (In frequentist terms, it’s about saying that I know that if I repeated this 100 times, E would happen in 60; in Bayesian terms, it’s precisely a statement of partial knowledge: I’m 60% certain that E will happen.) But probability theory says nothing about how to reason about things that aren’t entirely true or false.
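
To see the parallel in the machinery, here's a small sketch (mine, with made-up numbers) comparing the most common fuzzy connectives – the Zadeh min/max operators – with the corresponding operations on probabilities of independent events:

```python
# The common (Zadeh) fuzzy connectives next to the corresponding operations
# on probabilities of independent events. The 0.8s below are made-up values.
def fuzzy_and(a, b):
    return min(a, b)

def fuzzy_or(a, b):
    return max(a, b)

def prob_and(a, b):
    return a * b          # P(A and B) for independent A, B

def prob_or(a, b):
    return a + b - a * b  # inclusion-exclusion for independent A, B

tall = 0.8  # degree of truth of "this person is tall"
rain = 0.8  # probability of the event "it rains"

print(fuzzy_and(tall, tall))  # 0.8 – conjoining a statement with itself changes nothing
print(prob_and(rain, rain))   # roughly 0.64 – two independent 0.8 events both occurring
```

The formulas look almost interchangeable, but the last two lines show the difference in meaning: a half-true statement conjoined with itself is still half-true, while two independent events each with probability 0.8 both occur with lower probability.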

And, in the other direction: fuzzy logic isn’t particularly useful for talking about partial knowledge. If you allowed second-order logic, you could have fuzzy meta-predicates that described your certainty about crisp first-order predicates. But with first order logic (which is really where we want to focus our attention), fuzzy logic isn’t useful for the tasks where we use probability theory.

So probability theory doesn’t capture the essential property of meaning (partial truth) which is the goal of fuzzy logic – and fuzzy logic doesn’t capture the essential property of meaning (partial knowledge) which is the goal of probability theory.

More 3-valued logic: Lukasiewicz and Bochvar

Last time I wrote about fuzzy logic, we were looking at 3-valued logics, and I mentioned that there’s more than one version of 3-valued logic. We looked at one, called K^S_3, Kleene’s strong 3-valued logic. In K^S_3, we extended a standard logic so that for any statement, you can say that it’s true (T), false (F), or that you don’t know (N). In this kind of logic, you can see some of the effect of uncertainty. In many ways, it’s a very natural logic for dealing with uncertainty: “don’t know” behaves in a very reasonable way.

For example, suppose I know that Joe is happy, but I don’t know if Jane is happy. So the truth value of “Happy(Joe)” is T; the truth value of “Happy(Jane)” is N. In Kleene, the truth value of “Happy(Joe) ∨ Happy(Jane)” is T; since “Happy(Joe)” is true, then “Happy(Joe) ∨ anything” is true. And “Happy(Joe) ∧ Happy(Jane)” is N; since we know that Joe is happy, but we don’t know whether or not Jane is happy, we can’t know whether both Joe and Jane are happy. It works nicely. It’s a rather vague way of handling vagueness (that is, it lets you say you’re not sure, but it doesn’t let you say how not sure you are), but in so far as it goes, it works nicely.
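
The Joe-and-Jane example can be captured in a few lines of Python (my sketch, using None as a stand-in for the N value):

```python
# Kleene's strong 3-valued connectives, with None standing in for N
# ("don't know"); True and False are T and F.
def k3_and(a, b):
    if a is False or b is False:
        return False
    if a is None or b is None:
        return None
    return True

def k3_or(a, b):
    if a is True or b is True:
        return True
    if a is None or b is None:
        return None
    return False

def k3_not(a):
    return None if a is None else not a

happy_joe = True    # we know Joe is happy
happy_jane = None   # we don't know about Jane

print(k3_or(happy_joe, happy_jane))   # True – one true disjunct settles it
print(k3_and(happy_joe, happy_jane))  # None – still depends on Jane
```

Note the ordering of the checks: a single false conjunct (or true disjunct) settles the result regardless of any unknowns, and only when nothing settles it does the unknown propagate.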

A lot of people, when they first see Kleene’s three-valued logic, think that it makes so much sense that it somehow defines the fundamental, canonical three-valued logic in the same way that, say, first order predicate logic defines the fundamental two-valued predicate logic.

It isn’t.

There are a bunch of different ways of doing three-valued logic. The difference between them is related to the meaning of the third value – which, in turn, defines how the various connectives work.

There are other 3-valued logics. We’ll talk about two others. There’s Bochvar’s logic, and there’s Lukasiewicz’s. In fact, we’ll end up building our fuzzy logic on Lukasiewicz’s. But Bochvar is interesting in its own right. So we’ll take a look at both.


Sarah Palin and the Blood Libel

Ok, so this is another off-topic rant, but I’ve got to say something or my head will explode.

After the events of this past weekend, Sarah Palin has come in for a lot of criticism for her target map from the last election. In case you’ve been hiding under a rock somewhere, her website published a map in which congresspeople who had voted for the healthcare reform bill, like Congresswoman Giffords, were marked with rifle sights.

So today, she decided to defend herself, by saying:

Especially within hours of a tragedy unfolding, journalists and pundits should not manufacture a blood libel that serves only to incite the very hatred and violence that they purport to condemn. That is reprehensible.

No, Ms. Palin. That is not reprehensible. What is reprehensible is using a historic excuse for antisemitic violence as a defense against your words and your actions having had any role in the attempted murder of a Jewish congresswoman.

What we have here is a very vocally Christian politician, who
marked a Jewish congressperson with a gunsight. Said Jewish congresswoman was shot in the head and nearly killed. And Sarah Palin
has the chutzpah to talk about blood libel?

Let’s recall, for a moment, what the blood libel is. Blood libel isn’t an accusation that you’re responsible for violence. It’s a very specific accusation, made by Christians, that Jews murder christian children in order to obtain christian blood, which is used to make Passover Matzah.

From Wikipedia:

Blood libel (also blood accusation) refers to a false accusation or claim that religious minorities, almost always Jews, murder children to use their blood in certain aspects of their religious rituals and holidays. Historically, these claims have–alongside those of well poisoning and host desecration–been a major theme in European persecution of Jews.

The libels typically allege that Jews require human blood for the baking of matzos for Passover. The accusations often assert that the blood of Christian children is especially coveted, and historically blood libel claims have often been made to account for otherwise unexplained deaths of children. In some cases, the alleged victim of human sacrifice has become venerated as a martyr, a holy figure around whom a martyr cult might arise. A few of these have been even canonized as saints.

In general, the libel alleged something like this: a child, normally a boy who had not yet reached puberty, was kidnapped or sometimes bought and taken to a hidden place (the house of a prominent member of the Jewish community, a synagogue, a cellar, etc.) where he would be kept hidden until the time of his death. Preparations for the sacrifice included the gathering of attendees from near and far and constructing or readying the instruments of torture and execution.

At the time of the sacrifice (usually night), the crowd would gather at the place of execution (in some accounts the synagogue itself) and engage in a mock tribunal to try the child. The boy would be presented to the tribunal naked and tied (sometimes gagged) at the judge’s order. He would eventually be condemned to death. Many forms of torture would be inflicted during the boy’s “trial”, including some of those actually used by the Inquisition on suspects of heresy. Some of the alleged tortures were mutilation (including circumcision), piercing with needles, punching, slapping, strangulation, strappado and whipping, while being insulted and mocked throughout.

In the end, the half-dead boy would be crowned with thorns and tied or nailed to a wooden cross. The cross would be raised and the blood dripping from the boy’s wounds, particularly those on his hands, feet, and genitals, would be caught in bowls or glasses. Finally, the boy would be killed with a thrust through the heart from a spear, sword, or dagger. His dead body would be removed from the cross and concealed or disposed of, but in some instances rituals of black magic would be performed on it. The earlier stories describe only the torture and agony of the victim and suggest that the child’s death was the sole purpose of the ritual. Over time and as the libel proliferated, the focus shifted to the supposed need to collect the victim’s blood for mystical purposes.

The story of William of Norwich (d. 1144) is the first case of alleged ritual murder that led to widespread persecutions. It does not mention the collection of William’s blood nor of any ritual purpose to the alleged ritual murder. In the story of Little Saint Hugh of Lincoln (d. 1255) it was said that after the boy was dead, his body was removed from the cross and laid on a table. His belly was cut open and his entrails removed for some occult purpose, such as a divination ritual. In the story of Simon of Trent (d. 1475) it was highly stressed how the boy was held over a large bowl so all his blood could be collected.

According to Walter Laqueur, “Altogether, there have been about 150 recorded cases of blood libel (not to mention thousands of rumors) that resulted in the arrest and killing of Jews throughout history, most of them in the Middle Ages… In almost every case, Jews were murdered, sometimes by a mob, sometimes following torture and a trial.”

Blood libel is a very specific, disgraceful, malicious, and horrific accusation against Jews. It is an accusation that Jews, as a part of our religion, are murderers and cannibals. That we steal children from righteous christian communities, murder them, drain their blood, and then eat it as part of our religious rituals.

This isn’t just ancient history. The blood libel has been around since the middle ages, but it has persisted all the way to the present. My own ancestors fled their homes in Russia to avoid a pogrom – the supposed cause of which was to protect the christian children from being murdered for Passover matzah. It’s still around today: among other examples, in 2005, a group of members of the Russian parliament put forward a proposed law banning all Jewish organizations, because Jewish practices “are inhumane, and extend to ritual murder”.

Sarah Palin clearly has no clue of what “blood libel” means. That’s a disgrace in itself; anyone who’s even moderately educated about politics and religion – like, say, a christian politician who wants to be the president of the US – should know what it means. But Sarah? No, she’s downright proud of her ignorant cluelessness.

What’s worse is the way that she’s expressing that cluelessness.

She’s trying to avoid taking any responsibility for the shooting. That’s
fine – she isn’t responsible for the shooting. But the way that she’s doing it is by falsely presenting herself as the victim in this situation. And to make matters worse, she’s doing that by cluelessly presenting herself as the victim of a historic anti-semitic
slur that falsely accuses Jews of being murderers. She’s trying to distance herself from the attempted murder of a Jewish woman by presenting herself as the victim of an anti-Jewish slur.

I can’t help but look at this as a Jew. She’s exploiting our history of repression, our history of being falsely accused, tortured, and murdered in the name of a lie. My family – my great grandfathers – had to leave their homes, and come to this country with nothing but the clothes on their backs – because if they hadn’t, their families would have been murdered in the name of the blood libel. My maternal great-grandfather, who I actually knew when I was a child, was a wealthy tailor in Russia. When he arrived in the US in 1905 with his wife and three children, they had – literally – one nickel, plus the clothes that they were wearing. My paternal grandfather came by himself, without even the nickel. And the people who he left behind died – some in the pogroms he was fleeing; the rest in the holocaust. The things that have happened to me can’t compare – but even in modern America, I’ve had run-ins with the blood libel. I lived in Ohio for four years as a kid, and as a second grader, I had people asking me where we got the blood for our Matzah.

The blood libel isn’t a joke. It’s a big piece of history, which has been the cause of horrific violence. It’s one of the causes of the holocaust. It’s one of the causes of the murderous pogroms in Russia. It’s one of the causes of numerous rampages and murders throughout the middle ages in Europe. And it’s used today as a political bludgeon against Israel and the Jewish people.

And Sarah Palin wants to claim that people pointing out that she’d drawn crosshairs on the district of a woman who was shot in the head – a Jewish woman who was shot in the head – is blood libel.

She should be ashamed of herself. But she isn’t. She’ll never even come close to understanding why what she did is so wrong. And she, and her followers, will never even care. Because she’s a pathetic, stupid, small-minded, pig-ignorant, amoral, narcissistic twat – and that’s exactly what her followers like about her.

Mental Illness and Responsibility

Something came up in the comments of the post about Mr. Tangent 19 that I meant to turn into a post of its own. Unfortunately, I never quite got around to it. In light of recent events, and the talk about the man who attempted to kill Congresswoman Giffords, I think it’s important to talk about this kind of thing, so I’m resurrecting the in-progress post now.

Quite frequently when I write a post about a particularly odd crank, someone will either comment or email me saying something like the following:

How fine a line is it between being a crank and being mentally ill, how do we differentiate between the two, and how should we individually treat those separate cases?

The gist of this line of reasoning is: the target of this post is obviously mentally ill, so why are you being mean picking on them?

When I look at things like this, I’ve got a rather blunt answer: why does it matter?

In fact, I’ve got an even better blunt answer: Why should it matter?

Over the last few years I’ve learned, from personal experience, what mental illness really means. Personally, I suffer from chronic depression (managed, quite well fortunately, through medication); and I’ve also had a lot of trouble dealing with pretty severe social anxiety. It’s not a lot of fun. But it’s also not relevant to anything I do at work, to anything I write on my blog, to any political or social or religious activity that I participate in.

I’ve learned from some of my friends about bipolar disorder and dissociative disorder. And I’ve got a cousin who is pretty much completely incapacitated by schizophrenia.

I’ve learned a couple of things from those experiences.

First: being mentally ill isn’t a particularly big deal. There’s a good chance that you know a lot of mentally ill people, and if you knew who they were, you’d probably be amazed by just how normal they seem.

Second: there is a terrible stigma associated with mental illness. That stigma is huge, and it colors everything about how we view mental illness and people with mental illness. The way that we look at someone mentally ill and baby them – say that we shouldn’t hold them responsible for what they say and do in public – that’s part of the stigma! And it’s not anything close to benign. As almost anyone with any kind of mental illness can tell you, revealing your illness to your employer or coworkers can completely change the way that you’re treated. You can go from being a go-to person on top of the world to being an absolutely untrustworthy nothing overnight if the wrong person finds out. Nothing changes, except their perceptions: but because of the stigma that says that mentally ill people are irrational and untrustworthy, everything you say, everything you do, can suddenly become questionable and untrustworthy. After all, you’re crazy. (Yes, I speak from bitter experience here.)

Virtually all mentally ill people function as part of society, without people around them even knowing about their illness. But the instant you find out that someone is mentally ill, the instinctive reaction is to say: “This person is mentally ill, therefore they aren’t responsible for anything they say or do” – and as a direct corollary of that: “I can’t trust this person with anything important”. I’ve seen this quite directly in person.

It’s total bullshit. Most mentally ill people are just as responsible, trustworthy, intelligent, and reasonable as people who aren’t mentally ill. Even many people with schizophrenia – one of the most debilitating, hardest to treat mental illnesses out there – can be fully functional, trustworthy, and rational people. I’ll guarantee that every one of you reading this knows someone with a mental illness, and there’s a reasonable chance that there’s someone you know who has schizophrenia, but you don’t know it, because they seem perfectly normal.

The thing is, we could know someone mentally ill for years and never notice anything odd. But for most people, the instant we find out that they’re mentally ill, our attitude changes. Suddenly they’re not trustworthy or responsible: they’re crazy.

If you’re well enough to interact with society, you deserve to be treated as a full member of society. And that includes the negative aspects of being a member of society as well as the positive ones.

In terms of that past post: the author of that piece of crankery is a practicing physician. Perhaps he is mentally ill. But apparently he functions quite well in his day to day life as a doctor – well enough to be able to practice medicine; well enough to be able to make life-or-death decisions about the medical care of his patients. He deserves the respect of being taken seriously. He doesn’t deserve to be pushed off into a bin of crazy people who should be dismissed as not responsible for their actions. If he wants to put his ideas forward, they should be treated just like anyone else’s – whether he’s mentally ill or just stupidly arrogant and ignorant doesn’t matter in the least. It’s none of your or my business whether he’s mentally ill. He’s a responsible adult. And that’s all that we need to know.

The only time that mental illness matters is when someone has something that they can’t control. And that’s very rare. Most mental illnesses don’t affect our ability to be reliable, rational, trustworthy, functional members of society. We’re not incapacitated. We’re not crazy. We’ve just got a chronic illness.

To connect this to the politics of the moment: lots of folks are pointing out that if you look at Giffords’ shooter, at his troubles in school, at his writings in various places on the net, he’s clearly mentally ill, so clearly no one is responsible for what happened.

I’m not a psychiatrist, obviously. Based on his writings, I’d guess that there’s a fair chance that he’s schizophrenic. And that doesn’t matter.

He’s a murderer. He carefully put together and executed a plan for a multiple murder. From everything that we’ve seen and heard, he knew and understood exactly what he was doing. The fact that he’s mentally ill doesn’t change his culpability.

Don’t hold the millions of people who suffer from mental illness responsible for the horrific actions deliberately taken by one individual. And don’t say that this one horrible individual isn’t responsible for what he did.

Representational Crankery: the New Reals and the Dark Number

There’s one kind of crank that I haven’t really paid much attention to on this blog, and that’s the real number cranks. I’ve touched on real number crankery in my little encounter with John Gabriel, and back in the old 0.999…=1 post, but I’ve never really given them the attention that they deserve.

There are a huge number of people who hate the logical implications of our definition of the real numbers, and who insist that those unpleasant complications mean that our concept of real numbers is based on a faulty definition, or even that the whole concept of real numbers is ill-defined.

This is an underlying theme of a lot of Cantor crankery, but it goes well beyond that. And the basic problem underlies a lot of bad mathematical arguments. The root of this particular problem comes from a confusion between the representation of a number, and that number itself. “1/2” isn’t a number: it’s a notation that we understand refers to the number that you get by dividing one by two.

There’s a similar form of looniness that you get from people who dislike the set-theoretic construction of numbers. In classic set theory, you can construct the set of natural numbers by starting with the empty set, which is used as the representation of 0. Then the set containing the empty set is the value 1 – so 1 is represented as { 0 }. Then 2 is represented as { 1, 0 }; 3 as { 2, 1, 0 }; and so on. (There are several variations of this, but this is the basic idea.) You’ll see arguments from people who dislike this saying things like “This isn’t a construction of the natural numbers, because you can take the intersection of 8 and 3, and set intersection is meaningless on numbers.” The problem with that is the same as the problem with the notational crankery: the set theoretic construction doesn’t say “the empty set is the value 0”, it says “in a set theoretic construction, the empty set can be used as a representation of the number 0.”
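
You can play with this construction directly. Here's a small Python sketch (mine, just for illustration) using frozensets, where each number n+1 is represented as n ∪ {n}:

```python
# The set-theoretic construction described above, with frozensets:
# 0 is the empty set, and each successor n+1 is n ∪ {n}.
def von_neumann(n):
    s = frozenset()
    for _ in range(n):
        s = s | frozenset([s])
    return s

zero, one, two = von_neumann(0), von_neumann(1), von_neumann(2)
assert one == frozenset([zero])       # 1 is { 0 }
assert two == frozenset([zero, one])  # 2 is { 1, 0 }

# Set intersection applies to the *representations*: here it happens to
# pick out the smaller number, even though "intersection" is meaningless
# as an operation on the numbers themselves.
assert von_neumann(8) & von_neumann(3) == von_neumann(3)
print(len(von_neumann(8) & von_neumann(3)))  # 3
```

The last assertion is exactly the "intersection of 8 and 3" complaint: the operation is perfectly well-defined on the representations, and says nothing at all about numbers.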

The particular version of this crankery that I’m going to focus on today is somewhat related to the inverse-19 loonies. If you recall their monument, the plaque talks about how their work was praised by a math professor by the name of Edgar Escultura. Well, it turns out that Escultura himself is a bit of a crank.

The specific manifestation of his crankery is this representational issue. But the root of it is really related to the discomfort that many people feel at some of the conclusions of modern math.

A lot of what we learned about math has turned out to be non-intuitive. There’s Cantor, and Gödel, of course: there are lots of different sizes of infinities; and there are mathematical statements that are neither true nor false. And there are all sorts of related things – for example, the whole idea of undescribable numbers. Undescribable numbers drive people nuts. An undescribable number is a number which has the property that there’s absolutely no way that you can write it down, ever. Not that you can’t write it in, say, base-10 decimals, but that you can’t ever write down anything, in any form, that uniquely describes it. And, it turns out, the vast majority of numbers are undescribable.
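
The counting argument behind this is simple: every description is a finite string over a finite alphabet, and those strings can be listed one after another, so there are only countably many of them – while, by Cantor's diagonal argument, the reals are uncountable. Here's a tiny Python sketch of that enumeration (the two-letter alphabet is mine, just to keep the listing short):

```python
# Every finite description is a finite string over a finite alphabet,
# and those strings can be enumerated shortest-first: there are only
# countably many possible descriptions.
from itertools import count, product

def all_strings(alphabet="ab"):
    """Yield every finite string over the alphabet, shortest first."""
    for length in count(1):
        for chars in product(alphabet, repeat=length):
            yield "".join(chars)

gen = all_strings()
print([next(gen) for _ in range(6)])  # ['a', 'b', 'aa', 'ab', 'ba', 'bb']
```

Since every possible description appears somewhere in this list, the describable numbers form (at most) a countable set – leaving uncountably many reals with no description at all.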

This leads to the representational issue. Many people insist that if you can’t represent a number, that number doesn’t really exist. It’s nothing but an artifact of a flawed definition. Therefore, by this argument, those numbers don’t exist; the only reason that we think that they do is because the real numbers are ill-defined.

This kind of crackpottery isn’t limited to stupid people. Professor Escultura isn’t a moron – but he is a crackpot. What he’s done is take the representational argument, and run with it. According to him, the only real numbers are numbers that are representable. What he proposes is very nearly a theory of computable numbers – but he tangles it up in the representational issue. And in a fascinatingly ironic turn-around, he takes the artifacts of representational limitations, and insists that they represent real mathematical phenomena – resulting in an ill-defined number theory as a way of correcting what he alleges is an ill-defined number theory.

His system is called the New Real Numbers.

In the New Real Numbers, which he notates as R^*, the decimal notation is fundamental. The set of new real numbers consists exactly of the set of numbers with finite representations in decimal form. This leads to some astonishingly bizarre things. From his paper:

3) Then the inverse operation to multiplication called division; the result of dividing a decimal by another if it exists is called quotient provided the divisor is not zero. Only when the integral part of the devisor is not prime other than 2 or 5 is the quotient well defined. For example, 2/7 is ill defined because the quotient is not a terminating decimal (we interpret a fraction as division).

So 2/7ths is not a new real number: it’s ill-defined. 1/3 isn’t a real number: it’s ill-defined.
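The rule he’s groping toward is the standard one: a reduced fraction has a terminating decimal expansion exactly when its denominator has no prime factors other than 2 and 5. That test is easy to code up (a sketch of my own, not anything from his paper):

```python
from fractions import Fraction

def terminates(frac: Fraction) -> bool:
    # A reduced fraction terminates in base 10 exactly when its
    # denominator factors entirely into 2s and 5s.
    q = frac.denominator
    for p in (2, 5):
        while q % p == 0:
            q //= p
    return q == 1

print(terminates(Fraction(2, 7)))   # False -> "ill-defined" for Escultura
print(terminates(Fraction(1, 3)))   # False -> also banished
print(terminates(Fraction(3, 8)))   # True  -> 0.375, a "new real number"
```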

4) Since a decimal is determined or well-defined by its digits, nonterminating decimals are ambiguous or ill-defined. Consequently, the notion irrational is ill-defined since we cannot cheeckd all its digits and verify if the digits of a nonterminaing decimal are periodic or nonperiodic.

After that last one, this isn’t too surprising. But it’s still absolutely amazing. The square root of two? Ill-defined: it doesn’t really exist. e? Ill-defined: it doesn’t exist. \pi? Ill-defined: it doesn’t really exist. All of those triangles, circles, everything that depends on them? They’re all bullshit according to Escultura. Because if he can’t write them down on a piece of paper in decimal notation in a finite amount of time, they don’t exist.

Of course, this is entirely too ridiculous, so he backtracks a bit, and defines a non-terminating decimal number. His definition is quite peculiar. I can’t say that I really follow it. I think this may be a language issue – Escultura isn’t a native English speaker. I’m not sure which parts of this are crackpottery, which are linguistic struggles, and which are notational difficulties in reading math rendered as plain text.

5) Consider the sequence of decimals,

(d)^na_1a_2…a_k, n = 1, 2, …, (1)

where d is any of the decimals, 0.1, 0.2, 0.3, …, 0.9, a_1, …, a_k, basic integers (not all 0 simultaneously). We call the nonstandard sequence (1) d-sequence and its nth term nth d-term. For fixed combination of d and the a_j’s, j = 1, …, k, in (1) the nth term is a terminating decimal and as n increases indefinitely it traces the tail digits of some nonterminating decimal and becomes smaller and smaller until we cannot see it anymore and indistinguishable from the tail digits of the other decimals (note that the nth d-term recedes to the right with increasing n by one decimal digit at a time). The sequence (1) is called nonstandard d-sequence since the nth term is not standard g-term; while it has standard limit (in the standard norm) which is 0 it is not a g-limit since it is not a decimal but it exists because it is well-defined by its nonstandard d-sequence. We call its nonstandard g-limit dark number and denote by d. Then we call its norm d-norm (standard distance from 0) which is d > 0. Moreover, while the nth term becomes smaller and smaller with indefinitely increasing n it is greater than 0 no matter how large n is so that if x is a decimal, 0 < d < x.

I think that what he’s trying to say there is that a non-terminating decimal is a sequence of finite representations that approach a limit. So there’s still no real infinite representations – instead, you’ve got an infinite sequence of finite representations, where each finite representation in the sequence can be generated from the previous one. This bit is why I said that this is nearly a theory of the computable numbers. Obviously, undescribable numbers can’t exist in this theory, because you can’t generate this sequence.
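If I’m reading him right, this resembles the usual way a computable real is presented: as the sequence of its finite decimal truncations. Here’s a quick Python sketch of that reading (my interpretation, not his notation), using \sqrt{2} as the example – each term is a terminating decimal, and the “number” is identified with the whole sequence:

```python
from decimal import Decimal, getcontext

# Successive finite decimal truncations of sqrt(2): each term is a
# terminating decimal, generated from scratch at one more digit of
# precision than the last.
def sqrt2_approximations(n_terms):
    approx = []
    for digits in range(1, n_terms + 1):
        getcontext().prec = digits + 1  # significant digits for this term
        approx.append(str(Decimal(2).sqrt()))
    return approx

print(sqrt2_approximations(4))  # ['1.4', '1.41', '1.414', '1.4142']
```

For an undescribable number, there’s no rule at all that generates the next term from the previous ones – which is exactly why such numbers can’t exist in a theory like this.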

Where this really goes totally off the rails is that throughout this, he’s working on the assumption that there’s a one-to-one relationship between representations and numbers. That’s what that “dark number” stuff is about. You see, in Escultura’s system, 0.999999… is not equal to one. It’s not a representational artifact. In Escultura’s system, there are no representational artifacts: the representations are the numbers. The “dark number”, which he notates as d^*, is (1-0.99999999…) and is the smallest number greater than 0. And you can generate a complete ordered enumeration of all of the new real numbers, {0, d^*, 2d^*, 3d^*, ..., n-2d^*, n-d^*, n, n+d^*, ...}.

Reading Escultura, every once in a while, you might think he’s joking. For example, he claims to have disproven Fermat’s last theorem. Fermat’s theorem says that for n>2, there are no positive integer solutions for the equation x^n + y^n = z^n. Escultura says he’s disproven this:

The exact solutions of Fermat’s equation, which are the counterexamples to FLT, are given by the triples (x,y,z) = ((0.99…)10^T,d*,10^T), T = 1, 2, …, that clearly satisfies Fermat’s equation,

x^n + y^n = z^n, (4)

for n = NT > 2. Moreover, for k = 1, 2, …, the triple (kx,ky,kz) also satisfies Fermat’s equation. They are the countably infinite counterexamples to FLT that prove the conjecture false. One counterexample is, of course, sufficient to disprove a conjecture.

Even if you accept the reality of the notational artifact d^*, this makes no sense: the point of Fermat’s last theorem is that there are no integer solutions; d^* is not an integer; (1-d^*)10^T is not an integer. Surely he’s not that stupid. Surely he can’t possibly believe that he’s disproven Fermat using non-integer solutions? I mean, how is this different from just claiming that you can use (2, 3, 35^{1/3}) as a counterexample for n=3?
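To make the analogy concrete, here’s the trick in Python (my toy illustration): pick any integers x and y you like, and simply define z to be whatever makes the equation balance. You always get a “solution” – it’s just never an integer one, so it says nothing about Fermat:

```python
# Pick integer x and y, then force z to satisfy x**n + y**n == z**n.
x, y, n = 2, 3, 3
z = (x**n + y**n) ** (1.0 / n)  # cube root of 35, about 3.27107

print(z)            # not an integer...
print(z == int(z))  # False: so this is no counterexample to FLT
```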

But… he’s serious. He’s serious enough that he’s actually published a real paper making the claim (albeit in crackpot journals, which are the only places that would accept this rubbish).

Anyway, jumping back for a moment… You can create a theory of numbers around this d^* rubbish. The problem is, it’s not a particularly useful theory. Why? Because it breaks some of the fundamental properties that we expect numbers to have. The real numbers define a structure called a field, and a huge amount of what we really do with numbers is built on the fundamental properties of the field structure. One of the necessary properties of a field is that it has unique identity elements for addition and multiplication. If you don’t have unique identities, then everything collapses.

So… Take \frac{1}{9}. That’s the multiplicative inverse of 9. So, by definition, \frac{1}{9}*9 = 1 – the multiplicative identity.

In Escultura’s theory, \frac{1}{9} is a shorthand for the number that has a representation of 0.1111…. So, \frac{1}{9}*9 = 0.1111....*9 = 0.9999... = (1-d^*). So (1-d^*) is also a multiplicative identity. By a similar process, you can show that d^* itself must be the additive identity. So either d^* = 0, or else you’ve lost the field structure, and with it, pretty much all of real number theory.
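You can watch the collapse numerically. In exact rational arithmetic (my sketch, nothing from his paper), each partial product 0.1·9, 0.111·9, 0.111111·9, … is exactly 1 - 10^{-n}, and the gap to 1 shrinks to zero – so in the standard reals, \frac{1}{9}\cdot 9 is exactly 1:

```python
from fractions import Fraction

# Partial products of 0.111... * 9, computed exactly: each one is
# 1 - 10**-n, so the gap to 1 is precisely 1/10**n.
for n in (1, 3, 6):
    one_ninth_truncated = Fraction(10**n - 1, 9 * 10**n)  # 0.1, 0.111, ...
    partial_product = 9 * one_ninth_truncated
    print(n, partial_product, 1 - partial_product)
    assert 1 - partial_product == Fraction(1, 10**n)
```

Refuse to identify 0.999… with 1, and both 1 and (1-d^*) end up behaving as multiplicative identities – exactly what the field axioms forbid.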

It's MathematicS, not Mathematic

As you may have noticed, the crank behind the “Inverse 19” rubbish in my Loony Toony Tangents post has shown up in the comments. And of course, he’s also peppering me with private mail.

Anyway… I don’t want to belabor his lunacy, but there is one thing that I realized that I didn’t mention in the original post, and which is a common error among cranks. Let me focus on a particular quote. From his original email (with punctuation and spacing corrected; it’s too hard to preserve his idiosyncratic lunacy in HTML), focus on the part that I’ve highlighted in italics:

I feel that with our -1 tangent mathematics, and the -1 tangent configuration, with proper computer language it will be possible to detect even the tiniest leak of nuclear energy from space because this mathematics has two planes. I can show you the -1 configuration, it is a inverse curve

Or from his latest missive:

thus there are two planes in mathematics , one divergent at value 4 and one convergent at value 3 both at -1 tangent(3:4 equalization). So when you see our prime numbers , they are the first in history to be segregated by divergence in one plane , and convergence in the other plane. A circle is the convergence of an open square at 8 points, 4/3 at 8Pi

One of the things that crackpots commonly believe is that all of mathematics is one thing. That there’s one theory of numbers, one geometry, one unified concept of these things that underlies all of mathematics. As he says repeatedly, what makes his math correct where our math is wrong is that there are two planes for his numbers, where there’s one for ours.

The fundamental error in there is the assumption that there is just one math. That all of math is Euclidean geometry, or that all of math is real number theory, or that real number theory and Euclidean geometry are really one and the same thing.

That’s wrong.

Continue reading