Category Archives: Bad Math

Free Energy Crankery and Negative Mass Nonsense

I’ve got a couple of pet peeves.

As the author of this blog, the obvious one is bad math. And as I always say, the worst math is no math.

Another pet peeve of mine is free energy. Energy is, obviously, a hugely important thing to our society. And how we’re going to get the energy we need is a really serious problem – almost certainly the biggest problem that we face. Even if you’ve convinced yourself that global warming isn’t an issue, energy is a major problem. There’s only so much coal and oil that we can dig up – someday, we’re going to run out.

But there are tons of frauds out there who’ve created fake machines that they claim you can magically get energy from. And there are tons of cranks who are all-too-ready to believe them.

Take, for example, Bob Koontz.

Koontz is a guy with an actual physics background – he got his PhD at the University of Maryland. I’m very skeptical that he’s actually stupid enough to believe in what he’s selling – but nonetheless, he’s made a donation-based business out of selling his “free energy” theory.

So what’s his supposed theory?

It sounds impossible, but it isn’t. It is possible to obtain an unlimited amount of energy from devices which essentially only require that they be charged up with negative mass electrons and negative mass positrons. Any physicist should be able to convince himself of this in a matter of minutes. It really is simple: While ordinary positive mass electrons in a circuit consume power, negative mass electrons generate power. Why is that? For negative mass electrons and negative mass positrons, Newton’s second law, F = ma becomes F = -ma.

But acquiring negative mass electrons and negative mass electrons is not quite as simple as it sounds. They are exotic particles that many physicists may even doubt exist. But they do exist. I am convinced of this — for good reasons.

The Law of Energy Conservation

The law of energy conservation tells us that the total energy of a closed system is constant. Therefore, if such a system has an increase in positive energy, there must be an increase in negative energy. The total energy stays constant.

When you drop an object in the earth’s gravitational field, the object gains negative gravitational potential energy as it falls — with that increase in negative energy being balanced by an increase of positive energy of motion. But the object does not lose or gain total energy as it falls. It gains kinetic energy while it gains an equal amount of negative gravitational energy.

How could we have free energy? If we gain positive energy, we must also generate negative energy in exactly the same amount. That will “conserve energy,” as physicists say. In application, in the field of “free energy,” that means generating negative energy photons and other negative energy particles while we get the positive energy we are seeking. What is the problem, then? The problem involves generating the negative energy particles.


So… there are, supposedly, “negative energy” particles that correspond to electrons and positrons. These particles have never been observed, and under normal circumstances, they have no effect on any observable phenomenon.

But, we’re supposed to believe, they really exist. And that means that we can get free energy without violating the conservation of energy – because the creation of an equal amount of invisible, undetectable, effectless negative energy balances out whatever positive energy we create.

So what is negative energy?

That’s where the bad math comes in. Here’s his explanation:

When Paul Dirac, the Nobel prize-winning physicist was developing the first form of relativistic quantum mechanics he found it necessary to introduce the concept of negative mass electrons. This subsequently led Dirac to develop the idea that a hole in a sea of negative mass electrons corresponded to a positron, otherwise known as an antielectron. Some years later the positron was observed and Dirac won the Nobel prize.

Subsequent to the above, there appears to have been no experimental search for these negative mass particles. Whether or not negative mass electrons and negative mass positrons exist is thus a question to which we do not yet have an answer. However, if these particles do exist, their unusual properties could be exploited to produce unlimited amounts of energy — as negative mass electrons and negative mass positrons, when employed in a circuit, produce energy rather than consume it. Newton’s 2nd law F = ma becomes F = – ma and that explains why negative mass electrons and negative mass positrons produce energy rather than consume it. I believe that any good physicist should be able to see this quite quickly.

The following paragraph is actually wrong. There is such a thing as relativistic quantum mechanics. QM and special relativity are compatible, and relativistic QM sits at that intersection. Unifying general relativity and QM remains an unsolved problem, as discussed below. I’m leaving the original paragraph, because it seems dishonest to just delete it, as if I were pretending that I never screwed up.

There is no such thing as relativistic quantum mechanics. One of the great research areas of modern physics is the attempt to figure out how to unify quantum mechanics and relativity. Many people have tried to find a unifying formulation, but no one has yet succeeded. There is no theory of relativistic QM.

It’s actually a fascinating subject. General relativity seems to be true: every test that we can dream up confirms GR. And quantum mechanics also appears to be true: every test that we can dream up confirms the theory of quantum mechanics. And yet, the two are not compatible.

No one has been able to solve this problem – not Dirac, not anyone.

Even within the Dirac bit… there is a clever bit of sleight of hand. He starts by saying that Dirac proposed that there were “negative mass” electrons. Dirac did propose something like that – but the proposal was within the frame of mathematics. Without knowing about the existence of the positron, he worked through the implications of relativity, and wound up with a model which could be interpreted as a sea of “negative mass” electrons with holes in it. The holes are positrons.

To get a sense of what this means, it’s useful to pull out a metaphor. In semiconductor physics, when you’re trying to describe the behavior of semiconductors, it’s often useful to talk about things backwards. Instead of talking about how the electrons move through a semiconductor, you can talk about how electron holes move. An electron hole is a “gap” where an electron could move. Instead of an electron moving from A to B, you can talk about an electron hole moving from B to A.

The Dirac derivation is a similar thing. The real particle is the positron. But for some purposes, it’s easier to discuss it backwards: assume that all of space is packed, saturated, with “negative mass” electrons. But there are holes moving through that space. A hole in a “negative mass”, negatively charged field is equivalent to a particle with positive mass and positive charge in an empty, uncharged space – a positron.

The catch is that you need to pick your abstraction. If you want to use the space-saturated-with-negative-mass model, then the positron doesn’t exist. You’re looking at a model in which there is no positron – there is just a gap in the space of negative-mass particles. If you want to use the model with a particle called a positron, then the negative mass particles don’t exist.

So why haven’t we been searching for negative-mass particles? Because they don’t exist. That is, we’ve chosen the model of reality which says that the positron is a real particle. Or to be slightly more precise: we have a very good mathematical model of many aspects of reality. In that model, we can choose to interpret it as either a model in which the positive-mass particles really exist and the negative-mass particles exist only as an absence of particles; or we can interpret it as saying that the negative-mass particles exist, and the positive mass ones exist only as an absence of negative-mass particles. In either case, that model provides an extremely good description of what we observe about reality. But that model does not predict that both the positive and negative mass particles both really exist in any meaningful sense. By observing and calculating the properties of the positive mass particles, we adopt the interpretation that positive mass particles really exist. Every observation that we make of the properties of positive mass particles is implicitly an observation of the properties of negative-mass particles. The two interpretations are mathematical duals.

Looking at his background and at other things on his site, I think that Koontz is, probably, a fraud. He’s not dumb enough to believe this. But he’s smart enough to realize that there are lots of other people who are dumb enough to believe it. Koontz has no problem with pandering to them in the name of his own profit. What finally convinced me of that was his UFO-sighting claim here. Absolutely pathetic.

Facts vs Beliefs

One of the things about current politics that continually astonishes me is the profound lack of respect for reality demonstrated by so many of the people who want to be in charge of our governments.

Personally, I’m very much a liberal. I lean way towards the left-end of the political spectrum. But for the purposes of this discussion, that’s irrelevant. I’m not talking about whether people are proposing the right policy, or the right politics. What I’m concerned with is the way that they don’t seem to accept the fact that there are facts. Not everything is a matter of opinion. Some things are just undeniable facts, and you need to deal with them as they are. The fact that you don’t like them is just irrelevant. As the old saying goes, you’re entitled to your own opinion, but you’re not entitled to your own facts.

I saw a particularly vivid example of this last week, but didn’t have a chance to write it up until today. Rick Perry was presenting his proposal for how to address the problems of the American economy, particularly the dreadfully high unemployment rate. He claims that his policy will, if implemented, create 2.5 million jobs over the next four years.

The problem with that, as a proposal, is that in America, due to population growth, just to break even in employment, we need to add 200,000 jobs per month – that’s how fast the pool of employable people is growing. So we need to add over two million jobs per year just to keep unemployment from rising. In other words, Perry is proposing a policy that will, according to his (probably optimistic, if he’s a typical politician) estimate, result in increasing unemployment.

This is, obviously, bad.

But here’s where he goes completely off the rails.

Chris Wallace: “But how do you answer this question? Two and a half million jobs doesn’t even keep pace with population growth. Our unemployment rate would increase under this goal.”

Rick Perry: “I don’t believe that for a minute. It’s just absolutely false on its face. Americans will get back to work.”

That’s just blatant, stupid idiocy.

The employable population is growing. This is not something debatable. This is not something that you get to choose to believe or not to believe. This is just reality.

If you add 2.5 million jobs, and the population of employable workers seeking jobs grows by 4 million people, then the unemployment rate will get worse. That’s simple arithmetic. It’s not politics, it’s not debatable, and it has nothing to do with what Rick Perry, or anyone else, believes. It’s a simple fact.
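The arithmetic takes two lines to check (a Python sketch, taking the 200,000-jobs-per-month labor-force growth figure cited above as given):

```python
# Break-even arithmetic for the jobs claim. The labor-force growth figure
# is the one quoted above; the jobs total is the campaign's own number.
labor_force_growth_per_month = 200_000
months = 4 * 12                       # the four-year horizon of the claim
jobs_needed = labor_force_growth_per_month * months
jobs_promised = 2_500_000
shortfall = jobs_needed - jobs_promised

print(f"{jobs_needed:,} jobs needed just to break even")   # 9,600,000
print(f"{shortfall:,} net additional unemployed")          # 7,100,000
```

Even granting the campaign its own optimistic number, the proposal comes up millions of jobs short of standing still.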

The fact that a candidate for president can just wave his hands and deny reality – and that that isn’t treated as a disqualifying error – is simply shocking.

Yet Another Cantor Crank

I get a fair bit of mail from crackpots. The category that I find most annoying is the Cantor cranks. Over and over and over again, these losers send me their “proofs”.

What bugs me so much about this is how shallowly wrong they are.

What Cantor did was remarkably elegant. He showed that, given anything claimed to be a one-to-one mapping between the set of integers and the set of real numbers (also sometimes described as an enumeration of the real numbers – the two terms are functionally equivalent), there’s a simple procedure which will produce a real number that isn’t included in that mapping – which shows that the mapping isn’t the complete correspondence it was claimed to be.

The problem with the run-of-the-mill Cantor crank is that they never even try to actually address Cantor’s proof. They just say “look, here’s a mapping that works!”

So the entire disproof of their “refutation” of Cantor’s proof is… Cantor’s proof. They completely ignore the thing that they’re claiming to disprove.

I got another one of these this morning. It’s particularly annoying because he makes the same mistake as just about every other Cantor crank – but he also specifically points to one of my old posts where I rant about people who make exactly the same mistake as him.

To add insult to injury, the twit insisted on sending me a PDF – and not just a PDF, but a bitmapped PDF – meaning that I can’t even copy text out of it. So I can’t give you a link; I’m not going to waste Scientopia’s bandwidth by putting it here for download; and I’m not going to re-type his complete text. But I’ll explain, in my own compact form, what he did.

It’s an old trick; for example, it’s ultimately not that different from what John Gabriel did. The only real novelty is that he does it in binary – which isn’t much of a novelty. This author calls it the “mirror method”. The idea is, in one column, write a list of the integers greater than 0. In the opposite column, write the mirror of that number, with the decimal (or, technically, binary) point in front of it:

Integer Real
0 0.0
1 0.1
10 0.01
11 0.11
100 0.001
101 0.101
110 0.011
111 0.111
1000 0.0001

Extend that out to infinity, and, according to the author, the second column is a sequence of every possible real number, and the table is a complete mapping.
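The whole construction fits in a couple of lines of Python (a sketch; `mirror` is my name for it, not the author’s):

```python
# A sketch of the "mirror method": reverse the binary digits of n
# and put them after the binary point.
def mirror(n: int) -> str:
    return "0." + format(n, "b")[::-1]

for n in range(9):
    print(format(n, "b"), mirror(n))

# Every output terminates: mirror(n) has exactly as many fractional
# digits as n has bits, and n always has finitely many bits.
```

That terminating property is exactly where the scheme falls apart, as we’ll see.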

The problem is, it doesn’t work, for a remarkably simple reason.

There is no such thing as an integer whose representation requires an infinite number of digits. For every possible integer, its representation in binary has a fixed number of bits: for any integer N, its representation is no longer than ⌊log₂(N)⌋ + 1 bits. That’s always a finite number.

But… we know that the set of real numbers includes numbers whose representation is infinitely long. So this enumeration won’t include them. Where does the square root of two fall in this list? It doesn’t: it can’t be written as a finite string in binary. Where is π? It’s nowhere; there’s no finite representation of π in binary.
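In fact, every number the reversed-bits construction can ever produce is a dyadic rational – a fraction whose denominator is a power of two (a sketch; `mirror_value` is my name for the crank’s construction, not his):

```python
from fractions import Fraction

# Every entry produced by reversing the bits of an integer n is a dyadic
# rational k / 2**m, where m = n.bit_length() -- which is always finite.
def mirror_value(n: int) -> Fraction:
    bits = format(n, "b")[::-1]
    return Fraction(int(bits, 2), 2 ** len(bits))

print(mirror_value(6))       # 3/8, i.e. 0.011 in binary
print((6).bit_length())      # 3 = floor(log2(6)) + 1, always finite

# 1/3 = 0.010101... in binary never terminates, so no integer maps to it:
assert all(mirror_value(n) != Fraction(1, 3) for n in range(10_000))
```

Not just irrationals like √2 and π are missing; even a simple rational like 1/3 never shows up anywhere in the table.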

The author claims that the novel property of his method is:

Cantor proved the impossibility of both our enumerations as follows: for any given enumeration like ours Cantor proposed his famous diagonal method to build the contra-sample, i.e., an element which is quasi omitted in this enumeration. Before now, everyone agreed that this element was really omitted as he couldn’t tell the ordinal number of this element in the give enumeration: now he can. So Cantor’s contra-sample doesn’t work.

This is, to put it mildly, bullshit.

First of all – he pretends that he’s actually addressing Cantor’s proof – only he really isn’t. Remember – what Cantor’s proof did was show that, given any purported enumeration of the real numbers, you can construct a real number that isn’t in that enumeration. So what our intrepid author did was say “Yeah, so, if you do Cantor’s procedure, and produce a number which isn’t in my enumeration, then I’ll tell you where that number actually occurred in our mapping. So Cantor is wrong.”

But that doesn’t actually address Cantor. Cantor’s construction specifically shows that the number it constructs can’t be in the enumeration – because the procedure specifically guarantees that it differs from every number in the enumeration in at least one digit. So it can’t be in the enumeration. If you can’t show a logical problem with Cantor’s construction, then any argument like the author’s is, simply, a priori rubbish. It’s just handwaving.

But as I mentioned earlier, there’s an even deeper problem. Cantor’s method produces a number which has an infinitely long representation. So the earlier problem – that all integers have a finite representation – means that you don’t even need to resort to anything as complicated as Cantor to defeat this. If your enumeration doesn’t include any infinitely long fractional values, then it’s absolutely trivial to produce values that aren’t included: 1/3, 1/7, 1/9.
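For what it’s worth, the diagonal construction itself is only a few lines (a sketch in Python, working over a finite prefix with float-precision binary digits, so it illustrates the idea rather than handling true infinite precision):

```python
from itertools import islice

def digit(x: float, n: int) -> int:
    """The n-th binary digit of x in [0, 1), counting from 1."""
    return int(x * 2 ** n) % 2

# Cantor's construction: flip the n-th digit of the n-th number, so the
# result differs from every listed number in at least one digit.
def diagonal(enumeration, n_digits: int) -> str:
    flipped = [1 - digit(x, i + 1)
               for i, x in enumerate(islice(enumeration, n_digits))]
    return "0." + "".join(map(str, flipped))

print(diagonal([0.0, 0.5, 0.25, 0.75], 4))   # 0.1111 -- in none of the four
```

Feed it any list you like; by construction the output disagrees with the first number in the first digit, the second in the second, and so on.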

In short: stupid, dull, pointless; absolutely typical Cantor crankery.

I get mail: Brown's Gas and Perpetual Motion

In the past, I’ve written about free-energy cranks like Tom Bearden, and I’ve made several allusions to the “Brown’s gas” crackpots. But I’ve never actually written in any detail about the latter.

Brown’s gas is a term used primarily by cranks for oxyhydrogen gas. Oxyhydrogen is a mixture of hydrogen and oxygen in a two-to-one molar ratio; in other words, it’s exactly the product of electrolysis to break water molecules into hydrogen and oxygen. It’s used as the fuel for several kinds of torches and welders. It’s become a lot less common, because for most applications, it’s just not as practical as things like acetylene torches, TIG welders, etc.

But for free-energy cranks, it’s a panacea.

You see, the beautiful thing about Brown’s gas is that it burns very nicely, it can be compressed well enough to produce a very respectable energy density, and when you use it, its only exhaust gas is water. If you look at it naively, that makes it absolutely wonderful as a fuel.

The problem, of course, is that it costs energy to produce it. You need to pump energy into water to divide it into hydrogen and oxygen; and then you need to use more energy to compress it in order to make it useful. Still, there are serious people who are working hard on things like hydrogen fuel cell power sources for cars – because it is an attractive fuel. It’s just not a panacea.

But the cranks… Ah, the cranks. The cranks believe that if you just find the right way to burn it, then you can create a perfect source of free energy. You see, if you can just burn it so that it produces a teeny, tiny bit more energy than it cost to produce, then you’ve got free energy. You just run an engine – it keeps dividing the water into hydrogen and oxygen, and then you burn it, producing more energy than you spent to divide it; and the only by-product is water vapor!

Of course, this doesn’t work. Thermodynamics fights back: you can’t get more energy out of recombining atoms of hydrogen and oxygen than you spent splitting molecules of water to get that hydrogen and oxygen. It’s very simple: there’s a certain amount of energy latent in that chemical bond. You need to pump in a certain amount of energy to break it – about 16 kilojoules per gram of water. When you burn hydrogen and oxygen to produce water, you get exactly that amount of energy back. It’s a state transition – it’s the same distance up as it is back down. It’s like lifting a weight up a step on a staircase: it takes a certain amount of energy to move the weight up one step. When you drop it back down, it won’t produce more energy falling than you put in to lift it.
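You can do the bookkeeping yourself, starting from the standard enthalpy of formation of liquid water (285.8 kJ/mol is the textbook value; the variable names here are mine):

```python
# Energy bookkeeping for splitting and re-burning water.
ENTHALPY_KJ_PER_MOL = 285.8   # energy to split one mole of H2O (textbook value)
MOLAR_MASS_WATER = 18.015     # grams of water per mole

split_cost_per_gram = ENTHALPY_KJ_PER_MOL / MOLAR_MASS_WATER
burn_yield_per_gram = ENTHALPY_KJ_PER_MOL / MOLAR_MASS_WATER  # same bond, same energy

print(round(split_cost_per_gram, 1))               # ~15.9 kJ per gram of water
print(burn_yield_per_gram - split_cost_per_gram)   # 0.0 -- no surplus, ever
```

The split cost and the burn yield are the same quantity by definition; any real apparatus only adds losses on top.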

But the Brown’s gas people won’t let that stop them!

Here’s an email I received yesterday from a Brown’s gas fan, who noticed one of my old criticisms of it:

Hi Mark,

My name is Stefan, and I recently came across your analysis regarding split water technology to power vehicle. You are trying to proof that it makes no sense because it is against the physic low of energy conservation?

There is something I would like to ask you, if you could explain to me. What do you think about the sail boat zigzagging against the wind? Is it the classical example of perpetual motion?

If so, I believe that the energy conversion law is not always applicable, and even maybe wrong? Using for example resonance you can destroy each constructions with little force, the same I believe is with membrane HHO technology at molecular level?

Is it possible that we invented the law of impossibility known as the Energy Conservation Law and this way created such limitation? If you have some time please answer me what do you think about it? This World as you know is mostly unexplainable, and maybe we should learn more to better understand how exactly the Universe work?

The ignorance in this is absolutely astonishing. And it’s pretty typical of my experience with the Brown’s gas fans. They’re so woefully ignorant of simple math and physics.

Let’s start with his first question, about sailboat tacking. That’s got some interesting connections to my biggest botch on this blog, my fouled up debunking of the downwind-faster-than-the-wind vehicle.

The tacking sailboat is a really interesting problem. When you think about it naively, it seems like it shouldn’t be possible. If you let a leaf blow in the wind, it can’t possibly move faster than the wind. So how can a sailboat do it?

The answer to that is that the sailboat isn’t a free body floating in the wind. It’s got a body and keel in the water, and a sail in the air. What it’s doing is exploiting that difference in motion between the water and the air, and extracting energy. Mathematically, the water behaves as a source of tension, resisting the pressure of the wind against the sail, and converting it into motion in a different direction. Lift the body of the sailboat out of the water, and it can’t do that anymore. Similarly, a boat can’t accelerate by “tacking” against the water current unless it has a sail. It needs the two parts in different domains; then it can, effectively, extract energy from the difference between the two. But the most important point about a tacking sailboat – more important than the details of the mechanism that it uses – is that there’s no energy being created. The sailboat is extracting kinetic energy from the wind, and converting it into kinetic energy in the boat. There’s no energy being created or destroyed – just moved around. Every bit of energy that the boat acquires (plus some extra) was removed from the wind.

So no, a sailboat isn’t an example of perpetual motion. It’s just a very typical example of moving energy around from one place to another. The sun heats the air/water/land; that creates wind; wind pushes the boat.

Similarly, he botches the resonance example.

Resonance is, similarly, a fascinating phenomenon, but it’s one that my correspondent totally fails to comprehend.

Resonance isn’t about a small amount of energy producing a large effect. It’s about how a small amount of energy applied over time can add up to a large amount of energy.

There is, again, no energy being created. The resonant system is not producing energy. A small amount of energy is not doing anything more than a small amount of energy can always do.

The difference is that in the right conditions, energy can add in interesting ways. Think of a spring with a weight hanging on the end. If you apply a small steady upward force on the weight, the spring will move upward a small distance. When you release the force, the weight will fall to slightly below its apparent start point, and then start to come back up. It will bounce up and down until friction stops it.

But now… if, at the moment when it hits its highest position, you give it another tiny push, it will move a bit higher, and its bounce distance will be longer. If every time, exactly as it hits its highest point, you give it another tiny push, then each cycle, it will move a little bit higher. And by repeatedly applying tiny forces at the right time, the forces add up, and you get a lot of motion in the spring.

The key is, how much? And the answer is: take all of the pushes that you gave it, and add them up. The motion that you got from the resonant pattern is exactly the same as the motion you’d get if you applied the summed force all at once. (Or, actually, you’d get slightly more from the summed force; you lost some to friction in the resonant scenario.)
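You can watch the pushes add up in a toy simulation (a sketch, assuming an idealized frictionless spring with unit frequency; `simulate` and its kick scheme are my own construction, not anything from the email):

```python
import math

# An undamped spring-mass system (omega = 1), given a tiny velocity kick
# once per cycle, exactly in phase with its motion. The kicks add, so ten
# small kicks build the same amplitude as one kick ten times as large.
def simulate(kick: float, n_kicks: int, dt: float = 1e-3) -> float:
    x, v = 0.0, 0.0
    for _ in range(n_kicks):
        v += kick                  # the tiny, well-timed push
        t = 0.0
        while t < 2 * math.pi:     # let it swing one full cycle
            v -= x * dt            # semi-implicit Euler for a = -x
            x += v * dt
            t += dt
    return math.hypot(x, v)       # amplitude of the oscillation

print(round(simulate(0.1, 10), 2))   # ten tiny pushes: amplitude ~1.0
print(round(simulate(1.0, 1), 2))    # one big push:    amplitude ~1.0
```

The two runs end at essentially the same amplitude: the timing lets small forces accumulate, but nothing is created from nothing.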

Resonance can create absolutely amazing phenomena, where forces that really seem like they’re far too small to produce any result do something astonishing. The famous example of this is the Tacoma Narrows bridge collapse, where the wind happened to blow just right to create a resonant vibration which tore the bridge apart.

But there’s no free energy there; no energy being created or destroyed.

So, Stefan… It’s always possible that we’re wrong about how physics works. It’s possible that conservation of energy isn’t a real law. It’s possible that the world might work in a way where conservation of energy just appears to be a law, and in fact, there are ways around it, and that we can use those ways to produce free energy. But people have been trying to do that for a very, very long time. We’ve been able to use our understanding of physics to do amazing things. We can accelerate particles up to nearly the speed of light and slam them together. We can shoot rockets into space. We can put machines and even people on other planets. We can produce energy by breaking atoms into pieces. We can build devices that flip switches billions of times per second, and use them to talk to each other! And we can predict, to within a tiny fraction of a fraction of the breadth of a hair, how much energy it will take to do these things, and how much heat will be produced by doing them.

All of these things rely on a very precise description of how things work. If our understanding were off by the tiniest bit, none of these things could possibly work. So we have really good reasons to believe that our theories are, to a pretty great degree of certainty, accurate descriptions of how reality works. That doesn’t mean that we’re right – but it does mean that we’ve got a whole lot of evidence to support the idea that energy is always conserved.

On the side of the free energy folks: not one person has ever been able to demonstrate a mechanism that produces more energy than was put into it. No one has ever been able to demonstrate any kind of free energy under controlled experimental conditions. No one has been able to produce a theory, consistent with observations of the real world, that describes how such a system could work.

People have been pushing Brown’s gas for decades. But they’ve never, ever, not one single time, been able to actually demonstrate a working generator. No one has ever done it. No one has been able to build a car that actually works using Brown’s gas without a separate power source. No one has built a self-sustaining generator. No one has been able to produce any mathematical description of how Brown’s gas produces energy that is consistent with real-world observations.

So you’ve got two sides to the argument about Brown’s gas. On one side, you’ve got modern physics, which has reams and reams of evidence, precise theories that are confirmed by observation, and unbelievable numbers of inventions that rely on the precision of those theories. On the other side, you’ve got people who’ve never been able to do a demonstration, who can’t describe how things work, who can’t explain why things appear to work the way that they appear, who have never been able to produce a single working invention…

Which side should we believe? Given the current evidence, the answer is obvious.

What happens if you don't understand math? Just replace it with solipsism, and you can get published!

About four years ago, I wrote a post about a crackpot theory by a biologist named Robert Lanza. Lanza is a biologist – a genuine, serious scientist. And his theory got published in a major journal, “The American Scholar”. Nevertheless, it’s total rubbish.

Anyway, the folks over at the Encyclopedia of American Loons just posted an entry about him, so I thought it was worth bringing back this oldie-but-goodie. The original post was inspired by a comment from one of my most astute commenters, Mr. Blake Stacey, where he gave me a link to Lanza’s article.

The article is called “A New Theory of the Universe”, by Robert Lanza, and as I said, it was published in the American Scholar. Lanza’s article is a rotten piece of new-age gibberish, with all of the usual hallmarks: lots of woo, all sorts of babble about how important consciousness is, random nonsensical babblings about quantum physics, and of course, bad math.


Stupid Politician Tricks; aka Averages Unfairly Biased against Moronic Conclusions

In the news lately, there’ve been a few particularly egregious examples of bad math. One that really ticked me off came from Alan Simpson. Simpson is one of the two co-chairs of a presidential commission that was asked to come up with a proposal for how to handle the federal budget deficit.

The proposal his commission produced claimed that social security is one of the big problems in the budget. It really isn’t – it requires extremely creative accounting combined with several blatant lies to make it into part of the budget problem. (At the moment, social security is operating in surplus: it receives more money in taxes each year than it pays out.)

Simpson has claimed that social security must be cut if we’re going to fix the budget deficit. As part of his attempt to defend his proposed cuts, he said the following about social security:

It was never intended as a retirement program. It was set up in ‘37 and ‘38 to take care of people who were in distress — ditch diggers, wage earners — it was to give them 43 percent of the replacement rate of their wages. The life expectancy was 63. That’s why they set retirement age at 65

When I first heard that he’d said that, my immediate reaction was “that miserable fucking liar”. Because there are only two possible interpretations of that statement. Either the guy is a malicious liar, or he’s cosmically stupid and ill-informed. I was willing to accept that he’s a moron, but given that he spent a couple of years on the deficit commission, I couldn’t believe that he didn’t understand anything about how social security works.

I was wrong.

In an interview after that astonishing quote, a reporter pointed out that while the overall life expectancy was 63, the people who actually lived to be 65 had a life expectancy of 79 years. You see, the life expectancy figures are pushed down by people who die young. Especially when you realize that social security started at a time when the people collecting it had grown up without antibiotics, there were a whole lot of people who died very young – which biased the average downwards. Simpson’s response to this?

If you’re telling me that a guy who got to be 65 in 1940 — that all of them lived to be 77 — that is just not correct. Just because a guy gets to be 65, he’s gonna live to be 77? Hell, that’s my genre. That’s not true.

So yeah.. He’s really stupid. Usually, when it comes to politicians, my bias is to assume malice before ignorance. They spend so much of their time repeating lies – lying is pretty much their entire job. But Simpson is an extremely proud, arrogant man. If he had any clue of how unbelievably stupid he sounded, he wouldn’t have said that. He’d have made up some other lie that made him look less stupid. He’s got too much ego to deliberately look like a credulous drooling cretin.

So my conclusion is: he really doesn’t understand that if the overall average life expectancy for a set of people is 63, then the life expectancy of the subset of people who live to be 63 is going to be significantly higher than 63.

Just to hammer home how stupid this is, let’s look at a trivial example: a group of five people, with an average life expectancy of 62 years.

One of them died when he was 12. What does the average age at death of the other four have to be for the overall average life expectancy to be 62 years?

\frac{4x + 12}{5} = 62, \quad x = 74.5

So in this particular group of people with a life expectancy of 62 years, the pool of people who live past the age of 20 has a life expectancy of 74.5 years.
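The same arithmetic is easy to check mechanically. Here’s a minimal Python sketch of the toy cohort above (the ages and the cutoff are just illustrative):

```python
# Toy cohort from the example above: one person dies at 12, the other
# four at 74.5, so the overall life expectancy averages out to 62.
ages = [12, 74.5, 74.5, 74.5, 74.5]

overall = sum(ages) / len(ages)

def life_expectancy_past(ages, cutoff):
    """Average age at death among those who survive past `cutoff`."""
    survivors = [a for a in ages if a > cutoff]
    return sum(survivors) / len(survivors)

print(overall)                         # 62.0
print(life_expectancy_past(ages, 20))  # 74.5
```

Removing the early deaths from the pool can only push the conditional average up – which is exactly the point Simpson couldn’t grasp.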

It doesn’t take much math at all to see how much of a moron Simpson is. It should be completely obvious: some people die young, and the fact that they die young affects the average.

Another way of saying it, which makes it pretty obvious how stupid Simpson is: if you live to be 65, you can be pretty sure that you’ll live to be at least 65, and you’ve got a darn good chance of living to be 66.

It’s incredibly depressing to realize that the report co-signed by this ignorant, moronic jackass is widely accepted by politicians and influential journalists as a credible, honest, informed analysis of the deficit problem and how to solve it. The people who wrote the report are incapable of comprehending the kind of simple arithmetic that’s needed to see how stupid Simpson’s statement was.

Hold on tight: the world ends next Saturday!

(For some idiot reason, I was absolutely certain that today was the 12th. It’s not. It’s the tenth. D’oh. There’s a freakin’ time&date widget on my screen! Thanks to the commenter who pointed this out.)

A bit over a year ago, before the big move to Scientopia, I wrote about a loony named Harold Camping. Camping is the guy behind the uber-christian “Family Radio”. He predicted that the world is going to end on May 21st, 2011. I first heard about this when it got written up in January of 2010 in the San Francisco Chronicle.

And now, we’re less than two weeks away from the end of the world according to Mr. Camping! So I thought hey, it’s my last chance to make sure that I’m one of the damned!

Continue reading

Another Crank comes to visit: The Cognitive Theoretic Model of the Universe

When an author of one of the pieces that I mock shows up, I try to bump them up to the top of the queue. No matter how crackpotty they are, I think that if they’ve gone to the trouble to come and defend their theories, they deserve a modicum of respect, and giving them a fair chance to get people to see their defense is the least I can do.

A couple of years ago, I wrote about the Cognitive Theoretic Model of the Universe. Yesterday, the author of that piece showed up in the comments. It’s a two-year-old post, which was originally written back at ScienceBlogs – so a discussion in the comments there isn’t going to get noticed by anyone. So I’m reposting it here, with some revisions.

Stripped down to its basics, the CTMU is just yet another postmodern “perception defines the universe” idea. Nothing unusual about it on that level. What makes it interesting is that it tries to take a set-theoretic approach to doing it. (Although, to be a tiny bit fair, he claims that he’s not taking a set theoretic approach, but rather demonstrating why a set theoretic approach won’t work. Either way, I’d argue that it’s more of a word-game than a real theory, but whatever…)

The real universe has always been theoretically treated as an object, and specifically as the composite type of object known as a set. But an object or set exists in space and time, and reality does not. Because the real universe by definition contains all that is real, there is no “external reality” (or space, or time) in which it can exist or have been “created”. We can talk about lesser regions of the real universe in such a light, but not about the real universe as a whole. Nor, for identical reasons, can we think of the universe as the sum of its parts, for these parts exist solely within a spacetime manifold identified with the whole and cannot explain the manifold itself. This rules out pluralistic explanations of reality, forcing us to seek an explanation at once monic (because nonpluralistic) and holistic (because the basic conditions for existence are embodied in the manifold, which equals the whole). Obviously, the first step towards such an explanation is to bring monism and holism into coincidence.

Continue reading

E. E. Escultura and the Field Axioms

As you may have noticed, E. E. Escultura has shown up in the comments to this blog. In one comment, he made an interesting (but unsupported) claim, and I thought it was worth promoting up to a proper discussion of its own, rather than letting it rage in the comments of an unrelated post.

What he said was:

You really have no choice friends. The real number system is ill-defined, does not exist, because its field axioms are inconsistent!!!

This is a really bizarre claim. The field axioms are inconsistent?

I’ll run through a quick review, because I know that many/most people don’t have the field axioms memorized. But the field axioms are, basically, an extremely simple set of rules describing the behavior of an algebraic structure. The real numbers are the canonical example of a field, but you can define other fields; for example, the rational numbers form a field; if you allow the values to be a class rather than a set, the surreal numbers form a field.

So: a field is a collection of values F with two operations, “+” and “*”, such that:

  1. Closure: ∀ a, b ∈ F: a + b ∈ F ∧ a * b ∈ F
  2. Associativity: ∀ a, b, c ∈ F: a + (b + c) = (a + b) + c ∧ a * (b * c) = (a * b) * c
  3. Commutativity: ∀ a, b ∈ F: a + b = b + a ∧ a * b = b * a
  4. Identity: there exist distinct elements 0 and 1 in F such that ∀ a ∈ F: a + 0 = a ∧ a * 1 = a
  5. Additive inverses: ∀ a ∈ F, there exists an additive inverse -a ∈ F such that a + -a = 0.
  6. Multiplicative inverses: ∀ a ∈ F where a ≠ 0, there exists a multiplicative inverse a^{-1} ∈ F such that a * a^{-1} = 1.
  7. Distributivity: ∀ a, b, c ∈ F: a * (b+c) = (a*b) + (a*c)

So, our friend Professor Escultura claims that this set of axioms is inconsistent, and that therefore the real numbers are ill-defined. One of the things that makes the field axioms so beautiful is how simple they are. They’re a nice, minimal illustration of how we expect numbers to behave.
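Because the axioms are nothing but simple universal statements, you can verify them by brute force on any finite candidate. Here’s a quick Python sketch checking every axiom over the integers mod 5, which form a field because 5 is prime (this is an illustration of the axioms, nothing more):

```python
# Brute-force check of the field axioms over GF(5): the integers
# {0,1,2,3,4} with addition and multiplication taken mod 5.
P = 5
F = range(P)
add = lambda a, b: (a + b) % P
mul = lambda a, b: (a * b) % P

for a in F:
    for b in F:
        # Commutativity
        assert add(a, b) == add(b, a) and mul(a, b) == mul(b, a)
        for c in F:
            # Associativity and distributivity
            assert add(a, add(b, c)) == add(add(a, b), c)
            assert mul(a, mul(b, c)) == mul(mul(a, b), c)
            assert mul(a, add(b, c)) == add(mul(a, b), mul(a, c))

for a in F:
    # Identities and inverses
    assert add(a, 0) == a and mul(a, 1) == a
    assert any(add(a, x) == 0 for x in F)          # additive inverse
    if a != 0:
        assert any(mul(a, x) == 1 for x in F)      # multiplicative inverse

print("all field axioms hold for GF(5)")
```

If the axioms were genuinely inconsistent, no structure – finite or infinite – could satisfy all of them at once; this one does.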

So, Professor Escultura: to claim that the field axioms are inconsistent is to claim that this set of axioms leads to an inevitable contradiction. So, what exactly about the field axioms is inconsistent? Where’s the contradiction?

Representational Crankery: the New Reals and the Dark Number

There’s one kind of crank that I haven’t really paid much attention to on this blog, and that’s the real number cranks. I’ve touched on real number crankery in my little encounter with John Gabriel, and back in the old 0.999…=1 post, but I’ve never really given them the attention that they deserve.

There are a huge number of people who hate the logical implications of our definition of the real numbers, and who insist that those unpleasant complications mean that our concept of real numbers is based on a faulty definition, or even that the whole concept of real numbers is ill-defined.

This is an underlying theme of a lot of Cantor crankery, but it goes well beyond that. And the basic problem underlies a lot of bad mathematical arguments. The root of this particular problem comes from a confusion between the representation of a number, and that number itself. “\frac{1}{2}” isn’t a number: it’s a notation that we understand refers to the number that you get by dividing one by two.

There’s a similar form of looniness that you get from people who dislike the set-theoretic construction of numbers. In classic set theory, you can construct the set of integers by starting with the empty set, which is used as the representation of 0. Then the set containing the empty set is the value 1 – so 1 is represented as { 0 }. Then 2 is represented as { 1, 0 }; 3 as { 2, 1, 0 }; and so on. (There are several variations of this, but this is the basic idea.) You’ll see arguments from people who dislike this saying things like “This isn’t a construction of the natural numbers, because you can take the intersection of 8 and 3, and set intersection is meaningless on numbers.” The problem with that is the same as the problem with the notational crankery: the set-theoretic construction doesn’t say “the empty set is the value 0”; it says “in a set-theoretic construction, the empty set can be used as a representation of the number 0.”
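The von Neumann construction is simple enough to sketch in a few lines of Python, with frozenset standing in for mathematical sets (this is purely an illustration of the construction):

```python
# Sketch of the von Neumann construction: each natural number n is
# *represented* by the set of all smaller numbers' representations.
# frozenset stands in for "set" so the values are hashable and can nest.
def von_neumann(n):
    """Return the set representing n, i.e. {0, 1, ..., n-1}."""
    s = frozenset()
    for _ in range(n):
        s = s | frozenset([s])   # successor: n+1 = n ∪ {n}
    return s

zero = von_neumann(0)    # the empty set
three = von_neumann(3)

# The "meaningless" operation the cranks complain about: intersecting
# two numbers. On the representations it's perfectly well-defined, and
# for von Neumann naturals it happens to compute min(m, n).
assert von_neumann(8) & von_neumann(3) == von_neumann(3)
print(len(three))   # 3: the representation of n has exactly n elements
```

The intersection result isn’t nonsense – it’s just a fact about the representation, not a claim about what numbers “really are”.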

The particular version of this crankery that I’m going to focus on today is somewhat related to the inverse-19 loonies. If you recall their monument, the plaque talks about how their work was praised by a math professor by the name of Edgar Escultura. Well, it turns out that Escultura himself is a bit of a crank.

The specific manifestation of his crankery is this representational issue. But the root of it is really related to the discomfort that many people feel at some of the conclusions of modern math.

A lot of what we learned about math has turned out to be non-intuitive. There’s Cantor, and Gödel, of course: there are lots of different sizes of infinities; and there are mathematical statements that are neither true nor false. And there are all sorts of related things – for example, the whole idea of undescribable numbers. Undescribable numbers drive people nuts. An undescribable number is a number which has the property that there’s absolutely no way that you can write it down, ever. Not that you can’t write it in, say, base-10 decimals, but that you can’t ever write down anything, in any form, that uniquely describes it. And, it turns out, the vast majority of numbers are undescribable.

This leads to the representational issue. Many people insist that if you can’t represent a number, that number doesn’t really exist: it’s nothing but an artifact of a flawed definition. Therefore, by this argument, those numbers don’t exist; the only reason that we think they do is because the real numbers are ill-defined.

This kind of crackpottery isn’t limited to stupid people. Professor Escultura isn’t a moron – but he is a crackpot. What he’s done is take the representational argument, and run with it. According to him, the only real numbers are numbers that are representable. What he proposes is very nearly a theory of computable numbers – but he tangles it up in the representational issue. And in a fascinatingly ironic turn-around, he takes the artifacts of representational limitations, and insists that they represent real mathematical phenomena – resulting in an ill-defined number theory as a way of correcting what he alleges is an ill-defined number theory.

His system is called the New Real Numbers.

In the New Real Numbers, which he notates as R^*, the decimal notation is fundamental. The set of new real numbers consists exactly of the set of numbers with finite representations in decimal form. This leads to some astonishingly bizarre things. From his paper:

3) Then the inverse operation to multiplication called division; the result of dividing a decimal by another if it exists is called quotient provided the divisor is not zero. Only when the integral part of the devisor is not prime other than 2 or 5 is the quotient well defined. For example, 2/7 is ill defined because the quotient is not a terminating decimal (we interpret a fraction as division).

So 2/7ths is not a new real number: it’s ill-defined. 1/3 isn’t a real number: it’s ill-defined.

4) Since a decimal is determined or well-defined by its digits, nonterminating decimals are ambiguous or ill-defined. Consequently, the notion irrational is ill-defined since we cannot cheeckd all its digits and verify if the digits of a nonterminaing decimal are periodic or nonperiodic.

After that last one, this isn’t too surprising. But it’s still absolutely amazing. The square root of two? Ill-defined: it doesn’t really exist. e? Ill-defined, it doesn’t exist. \pi? Ill-defined, it doesn’t really exist. All of those triangles, circles, everything that depends on e? They’re all bullshit according to Escultura. Because if he can’t write them down on a piece of paper in decimal notation in a finite amount of time, they don’t exist.

Of course, this is entirely too ridiculous, so he backtracks a bit, and defines a non-terminating decimal number. His definition is quite peculiar. I can’t say that I really follow it. I think this may be a language issue – Escultura isn’t a native English speaker. I’m not sure which parts of this are crackpottery, which are linguistic struggles, and which are notational difficulties in reading math rendered as plain text.

5) Consider the sequence of decimals,

(d)^na_1a_2…a_k, n = 1, 2, …, (1)

where d is any of the decimals, 0.1, 0.2, 0.3, …, 0.9, a_1, …, a_k, basic integers (not all 0 simultaneously). We call the nonstandard sequence (1) d-sequence and its nth term nth d-term. For fixed combination of d and the a_j’s, j = 1, …, k, in (1) the nth term is a terminating decimal and as n increases indefinitely it traces the tail digits of some nonterminating decimal and becomes smaller and smaller until we cannot see it anymore and indistinguishable from the tail digits of the other decimals (note that the nth d-term recedes to the right with increasing n by one decimal digit at a time). The sequence (1) is called nonstandard d-sequence since the nth term is not standard g-term; while it has standard limit (in the standard norm) which is 0 it is not a g-limit since it is not a decimal but it exists because it is well-defined by its nonstandard d-sequence. We call its nonstandard g-limit dark number and denote by d. Then we call its norm d-norm (standard distance from 0) which is d > 0. Moreover, while the nth term becomes smaller and smaller with indefinitely increasing n it is greater than 0 no matter how large n is so that if x is a decimal, 0 < d < x.

I think that what he’s trying to say there is that a non-terminating decimal is a sequence of finite representations that approach a limit. So there’s still no real infinite representations – instead, you’ve got an infinite sequence of finite representations, where each finite representation in the sequence can be generated from the previous one. This bit is why I said that this is nearly a theory of the computable numbers. Obviously, undescribable numbers can’t exist in this theory, because you can’t generate this sequence.
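If I’m reading him right, the structure he’s gesturing at is just an ordinary sequence of truncations, each one computable from the last. A hypothetical sketch of that reading, using 1/3 = 0.333… as the example:

```python
# One reading of the "d-sequence" idea: a nonterminating decimal as an
# infinite sequence of finite truncations, where each truncation is
# generated from the previous one by long division.
def truncations(numerator, denominator, digits):
    """Yield the first `digits` decimal truncations of numerator/denominator.

    Assumes 0 < numerator < denominator, so the value is 0.something.
    """
    value = ""
    remainder = numerator
    for _ in range(digits):
        remainder *= 10
        value += str(remainder // denominator)
        remainder %= denominator
        yield "0." + value

for t in truncations(1, 3, 5):
    print(t)   # 0.3, 0.33, 0.333, 0.3333, 0.33333
```

Each finite string is well-defined and computable from its predecessor, which is why this looks so much like a theory of computable numbers; an undescribable number has no such generating rule, so it can’t exist in this framework.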

Where this really goes totally off the rails is that throughout this, he’s working on the assumption that there’s a one-to-one relationship between representations and numbers. That’s what that “dark number” stuff is about. You see, in Escultura’s system, 0.999999… is not equal to one. It’s not a representational artifact. In Escultura’s system, there are no representational artifacts: the representations are the numbers. The “dark number”, which he notates as d^*, is (1-0.99999999…) and is the smallest number greater than 0. And you can generate a complete ordered enumeration of all of the new real numbers, {0, d^*, 2d^*, 3d^*, ..., n-2d^*, n-d^*, n, n+d^*, ...}.

Reading Escultura, every once in a while, you might think he’s joking. For example, he claims to have disproven Fermat’s last theorem. Fermat’s theorem says that for n>2, there are no integer solutions for the equation x^n + y^n = z^n. Escultura says he’s disproven this:

The exact solutions of Fermat’s equation, which are the counterexamples to FLT, are given by the triples (x,y,z) = ((0.99…)10^T,d*,10^T), T = 1, 2, …, that clearly satisfies Fermat’s equation,

x^n + y^n = z^n, (4)

for n = NT > 2. Moreover, for k = 1, 2, …, the triple (kx,ky,kz) also satisfies Fermat’s equation. They are the countably infinite counterexamples to FLT that prove the conjecture false. One counterexample is, of course, sufficient to disprove a conjecture.

Even if you accept the reality of the notational artifact d^*, this makes no sense: the point of Fermat’s last theorem is that there are no integer solutions; d^* is not an integer; (1-d^*)10^T is not an integer. Surely he’s not that stupid. Surely he can’t possibly believe that he’s disproven Fermat using non-integer solutions? I mean, how is this different from just claiming that you can use (2, 3, 35^{1/3}) as a counterexample for n=3?
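To see just how empty that kind of “counterexample” is, here’s a trivial numeric check: take x=2, y=3, and let z be the cube root of 35. The equation holds by construction, but z isn’t remotely an integer:

```python
# A "counterexample" to Fermat built from a non-integer z: the equation
# x^3 + y^3 = z^3 holds by construction, but z is not an integer, so it
# says nothing at all about Fermat's Last Theorem.
x, y = 2, 3
z = (x**3 + y**3) ** (1 / 3)   # cube root of 35, about 3.271

print(x**3 + y**3)             # 35
print(z)                       # not an integer
print(round(z) ** 3)           # 27 -- the nearest integer fails badly
```

Anyone can generate endless triples this way; the entire content of the theorem is that none of them land on integers.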

But… he’s serious. He’s serious enough that he’s actually published a real paper making the claim (albeit in crackpot journals, which are the only places that would accept this rubbish).

Anyway, jumping back for a moment… You can create a theory of numbers around this d^* rubbish. The problem is, it’s not a particularly useful theory. Why? Because it breaks some of the fundamental properties that we expect numbers to have. The real numbers define a structure called a field, and a huge amount of what we really do with numbers is built on the fundamental properties of the field structure. One of the necessary properties of a field is that it has unique identity elements for addition and multiplication. If you don’t have unique identities, then everything collapses.

So… Take \frac{1}{9}. That’s the multiplicative inverse of 9. So, by definition, \frac{1}{9}*9 = 1 – the multiplicative identity.

In Escultura’s theory, \frac{1}{9} is shorthand for the number that has a representation of 0.1111.... So, \frac{1}{9}*9 = 0.1111...*9 = 0.9999... = (1-d^*). So (1-d^*) is also a multiplicative identity. By a similar process, you can show that d^* itself must be the additive identity. So either d^* = 0, or else you’ve lost the field structure, and with it, pretty much all of real number theory.
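You can see the collapse concretely with exact rational arithmetic: 1/9 times 9 is exactly 1, with no “dark number” remainder, and the gap between 1/9 and its finite decimal truncations shrinks below any positive bound rather than bottoming out at some smallest positive d^*. A small Python sketch using the standard fractions module:

```python
from fractions import Fraction

# Exact arithmetic: 1/9 * 9 is exactly the multiplicative identity.
ninth = Fraction(1, 9)
assert ninth * 9 == 1

# The truncations 0.1, 0.11, 0.111, ... all underestimate 1/9, and the
# gap after n digits is exactly 1/(9 * 10**n). It gets arbitrarily
# small, so there is no smallest positive "d*" hiding at the end.
for n in (1, 5, 9):
    truncated = Fraction(10**n // 9, 10**n)   # e.g. 1/10 = 0.1 for n=1
    gap = ninth - truncated
    assert gap == Fraction(1, 9 * 10**n)
    print(n, gap)
```

The representational artifact only looks like a number if you stop the expansion at some finite point; in exact arithmetic there’s nothing there.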