Remember Granville Sewell? He’s the alleged mathematician who wrote the very non-mathematical “A Mathematician’s View of Evolution”, which I fisked [a few weeks ago](http://scienceblogs.com/goodmath/2006/10/second_law_slop_from_granville.php). Well, he’s back with a response to the people who criticized him, called [“Can Anything Happen in an Open System?”](http://www.math.utep.edu/Faculty/sewell/articles/open.pdf)
Did he actually address any of the criticisms in a substantial way? Did he actually say *anything* new?
Of course not. Do these idiots *ever* really address criticisms?
The Cranky Book Meme
Chad, over at [Uncertain Principles](http://scienceblogs.com/principles/2006/10/cranky_book_meme_voted_off_the.php) found an interesting meme, which I thought would be fun to take a stab at:
>What authors have you given up on for good? And why?
Darn good question, that is. I’m often fascinated by comparing an author’s earliest stories/books to their later ones, to see how they changed. And there are definitely a few authors whose work I really enjoyed at one time, but who have deteriorated to the point where I’ll never read them again. I’ll tell you about three of mine – feel free to add your own in the comments.
Manifolds and Glue
So, after the last topology post, we know what a manifold is – it’s a structure where the neighborhoods of points are *locally* homeomorphic to open spheres in some ℜ^n.
We also talked a bit about the idea of *gluing*, which I’ll talk about
more today. Any manifold can be formed by *gluing together* subsets of ℜ^n. But what does *gluing together* mean?
Let’s start with a very common example. The surface of a sphere is a simple manifold. We can build it by gluing together *two* disks (filled-in circles) from ℜ^2 (a plane). We can think of that as taking each disk, and stretching it over a bowl until it’s shaped like a hemisphere. Then we glue the two hemispheres together so that the *boundary* points of the hemispheres overlap.
Now, how can we say that formally?
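As a rough preview of what the formal version looks like – this is just a sketch of the standard construction, using notation I haven’t properly introduced yet – the sphere is the quotient of two disjoint closed disks, with corresponding boundary points identified:

```latex
% A sketch of the standard gluing construction: take two closed disks
% and identify their boundary circles point-by-point.
\[
  D^2 = \{\, x \in \mathbb{R}^2 : \|x\| \le 1 \,\}, \qquad
  \partial D^2 = \{\, x \in \mathbb{R}^2 : \|x\| = 1 \,\}
\]
\[
  S^2 \;\cong\; \left( D^2 \sqcup D^2 \right) / \sim,
  \qquad \text{where } x \text{ in one copy } \sim x \text{ in the other copy whenever } \|x\| = 1.
\]
```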
A Bit About Number Bases
After my binary fingermath stuff, a few people wrote to me to ask about just how binary really works. For someone who does the kinds of crazy stuff that I do, the idea of different number bases is so fundamental that it’s easy to forget that most people really don’t understand the idea of using different bases.
To start off with, I need to explain how our regular number system works. Most people understand how our numbers work, but don’t necessarily understand *why* they’re set up that way.
Our number system is *positional* – that is, what a given digit of a number means is dependent on its position in the number. The positions each represent one power of ten, increasing as you go from right to left, which is why our number system is called base-10. You can do positional numbers using any number as the base – each digit will correspond to a power of the base.
So suppose we have a number like 729328. The first position from the right, containing the digit “8”, is called position zero, because it tells us how many 10^0 = 1s we have. The next position is position 1, or the 10^1 = 10s column, because it says how many tens there are (2). And then we have the digit 3 in position 2, meaning that there are 3 10^2s = 3 hundreds, etc. So the number really means 7×10^5 + 2×10^4 + 9×10^3 + 3×10^2 + 2×10^1 + 8×10^0 = 7×100,000 + 2×10,000 + 9×1,000 + 3×100 + 2×10 + 8×1.

Now let’s look at the same kind of thing in base 3, using the number 102120. That’s 1×3^5 + 0×3^4 + 2×3^3 + 1×3^2 + 2×3^1 + 0×3^0 = 1×243 + 0×81 + 2×27 + 1×9 + 2×3 + 0×1 = 243 + 54 + 9 + 6 = 312 in base 10.
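If you’d like to play with this yourself, here’s a tiny sketch in Python that does the same positional expansion for any base (the name `from_base` is just something I made up for the example):

```python
def from_base(digits, base):
    """Interpret a string of digits in the given base, exactly as in the
    expansions above: each position contributes digit * base**position,
    counting positions from the right, starting at zero."""
    value = 0
    for position, digit_char in enumerate(reversed(digits)):
        digit = int(digit_char)          # handles digits 0-9
        value += digit * base ** position
    return value

print(from_base("729328", 10))   # 729328
print(from_base("102120", 3))    # 312
```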
Normally, to show that we’re using a different base, we’d write the numbers with a subscript showing the base – so we’d normally write the binary for 11 as 1011₂.
To write numbers in bases larger than 10, we have one very small problem: we only have numerals for the digits 0 through 9, but a base like hex needs digits that go from 0 to 15. The way that we work around that is by using letters of the alphabet: A=10, B=11, C=12, D=13, E=14, F=15, and so on for even larger bases. So base-16 uses 0-9 and A-F as digits.
Base-8 (octal) and base-16 (hexadecimal, or hex for short) are special to a lot of computer people. The reason is that 8 and 16 are powers of 2, so each digit in octal or hexadecimal corresponds exactly to a specific number of binary digits: an octal digit is something between 0 and 7, which is exactly 3 binary digits, and a hex digit is something between 0 and 15, which is exactly 4 binary digits. Base-10 doesn’t have this property – a digit in base ten takes *between* three and four binary digits – and that means that you *can’t* divide up a string of binary into parts that correspond to decimal digits without looking at the number. With hexadecimal, every hex digit is 4 binary digits, which is remarkably convenient.
Why would that make a difference? Well, we store data in memory in 8-bit groups called bytes. How big a number can you store in one byte? 255. That’s a strange looking number. But what about in hexadecimal? It’s the largest number you can write in 2 digits: FF₁₆. It gets worse as the numbers get bigger. If you’ve got a computer that numbers its bytes of memory using 4-byte addresses, what’s the largest memory address? In hex, it’s easy to write: FFFFFFFF₁₆. In decimal? 4294967295. The hex numbers are a lot more natural to work with when you’re doing something close to the hardware. In modern programming, we rarely use hex anymore, but a few years ago, it was absolutely ubiquitous. I used to be able to do arithmetic in hex as naturally as I can in decimal. Not anymore; I’ve gotten rusty.
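If you want to see the hex/binary correspondence concretely, here’s a small Python sketch (nothing special – Python’s built-in `bin` and `format` do all the work):

```python
# Each hex digit maps to exactly 4 binary digits, so converting between
# hex and binary is just a matter of regrouping the bits.
byte_max = 0xFF
addr_max = 0xFFFFFFFF

print(byte_max, bin(byte_max))          # 255 0b11111111
print(addr_max, format(addr_max, ','))  # 4294967295 4,294,967,295

# Group the binary form of a hex number into 4-bit chunks:
bits = format(0x2F7C, '016b')           # '0010111101111100'
print([bits[i:i+4] for i in range(0, len(bits), 4)])
# ['0010', '1111', '0111', '1100']  ->  the hex digits 2, F, 7, C
```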
Among computer geeks, there’s an old argument that number systems based on powers of two are more natural than others, but I’m not going to get into *that* basket of worms. If you’re interested, take a look at the [Intuitor Hex Headquarters](http://www.intuitor.com/hex/).
Who needs a calculator? Multiplying with Your Fingers
To do multiplication with your fingers in binary is very easy: it’s just a mixture of addition and bit-shifting. The only real trick is memory: to multiply a×b, you need to remember the binary digits of both a and b, which can be a challenge for 10-digit binary numbers.
The trick that I like is to use coins. Lay out a bunch of coins: one for each binary digit of a, and one for each binary digit of b. Draw two lines on a piece of paper: one line is for the ones, and the other is for the zeros. So, for example, to multiply 47 times 24, you’d lay out a row of six coins for 47 (101111 in binary) and a row of five coins for 24 (11000 in binary), placing each coin on the ones line or the zeros line according to its digit.
The coins on the paper are your guide, to help you remember the two numbers you’re multiplying. Now for the basic algorithm. You start with zero on your fingers; then, starting with the *leftmost* (highest) digit of *b*:
1. Take the current sum on your hands, and multiply it by two. Multiplying by
two is just a simple shift operation – shift each digit up to the next highest
position.
2. If the current digit of *b* is “1”, then add *a* to what’s on your fingers.
3. Move to the next digit of *b*, and repeat until you’ve used all of its digits.
So, for example, to multiply 47 (101111) by 24 (11000): start at zero. The first two digits of 24 are ones, so you shift and add 47 twice, which leaves 141 on your fingers; the last three digits are zeros, so you just shift three more times, doubling 141 to 282, then 564, and finally 1128 – which is 47×24.
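If it helps to see the whole loop spelled out, here’s a minimal sketch of the same shift-and-add procedure in Python (the name `finger_multiply` is just mine for the example):

```python
def finger_multiply(a, b):
    """Shift-and-add multiplication, exactly as in the finger method:
    walk the binary digits of b from the leftmost one down, doubling the
    running total at each step and adding a whenever the digit is 1."""
    total = 0
    for digit in format(b, 'b'):     # binary digits of b, leftmost first
        total *= 2                   # step 1: shift everything up one position
        if digit == '1':             # step 2: add a when the digit is 1
            total += a
    return total                     # step 3 is just the loop moving on

print(finger_multiply(47, 24))       # 1128
```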
That’s all there is to it. It’s really easy; once you’ve
gotten used to doing binary addition on your fingers, moving
to multiplication this way is very straightforward and
mechanical.
For the particularly clever folks out there, you’ll notice that this is
pretty much the same algorithm that we used for [multiplying roman numerals](http://scienceblogs.com/goodmath/2006/08/roman_numerals_and_arithmetic.php).
Following Up on the Lancet Study
As expected, the Lancet study on civilian deaths in Iraq has created a firestorm on the net. What frankly astounds me is how utterly *dreadful* most of the critiques of the study have been.
My own favorite for sheer chutzpah is [Omar Fadil](http://politicscentral.com/2006/10/11/jaccuse_iraq_the_model_respond.php):
>I wonder if that research team was willing to go to North Korea or Libya and I
>think they wouldn’t have the guts to dare ask Saddam to let them in and investigate
>deaths under his regime.
>No, they would’ve shit their pants the moment they set foot in Iraq and they would
>find themselves surrounded by the Mukhabarat men counting their breaths. However,
>maybe they would have the chance to receive a gift from the tyrant in exchange for
>painting a rosy picture about his rule.
>
>They shamelessly made an auction of our blood, and it didn’t make a difference if
>the blood was shed by a bomb or a bullet or a heart attack because the bigger the
>count the more useful it becomes to attack this or that policy in a political race
>and the more useful it becomes in cheerleading for murderous tyrannical regimes.
>
>When the statistics announced by hospitals and military here, or even by the UN,
>did not satisfy their lust for more deaths, they resorted to mathematics to get a
>fake number that satisfies their sadistic urges.
You see, going door to door in the middle of a war zone where people
are being murdered at a horrifying rate – that’s just the *peak* of cowardice! And wanting to know how many people have died in a war – that’s clearly nothing but pure bloodthirst – those horrible anti-war people just *love* the blood.
And the math is all just a lie. Never mind that it’s valid statistical mathematics. Never mind that it’s a valid and well-proven methodology. Don’t even waste time actually *looking* at the data, or the methodology, or the math. Because people like Omar *know* the truth. They don’t need to do any analysis. They *know*. And anyone who actually risks their neck on the ground gathering real data – they’re just a bunch of sadistic liars who resort to math as a means of lying.
That’s typical of the responses to the study. People who don’t like the result are simply asserting that it *can’t* be right, they *know* it can’t be right. No argument, no analysis, just blind assertions, ranging from emotional beliefs that
[the conclusions *must* be wrong](http://www.abc.net.au/worldtoday/content/2006/s1763454.htm) to
[accusations that the study is fake](http://rightwingnuthouse.com/archives/2006/10/11/a-most-ghoulish-debate/), to [claims that the entire concept of statistical analysis is clearly garbage](http://timblair.net/ee/index.php/weblog/please_consider/).
The Lancet study is far from perfect. And there *are* people who have
come forward with [legitimate questions and criticisms](http://scienceblogs.com/authority/2006/10/the_iraq_study_-_how_good_is_i.php) about it. But that’s not the response that we’ve seen from the right-wing media and blogosphere today. All we’ve seen is blind, pig-ignorant bullshit – a bunch of innumerate jackasses screaming at the top of their lungs: “**IT’S WRONG BECAUSE WE SAY IT’S WRONG AND IF YOU DISAGREE YOU’RE A TRAITOR!**”
The conclusion that I draw from all of this? The study *is* correct. No one, no matter how they try, has been able to show any *real* problem with the methodology, the data, or the analysis that indicates that the estimates are invalid. When they start throwing out all of statistical mathematics because they don’t like the conclusion of a single study, you know that they can’t find a problem with the study.
The Iraqi Death Tally Study
I’ve gotten a lot of mail from people asking my opinion about [the study published today in the Lancet][lancet] about estimating the Iraqi death toll since the US invasion.
So far, I’ve only had a chance to skim the paper. But from what I can see about it, the methodology is sound. They did as careful an analysis as possible under the circumstances, and they’re very open about the limitations of their approach. (For example, they admit that there were methodological changes compared to earlier studies to reduce the risk to members of the survey team; and there were several
data collection errors leading to invalid or incomplete data which was then excluded from the analysis.)
My guess would be that this study is a pretty solid *upper* bound on the death toll of the war. Population-sampling techniques like this do tend to produce larger numbers than other analyses; but over the long term, even though they tend to over-estimate, those higher numbers have tended to be quite a bit *closer* to the truth than the lower numbers generated by other techniques.
When I compare this to what the US government has been trying to feed us, I find that I trust these results much more: this study is open and honest, tells us exactly how they gathered and analyzed the data, and is honest and forthcoming about its limitations and flaws. In comparison, the official US estimates are just black-box numbers – our government has refused to provide *any* information on how their casualty estimates were produced.
Faced with that contrast, and the history of casualty recording and analysis in past wars and natural disasters, I’m strongly inclined to believe that while we will probably *never* know the real number of people who’ve died as a result of our invasion of Iraq, the Lancet study’s estimate of roughly 600,000 deaths as of today is *far* closer to the truth than the US government estimate of 30,000 as of last December.
Believe me, nothing would make me happier than being wrong about this. I really don’t want to believe that my country is responsible for a death toll that makes a homicidal maniac like Saddam Hussein look like a pansy… But facts are what they are, and the math argues that this mind-boggling death toll is most likely all too real.
[lancet]: http://www.thelancet.com/webfiles/images/journals/lancet/s0140673606694919.pdf
Even More Pathetic Statistics from HIV/AIDS Denialists
While looking at the sitemeter referrals to GM/BM, I noticed a link
from “New Aids Review”, a denialist website that I mentioned in
[my critique of Duesberg.](http://scienceblogs.com/goodmath/2006/09/pathetic_statistics_from_hivai.php)
The folks at NAR are continuing to pull bad math stunts, and I couldn’t resist
returning to the subject to show how stubbornly boneheaded people can be,
and how obviously bad math can just slip by without most people blinking an eye.
Binary Fingermath
There is another way of doing math on your fingers, which gives you a much greater range of numbers, and which makes multiplication particularly easy. It’s a bit more work to get used to than the finger abacus, but it has far fewer limitations. Someone in the comments of the finger-abacus post mentioned that they do something similar.
The methods for binary fingermath that I’ll describe are my own creation; so if you think they’re ridiculous, the blame is entirely mine. I know other people have come up with similar things, but this is my own personal variant.
Navier Stokes: False Alarm
There’s bad news on the math front. Penny Smith has *withdrawn* her Navier Stokes paper, because of the discovery of a serious error.
But to be optimistic for a moment, this doesn’t mean that there’s nothing there. Remember that when Andrew Wiles first showed his proof of Fermat’s Last Theorem, a very serious error was discovered in it. After that, it took him a couple of years, and some help from a colleague, but he *did* eventually fix the problem and complete the proof.
Whatever develops, it remains true that Professor Smith has made *huge* strides in her work on Navier-Stokes, and if she hasn’t found the solution yet, she has at least helped pave the road to it. Here’s hoping that she finishes it!
Good luck, Professor Smith! We’re pulling for you.