Leading up to Topoi: Getting Back to Categories

As I mentioned a few posts ago, I recently changed jobs. I left Google, and I’m now working for foursquare. Now that I’m done with the whole job-hunting thing, and things are settling down and becoming relatively sane again, I’m trying to get back to blogging.

One thing that I’ve been wanting to spend some time learning about is Topoi theory. Topoi theory is an approach to building logic and mathematics using category theory as a fundamental basis instead of set theory. I’ve been reading the textbook Topoi: The Categorial Analysis of Logic (Dover Books on Mathematics), and I’ll be blogging my way through it. But before I get started in that, I thought it would be a good idea to revise and rerun my old posts on category theory.

To get started, what is category theory?

Back in grad school, I spent some time working with a thoroughly insane guy named John Case who was the new department chair. When he came to the university, he brought a couple of people with him, to take temporary positions. One of them was a category theorist whose name I have unfortunately forgotten. That was the first I’d ever heard of cat theory. So I asked John what the heck this category theory stuff was. His response was “abstract nonsense”. I was astonished; a guy as wacky and out of touch with reality as John called something abstract nonsense? It turned out to be a classic quotation, attributed to one of the founders of category theory, Norman Steenrod. It’s silly and sarcastic, but it’s also not an entirely bad description. Category theory takes abstraction to an extreme level.

Category theory is one of those fields of mathematics that fascinates me: you take some fundamental concept and abstract it down to its bare essentials in order to understand just what it really is, what it really means. Just as group theory takes the idea of an algebraic operation, strips it down to the bare minimum, and discovers the meaning of symmetry, category theory takes the concept of a function as a mapping from one thing to another, strips it down to the bare minimum, and sees what you can discover.

The fundamental thing in category theory is an arrow, also called a morphism. A morphism is an abstraction of the concept of a homomorphism, which I talked about a bit when I was writing about group theory. Category theory takes the concept of a function mapping from one set of values to another, and strips it down to its barest essentials: the basic concept of something that maps from one thing to some other thing.

The obvious starting point for our exploration of category theory is: what the heck is a category?

To be formal, a category C is a tuple (O, M, ∘), where:

  1. O (or Obj(C)) is a set of objects. Objects can be anything, so long as they’re distinct, and we can tell them apart. All that we’re going to do is talk about mappings between them – so as long as we can identify them, it doesn’t matter what they really are. We’ll look at categories of sets, of numbers, of topological spaces, and even categories of categories.

  2. M (or Mor(C)) is a set of morphisms, also called arrows. Each morphism is a mapping from an object in O called its source, to an object in O called its target. Given two objects a and b in O, we’ll write Mor(a,b) for the set of morphisms from a to b. To talk about a specific morphism f from a to b, we’ll write it as f : a → b.
  3. ∘ is the composition operator: a binary operation that is the abstraction of function composition. Given an arrow f ∈ Mor(a,b) and an arrow g ∈ Mor(b,c), g ∘ f ∈ Mor(a,c). It’s got the basic properties of function composition:

    1. Associativity: ∀ f : a → b, g : b → c, h : c → d: h ∘ (g ∘ f) = (h ∘ g) ∘ f.
    2. Identity: ∀ a, b ∈ O(C): ∃ 1_a, 1_b ∈ Mor(C): ∀ f : a → b: 1_b ∘ f = f = f ∘ 1_a.
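To make that definition concrete, here’s a minimal sketch (my own illustration, in Python – not from the text): objects are labels, morphisms are named arrows with a source and a target, and composition is given by an explicit table.

```python
# A tiny finite category: objects are labels, morphisms are names with a
# (source, target) pair, and composition is a lookup table.

class Category:
    def __init__(self, objects, morphisms, compose_table):
        self.objects = set(objects)
        self.morphisms = dict(morphisms)        # name -> (source, target)
        self.compose_table = dict(compose_table)  # (g, f) -> g∘f

    def compose(self, g, f):
        # g ∘ f is defined only when the target of f is the source of g
        assert self.morphisms[f][1] == self.morphisms[g][0], "arrows don't line up"
        return self.compose_table[(g, f)]

    def identity(self, obj):
        # by convention, the identity arrow on object x is named "1_x"
        return f"1_{obj}"

# The smallest interesting category: two objects and one non-identity arrow.
C = Category(
    objects=["a", "b"],
    morphisms={"1_a": ("a", "a"), "1_b": ("b", "b"), "f": ("a", "b")},
    compose_table={
        ("1_a", "1_a"): "1_a", ("1_b", "1_b"): "1_b",
        ("1_b", "f"): "f",   # identity law: 1_b ∘ f = f
        ("f", "1_a"): "f",   # identity law: f ∘ 1_a = f
    },
)
print(C.compose("1_b", "f"))  # f
print(C.compose("f", "1_a"))  # f
```

The identity law from the definition is visible directly in the composition table: composing any arrow with an identity gives the arrow back.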

One neat little trick to simplify things is that we can actually throw away Obj(C), and replace it with the set of identity morphisms: since there is exactly one identity morphism per object, there’s no real need to distinguish between the identity morphism and the object. It’s a nice trick because it means that we have nothing but morphisms all the way down; but it’s really just a trick. Whether we call it Obj(C) or Id(C), we still need to be able to talk about the objects in some way, whether they’re just a subset of the morphisms, or something distinct.

Now, we get to something about category theory that I really don’t like. Category theory is front-loaded with rather a lot of definitions about different kinds of morphisms and different kinds of categories. The problem with that is that these definitions are very important, but we don’t have enough of the theory under our belts to be able to get much of a sense for why we should care about them, or what their intuitive meaning is. But that’s the way it goes sometimes; you have to start somewhere. It will make sense eventually, and you’ll see why these definitions matter.

There are a lot of special types of morphisms, defined by properties. Here’s the basic list:

  • A monomorphism (aka a monic arrow) is an arrow f : a → b such that ∀ g_1, g_2 : x → a: f ∘ g_1 = f ∘ g_2 ⇔ g_1 = g_2. That is, f is monic if and only if, when composed with other arrows, it always produces different results for different arrows.
  • An epimorphism (aka an epic arrow) is an arrow f : a → b such that ∀ g_1, g_2 : b → x: g_1 ∘ f = g_2 ∘ f ⇔ g_1 = g_2. This is almost the same as a monic, but from the other side of the composition: instead of f ∘ g_i in the definition, it’s g_i ∘ f; so an arrow is epic if, when other arrows are composed with it, it always produces different results for different arrows.
  • An isomorphism is an arrow f : a → b such that ∃ g : b → a: f ∘ g = 1_b ∧ g ∘ f = 1_a. An isomorphism is, basically, a reversible arrow: there’s a morphism that reverses the action of an iso arrow.
  • An endomorphism is an arrow f : a → b where a = b. It’s sort of like a weak identity arrow.
  • An automorphism is an arrow that is both an endomorphism and an isomorphism.
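To ground the first two definitions, here’s a brute-force check (my own illustration; `is_monic` and `all_functions` are names I made up) that in the category of finite sets, an injective function behaves as a monic arrow while a non-injective one does not.

```python
from itertools import product

def all_functions(dom, cod):
    # every function dom -> cod, represented as a dict
    return [dict(zip(dom, images)) for images in product(cod, repeat=len(dom))]

def is_monic(f, dom_f, cod_f, test_dom):
    # f is monic iff composing with it never collapses distinct arrows:
    # f∘g1 = f∘g2 must imply g1 = g2, for all g1, g2 : test_dom -> dom_f
    for g1 in all_functions(test_dom, dom_f):
        for g2 in all_functions(test_dom, dom_f):
            fg1 = {x: f[g1[x]] for x in test_dom}
            fg2 = {x: f[g2[x]] for x in test_dom}
            if fg1 == fg2 and g1 != g2:
                return False
    return True

f_inj = {0: "p", 1: "q"}  # injective: 0 and 1 map to different places
f_col = {0: "p", 1: "p"}  # not injective: collapses 0 and 1
print(is_monic(f_inj, [0, 1], ["p", "q", "r"], ["x"]))  # True
print(is_monic(f_col, [0, 1], ["p", "q", "r"], ["x"]))  # False
```

The dual check for epic arrows would enumerate the g_i on the other side of the composition; in finite sets, that singles out the surjective functions.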

One last definition, just because it gives me a chance to point out something useful about category theory. A functor is a morphism in the category of all categories. What that means is that it’s a structure-preserving mapping between categories. It’s neat in a theoretical way, because it demonstrates that we’re already at a point where we’re seeing how category theory can make it easier to talk about something complicated: we’re using it to describe itself! But the concept of functor also has a lot of applications; in particular, the module system of my favorite programming language makes extensive use of functors.

In OCaml, a module is something called a structure, which is a set of definitions with constrained types. One of the things you often want to be able to do is write a piece of code in a way that makes it parametric on some other structure. The way you do that is to write a functor: a “function” from a structure to a structure. For example, to implement a generic binary tree, you need a type of values that you’ll put in the tree, and an operation to compare values. So you write a functor which takes a structure defining a type and a comparison operator, and maps it to a structure which is an implementation of a binary tree for that type and comparison.

The OCaml functor is a category-theoretic functor: category theory provides an easy way to talk about the concept of a compile-time “function” from structure to structure.
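Python has no direct analogue of OCaml’s module functors, but the same idea can be sketched (this is my own hypothetical stand-in, not OCaml) as a function that maps a comparison operator to a specialized tree class:

```python
# make_tree plays the role of the functor: it takes the "structure"
# (here just a compare function) and returns a binary-search-tree
# implementation specialized to it.

def make_tree(compare):
    class Tree:
        def __init__(self):
            self.root = None  # nodes are [value, left, right] lists

        def insert(self, value):
            def go(node):
                if node is None:
                    return [value, None, None]
                if compare(value, node[0]) < 0:
                    node[1] = go(node[1])
                else:
                    node[2] = go(node[2])
                return node
            self.root = go(self.root)

        def to_sorted_list(self):
            # in-order traversal yields values in compare order
            def walk(node):
                if node is None:
                    return []
                return walk(node[1]) + [node[0]] + walk(node[2])
            return walk(self.root)
    return Tree

IntTree = make_tree(lambda a, b: a - b)  # "apply the functor" to integers
t = IntTree()
for n in [3, 1, 2]:
    t.insert(n)
print(t.to_sorted_list())  # [1, 2, 3]
```

Applying `make_tree` to a different comparison "structure" yields a different tree implementation, which is exactly the structure-to-structure mapping the OCaml functor expresses at compile time.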

A Taste of Specification with Alloy

In my last post (which was, alas, a stupidly long time ago!), I talked a bit about software specification, and promised to talk about my favorite dedicated specification tool, Alloy. Alloy is a very cool system, designed at MIT by Daniel Jackson and his students.

Alloy is a language for specification, along with an environment which allows you to test your specifications. In a lot of ways, it looks like a programming language – but it’s not. You can’t write programs in Alloy. What you can do is write concise, clear, and specific descriptions of how something else works.

I’m not going to try to really teach you Alloy. All that I’m going to do is give you a quick walk-through, to try to show you why it’s worth the trouble of learning. If you want to learn it, the Alloy group’s website has a really good official Alloy tutorial, which you should walk through.


The Value of Tests: It's more than just testing!

Since I have some free time, I’ve been catching up on some of the stuff I’ve been meaning to read. I’ve got a reading list of things written by other authors with my publisher. Yesterday, I started looking at Cucumber, which is an interesting behavior-driven development tool. This post isn’t really about Cucumber, but about something that Cucumber reminded me of.

When a competent programmer builds software, they write tests. That’s just a given. But why do we do it? The answer seems obvious: to make sure that our software works. But I’d argue that there’s another reason, which in the long run is just as important as the functional one: to describe what the software does. A well-written test doesn’t just make sure that the software does the right thing – it tells other programmers what the code is supposed to do.

A test is an executable specification. Specifications are a really good thing; executable specifications are even better.
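As a small illustration of a test acting as an executable specification (my own toy example – `slugify` is a hypothetical function, not something from the post):

```python
import re

def slugify(title):
    # lowercases, and replaces runs of non-alphanumerics with single hyphens
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

def test_slugify_spec():
    # Each assertion is a sentence of the spec, readable by other programmers:
    assert slugify("Hello, World!") == "hello-world"  # punctuation becomes hyphens
    assert slugify("  spaces  ") == "spaces"          # no leading/trailing hyphens
    assert slugify("Already-Good") == "already-good"  # lowercased, hyphens kept

test_slugify_spec()
print("spec holds")
```

Someone who has never seen `slugify` can read the test and know its contract, and unlike a prose spec, this one fails loudly the moment the code drifts from it.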


Weekend Recipe: Kinda-Sorta Ratatouille

This is a really fun recipe. I’m calling it a sort-of Ratatouille, because that’s the closest thing that I’ve seen to it.

The way this came about is amusing. My wife and I watch a lot of cooking shows. There’s one on the food network that I really like called “Chopped”. The show is a bit gimmicky, but the basic idea is that it’s a cooking competition. They bring in a group of really good chefs, and then give them some kind of surprise ingredients that they need to use to cook dishes. Then they get judged on the quality of what they made. I enjoy it because you really get to see something about how the chefs think when they create a dish.

When we watch, I frequently tell her what I would do with the ingredients. So this morning, she decided it would be fun to do the basic Chopped thing: she’d go to the local farmers market, grab some nice ingredients, and make me do something with them.

She came home with a wonderful local chevre (fresh goat cheese; in fact, this was the best fresh chevre I’ve ever had), a couple of Japanese eggplants, some sweet corn, and a sweet duck and dried cherry sausage. This dish is what I did with them.

Ingredients

  • 1 pound duck sausage. You could use any really good quality sweet sausage.
  • 2 Japanese eggplants, sliced into disks about 1/2 inch thick.
  • 2 ears of fresh corn, cut off of the cob.
  • 1 sweet onion, diced.
  • 3 cloves garlic, finely minced.
  • 1/2 teaspoon fresh thyme.
  • 1 cup white wine.
  • 1/4 pound crumbled chevre.
  • A bunch of diced tomatoes. I used some nice fresh local grape tomatoes.

Instructions

  1. First, poach the sausage in wine. Cook it until it’s just firm. The idea here is that we want to cut the sausage into chunks, and we don’t want those chunks to fall apart. So we’re cooking it just enough to make it firm enough to dice.
  2. Remove the sausage from the poaching liquid. Don’t throw away the liquid – that stuff has a lot of flavor in it!
  3. Put the garlic into a food processor with a bunch of olive oil, and puree it. Then pour that over the eggplant, and let it marinate for a while.
  4. Preheat the oven to 450 degrees. When it’s hot, put the eggplant on a baking sheet, salt it, and roast it until it’s cooked through.
  5. Dice the sausage into largish cubes, and then brown those in some olive oil. Set them aside, covered to stay warm.
  6. In the same pan where you cooked the sausage, add some olive oil, and throw in the onions. Let them cook until they get nice and soft. Then add the corn and tomatoes.
  7. While the corn is cooking, add salt, pepper, and thyme to taste. If it looks a bit dry, add in some of the poaching liquid. Cook it until the corn is cooked however you like it.
  8. Lay the roasted eggplant slices onto the plates. Cover them with the corn and tomato mixture. On top of that, spoon some of the browned sausage. Finally, crumble some of the chevre on top.

I served it with fresh croutons – slices of good French baguette, brushed with olive oil, toasted, and then rubbed with a clove of garlic while they were still hot.

It worked really well. The sausage was a bit on the sweet side because of the cherries, but that and the sweet corn were beautifully balanced by the tartness of the chevre. All of the ingredients were so beautiful, and cooked this way, each of their flavors and textures came together amazingly well. It was one of the best dishes I’ve ever created! So we’ll definitely be doing the chopped thing at home again!

Stuff Everyone Should Do (part 2): Coding Standards

Another thing that we did at Google that I thought was surprisingly effective and useful was strict coding standards.

Before my time at Google, I was sure that coding standards were pointless. I had absolutely no doubt that they were the kind of thing that petty bureaucrats waste time writing and then use to hassle people who are actually productive.

I was seriously wrong.

At Google, I could look at any piece of code, anywhere in Google’s codebase, and I could read it. The fact that I was allowed to do that was pretty unusual in itself. But what was surprising to me was just how much the standardization of style – indents, names, file structures, and comment conventions – made it dramatically easier to look at a piece of unfamiliar code and understand it. This is still surprising to me – because those are all trivial things. They shouldn’t have much impact – but they do. It’s absolutely shocking to realize how much of the time you spend reading code is just looking for the basic syntactic structure!

There’s a suite of common objections to this, all of which I used to believe.

It wastes time!
I’m a good coder, and I don’t want to waste time on stupidity. I’m good enough that when I write code, it’s clear and easy to understand. Why should I waste my time on some stupid standard? The answer is: because there is a value in uniformity. As I alluded to earlier – the fact that every piece of code that you look at — whether it was written by you, by one of your closest coworkers, or by someone 11 timezones away — will always demarcate structures in the same way, will always use the same naming conventions – it really, genuinely makes a big difference. You need so much less effort to read code that you haven’t looked at in a while (or at all), because you can immediately recognize the structure.
I’m an artist!
This is phrased facetiously, but it does reflect a common complaint. We programmers take a lot of pride in our personal style. The code that I write really does reflect something about me and how my mind works. It’s a reflection of my skill and my creativity. If I’m forced into some stupid standard, it seems like it’s stifling my creativity. The thing is, the important parts of your style, the important reflections of your mind and your creativity, aren’t in trivial syntactic things. (If they are, then you’re a pretty crappy programmer.) The standard actually makes it easier for other people to see your creativity – because they can actually see what you’re doing, without being distracted by unfamiliar syntactic quirks.
One size fits all actually fits none!
If you have a coding standard that wasn’t designed specifically for your project, then it’s probably non-optimal for your project. That’s fine. Again, it’s just syntax: non-optimal doesn’t mean bad. The fact that it’s not ideal for your project doesn’t mean that it’s not worth doing. Yeah, sure, it does reduce the magnitude of the benefit for your project, but at the same time, it increases the magnitude of the benefit for the larger organization. In addition, it frequently makes sense to have project-specific code styles. There’s nothing wrong with having a project-specific coding standard. In fact, in my experience, the best thing is to have a very general coding standard for the larger organization, and then project-specific extensions of that for the project-specific idioms and structures.
I’m too good for that!
This is actually the most common objection. It’s sort-of a combination of the others, but it gets at an underlying attitude in a direct way. This is the belief on the part of the complainer that they’re a better programmer than whoever wrote the standard, and lowering themselves to following the standard written by the inferior author will reduce the quality of the code. This is, to put it mildly, bullshit. It’s pure arrogance, and it’s ridiculous. The fact of the matter is that no one is so good that any change to their coding style will damage the code. If you can’t write good code to any reasonable coding standard, you’re a crappy programmer.

When you’re coding against a standard, there are inevitably going to be places where you disagree with the standard. There will be places where your personal style is better than the standard. But that doesn’t matter. There will, probably, also be places where the standard is better than your style. But that doesn’t matter either. As long as the standard isn’t totally ridiculous, the comprehension benefits are significant enough to more than compensate for that.

But what if the coding standard is totally ridiculous?

Well, then, it’s rough to be you: you’re screwed. But that’s not really because of the ridiculous coding standard. It’s because you’re working for idiots. Screwing up a coding standard enough to really prevent good programmers from writing good code is hard. It requires a sort of dedicated, hard-headed stupidity. If you’re working for people who are such idiots that they’d impose a broken coding standard, then they’re going to do plenty of other stupid stuff, too. If you’re working for idiots, you’re pretty much screwed no matter what you do, coding standard or no. (And I don’t mean to suggest that software businesses run by idiots are rare; it’s an unfortunate fact, but there’s no shortage of idiots, and there are plenty of them that have their own businesses.)

Things Everyone Should Do: Code Review

As I alluded to in my last post (which I will be correcting shortly), I no longer work for Google. I still haven’t decided quite where I’m going to wind up – I’ve got a couple of excellent offers to choose between. But in the interim, since I’m not technically employed by anyone, I thought I’d do a bit of writing about some professional things that are interesting, but that might have caused tension with coworkers or management.

Google is a really cool company. And they’ve done some really amazing things – both outside the company, where users can see it, and inside the company. There are a couple of things about the inside that aren’t confidential, but which also haven’t been discussed all that widely on the outside. That’s what I want to talk about.

The biggest thing that makes Google’s code so good is simple: code review. That’s not specific to Google – it’s widely recognized as a good idea, and a lot of people do it. But I’ve never seen another large company where it was such a universal. At Google, no code, for any product, for any project, gets checked in until it gets a positive review.

Everyone should do this. And I don’t just mean informally: this should really be a universal rule of serious software development. Not just product code – everything. It’s not that much work, and it makes a huge difference.

What do you get out of code review?

There’s the obvious: having a second set of eyes look over code before it gets checked in catches bugs. This is the most widely cited, widely recognized benefit of code review. But in my experience, it’s the least valuable one. People do find bugs in code review. But the overwhelming majority of bugs that are caught in code review are, frankly, trivial bugs which would have taken the author a couple of minutes to find. The bugs that actually take time to find don’t get caught in review.

The biggest advantage of code review is purely social. If you’re programming and you know that your coworkers are going to look at your code, you program differently. You’ll write code that’s neater, better documented, and better organized — because you’ll know that people whose opinions you care about will be looking at your code. Without review, you know that people will look at code eventually. But because it’s not immediate, it doesn’t have the same sense of urgency, and it doesn’t have the same feeling of personal judgement.

There’s one more big benefit. Code reviews spread knowledge. In a lot of development groups, each person has a core component that they’re responsible for, and each person is very focused on their own component. As long as their coworkers’ components don’t break their code, they don’t look at them. The effect of this is that for each component, only one person has any familiarity with the code. If that person takes time off or – god forbid – leaves the company, no one knows anything about it. With code review, you have at least two people who are familiar with the code – the author, and the reviewer. The reviewer doesn’t know as much about the code as the author – but they’re familiar with the design and the structure of it, which is incredibly valuable.

Of course, nothing is ever completely simple. From my experience, it takes some time before you get good at reviewing code. There are some pitfalls that I’ve seen that cause a lot of trouble – and since they come up particularly frequently among inexperienced reviewers, they give people trying code review a bad experience, and so become a major barrier to adopting code review as a practice.

The biggest rule is that the point of code review is to find problems in code before it gets committed – what you’re looking for is correctness. The most common mistake in code review – the mistake that everyone makes when they’re new to it – is judging code by whether it’s what the reviewer would have written.

Given a problem, there are usually a dozen different ways to solve it. And given a solution, there are a million ways to render it as code. As a reviewer, your job isn’t to make sure that the code is what you would have written – because it won’t be. Your job as a reviewer of a piece of code is to make sure that the code as written by its author is correct. When this rule gets broken, you end up with hard feelings and frustration all around – which isn’t a good thing.

The thing is, this is such a thoroughly natural mistake to make. If you’re a programmer, when you look at a problem, you can see a solution – and you think of what you’ve seen as the solution. But it isn’t – and to be a good reviewer, you need to get that.

The second major pitfall of review is that people feel obligated to say something. You know that the author spent a lot of time and effort working on the code – shouldn’t you say something?

No, you shouldn’t.

There is never anything wrong with just saying “Yup, looks good”. If you constantly go hunting for something to criticize, then all that you accomplish is to wreck your own credibility. When you repeatedly make up things to criticize just to have something to say, the people whose code you review will learn that you’re just saying it to fill the silence. Your comments won’t be taken seriously.

Third is speed. You shouldn’t rush through a code review – but you also need to do it promptly. Your coworkers are waiting for you. If you and your coworkers aren’t willing to take the time to get reviews done, and done quickly, then people are going to get frustrated, and code review is just going to cause frustration. It may seem like an interruption to drop things to do a review. It shouldn’t be. You don’t need to drop everything the moment someone asks you to do a review. But within a couple of hours, you will take a break from what you’re doing – to get a drink, to go to the bathroom, to take a walk. When you get back from that, you can do the review and get it done. If you do, then no one will ever be left hanging for a long time waiting on you.

Topoi Prerequisites: an Intro to Presheaves

I’m in the process of changing jobs. As a result of that, I’ve actually got some time between leaving the old, and starting the new. So I’ve been trying to look into Topoi. Topoi are, basically, an alternative formulation of mathematical logic. In most common presentations of logic, set theory is used as the underlying mathematical basis – set theory and a mathematical logic built alongside it provide a complete foundational structure for mathematics.

Topos theory is a different approach. Instead of starting with set theory and a logic with set-theoretic semantics, it starts with categories. (I’ve done a bunch of writing about categories before: see the archives for my category theory posts.)

Reading about topoi is rough going; the references I’ve found so far are dense and difficult. So instead of diving right in, I’m going to take a couple of steps back, to some of the foundational material that I think helps make it easier to see where the category theory is coming from. (As a general statement, I find that category theory is fascinating, but it’s so abstract that you really need to do some work to ground it in a way that makes sense. Even then, it’s not easy to grasp, but it’s worth the effort!)

A lot of category theoretic concepts originated in algebraic topology. Topoi follows that – one of its foundational concepts is related to the topological idea of a sheaf. So we’re going to start by looking at what a sheaf is.


What happens if you don't understand math? Just replace it with solipsism, and you can get published!

About four years ago, I wrote a post about a crackpot theory by a biologist named Robert Lanza. Lanza is a biologist – a genuine, serious scientist. And his theory got published in a major journal, “The American Scholar”. Nevertheless, it’s total rubbish.

Anyway, the folks over at the Encyclopedia of American Loons just posted an entry about him, so I thought it was worth bringing back this oldie-but-goodie. The original post was inspired by a comment from one of my most astute commenters, Mr. Blake Stacey, who gave me a link to Lanza’s article.

The article is called “A New Theory of the Universe”, by Robert Lanza, and as I said, it was published in the American Scholar. Lanza’s article is a rotten piece of new-age gibberish, with all of the usual hallmarks: lots of woo, all sorts of babble about how important consciousness is, random nonsensical babblings about quantum physics, and of course, bad math.


Weekend Recipe: Orecchiette with Broccoli Rabe

When it comes to cooking, I absolutely love Italian food. Real Italian food, that is. In America, until recently, like all too many ethnic foods, Italian food was bastardized into trashy stuff – mostly sickeningly sweet tomato stuff from cans. Real Italian food is wonderful, simple, and fresh. Italian cooking is all about getting the best quality fresh ingredients, and doing as little to them as possible.

A couple of weeks ago, my wife and I went to Eataly. Eataly is a labor of love by the wonderful Italian chef Mario Batali. It’s a sort of massive Italian market, with a collection of restaurants embedded in it, cooking the stuff that they sell. There’s a pasta restaurant, a pizza oven, a seafood restaurant, a salumeria, a cruda bar (cruda is sort of like Italian sashimi: very fresh fish, served raw with a sprinkle of salt and olive oil), and so on.

We went to the pasta place there, and had the most phenomenal pasta dish. It was everything that I love about good Italian cooking: amazing ingredients, prepared in a simple way that brings out their flavors. It was amazing. So, naturally, I had to reproduce it at home. And being Italian food, that was pretty easy to do – because it’s such a simple dish!

The dish was orecchiette with sweet Italian sausage and broccoli rabe. Basically, you need a really good sausage, and really good fresh broccoli rabe. It’s all about those flavors, without distractions.

The trick to this is the length of the cooking time. It took me a while to figure this out: I tend to cook veggies Chinese style, which means that I barely cook them at all. I stir-fry American broccoli for under a minute. But that doesn’t work for rabe. Broccoli rabe is an absolutely lovely veggie, but it really needs to be cooked well. When it’s raw, it’s got a very strong, almost overwhelming horseradishy bitterness. You need to really let it cook for a while to get it past that. But the thing about it is, unlike the typical American broccoli, it’s got the strength to handle that. It doesn’t turn into mush. You cook rabe for 20 minutes, and it’s still got some body to it. Do it right, and it’s one of the most lovely, succulent vegetables in the world.

Ingredients

  • 3/4 pound good quality sweet sausage meat. It’s important to get a really good quality sausage. If you buy a cheap prepackaged sausage from the grocery store, the dish won’t work. You want a really good fresh Italian sausage. We bought ours at the butcher counter at Eataly. You should remove the skin, so that all you have is the meat, crumbled.
  • A head of broccoli rabe, cut into roughly two-inch lengths.
  • 4 cloves minced garlic
  • Salt and pepper
  • Chili flakes
  • One cup dry white wine
  • 1 teaspoon sugar (just enough to take the edge off the acid from the wine)
  • Olive oil
  • One pound orecchiette

Instructions

  1. Heat a saute pan. Add a few tablespoons of olive oil when it’s hot.
  2. Throw in the sausage meat. Stir it around, breaking it up into smallish bite-sized pieces. Cook it on high heat until it gets nicely browned.
  3. Reduce the heat to medium, add the garlic and chili flakes, and then the broccoli rabe. It will look like it’s way too much broccoli rabe, but don’t worry. It’s going to cook down a lot.
  4. Stir around until the broccoli rabe starts to wilt. Then add the white wine and the sugar, and reduce the heat to a low boil.
  5. Start cooking the pasta. Orecchiette generally cooks for a bit more than ten minutes, and the broccoli rabe should cook for between 15 and 20 minutes, so work out your timing so that they’ll both finish at the same time.
  6. When most of the white wine has cooked away from the sauce, add 1/2 cup of the pasta water. Whenever the sauce starts to look dry, add some of the pasta water. This adds some salt (because your pasta water should be salted!), and it also helps to build the sauce, because the starch acts as a binder.
  7. Taste the sauce, and add salt and pepper as needed.
  8. When the pasta is done, drain it, and add it to the sauce, drizzle with a few more tablespoons of olive oil, and toss it together.

Serve with a sprinkle of parmesan cheese.

Stupid Politician Tricks; aka Averages Unfairly Biased against Moronic Conclusions

In the news lately, there’ve been a few particularly egregious examples of bad math. One that really ticked me off came from Alan Simpson. Simpson is one of the two co-chairs of a presidential commission that was asked to come up with a proposal for how to handle the federal budget deficit.

The proposal from his commission claimed that social security was one of the big problems in the budget. It really isn’t – it takes extremely creative accounting combined with several blatant lies to make it into part of the budget problem. (At the moment, social security is operating at a surplus: it receives more money in taxes each year than it pays out.)

Simpson has claimed that social security must be cut if we’re going to fix the budget deficit. As part of his attempt to defend his proposed cuts, he said the following about social security:

It was never intended as a retirement program. It was set up in ‘37 and ‘38 to take care of people who were in distress — ditch diggers, wage earners — it was to give them 43 percent of the replacement rate of their wages. The life expectancy was 63. That’s why they set retirement age at 65

When I first heard that he’d said that, my immediate reaction was “that miserable fucking liar”. Because there are only two possible interpretations of that statement. Either the guy is a malicious liar, or he’s cosmically stupid and ill-informed. I was willing to accept that he’s a moron, but given that he spent a couple of years on the deficit commission, I couldn’t believe that he didn’t understand anything about how social security works.

I was wrong.

In an interview after that astonishing quote, a reporter pointed out that while the overall life expectancy was 63, people who lived to be 65 actually had a life expectancy of 79 years. You see, the life expectancy figures are pushed down by people who die young. Especially when you realize that social security started at a time when the people collecting it grew up without antibiotics, there were a whole lot of people who died very young – which biased the average downwards. Simpson’s response to this?

If you’re telling me that a guy who got to be 65 in 1940 — that all of them lived to be 77 — that is just not correct. Just because a guy gets to be 65, he’s gonna live to be 77? Hell, that’s my genre. That’s not true.

So yeah. He’s really stupid. Usually, when it comes to politicians, my bias is to assume malice before ignorance. They spend so much of their time repeating lies – lying is pretty much their entire job. But Simpson is an extremely proud, arrogant man. If he had any clue of how unbelievably stupid he sounded, he wouldn’t have said that. He’d have made up some other lie that made him look less stupid. He’s got too much ego to deliberately look like a credulous drooling cretin.

So my conclusion is: he really doesn’t understand that if the overall average life expectancy for a set of people is 63, then the life expectancy of the subset of people who live to be 63 is going to be significantly higher than 63.

Just to hammer in how stupid it is, let’s look at a trivial example. Let’s look at a group of five people, with an average life expectancy of 62 years.

One died when he was 12. What does the average age at death of the rest of them have to be to make the overall average life expectancy 62 years?

$\frac{4x + 12}{5} = 62$, so $x = 74.5$.

So in this particular group of people with a life expectancy of 62 years, the pool of people who live to be 20 has a life expectancy of 74.5 years.
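The arithmetic above is simple enough to check in a few lines of code. Here’s a minimal sketch in Python, using the same hypothetical group of five people (the names and numbers are just the made-up example from above, not real demographic data):

```python
# Hypothetical group of five people: one dies at 12, the other four at 74.5.
ages_at_death = [12, 74.5, 74.5, 74.5, 74.5]

# Overall average life expectancy for the whole group.
overall = sum(ages_at_death) / len(ages_at_death)
print(overall)  # 62.0

# Average for the subset who survive past age 20 -- the early death no
# longer drags the average down, so it jumps well above the overall figure.
survivors = [age for age in ages_at_death if age > 20]
conditional = sum(survivors) / len(survivors)
print(conditional)  # 74.5
```

That’s the whole point in miniature: the overall average and the average conditioned on surviving to a given age are two very different numbers, and conflating them is exactly Simpson’s mistake.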

It doesn’t take much math at all to see how much of a moron Simpson is. It should be completely obvious: some people die young, and the fact that they die young affects the average.

Another way of saying it, which makes it pretty obvious how stupid Simpson is: if you live to be 65, you can be pretty sure that you’ll live to be at least 65, and you’ve got a darn good chance of living to be 66.

It’s incredibly depressing to realize that the report co-signed by this ignorant, moronic jackass is widely accepted by politicians and influential journalists as a credible, honest, informed analysis of the deficit problem and how to solve it. The people who wrote the report are incapable of comprehending the kind of simple arithmetic that’s needed to see how stupid Simpson’s statement was.