Recipe: Kimchi-Marinated Steak Sandwich

Last weekend, I was watching cooking shows on PBS. One of my favorite chefs, Ming Tsai, came on. Along with a guest, he ad-libbed a dish that looked absolutely phenomenal: a kimchi marinated cheesesteak sandwich.

I didn’t write down the recipe; I just took the idea and ran with it. I mean, come on, what could be better than a well-seared medium-rare flank steak with kimchi and Taleggio? My version is, I think, a bit lighter than what they made, but it’s not exactly a light snack!

For my tastes, this is an amazing combination. Start with flank steak, which is one of my favorite cuts of beef. It’s not the tenderest piece of beef, so it needs some work, but it’s got an absolutely fantastic flavor, and as long as you prepare it properly, don’t overcook it, and slice it thin, it comes out great.

Then you’ve got kimchi. I only discovered kimchi fairly recently, and I absolutely love the stuff, and I’m always looking for more things to eat it with.

Finally, Taleggio cheese. I adore soft cheeses, and Taleggio is probably the nicest soft cheese that I’ve had from Italy. It’s very creamy, with a nice funkiness to it, but not an overpowering stinkiness.

Put all three of those together in a sandwich, and I’m in heaven.

Ingredients

  • One largish (1 1/2 pound) flank steak.
  • 1/2 cup kimchi, finely minced, plus 1 tablespoon of
    the liquid from the kimchi.
  • Some more kimchi, left whole.
  • salt and pepper
  • one teaspoon sugar.
  • one half of a large onion, cut into thin slices.
  • 1/2 pound Taleggio cheese.
  • Baguette.

One note about the ingredients: I recently made homemade kimchi for the first time. It’s really easy to make, and it is really a totally different thing from the stuff out of a jar. I don’t see why it’s so different: kimchi is a fermented preserved cabbage, which is exactly the kind of thing that should be really easy to make in quantity, and then sell in jars. But the fact is, nothing from the store remotely compares to homemade. I basically used this recipe, except that I couldn’t get any daikon, so I left that out.

Instructions

  1. Take a couple of forks, and go up and down the steak, poking holes. Don’t be lazy: the more you do this, the better. You’re tenderizing the meat, and opening up a way for the marinade to penetrate.
  2. Salt the meat generously, then coat both sides with the minced kimchi. Refrigerate overnight.
  3. Scrape the kimchi off the steak, but do not throw it away. We’re going to cook it, and use it as a topping.
  4. Heat a cast-iron pan on high heat until it’s smoking hot.
  5. Sprinkle half the sugar and some more salt and pepper onto the top of the steak, and then put the sugared side down in the pan, for about 3 minutes. While it’s cooking on that side, put the rest of the sugar and salt onto the other side. Then turn it over, and cook the other side for 3 minutes.
  6. Transfer the steak into a 350 degree oven for 8-10 minutes. (I did eight; it was exactly the way I like it, nice and medium rare, but I think a lot of people would prefer it a bit more well done.) Take it out after the 8-10 minutes, and leave it alone for a while. Don’t slice it immediately!
  7. While the steak is resting, heat up a pan, and add some oil. Toss in the onions with a bit of salt, and stir them around. When they just start to turn brown, add in the kimchi and any liquid left from the marinade, and turn down the heat. Cook it for about five minutes, adding a bit of water if it gets too dry.
  8. Slice the baguette in half, and spread a thin layer of the Taleggio cheese over both halves.
  9. Slice the steak on a bias against the grain into thin slices.
  10. Take any drippings from slicing the steak, and pour them into the onions and kimchi.
  11. Put a heap of steak slices onto the bread, and then put a pile of the cooked onions and kimchi, and then a piece of fresh uncooked kimchi on top.
  12. Close up the sandwich, and eat!

Philadelphia, eat your heart out. Your cheesesteaks got nothing on this!

Back to an old topic: Bad Vaccine Math

The very first Good Math/Bad Math post ever was about an idiotic bit of antivaccine rubbish. I haven’t dealt with antivaccine stuff much since then, because the bulk of the antivaccine idiocy has nothing to do with math. But the other day, a reader sent me a really interesting link from what my friend Orac calls a “wretched hive of scum and quackery”, naturalnews.com, in which they try to argue that the whooping cough vaccine is an epic failure:

(NaturalNews) The utter failure of the whooping cough (pertussis) vaccine to provide any real protection against disease is once again on display for the world to see, as yet another major outbreak of the condition has spread primarily throughout the vaccinated community. As it turns out, 90 percent of those affected by an ongoing whooping cough epidemic that was officially declared in the state of Vermont on December 13, 2012, were vaccinated against the condition — and some of these were vaccinated two or more times in accordance with official government recommendations.

As reported by the Burlington Free Press, at least 522 cases of whooping cough were confirmed by Vermont authorities last month, which was about 10 times the normal amount from previous years. Since that time, nearly 100 more cases have been confirmed, bringing the official total as of January 15, 2013, to 612 cases. The majority of those affected, according to Vermont state epidemiologist Patsy Kelso, are in the 10-14-year-old age group, and 90 percent of those confirmed have already been vaccinated one or more times for pertussis.

Even so, Kelso and others are still urging both adults and children to get a free pertussis shot at one of the free clinics set up throughout the state, insisting that both the vaccine and the Tdap booster for adults “are 80 to 90 percent effective.” Clearly this is not the case, as evidenced by the fact that those most affected in the outbreak have already been vaccinated, but officials are apparently hoping that the public is too naive or disengaged to notice this glaring disparity between what is being said and what is actually occurring.

It continues in that vein. The gist of the argument is:

  1. We say everyone needs to be vaccinated, which will protect them from getting the whooping cough.
  2. The whooping cough vaccine is, allegedly, 80 to 90% effective.
  3. 90% of the people who caught whooping cough were properly vaccinated.
  4. Therefore the vaccine can’t possibly work.

What they want you to do is look at that 80 to 90 percent effectiveness rate, see that only 10-20% of vaccinated people should be susceptible to the whooping cough, and compare that 10-20% to the 90% of actual infected people who were vaccinated. 20% (the upper bound of the susceptible portion of vaccinated people, according to the quoted statistic) is clearly much smaller than 90% – therefore it’s obvious that the vaccine doesn’t work.

Of course, this is rubbish. It’s a classic apple to orange-grove comparison. You’re comparing percentages, when those percentages are measuring different groups – groups with wildly different sizes.

Take a pool of 1000 people, and suppose that 95% are properly vaccinated (the current DTaP vaccination rate in the US is around 95%). That gives you 950 vaccinated people and 50 unvaccinated people.

In the vaccinated pool, let’s assume that the vaccine was fully effective on 90% of them (that’s the highest estimate of effectiveness, which will result in the lowest number of susceptible vaccinated people – aka the best possible scenario for the anti-vaxers). That gives us 95 vaccinated people who are susceptible to the whooping cough.

There’s the root of the problem. Using numbers that are ridiculously friendly to the anti-vaxers, we’ve still got a population of nearly twice as many susceptible vaccinated people as unvaccinated. So we’d expect, right out of the box, that roughly two-thirds of the cases of whooping cough would be among the vaccinated people.

In reality, the numbers are much worse for the antivax case. The percentage of people who were ever vaccinated is around 95%, because you need the vaccination to go to school. But that’s just the childhood dose. DTaP is a vaccination that needs to be periodically boosted, or the immunity wanes. And the percentage of people who’ve had boosters is extremely low. Among adolescents, according to the CDC, only a bit more than half have had DTaP boosters; among adults, less than 10% have had a booster within the last 5 years.

What’s your susceptibility if you’ve gone more than 5 years without vaccination? Somewhere around 40% of people who didn’t have boosters in the last five years are susceptible.

So let’s just play with those numbers a bit. Assume, for simplicity, that 50% of the people are adults and 50% are children, and assume that all of the children are fully up-to-date on the vaccine. Then the susceptible population among the vaccinated is 10% of the children (10% of 475), plus 10% of the up-to-date adults (10% of 10% of 475), plus 40% of the adults who aren’t up-to-date (40% of 90% of 475). That works out to roughly 223 susceptible people among the vaccinated, against 50 susceptible unvaccinated people: so you’d expect around 82% of the actual cases of whooping cough to be among people who’d been vaccinated. Suddenly, the anti-vaxers’ case doesn’t look so good, does it?
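
If you want to check that arithmetic yourself, here’s a quick back-of-the-envelope version of it as a little Scala script. This is just a sketch of the numbers used above – the 95% coverage, 90% effectiveness, and 40% waned-immunity figures are the same rough assumptions as in the text, not real surveillance data, and the object and variable names are mine:

object PertussisBaseRate {
  def main(args: Array[String]): Unit = {
    val pool = 1000.0
    val vaccinated = 0.95 * pool            // ~95% DTaP coverage
    val unvaccinated = pool - vaccinated    // assume all 50 are susceptible

    // Simple scenario: the vaccine is 90% effective for everyone vaccinated.
    val simpleSusceptible = 0.10 * vaccinated
    val simpleShare = simpleSusceptible / (simpleSusceptible + unvaccinated)
    println(f"simple: ${simpleSusceptible}%.0f susceptible vaccinated vs. ${unvaccinated}%.0f unvaccinated")
    println(f"simple: expected share of cases among the vaccinated = ${simpleShare * 100}%.0f%%")

    // Refined scenario: half the vaccinated pool are adults, only ~10% of whom
    // have had a recent booster; the rest are ~40% susceptible.
    val children = vaccinated / 2
    val adults = vaccinated / 2
    val refinedSusceptible =
      0.10 * children +          // up-to-date kids, 90% effective vaccine
      0.10 * (0.10 * adults) +   // boosted adults, also 90% effective
      0.40 * (0.90 * adults)     // adults with waned immunity
    val refinedShare = refinedSusceptible / (refinedSusceptible + unvaccinated)
    println(f"refined: ${refinedSusceptible}%.0f susceptible vaccinated")
    println(f"refined: expected share of cases among the vaccinated = ${refinedShare * 100}%.0f%%")
  }
}

Running it prints roughly 66% for the simple scenario and 82% for the refined one – the point being that even generous assumptions predict that most cases will show up among the vaccinated, purely because of group sizes.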

Consider, for a moment, what you’d expect among a non-vaccinated population. Pertussis is highly contagious. If someone in your household has pertussis, and you’re susceptible, you’ve got a better than 90% chance of catching it. It’s that contagious. Routine exposure – not sharing a household, but going to work, to the store, etc., with people who are infected – still gives you about a 50% chance of infection if you’re susceptible.

In the state of Vermont, where NaturalNews is claiming that the evidence shows that the vaccine doesn’t work, how many cases of Pertussis have they seen? Around 600, out of a state population of 600,000 – an infection rate of one tenth of one percent. 0.1 percent, from a virulently contagious disease.

That’s the highest level of Pertussis that we’ve seen in the US in a long time. But at the same time, it’s really a very low number for something so contagious. To compare for a moment: there’s been a huge outbreak of Norovirus in the UK this year. Overall, more than one million people have caught it so far this winter, out of a total population of 62 million, for a rate of about 1.6% or sixteen times the rate of infection of pertussis.

Why is the rate of infection with this virulently contagious disease so different from the rate of infection with that other virulently contagious disease? Vaccines are a big part of it.

Hensel's Lemma: Newton's Method for p-adic Numbers

This is, sadly, the last part of my posts about p-adic arithmetic. It’s not for lack of information, but rather for lack of competence on my part. The area of math that I’m worst at is analysis, and an awful lot of the interesting things that you can do with p-adic numbers are analysis.

For an explanation of what p-adic numbers are, and how arithmetic on them works, you can refer to my first post on the p-adics, and for an explanation of p-adic metrics and distance, you can look at this post.

Today, we’re going to look at Hensel’s lemma. This is really cool. Hensel’s lemma shows you a simple way of finding polynomial roots in p-adic arithmetic. It’s sometimes called Newton’s method for p-adics, which is both a good description of how it’s used and a complete misnomer (because the lemma is a description of a property, while Newton’s method is a description of a procedure).

Hensel’s lemma says that if you’ve got a polynomial and a prime number p, and you can find a root of the polynomial using arithmetic mod p, then you can use that mod-p root to find the corresponding root mod p^2, and given the root mod p^2, you can find it mod p^3, and so on!

So far, this is just a statement about modular arithmetic – not about p-adic arithmetic. But what ends up happening is that the result mod p gives you a first approximation of the root in the p-adics. Using the process defined by Hensel’s lemma, you can take that root and extend it to a better approximation. It ends up being the case that each time you lift the result to a higher exponent of p, you’re performing the exact analog of one step in Newton’s method of finding roots! But it’s actually better than that. If you look at Hensel’s lemma in the p-adic numbers, then each step extends the approximation by exactly one digit!

Let’s look at the lemma in formal terms:

Suppose that you have a polynomial f(x).

If there is an integer i and a prime number p such that f(i) = 0 in mod p, then there is an integer j such that f(j) = 0 in mod p^2. In general, if there’s an exponent k where f(i) = 0 in mod p^k, then there’s a j where f(j) = 0 in mod p^{k+1}.

In fact, we can do even better than that! If we know the root i for an exponent k, then we can easily compute j using a process called lifting:

j = i - f(i)/f'(i)

(In this, f'(i) is the first derivative of f at i, and the division is done mod p^{k+1}.)

You can see, looking at that equation, that the derivative of f at i can’t be zero mod p, or else the denominator of the fraction would be zero, and everything would go kablooie in your face!

In p-adic arithmetic, this becomes absolutely amazing. If i is the root mod p^k, then the root mod p^{k+1} is:

j = i - f(i)/f'(i)

It can’t be simpler. And you can clearly see the relation to Newton’s method.

For fun, let’s try looking at an example: let’s take the square root of 2 in the 7-adics. In base-10, the square root of 2 is roughly 1.4142135624. In 7-adic, it’s going to be something completely different. We’ll start by phrasing it as a polynomial: x^2 - 2 = 0. So, what’s the solution to that mod 7? 3: in mod-7 arithmetic, 3*3 = 9 = 2.

So, our first approximation of a square root of 2 is 3.

We can extend to the next digit using Hensel’s lemma. The lifting step gives j = 3 - f(3)/f'(3) = 3 - (9-2)/6 = 3 - 7/6. We’re lifting to mod 49 here, and since 6 = -1 (mod 7), -7/6 = 7 (mod 49). So j = 3 + 7 = 10, which is 13 when written in base 7.

Repeating the process, we’d get 213, 6213, and so on – each step adding one more 7-adic digit.
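
If you want to play with this yourself, here’s a little sketch of the lifting process in Scala, using BigInt’s modular inverse to do the division. The structure and names here (HenselSqrt2, henselLift, and so on) are just mine for illustration – it’s the Newton step j = i - f(i)/f'(i) carried out mod increasing powers of 7:

// A sketch of Hensel lifting for f(x) = x^2 - 2 over the 7-adics.
object HenselSqrt2 {
  val p = BigInt(7)

  def f(x: BigInt): BigInt = x * x - 2
  def fPrime(x: BigInt): BigInt = 2 * x

  // Lift a root i mod p^k to a root mod p^(k+1): j = i - f(i)/f'(i).
  // This needs f'(i) to be invertible, i.e. f'(i) != 0 mod p.
  def henselLift(i: BigInt, pk1: BigInt): BigInt = {
    val inv = fPrime(i).modInverse(pk1)
    ((i - f(i) * inv) % pk1 + pk1) % pk1
  }

  def main(args: Array[String]): Unit = {
    var root = BigInt(3)   // 3*3 = 2 mod 7: our first approximation
    var modulus = p
    for (_ <- 1 to 4) {
      modulus *= p
      root = henselLift(root, modulus)
      println(s"root mod $modulus = $root (base 7: ${root.toString(7)})")
    }
    // Prints 10 (13), 108 (213), 2166 (6213), ... - one new 7-adic digit per step.
  }
}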

Good Math: the Book

Ladies and Gentlemen! I’ve got a very big announcement!

At long last, I’ve written a book based on this blog. It’s being published by the Pragmatic Bookshelf. The way that the Prags work, you can pre-order the book now, and immediately download the current draft. As I update the rest of the book, you’ll get all of the updates, up to the final printed version of the book.

I’m very proud of this thing. It’s the result of a whole lot of work, and I think it should be a lot of fun for mathematically curious folks to read.

For this book, I’ve collected up a bunch of my favorite posts from the history of this blog, and revised/rewritten them in a much more polished, coherent form. The book includes:

  1. Numbers: some basic number theory, what numbers mean, how they’re constructed.
  2. Funny numbers: a collection of strange but interesting numbers: 0, e, i, …
  3. Writing Numbers: Roman numerals, Egyptian fractions, continued fractions, …
  4. Logic: first order predicate logic, proofs, temporal logic, and Prolog.
  5. Set theory
  6. Computation and computing machines

Aside from being a fun read, buying it will support Scientopia. From the time that Scientopia got started, I’ve been paying all of the hosting bills. That’s about $250 a month, for the last three years. Making some money from this book will make it possible for me to continue supporting it, and maybe even pay for some help with technical support!

Not to mention the fact that if this book does well, there will be more volumes down the line. In particular, I’ve got my eye on doing some stuff about the math of programming: type theory, category theory, algebraic data structures, etc.

Enjoy!

Vortex Garbage

A reader who saw my earlier post on the Vortex math talk at a TEDx conference sent me a link to an absolutely dreadful video that features some more crackpottery about the magic of vortices.

It says:

The old heliocentric model of our solar system,
planets rotating around the sun, is not only boring,
but also incorrect.

Our solar system moves through space at 70,000km/hr.
Now picture this instead:

(Image of the sun with a rocket/comet trail propelling
it through space, with the planets swirling around it.)

The sun is like a comet, dragging the planets in its wake.
Can you say “vortex”?

The science of this is terrible. The sun is not a rocket. It does not propel itself through space. It does not have a tail. It does not leave a significant “wake”. (There is interstellar material, and the sun moving through it does perturb it, but it’s not a wake: the interstellar material is orbiting the galactic center just like the sun. Gravitational effects do cause perturbations, but it’s not like a boat moving through still water, producing a wake.) Even if you stretch the definition of “wake”, the sun certainly does not leave a wake large enough to “drag” the planets. In fact, if you actually look at the solar system, the plane of the ecliptic – the plane where the planets orbit the sun – is at a roughly 60 degree angle to the galactic plane. If planetary orbits were a drag effect, then you would expect the orbits to be perpendicular to the galactic plane. But they aren’t.

If you look at it mathematically, it’s even worse. The video claims to be making a distinction between the “old heliocentric” model of the solar system, and their new “vortex” model. But in fact, mathematically, they’re exactly the same thing. Look at it from a heliocentric point of view, and you’ve got the heliocentric model. Look at the exact same system from a point that’s not moving relative to the galactic center, and you get the vortex. They’re the same thing. The only difference is how you look at it.

And that’s just the start of the rubbish. Once they get past their description of their “vortex” model, they go right into the woo. Vortex is life! Vortex is spirituality! Oy.

If you follow their link to their website, it gets even sillier, and you can start to see just how utterly clueless the author of this actually is:

(In reference to a NASA image showing the interstellar “wind” and the heliopause)

Think about this for a minute. In this diagram it seems the Solar System travel to the left. When the Earth is also traveling to the left (for half a year) it must go faster than the Sun. Then in the second half of the year, it travels in a “relative opposite direction” so it must go slower than the Sun. Then, after completing one orbit, it must increase speed to overtake the Sun in half a year. And this would go for all the planets. Just like any point you draw on a frisbee will not have a constant speed, neither will any planet.

See, it’s a problem that the planets aren’t moving at a constant speed. They speed up and slow down! Oh, the horror! The explanation is that they’re caught by the sun’s wake! So they speed up when they get dragged, until they pass the sun (how does being dragged by the sun ever make them faster than the sun? Who knows!), and then they’re not being dragged anymore, so they slow down.

This is ignorance of physics and of the entire concept of frame of reference and symmetry that is absolutely epic.

There’s quite a bit more nonsense, but that’s all I can stomach this evening. Feel free to point out more in the comments!

Types Gone Wild! SKI at Compile-Time

Over the weekend, a couple of my Foursquare coworkers and I were chatting on twitter, and one of my smartest coworkers, a great guy named Jorge Ortiz, pointed out that type inference in Scala (the language we use at Foursquare, and also pretty much my favorite language) is Turing complete.

Somehow, I hadn’t seen this before, and it absolutely blew my mind. So I asked Jorge for a link to the proof. The link he sent me is a really beautiful blog post. It doesn’t just prove that Scala type inference is Turing complete, but it does it in a remarkably beautiful way.

Before I get to the proof, what does this mean?

A system is Turing complete when it can perform any possible computation that could be performed on any other computing device. The Turing machine is, obviously, Turing complete. So are lambda calculus, the Minsky machine, the Brainfuck computing model, and the Scala programming language itself.

If type inference is Turing complete, then that means that you can write a Scala program where, in order to type-check the program, the compiler has to run an arbitrary program to completion. It means that there are, at least theoretically, Scala programs where the compiler will take forever – literally forever – to determine whether or not a given program contains a type error. Needless to say, I consider this to be a bad thing. Personally, I’d really prefer to see the type system be less flexible. In fact, I’d go so far as to say that this is a fundamental error in the design of Scala, and a big strike against it as a language. Having a type-checking system which isn’t guaranteed to terminate is bad.

But let’s put that aside: Scala is pretty entrenched in the community that uses it, and they’ve accepted this as a tradeoff. How did the blog author, Michael Dürig, prove that Scala type checking is Turing complete? By showing how to implement a variant of lambda calculus called SKI combinator calculus entirely with types.

SKI calculus is seriously cool. We know that lambda calculus is Turing complete. It turns out that for any lambda calculus expression, there’s a way of rewriting it without any variables, and without any lambdas at all, using three canonical master functions. If you’ve got those three, then you can write anything, anything at all. The three are called S, K, and I.

  • The S combinator is: S x y z = x z (y z).
  • The K combinator is: K x y = x.
  • The I combinator is: I x = x.

They come from intuitionistic logic, where they’re fundamental axioms that describe how intuitionistic implication works. K is the rule A ⇒ (B ⇒ A); S is the rule (A ⇒ (B ⇒ C)) ⇒ ((A ⇒ B) ⇒ (A ⇒ C)); and I is (A ⇒ A).

Given any lambda calculus expression, you can rewrite it as a chain of SKIs. (If you’re interested in seeing how, please just ask in the comments; if enough people are interested, I’ll write it up.) What the author of the post did is show how to implement the S, K, and I combinators in Scala types.

trait Term {
  type ap[x <: Term] <: Term
  type eval <: Term
}

He’s created a type Term, which is the supertype of any computable fragment written in this type-SKI. Since everything is a function, all terms have to have two type-level “methods”: one of them is a one-parameter “function” which applies the term to a parameter, and the second is a “function” which simplifies the term into canonical form.

He implements the S, K, and I combinators as traits that extend Term. We’ll start with the simplest one, the I combinator.

trait I extends Term {
  type ap[x <: Term] = I1[x]
  type eval = I
}

trait I1[x <: Term] extends Term {
  type ap[y <: Term] = eval#ap[y]
  type eval = x#eval
}

I needs to take a parameter, so its apply type-function takes a parameter x, and returns a new type I1[x] which has the parameter encoded into it. Evaluating I1[x] does exactly what you’d want from the I combinator with its parameter – it returns it.

The apply “method” of I1 looks strange. What you have to remember is that in lambda calculus (and in the SKI combinator calculus), everything is a function – so even after evaluating I.ap[x] to some other type, it’s still a type function. So it still needs to be applicable. Applying it is exactly the same thing as applying its parameter.

So, for any type A, if you write something like var a : I.ap[A].eval, the type of a will evaluate to A. If you apply I.ap[A].ap[Z], it’s equivalent to taking the result of evaluating I.ap[A], giving you A, and then applying that to Z.

The K combinator is much more interesting:

// The K combinator
trait K extends Term {
  type ap[x <: Term] = K1[x]
  type eval = K
}

trait K1[x <: Term] extends Term {
  type ap[y <: Term] = K2[x, y]
  type eval = K1[x]
}

trait K2[x <: Term, y <: Term] extends Term {
  type ap[z <: Term] = eval#ap[z]
  type eval = x#eval
}

It's written in curried form: applying K to a parameter gives you K1, and applying K1 to a second parameter gives you K2.

The implementation is a whole lot trickier, but it's the same basic mechanics. Applying K.ap[X] gives you K1[X]. Applying that to Y with K1[X].ap[Y] gives you K2[X, Y]. Evaluating that gives you X.

The S combinator is more of the same.

// The S combinator
trait S extends Term {
  type ap[x <: Term] = S1[x]
  type eval = S
}

trait S1[x <: Term] extends Term {
  type ap[y <: Term] = S2[x, y]
  type eval = S1[x]
}

trait S2[x <: Term, y <: Term] extends Term {
  type ap[z <: Term] = S3[x, y, z]
  type eval = S2[x, y]
}

trait S3[x <: Term, y <: Term, z <: Term] extends Term {
  type ap[v <: Term] = eval#ap[v]
  type eval = x#ap[z]#ap[y#ap[z]]#eval
}


Michid then goes on to show examples of how to use these beasts. He implements equality testing, and then shows how to test if different type-expressions evaluate to the same thing. And all of this happens at compile time. If the equality test fails, then it's a type error at compile time!
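
Just to give a flavor of what a compile-time check like that can look like, here's a minimal sketch of the same idea. It uses the standard =:= evidence from Predef rather than Michid's own equality machinery, and the constant term c is a hypothetical test fixture of mine, not something from his post:

// A constant term to probe the combinators with (my own test fixture).
trait c extends Term {
  type ap[x <: Term] = c
  type eval = c
}

object SkiChecks {
  // I applied to c evaluates to c.
  implicitly[I#ap[c]#eval =:= c]

  // K applied to c and then to something else still evaluates to c.
  implicitly[K#ap[c]#ap[K#ap[c]]#eval =:= c]

  // S K K behaves like I: applying it to c evaluates to c.
  implicitly[S#ap[K]#ap[K]#ap[c]#eval =:= c]

  // If any of these evaluated to the wrong type, the implicit wouldn't
  // resolve, and the "test failure" would be a compile-time type error.
}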

It's a brilliant little proof. Even if you can't read Scala syntax, and you don't really understand Scala type inference, as long as you know SKI, you can look at the equality comparisons, and see how it works in SKI. It's really beautiful.

Static Typing: Give me a break!

I’m a software engineer. I write code for a living. I’m also a programming language junkie. I love programming languages. I’m obsessed with programming languages. I’ve taught myself more programming languages than any sane person has any reason to know.

Learning that many languages, I’ve developed some pretty strong opinions about what makes a good language, and what kind of things I really want to see in the languages that I use.

My number one preference: strong static typing. That’s part of a more general preference, for preserving information. When I’m programming, I know what kind of thing I expect in a parameter, and I know what I expect to return. When I’m programming in a weakly typed language, I find that I’m constantly throwing information away, because I can’t actually say what I know about the program. I can’t put my knowledge about the expected behavior into the code. I don’t think that that’s a good thing.

But… (you knew there was a but coming, didn’t you?)

This is my preference. I believe that it’s right, but I also believe that reasonable people can disagree. Just because you don’t think the same way that I do doesn’t mean that you’re an idiot. It’s entirely possible for someone to know as much as I do about programming languages and have a different opinion. We’re talking about preferences.

Sadly, that kind of attitude is something that is entirely too uncommon. I seriously wonder sometimes if I’m crazy, because it seems like everywhere I look, on every topic, no matter how trivial, most people absolutely reject the idea that it’s possible for an intelligent, knowledgeable person to disagree with them. It doesn’t matter what the subject is: politics, religion, music, or programming languages.

What brought this little rant on is that someone sent me a link to a comic, called “Cartesian Closed Comic”. It’s a programming language geek comic. But what bugs me is this comic. Which seems to be utterly typical of the kind of attitude that I’m griping about.

See, this is a pseudo-humorous way of saying “Everyone who disagrees with me is an idiot”. It’s not that reasonable people can disagree. It’s that people who disagree with me only disagree because they’re ignorant. If you like static typing, you probably know type theory. If you don’t like static typing, there’s almost no chance that you know anything about type theory. So the reason that those stupid dynamic typing people don’t agree with people like me is that they just don’t know as much as I do. And the reason that arguments with them don’t go anywhere isn’t because we have different subjective preferences: it’s because they’re just too ignorant to understand why I’m right and they’re wrong.

Everything about this argument is crap, and it pisses me off.

Most programmers – whether they prefer static typing or not – don’t know type theory. Most of the arguments about whether to use static or dynamic typing aren’t based on type theory. It’s just the same old snobbery, the “you can’t disagree with me unless you’re an idiot”.

Among intelligent skilled engineers, the static versus dynamic typing thing really comes down to a simple, subjective argument:

Static typing proponents believe that expressing intentions in a statically checkable form is worth the additional effort of making all of the code type-correct.

Dynamic typing proponents believe that it’s not: that strong typing creates an additional hoop that the programmer needs to jump through in order to get a working system.

Who’s right? In fact, I don’t think that either side is universally right. Building a real working system is a complex thing. There’s a complex interplay of design, implementation, and testing. What static typing really does is take some amount of stuff that could be checked with testing, and allow the compiler to check it in an abstract way, instead of with specific tests.

Is it easier to write code with type declarations, or with additional tests? Depends on the engineers and the system that they’re building.

Friday Random Ten

I haven’t done an FRT in a while. I’m really trying to get back to posting more regularly, and I’ve always enjoyed talking a bit about the crazy stuff I love to listen to.

  1. Marillion, “If my Heart were a Ball”: Very appropriate that in my first FRT in a while, Marillion would come up first. There’s just something about these guys, I can’t get enough of them. This is the live version of this song, from the “Live at High Voltage 2010” recording. I love this song on the original recording, but the version they played here is even better.
  2. Gordian Knot, “Shaman’s Whisper”: Wow, but I haven’t listened to this in a while. Gordian Knot is what instrumental prog rock should be. It’s got complexity and skill, but it’s also got soul. This is music, not wankery.
  3. Spock’s Beard, “The 39th Street Blues”: not a bad track, but this album always leaves a bad taste in my mouth. It’s heavy-handed preachy stuff, written by Neal Morse right after his turn to born-again Christianity.
  4. Parallel or 90 Degrees, “Threesome” : Interesting track from Andy Tillison and Po90. I really love the album that this came from, but this is my least favorite track. It’s got a very heavy electric fuzz effect which always gives me a headache.
  5. Squackett, “Stormchaser”: Here’s something really super. Steve Hackett (of old Genesis) and Chris Squire (of Yes) got together and recorded an album. Amazingly, it really does sound like a blend of Hackett-era Genesis mixed with early Yes. I really like this.
  6. Sonic Youth, “Stil” : I’ve adored Sonic Youth from the first time I heard them. I’ve never been a big fan of much of anything punk-related, except for these guys. I particularly like it when they’re screwing around experimenting with free improvs. This piece comes from one of their collections of experiments, called “SYR2”. Not something that mainstream rock fans will enjoy, but I think it’s brilliant.
  7. Takenobu, “Dark in the City” : This is something that a friend of mine from work turned me on to. I don’t really know how to describe it. Sort-of mellow alt-rock with layered string accompaniment? I don’t know, but it’s so beautiful.
  8. Steven Wilson, “Significant Other” : I learned about Steven Wilson because of Porcupine Tree. But I’ve learned to basically buy everything that he does. From prog-rock to psychedelia to experimental jazz, from original music to remixes of classic old albums, everything that he does is pure gold. This is from Insurgentes, an album that I just can’t stop listening to. You can definitely hear that it’s the same guy who led Porcupine Tree, but it’s got a really unique dark sound to it.
  9. vonFrickle, “Cranium Controller” : Remember what I said about Gordian Knot? vonFrickle is an instrumental prog band that frequently strays into the wankery area. A lot of their stuff is fantastic, but it’s often show-off complexity – complex because they want to show off how they can do this awesomely complex stuff, not because the awesomely complex stuff actually sounds good. This track is one of those that feels like complexity for the sake of showing off.
  10. Steve Hogarth and Steven Wilson, “A Cat with Seven Souls” : What do you get when you take the lead singer of my favorite band, and blend his style with one of my favorite all-around musicians? It’s what you’d expect, which is pretty amazing.

Sloppy Dualism Denies Free Will?

When I was an undergrad in college, I was a philosophy minor. I spent countless hours debating ideas about things like free will. My final paper was a 60 page rebuttal to what I thought was a sloppy argument against free will. Now, it’s been more years since I wrote that than I care to admit – and I still keep seeing the same kind of sloppy arguments, that I argue are ultimately circular, because they’re hiding their conclusion in their premises.

There’s an argument against free will that I find pretty compelling. I don’t agree with it, but I do think that it’s a solid argument:

Everything in our experience of the universe ultimately comes down to physics. Every phenomenon that we can observe is, ultimately, the result of particles interacting according to basic physical laws. Thermodynamics is the ultimate, fundamental ruler of the universe: everything that we observe is a result of a thermodynamic process. There are no exceptions to that.

Our brain is just another physical device. It’s another complex system made of an astonishing number of tiny particles, interacting in amazingly complicated ways. But ultimately, it’s particles interacting the way that particles interact. Our behavior is an emergent phenomenon, but ultimately, we don’t have any ability to make choices, because there’s no mechanism that allows us free choice. Our choices are determined by the physical interactions, and our consciousness of those results is just a side-effect of that.

If you want to argue that free will doesn’t exist, that argument is rock solid.

But for some reason, people constantly come up with other arguments – in fact, much weaker arguments that come from what I call sloppy dualism. Dualism is the philosophical position that says that a conscious being has two different parts: a physical part, and a non-physical part. In classical terms, you’ve got a body which is physical, and a mind/soul which is non-physical.

In this kind of argument, you rely on that implicit assumption of dualism, essentially asserting that whatever physical process we can observe isn’t really you, and that therefore by observing any physical process of decision-making, you infer that you didn’t really make the decision.

For example…

And indeed, this is starting to happen. As the early results of scientific brain experiments are showing, our minds appear to be making decisions before we’re actually aware of them — and at times by a significant degree. It’s a disturbing observation that has led some neuroscientists to conclude that we’re less in control of our choices than we think — at least as far as some basic movements and tasks are concerned.

This is something that I’ve seen a lot lately: when you do things like functional MRI, you can find that our brains settled on a decision before we consciously became aware of making the choice.

Why do I call it sloppy dualism? Because it’s based on the idea that somehow the piece of our brain that makes the decision is different from the part of our brain that is our consciousness.

If our brain is our mind, then everything that’s going on in our brain is part of our mind. Taking a piece of our brain and saying “Whoops, that piece of your brain isn’t you, so when it made the decision, it was deciding for you instead of it being you deciding” is exactly that kind of sloppy dualism.

By starting with the assumption that the physical process of decision-making we can observe is something different from your conscious choice of the decision, this kind of argument is building the conclusion into the premises.

If you don’t start with the assumption of sloppy dualism, then this whole argument says nothing. If we don’t separate our brain from our mind, then this whole experiment says nothing about the question of free will. It says a lot of very interesting things about how our brain works: it shows that there are multiple levels to our minds, and that we can observe those different levels in how our brains function. That’s a fascinating thing to know! But does it say anything about whether we can really make choices? No.

Depression and Geeks

Since this weekend, when the news of Aaron Swartz’s suicide broke, there’s been a lot of discussion of the government’s ridiculous pursuit of him, and of the fact that he suffered from depression. I can’t contribute anything new about his prosecution. It was despicable, ridiculous, and sadly, all too typical of how our government works.

But on the topic of depression, I want to chime in. A good friend of mine wrote a post on his own blog about depression in the tech/geek community, which I feel like I have to respond to.

Benjy, who wrote the post, is a great guy who I have a lot of respect for. I don’t intend this to be an attack on him. But I’ve seen a lot of similar comments, and I think that they’re built on a very serious mistake.

Benjy argues that the mathematical/scientific/logical mindset of a geek (my word, not his) makes us more prone to depression:

Someone whose toolkit for dealing with the world consists of logic and reason, ideals and abstractions, may have particularly weak defenses against this trickster disease.

You realize that it’s lying to you, that there are treatments, that that things aren’t objectively as bad as they feel. But you know, on some level deeper than logic, that there is no point, no hope and no future. And to encounter, maybe for the first time, the hard limits of rationality, to realize that there’s a part of your mind that can override the logical world view that is the core of your identity, may leave you feeling particularly helpless and hopeless.

You can’t rationalize depression away, a fact that people who’ve never suffered from it find hard to comprehend. But if someone you care about is struggling with it, and it’s likely that someone is, you can help them find a new way to access their mind.

Tell them that you care about them and appreciate them and are glad to have them in your life. Show them that you enjoy being around them and that you love them. And above all, spend time with them. Give them glimpses of an alternate future, one in which they are secure, happy and loved, tear away the lies that depression needs in order to survive, and in that sunlight it will wither.

Most of what Benjy wrote, I agree with completely. The problem that I have with it is that I think that parts of it are built on the assumption that our conscious reasoning is a part of the cause of depression. If geeks are more prone to suffering from depression because of the way that our minds work, that means that the way that we make decisions and interpret the world is a part of why we suffer from this disease. The implication that too many people will draw from that is that we just need to decide to make different decisions, and the disease will go away. But it won’t – because depression isn’t a choice.

The thing that you always need to remember about depression – and which Benjy mentions – is that depression is not something which you can reason with. Depression isn’t a feeling. It’s not a way of thinking, or a way of viewing the world. It’s not something that you can choose not to suffer from. It’s a part of how your brain works.

The thing that anyone who suffers from depression needs to know is that it’s a disease, and that it’s treatable. It doesn’t matter if your friends are nice to you. It doesn’t matter if you know that they love you. That kind of thinking – that kind of reasoning about depression – is part of the fundamental trap of depression.

Depression is a disease of the brain, and it affects your mind – it affects your self in a terrible way. No amount of support from your friends and family, no amount of positive reinforcement can change that. Believing that emotional support can help a depressed person is part of the problem, because it’s tied to the all-too-common stigma of mental illness: that you’re only suffering because you’re too weak or too helpless to get over it.

You don’t just get over a mental illness like depression, any more than you get over diabetes. As a friend or loved one of a person with diabetes, being kind, showing your love for them doesn’t help unless you get them to get treatment.

I’m speaking from experience. I’ve been there. I spent years being miserable. It nearly wrecked my marriage. My wife was as supportive and loving as anyone could dream of. But I couldn’t see it. I couldn’t see anything.

The experience of depression is different for different people. But for me, it was like the world had gone flat. I wasn’t sad – I was just dead inside. Nothing could have any impact on me. It’s a hard thing to explain, but looking back, it’s like the world had gone two-dimensional and black-and-white. Eventually, I was reading something in some magazine about depression, and it talked about that flat feeling, and I realized that maybe, maybe that was what was wrong with me.

When I started taking antidepressants, it was almost frightening, because it changed the world so much. Antidepressants didn’t make me happy. In fact, for a while, they made me very sad, because I was realizing how badly I’d been treating my wife and daughter. But they made me feel things again. A few weeks after I started taking them, I realized that I was noticing colors. I hadn’t done that for years. It wasn’t that I couldn’t see colors when I was depressed, but they didn’t mean anything.

Antidepressants aren’t a panacea. They don’t work for everyone. But there are treatments that can help. The way to defeat depression is to do something that changes the way the brain is functioning. For some people, the work of therapy can do that. For others, it’s medication. For still others, exercise. The key is to get to someone who understands the disease, and who can help you find what will work for your brain.

My point here is that when we’re talking about depression, we need to realize that most of the time, no one is at fault. People don’t suffer from depression because they did something wrong, or because they’re weak, or because they’re flawed. People don’t suffer from depression because their friends and family are inadequate. Depression is a disease – a treatable, chronic disease. It needs to be recognized, and it needs to be treated.

In my case, my depression wasn’t caused by my wife and daughter. It wasn’t their fault, and it wasn’t my fault. No amount of support, love, and appreciation could have helped, because the nature of my depression meant that I couldn’t see those things. The only thing that anyone could have done for me is recognized that I was suffering from depression, and pushed me to get treatment sooner.

If someone you know is suffering from depression, then they need help. But the help they need isn’t any amount of love or appreciation. It isn’t instilling any kind of hope, because depression kills hope in your brain. The thing that you can do to help is to help them get the treatment that they need.