Vortex Math Returns!

Cranks never give up. That’s something that I’ve learned in my time writing this blog. It doesn’t matter how stupid an idea is. It doesn’t matter how obviously wrong, how profoundly ridiculous. No matter what, cranks will continue to push their ridiculous ideas.

One way that this manifests is that the comments on old posts never quite die. Years after I initially write a post, I still have people coming back and trying to share “new evidence” for their crankery. George Shollenberger, the hydrino cranks, the Brown’s gas cranks, the CTMU cranks, they’ve all come back years after a post with more of the same-old, same-old. Most of the time, I just ignore it. There’s nothing to be gained in just rehashing the same old nonsense. It’s certainly not going to convince the cranks, and it’s not going to be interesting to my less insane readers. But every once in a while, something that’s actually new and amusing comes along in those comments. Today I’ve got an example of that for you: one of the proponents of Marko Rodin’s “Vortex Math” has returned to tell us the great news!

I have linked Vortex Based Mathematics with Physics and can prove most physics using vortex based mathematics. I am writing an article call “Temporal Physics of Vortex Based Mathematics” here: http://www.vortexspace.org

This is a lovely thing, even without needing to actually look at his article. Just start at the very first line! He claims that he can “prove most of physics”.

Science doesn’t do proof.

What science does is make observations, and then based on those observations produce models of the universe. Then, using that model, it makes predictions, and compares those predictions with further observations. By doing that over and over again, we get better and better models of how the universe works. Science is never sure about anything – because all it can do is check how well the model works. It’s always possible that any model doesn’t describe how things actually work. But it gives us a good approximation, in a way that allows us to understand how things work. Or, not quite how things work, but how we can affect the world by our actions. Our model might not capture what’s really happening – but it’s got predictive power.

To give an example of this: our model of the universe says that the earth orbits the sun, which orbits the galactic core, which is moving through the universe. It’s possible that this is wrong. You can propose an alternative model in which the earth is the stationary center of the universe, and everything moves around it. As a model, it’s not very attractive, because to make it fit our observations, it requires a huge amount of complexity – it’s a far, far more complex model than our standard one, and it’s much harder to use to make accurate predictions. But it can be made to work, just as well as our standard one. It’s possible that that’s how the universe actually works. I don’t think any reasonable person actually believes that the universe works that way, but it’s possible that our entire model is wrong. Science can’t prove that our model is correct. It can just show that it’s the simplest model that matches our observations.

But Mr. Calhoun claims that he can prove physics. That claim shows that he has no idea of what science is, or what science means. And if he doesn’t understand something that simple, why should we trust him to understand any more?

Ah, but when we take a look at some of his writings… it’s a lovely pile of rubbish. Remember the mantra of this blog? The worst math is no math. Mr. Calhoun’s writing is a splendid example of this. He claims to be doing science, math, and mathematical proofs – but when you actually look at his writing, there’s not a speck of genuine math to be found!

Let’s start with a really quick reminder of what vortex math is. Take the sequence of doubling in natural numbers in base-10. 1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, …. If, for each of those numbers, you sum the digits until you get a single digit result, you get: 1, 2, 4, 8, 7, 5, 1, 2, 4, 8, 7, 5, … It turns into a repeated sequence, 1, 2, 4, 8, 7, 5, over and over again. You can do the same thing in the reverse direction, by halving: 1, 0.5, 0.25, 0.125, 0.0625, 0.03125, 0.015625, 0.0078125, where the digits sum to 1, 5, 7, 8, 4, 2, 1, 5, …
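This is easy to check for yourself. Here’s a two-minute Python sketch that computes those repeated digit sums for the doubling sequence:

def digit_root(n):
  # Repeatedly sum the base-10 digits until only a single digit remains.
  while n >= 10:
    n = sum(int(d) for d in str(n))
  return n

doublings = [2**i for i in range(12)]       # 1, 2, 4, 8, 16, 32, ...
print([digit_root(n) for n in doublings])   # [1, 2, 4, 8, 7, 5, 1, 2, 4, 8, 7, 5]

There’s nothing mysterious going on: repeatedly summing digits just computes a number’s remainder mod 9 (with 9 standing in for 0), so the “cycle” is nothing deeper than doubling modulo 9 – an artifact of writing numbers in base 10.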

According to Rodin, this demonstrates something profound. This is the heart of Vortex mathematics: this cycle in the numbers shows that there’s some kind of energy flow that is fundamental to the universe, based on this kind of repeating sequence.

So, how does Mr. Calhoun use this? He thinks that he can connect it to black holes and white holes:

Do not forget that we already learned that black holes suck in matter while “compressing” it; and, on the other side of the black hole is a white hole that then takes the same matter and spits it back out while “de-compressing” the matter. The “magnetic warp” video on Youtube shows the same torus shape Marko had illustrated in his “vortex based mathematics” video [see below]:

You can clearly see the vortex in the center of the torus magnets. This is made possible using two Ferrofluid Hele-Shaw Cells [Hele-Shaw effect]. Here are a few links about using ferrofluid hele-shaw cell to view magnetic fields:

http://en.wikipedia.org/wiki/Hele-Shaw_flow

http://www2.warwick.ac.uk/fac/cross_fac/iatl/ejournal/issues/volume2issue1/snyder/

Here is a quote from a Youtube user about the magnets:

“Walter Rawls, a? scientist who did a great deal of research with Albert Roy Davis, said that he believes at the center of every magnet there is a miniature black hole.”

I have not verified the above statement about Walter Rawls as of yet. However, the above images prove beyond doubt Marko’s torus universe mathematical geometry. Now lets take a look at Marko’s designs:

The pictures look kind-of-like this silly torus thing that Rodin likes to draw: therefore they prove beyond doubt that Rodin’s rubbish is correct! Wow, now that’s a mathematical proof!

It gets worse from there.

The next section is “The Physics of Time”.

If you looked at the Youtube videos of the true motion of the Earth through space you now know that we are literally falling into a black hole that is at the center of the galaxy. The motion of the Earth; all of the rotation and revolution, all of that together is caused by space-time. Time is acually the rate and pattern of the motion of matter as it moves through space. It is the fourth dimension. you have probably heard this if you have studied Einstien theories: “As an object moves faster the rate of its motion [or time] slows down”. Sounds like an oxymoron doesn’t it? Well it not so strange once you understand how the fabric of space-time relates to Vortex Based Mathematics.

Motion of the Earth

The planet Earth rotates approx every twenty-four hours. It makes a complete 360o rotation every twenty-four hours. That amount of time is the frequency of the rate of rotation.

Looking down from the north pole of the Earth, you will see that if we divide the sphere into 36 equal parts the sunrise would have to pass through all of the degrees of the sphere in order to make a complete cycle:

Remember the Earth is a “giant magnet” that is spinning. The electromagnetic field of this “giant magnet” is moving out of the north pole [which is really at the geographic south pole] and going to the south pole [which again is really at the geographic north pole]. This electromagnetic field is moving or spinning [see youtube video at top] according to a frequency or cycle.

I don’t know if you realize this, but matter can be compressed or expanded without it being destroyed. A black hole does not de-molecularize matter then in passing to the white hole reassemble it again. Nothing that is demolecularized can naturally be put back together again. If an object is destroyed then is it destroyed; there is no reassembly. Matter can be however, compressed and decompressed. As you probably know and have heard this before there is an huge amount of distance between the atoms in your body. Like the giant void of space and much like the distances between planets in our solar system; the atomic matter in our bodies is just as similar in the amount of space between each atom.

What fills the spaces between each atom? Well, Its space-time. It is the fabric of the inertia ether that all matter in space moves through. Spacetime or what I call “etherspace” is what I have come to realize as “the space in between the spaces”. This “etherspace” can be compressed and then decompressed. Etherspace can enable all of the matter in your body to be greatly compressed without your body being destroyed; and at the same time functioning as it normally should. The ether space then allows your body to be decompressed again; all the while functioning as it should.

It is the movement of spacetime or “ether space” that is causing the rotation and revolving of the planet we live on. It is also responsible for the motions of all of the bodies in space.

Magnets will, whether great or small, act as engines for etherspace. They pull in etherspace at the south pole and also pump out etherspace at the north pole of the magnet. All magnets do this; the great planet earth all the way to the little magnet that sticks to your refridgerator door. Vortex based mathematics prove all of this. I will show you.

As I stated earlier the Earth is a giant magnet and if we apply the Vortex Based Mathematics to the 10o degree spacings of this “giant magnet” lets see what happens. Now we are going to see the de-compression of space-time eminatiing from the true north pole of the giant magnet of the Earth. Let’s deploy a doubling circuit to the spacings of the planet. We will start at 0o and go all the way to 360o .

Calhoun certainly shows that he’s a worthy inheritor of the mantle of Rodin. Rodin’s entire rubbish is really based on taking a fun property of our particular base-10 numerical notation, and without any good reason, believing that it must be a profound fundamental property of the universe. Calhoun takes two arbitrary things: the 360 degree conventional angle measurement, and the 24 hour day, and likewise, without any good reason, without even any argument, believes that they are fundamental properties of the universe.

Where does the 24 hour day come from? I did a bit of research, and there are a couple of possible arguments. It appears to date back to the old empire of Egypt. The argument that I found most convincing is based on how the Egyptians counted on their hands. They did a lot of things in base-12, because using your thumb to point out the joints of the fingers on your hand, you can count to 12. The origin of our base-10 is based on using fingers to count; base-12 is similar, but based on a slightly different way of counting on your fingers. Using base-12, they decided to describe time in terms of counting periods of light and darkness: 12 bright periods, 12 dark ones. There’s nothing scientific or fundamental about it: it’s an arbitrary way of measuring time. The Greeks adopted it from the Egyptians; the Romans adopted it from the Greeks; and we adopted it from the Romans. There is no fundamental reason why it is the one true correct way of measuring time.

Similarly, the 360 degree system of angular measure is not the least bit fundamental. It dates back to the Babylonians. In writing, the Babylonians used a base-60 system, instead of our base-10. In their explorations of geometry, they observed that if you inscribed a hexagon inside of a circle, each of the segments of the hexagon was the same length as the radius of the circle. So they measured an angle in terms of which segment of the inscribed hexagon it crossed. Within those six segments, they divided them into sixty sections, because what else would people who use base-60 use? And then to subdivide those, they used 60 again. The 360 degree system is a random historical accident, not a profound truth.

I don’t want to get too far off track (or any farther off track than I already am), but: when you’re talking about angles, there actually is a fundamental measurement, called a radian. Whenever you do math using angles measured in degrees, you end up needing to introduce a conversion factor to turn your angle into radians.
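To give just one example: the clean identity \frac{d}{dx}\sin x = \cos x from calculus only holds when x is measured in radians. If you insist on working in degrees, the same derivative picks up the conversion factor:

 \frac{d}{dx}\sin(x^\circ) = \frac{\pi}{180}\cos(x^\circ)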

Anyway – this rubbish about the 24 hour day and 360 degree circle is what passes for math in Calhoun’s world. This is as close to math or to correctness as Calhoun gets.

What’s even worse is his babble about black holes and white holes.

Both black and white holes are theoretical predictions of relativity. The math involved is not simple: it’s based on Einstein’s field equations from general relativity:

 R_{\mu\nu} - \frac{1}{2}g_{\mu\nu}R + g_{\mu\nu}\Lambda = \frac{8\pi G}{c^4}T_{\mu\nu}

In this equation, the subscripted variables are all symmetric 4×4 tensors. Black and white holes are “solutions” to particular configurations of those tensors. This is not elementary math, not by a long-shot. But if you want to really talk about black and white holes, this is how you do it.

Translating from the math into prose is always a problem, because the prose is far less precise, and it’s inevitably misleading. No matter how well you think you understand based on the prose, you don’t understand the concept, because you haven’t been told enough, in a precise enough way, to actually understand it.

That said, the closest I can come is the following.

We’ll start with black holes. Black holes are much easier to understand: put enough mass into a small enough region of space, and you wind up with a boundary, called the event horizon, where anything that crosses that boundary, no matter what – even massless stuff like light – can never escape. We believe, based on careful analysis, that we’ve observed black holes in our universe. (Or rather, we’ve seen evidence that they exist; you can’t actually see a black hole, but you can see its effects.) We call a black hole a singularity, because nothing beyond the event horizon is visible – it looks like a hole in space. But it isn’t: it’s got a mass, which we can measure. Matter goes into a black hole, and crosses the event horizon. We can no longer see the matter. We can’t observe what happens to it once it crosses the horizon. But we know it’s still there, because we can observe the mass of the hole, and it increases as matter enters.

(It was pointed out to me on twitter that my explanation of the singularity is wrong. See what happens when you try to explain mathematical stuff non-mathematically?)

White holes are a much harder idea. We’ve never seen one. In fact, we don’t really think that they can exist in our universe. In concept, they’re the opposite of a black hole: they’re a region with a boundary that nothing can ever enter. In a black hole, once you cross the boundary, you can never escape; in a white hole, once something crosses the boundary going out, it can never re-enter. White holes only exist in a strange conceptual case, called an eternal black hole – that is, a black hole that has been there forever, which was never formed by gravitational collapse.

There are some folks who’ve written speculative work based on the solutions to the white hole field equations that suggest that our universe is the result of a white hole, inside of the event horizon of a black hole in an enclosing universe. But in this solution, the white hole exists for an infinitely small period of time: all of the matter in it ejects into a new space-time realm in an instant. There’s no actual evidence for this, beyond the fact that it’s an interesting way of interpreting a solution to the field equations.

All of this is a long-winded way of saying that when it comes to black holes, Calhoun is talking out his ass. A black hole is not one end of a tunnel that leads to a white hole. If you actually do the math, that doesn’t work. A black hole does not “compress” matter and pass it to a white hole which decompresses it. A black hole is just a huge clump of very dense matter; when something crosses the event horizon of a black hole, it just becomes part of that clump of matter.

His babble about magnetism is similar: we’ve got some very elegant field equations, called Maxwell’s equations, which describe how magnetic and electric fields work. It’s beautiful, if complex, mathematics. And they most definitely do not describe a magnet as something that “pumps etherspace from the south pole to the north pole”.
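For the record, here’s what the actual math looks like – Maxwell’s equations in their standard differential form:

 \nabla \cdot \mathbf{E} = \frac{\rho}{\epsilon_0} \qquad \nabla \cdot \mathbf{B} = 0 \qquad \nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t} \qquad \nabla \times \mathbf{B} = \mu_0\mathbf{J} + \mu_0\epsilon_0\frac{\partial \mathbf{E}}{\partial t}

In particular, \nabla \cdot \mathbf{B} = 0 says that magnetic fields have no sources or sinks anywhere – which is about as far as you can get from “pulling in etherspace at one pole and pumping it out at the other”.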

There’s no proof here. And there’s no math here. There’s nothing here but the midnight pot-fueled ramblings of a not particularly bright sci-fi fan, who took some wonderful stories, and believed that they were based on something true.

Basic Data Structures: Hash Tables

I’m in the mood for a couple of basics posts. As long-time readers might know, I love writing about data structures.

One of the most important and fundamental structures is a hashtable. In fact, a lot of modern programming languages have left hashtables behind, for reasons I’ll discuss later. But if you want to understand data structures and algorithmic complexity, hashtables are one of the essentials.

A hashtable is a structure for keeping a list of (key, value) pairs, where you can look up a value using the key that’s associated with it. This kind of structure is frequently called a map, an associative array, or a dictionary.

For an example, think of a phonebook. You’ve got a collection of pairs (name, phone-number) that make up the phonebook. When you use the phonebook, what you do is look for a person’s name, and then use it to get their phone number.

A hashtable is one specific kind of structure that does this. I like to describe data structures in terms of some sort of schema: what are the basic operations that the structure supports, and what performance characteristics does it have for those operations.

In those schematic terms, a hashtable is very simple. It’s a structure that maintains a mapping from keys to values. A hashtable really only needs two operations: put and get:

  1. put(key, value): add a mapping from key to value to the table. If there’s already a mapping for the key, then replace it.
  2. get(key): get the value associated with the key.

In a hashtable, both of those operations are extremely fast.

Let’s think for a moment about the basic idea of a key-value map, and what kind of performance we could get out of a couple of simple, naive ways of implementing it.

We’ve got a list of names and phone numbers. We want to know how long it’ll take to find a particular name. How quickly can we do it?

How long does that take, naively? It depends on how many keys and values there are, and what properties the keys have that we can take advantage of.

In the worst case, there’s nothing to help us: the only thing we can do is take the key we’re looking for, and compare it to every single key. If we have 10 keys, then on average, we’ll need about 5 steps before we find the key we’re looking for. If there are 100 keys, then it’ll take, on average, about 50 steps. If there are one million keys, then it’ll take an average of half a million steps before we can find the value.

If the keys are ordered – that is, if we can compare one key to another not just for equality, but we can ask which came first using a “less than or equal to” operator, then we can use binary search. With binary search, we can find an entry in a list of 10 elements in 4 steps. We can find an entry in a list of 1000 keys in 10 steps, or one in a list of one million keys in 20 steps.

With a hashtable, if things work right, in a table of 10 keys, it takes one step to find the key. 100 keys? 1 step. 1000 keys? 1 step. 1,000,000,000 keys? Still one step. That’s the point of a hashtable. It might be really hard to do something like generate a list of all of the keys – but if all you want to do is look things up, it’s lightning fast.

How can it do that? It’s a fairly simple trick: the hashtable trades space for time. A hashtable, under normal circumstances, uses a lot more space than most other ways of building a dictionary. It makes itself fast by using extra space in a clever way.

A hashtable creates a bunch of containers for (key, value) pairs called buckets. It creates many more buckets than the number of (key, value) pairs that it expects to store. When you want to insert a value into the table, it uses a special kind of function called a hash function on the key to decide which bucket to put the (key, value) into. When you want to look for the value associated with a key, it again uses the hash function on the key to find out which bucket to look in.

It’s easiest to understand by looking at some actual code. Here’s a simple, not at all realistic implementation of a hashtable in Python:

  class Hashtable(object):
    def __init__(self, hashfun, size):
      self._size = size
      self._hashfun = hashfun
      # One bucket (a list of (key, value) pairs) per slot.
      self._table = [[] for i in range(size)]

    def hash(self, key):
      # Map the key's hash code onto a bucket index.
      return self._hashfun(key) % self._size

    def put(self, key, value):
      # Replace any existing mapping for this key; otherwise add a new one.
      bucket = self._table[self.hash(key)]
      for i, (k, v) in enumerate(bucket):
        if k == key:
          bucket[i] = (key, value)
          return
      bucket.append((key, value))

    def get(self, key):
      # Only the single bucket that the key hashes to needs to be searched.
      for k, v in self._table[self.hash(key)]:
        if k == key:
          return v
      return None

If you’ve got a good hash function, and your hashtable is big enough, then each bucket will end up with no more than one value in it. So if you need to insert a value, you find an (empty) bucket using its hashcode, and dump it in: one step. If you need to find a value given its key, find the bucket using its hashcode, and return the value.
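For example, you could use the toy class above something like this (here I’m just using Python’s built-in hash as the hash function – any deterministic, integer-valued function of the key will do):

phonebook = Hashtable(hash, 64)
phonebook.put("alice", "555-1234")
phonebook.put("bob", "555-9876")
print(phonebook.get("alice"))   # prints 555-1234
print(phonebook.get("carol"))   # prints None - no such key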

There are two big problems with hashtables.

First, everything is dependent on the quality of your hash function. If your hash function maps a lot of values to the same bucket, then your performance is going to suck. In fact, in the worst case, it’s no better than just searching a randomly ordered list. Most of the time, you can come up with a hash function that does a pretty good job – but it’s a surprisingly tricky thing to get right.

Second, the table really needs to be big relative to the number of elements that you expect to have in the list. If you set up a hashtable with 40 buckets, and you end up with 80 values stored in it, your performance isn’t going to be very good. (In fact, it’ll be slightly worse than just using a binary search tree.)

So what makes a good hash function? There are a bunch of things to consider:

  1. The hash function must be deterministic: calling the hash on the same key value must always produce the same result. If you’re writing a python program like the one I used as an example above, and you use the values of the key object’s fields to compute the hash, then changing those fields will change the hashcode!
  2. The hash function needs to focus on the parts of the key that distinguish between different keys, not on their similarities. To give a simple example, in some versions of Java, the default hash function for objects is based on the address of the object in memory. All objects are stored in locations whose address is divisible by 4 – so the last two bits are always zero. If you did something simple like just take the address modulo the table size (and your table size happened to be a multiple of four), then all of the buckets whose numbers weren’t divisible by four would always be empty. That would be bad.
  3. The hash function needs to be uniform. That means that it needs to map roughly the same number of input values to each possible output value. To give you a sense of how important this is: I ran a test using 3125 randomly generated strings, using one really stupid hash function (xoring together the characters), and one really good one (djb2). I set up a small table, with 31 buckets, and inserted all of the values into it. With the xor hash function, there were several empty buckets, and the worst bucket had 625 values in it. With djb2, there were no empty buckets; the smallest bucket had 98 values, and the biggest one had 106.

So what’s a good hash function look like? Djb2, which I used in my test above, is based on integer arithmetic using the string values. It’s an interesting case, because no one is really entirely sure of exactly why it works better than similar functions, but be that as it may, we know that in practice, it works really well. It was invented by a guy named Dan Bernstein, who used to be a genius poster in comp.lang.c, when that was a big deal. Here’s djb2 in Python:

def djb2(key):
  # Start from a magic constant: 5381 is a prime that happens to work well.
  hash = 5381
  for c in key:
    # Multiply by 33, then mix in the next character's code.
    hash = (hash * 33) + ord(c)
  return hash

What the heck is it doing? Why 5381? Because it’s prime, and it works pretty well. Why 33? No clue.
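If you want to see the difference for yourself, here’s a rough sketch of the kind of test I described above, using a deliberately weak xor-of-characters hash for comparison. The exact numbers will vary with the random strings; the point is to compare how evenly the two functions spread the keys across the buckets:

import random, string

def xor_hash(key):
  # A weak hash: just xor the character codes together.
  h = 0
  for c in key:
    h ^= ord(c)
  return h

def bucket_sizes(hashfun, keys, nbuckets=31):
  # Count how many keys land in each bucket.
  sizes = [0] * nbuckets
  for k in keys:
    sizes[hashfun(k) % nbuckets] += 1
  return sizes

keys = [''.join(random.choice(string.ascii_letters) for _ in range(10))
        for _ in range(3125)]
print(sorted(bucket_sizes(xor_hash, keys)))   # the spread from the weak hash
print(sorted(bucket_sizes(djb2, keys)))       # the spread from djb2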

Towards the beginning of this post, I alluded to the fact that hashtables have, at least to some degree, fallen out of vogue. (For example, in the C++ standard library, the ordered std::map type is implemented using a balanced tree – typically a red-black tree – rather than a hashtable.) Why?

In practice, it’s rarely any faster to really use a hashtable than to use a balanced binary tree like a red-black tree. Balanced trees have better worst-case bounds, and they’re not as sensitive to the properties of the hash function. And they make it really easy to iterate over all of the keys in a collection in a predictable order, which makes them great for debugging purposes.

Of course, hash tables still get used, constantly. The most commonly used data structures in Java code include, without a doubt, the HashMap and HashSet, which are both built on hashtables. They’re used constantly. You usually don’t have to implement them yourself, and the system libraries generally provide a good default hash function for strings, so you’re mostly safe.

There’s also a lot of really fascinating research into designing ideal hash functions for various applications. If you know what your data will look like in advance, you can even build something called a perfect hash function, which guarantees no collisions. But that’s a subject for another time.

A Note to the Trolls Re: Comment Policies

Since yesterday’s post, I’ve been deluged with trolls who want to post comments about their views of sexual harassment. I’ve been deleting them as they come in, and that has, in turn, led to lots of complaints about how horribly unfair and mean I am.

I’ve been doing this blogging thing for a long time, and I’ve watched as a number of sites that I used to really enjoy have wound up becoming worthless, due to comment trolls. There are tons of trolls out there, and they’re more than happy to devote a lot of time and energy to trolling and derailing. When I started my blog, I had a very open commenting policy: I rarely if ever deleted comments, and only did so when they were explicitly abusive towards other commenters. Since then, I’ve learned that in the current internet culture, that doesn’t work. The only way to maintain a halfway decent comment forum is to moderate aggressively. So I’ve become much more aggressive about removing the stuff that I believe to be trolling.

Here’s the problem: Trolls aren’t interested in real discussions. They’re interested in derailing discussions that they don’t like. I’m not interested in hosting flame wars, misogynistic rants, or other forms of trolling. In case you haven’t noticed, this is my blog. I’ll do what I feel is appropriate to maintain a non-abusive, non-troll-infested comment section. I am under no obligation to post your rants, and I am under no obligation to provide you with a list of bullet points of what my exact standards are. If I judge a comment to be inappropriate, I’ll delete it. If you don’t like that, you’re welcome to find another forum, or create your own. It’s a big internet out there: there’s bound to be a place where your arguments are welcome. But that’s not this place. If I’m over-aggressive in my moderation, the only one who’ll be hurt by that will be me, because I will have wrecked the comment forum on my blog. That’s a risk I’m prepared to take.

Let me add one additional comment about the particular trolls who’ve been coming to visit lately: I’ve learned, over time, a thing or two about the demographics of the people who visit this blog. As much as I’d prefer it to be otherwise, the frequent commenters on this blog are overwhelmingly male – over the history of the blog, of commenters where gender can be identified, the comments are over 90% male. Similarly, in my career as an engineer, the population of my coworkers has been very, very skewed: the engineering population at my workplaces has varied, but I’ve never worked anywhere where the population of engineers and engineering managers was less than 80% male.

But according to my recent trollish commenters, I’m supposed to believe that suddenly that population has changed, dramatically. Suddenly, every single comment is being posted by a woman who has never seen any male-on-female sexual harassment, but who was a personal witness of multiple female engineering managers who sexually harassed their male employees without any repercussions. It’s particularly amusing, because those rants about the evil sexually-harassing female managers are frequently followed by rants about how the problem is the difference in sexual drive between men and women. Funny how women just aren’t as sexually motivated as men, and that’s the cause of the problem, but there are all of these evil female managers sexually harassing their employees despite their inferior female sexual drive, isn’t it?

Um, guys?! You’re not fooling me. You’re not fooling anyone. I’m not obligated to provide you with a forum for your lies. So go away, find someplace else. Or feel free to keep submitting your comments, but know that they’re going to wind up in the bit-bucket.

It's easy to not harass women

For many of us in the science blogging scene, yesterday was a pretty lousy day. We learned that a guy who many of us had known for a long time, who we’d trusted, who we considered a friend, had been using his job to sexually harass women with sleazy propositions.

This led to a lot of discussion and debate on twitter. I spoke up to say that what bothered me about the whole thing was that it’s easy to not harass people.

This has led to rather a lot of hate mail. But it’s also led to some genuine questions and discussions. Since it can be hard to have detailed discussions on twitter, I thought that I’d take a moment here, expand on what I meant, and answer some of the questions.

To start: it really is extremely easy to not be a harasser. Really. The key thing to consider is: when is it appropriate to discuss sex? In general, it’s downright trivial: if you’re not in private with a person with whom you’re in a sexual relationship, then don’t. But in particular, here are a couple of specific examples of this principle:

  • Is there any way in which you are part of a supervisor/supervisee or mentor/mentee relationship? Then do not discuss or engage in sexual behaviors of any kind.
  • In a social situation, are you explicitly on a date or other romantic encounter? Do both people agree that it’s a romantic thing? If not, then do not discuss or engage in sexual behaviors.
  • In a mutually understood romantic situation, has your partner expressed any discomfort? If so, then immediately stop discussing or engaging in sexual behaviors.
  • In any social situation, if a participant expresses discomfort, stop engaging in what is causing the discomfort.

Like I said: this is not hard.

To touch on specifics of various recent incidents:

  • You do not meet with someone to discuss work, and tell them about your sex drive.
  • You do not touch a student’s ass.
  • You do not talk to coworkers about your dick.
  • You don’t proposition your coworkers.
  • You don’t try to sneak a glance down your coworker’s shirt.
  • You don’t comment on how hot your officemate looks in that sweater.
  • You do not tell your students that you thought about them while you were masturbating.

Seriously! Is any of this difficult? Should this require any explanation to anyone with two brain cells to rub together?

But, many of my correspondents asked, what about grey areas?

I don’t believe that there are significant grey areas here. If you’re not in an explicit sexual relationship with someone, then don’t talk to them about sex. In fact, if you’re in any work related situation at all, no matter who you’re with, it’s not appropriate to discuss sex.

But what about cases where you didn’t mean anything sexual, like when you complimented your coworker on her outfit, and she accused you of harassing her?

This scenario is, largely, a fraud.

Lots of people legitimately worry about it, because they’ve heard so much about this in the media, in politics, in news. The thing is, the reason that you hear all of this is because of people who are deliberately promoting it as part of a socio-political agenda. People who want to excuse or normalize this kind of behavior want to create the illusion of blurred lines.

In reality, harassers know that they’re harassing. They know that they’re making inappropriate sexual gestures. But they don’t want to pay the consequences. So they pretend that they didn’t know that what they were doing was wrong. And they try to convince other folks that you’re at risk too! You don’t actually have to be doing anything wrong, and you could have your life wrecked by some crazy bitch!

Consider for a moment, a few examples of how a scenario could play out.

Scenario one: woman officemate comes to work, dressed much fancier than usual. Male coworker says “Nice outfit, why are you all dressed up today?”. Anyone really think that this is going to get the male coworker into trouble?

Scenario two: woman worker wears a nice outfit to work. Male coworker says “Nice outfit”. Woman looks uncomfortable. Man sees this, and either apologizes, or makes note not to do this again, because it made her uncomfortable. Does anyone really honestly believe that this, occurring once, will lead to a formal accusation of harassment with consequences?

Scenario three: woman officemate comes to work dressed fancier than usual. Male coworker says nice outfit. Woman acts uncomfortable. Man keeps commenting on her clothes. Woman asks him to stop. Next day, woman comes to work, man comments that she’s not dressed so hot today. Anyone think that it’s not clear that the guy is behaving inappropriately?

Scenario four: woman worker wears a nice outfit to work. Male coworker says “Nice outfit, wrowr”, makes motions like he’s pawing at her. Anyone really think that there’s anything ambiguous here, or is it clear that the guy is harassing her? And does anyone really, honestly believe that if the woman complains, this harasser will not say “But I just complimented her outfit, she’s being oversensitive!”?

Here’s the hard truths about the reality of sexual harassment:

  • Do you know a professional woman? If so, she’s been sexually harassed at one time or another. Probably way more than once.
  • The guy(s) who harassed her knew that he was harassing her.
  • The guy(s) who harassed her doesn’t think that he really did anything wrong.
  • There are a lot of people out there who believe that men are entitled to behave this way.
  • In order to avoid consequences for their behavior, many men will go to amazing lengths to deny responsibility.

The reality is: this isn’t hard. There’s nothing difficult about not harassing people. Men who harass women know that they’re harassing women. The only hard part of any of this is that the rest of us – especially the men who don’t harass women – need to acknowledge this, stop ignoring it, stop making excuses for the harassers, and stand up and speak up when we see it happening. That’s the only way that things will ever change.

We can’t make exceptions for our friends. I’m really upset about the trouble that my friend is in. I feel bad for him. I feel bad for his family. I’m sad that he’s probably going to lose his job over this. But the fact is, he did something reprehensible, and he needs to face the consequences for that. The fact that I’ve known him for a long time, liked him, considered him a friend? That just makes it more important that I be willing to stand up, and say: This was wrong. This was inexcusable. This cannot stand without consequences.

Combining Non-Disjoint Probabilities

In my previous post on probability, I talked about how you need to be careful about covering cases. To understand what I mean by that, it’s good to see some examples.

And we can do that while also introducing an important concept which I haven’t discussed yet. I’ve frequently talked about independence, but equally important is the idea of disjointness.

Two events are independent when they have no ability to influence one another. So two coin flips are independent. Two events are disjoint when they can’t possibly occur together. Flipping a coin, the event “flipped a head” and the event “flipped a tail” are disjoint: if you flipped a head, you can’t have flipped a tail, and vice versa.

So let’s think about something abstract for a moment. Let’s suppose that we’ve got two events, A and B. We know that the probability of A is 1/3 and the probability of B is also 1/3. What’s the probability of A or B?

Naively, we could say that it’s P(A) + P(B). But that’s not necessarily true. It depends on whether or not the two events are disjoint.

Suppose that it turns out that the probability space we’re working in is rolling a six sided die. There are three basic scenarios that we could have:

  1. Scenario 1: A is the event “rolled 1 or 2”, and B is “rolled 3 or 4”. That is, A and B are disjoint.
  2. Scenario 2: A is the event “rolled 1 or 2”, and B is “rolled 2 or 3”. A and B are different, but they overlap.
  3. Scenario 3: A is the event “rolled 1 or 2”, and B is the event “rolled 1 or 2”. A and B are really just different names for the same event.

In scenario one, we’ve got disjoint events. So P(A or B) is P(A) + P(B). One way of checking that that makes sense is to look at how the probabilities of the events work out. P(A) is 1/3. P(B) is 1/3. The probability of neither A nor B – that is, the probability of rolling either 5 or 6 – is 1/3. The sum is 1, as it should be.

But suppose that we looked at scenario 2. If we made a mistake and added them as if they were disjoint, how would things add up? P(A) is 1/3. P(B) is 1/3. P(neither A nor B) = P(4 or 5 or 6) = 1/2. The total of these three probabilities is 1/3 + 1/3 + 1/2 = 7/6. So just from that addition, we can see that there’s a problem, and we did something wrong.

If we know that A and B overlap, then we need to do something a bit more complicated to combine probabilities. The general equation is:

 P(A \cup B) = P(A) + P(B) - P(A \cap B)

Using that equation, we’d get the right result. P(A) = 1/3; P(B) = 1/3; P(A and B) = 1/6. So the probability of A or B is 1/3 + 1/3 - 1/6 = 1/2. And P(neither A nor B) = P(4 or 5 or 6) = 1/2. The total is 1, as it should be.
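If you want to double-check that by brute force, enumerating the six outcomes directly gives the same answers:

from fractions import Fraction

outcomes = set(range(1, 7))   # a fair six-sided die
A = {1, 2}                    # "rolled 1 or 2"
B = {2, 3}                    # "rolled 2 or 3"

def prob(event):
  # Every outcome is equally likely, so probability is just relative size.
  return Fraction(len(event), len(outcomes))

print(prob(A | B))                       # 1/2
print(prob(A) + prob(B) - prob(A & B))   # 1/2 - inclusion-exclusion agrees
print(prob(outcomes - (A | B)))          # 1/2 - neither A nor B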

From here, we’ll finally start moving on to some more interesting stuff. Next post, I’ll look at how to use our probability axioms to analyze the probability of winning a game of craps. That will take us through a bunch of applications of the basic rules, as well as an interesting example of working through a limit case.

And then it’s on to combinatorics, which is the main tool that we’ll use for figuring out how many cases there are, and what they are, which as we’ve seen is an essential skill for probability.

Weekend Recipes: Chicken Wings with Thai Chile Sauce

In my house, chicken wings are kind of a big deal. My wife doesn’t know how to cook. Her cooking is really limited to two dishes: barbecued chicken wings, and grilled cheese. But her chicken wings are phenomenal. We’ve been married for 20 years, and I haven’t found a wing recipe that had the potential to rival hers.

Until now.

I decided to try making a homemade Thai sweet chili sauce, and use that on the wings. And the results were fantastic. Still not quite up there with her wings, but I think this recipe has the potential to match it. This batch of wings was the first experiment with this recipe, and there were a couple of things that I think should be changed. I wet-brined the wings, and they ended up not crisping up as well as I would have liked. So next time, I’ll dry-brine. I also crowded them a bit too much on the pan.

When you read the recipe, it might seem like the wings are being cooked for a long time. They are, but that’s a good thing. Wings have a lot of fat and a lot of gelatin – they stand up to the heat really well, and after a long cooking time they just get tender and their flavor concentrates. They don’t get tough or stringy or anything nasty, like a chicken breast would if it were cooked this long.

The Sauce

The sauce is a very traditional Thai sweet chili sauce. It’s a simple sauce, but it’s very versatile. It’s loaded with wonderful flavors that go incredibly well with poultry or seafood. Seriously delicious stuff.

  • 1 cup sugar.
  • 1/2 cup rice vinegar.
  • 1 1/2 cup water.
  • 1 teaspoon salt.
  • 2 tablespoons fish sauce.
  • Finely diced fresh red chili pepper (quantity to taste)
  • 5 large cloves garlic, finely minced.
  • 1/2 teaspoon minced ginger.
  • 1 tablespoon of cornstarch, mixed with water.
  1. Put the sugar, salt, vinegar, water, and fish sauce into a pot, and bring to a boil.
  2. Add the garlic, ginger, and chili pepper. Lower the heat, and let it simmer for a few minutes.
  3. Leave the sauce sitting for about an hour, to let the flavors of the spices infuse into the sauce.
  4. Taste it. If it’s not spicy enough, add more chili pepper, and simmer for another minute or two.
  5. Bring back to a boil. Remove from heat, and mix in the cornstarch slurry. Then return to the heat, and simmer until the starch is cooked and the sauce thickens.

The sauce is done.

The wings

  • About an hour before you want to start cooking, you need to dry-brine the wings. Spread the wings on a baking sheet. Make a 50-50 mixture of salt and sugar, and sprinkle over the wings. Coat both sides. Let the wings sit on the sheet for an hour. After they’ve sat in the salt for an hour, rinse them under cold water, and pat them dry.
  • Lightly oil a baking sheet. Put the wings on the sheet. You don’t want them to be too close together – they’ll brown much better if they have a bit of space on the sides.
  • Put the baking sheet full of wings into a 350 degree oven. After 30 minutes, turn them over, and bake for another 30 minutes.
  • Now it’s time to start with the sauce! With a basting brush, cover the top side with the sweet chile sauce. Then turn the wings over, and coat the other side. Once they’re basted with the sauce, it’s back into the oven for another 30 minutes.
  • Again, baste both sides, and then back into the oven for another 30 minutes with the second side up.
  • Take the wings out, turn the oven up to 450. Baste the wings, and then put them back in until they turn nice and brown on top. Then turn them, baste them again, and brown the other side.
  • Time to eat!

Correction, Sigma Algebras, and Mass functions

So, I messed up a bit in the previous post. Let me get that out of the way before we move forward!

In measure theory, you aren’t just working with sets. You’re working with something called σ-algebras. It’s a very important distinction.

The problem is, our intuition about sets doesn’t always work. Sets, as defined formally, are really pretty subtle. We expect certain things to be true, because they make sense. But in fact, they are not implied by the definition of sets. A σ-algebra is, essentially, a well-behaved collection of sets – a collection whose behavior matches our usual expectations.

To be formal, a sigma algebra over a set S is a collection Σ of subsets of S such that:

  1. Σ contains S itself.
  2. Σ is closed under set complement.
  3. Σ is closed under countable union.

The reason why you need to make this restriction is, ultimately, because of the axiom of choice. Using the axiom of choice, you can create sets which are unmeasurable. They’re clearly subsets of a measurable set, and supersets of other measurable sets – and yet, they are, themselves, not measurable. This leads to things like the Banach-Tarski paradox: you can take a measurable set, divide it into non-measurable subsets, and then combine those non-measurable subsets back into measurable sets whose size seem to make no sense. You can take a sphere the size of a baseball, slice it into pieces, and then re-assemble those pieces into a sphere the size of the earth, without stretching them!

These non-measurable sets blow away our expectations about how things should behave. The restriction to σ algebras is just a way of saying that we need to be working in a space where all sets are measurable. When we’re looking at measure theory (or probability theory, where we’re building on measures), we need to exclude non-measurable sets. If we don’t, we’re seriously up a creek without a paddle. If we allowed non-measurable sets, then the probability theory we’re building would be inconsistent, and that’s the kiss of death in mathematics.

Ok. So, with that out of the way, how do we actually use Kolmogorov’s axioms? It all comes down to the idea of a sample space. You need to start with an experiment that you’re going to observe. For that experiment, there are a set of possible outcomes. The set of all possible outcomes is the sample space.

Here’s where, sadly, even axiomatized probability theory gets a bit handwavy. Given the sample space, you can define the structure of the sample space with a function, called the probability mass function, f, which maps each possible event in the sample space to a probability. To be a valid mass function for a sample space S, it’s got to have the following properties:

  1. For each event e in S, 0 ≤ f(e) ≤ 1.
  2. The sum of the probabilities over the sample space must be 1: \sum_{e \in S} f(e) = 1.

So we wind up with a sort of circularity: in order to describe the probability of events, we need to start by knowing the probability of events. In fact, this isn’t really a problem: we’re talking about taking something that we observe in the real world, and mapping it into the abstract space of math. Whenever we do that, we need to take our observations of the real world and create an approximation as a mathematical model.

The point of probability theory isn’t to do that primitive mapping. In general, we already understand how rolling a single die works. We know how it should behave, and we know how and why its actual behavior can vary from our expectation. What we really want to know is how the probabilities work out when many events combine.

We don’t need any special theory to figure out what the probability of rolling a 3 on a six-sided die is: that’s easy, and it’s obvious: it’s 1 in 6. But what’s the probability of winning a game of craps?

If all days of the year 2001 are equally likely, then we don’t need anything fancy to work out the probability that someone born in 2001 has July 21st as their birthday. It’s easy: 1 in 365. But if I’ve got a group of 35 people, what’s the probability of two of them sharing the same birthday?

Both of those questions start with the assignment of a probability mass function, which is trivial. But they involve combining the probabilities given by those mass functions, and use them with Kolmogorov’s axioms to figure out the probabilities of the complicated events.
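To give a taste of where this goes, the birthday question needs nothing but that trivial mass function plus the combination rules. Here’s a quick sketch of the calculation (ignoring leap years):

def prob_shared_birthday(n, days=365):
  # Probability that no two of n people share a birthday: the first person
  # can have any birthday, the second any of the remaining days-1, and so on.
  p_all_distinct = 1.0
  for i in range(n):
    p_all_distinct *= (days - i) / days
  return 1 - p_all_distinct

print(prob_shared_birthday(35))   # roughly 0.81

With 35 people, the probability of a shared birthday already comes out to about 81% – exactly the kind of non-obvious result that makes it worth formalizing how probabilities combine.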

Kolmogorov's Axioms of Probability

The way that I’ve talked about probability so far is mostly informal. That’s the way that probability theory was treated for a long time. You defined probability spaces over collections of equal probability sets. You combined probability spaces by combining their events into other kinds of equally probable events.

The problem with that should be obvious: it’s circular. You want to define the probability of events; to do that, you need to start with equally probable events, which means that on some level, you already know the probabilities. If you don’t know the probabilities, you can’t talk about them. The reality is somewhat worse than that, because this way of looking at things completely falls apart when you start trying to think about infinite probability spaces!

So what can you do?

The answer is to reformulate probability. Mathematicians had known about this kind of problem for a very long time, but they mostly just ignored it: probability wasn’t considered a terribly interesting field.

Then, along came Kolmogorov – the same brilliant guy whose theory of computational complexity is so fascinating to me! Kolmogorov created a new formulation of probability theory. Instead of starting with a space of equally probable discrete events, you start with a measure space.

Before we can look at how Kolmogorov reformulated probability (the Kolmogorov axioms), we need to look at just what a measure space is.

A measure space is just a set with a measure function. So let X be a set. A measure μ on X is a function from subsets of X to real numbers, \mu: 2^X \rightarrow \mathbb{R}, with the following properties:

  • Measures are non-negative: \forall x \subseteq X: \mu(x) \ge 0
  • The measure of the empty set is always 0: \mu(\emptyset) = 0
  • The measure of the union of disjoint sets is the sum of their individual measures: x \cap y = \emptyset \Rightarrow \mu(x \cup y) = \mu(x) + \mu(y)

So the idea is pretty simple: a measure space is just a way of defining the size of a subset in a consistent way.

To work with probability, you need a measure space where the measure of the entire set is 1. With that idea in mind, we can put together a proper, formal definition of a probability space that will really allow us to work with, and to combine probabilities in a rigorous way.

Like our original version, a probability space has a set of events, called its event space. We’ll use F to represent the set of all possible events, and e to represent an event in that set.

There are three fundamental axioms of probability, which are going to look really similar to the three axioms of a measure space:

  1. Basic measure: the probability of any event is a non-negative real number: \forall e \in F: P(e) \ge 0.
  2. Unit measure: the probability of the unit event Ω is 1: P(\Omega) = 1. (Ω is called the unit event, and is the union of all possible events.) Alternatively, the probability of no event occurring is 0: P(\emptyset) = 0.
  3. Combination: for any two disjoint events or sets of events e and f, the probability of e or f is P(e) + P(f): \forall e, f \in F: e \cap f = \emptyset \Rightarrow P(e \cup f) = P(e) + P(f). This can be extended to any countable sequence of unions of pairwise disjoint events.

This is very similar to the informal version we used earlier. But as we’ll see later, this simple formulation from measure theory will give us a lot of additional power.

It’s worth taking a moment to point out two implications of these axioms. (In fact, I’ve seen some presentations that treat some of these as additional axioms, but they’re provable from the first three.)

  • Monotonicity: if e \subseteq f, then P(e) \le P(f).
  • Upper Bound: for any event or set of events e, 0 \le P(e) \le 1.
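Both follow in one line from the axioms. If e \subseteq f, then f splits into the two disjoint events e and f \setminus e, so the combination axiom gives P(f) = P(e) + P(f \setminus e) \ge P(e). The upper bound works the same way: e and \Omega \setminus e are disjoint and their union is Ω, so P(e) + P(\Omega \setminus e) = P(\Omega) = 1, which forces P(e) \le 1.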

The brilliance of Kolmogorov was realizing that these rules were everything you need to work out any probability you want – in both finite and infinite spaces. We’ll see that there’s a lot of complexity in the combinatorics of probability, but it will all always ultimately come back to these three rules.

Infinite Cantor Crankery

I recently got yet another email from a Cantor crank.

Sadly, it’s not a particularly interesting letter. It contains an argument that I’ve seen more times than I can count. But I realized that I don’t think I’ve ever written about this particular boneheaded nonsense!

I’m going to paraphrase the argument: the original is written in broken English and is hard to follow.

  • Cantor’s diagonalization creates a magical number (“Cantor’s number”) based on an infinitely long table.
  • Each digit of Cantor’s number is taken from one row of the table: the Nth digit is produced by the Nth row of the table.
  • This means that the Nth digit only exists after processing N rows of the table.
  • Suppose it takes time t to get the value of a digit from a row of the table.
  • Therefore, for any natural number N, it takes N*t time to get the first N digits of Cantor’s number.
  • Any finite prefix of Cantor’s number is a rational number, which is clearly in the table.
  • The full Cantor’s number doesn’t exist until an infinite number of steps has been completed, at time ∞*t.
  • Therefore Cantor’s number never exists. Only finite prefixes of it exist, and they are all rational numbers.

The problem with this is quite simple: Cantor’s proof doesn’t create a number; it identifies a number.

It might take an infinite amount of time to figure out which number we’re talking about – but that doesn’t matter. The number, like all numbers, exists, independent of our ability to compute it. Once you accept the rules of real numbers as a mathematical framework, then all of the numbers, every possible one, whether we can identify it, or describe it, or write it down – they all exist. What a mechanism like Cantor’s diagonalization does is just give us a way of identifying a particular number that we’re interested in. But that number exists, whether we describe it or identify it.

The easiest way to show the problem here is to think of other irrational numbers. No irrational number can ever be written down completely. We know that there’s got to be some number which, multiplied by itself, equals 2. But we can’t actually write down all of the digits of that number. We can write down progressively better approximations, but we’ll never actually write the square root of two. By the argument above against Cantor’s number, we could equally well show that the square root of two doesn’t exist. If we need to create the number by writing down all of its digits, then the square root of two will never get created! Nor will any other irrational number. If you insist on writing numbers down in decimal form, then neither will many fractions. But in math, we don’t create numbers: we describe numbers that already exist.

But we could weasel around that, and create an alternative formulation of mathematics in which all numbers must be writeable in some finite form. We wouldn’t need to say that we can create numbers, but we could constrain our definitions to get rid of the nasty numbers that make things confusing. We could make a reasonable argument that those problematic real numbers don’t really exist – that they’re an artifact of a flaw in our logical definition of real numbers. (In fact, some mathematicians like Greg Chaitin have actually made that argument semi-seriously.)

By doing that, irrational numbers could be defined out of existence, because they can’t be written down. In essence, that’s what my correspondent is proposing: that the definition of real numbers is broken, and that the problem with Cantor’s proof is that it’s based on that faulty definition. (I don’t think that he’d agree that that’s what he’s arguing – but either numbers exist that can’t be written in a finite amount of time, or they don’t. If they do, then his argument is worthless.)

You certainly can argue that the only numbers that should exist are numbers that can be written down. If you do that, there are two main paths. There’s the theory of computable numbers (which allows you to keep π and the square roots), and there’s the theory of rational numbers (which discards everything that can’t be written as a finite fraction). There are interesting theories that build on either of those two approaches. In both, Cantor’s argument doesn’t apply, because in both, you’ve restricted the set of numbers to be a countable set.

But that doesn’t say anything about the theory of real numbers, which is what Cantor’s proof is talking about. In the real numbers, numbers that can’t be written down in any form do exist. Numbers like the number produced by Cantor’s diagonalization definitely do. The infinite time argument is a load of rubbish because it’s based on the faulty concept that Cantor’s number doesn’t exist until we create it.

The interesting thing about this argument, to me, is its selectivity. To my correspondent, the existence of an infinitely long table isn’t a problem. He doesn’t think that there’s anything wrong with the idea of an infinite process creating an infinite table containing a mapping between the natural numbers and the real numbers. He just has a problem with the infinite process of traversing that table. Which is really pretty silly when you think about it.

Recipe: Sous Vide Braised Pork Belly with Chevre Polenta

I really outdid myself with tonight’s dinner. It was a total ad-lib – no recipe written in advance, just randomly trying to make something good. It turned out so good that I need to write down what I did, so that I can make it again!

Part 1: the pork

  • 2 1/2 pounds pork belly. I’m picky about pork; if I’m going to eat it, I want it to be good. I didn’t grow up eating pork. My family didn’t keep kosher, but we didn’t bring pork into the house. To this day, I don’t like most pork. Grocery store pork is, typically, bland, greasy, and generally nasty stuff. But the first real pork that I ate was at Momofuku in Manhattan. It was Berkshire pork, from a farm in upstate NY. That was delicious. Since then I’ve experimented, and I really think that nothing compares to fresh Berkshire. It costs a lot more than grocery store pork, but it’s worth it. I order it direct from Flying Pig Farm.
  • 4 cloves garlic.
  • 1 teaspoon salt.
  • 1 1/2 teaspoons fennel pollen.
  • 1 teaspoon dried rosemary.
  • 1 tablespoons olive oil.
  • pepper
  • 1/4 cup salt.
  • 1/4 cup sugar.
  1. Prepare the pork belly: trim off the skin, and any egregiously extra fat from the skin side.
  2. Put the garlic, fennel pollen, rosemary, 1 teaspoon of salt, and the olive oil into a mortar and pestle, and crush them to a paste.
  3. Coat the pork with the herb paste.
  4. Add fresh-ground black pepper to the pork.
  5. Mix together 1/4 cup each of sugar and salt, and coat the pork with it.
  6. Put the pork into the fridge overnight.
  7. In the morning, remove the pork from the fridge, and discard any liquids that were drawn out by the salt.
  8. Seal the pork in a sous vide bag, and cook at 190 degrees for 5 hours. (If you don’t have a sous vide machine, you could probably do it covered in a 200 degree oven. You’ll probably want to add a bit of water.)
  9. Take out the pork, and separate the meat from the liquid that’s collected in the bags. (Do NOT discard it; that’s pure flavor!) Put both into the fridge for a couple of hours to cool.
  10. When it’s cool, the fat that rendered out of the pork will have solidified – remove it, and discard it. (Or keep it for something else.)
  11. Cut the pork into 2 inch thick chunks.
  12. In a smoking hot cast iron pan, brown the pork chunks on all sides.
  13. Add in the reserved liquids, along with 1/4 cup of port wine. Reduce until it forms a glaze over the pork. Remove the pork to a plate – it’s done!

Part 2: the Polenta

  • 1 cup polenta. I use very coarse polenta – I like my polenta to have some texture. (My friend Anoop teases me, insisting that I’m making grits.)
  • 4 cups chicken stock.
  • 1 cup water.
  • 1 teaspoon salt.
  • 1 tablespoon butter.
  • 2 ounces chevre goat cheese.
  1. Put the salt, water, and chicken stock into a pan, and bring to a boil.
  2. Reduce the heat to medium low, and stir in the polenta.
  3. Cook the polenta on medium low to low heat for 1 1/2 hours.
  4. Remove from heat, add in the butter, and stir until it’s all melted and blended in.
  5. Crumble in the goat cheese, and stir it through.

Part 3: the assembly.

  1. Put a big pile of the polenta in the middle of a plate.
  2. Put a couple of chunks of the glazed pork onto the polenta.
  3. Put sauteed asparagus around the outside.