Category Archives: Good Math

Pathological Programming in a Circular Queue

Today, I’m going to show you a very simple, very goofy little language called “SCEQL”, which stands for “slow and clean esoteric queue language”. It’s based on nothing but a circular queue
of numbers, manipulated with just 8 commands. It’s not one of the more exciting languages, but it can
be a lot of fun to figure out how to make the circular queue do what you want it to.

Continue reading

Stepping Back a Moment

The topology posts have been extremely abstract lately, and from some of the questions
I’ve received, I think it’s a good idea to take a moment and step back, to recall just
what we’re talking about. In particular, I keep saying “a topological space is just a set
with some structure” in one form or another, but I don’t think I’ve adequately maintained
the *intuition* of what that means. The goal of today’s post is to try to bring back
at least some of the intuition.

Continue reading

Tail Recursion: Iteration in Haskell

In Haskell, there are no looping constructs. Instead, there are two alternatives: list iteration constructs (like foldl, which we’ve seen before), and tail recursion. Let me say, up front, that in Haskell, if you find yourself writing any iteration code on a list or tree-like structure, you should always look in the libraries; odds are, there’s some generic function in there that can be adapted for your use. But there are always cases where you need to write something like a loop for yourself, and tail recursion is the way to do it in Haskell.

Tail recursion is a kind of recursion where the recursive call is the very last
thing in the computation of the function. The value of tail recursion is that in a tail-recursive call, the caller does nothing except pass up the value that’s returned
by the callee; and that, in turn, means that you don’t need to return to the caller
at all! If you think of it in terms of primitive machine-level code, in a tail-recursive call, you can use a direct branch instruction instead of a branch-to-subroutine; the tail-recursive call does *not* need to create a new stack frame. It can just reuse the caller’s frame.
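
To make that concrete, here’s a minimal sketch of a tail-recursive “loop” in Haskell. The names (`sumLoop`, `go`) are mine, just for illustration; in real code you’d reach for something like `foldl'` from `Data.List` instead:

```haskell
-- A tail-recursive "loop" that sums a list using an accumulator.
-- The recursive call to 'go' is the very last thing the function does,
-- so nothing is left for the caller to do after it returns.
sumLoop :: [Integer] -> Integer
sumLoop = go 0
  where
    go acc []     = acc
    go acc (x:xs) = go (acc + x) xs

main :: IO ()
main = print (sumLoop [1..10])   -- prints 55
```

(One Haskell-specific caveat: because of laziness, the accumulator here builds up unevaluated thunks; a strict accumulator, or `foldl'`, avoids that. That’s a separate issue from tail recursion itself.)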

Continue reading

Big to Small, Small to Big: Topological Properties through Sheaves (part 2)

Continuing from where we left off yesterday…
Yesterday, I managed to describe what a *presheaf* was. Today, I’m going to continue on that line, and get to what a full sheaf is.
A sheaf is a presheaf with two additional properties. The more interesting of those two properties is something called the *gluing axiom*. Remember when I was talking about manifolds, and described how you could describe manifolds by [*gluing*][glue] other manifolds together? The gluing axiom is the formal underpinnings of that gluing operation: it’s the one that justifies *why* gluing manifolds together works.
[glue]: http://scienceblogs.com/goodmath/2006/11/better_glue_for_manifolds.php

Continue reading

Big to Small, Small to Big: Topological Properties through Sheaves (part 1)

Suppose we’ve got a topological space **T**. So far, in our discussion of topology, we’ve tended to focus either very narrowly on local properties of **T** (as in manifolds, where locally, the space appears euclidean), or on global properties of **T**. We haven’t done much to *connect* those two views. How do we get from local properties to global properties?

One of the tools for doing that is a sheaf (plural “sheaves”). A sheaf is a very general kind of structure that provides ways of mapping or relating local information about a topological space to global information about that space. There are many different kinds of sheaves; rather than being exhaustive, I’ll pretty much stick to a simple sheaf of functions on the topological space. Sheaves show up *all over* the place, in everything from abstract algebra to algebraic geometry to number theory to analysis to differential calculus – pretty much every major abstract area of mathematics uses sheaves.

Continue reading

Groupoids and Strange Definitions

In my last topology post, I started talking about the fundamental group of a topological space. What makes the fundamental group useful is that it tells you interesting things about the structure
of the space in terms of paths that circle around and end where they started. For example, if you’re looking at a basic torus, you can go in loops staying in a euclidean-looking region; you can loop around the donut hole; or you can loop around the donut-body.
Of course, in the comments, an astute reader (John Armstrong) leapt ahead of me, and mentioned the fundamental group*oid* of a topological space, and its connection with category theory. That’s
supposed to be the topic of this post.

Continue reading

Pathological Stack Hell: Underload

Our pathological language this week is [Underload][underload]. Underload is, in some ways, similar to [Muriel][muriel], only it’s a much more sensible language. In fact, there are actually serious
practical languages called *concatenative languages* based on the same idea as Underload: [Joy][joy] and [Factor][factor] are two examples.
[underload]: http://esoteric.voxelperfect.net/wiki/Underload
[muriel]: http://scienceblogs.com/goodmath/2006/11/friday_pathological_programmin_5.php
[joy]: http://www.latrobe.edu.au/philosophy/phimvt/joy.html
[factor]: http://www.factorcode.org/
Underload is a remarkably simple language. It’s stack based, like Forth, so all of the data is stored on the stack. Its only means of control is constructing a program on the stack and executing it; and the only data that it can manipulate is lists of characters.
The commands in Underload are:
* `~` – swap the top two elements on the stack.
* `:` – duplicate the top element on the stack.
* `!` – discard the top element on the stack.
* `*` – concatenate the two elements on top of the stack into a single list.
* `a` – take the element on top of the stack, and wrap it in “()”s.
* `(…)` – push the contents of the parens on the stack as a single stack element.
* `S` – print the element on top of the stack.
* `^` – take the element on top of the stack, and append it to the currently executing program
string.
As usual, we’ll start with a “Hello world” program.
(Hello world!)S
Simple, right?
Suppose we wanted to add two numbers. The easiest way to handle numbers in Underload is unary notation. So suppose we want to add 3 + 5. First, we need to put 3 and 5 on the stack. We can represent 3 as the list `xxx` and 5 as the list `xxxxx`. To push those onto the stack, we need
to wrap them in parens; and to add them, we just concatenate the two lists. So the program to add 3 + 5 and print the result is:
(xxx)(xxxxx)*S
As you’d probably expect, an Underload quine is extremely simple:
(:aSS):aSS
I’ll walk through that just to make it clear. First, the list `:aSS` is pushed onto the stack, so writing the stack as a list of comma-separated elements, the stack is “`[:aSS]`”. Then we execute “`:`”, which duplicates the top of the stack, leaving “`[:aSS,:aSS]`”. Then we execute “`a`”, which wraps the element on top of the stack in parens, giving us “`[(:aSS),:aSS]`”. Now there are two “`S`” commands, which output the two top stack elements; so the output is `(:aSS):aSS` – exactly the original program.
A program to generate the fibonacci series is also pretty simple:
(()(*))(~:^:S*a~^a~!~*~:(/)S^):^
It looks like a nightmare, but it’s really not bad. It starts with “()” (0) and “(*)” (1) on the stack. The rest of the program basically copies the larger number, adds the smaller and larger (giving the next element of the sequence), leaving two fibonacci numbers on the stack. It duplicates the larger, prints it, and then goes on to the next iteration. The body of the program is `(~:^:S*a~^a~!~*~:(/)S^)`, which, at the very beginning (the `~:^`), duplicates itself.
There’s an online interpreter for [Underload][interp] which does single-stepping. I recommend popping over there, and watching the fibonacci series program execute. It’s much more interesting to watch it in action than it would be to read my detailed description of it!
Now, is this Turing complete? It’s a little hard to see. But the author of Underload took care of that question, by showing how to compile [Unlambda][unlambda] into Underload.
* Unlambda “`s`” translates to Underload “`((:)~*(~)*a(~*(~^)*))`”
* Unlambda “`k`” translates to Underload “`(a(!)~*)`”
* Unlambda “`i`” translates to Underload “`()`”.
* Unlambda’s apply operator, the backquote “`` ` ``”, translates to Underload “`~^`”.
* Unlambda “`.x`” translates to Underload “`((x)S)`”.
[interp]: http://esoteric.voxelperfect.net/files/underload/underload.html
[unlambda]: http://scienceblogs.com/goodmath/2006/08/friday_pathological_programmin_3.php

Nullity – the Nonsense Number

Tons of folks have been writing to me this morning about the BBC story about an idiot math teacher who claims to have solved the problem of dividing by zero. This is an absolutely *infuriating* story, which does an excellent job of demonstrating what total innumerate idiots reporters are.

What this guy has done is invent a new number, which he calls “nullity”. This number is not on the number line, can’t be compared to other numbers by less than or greater than, etc. In other words, he’s given a name to the basic mathematical concept of “undefined”, and proclaimed that this is somehow a solution to a deep and important problem.

The thing is, there is no problem. We understand what division by zero means. You can’t do it. There is no number that meaningfully expresses the concept of what it means to divide by zero.

You can assign a name to the concept of “not a number”, which is what this bozo has done; but that’s not new! The standard floating point representation used by computer hardware manufacturers already includes exactly that: a value called NaN (Not a Number). NaN works as you should expect a non-number to work: it can’t be compared to anything, and no arithmetic operation works on it – because comparisons and arithmetic only work on numbers.
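
If you want to see that behavior for yourself, here’s a quick check of IEEE NaN behavior using Haskell’s `Double` type (just a small illustrative snippet of mine, nothing specific to this story):

```haskell
-- IEEE floating point NaN: it isn't equal to anything (not even itself),
-- it isn't ordered with respect to anything, and arithmetic involving it
-- just produces NaN again.
nan :: Double
nan = 0 / 0

main :: IO ()
main = do
  print (isNaN nan)    -- True
  print (nan == nan)   -- False
  print (nan < 1.0)    -- False
  print (nan > 1.0)    -- False
  print (nan + 1.0)    -- NaN
```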

What he’s done is take projective geometry – which (as I mentioned in the Steiner post a few weeks back) gives you some useful ways of using infinity – add the concept of a NaN value called nullity, and redefine the multiplication and division operators so that they can produce nullity.

What good is it? Well, the crank behind it claims two things:

  1. That currently, dividing by zero on a computer causes problems because division by zero is undefined. But if computers adopted nullity, then division by zero errors could be gotten rid of, because they’ll produce nullity. Except of course that modern floating point hardware already does have a NaN value, and it doesn’t help with the kind of problem he’s talking about! Because the result is not a number; whether you call it undefined, or you define it as a value that’s outside the set of real numbers, it doesn’t help – because the calculation you’re performing can’t produce a meaningful result! He says if your pacemaker’s software divides by zero, you’ll die, because it will stop working; how will returning nullity instead of signalling a divide by zero error make it work?
  2. That it provides a well-defined value for 0<sup>0</sup>, which he claims is a 1200-year-old problem. Except that again, it’s not a problem. It’s a meaningless expression! If you’re raising 0 to the 0th power in a calculation, then something’s wrong with that calculation. Modifying basic math so that the answer is defined as NaN doesn’t help that.

Basically, he’s defined a non-solution to a non-problem. And by teaching it to his students, he’s doing them a great disservice. They’re going to leave his class believing that he’s a great genius who’s solved a supposed fundamental problem of math, and believing in this silly nullity thing as a valid mathematical concept.

It’s not like there isn’t already enough stuff in basic math for kids to learn; there’s no excuse for taking advantage of a passive audience to shove this nonsense down their throats as an exercise in self-aggrandizement.

To make matters worse, this idiot is a computer science professor! No one who’s studied CS should be able to get away with believing that re-inventing the concept of NaN is something noteworthy or profound; and no one who’s studied CS should think that defining meaningless values can somehow magically make invalid computations produce meaningful results. I’m ashamed for my field.

A Tree Grows Up in Haskell: Building a Dictionary Type

Last time around, I walked through the implementation of
a very simple binary search tree. The implementation that I showed
was reasonable, if not great, but it has two major limitations. First,
it uses equality testing for the search, so the implementation is only really suitable for use as a set; and second, because it’s such
a trivial tree implementation, it’s very easy for the tree to become
highly unbalanced, resulting in poor performance.

Today, we’ll look at how to extend the implementation so that our BST
becomes useful as a key/value dictionary type. We’ll take two different
approaches to that, each of which demonstrates some common Haskell
techniques: one based on using higher-order functions, and one based on
using tuples. Balancing the tree will be a task for another day.
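
To give a rough idea of the tuple-based direction, here’s a minimal sketch of mine (not the code from the post; the names `Dict`, `insert`, and `find` are made up for illustration):

```haskell
-- A binary search tree used as a key/value dictionary: each node carries
-- a (key, value) pair, and the tree is ordered by key.
data Dict k v = Leaf
              | Node (Dict k v) (k, v) (Dict k v)

insert :: Ord k => k -> v -> Dict k v -> Dict k v
insert k v Leaf = Node Leaf (k, v) Leaf
insert k v (Node l (k', v') r)
  | k < k'    = Node (insert k v l) (k', v') r
  | k > k'    = Node l (k', v') (insert k v r)
  | otherwise = Node l (k, v) r        -- same key: replace the value

find :: Ord k => k -> Dict k v -> Maybe v
find _ Leaf = Nothing
find k (Node l (k', v) r)
  | k < k'    = find k l
  | k > k'    = find k r
  | otherwise = Just v
```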

As an aside: in a real Haskell program, you would probably never write a simple type like this. Haskell has a great standard library, with plenty of types that do a much better job than this. This
is really only for tutorial purposes.

Continue reading

Building Datatypes in Haskell (part 1)

For this post, I’m doing a bit of an experiment. Haskell includes a “literate” syntax mode. In literate mode, any text that doesn’t start with a “>” sign is considered a comment. So this entire post can be copied and used as a source file; just save it with a name ending in `”.lhs”`. If this doesn’t work properly, please post something in the comments to let me know. It’s more work for me to write the posts this way, so if it’s not working properly, I’d rather not waste the effort. I’ve tested it in both Hugs and GHCi, and everything works, but who knows what will happen after a pass through MovableType!
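
For anyone who hasn’t seen literate Haskell before, here’s a tiny standalone example of what it looks like (this snippet is just an illustration of mine, separate from the post itself; save it as, say, `Example.lhs`):

```haskell
Any line without a ">" at the start, like this sentence, is treated as
commentary and ignored by the compiler.

> module Example where
>
> greeting :: String
> greeting = "Hello from a literate Haskell source file"
```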

Like most modern programming languages, Haskell has excellent support for building user-defined data types. In fact, even though Haskell is very much not object-oriented, most Haskell programs end up being centered around the design and implementation of data structures.

So today, I’m going to start looking at how you implement data types in Haskell. What I’m
going to do is start by showing you how to implement a simple binary search tree. I’ll start with
a very simple version, and then build on that.

Continue reading