Category Archives: Good Math

Not quite Basics: The Logician's Idea of Calculus

In yesterday's basics post, I alluded to the second kind of calculus – the thing that computer scientists like me call a calculus. Several people have asked me to explain what our kind of calculus is.

In the worlds of computer science and logic, calculus isn’t a particular thing:
it’s a kind of thing. A calculus is a sort of logician’s automaton: a purely
symbolic system with a set of rules for performing transformations on
any valid string of symbols. The classic example is lambda calculus,
which I’ve written about before, but there are numerous other calculi.
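To make that concrete, here's a minimal sketch – my own illustration in Haskell, not anything from the full post – of one of the simplest calculi around: the SKI combinator calculus, whose entire behavior is three symbolic rewriting rules.

```haskell
-- A "calculus" in the logician's sense: terms are just symbols, and the
-- only activity is rewriting them according to three fixed rules.
data Term = S | K | I | App Term Term
  deriving (Eq, Show)

-- One rewriting step, applied at the outermost position where a rule fits.
step :: Term -> Maybe Term
step (App I x)                 = Just x                        -- I x     -> x
step (App (App K x) _)         = Just x                        -- K x y   -> x
step (App (App (App S f) g) x) = Just (App (App f x) (App g x))
                                                               -- S f g x -> f x (g x)
step (App f x) = case step f of                -- otherwise, rewrite inside
  Just f' -> Just (App f' x)
  Nothing -> fmap (App f) (step x)
step _ = Nothing                               -- bare S, K, I: nothing to do

-- Rewrite until no rule applies. (Like many calculi, this one permits
-- non-terminating rewrite sequences, so this can diverge for some terms.)
normalize :: Term -> Term
normalize t = maybe t normalize (step t)
```

For example, `normalize (App (App (App S K) K) I)` rewrites `S K K I` down to `I` – no meaning, no arithmetic, just rule-following over strings of symbols.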

Continue reading

Basics: Calculus

Calculus is one of the things that’s considered terrifying by most people. In fact, I’m sure a lot of people will consider me insane for trying to write a “basics” post about something like calculus. But I’m not going to try to teach you calculus – I’m just going to try to explain very roughly what it means and what it’s for.

There are actually two different things that we call calculus – but most people are only aware of one of them. There’s the standard pairing of differential and integral calculus; and then there’s what we computer science geeks call a calculus. In this post, I’m only going to talk about the standard one; the computer science kind of calculus I’ll write about some other time.

Continue reading

Basics: Limits

One of the fundamental branches of modern math – differential and integral calculus – is based on the concept of limits. In some ways, limits are a very intuitive concept – but the formalism of limits can be extremely confusing to many people.

Limits are basically a tool that allows us to get a handle on certain kinds
of equations or series that involve some kind of infinity, or some kind of value that is almost defined. The informal idea is very simple; the formalism is also pretty simple, but it’s often obscured by so much jargon that it’s hard to relate it to the intuition.
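As a quick illustration of that intuition (a toy example of my own, not from the post): the function sin(x)/x is undefined at x = 0, but as x shrinks toward 0 its values settle arbitrarily close to 1 – which is exactly what "the limit as x approaches 0 is 1" asserts.

```haskell
-- sin x / x is undefined at x = 0 (division by zero), but the limit
-- as x -> 0 exists and equals 1.
f :: Double -> Double
f x = sin x / x

-- Sample f at a sequence of points approaching 0 and watch the values
-- home in on 1.
approach :: [Double]
approach = [f x | x <- [0.1, 0.01, 0.001, 1.0e-4, 1.0e-5, 1.0e-6]]
```

Each successive sample lands closer to 1; the formal definition just pins down what "closer" and "eventually" mean.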

Continue reading

Basics: Algebra

While I was writing the vectors post, when I commented about how math geeks always build algebras around things, I realized that I hadn’t yet written a basics post explaining what we mean by algebra. And since it isn’t really what most people think it is, it’s definitely worth taking the time to look at.

Algebra is the mathematical study of a particular kind of structure: a structure created by taking a set of (usually numeric) values, and combining it with some operations that operate on values of that set.
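Here's a tiny concrete example of that kind of structure – a sketch of my own, using standard mod-4 arithmetic: the carrier set is the integers mod 4, and the operation is addition mod 4.

```haskell
-- An algebra in the sense above: a set of values (the integers mod 4)
-- combined with an operation on them (addition mod 4).
newtype Z4 = Z4 Int
  deriving (Eq, Show)

-- Smart constructor: keep every value inside the carrier set {0,1,2,3}.
mkZ4 :: Int -> Z4
mkZ4 n = Z4 (n `mod` 4)

-- The operation: add, then wrap around.
addZ4 :: Z4 -> Z4 -> Z4
addZ4 (Z4 a) (Z4 b) = mkZ4 (a + b)
```

What makes this an *algebra* rather than just a set is the laws the operation obeys – associativity, an identity element (`mkZ4 0`), and inverses – which make this particular structure a group, one of the most common kinds of algebra.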

Continue reading

Basics: Vectors, the Other Dimensional Number

There’s another way of working with number-like things that have multiple dimensions in math, which is very different from the complex number family: vectors. Vectors are much more intuitive to most people than the complex numbers, which are built using the problematic number i.

A vector is a simple thing: it’s a number with a direction. A car can be going 20mph north – 20mph north is a vector quantity. A 1 kilogram object experiences a force of 9.8 newtons straight down – 9.8N down is a vector quantity.
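A minimal sketch of that idea in Haskell (the type and names here are my own illustration, not from the post): a 2D vector represented by its components, with the two operations the examples above rely on.

```haskell
-- A 2D vector: a magnitude-and-direction quantity, stored as components.
data Vec2 = Vec2 { vx :: Double, vy :: Double }
  deriving (Eq, Show)

-- Vectors add component-wise: two forces acting on the same object
-- combine into one net force.
add :: Vec2 -> Vec2 -> Vec2
add (Vec2 a b) (Vec2 c d) = Vec2 (a + c) (b + d)

-- The magnitude is the "20mph" part of "20mph north": the vector's length.
magnitude :: Vec2 -> Double
magnitude (Vec2 a b) = sqrt (a * a + b * b)
```

Note how equal-and-opposite vectors cancel: `add (Vec2 1 2) (Vec2 (-1) (-2))` has magnitude zero – the net force on a balanced object.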

Continue reading

Rectangular Programming for Warped Minds

In light of the recent posts and discussions about multidimensional
numbers, today’s pathological language is Recurse, a two-dimensional language – sort of like Befunge, but one that I find more interesting in its own peculiar little
way. It’s actually a function-oriented two-dimensional language where every
function is rectangular.

Continue reading

Basics: Multidimensional Numbers

When we think of numbers, our intuitive sense is to think of them in terms of
quantity: counting, measuring, or comparing quantities. And that’s a good intuition for real numbers. But when you start working with more advanced math,
you find out that those numbers – the real numbers – are just a part of the picture. There’s more to numbers than just quantity.

As soon as you start doing things like algebra, you start to realize that
there’s more to numbers than just the reals. The reals are limited – they exist
in one dimension. And that just isn’t enough.
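For a taste of a more-than-one-dimensional number (using the standard Data.Complex module; the example is mine, not the post's): a complex number has two components – a real part and an imaginary part – so it lives in a plane rather than on the one-dimensional real line.

```haskell
import Data.Complex

-- The number i: zero real part, unit imaginary part.
i :: Complex Double
i = 0 :+ 1

-- Squaring a complex number. Multiplying by i rotates a point 90 degrees
-- in the complex plane; doing it twice lands on -1, which no amount of
-- squaring a one-dimensional real number can do.
square :: Complex Double -> Complex Double
square z = z * z
```

That i² = -1 behavior is exactly the extra room that a second dimension buys you.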

Continue reading

Basics: The Halting Problem

Many people would probably say that things like computability and the halting
problem aren’t basics. But I disagree: many of our basic intuitions about numbers and
the things that we can do with them are actually deeply connected with the limits of
computation. This connection of intuition with computation is an extremely important
one, and so I think people should have at least a passing familiarity with it.

In addition to that, one of the recent trends in crappy arguments from creationists is to try to invoke ideas about computation in misleading ways – but if you’re familiar with what the terms they’re using really mean, you can see right through their
silly arguments.

And finally, it really isn’t that difficult to understand the basic idea.
Really getting it in all of its details is a bit harder, but to grasp the basic idea that there are limits to computation, and to get a sense of just how amazingly common uncomputable things are, you don’t need to understand it in depth.
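One standard way to make "amazingly common" precise – a textbook counting argument, not something spelled out in this excerpt – is to compare how many programs there are with how many functions there are:

```latex
% Every program is a finite string over a finite alphabet, so the set of
% all programs is countable:
\left|\mathrm{Programs}\right| = \aleph_0 .
% But by Cantor's diagonal argument, the set of functions from the
% naturals to \{0,1\} is uncountable:
\left|\{0,1\}^{\mathbb{N}}\right| = 2^{\aleph_0} > \aleph_0 .
```

Since each program computes at most one such function, all but countably many of those functions have no program at all: almost every function is uncomputable.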

Continue reading

Basics: Real Numbers

What are the real numbers?

Before I go into detail, I need to say up front that I hate the term
real number. It implies that other kinds of numbers are not real,
which is silly, annoying, and frustrating. But we’re pretty much stuck with it.

There are a few different ways of describing the real numbers. I’m going to take you through three of them: first, an informal, intuitive description; then an axiomatic definition; and finally, a constructive definition.

Continue reading

Basics: The Turing Machine (with an interpreter!)

As long as I’m doing all of these basics posts, I thought it would be worth
explaining just what a Turing machine is. I frequently talk about things
being Turing equivalent, and about effective computing systems, and similar things, which all assume you have some clue of what a Turing machine is. And as a bonus, I’m also going to give you a nifty little piece of Haskell source code that’s a very basic Turing machine interpreter. (It’s for a future entry in the Haskell posts, and it’s not entirely finished, but it does work!)

The Turing machine is a very simple kind of theoretical computing device. In
fact, it’s almost downright trivial. But according to everything that we know and understand about computation, this trivial device is capable of any computation that can be performed by any other computing device.

The basic idea of the Turing machine is very simple. It’s a machine that runs on
top of a tape, which is made up of a long series of little cells, each of which has a single character written on it. The machine is a read/write head that moves over the tape, and which can store a little bit of information. Each step, the
machine looks at the symbol on the cell under the tape head, and based on what
it sees there, and on whatever little bit of information it has stored, it decides what to do. The things that it can do are: change the information it has stored, write a new symbol onto the current tape cell, and move one cell left or right.

That’s really it. People who like to make computing sound impressive often have
very complicated explanations of it – but really, that’s all there is to it. The point of it was to be simple – and simple it certainly is. And yet, it can do
anything that’s computable.
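Here's a minimal sketch of that step rule in Haskell – my own toy version, not the interpreter from the post. The machine's "little bit of information" is its state, and each step is a lookup of (state, current symbol) in a transition table.

```haskell
import qualified Data.Map as M

data Move = L | R deriving (Eq, Show)

type State  = String
type Symbol = Char
-- For each (state, symbol under the head): new state, symbol to write, move.
type Rules  = M.Map (State, Symbol) (State, Symbol, Move)

-- The tape: cells left of the head (nearest first), the cell under the
-- head, and cells to the right. '_' stands for a blank cell.
data Tape = Tape [Symbol] Symbol [Symbol]
  deriving (Eq, Show)

moveHead :: Move -> Tape -> Tape
moveHead L (Tape (l:ls) c rs) = Tape ls l (c:rs)
moveHead L (Tape []     c rs) = Tape [] '_' (c:rs)   -- grow blank tape left
moveHead R (Tape ls c (r:rs)) = Tape (c:ls) r rs
moveHead R (Tape ls c [])     = Tape (c:ls) '_' []   -- grow blank tape right

-- Run until no rule applies (a crude halting condition for this sketch).
run :: Rules -> State -> Tape -> (State, Tape)
run rules st tape@(Tape ls c rs) =
  case M.lookup (st, c) rules of
    Nothing            -> (st, tape)
    Just (st', c', mv) -> run rules st' (moveHead mv (Tape ls c' rs))

-- Example machine: flip every bit, moving right until it hits a blank.
flipRules :: Rules
flipRules = M.fromList
  [ (("flip", '0'), ("flip", '1', R))
  , (("flip", '1'), ("flip", '0', R)) ]
```

Running `run flipRules "flip" (Tape [] '1' "01")` flips the tape 101 into 010 and then halts on the first blank – state, table lookup, write, move: that really is the whole machine.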

Continue reading