While I was researching yesterday’s post on Archimedes’ integration, one of the things I read reminded me of one of the stranger things about Greek and earlier math: they had a notion that the only valid fractions were *unit* fractions, that is, fractions whose numerator is 1. A fraction written with a numerator larger than one was considered *wrong*. Even today, if you look in a lot of math books, they use the term “vulgar fraction” for non-unit fractions.
Obviously, there *are* fractions other than *1/n*. The way that they represented them is now known as *Egyptian fractions*. An Egyptian fraction is expressed as the sum of a finite set of unit fractions. So, for example, instead of writing the vulgar fraction 2/3, the Greeks would write
“1/2 + 1/6”.
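To make that concrete, here’s a minimal sketch of one way to compute such a decomposition, using the greedy method usually credited to Fibonacci rather than anything the Egyptians or Greeks actually did. The code, and the name `egyptian`, are mine, purely for illustration; it’s written in Scheme since that’s what the other code on this page uses.

;; Greedy unit-fraction decomposition (Fibonacci's method), purely
;; illustrative. Assumes an exact rational with 0 < frac < 1.
(define (egyptian frac)
  (if (= (numerator frac) 1)
      (list frac)
      ;; take the largest unit fraction that fits, then recurse on the rest
      (let ((unit (/ 1 (ceiling (/ (denominator frac) (numerator frac))))))
        (cons unit (egyptian (- frac unit))))))

;; (egyptian 2/3)  => (1/2 1/6)
;; (egyptian 7/15) => (1/3 1/8 1/120)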
Groups and Topology
I’m going to start moving the topology posts in the direction of algebraic topology, which is the part of topology that I’m most interested in. There’s lots more that can be said about homology, homotopy, manifolds, etc., and I may come back to it at some point, but for now, I feel like moving on.
There’s some fun stuff in algebraic topology which comes from the intersection between group theory
and topology. To be able to talk about that, you need the concept of a *topological group*.
First, I’ll run through a very quick review of groups. I wrote a series of posts on group theory for GM/BM when it was at Blogger; if you’re interested in details, you might want to [pop over there and take a skim](http://goodmath.blogspot.com/2006/06/group-theory-index.html). There are also some excellent articles on group theory [at Wolfram’s MathWorld](http://mathworld.wolfram.com/GroupTheory.html) and [Wikipedia](http://en.wikipedia.org/wiki/Group_theory). Then I’ll show you the beginnings of how group theory, abstract algebra, and topology can intersect.
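Just so the target is clear (this summary is mine, not a quote from those older posts): a group is a set with an associative operation, an identity, and inverses; a topological group is a group that is also a topological space, in which multiplication and inversion are continuous. In symbols:

% Summary definitions, stated here as a preview, not quoted from the post:
% a group is a set G with an operation \cdot such that
\forall a,b,c \in G:\ (a \cdot b) \cdot c = a \cdot (b \cdot c), \qquad
\exists e\ \forall a:\ e \cdot a = a \cdot e = a, \qquad
\forall a\ \exists a^{-1}:\ a \cdot a^{-1} = a^{-1} \cdot a = e.
% A topological group is a group G carrying a topology in which
% (a,b) \mapsto a \cdot b and a \mapsto a^{-1} are continuous maps.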
Friday Pathological Programming: Bad Actors in Cruise
Today’s pathological language is a bit of a treat for me. I’m going to show you a twisted,
annoying, and thoroughly pointless language that *I* created.
The language is based on a model of computation called [Actors](http://en.wikipedia.org/wiki/Actor_model), originally proposed by Carl Hewitt and developed extensively by Professor Gul Agha of UIUC. There’ve been some really nice languages built using ideas from Actors, but this is *not* one of them. And that’s exactly where the name comes from. What name comes to mind when you think of *really bad* actors with delusions of adequacy? For me, it’s “Cruise”.
You can get the code for Cruise on Google code, project “Cruise”, or you can grab a bundle containing the code and a compiled binary in a jarfile [here](http://scienceblogs.com/goodmath/upload/2006/11/cruise.zip). To run it, just “java -jar Cruise.jar cruise-program-file”. Just so you know, the code *sucks*. It’s something I threw together in my spare time, so it’s sloppy, overcomplicated, probably buggy, and slow as a snail on tranquilizers.
Building Interesting Shapes by Gluing
I thought it would be fun to do a couple of strange shapes to show you the interesting things that you can do with a bit of glue in topology. There are a couple of standard *strange* manifolds, and I’m going to walk through some simple gluing constructions of them.
Better Glue for Manifolds
After my [initial post about manifolds](http://scienceblogs.com/goodmath/2006/10/manifolds_and_glue.php), I wanted to say a bit more about gluing.
You can form manifolds by gluing manifolds together with an arbitrarily small overlap – as little as a single point of contact between them. The example that I showed, forming a spherical shell out of two circles, used a minimal overlap. If all you want to do is show that the topology you form is a manifold, that kind of trivial gluing is sufficient, and it’s often the easiest way to splice things together.
But there are a lot of applications of manifolds where you need more than that. So today, I’m going to show you how to do proper gluing in a way that preserves things like metric properties of manifolds when they’re glued together.
Programming in Color (fixed)
Today’s programming pathology is programs as art.
Start with a really simple stack based language, add in a crazy way of encoding instructions using color, and you end up with a masterpiece of beautiful insanity. It’s not too exciting from a purely computational point of view, but the programs are really great to look at. Yes, it’s a pathological language with truly beautiful source code!
*(The original version of this post had some trouble because I linked to the original images in place, which the owner of the Piet webpage had blocked. I didn’t realize he didn’t want links. I’ve since downloaded the images, converted them to JPEGs, and posted them here. I initially thought that the problem was the image format, which is what I originally said in this explanation. It’s not the image format, but the linking; converting the files to JPEG and uploading them removed the links that caused the problem.)*
The "C is Efficient" Language Fallacy
I came across an article yesterday about programming languages, which hit on one of my major peeves, so I can’t resist responding. The article is at greythumb.org, and it’s called Programmer’s rant: what should and should not be added to C/C++. It’s a variation on the extremely common belief that C and C++ are the best languages to use when you need code to run fast. They’re not. They’re good at things that need to get very close to the hardware – not in the efficiency sense, but in the sense of needing to be able to fairly directly munge the stack, address specific hardware registers, etc. But they are *dreadful* languages for writing real scientific and/or numerical code.
To quote the part of the article that set me off:
> First of all, these fears are nonsense. C and C++ are never going to disappear. Why? Because there are classes of programming problems that are still and will always be CPU bound and there is still no language as fast as C or C++ for these problems. I highly doubt that there ever will be.
>
> I’m talking about things like: scientific number crunching, game/simulation physics, raytracing, real-time 3d graphics, audio processing, codec implementation, high-speed network packet routing, evolutionary computation (my personal favorite :), and of course implementing all these high-level languages’ runtimes. There are also problems like OS and hardware driver implementation where you need something “close to the metal” that can interact closely with and even embed assembly language. C is basically shorthand for assembler, which is why it’s the preferred language for that kind of thing.
>
> For these tasks, premature optimization at the level of language and framework choice is not evil. In some cases it’s a requirement. I predict that at least some of these tasks will still be done in C, C++, or some language with similar characteristics 50 years from now. To give you an idea of just how much faster C can be for tasks like this, I have found that evolvable instruction set based evolutionary computation is almost twice as fast when competently implemented in C than a similar competent implementation in Java.
Here’s the problem. C and C++ suck rocks as languages for numerical computing. They are not the fastest, not by a long shot. In fact, their fundamental design makes it pretty much impossible to produce really good, efficient code from C/C++. There’s a good reason that Fortran is still the language of choice for real, intense scientific applications that require the absolute best performance that can be drawn out of our machines – applications like computational fluid dynamics.
Making real applications run really fast is something that’s done with the help of a compiler. Modern architectures have reached the point where people can’t code effectively in assembler anymore – switching the order of two independent instructions can have a dramatic impact on performance in a modern machine, and the constraints that you need to optimize for are just more complicated than people can generally deal with.
So for modern systems, writing an efficient program is sort of a partnership. The human needs to carefully choose algorithms – the machine can’t possibly do that. And the machine needs to carefully compute instruction ordering, pipeline constraints, memory fetch delays, etc. The two together can build really fast systems. But the two parts aren’t independent: the human needs to express the algorithm in a way that allows the compiler to understand it well enough to be able to really optimize it.
And that’s where C and C++ fall down. C and C++ are strongly pointer-based languages. The real semantics of almost anything interesting end up involving pretty much unrestricted pointers. In C and C++, there’s no such thing as an array – there are just pointers, plus a subscripting shorthand for pointer arithmetic and indirection (`x[n]` in C/C++ is the same thing as `*(x+n)`).
That pointer-based nature means that in a C or C++ program, it’s very hard for a compiler to figure out what things are independent. It comes down to a problem called alias detection: identifying when two variables *might* be referencing the same location. Alias detection becomes a horrific mess in the presence of unrestricted pointers. Let me show you an example:
for (int i = 0; i < 20000; i++) {
  for (int j = 0; j < 20000; j++) {
    x[i][j] = y[i-2][j+1] * y[i+1][j-2];
  }
}
If you look at that loop, it can be parallelized or vectorized without any problem if and only if the array pointed to by `x` and the array pointed to by `y` are completely distinct with no overlap. But there’s no way to write code in C or C++ that guarantees that. If it were Fortran 77, you could easily check whether they were distinct. If it were Fortran 90, you could check whether `x` or `y` were declared as possible pointer targets, and the programmer could make it obvious that they didn’t overlap if they wanted to. But you can’t do that in C or C++. (And Fortran isn’t even the best – an experimental language called Sisal from Lawrence Livermore Labs used to be able to beat Fortran by around 20% on typical code!)
That example involves parallelization of code, but alias-related problems aren’t just an issue for parallelism; it’s just easiest to show an example involving parallelism. The aliasing issues in C and C++ have a very direct impact on real code. Let me tell you about a concrete example of this, and then I’ll stop ranting. About six years ago, I was working on a project where I needed to implement a rather messy algorithm to compute something called the “longest common subsequence” (LCS) of two arrays. The standard algorithm for computing LCS uses something called dynamic programming; it’s **O**(n²) time and **O**(n²) space. There’s an algorithm that was designed by people doing computational biology that can do it in the same time, but using on average **O**(n) space.
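To pin down what that dynamic program looks like, here’s a minimal sketch in Scheme (to match the other code on this page). It’s just the textbook recurrence for the LCS *length*, kept to two rows of the table; it is not the comp-bio algorithm, which also recovers the subsequence itself in O(n) space, and not the benchmark code from the experiment described below. The name `lcs-length` is mine.

;; Length of the longest common subsequence of two vectors, using the
;; standard DP recurrence. prev and curr hold two consecutive rows of the
;; usual (length xs + 1) by (length ys + 1) table, so space stays O(n).
(define (lcs-length xs ys)
  (let* ((n (vector-length ys))
         (prev (make-vector (+ n 1) 0))
         (curr (make-vector (+ n 1) 0)))
    (do ((i 0 (+ i 1)))
        ((= i (vector-length xs)) (vector-ref prev n))
      (do ((j 0 (+ j 1)))
          ((= j n))
        (vector-set! curr (+ j 1)
                     (if (equal? (vector-ref xs i) (vector-ref ys j))
                         (+ 1 (vector-ref prev j))
                         (max (vector-ref prev (+ j 1)) (vector-ref curr j)))))
      ;; the row just computed becomes the "previous" row for the next pass
      (let ((tmp prev)) (set! prev curr) (set! curr tmp)))))

;; (lcs-length (vector 1 3 2 4) (vector 3 4 1 2))  => 2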
I didn’t know what language to use for this project, so I decided to do an experiment. I wrote the LCS algorithm in a bunch of different languages, to compare how complex the code was, and how fast it ran. I wrote the comp bio algorithm in C, C++, OCaml, Java, and Python, and recorded the results. What I got timing-wise for running the programs on arrays of 2000 elements each was:
- C: 0.8 seconds.
- C++: 2.3 seconds.
- OCaml: 0.6 seconds interpreted, 0.3 seconds fully compiled.
- Java: 1 minute 20 seconds.
- Python: over 5 minutes.
About a year later, testing a new JIT for Java, the Java time was down to 0.7 seconds to run the code, plus about 1 second for the JVM to start up. (The startup times for C, C++, and OCaml weren’t really measurable – they were smaller than the margin of error for the measurements.)

The Objective-Caml bytecode interpreter was faster than the carefully hand-optimized C program! Why? Because the OCaml compiler could recognize that the arrays were completely independent – it didn’t need to worry about one iteration of the loop stepping on the values used by another. The C compiler couldn’t apply a lot of useful optimizations, because it couldn’t be sure that they were valid.
And it’s not just non-assignment-based functional languages where you can see supposedly less-efficient high-level languages crushing the performance of C/C++. CMU Common Lisp can beat C/C++ on numeric code. There was a paper a few years back documenting it: on a Sun SPARC workstation, if you use the optional type declarations and write scientific/numeric code in Lisp, using vectors (Lisp arrays) and assignments to implement exactly the same algorithm as the C version, the CMU Common Lisp code will perform better than C code generated by either the Solaris C compiler or GCC with maximum optimization.
Dimensions and Topology
Back in the early days of Good Math/Bad Math, when it was still at blogger, one of the most widely linked posts was one about the idea of dimension. At the time, I said that the easiest way to describe a dimension was as a direction. If you’ve got a point in a plane, and you want to say where it is, you can do it with two numbers – one for each of the fundamental directions in the plane. If you’ve set an origin, “(5,-2)” is enough to uniquely identify exactly one point. You can reach any point on the plane by moving in two directions: up/down and left/right.
If you’ve got a cube, you can’t uniquely specify a point using its distance in two directions. Up three and left two doesn’t give you one point – there are lots of points that are up three and left two. You need a third direction, forward/back, for depth. That’s the third dimension – a direction that could not be formed by any combination of the two directions you had in the plane.
Topology has its own sense of dimension – in fact, it has several. They’re interesting because, as happens so often in topology, they start with the intuition that we get from simple metric spaces like ℜⁿ, and work it down to its bare essentials by figuring out what it means when you apply it to an arbitrary topological space – that is, an arbitrary structure formed from open sets.
More Fractran: A Trivial Interpreter
For your amusement and edification, the following is a very simple interpreter for Fractran programs which, in addition to running the program to generate its result, also generates a trace to show you how the program executed.
;; A Trivial Fractran Interpreter
;;
;; Copyright 2006 Mark C. Chu-Carroll
;; http://scienceblogs.com/goodmath
;;
;; You’re welcome to do anything you want with this code as long
;; as you keep a copy of this header to identify the original source.
;;
;; This program runs a fractran program. A fractran program is a list
;; of fractions. The fractions are represented by a list of two integers:
;; the numerator, and the denominator. For example, the classic fractran
;; multiplication program could be written:
;; ((385 13) (13 21) (1 7) (3 11) (7 2) (1 3))
;; or:
;; (((* 7 11 5) 13) (13 (* 3 7)) (1 7) (3 11) (7 2) (1 3))
;;
;;
;; To run a program until it terminates, call (run-fractran program input).
;; This will return a list; the car of the list will be the result of
;; running the program, and the cdr will be a trace of the executions in the
;; form of a list of the fractions that ran at each step.
;;
;; To run a program for a specific maximum number of steps, call
;; (run-fractran-bounded program input maxsteps)
;;
(define (step-fractran fract int)
  (if (null? fract) int
      (let ((fr (car fract)))
        (if (= (remainder int (cadr fr)) 0)
            (cons (/ (* int (car fr)) (cadr fr))
                  (list fr))
            (step-fractran (cdr fract) int)))))

(define (run-fractran fract int)
  (let ((step-result (step-fractran fract int)))
    (if (list? step-result)
        (let ((new-int (car step-result))
              (last-step (cadr step-result)))
          (cons step-result (run-fractran fract new-int)))
        (list int))))

(define (run-fractran-bounded fract int bound)
  (if (> bound 0)
      (let ((step-result (step-fractran fract int)))
        (if (list? step-result)
            (let ((new-int (car step-result))
                  (last-step (cadr step-result)))
              (cons step-result (run-fractran-bounded fract new-int (- bound 1))))
            (list int)))
      (list int)))
;; The mult program.
(define mult '((385 13) (13 21) (1 7) (3 11) (7 2) (1 3)))
;;
;; (run-fractran mult 432)
;; The primes program
(define primes '((17 91) (78 85) (19 51) (23 38) (29 33) (77 29) (95 23)
                 (77 19) (1 17) (11 13) (13 11) (15 2) (1 7) (55 1)))
;; (run-fractran-bounded primes 2 1000)
----------
Commenter Pseudonym has kindly provided a Haskell version in the comments, which was mangled by MT’s comment formatting, so I’m adding a properly formatted version here. I think it’s a really interesting comparison to the Scheme code above: it gives you a sense of what the same basic code looks like in the two languages. The Haskell code is very nice, cleaner than my rather slapdash Scheme version. Personally, I think the Haskell is clearer than the Scheme, even though the Scheme is my own code.
module Fractran where
import Ratio
import Data.Maybe
import Control.Monad.Fix
type Program = [Rational]
runFractran :: [()] -> Program -> Integer -> [Integer]
runFractran bound prog l
    = step bound prog l
    where
      step _ [] l = []
      step [] (f:fs) l
          = []
      step (_:bound) (f:fs) l
          = let p = f * fromIntegral l
            in case denominator p of
                 1 -> let pi = numerator p
                      in pi : step bound prog pi
                 _ -> step bound fs l

fractran :: Program -> Integer -> [Integer]
fractran prog l
    = runFractran (fix (():)) prog l

fractranBounded :: Int -> Program -> Integer -> [Integer]
fractranBounded b prog l
    = runFractran (take b $ fix (():)) prog l

mult = [385%13, 13%21, 1%7, 3%11, 7%2, 1%3]
primes = [17%91, 78%85, 19%51, 23%38, 29%33, 77%29, 95%23,
          77%19, 1%17, 11%13, 13%11, 15%2, 1%7, 55%1]

-- fractran mult (2^4 * 3^3)
-- fractranBounded 1000 primes 2
Prime Number Pathology: Fractran
Today’s pathological language is based on a piece of work called Fractran by John Conway of game theory fame. It’s a really fascinating bugger; absolutely insanely difficult to program in, but based on one of the most bizarrely elegant concepts of computation that I’ve ever seen. It’s amazing that this is Turing complete. It’s not a real programming language in the sense of being able to write practical programs; it’s more of a simple theoretical computational model which has been implemented as a language.
It’s based on the idea of numbers as products of prime factors. As you should remember from elementary school, every number greater than 1 can be represented by a collection of prime numbers that, multiplied together, produce the number. For a few examples:
- 24 = 2×2×2×3, or 2³×3¹
- 291 = 3×97
- 1800 = 5×5×3×3×2×2×2, or 5²×3²×2³
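Since everything in Fractran hangs off that representation, here’s a minimal sketch of computing it, written in the same Scheme as the interpreter above. The name `prime-factors` and the trial-division approach are mine, purely for illustration; they’re not part of Conway’s construction.

;; Factor a positive integer into a list of (prime exponent) pairs by
;; trial division. Purely illustrative.
(define (prime-factors n)
  (let loop ((n n) (d 2) (acc '()))
    (cond ((= n 1) (reverse acc))
          ((= 0 (remainder n d))
           ;; divide out every copy of d, counting them in e
           (let count ((n n) (e 0))
             (if (= 0 (remainder n d))
                 (count (/ n d) (+ e 1))
                 (loop n (+ d 1) (cons (list d e) acc)))))
          (else (loop n (+ d 1) acc)))))

;; (prime-factors 24)   => ((2 3) (3 1))
;; (prime-factors 1800) => ((2 3) (3 2) (5 2))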
Conway figured out that using something based on that concept, you can express any computable function using nothing but a list of positive fractions.
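If you haven’t seen Fractran before, the execution rule itself is worth stating up front. This is the standard statement of the rule, added here for context rather than quoted from the post:

% Fractran's execution rule: given a program f_1, f_2, \ldots, f_k and a
% current positive integer n,
n \;\longmapsto\; n \cdot f_i, \quad \text{where } i \text{ is the least index with } n \cdot f_i \in \mathbb{Z},
\qquad \text{halting when no such } i \text{ exists.}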