Category Archives: Programming

Basics: Parallel, Concurrent, and Distributed

This came up in a question in the post where I started to talk about π-calculus, but I thought it was an interesting enough topic to promote it up to a top-level post. If you listen to anyone talking about computers or software, there are three words you’ll constantly hear: parallel, concurrent, and distributed. At first glance, it sounds like they mean the same thing, but in fact, they’re three different things, and the differences are important.

Continue reading

An Experiment with π-calculus and Programming Language Design

I feel like a bit of a change of pace, and trying a bit of an experiment.

Re-reading Backus’s old FP work reminds me of what I was doing the last time I read it, which
was back in grad school. At the time, I was working on some stuff involving parallel computation,
and I discovered Robin Milner’s π-calculus as a tool for describing parallel computation. You
can think of π-calculus as being a sort of parallel (pun intended) to the λ-calculus: in
sequential, single-threaded computation, λ-calculus can be a great tool for describing what
things mean. But λ-calculus has absolutely no way of describing the concept of multiple
things happening at once. π-calculus is fundamentally built on the concept of multiple threads
which can only share information by passing messages.
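To make that contrast concrete, here’s a rough sketch (in Python, not π-calculus) of the message-passing style: two threads that share no state directly, and interact only over a channel. Using `queue.Queue` as the channel is just an analogy, not a faithful encoding of the calculus.

```python
# Two independent processes that share nothing, communicating only by
# passing messages on a named channel (queue.Queue plays the channel role).
import queue
import threading

def sender(chan):
    # Roughly the pi-calculus output action: chan<42>
    chan.put(42)

def receiver(chan, result):
    # Roughly the input action chan(x): block until a message arrives
    x = chan.get()
    result.append(x + 1)

chan = queue.Queue()
result = []
t1 = threading.Thread(target=sender, args=(chan,))
t2 = threading.Thread(target=receiver, args=(chan, result))
t1.start(); t2.start()
t1.join(); t2.join()
print(result[0])  # 43
```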

There’s a reason that reading Backus made me think of this, beyond just the temporal coincidence of the fact that I was working with π-calculus the last time I read Backus’s
FP paper. One of the major points that Backus made was how poor the von Neumann model is
at describing many computations; that has become far more true in recent years. Even my laptop now has multiple processors; computation just isn’t single-threaded anymore. But most programming languages are basically deeply single-threaded. Even Haskell, for all of its functional
purity, isn’t particularly good at multi-threaded execution. But I’ve always thought it would be
a great idea to build a language around π-calculus the way that ML is built around λ-calculus.

So my experiment, such as it is, is to see if I can work through how to create an actual, useful, non-pathological programming language based on the π-calculus; and not just do that,
but do it publicly, here on the blog.

Continue reading

Backus's Idea of Functional Programming

In my earlier post about John Backus, I promised to write something about his later work on
functional programming languages. While I was in a doctor’s office getting treated for an awful
cough, I re-read his 1977 Turing Award Lecture. Even 30 years later, it remains a fascinating read,
and far from being dated, it’s positively astonishing to see both how far-sighted Backus was, and how little progress we’ve actually made.

Continue reading

Simple Programming in Binary: Binary Combinatory Logic

For reasons that I’ll explain in another post, I don’t have a lot of time for writing a long pathological programming post, so I’m going to hit you with something short, sweet, and beautiful: binary combinatory logic.

I’ve written in the past about lambda calculus, and its equivalent variable-free form, the SKI combinator calculus. I’ve even written about other combinator calculus based languages, like Unlambda and Iota.

Binary combinatory logic, aka BCL, is a language based on SKI calculus – except that it encodes the entire thing into binary. Two characters, plus two rewrite rules, and that’s it – a complete
combinator calculus based programming language.

SKI combinator calculus is a simple variable-free calculus with three constructs: S, K, and I; although I isn’t really primitive, since it can be defined in terms of S and K.

  1. S=λx y z.x z (y z)
  2. K=λx.(λy.x)
  3. I=λx.x=SKK
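Those three definitions translate almost directly into code. Here’s a quick sketch of the combinators as curried Python lambdas, including a check that SKK really does behave like I:

```python
# The three SKI combinators, written directly from the definitions above
# as curried lambdas.
S = lambda x: lambda y: lambda z: x(z)(y(z))   # S = λx y z. x z (y z)
K = lambda x: lambda y: x                      # K = λx.(λy.x)
I = lambda x: x                                # I = λx.x

# I isn't primitive: ((S K) K) applied to anything gives it back unchanged.
print(S(K)(K)("anything"))  # anything
print(I("anything"))        # anything
```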

So, in BCL, S is written “01”; K is written “00”. And there are two rewrite rules, which basically treat a “1” that isn’t part of a “00” or “01” as an application marker – a paren-like grouping construct:

  1. “1100xy”, where “x” and “y” are valid BCL terms (that is, complete syntactic units),
    gets rewritten to be “x”. If you follow that through, “1100xy” is the encoding of ((Kx)y), which reduces to x.
  2. “11101xyz” gets rewritten to “11xz1yz”. Again, following it through, “11101xyz” encodes (((Sx)y)z), which reduces to ((xz)(yz)).
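To see the two rules in action, here’s a minimal sketch of a BCL reducer in Python. It assumes the encoding above (“00” for K, “01” for S, “1xy” for application); the example term at the end is mine, not from the post.

```python
# A toy reducer for BCL: "00" = K, "01" = S, "1xy" = application (x y).

def term_end(s, i):
    """Return the index just past the complete term starting at position i."""
    if s[i] == '1':                      # application: 1 <term> <term>
        return term_end(s, term_end(s, i + 1))
    return i + 2                         # '00' (K) or '01' (S)

def step(s):
    """Apply one rewrite, leftmost first; return None when fully reduced."""
    for i in range(len(s)):
        if s.startswith('1100', i):      # ((K x) y) -> x
            x_end = term_end(s, i + 4)
            y_end = term_end(s, x_end)
            return s[:i] + s[i + 4:x_end] + s[y_end:]
        if s.startswith('11101', i):     # (((S x) y) z) -> ((x z)(y z))
            x_end = term_end(s, i + 5)
            y_end = term_end(s, x_end)
            z_end = term_end(s, y_end)
            x, y, z = s[i + 5:x_end], s[x_end:y_end], s[y_end:z_end]
            return s[:i] + '11' + x + z + '1' + y + z + s[z_end:]
    return None

def reduce_bcl(s):
    while (nxt := step(s)) is not None:
        s = nxt
    return s

# "11000001" encodes ((K K) S), which the K rule reduces to K, i.e. "00":
print(reduce_bcl('11000001'))  # 00
```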

So, following on unlambda’s method of handling IO, “hello world” in BCL is:

010001101000010000010110000000000101101111
000010110111110011111111011110000010011010

[Image: bcl.gif – the BCL self-interpreter, rendered as a bitmap]

And here’s the really neat thing. Write an interpreter for BCL in BCL. Take the bit string that results, and convert it to a bitmap. That’s what’s over on the right here. So, for example, the first line is “1111100000111001”; keep going, and you’ll find the entire BCL interpreter.

The Bad Ballet of Regular Expressions: Pathological Programming in Thutu

For today’s installment of programming insanity, I decided to go with a relative of Thue, which is one of my favorite insane languages that I wrote about before. Thue is a language based on a rewriting system specified by a semi-Thue grammar. Today’s language is called Thutu (pronounced tutu); it’s a string rewriting system like Thue, only it’s based on regular expressions instead of grammars, and it’s even got regular expression-based control flow mechanisms, making it a sort of hybrid language.

The scary thing about Thutu is that it’s not all that different from a language I’ve wanted to find some time to write myself – except that the one I want to write isn’t intended to be pathological. I’ve never stopped missing TECO for writing text processing programs; and since
my TECO programs tended to be roughly of the form: “Find something matching this pattern, and then take this action”, a regular-expression based language would make a lot of sense.
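That “match a pattern, take an action” style is easy to sketch. Here’s a toy rewriting loop in plain Python regular expressions – emphatically not Thutu itself, just an illustration of the general idea, with a made-up example rule:

```python
# Apply (regex, replacement) rules repeatedly, one match at a time,
# until no rule matches anywhere in the text.
import re

def rewrite(text, rules, max_steps=100):
    for _ in range(max_steps):
        for pattern, replacement in rules:
            new_text, n = re.subn(pattern, replacement, text, count=1)
            if n:
                text = new_text
                break
        else:
            return text          # no rule fired: we're done
    return text

# Toy rule set: repeatedly delete adjacent "ab" pairs.
print(repr(rewrite("aabbab", [(r"ab", "")])))  # ''
```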

But anyway, today we’re looking at Thutu, which is a deliberately obscure version of this idea.

Continue reading

Clear Object-Oriented Programming? Not in Glass

Today’s bit of programming insanity is a bit of a novelty: it’s an object-oriented programming language called Glass, with an interpreter available here. So far in all of my Friday Pathological Programming columns, I haven’t written about a single object-oriented language. But Glass is something
special. It’s actually sort of a warped cross between Smalltalk and Forth – two things that should never have gotten together; in the words of the language designer, Gregor Richards, “No other language is implemented like this, because it would be idiotic to do so.”

Continue reading

The Most Pathological Machine I've Ever Seen: Tag

What we have here is a truly warped language.

Back in the very early days of what eventually became computer science, many of the people working in the field invented all sorts of automatons/computing formalisms. The one that I’ve always found the most confounding is the Tag machine invented by Emil Post.

The tag machine is simple to the point of triviality. The machine is a queue of characters (with one character designated as “Halt”), and a set of rules. Each rule is selected by a distinct character, and carries a string of characters. At each step, the machine looks at the first character of the queue, and selects the rule associated with that character. If the character is “Halt”, the machine just stops, and whatever is left on the queue is the result of the computation. Otherwise, it appends the selected rule’s string of characters to the end of the queue, and then removes a fixed number of characters from the front of the queue. The machines are called “n-Tag” machines, where “n” is the number of characters dropped each step.

That’s it. Look at the front of the queue, use it to pick a set of characters to append, and then remove and discard the first N characters from the queue.

For any N≥2, a Post N-tag machine is Turing equivalent.
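The step rule is easy to mechanize, even if programming with it isn’t. Here’s a minimal sketch of an n-tag simulator in Python; the example rule set is a made-up toy, chosen only to show the mechanics, not anything Turing-complete.

```python
# A minimal n-tag machine: read the front symbol, append that symbol's
# rule string, then drop the first n symbols. Stop on the halt symbol
# or when the queue gets shorter than n.
from collections import deque

def run_tag(rules, start, n=2, halt='H', max_steps=1000):
    q = deque(start)
    for _ in range(max_steps):
        if len(q) < n or q[0] == halt:
            break
        q.extend(rules[q[0]])    # append the selected rule's string
        for _ in range(n):       # then drop n symbols from the front
            q.popleft()
    return ''.join(q)

# Toy 2-tag system: 'a' appends "bb", 'b' appends the halt symbol.
# aaa -> abb -> bbb -> bH -> H (halted).
print(run_tag({'a': 'bb', 'b': 'H'}, 'aaa'))  # H
```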

Like I said above – I’ve always found the Post Tag machine to be thoroughly confounding. I can grasp why it’s Turing equivalent, but for the life of me, I’ve never been able to figure out how to actually implement anything interesting on one. So, I figured, there are tons of esoteric/pathological language fans out there who love nothing more than the challenge of writing interesting programs for bizarre languages; I’ll write a Post-tag language, and throw it to the wolves, and see what happens!

Continue reading

Using Monads for Control: Maybe it's worth a look?

So, after our last installment, describing the theory of monads, and the previous posts, which focused on representing things like state and I/O, I thought it was worth taking a moment to look at a different kind of thing that can be done with monads. We so often think of them as being state wrappers; and yet, that’s only really a part of what we can get from them. Monads are ways of tying together almost anything that involves sequences.
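As a tiny preview of that control-flow side, here’s a sketch of Maybe-style sequencing in plain Python (not Haskell’s actual Maybe monad, and the helper names are mine): each step either yields a value or None, and bind short-circuits the rest of the sequence on the first failure.

```python
# Maybe-style control flow: bind threads a sequence of fallible steps,
# skipping everything downstream as soon as one step produces None.

def bind(value, fn):
    return None if value is None else fn(value)

def safe_div(x, y):
    return None if y == 0 else x / y

# Sequence two divisions; the second never runs if the first fails.
ok = bind(safe_div(10, 2), lambda v: safe_div(v, 5))
bad = bind(safe_div(10, 0), lambda v: safe_div(v, 5))
print(ok, bad)  # 1.0 None
```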

Continue reading

Crazy Stack Games: Programming in Kipple

Insane Stacking

Today’s pathology is playing with stacks. Lots and lots of stacks. Stacks for data. Stacks for control. Stacks out the wazoo. It’s called Kipple for no particularly good reason that I know of.

Kipple happens to be one of the pathological languages that I highly recommend trying to write some programs in. It’s crazy enough to be a challenge, but there is a basic logic to how you program in it – which makes figuring out how to write programs rewarding rather than just frustrating.

Continue reading

Rectangular Programming for Warped Minds

In light of the recent posts and discussions about multidimensional
numbers, today’s pathological language is Recurse, a two-dimensional language – like Befunge, sort of. But I find it more interesting in its own peculiar little
way. It’s actually a function-oriented two-dimensional language where every
function is rectangular.

Continue reading