Flat Earthers Can’t Do Math!

Running this blog, I regularly check referrers – that is, the sites whose links to GM/BM actually bring readers to the blog. It’s interesting to see who’s linking, and it helps give me an idea which posts people find most interesting. Also, every once in a while, it gives me something interesting to blog about. Like today. I got linked to by the Flat Earth Society!

The FES is the grand-daddy of hopeless crackpot organizations. They are, in all seriousness, a group of people who believe that the earth is flat. They’ll tell you that the space program is a total fraud. Every picture you’ve seen from space is faked. No one ever went to the moon. Depending on which flat-earther you talk to, satellites are either a fraud, or they’re just great big balloons floating above the earth’s surface. It’s all an elaborate conspiracy for some nefarious reason.

The FES set up a group of forums. And in one of them, someone posted something about that old -1/12 nonsense. It’s the worst misunderstanding/misrepresentation of the idea behind that that I’ve seen so far, and let me tell you, that’s really saying something.

I’ll keep this short so that it might get read.

Analytic continuation allows us to approximate an infinite series as a function. The more terms you add to the series, the more accurately you describe the function. So at the limit, when you have all the terms, that series can be considered equal to that function.

This allows us to assign meaningful numerical values to infinite series, even divergent ones.

For example, the infinite series “1 + 2 + 3 + 4 + 5… ” can be evaluated. However, the result is disturbingly counter-intuitive. If you stop adding terms at any finite point, you’ll have a larger and larger number as the result. However, if you “evaluate this after an infinite number of terms, you’ll find the sum to actually be -1/12.

There are a variety of proofs for this, and this number is demonstrably a meaningful . This result is actually seen in physics. Furthermore this result is a foundation in string theory for the number of required dimensions.

Analytic continuation isn’t about converging infinite series. It’s not about adding more and more terms. What he’s describing is, simply, the idea of an infinite series that converges to a value. For example, consider the following:

 1 + \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \frac{1}{16} + \cdots

That series taken to infinity is equal to 2. You can keep drawing it out. At any point, the sum of the series is less than 2. 1 1/2, 1 3/4, 1 511/512, 1 65535/65536, on and on, forever. But if you keep following it, using the mathematical concept of limits, it’s easy to show that the sum of the full series is exactly 2.
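To spell that out: the partial sums have a simple closed form, and the limit falls right out of it:

\sum_{k=0}^{n} \frac{1}{2^k} = 2 - \frac{1}{2^n}, \qquad \lim_{n \to \infty} \left( 2 - \frac{1}{2^n} \right) = 2

Every partial sum falls short of 2 by exactly 1/2^n, and that shortfall shrinks toward zero as n grows. That’s all the limit statement is.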

You don’t need to bring analytic continuations into the picture to show that – just limits. In fact, analytical continuations don’t help with solving that problem – they’ve got nothing to do with it.

Analytical continuation is both much simpler (in concept) and much harder (in practice) than convergence of infinite series.

Analytical continuations come into play when we have a partial function for something, which we’re using as a partial solution for the value of something else. There are lots of problems where we don’t know a perfect, total closed-form equation for some function that we’re interested in. But we’ve managed to find a closed-form equation that matches the function we’re interested in a lot of the time. Then, using some pretty hairy complex (in the sense of complex numbers, although it’s also pretty complicated) analysis tricks, we can figure out what the value of the target function should be at places where our partial solution is undefined. So even though the partial solution doesn’t work some of the time, we can use that partial solution to derive the actual solution. That process of using analysis of the partial solution to fill in at least some of the points where the partial solution doesn’t work is analytical continuation.

In the case of the infamous -1/12 argument, we’re trying to probe at a really important thing called Riemann’s zeta function. The zeta function describes fundamental properties of prime numbers. It’s an important deep thing which ends up having a lot of applications when you’re dealing with number theory, topology, and differential equations, among other things. Because of how it describes some fundamental properties, any concrete application of those fundamental properties ends up involving zeta as well – including things like string theory, which proposes a topological structure for the universe.

We don’t know, in general, how to write a simple equation for zeta. We do know how to write an equation that is a partial subset of the zeta function – an equation that describes a function that works much of the time, but which is not zeta, and which is not defined in some places where zeta is.
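Concretely, the partial equation in question is the familiar series form of zeta, which only makes sense when the real part of s is greater than 1:

\zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^s} = \frac{1}{1^s} + \frac{1}{2^s} + \frac{1}{3^s} + \cdots \qquad (\mathrm{Re}(s) > 1)

Plug s = -1 into that series, and you get exactly 1 + 2 + 3 + 4 + …, which diverges; the series defines nothing at all there.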

Using analytical continuation, we can compute the value of the zeta function in places where the equation that we use for a partial approximation does not work, where that equation cannot and will not ever produce a result.

Using analytical continuation, \zeta(-1) = -1/12. At -1, the equation that we know for computing zeta in some places does not work. At all. But analytical continuation shows us what the value of \zeta is at that point. It does not tell us the value of computing that equation at -1; that doesn’t work.

(In terms of physics, the application of this is interesting. The zeta function is used in a lot of places in string theory, and it’s normally “defined” in physics texts as being precisely the series that gives us the partial approximation of zeta. So in string theory physics, when you find that series, it’s not actually that series; it’s zeta, which was expanded out into that (technically incorrect) series. Since it’s zeta, you can replace it with zeta. Since zeta(-1)=-1/12, that means that in those physics equations, you can effectively pretend that 1+2+3+4+5+…=-1/12. It isn’t, but a combination of a notational convenience and a shortcut to avoid a long-winded explanation of analytical continuation means that too many physicists are taught to believe that it is.)

So, shocking as it might be: flat earthers – they’re not just wrong about geography!

Functional Programming Comes to the Macintosh! Introducing Swift!

So, Apple just recently announced a new programming language! For people like me, that’s big news. (In case you’re a relatively new reader of this blog, by background I’m not a mathematician. I’m a computer scientist, and I did my PhD in programming language design and implementation. I’m utterly obsessed with programming languages. Last time I set out to count how many languages I’d learned, it was over 120, and I’ve at least read the specifications of another 30 or so since then. I don’t claim to be able to program in all of those today – but they’re all things that I have written something in at one time or another. Like I said, this is an obsession for me!)

Before I go into describing Swift, I’m going to go through a bit of history. Swift is a hybrid language, in a way that reflects where it came from.

Where it came from: a step back into Swift’s history

To understand Swift, you need to know a bit of where it came from.

Since the dawn of OSX, the bulk of software development on the Macintosh has been done in a language called Objective-C. Objective-C is a lovely programming language (for its time), which never really caught on outside of OSX and its predecessor NeXTStep. Swift is a hybrid language, combining object-oriented programming and functional programming; Objective-C, its ancestor, was a hybrid between old-fashioned procedural imperative programming and object-oriented programming.

Objective-C dates back to around 1984 or so. Back then, the main programming languages that were used for developing applications were predominantly procedural imperative languages, like Pascal and C. What the “imperative” part of that means is that they’re languages where you specify what you want the computer to do using a sequence of very precise steps. The procedural part means that the main structural principle in those languages is dividing code up into functional elements called procedures or subroutines.

For example, suppose you wanted to implement a binary search tree. In a procedural language, you’d write something like:

typedef struct BinarySearchTreeNode {
    char *value;
    struct BinarySearchTreeNode *left;
    struct BinarySearchTreeNode *right;
} BinarySearchTree;

BinarySearchTree *create_tree(char *str) { ... }

void add_value(BinarySearchTree *tree, char *str) { ... }

void _rebalance(BinarySearchTree *tree) { ... }

Even though you’re implementing a data structure, the fundamental code structure doesn’t reflect that. You’ve got a declaration of a datatype (the struct), and you’ve got a collection of functions. The functions aren’t associated with the declaration, or with each other. The code is organized around the functions, and any connection between the functions and the data structures – or between different functions – isn’t reflected in the structure of the code.

If you think in terms of abstractions: the only abstraction in the procedural version is that one procedure can’t access local variables defined in other procedures. All of the details of the BinarySearchTree data structure are open and accessible to anyone who feels like messing with them. Even operations – like the _rebalance function, which is intended to be used only in the implementation of the other BinarySearchTree operations – are callable by anyone who wants to call them.

In the late 1980s, that started to change. The idea of object orientation came onto the scene. In object-oriented programming, you’re still writing imperative code, but the overall organization is based on data structures. You implement a data structure, and associated with it are the operations on that structure. In the object-oriented system, the operations on a data structure effectively became part of the structure itself. Re-writing our example above, in an object-oriented way, the code is similar, but the principal organizational structure changes. Instead of a data type declaration and a set of functions that are only related to one another by our understanding, the functions (now called methods) are declared and implemented as part of the data structure.

@interface BinarySearchTree
- init: (NSString *)str;
- addKey: (NSString *)key withValue: (id)val;
@end

@implementation BinarySearchTree {
  NSString *key;
  id value;
}
- init: (NSString *)str { ... }
- addKey: (NSString *)key withValue: (id)val { ... }
- rebalance { ... }
@end

Only the implementor of the BinarySearchTree can actually see the internals of the implementation. Rebalance isn’t part of the public interface of the class, and users of the BinarySearchTree can’t access it.

There’s more to object-orientation than this – in particular, there’s an idea called inheritance. Inheritance lets you declare a new data structure as being based on another:

@interface BinaryHeap: BinarySearchTree { }
- (NSString *) pop;
- push: (NSString *)s;
@end

Inheritance makes it easy to reuse code. To implement a heap, we don’t need to re-implement the binary search tree maintenance methods. We just implement the new methods of BinaryHeap using the BinarySearchTree operations that are inherited by the heap implementation.

Object-orientation seems obvious in retrospect, but back in the late 80s, it was absolutely revolutionary! All of us who’d been programming in old languages were entranced by this new idea, and we wanted to try it!

But there was a problem. When object-oriented programming started becoming popular, most of us were programming in C. We couldn’t just abandon C for some new language, because we relied on tons of existing code, all of which was written in C. For example, back then, I was doing most of my programming on an Amiga. The Amiga UI libraries were all written in C, and if I wanted to write an Amiga program, I needed to use those libraries. I couldn’t just grab Smalltalk and start coding, because Smalltalk couldn’t call the libraries I needed to use.

The way that we started to write object-oriented code was with hybrid languages based on C – languages that contained the familiar C that could interface with our existing systems, but which also had extensions for writing object-oriented programs. There were two main hybrids for C programmers, based on the two main schools of object-oriented programming. There was C++, based on the Simula model of object-orientation, and Objective-C, based on the Smalltalk model. With C++ or Objective-C, you could use object-oriented programming for all of your new code, while still being able to use non-object-oriented libraries.

C++ is much more familiar to most people. In the mainstream community, Objective-C never caught on, but C++ became the dominant systems programming language. But Apple, because of OSX’s roots in NeXTStep, embraced Objective-C. If it wasn’t for them, Objective-C would likely have completely disappeared – outside of the Apple world, no one really used it.

Objective-C was an absolutely lovely language for its time. But it was invented around 1984 – it’s an old programming language. Lots of new ideas have come along since the days of Objective-C, and with its roots in C, it’s really hard to make any big changes to Objective-C without breaking existing code.

Now, we’re at the cusp of another big change in programming. Functional programming seems to be becoming the newest rage. The way that functional programming is catching on is very similar to the way that object-orientation did it before: by hybrids. If you look at many of the new, trendy programming languages – such as Scala, Clojure, and Rust – what you find is basically a subset which is the old familiar object-oriented language, and a bunch of functional stuff integrated into it.

With Objective-C really starting to look pretty crufty, Apple did the obvious thing: they made a break, and introduced a new hybrid functional/object-oriented language, which they named Swift.

Introducing Swift

In the announcement, they described Swift as “Objective-C without the C”, but I’d describe it more as “Objective-C meets SML”. Swift is a hybrid language in the tradition of Objective-C, but instead of being a hybrid of procedural and object-oriented languages, it’s a hybrid of object-oriented and functional languages.

Let’s start with a simple example. In most places, it’s traditional to start with “hello world”, but I think that that’s both boring, and not particularly enlightening. I like to start with the factorial function.

func fact(n: Int) -> Int {
  if n == 0 {
    return 1
  } else {
    return n * fact(n - 1)
  }
}

let x = fact(10)
println("The factorial of 10 is \(fact(10))")

It’s pretty standard for a modern-ish language. The only thing that’s at all unusual is that if you look inside the println, you can see that Swift does string interpolation: if you put \( expression) into a string, then the result of that expression is converted into a string, and inserted into the enclosing string. That’s been common in scripting languages for a long time, but it hasn’t been a standard thing in systems programming languages like Swift. It’s a small thing, but it’s a good one.

The first big surprise in Swift is how you can test that function. In Objective-C, you would have had to create a full program, compile it, and run the executable. That’s not the case in Swift. In Swift, you can open a playground, which is a fully interactive scripting environment. It’s not that Swift comes with an interpreter – a playground is a workspace with full access to the compiler, and it can compile and evaluate expressions on the fly! The playground isn’t limited to just text – you can create UI elements, Interface Builder based UIs, interactive games – just put the code into a workspace, and edit it. As the code changes, the moment it becomes syntactically valid, the playground will dynamically compile it using the LLVM backend, and execute the generated code for you!

All I can say is: It’s about time! I’ve been seeing toy versions of this kind of interactivity since I was an undergrad in the 1980s. It’s about time that we got that in a full-blown systems language. Just this one little feature makes me really want to like Swift. And the rest of the language, while not being exceptional, is good enough to make me want to start coding in it.

In terms of syntax, Swift is mostly pretty straightforward. It’s got a clear C heritage, but with a whole lot of cleanups.

The Type System

People like me really care about type systems. Type systems can be controversial – some people love to program with strong types, and others really hate it. I’m very much in the camp that loves it. In my experience, when I’m programming in a language like SML or OCaml or Scala, which have really powerful, expressive type systems, my code very frequently ends up running correctly the first time I execute it. That’s not because I’m some kind of superhuman genius programmer, or because the languages I like somehow prevent me from making mistakes. It’s because in a strong type system, the particular kinds of mistakes that I’m most likely to make are exactly the kinds of things that will show up as a type error. I make just as many mistakes in a strongly statically typed language as I would in a dynamically typed one, but the type system catches them up front, before I can ever run my program. For my way of programming, I like that – I get all of those stupid errors out of the way quickly, so that I can spend my time tracking down the hard stuff.

Swift has a strong type system, in roughly the vein of the SML family of languages. You have the usual array of simple types (String, Int, Float, Character, etc.) In addition, you can create your own data types, and those types can be generic, with type parameters: in any declaration, you can include type parameters in angle brackets. For example, if you wanted to implement a generic list type, you could declare it as:

class List<T> {
  func insert(v: T) { ... }
  ...
}

T is a type parameter. When you declare a variable that can store a list, you need to specify the value of that type parameter:

  var l: List<Int> = ...;

You can also declare constraints on type parameters. A constraint is a way of saying “You can only use a type parameter that meets these requirements”. For example, if you wanted to implement a generic binary search tree, you’d need to be able to compare elements. If you just wrote BinarySearchTree<T>, you wouldn’t be able to use the less-than operator, because not all types can be compared that way. So you need to declare a constraint:

class BinarySearchTree<T: Comparable> { ... }

Now, if you were to try to create BinarySearchTree<Dictionary<String, String>>, you’d get a compile error. You can’t use the less-than operator on dictionaries, so they don’t satisfy the constraints.

The constraints are built using the object oriented features of Swift. Swift allows you to define something called a protocol, which is equivalent to an interface in Java, or a trait in Scala. A protocol just defines a set of methods; when you declare a class, you can declare that it implements a protocol, and the compiler will check that it implements the needed methods. When you write a type T: P, it just means that the type T implements the protocol P.
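To make that concrete, here’s a tiny sketch (the Describable protocol, the Point class, and the show function are just made-up names for illustration):

protocol Describable {
  func describe() -> String
}

class Point: Describable {
  let x: Int
  let y: Int
  init(x: Int, y: Int) {
    self.x = x
    self.y = y
  }
  func describe() -> String {
    return "(\(x), \(y))"
  }
}

// A generic function constrained to types that implement the protocol:
func show<T: Describable>(value: T) {
  println(value.describe())
}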

The main innovation in the type system (at least for a systems programming language) is optional types. In Swift, when you declare a variable using var foo, the variable foo will always contain a value. It can’t be nil. If you want a variable to potentially be empty, you need to explicitly say so, by declaring it with an optional type, which is written as a type-name followed by a question mark, like “var foo: String?“.

When a variable or parameter has an optional type, you can’t just access the value in that variable directly. You need to do something to check if it’s got a value in it or not. The easiest way to do that is using a conditional:

let opt_value: String? = some_function_call()
if let value = opt_value {
  value.do_something()
}

In many cases, you can make the code even simpler, by using optional chaining:

let opt_value: String? = some_function_call()
opt_value?.do_something()

The question-mark in an expression does an automatic test-and-dereference. If the value is present, then the following operation is executed. If it’s absent, then the expression returns nil. The thing after the question mark doesn’t have to be a method call – it can also be a subscript, or a field reference. And you can (as the name suggests), chain these things together:

    thing_which_returns_option?.my_foo?.do_something()?[42]

Finally, you can force a dereference of a value directly using !. If you know that an optional value is present, and you don’t want to write out a conditional to test it, you can append an exclamation point to the end of the expression. It will generate a run-time nil-dereference error if the optional value is empty.
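A quick sketch (lookup_name here is just a hypothetical function that returns an optional):

let opt_name: String? = lookup_name()
if opt_name != nil {
  println("Hello, \(opt_name!)")   // safe: we just checked that it isn't nil
}
let name = opt_name!               // crashes at runtime if opt_name is nil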

Handling optionality this way works a lot like what I said about static typing before. It doesn’t make errors go away – the errors that would have turned up as null pointer dereferences can still happen in your code. But most of them will be caught in advance by the compiler; and the places where a null pointer dereference can happen that won’t be caught by the compiler will be easy to find, because they’ll be explicit dereferences of an optional, using “!”. (I’ve written about this idea before; see here for details.)

Object-Orientation in Swift

Let’s move on a bit. I spent a lot of time talking about data structures and object-orientation. Swift’s version of object-orientation is pretty standard for a modern programming language. For example, here’s a sketch of an implementation of a simple tree-based dictionary. (This doesn’t compile yet; it should, but right now, it crashes the compiler.)

class BinarySearchTreeDictionary<T: Comparable, S> {
  let key: T
  let value: S
  var left: BinarySearchTreeDictionary<T, S>?
  var right: BinarySearchTreeDictionary<T, S>?

  init(key: T, value: S) {
    self.key = key
    self.value = value
    self.left = nil
    self.right = nil
  }

  func insert(key: T, value: S) {
    if key > self.key {
      if let l = self.left {
        l.insert(key, value: value)
      } else {
        self.left = BinarySearchTreeDictionary<T, S>(key: key, value: value)
      }
    } else {
      if let r = self.right {
        r.insert(key, value: value)
      } else {
        self.right = BinarySearchTreeDictionary<T, S>(key: key, value: value)
      }
    }
    }
  }
}

If you’ve been doing much programming in any modern language, this should mostly look pretty familiar. There are a couple of things that might need explanation.

  • One of the fields of our class is declared with “let”, and two with “var”. As I said in the introduction, Swift is a hybrid between functional and object-oriented programming. The two declaration types reflect that: identifiers declared with “let” are constants, and identifiers declared with “var” are variables. In this binary search tree, we’re not going to try to do in-place rebalancing, so we don’t need the key to be mutable.
  • The type of the left and right fields are option types, as we saw in the previous section.
  • Inside of the implementation of insert, you can see one of the effects of the optional type declaration. You can’t just use an optional value, because it might be nil. There are several ways of using optionals. In this case, we use conditional unwrapping, where you put a constant declaration into the condition of the if statement. If the optional value is present, then the constant is bound, and the body of the conditional is executed.

In Swift, classes are values that are always passed by reference. If you want to implement a class that’s passed by value, you can use a struct. Structs are declared the same way as classes, except that they (obviously) use the struct keyword instead of class. In a struct, by default, all methods are constant: they’re not allowed to modify the structure in any way. If you want a method to be able to modify the structure, you need to explicitly declare that using the mutating keyword. If you declare a method as mutating, you can change anything about it that you want – up to and including entirely replacing it by assigning a new structure value to self!

struct SearchTreeNode<Key: Comparable, Value> {
  // key, value, left, and right fields elided

  mutating func pivot_right() {
    var tmp = self
    self = self.left
    tmp.left = self.right
    self.right = tmp
  }
}
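The value/reference distinction is easy to see with a toy example (PointStruct and PointClass are hypothetical types, just to show the copying behavior):

struct PointStruct { var x = 0 }
class PointClass { var x = 0 }

var s1 = PointStruct()
var s2 = s1        // structs are copied on assignment
s2.x = 10          // s1.x is still 0

let c1 = PointClass()
let c2 = c1        // classes are passed by reference; c1 and c2 name the same object
c2.x = 10          // c1.x is now 10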

Swift even goes as far as incorporating a bit of aspect-oriented programming in its object system. Aspect-oriented programming is a crazy thing that mostly grew out of object-orientation. The idea of it is that sometimes, you’d like to be able to inject behaviors into your code to add functionality to things like assignments. The way that you do that is by specifying that there’s some action you want to augment, and providing a function containing the code that you want to inject into that action.

For a pretty typical example, you might want to provide a public field in your class, but you want to make sure that no one can assign an invalid value to it, so when someone assigns a value, you provide code to check it for validity.

In Swift, for any field or global variable, there’s a pair of implicit functions that you can define, called willSet and didSet. willSet is called before the assignment happens, and didSet is called after.

For example, if you were implementing a volume control, and you wanted to make sure that no one set the volume higher than a user defined maximum, you could write:

struct VolumeControl {
  var max: Int
  var current: Int {
    didSet {
      if current > max {
        current = max   // re-assigning here doesn't re-trigger didSet
      }
    }
  }
}
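Using the struct above, the observer kicks in on any assignment (a quick sketch):

var volume = VolumeControl(max: 11, current: 5)
volume.current = 20     // didSet fires after the assignment and clamps it
println(volume.current) // prints 11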

There’s yet another type of object or data structure that you can create with Swift. They’re called enums. From the name, you might think that enums are enumerations. They can certainly be used to implement enumerations, but they’re useful for much more than that. They’re what programming language geeks like me call algebraic types, or what Scala calls case classes. An enum is any data type whose value can take multiple forms.

Enum types are a bit more limited than I might have liked, but they’re a really nice thing to have. They’re intended to be used for simple, lightweight values that could take several different forms. For example, a few weeks ago, I posted my parser combinator library. In the parser combinators, a parse result could be one of three things: a successful parse (which includes a product value and a parser input containing the unconsumed inputs), a failed parse (which doesn’t include anything; it just indicates that the parse failed without an actual error), or an error. To implement that in Swift, I’d use an enum:

enum ParseResult<In, Out> {
  case Success(Out, ParserInput<In>)
  case Failure
  case Error(String)
}

The way that you use enums usually comes down to pattern matching:

  let e: ParseResult<In, Out> = parser.parse(original_in)
  switch (e) {
    case .Success(let result, var rest): return other.parse(rest)
    case .Failure: return .Failure
    case .Error(let msg): return .Error(msg)
  }

Finally, there’s one more kind of data structure: tuples. Tuples are light-weight structures which don’t require a separate declaration of the type. You can just write tuple-values directly, and the type is inferred:

  let tuple = (1, 3, "hi there")

Tuples are a wonderful addition. The main use for tuples is allowing functions to return multiple values. You no longer need to muck about with bunching stuff into untyped lists, or stuffing some of your results into out-parameters. You can just return multiple values.
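For example, here’s a sketch of a function (the name is just made up for illustration) that returns a quotient and a remainder together:

func divide(dividend: Int, by divisor: Int) -> (quotient: Int, remainder: Int) {
  return (dividend / divisor, dividend % divisor)
}

let result = divide(17, by: 5)
println("\(result.quotient) remainder \(result.remainder)")   // prints "3 remainder 2"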

Functional Programming

The really new stuff in Swift is all functional programming stuff. Functional programming is the new hot thing in programming. For good reason. There are two main facets to Swift’s version of functional programming: managing mutability, and first-class functions and closures.

Managing Mutability

In programming, you can’t avoid mutable state. It’s a fact. Most of the time, it’s the reason that we’re using a computer. For example, I’m using a program called Atom to write this blog post. There wouldn’t be any point in using Atom if I couldn’t modify the file I’m writing.

But mutable state makes things harder. In a large, complex software system, code which avoids mutating things is usually easier to understand, less prone to errors, less prone to unexpected side effects, and generally easier to interact with.

That’s the driving force behind the new hybrid languages: we want to be able to write functional code when it’s possible. But programming in a purely functional language is frequently really awkward, because you need state – so, in a functional language, you need to find a way to fudge the state in, using Monads, streams, or something else.

The key to using functional programming in a mutable-state world is controlling mutability. That is, you want to be able to glance at some code, and say “This is not going to change anything”, or “This might change stuff”. It’s less about making it impossible to change stuff than it is about making it clear just where stuff could be changed, and making sure that it can’t change unless you specifically declared its mutability. I’m a big fan of keeping things functional whenever it’s reasonable, as I wrote about here.

Swift does a bunch of things to create that guarantee (there’s a short sketch illustrating them after this list):

  1. Identifiers are declared with either “let” or “var”. If they’re declared with “let”, then the identifier names a constant which cannot be altered by assignment. If the value of a constant identifier is a struct or enum, its fields cannot be altered by assignment either.
  2. Methods on structs cannot alter the underlying struct unless the method is specifically annotated as “mutating”. Without that annotation in the declaration, the object cannot be altered by any of its methods.
  3. Function parameters are, by default, immutable. To allow a parameter to be changed, you need to specifically annotate it with var or inout in its declaration. If it’s a var, then changes to the parameter will not affect the original value in the caller; they will be made on a local copy.
  4. Structs and enums are passed by value. That means that the structure is (conceptually) copied when it’s passed as a parameter to a function. Unless you annotate the function parameter as an output parameter, the parameter cannot be modified; even if you call a mutating method, the mutation will be on the local copy of the value, not on the caller. (The reason for the “conceptually” up there is that the object is copied lazily; if you try to alter it, the copy happens at the point of the first alteration, so passing complex structures by-value doesn’t incur a copy cost unless you modify it.)
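Here’s a minimal sketch of the first three of those in action (Counter and double are made-up names, and the syntax is the Swift-1-era form):

struct Counter {
  var count = 0
  // without "mutating", this method couldn't touch count at all
  mutating func bump() {
    count += 1
  }
}

let frozen = Counter()   // a constant: calling frozen.bump() is a compile error
var counter = Counter()  // a variable: mutation is allowed
counter.bump()

// "inout" is what lets a change propagate back to the caller
func double(inout value: Int) {
  value *= 2
}
var n = 21
double(&n)               // n is now 42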

Functions and Closures

The other side of the functional programming support isn’t about restricting things, but about enabling new things. And here Swift really shines. Swift has support for first class functions (functions as parameter and return values), anonymous functions, curried functions, and full closures.

Swift’s support for first class functions means that functions are just values, like any other value type. A Swift function can be assigned to a variable, passed to a function, or returned as the result type of a function.

For example, suppose you wanted to write a generic sort function – that is, a sort function that didn’t just compare values using the standard less-than operator, but which could take any function that a user wanted to do comparisons. In Swift, you could write:

  func sort_generic<T>(list: Array<T>, comparator: (T, T) -> Bool) -> Array<T> {
    ...
    if comparator(v, w) { ... }
    ...
  }

This is something that those of us with experience with Lisp have been absolutely dying to have in a mainstream language for decades.

Closures are a closely related concept. A closure is a function value with one really interesting property. It closes over the environment in which it was declared. To understand what I mean, let’s look at a really simple example, copied from the Swift specification.

func makeIncrementor(amount: Int) -> () -> Int {
  var runningTotal = 0
  func incrementor() -> Int {
      runningTotal += amount
      return runningTotal
  }
  return incrementor
}

This function returns a value which is, itself, a function. The interesting thing is that the function can use any variable defined in any of the scopes enclosing its declaration. So the function incrementor can access the amount parameter and the runningTotal variable, even after the makeIncrementor function has returned. Since those are local to the invocation, each time you call makeIncrementor, it creates new variables, which aren’t shared with other invocations.

So let’s look at what happens when you run it:

let f = makeIncrementor(2)
let g = makeIncrementor(5)
f()   // 2
f()   // 4
g()   // 5
f()   // 6
g()   // 10

Anonymous functions make it really easy to work with first-class functions. You don’t need to write a function declaration and then return it the way the example above did. Any time you need a function value, you can create it in-place as an expression.

func makeAnonIncrementor(amount: Int) -> () -> Int {
  var runningTotal = 0
  return {
    runningTotal += amount
    return runningTotal
  }
}

If the anonymous function takes parameters, you declare them before the function body with “in”:

  sort_generic(mylist, { (x: Int, y: Int) -> Bool in return x > y})

Currying, finally, is a shorthand way of writing function values. The idea is that if you have a two-parameter function, you can rewrite it as a one parameter function that returns another one parameter function. That sounds confusing until you see it:

  func add_numbers(x: Int, y: Int) -> Int {
      return x + y
  }

  func curried_add_numbers(x: Int) -> Int -> Int {
    return { (y: Int) -> Int in return x + y }
  }

If I want to add 3+2, I can either call add_numbers(3, 2), or curried_add_numbers(3)(2): they do the same thing.
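The payoff of the curried form is partial application: you can build new functions by supplying only some of the arguments. A small sketch using the function defined above:

  let add_three = curried_add_numbers(3)   // a function of type Int -> Int
  add_three(4)                             // 7
  add_three(10)                            // 13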

Swift provides a special way of declaring curried functions:

  func better_curried_add_numbers(x: Int)(y: Int) -> Int {
    return x + y
  }

Pattern Matching

Pattern matching isn’t, strictly speaking, a functional language thing; but it’s an idea that was introduced to most programmers by its inclusion in functional programming languages. The idea is simple: you can write assignments or conditionals that automatically decompose a data structure by matching the structural pattern of that data structure.

As usual, that becomes a lot clearer with an example.

    let (x, y, z) = (1, 2.7, "three")
  

The right hand side of that assignment structure is a tuple with three values. The first one is an integer, the second is a floating point number, and the third is a string. The left-hand side of the assignment has exactly the same structure – so the Swift compiler will match the pieces. That’s equivalent to:

    let x = 1
    let y = 2.7
    let z = "three"
  

That’s particularly useful for multiple return values. Strictly speaking, Swift doesn’t really have multiple return values: every function returns exactly one value. But that one value may be a tuple containing multiple values. So you can write
things like:

  let (result, error_code) = my_function(parameters)

Pattern matching also happens in switch statements, where each branch of the switch can be a different pattern, as we saw earlier in the ParseResult example.

All of this stuff means that I can write really beautiful functional code in Swift. And that’s a really, really good thing.

The Warts

For the most part, I’ve been saying nice things about Swift. But it’s not all goodness. There are some problems, and some of them are pretty big.

The biggest thing is: no concurrency, no threading, no locks, no message passing. There’s absolutely no mention of concurrency at all. These days, that’s downright shocking. My phone has 4 CPU cores. Every machine that this language is intended to program, from the iPhone to the iPad to the Macintosh, has multiple CPUs, and programs for them need to deal with concurrency. But there’s not a shred of support in the language. That is, to put it mildly, absolutely insane. You can, of course, hack concurrency in via libraries, but most recent languages – Rust and Go come to mind – make concurrency a fundamental goal for a really good reason: concurrency is hard to get right, and concurrency hacks come at a significant cost in efficiency, correctness, and optimizability. (Even older languages like Java have concurrency deeply embedded in the structure of the language for exactly that reason; leaving it out of Swift is just shocking.)

Another big one is exception handling. Again, there’s absolutely nothing in Swift. This is particularly surprising because Objective-C had an exception handling system retrofitted into it. The libraries that Swift needs to be able to interact with use an ugly macro-based exception handling system – why is there nothing for it in Swift?

There are also a fair number of smaller issues:

  • The syntax is decidedly odd at times. Not a big deal. But there are some particular syntactic warts. There’s a range expression, which is fantastic. In fact, there are two, and they’re really hard to tell apart. 1..4 is a shorthand for (1, 2, 3); 1...4 is a shorthand for (1, 2, 3, 4). I predict much pain from off-by-one errors caused by using the wrong one.
  • The different ways of declaring objects have seemingly arbitrary rules. A struct is allowed to have class variables; a class can’t. Why? I have no idea; the limitation is simply stated in the docs as a fact.
  • Optionals aren’t as powerful as they could be. If you look at a language like Scala, optionals are implemented using the same type semantics as anything else: they’re a parametric type, Option[T]. That makes it possible to do a bunch of really nice stuff without needing to have any special cases wired into the language. In Swift, they’re a special case, and limits in the type system make it much harder to do many things. As a result, there’s a bunch of special case syntax for things like “optional chaining” and forced dereferencing. This isn’t a huge deal, but it’s frustrating.

And the big problem at the moment: the implementation is incredibly buggy. Not just a little bit buggy: virtually every code example in this article crashed the compiler. Obviously, it’s very early days for Swift, so it shouldn’t surprise anyone that the implementation is immature and buggy, but the degree of bugginess is astonishing. Examples from Apple’s own book on Swift crash the compiler! They’re claiming that with the release of OSX Yosemite this fall, you should be able to start writing real applications using Swift. They’ve got a long way to go, and not a lot of time if that’s going to be true.

Conclusion

This article, despite being ridiculously long, still only really scratches the surface of Swift. I haven’t talked about how Swift does arrays and dictionaries, which is really interesting. I haven’t talked much about syntax. I barely touched on pattern matching and options. I didn’t mention the module system at all. I didn’t talk about named parameters, which are a bit weird, but nice. There’s so much more to Swift than I could possibly talk about here.

The thing to take away, then, is that overall, Swift is very exciting. Functional/Object-Oriented hybrid programming, with a strong parametric type system, and first-class interactivity in a systems programming language! I’m really looking forward to using it. I’m desperately hoping that Apple will open-source the compiler, because I want to be able to use Swift for lots of things, not just Macintosh programming. I don’t have super-high hopes for it, but it’s definitely possible, and I expect that open-source people are probably already working on a GCC front-end.

As an experiment, I’ve been trying to port my parser combinators to Swift. But at the moment, the Swift compiler chokes on the code. When I manage to debug my code/Apple manages to debug their compiler enough to make it work, I’ll post an update with that code, and more info about what it’s really like to write Swift code.

Don’t worry about playing with guns. You can’t hurt anyone.

Pardon me. Someone is wrong on the internet. I must do something!

Some stupid fucking dumb-ass idiot was playing with a loaded gun. He knew it was loaded. And he grabbed it by the trigger to pick it up. Because, you see, he knew that it’s a persnickety old gun, and the trigger usually jams, so he figured it’d be safe to grab it by the trigger. What could possibly go wrong?

Naturally, it went off, and hit his next door neighbor in the head, killing him. He’d just brought his wife and newborn child home when he was killed by the spectacular stupidity of his neighbor.

The police, discussing the case, said:

“The odds, I’d guess, are 1 in infinity,” said BCSO Maj. Tommy Ford. “This was just tragic.”

Those of us who aren’t mathematical morons know that a “1 in infinity” chance is the probability better known as 0.

You might say that I’m being pedantic. You’d be right. But it’s a stupid statement that tries to minimize the incredible stupidity of what happened here. This wasn’t a crazy freakish chance. A handgun is a device which is specifically designed to be an efficient and effective means for killing people. It is not a toy. Picking up a gun by its trigger is, for all intents and purposes, no different from just randomly aiming the gun and pulling that trigger. The chances of hitting someone aren’t huge, but they aren’t tiny, either.

This wasn’t a super-rare freak accident. This was stupidity and negligence by a jackass. Diminishing it, trying to pretend that it was just freakish is a way of making excuses for the
sub-human scumbag who killed an innocent man. Calling it one-in-infinity – that is, saying that it was impossible even though it wasn’t even particularly unlikely – is just a way of covering up for the fact that our insanely irresponsible way of dealing with deadly weapons kills innocent people.

We do a lot of that in America. Guns are downright sacred here. You can insult Jesus with fewer repercussions than insulting a gun owner. A community figure like a local policeman can’t just point out that a gun was used in a horrible, stupid, irresponsible, evil way. Because that would be seen as a threat to the so-called “guns rights” shits who think that any number of innocents getting killed is an acceptable price in order to ensure that they can keep their precious penis replacements.

The man who killed his neighbor in this incident should never have been able to have that gun. He was a convicted felon, prohibited from owning a gun by state law. But we can’t do something sensible like, say, check whether someone can legally own a gun before letting them buy one. That would be an unacceptable infringement on their freedom.

Meanwhile, this shit is being charged with manslaughter. After all, it wasn’t murder. He was just playing with his precious, precious toy.

Innovation isn’t just hardware!

I’m a bit late to the party on this, but hey, such is life! I work on infrastructure at Twitter, and we’ve been going crazy trying to get stuff deployed in time to be ready for the world cup. So I haven’t had time to write before now!

Anyway, late last week, a professor known for stupid self-promoting stunts announced that the Turing test had been passed! This was, according to said professor, a huge thing, a really big deal, a historic event!

(Perhaps almost as big a deal as the time he had an RFID chip implanted in his arm, and announced that now he was the world’s first cyborg!)

Lots of people have written about the stupidity of the claim. The alleged “winner” was a program that pretended to be a teenaged kid with ADD who wasn’t a native English speaker. It didn’t even attempt to simulate intelligence, just to mislead the judges by providing excuses for its incoherence.

But that’s not what I wanted to comment on. Like I said, I’m late to the game, and that means that the problems with the alleged winner of the competition to pass the Turing test have been covered many times already. What I wanted to comment on was a theme I saw in several of the explanations. Here’s a typical example, taken from an article in the New Yorker:

Here’s what Eugene Goostman isn’t: a supercomputer. It is not a groundbreaking, super-fast piece of innovative hardware but simply a cleverly-coded piece of software, heir to a program called ELIZA that was first developed—as a joke—in the nineteen-sixties.

This is an example of what I call “IBM Thinking”. I used to work for IBM, and one of the (many) frustrations there was a deep-seated belief that “innovation” and “technology” meant hardware. Software is just the silly unimportant stuff that runs on top of hardware; hardware is what matters.

According to this attitude, any advance in technology, any new achievement, must happen because someone created better hardware that made it possible. A system that beats the Turing test? If it’s real, it must be a new supercomputer! If it’s not, then it’s not interesting!

This is, in a word, nonsense.

Hardware advances. But so does software. And these days, the big advances are more likely to be in software than in hardware.

There’s a mathematical law, called Church’s thesis, which we’ve known about for a long time. Put simply, it says that there is a maximum limit in computation. All computing devices, no matter how they’re designed or built, are ultimately, at best, equivalent to other computing devices. It doesn’t matter whether you’re a Turing machine, or a PC, or a supercomputing cluster – the set of problems that can be solved by computation is fixed. Different devices may be able to solve a given problem faster by some amount, but they can’t solve problems that are truly unsolvable.

We’ve gotten to the point where we can build incredibly fast computers. But those computers are all built on the same basic model of computing that we’ve been using for decades. When a problem that seemed unsolvable before becomes solvable now, most of the time, that’s not because there was some problem with old hardware that made the problem unsolvable – it’s because people have figured out how to write software that solves the problem. In fact, a lot of recent innovations in hardware became possible not because of some fundamental change in the hardware, but because people worked out clever software to help design and lay out circuits on silicon.

To take one example that’s very familiar to me (because my wife is one of the technical leads of the project), consider IBM’s Watson – the computer that beat the two best human Jeopardy players. IBM has stressed, over and over again, that Watson was a cluster of machines built on IBM’s Power architecture. But the only reason that they used Power was marketing. What made Watson special was that a team of brilliant researchers solved a very hard problem with clever software. Watson wasn’t a supercomputer. It was a cluster of off-the-shelf hardware. It could easily have been built from a collection of ultra-cheap PC motherboards stacked together with a high-speed network. The only thing that made Watson special – the thing that made Watson possible had nothing to do with hardware. It’s just a bunch of cleverly-coded software. That’s it.

To get even closer to home: I work on software that lets Twitter run its system on a cluster of thousands and thousands of cheap machines. The hardware is really unimpressive, except for its volume. It’s a ton of cheap PC motherboards, mounted in rack after rack, with network cables connecting the racks. Google has the same thing, but with even more machines. Amazon has the same thing, except that they might even have more machines than Google! If you handed me a ton of money, I – an idiot when it comes to hardware – could build a datacenter like Twitter’s. It would be a huge amount of work – but there’d be nothing inventive about building a new cluster. In fact, that’s the whole point of cluster-based computing: it’s both cheaper and easier to just buy a couple of thousand cheap machines, and distribute your work among them than it is to buy and program one huge machine capable of doing it all by itself.

What makes those clusters special isn’t the hardware. It’s all software. How do you take a thousand, or ten thousand, or a hundred thousand, or a million computers, and make them useful? With software. It’s just clever software – two systems, called Mesos and Aurora, which take that collection of thousands upon thousands of machines, and turn it into something that we can easily program, and easily share between thousands of different programs all running on it.

That doesn’t take away from the accomplishment. It just puts the focus where it belongs. “Clever software” isn’t some kind of trick. It’s the product of hard work, innovation, and creativity. It’s where many – or even most – of the big technological advances that we’re watching today are coming from. Innovation and advance aren’t just something that happens in hardware.

Yes all men

Unless you’ve been hiding under a rock, you know about the horrible events of last friday. A misogynistic creep killed a bunch of people, because he believed that women owed him sex and affection, because in his own opinion, he was a terrific guy, an absolute prince. But they didn’t give him what he deserved, and so, he decided to punish them for their “crimes”, and went off on a killing spree.

Seven dead people and a dozen injured ones later, people started reacting. Many women, quite appropriately, pointed out the misogynistic venom of this guy, and how common it is among men. And predictably, tons of men got annoyed at that, and replied saying “Not all men are like that”, and therefore, not all men are responsible.

Bullshit.

Yes, we are responsible. We live in this culture. We sit here, day after day, watching this shit, and doing nothing. Because we maintain that we aren’t guilty. Women on the internet are regularly threatened for the crime of speaking, and we sit by and watch, without doing or saying anything about it.

We are part of the problem, because, for the most part, we don’t care. We aren’t the targets of the abuse. We aren’t the ones who can’t walk down the street without getting harassed. We aren’t the ones who can’t speak without being threatened. And so we just don’t care.

When a man like this guy goes out and murders people because of his hatred of women, our main concern isn’t how common people like him are. It’s not how many women are threatened by people like him, or how many women are actually killed by people like him. It’s about how unfair it is that women talk about the kind of hatred they face from men, without making a specific exception for guys like us. What we worry about isn’t the threats they face – it’s how their reaction to being threatened with actual violence hurts our poor, precious feelings.

Yes, it’s all men who are responsible. Let’s face it: we live in a culture where we are the dominant group. If we got together, stood up, and said “We’re not going to tolerate this shit anymore” – if even a decent-sized minority of us were willing to stand up and say it – the hateful assholes would be driven underground. If we stood up and said “No”, and made sure that any shit-headed bigoted woman-hater actually paid some price in standing in our communities, the threats would end.

If we acknowledged that the violent hatred of women was not just a sickness; that a threat to women is a real threat to other human beings, and a serious one; that those threats are crimes. That the everyday threats against women are more serious than the threats of terrorism that we’ve used to justify war. If we did that, we’d have to admit that we need to do something about it.

But we don’t. We sit and watch, and downplay the threats. We say that they’re not really serious, that’s just how people act on the internet. We say that the threats don’t matter – those fragile women just need to “man up”, and grow thicker skins. And when women die – as they do every day – we just say it was a crazy person, there’s nothing we can do about it.

3000 people died in a terrorist attack 13 years ago. We were so horrified by it that we started two wars, and virtually rewrote our entire legal system, because it was such a horrible, terrifying threat! We needed to do something to protect ourselves from the terrorists!

The men who threaten women are terrorists too. They’re working from exactly the same motivations: the use of fear to control behavior. Men who threaten women are making threats to force women to behave the way they want them to.

We’re willing to make insane sacrifices to protect ourselves from the threats of terrorists. But we’re not willing to sacrifice anything to protect women from other men.

Until that’s not true anymore – until we stand up and say “Enough!”, and make the behavior truly unacceptable – then we are guilty. All of us.

You can’t even describe most numbers!

(This is a revised version of an old post; you can see the original here.)

Please read the addendum at the bottom – I got carried away with computability, and ended up screwing up quite badly.

In the comments on my last post, a bit of discussion came up about some of the strange properties of real numbers, and that made me want to go back and get this old post about numbers that can’t even be described, and update it.

In those comments, we were talking about whether or not π contains all sorts of interesting subsequences. The fact is, when it comes to π, we just don’t know.

But… We do know that there are many numbers that do. There’s a property called normality. Informally, normality just says that taken to infinity, all sequences of digits are equally likely to occur. In an infinitely long normal number, that means that any finite length subsequence will occur, given enough time. My name, your name, the complete video of the original version of Star Wars, a photograph of the dinner I just ate and never took a photo of – it’s all there, in any normal number.
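Stated a little more precisely for base 10: a number is normal when every string of k digits shows up in its expansion with exactly the frequency you’d expect from random digits,

\lim_{N \to \infty} \frac{\#\{\text{occurrences of } s \text{ in the first } N \text{ digits}\}}{N} = \frac{1}{10^k} \quad \text{for every digit string } s \text{ of length } k.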

If you think about that, at first, it might seem profound: there are numbers that contain absolutely everything that ever did exist, and that ever could exist. That seems like it should be amazing. If the numbers were at all special, it might be. But they aren’t. Normality isn’t a rare property that’s only possessed by a special meaningful number like π. It’s a property that’s possessed by almost every number. If you could create a randomly selected list of a billion real numbers (you can’t in reality, but you can mathematically, using a finite version of the axiom of choice), odds are, all of them would be normal – all of them would have this property.

But here’s where it gets really strange. Those numbers, those normal numbers that contain everything? Most of them can’t even be described.

It gets stranger than that. We know that we can’t really write down an irrational number. We can write down successively more and more precise approximations, but any finite representation won’t work. So we can’t actually write down the overwhelming majority of real numbers. But in fact, not only can’t we write them down, we can’t describe the vast majority of numbers.

Numbers are something that we deal with every day, and we think we understand them. Over the ages, people have invented a variety of notations that allow us to write those numbers down: the familiar Arabic notation, Roman numerals, fractions, decimals, continued fractions, algebraic series, etc. I could easily spend months on this blog just writing about different notations that we use to write numbers, and the benefits and weaknesses of each notation.

But the fact is, when it comes to the vast, overwhelming majority of numbers, all of those notations are utterly useless! That statement seems bizarre at best. As strange as it seems, though, it’s true. In order to understand it, we have to start at the very beginning: What does it mean for a number to be describable?


Infinite and Non-Repeating Does Not Mean Unstructured

So, I got in to work this morning, and saw a tweet with the following image:


Pi is an infinite, non-repeating decimal – meaning that every possible number combination exists somewhere in pi. Converted into ASCII text, somewhere in that string of digits is the name of every person you will ever love, the date, time, and manner of your death, and the answers to all the great questions of the universe. Converted into a bitmap, somewhere in that infinite string of digits is a pixel-perfect representation of the first thing you saw on this earth, the last thing you will see before your life leaves you, and all the moments, momentous and mundane, that will occur between those points.

All information that has ever existed or will ever exist, the DNA of every being in the universe.

EVERYTHING: all contained in the ratio of a circumference and a diameter.

Things like this, that abuse misunderstandings of math in the service of pseudo-mystical rubbish, annoy the crap out of me.

Before I go into detail, let me start with one simple fact: No one knows whether or not π contains every possible finite-length sequence of digits. There is no proof that it does, and there is no proof that it does not. We don’t know. At the moment, no one does. If someone tells you for certain that it does, they’re bullshitting. If someone tells you for certain that it doesn’t, they’re also bullshitting.

But that’s not really important. What bothers me about this is that it abuses a common misunderstanding of infinity. π is an irrational number. So is e. So are the square roots of most integers. In fact, so are most integral roots of most integers – cube roots, fourth roots, fifth roots, etc. All of these numbers are irrational.

What it means to be irrational is simple, and it can be stated in two different ways:

  1. An irrational number is a number that cannot be written as a ratio (fraction) of two integers.
  2. An irrational number is a number whose precise representation in decimal notation is an infinitely long non-repeating sequence of digits.

There are many infinitely long sequences of digits. Some will eventually include every finite sequence of digits; some will not.

For a simple example of a sequence that will, eventually, contain every possible sequence of digits: 0.010203040506070809010011012013014015016…. That is, take the sequence of natural numbers, and write them down after the decimal point with a 0 between them. This will, eventually, contain every possible natural number as a substring – and every finite string of digits is the representation of a natural number.

For a simple example of a sequence that will not contain every possible sequence of digits, consider 0.01011011101111011111… That is, the sequence of natural numbers written in unary form, separated by 0s. This will never include the number combination “2”. It will never contain the number sequence “4”. It will never even contain the digit sequence for four written in binary, because it will never contain a “1” followed by two “0”s. But it never repeats itself. It goes on and on forever, but it never starts repeating – it keeps adding new combinations that never existed before, in the form of longer and longer sequences of “1”s.
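If you want to play with these two sequences, here’s a small Python sketch that builds finite prefixes of each and checks them for substrings. (The prefix length and the particular test strings are arbitrary choices for illustration.)

# Finite prefixes of the two sequences described above. Both are non-repeating,
# but only the first one keeps picking up new digit strings like "2" or "100".
def naturals_with_zeros(n_terms):
    """Prefix of 0.0102030405...: the natural numbers, separated by 0s."""
    return "".join("0" + str(k) for k in range(1, n_terms + 1))

def unary_with_zeros(n_terms):
    """Prefix of 0.010110111...: the natural numbers in unary, separated by 0s."""
    return "".join("0" + "1" * k for k in range(1, n_terms + 1))

a = naturals_with_zeros(1_000)
b = unary_with_zeros(1_000)

print("2" in a, "2" in b)      # True False: the digit 2 never appears in the unary sequence
print("100" in a, "100" in b)  # True False: neither does binary four, a 1 followed by two 0s

No matter how long you make those prefixes, the two checks on the second sequence will stay False – the structure of the sequence rules them out.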

Infinite and non-repeating doesn’t mean without pattern, nor does it mean without structure. All that it means is non-repeating. Both of the infinite sequences I described above are infinitely long and non-repeating, but both are also highly structured and predictable. One of those has the property that the original quote talked about; one doesn’t.

That’s the worst thing about the original quotation: it’s taking a common misunderstanding of infinity, and turning it into an implication that’s incorrect. The fact that something is infinitely long and non-repeating isn’t special: most numbers are infinitely long and non-repeating. It doesn’t imply that the number contains all information, because that’s not true.

Hell, it isn’t even close to true. Here’s a simple piece of information that isn’t contained anywhere in π: the complete decimal representation of e. e is, like π, represented in decimal form as an infinitely long sequence of non-repeating digits. e and π are, in fact, deeply related, via Euler’s equation: e^{i\pi} + 1 = 0. But the full, infinite digit sequence of e can’t simply be sitting inside π: if it were, then from some position onward the digits of π would just be the digits of e, which would make π equal to a finite decimal plus a shifted copy of e – a trivially simple relationship that nobody believes holds between the two.
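As an aside, Euler’s equation is the kind of thing you can sanity-check numerically in a couple of lines of Python (floating point only gets you an approximation, of course):

import cmath
print(cmath.exp(1j * cmath.pi) + 1)   # prints a tiny imaginary residue from rounding – essentially 0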

Numbers like π and e are important, and absolutely fascinating. If you take the time to actually study them and understand them, they’re amazing. I’ve written about both of them: π here and e here. People have spent their entire lives studying them and their properties, and they’re both interesting and important enough to deserve that degree of attention. We don’t need to make up unproven nonsense to make them interesting. We especially don’t need to make up nonsense that teaches people incorrect “facts” about how infinity works.

Parser Combinators Part 2: This time with types!

If you didn’t read my first post on parser combinators, I highly recommend going back and reading it. This post assumes both that you understand parser combinators, and that you’re already familiar with the way I implemented parser combinators in Python.

All of the code discussed in this post is available in a GitHub repo. It’s still not anything that I’d seriously use for anything important, but it’s good enough to explain stuff.


Net Neutrality

A reader sent me a question about the Net Neutrality stuff that’s been in the news lately:

I keep hearing about net neutrality and the FCC’s new rules. What is net neutrality? Is it a good thing or a bad thing? Should I be worried about it?

I’d actually been meaning to write something about it, but hadn’t gotten around to it, so this seems like a good time!

Before I can really explain net neutrality, we need to go back to some basics, so that we understand how the internet works. A few years back, a not-too-bright congresscritter described the internet as being like a series of tubes. Lots of people made fun of him for it. But honestly, it’s not a bad description! The network is something like a huge collection of tubes. Each connection has a certain amount of capacity, sort of like the diameter of a tube. What you pay for when you get connected to the internet is a connection with a certain amount of capacity, called bandwidth.

Net neutrality is a property of a particular kind of network. Any network is built out of a massively tangled web of wires and other kinds of connections. In the old days, telephones worked on something called circuit switching. Simplifying massively, in a circuit-switched network, when you called your grandmother on the phone, the telephone network flipped a series of switches creating a circuit between your phone and your grandmother’s phone. The network would find some sequence of wires in the telephone network that ran from your phone to the local service center, from there through a sequence of switches leading to the local service center near your grandmother, and from there to your grandmother’s phone. That series of wires was assigned to your connection, and the switches were set to turn that into an electrical circuit connecting your phone to your grandmother’s phone.

Circuit switching is remarkably inefficient. The wires carrying your conversation could carry a huge quantity of information, but the voice signal they actually carried consumed a very small portion of it. It also required massive buildings full of nothing but switches! (For example, there’s this windowless skyscraper in Manhattan that used to be the AT&T switching station for NYC. That huge building used to contain nothing but thousands upon thousands of electronic switches!)

The internet takes a different approach, called packet switching. Instead of dedicating wires to connections, all communication on the internet is broken into little chunks, called packets. Each packet contains routing information that says where it’s coming from, and where it’s going. Your computer is really physically connected to a network called a LAN (local area network) that’s got a small number of computers on it. But one of the computers watching that wire is a special one called a router. The router is connected to both your LAN, and to some other network(s). It takes any packet put onto the LAN that’s supposed to get delivered to a computer that isn’t on the LAN, copies it, and sends it to a network that’s closer to the computer it’s supposed to get sent to. That keeps happening – from network to network to network, until eventually it gets to its destination. The end result is a lot like what used to happen on the telephone network – some sequence of wires gets found which forms a path from your computer to the computer it wants to talk to. But instead of physically reconfiguring the network by flipping mechanical switches, some sequence of routers copies your packets, one at a time, from network to network to network until they get to where they’re supposed to be going.

The key to the whole shebang is that copying step. When a network router gets a packet, and it’s capable of getting that packet closer to its destination, it’s supposed to copy the packet and send it on its way.
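If it helps to see that copying step spelled out, here’s a toy sketch in Python. The router names and the hard-coded forwarding map are pure invention for illustration – real routers build their forwarding tables with routing protocols, they don’t have them baked in like this.

# A toy model of packet forwarding: each router copies the packet one hop
# closer to its destination network. (Names and topology are made up.)
next_hop = {
    "home-router": {"video-service-lan": "cable-co",   "home-lan": "deliver"},
    "cable-co":    {"video-service-lan": "backbone",   "home-lan": "home-router"},
    "backbone":    {"video-service-lan": "datacenter", "home-lan": "cable-co"},
    "datacenter":  {"video-service-lan": "deliver",    "home-lan": "backbone"},
}

def route(packet, at="home-router"):
    """Copy the packet from router to router until it reaches its destination."""
    hops = [at]
    while next_hop[at][packet["dst"]] != "deliver":
        at = next_hop[at][packet["dst"]]   # forward it one network closer
        hops.append(at)
    return hops

request = {"src": "home-lan", "dst": "video-service-lan", "payload": "please send the show"}
print(route(request))   # ['home-router', 'cable-co', 'backbone', 'datacenter']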

Network neutrality is a very simple idea. It says that your router has to treat all packets equally. It’s not allowed to look at a packet, and say, “Oh, that packet is from Joe, and I don’t like him, so I’m not going to send his packet”, or even “I don’t like Joe, so I’ll send his packet down a slower connection”.

It’s important to understand that in this system, there are no free riders. Joe paid his internet provider for a certain amount of bandwidth. In order to give Joe what he paid for, his internet provider paid their upstream for a certain amount of bandwidth. And so on. The bandwidth is paid for.

Suppose, for example, that I want to watch “Arrested Development” on Netflix. To do that, I pay a monthly fee to my local cable company for a high bandwidth internet connection. I’m paying to be able to send and receive a certain amount of data per month, and I’m choosing to use a part of that bandwidth to watch a stupid comedy show.

So my computer sends a message to netflix, saying “Please send me this TV show.” I’m using the bandwidth that I paid for – both the upward bandwidth to send the request, and the downward bandwidth to receive the packets from the video stream – to send and receive packets from my local cable company’s network. As part of my agreement with the cable company, they’re not just supposed to provide me with access to computers on their network – they’re supposed to route my packets out onto the broader internet, and to provide me with a certain amount of bandwidth to do that.

My request goes from my computer to the cable company’s computer. From there, it probably gets sent onto a backbone connection – one of the very high bandwidth connections that, as the name suggests, form the backbone of the internet. My cable company needs to pay for that connection. They pay for a certain amount of bandwidth – both up and down, and the service that I paid for is really a slice of that.

From the backbone, the request goes to an Amazon datacenter. (I think Netflix uses Amazon’s EC2 servers.) Amazon paid for a connection to the backbone, with a certain amount of bandwidth. Part of that is downward – to receive messages like my request to watch a TV show, and part is upward, for them to send me the packets in the video stream that make up the show.

Finally, Netflix pays Amazon for a collection of computers in their datacenter, and for the bandwidth that they use. Netflix receives the request, finds the video in their storage, and starts to send me the packets of the video stream. To do that, they use the upward bandwidth that they paid for from Amazon, which uses the upward bandwidth that they paid for to send it to the backbone. From the backbone, it gets to the connection to my local cable company, where it uses the downward bandwidth that my cable company paid for, and then from there to my computer, using the downward bandwidth that I paid for.

At no step of this chain is anyone free-riding. Every bit of bandwidth is paid for.

Net Neutrality essentially says three things:

  1. Bandwidth gets paid for once. If I pay my cable company for 1 gigabyte of downward bandwidth, then that bandwidth is paid for: they can’t go to netflix and say “MarkCC paid for bandwidth, but if you want him to be able to download your video, you also need to pay us for that bandwidth”;
  2. When I pay for an internet connection, I pay for bandwidth – that is, for a total quantity of traffic. I don’t pay separately for bandwidth from netflix and bandwidth from twitter.
  3. Routers need to treat all traffic equally. They don’t get to say “Oh, yeah, MarkCC paid for his bandwidth, but since he’s downloading from netflix, and they didn’t pay us extra to deliver it to him, we’ll slow it down.”.

Should you be worried about it? Hell yes! Neutrality is the entire reason that the internet, as we know it, exists. Take it away, and it becomes close to impossible for new businesses to set up shop and compete with existing ones. Netflix has lots of money; if Comcast decides to demand that they pay extra for their customers to be able to watch movies, they’ll be able to afford it. But a new company that wants to set up shop and start competing with Netflix won’t have the resources to do it.

But ignore all of that. The real thing to understand about the opposition to net neutrality is that it’s all about double billing. Internet service providers are already highly profitable businesses, and they deserve to be. They’re providing a valuable service that requires a substantial amount of expensive infrastructure. They built that infrastructure, and they’re profiting from it. That’s all fine. But what they’re trying to do now is say that when you pay for a service, they’re not obligated to actually give you that service unless someone else also pays for it.

You’re already paying for your network connection and your bandwidth: what the people opposing neutrality are arguing is that that’s not enough. They want to be able to double-charge for traffic. You pay for your download bandwidth. The services that you use, like Netflix, pay (a lot!) for their upload bandwidth. And now, your internet service provider wants to be able to charge the services again for the download bandwidth that you already paid for.

That’s what it’s all about. Net neutrality is really just saying that you buy bandwidth from your ISP, and they have to provide that service. When they try to claim that they should be able to separately bill a company like Netflix for their traffic, what they’re saying is that they want to hold the service that you already paid for hostage until they get paid for it again.

To close, let’s jump back to the old telephone metaphor. In the old days of telephone service, you used to pay a basic fee every month for your connection to the telephone network. That didn’t cover calls outside of your local phone company’s exchanges – those were extra. If you wanted to call your grandmother, you had to pay for the use of the switches and wires that would be used to connect you to her. You paid for your connection to the network. Your grandmother paid for her connection to the network. And you paid for the use of the lines to connect your network to her network for the call.

Under the arguments of the anti-net neutrality people, the telephone company should have been allowed to say “Ok, sure, you paid for the network to connect to and talk to your grandma. But we’re not going to let you talk to her unless she also pays for the connection that you already paid for.”

Depression and Arrogant Assholes

This is somewhat off-topic for the blog, but pretty damned on-topic for my life.

I’ve talked about this before, but it’s the kind of thing that just keeps coming up.

I’ve got a bunch of different medical conditions.

I’m a post-surgical GERD sufferer – for all intents and purposes, I was born without a sphincter at the top of my stomach. That means that acid could easily get out of my stomach, into my esophagus, my vocal cords, and my lungs. Without surgery, I would probably be dead of esophageal cancer as a result. Since the surgery, I’ve had lingering problems with my swallowing reflex being fouled up, being unable to burp or vomit, and suffering from pretty severe chest pain from muscle spasms. I take a variety of medication as a result, to keep it all under control. For over ten years, I took benzodiazepines every day as part of the treatment regimen for that: benzos are an addictive drug.

I’ve also got some of the worst allergies of anyone I know. When I first got tested for allergies, based on my symptoms, the allergist selected a set of 45 possible allergens to test me with. Most people with bad allergies would show a significant reaction to 15 or so. I came up positive for 42. I needed to get allergy shots for 25 years, and I continue to take antihistamines and inhaled steroids on a daily basis to treat it.

I also have clinical depression, and very severe social anxiety. I take an SSRI every day for the depression, but I’ve yet to find anything that works for the social anxiety.

No one ever gives me any grief about my stomach troubles. I mean, what could I do? The muscle at the top of my stomach didn’t work. There was a huge amount of acid getting into my throat and even my lungs every day! Of course I had to do something about it!

And allergies? Man, that sucks. Everyone feels bad when they see me sniffling, or when I have to pull out an inhaler because I can’t breathe. But you know, luck of the draw, some people get stuck with allergies. No fun, but it’s manageable with medication, right?

But depression? Whoa, baby. Whole different story. Instead of the benign sympathy or indifference that I get from people who hear about my other troubles, I get shit like this:

To be clear, that wasn’t directed at me in particular. No, it was directed at everyone who’s suffering with depression. See, depression isn’t a real illness. People who are living with depression aren’t suffering from a real illness. No, we’re worshipping our depression. All we need to do is stop being such pathetic assholes, and get up and “make strides to improve our life”.

Before I started taking my SSRIs, I didn’t worship my depression. I didn’t even realize that I was depressed. I just felt like the world had gone flat. It’s hard to describe it better than that. I didn’t really feel much of anything. I wasn’t sad. I wasn’t happy. I wasn’t anything. The world was all grays, no colors. Good things happened, and I couldn’t feel good. Bad things happened, and I couldn’t feel bad. My wife was pregnant with our second child, and I wasn’t happy, I wasn’t nervous, I wasn’t anything. I was just flat.

When I finally realized that there was something wrong with me, I got a referral from my doctor, and I saw a psychiatrist, who prescribed medication for me. About two weeks later, I realized that stuff was different, because I was noticing colors. I never stopped seeing them, but they stopped registering – they didn’t mean anything. I literally felt as if I weighed less – like someone had laid an invisible lead blanket on me, and now it had been removed. Something bad happened at work – the project that I’d been working on for the previous two years had not gotten its funding – and I was upset about it!

It was an amazing thing, a total change of the world. The pills didn’t make me happy. With the stuff at work, and the stress of a new child coming, I was probably more unhappy than I was before I started taking them. But the unhappiness was real, it was mine, and I felt it.

After about a year of taking the pills, on the advice of my doctor, I tried stopping it. For people with symptoms like mine, he said that about 40% would relapse pretty quickly, but 60% would be fine without any medication. After about three months without it, the world went back to that flat drab nothingness – I was part of that 40%. So I restarted the medication, and I haven’t stopped since.

I won’t even get into the social anxiety stuff here. That’s even worse than the depression. Depression is commonly viewed more as a personal weakness than an illness; but SA isn’t even seen as that: it’s just a joke.

Contrary to what the assholes out there say, depression isn’t a personal weakness. It’s not the pathetic obsession of weak-souled losers who just need to get off their couch and stop being such a schlub. It’s a real illness. It’s difficult, and it’s painful.

My depression is just as real as my GERD was, or as my allergies and asthma are. It needed to be treated just as seriously as they did. Getting stomach surgery probably saved my life. Getting treated for depression probably did too.

The assholes who try to portray depression as weakness, who mock people for suffering from depression, who make cracks about being nice to our moms and getting off of our butts: they’re not helping. But really, they don’t want to help. They want to feel smug and superior. But they’re doing worse than just not helping. They’re actively making things worse. By reinforcing the stigma against mental illness, they’re making it less likely that people who desperately need help will be able to get it.

So, as I responded to the tweet I quoted above: