Several commenters pointed out that I made a number of mistakes in my description of Erlang. It turns out that the main reference source I used for this post, an excerpt from a textbook on Erlang available on the Erlang website, is quite out of date. I didn't realize this; I assumed that if they pointed towards it as a tutorial, it would represent the current state of the language. My bad. As a result, several things that I said about Erlang – including some negative comments – were inaccurate. I've updated the article, and the changes are marked with comments.
As long-time readers will recall, one of my greatest interests is programming languages. A while back, I wrote a tutorial on Haskell, which is one of the most influential languages currently in the programming language research community. Haskell is a pure, lazy, functional language which has gained a lot of attention in recent times for being incredibly expressive while maintaining a solid theoretical basis with well-defined semantics. What makes Haskell unusual is that it's a completely pure functional language – meaning no true state at all: not for I/O, not for assignment, not for mutable data. But through the use of a clever construct called a monad, you can create the effect of state without disturbing the functional semantics of the language. It's quite a nice idea, although I must admit that I remain somewhat skeptical about how scalable it might be.
One of the competitors of Haskell for mindshare in the community of people who are interested in functional programming languages is a language called Erlang. In many ways, Erlang is the polar opposite of Haskell. Erlang is, basically, a functional language, but its designers didn't object to tossing in a bit of state when it made things easier. It's dynamically typed, and even for a dynamically typed language, its support for typing is weak. It's gotten attention for a couple of big reasons:
- Erlang was designed by Ericsson for implementing the software in their switches and routers. It's the first functional language designed by a company for building production software for extremely complex, performance-critical, low-level applications like switching and routing.
- Erlang is specifically designed for building distributed systems. As I’ve mentioned before, programming for distributed systems is incredibly difficult, and most programming languages are terrible at it.
Concurrency and distributed systems are a big deal. I’d argue that programming for concurrency and distribution are the biggest problems facing software developers today. Pretty much every computer that you can buy has multiple processors, and to take full advantage of their power, code needs to use concurrency. And the network is an unavoidable part of our environment: many of the applications that we’re writing today need to be aware of the internet, and need to interact with other systems. These are just simple facts of the modern computing world – and software developers need tools to deal with them.
You see, concurrent programs are hard to write. Part of that is intrinsic complexity: there are simply more issues that you need to address to write a correct program with concurrency than there would be in a similar non-concurrent program. The other part of it involves tools and languages: the programming languages and related tools that we use are terrible at concurrency.
Many Haskell proponents claim that because Haskell doesn't specify the order in which things are done, and makes the data dependency relations explicit, it's more amenable to automatic parallelization. The problem with that claim is that it doesn't work. Haskell systems don't, in general, parallelize well. They're particularly bad for the kind of very coarse thread-based concurrency that we need to program for on multi-core computers, or distributed systems. Haskell usually ends up falling back to monads: monads provide an explicit way of writing parallel code. But monads aren't a great answer. In Haskell, almost everything ends up falling back to monads – I/O, state, mutable structures, and so on. And monads are hard to combine! So you're generally stuck with an extremely difficult problem of how to manage the non-concurrent monads that you need alongside the concurrency monad. The end result is as bad as any of the other god-awful train-wrecks that we use.
Erlang is different. Concurrency operations aren’t a second-class mess grafted onto an existing non-concurrent language. Concurrency is exactly what it was designed for, and at least for the Ericsson applications which it was designed for, it does a great job. (The Ericsson folks have published case study papers demonstrating how well Erlang performed.) The big question to me is, how much of the performance of Erlang comes from the fact that the people who designed it are the leaders of the team who used it? How much of it comes from the fact that it was specifically designed for the applications that were implemented in it? And how much comes from the general qualities of the language for implementing arbitrary distributed systems? That’s what I really want to know. (Plus, Erlang also looks like a good language for implementing Pica, so
this also gives me a good excuse for learning enough about it to see if it will really do the
job.)
All of this is an extremely long-winded, almost Oracian way of saying that I’m going to start
writing a series of tutorial articles on Erlang. For today, I’m not going to go into depth on
anything – I’m just going to show a couple of examples of what Erlang looks like. I’ll get into
depth in subsequent posts.
For starters, let’s look at the pure functional part of it. We’ll start off with that canonical example of all functional programs, the factorial function.
-module(tut).
-export([factorial/1]).

factorial(0) -> 1;
factorial(N) -> N * factorial(N-1).
Right off, you should notice that there are two sub-languages: a module language (where you describe how things fit together), and an expression language (where you describe what things do). The module language statements always start with "-". A source file is a module, and always starts off with a module declaration, declaring its name. The standard Erlang implementation expects a module to share a name with the source file that contains it. So the example above would need to be in a file named "tut.erl".
The export statement tells the Erlang compiler which declarations in the module should be visible to code outside of the file. This is one of the places where a bit of ugliness from the original implementation creeps through: in the Prolog style, the name of a function is given by both its name and its number of parameters. So factorial is factorial/1 in module tut.
With the housekeeping stuff out of the way, we can look at the meat of things. It’s a simple pattern-matching based declaration of factorial – virtually identical to what you would have written in Haskell or SML. The first case declares the result when the parameter is “0”; the second case for any other value. There are no type declarations: unlike Haskell, Erlang is not strongly typed. (As people who know me might guess, this doesn’t thrill me; I’m a big fan of strong typing. But I’m willing to keep an open mind, and see how it works out.)
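To try it out, you'd compile the module from the Erlang shell and then call the exported function. Here's roughly what that looks like (a quick sketch – the exact shell output varies a bit between releases):

c(tut).
> {ok,tut}
tut:factorial(5).
> 120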
Erlang has a small set of built-in primitive types, including:
- Integers, written in the usual syntax.
- Floating point numbers, again written in the usual syntax.
- Atoms. Atoms combine both the idea of strings, and the lisp notion of symbols. If an atom starts with a lower-case letter, and contains no punctuation or spaces, then it doesn't need to be quoted. Otherwise, it's enclosed in single quotes.
- Pids. Pids are process identifiers – references to other processes which can be used for communication.
- Functions. First-class function values.
(I originally said there were only four basic types. I was wrong – there are more. I’m not mentioning them all here, but these are the key ones for the discussion in this article.)
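Here's what a few of those look like in the shell (just a sketch – the exact pid value you see will differ from run to run):

hello.                        % an atom: lower-case, no quotes needed
'Hello World!'.               % an atom containing spaces must be quoted
self().                       % the pid of the current (shell) process
> <0.33.0>
Square = fun(X) -> X * X end. % a first-class function value
Square(7).
> 49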
There are two main kinds of data structures in Erlang: lists (which are variable length), and tuples (fixed length). Lists are written with square brackets, like [1, 2, 3]; tuples are written with curly braces, like {1, 2, 3}. And like Haskell, everything . As I said above, there's really no type system here, so you build data types informally, using lists and tuples, rather than declaring strict types. There's no enforced mechanism, but a common style is to use tuples where the first element is an atom, which is used the way that a type constructor name is used in Haskell. So, for example, we might declare a tree in Haskell as follows:
data Tree a = Node (Tree a) (Tree a) | Leaf a
And then we could define a tree value as:
tree = (Node (Node (Leaf 3) (Leaf 4)) (Leaf 1))
In Erlang, we can dispense with the declaration, and just write it out:
Tree = {node, {node, {leaf, 3}, {leaf, 4}}, {leaf, 1}}.
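Since there's no declared type, functions that consume the tree just pattern-match on the tagged tuples directly. For example, a little function to add up the leaves might look like this (the function name is mine, purely for illustration):

sum_leaves({leaf, N}) -> N;
sum_leaves({node, Left, Right}) ->
    sum_leaves(Left) + sum_leaves(Right).

Applied to the Tree value above, sum_leaves(Tree) returns 8.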
(In addition, Erlang includes a preprocessing phase which helps reduce the amount of work involved in creating user defined record types. I’ll talk about this in a later post.)
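To give just a taste of that, a record declaration and a function using it look roughly like this (a sketch – under the hood, the record is simply a tagged tuple like the ones above):

-record(person, {name, age}).

birthday(P) ->
    P#person{age = P#person.age + 1}.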
Basic data structure manipulating programs are written in a style very similar to
the way you’d write them in Haskell or ML – basic pattern-matching style functional programming. But Erlang functions are eager – they evaluate their parameters before the function is executed. That means that a lot of Haskell-ish idioms won’t work, but it also means that the way things work is a lot more obvious: lazy evaluation is frequently confusing. To give an example of a list-based function in
Erlang, here’s an implementation of the basic “map” operator, which takes a function and a list, and returns a new list containing the result of applying the function to each element of the original list.
-module(map).
-export([map/2, twice/1]).

map(_F, []) -> [];
map(F, [Car|Cdr]) ->
    NewCar = F(Car),
    NewCdr = map(F, Cdr),
    [NewCar|NewCdr].

twice(X) -> 2*X.
This takes a function parameter, "F", and applies it to each value in sequence. This can be called as follows:
Example = map:map(fun map:twice/1, [1, 2, 3]).
> [2, 4, 6]
To get the value of a non-anonymous function, you use its full name, qualified with the arity, and preface it with the word “fun”. So “fun map:twice/1” is the function “twice” from the module “map” which takes one parameter. (I originally had a discussion here of how Erlang’s function parameters were dreadfully awkward. Turns out that that’s an anachronism: it was true in old versions of the language, but hasn’t been true for years.)
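You can also pass an anonymous function directly, without declaring it in a module at all:

map:map(fun(X) -> X * X end, [1, 2, 3]).
> [1, 4, 9]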
I’m just going to show you one more quick example today, which gives you a little taste of what
concurrency looks like in Erlang. This example I’ve copied verbatim from the Erlang manual.
-module(echo).
-export([start/0, loop/0]).

start() ->
    spawn(echo, loop, []).

loop() ->
    receive
        {From, Message} ->
            From ! Message,
            loop()
    end.
The "start" function creates a new process, using the "spawn" operator. Spawn takes three parameters: the module of the function to run in the new process; the name of the function to run in the new process; and a list of parameters to pass to the function when it's invoked. So "start" spawns a new process running the "loop" function, and returns the process identifier of that new process.
The "loop" function is a tail-recursive loop, which receives a message from another process containing a tuple; the first element of the tuple is expected to be the process-ID of the sender, and the second is an arbitrary value. "loop" just echoes the value part of the message back to its sender; "!" is the message-send operator, which sends its right parameter as a message to the process identified by its left parameter.
To use this, we’d first invoke “start” to create a process:
Id = echo:start().
That starts the new process, and stores its pid in “Id”. Using the process identifier of the newly
spawned process, we can send it a message:
Id ! {self(), hello}.
> {<0.25.0>,hello}
This sends a message to the process in Id, containing the pid of the sender (computed by invoking "self()"), and the value "hello". The shell prints the value of the expression it just evaluated, and a send expression evaluates to the message that was sent – so it prints something like "{<0.25.0>,hello}", where "<0.25.0>" is the pid of the sending shell process. Meanwhile, the spawned process receives the tuple, and echoes the value "hello" back to the sender's mailbox.
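To actually see the echoed reply, you can dump the shell process's mailbox with flush():

flush().
> Shell got hello
> ok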
programming for distributed systems is incredibly difficult, and most programming languages are terrible at it
I would go one step further, and say _all_ languages are terrible at it. Some are slightly less terrible. I know a programmer who swears to me that using volatile (in C) makes his code "thread safe", yet he can't even explain basic locking to me correctly.
There’s a hole in your article: “And like Haskell, everything .”
(Also, it’s “Ericsson”.)
Interesting to see that Erlang contains a dedicated message passing operator. Is it restricted to local processes, or can “pid” contain the address of another machine altogether?
Tom: To me concurrency is a bit like quantum mechanics – if you don’t think it’s hard you probably haven’t thought about it enough. I probably slaughtered that quote, but you get my point.
Hank:
A process identifier in Erlang can be for a process anywhere. There’s a mechanism (which I’ll write about in a future post) for specifying ports that should be “visible” from outside, and for specifying how to connect to a visible port from a process on another machine.
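To give a rough idea of what that looks like (a sketch – the node and host names here are invented): if you start two Erlang nodes with, say, "erl -sname a" and "erl -sname b", and register the echo process under a name on one node, a process on the other node can send to it by naming the registered process and the node:

%% on node a@host1:
register(echo, echo:start()).

%% on node b@host2:
{echo, 'a@host1'} ! {self(), hello}.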
> Haskell systems don’t, in general, parallelize well. They’re particularly bad for the kind of very coarse thread-based concurrency that we need to program for on multi-core computers,
I have to strongly dispute this. Coarse-grained, multi-core concurrency is the strongest aspect of concurrent and parallel Haskell at the moment, and with type-erased, native-code compilation, GHC Haskell typically outperforms competing languages on threaded benchmarks.
Multi-core concurrency is very well supported, with a wider range of concurrency abstractions available in Haskell than in any significant competing language, including Erlang. I find this statement quite bizarre, in fact.
With forkIO, Control.Concurrent.Chan, Control.Parallel, transactional
memory and nested data parallelism, there’s a wide range of composable concurrency and parallelism abstractions, that are very fast.
It’s trivial to break up a Haskell process into separate lightweight
Haskell threads running on the multicore runtime, communicating via
lock free message-passing channels, MVars or transactional memory.
And then once your program is running over multiple threads, communicating via lock-free channels, mapping these threads onto newer, larger multicore hardware is just a matter of setting -N16 (or more..) on the command line flags.
And being native code compiled, with type erasure, the result is
very fast concurrency:
http://shootout.alioth.debian.org/gp4/benchmark.php?test=chameneos&lang=all
http://shootout.alioth.debian.org/gp4/benchmark.php?test=threadring&lang=all
I do agree that support for _distribution_ is much weaker in current Haskell implementations than in Erlang, but to say that multicore concurrency in Haskell is weak is just plain wrong.
I’ve ported this example concurrency code to Haskell, over here http://cgi.cse.unsw.edu.au/~dons/blog/2007/11/26#no-headaches
In particular, there are no scary monadic issues involved. Threads are just pure Haskell computations, communicating by the usual mechanisms, and the code scales from single to multiple cores happily.
Unless I misunderstood you, you *can* use function parameters “directly”:
map(_, []) -> [];
map(F, [H|T]) -> [F(H) | map(F, T)].
Many Haskell proponents claim that because Haskell doesn’t specify the order in which things are done, and it makes the data dependency relations explicit, and that therefore, it’s more amenable to automatic parallelization.
I don’t know of any “Haskell proponent” who has seriously claimed that for at least ten years, though this was, indeed, a strong research focus in the 80s. It does, of course, make explicit parallelisation easier than in many languages, but I don’t think that anyone has ever claimed that it’s easier in Haskell than in Erlang.
One thing that isn’t mentioned is why Erlang does real-time concurrency so well: Each thread has its own heap. This means that threads can be transparently migrated over a network, and that threads can be garbage collected independently without halting any other threads.
Haskell simply does not operate in the same space as Erlang. Erlang does real-time, Haskell doesn’t. Erlang does network independence, Haskell doesn’t. And Haskell does elegant multi-core, shared-state concurrency, Erlang doesn’t.
Vincent:
What I meant about function parameters is in contrast to Haskell or Scheme. So, for example, in Haskell, you could write map as:
map f [] = []
map f (x:xs) = (f x):(map f xs)
Notice how we apply “f” in the Haskell. A function is a value like any other, and it can be used just like any declared function. To invoke f on the value x, you use “f x”, whether f is a top-level function, a locally declared function, or a function parameter.
In Erlang, given a declared function F, I can invoke it
by F(X). Given a function parameter, I can only invoke it via apply(F,[X]). And if f is a top-level function, I can’t pass it to a function g by g(f). I need to pass it by g({module,f}).
There are a number of inaccuracies in the article:
* First a minor one:
“There’s no enforced mechanism, but a common style is to use tuples where the first element is a __string__, ”
string should in this context be an atom; an Erlang string (double-quoted text) is a list containing character codes/integers (8-bit, Latin-1 encoded).
There is a preprocessor-based record notation in Erlang which makes it slightly easier to deal with these kinds of tag tuples, and which also adds some compile-time and run-time checks – checks for correct field names, correct tag name and tuple length.
———————–
* The second major error:
“This takes function parameter, “F”, and applies it to each value in sequence. Function parameters are rather awkward in Erlang: you can’t use a function parameter in a standard call; it can only be used through apply”
There are several ways to pass and call functions in Erlang:
– to create a function closure (F) wrapping a segment of code in an Erlang fun, do:
F = fun(…Args…) -> …Code… end
– or define a reference to a function:
F = fun foo/3
F = fun module:foo_function/2
– it is also possible to pass a function name or {module,function} tuple as mentioned in the article (although the tuple notation is only retained for backwards compatibility with ancient code).
To use a function reference one can simply call it as:
F = …,
… = F(…),
or directly:
… = (fun(A) -> A * A end) (2),
… = (fun add/2) (2, 2),
… = (fun math:pow/2) (2, 2),
… = {math, pow} (2, 2),
Function and module names can be used the same way:
FunctionNameAtom = add,
… = FunctionNameAtom(2),
ModuleNameAtom = math,
PowFunctionNameAtom = pow,
… = ModuleNameAtom:PowFunctionNameAtom(2, 2)
There are some rare cases when apply is useful, but it is rare to see it in modern Erlang code.
———————–
* It appears that this article is based on the old erlang book (from 1993) which is available at http://www.erlang.org/download.html (erlang-book-part1.pdf).
More current information is available in the html erlang documentation here:
http://www.erlang.org/doc/ (online version)
And in the new erlang book:
http://www.pragprog.com/titles/jaerlang/index.html
Check out Erlang's "fun" syntax. You can pass a top-level function f to the function g by using "g(fun f/1)" – that is, the keyword "fun", the name of the function, and its arity. Functions in other modules are available through "fun module:name/arity".
http://erlang.org/doc/reference_manual/expressions.html#6.17
In Haskell, almost everything ends up falling back to Monads – I/O, state, mutable structures, and so on. And monads are hard to combine! So you’re generally stuck with an extremely difficult problem of how to manage the non-concurrent monads that you need with the concurrency monad. The end result is as bad as any other of the god-awful train-wrecks that we use.
Have you ever written a concurrent program in Haskell? Concurrency happens in the standard IO monad; there is absolutely no issue of a non-concurrency versus concurrency monad.
Have a look at Don’s constructive proof, refuting any claims that coding the example in this post is any harder in Haskell than in Erlang. Concurrency in Haskell is easy.
Thanks, Mark, I really enjoy your forays. Concurrency issues, and loosely coupled processes, have been in the fore of my work for many years now, but I have for the most part had to deal with them on my own. 4K bytes of ROM in a Z80 micro do not leave much room for formal languages! The domain has been error-free data communications with line sharing (i.e. multiple independent conversations sharing one or more lines). Error-correcting protocols require maintaining state images of the remote participants and keeping the states synchronized. This stuff has usually been tightly hand-coded in assembly for me because of the physical system constraints, but the last few years have seen an explosion of capacity in what can be cost-justified in the embedded systems arena.
After exploring Erlang, how about exploring E (http://www.erights.org/) for us? I particularly like its ‘eventually’ construct, one that I have been using informally in my current box’o’objects work…
Without knowing Erlang, is it fair to say that it is excellent at distributed systems? I think that would certainly be its niche if it were.
Re: the errors in the post. I’m working from public docs, starting with the available section of the old-ish Erlang text. I’m learning Erlang as I go, so I’m not terribly surprised that I got a couple of things wrong. I’ll correct them in the current post when I get some time.
Re: Haskell. I still maintain that Haskell is lousy at the kind of concurrency that concerns me. There’s very fine-grained shared memory concurrency – Haskell *might* be OK at that, although my experiences with it haven’t been great. But coarse-grained explicitly threaded code without shared memory, with message passing and explicit synchronization – i.e., almost all distributed code in the real world – is *not* great in Haskell.
Reproducing the examples mentioned in this post in Haskell is no great feat – they’re *trivial* examples, deliberately so, since I’m just trying to give a tiny bit of the flavor of it. As we’ll see later in the series, trivial things are very much *not* good demonstrations of the distributed programming support in Erlang. Later, when we get to some much more heavy-duty concurrency with complex synchronization semantics – then, I’ll be *very* impressed (and very happy!) if you can show me a Haskell equivalent. But give me a chance to get to it!
Mark, I really think you should get a copy of the recent Erlang book at The Pragmatic Bookshelf: http://www.pragprog.com/titles/jaerlang/.
It will definitely speed up your learning and help you avoid most of the outdated idioms you may find in some older documentation. Godspeed!
Hi Mark,
I’m eagerly awaiting the meatier examples in which you prove the superiority of Erlang. However since you haven’t got around to them yet, perhaps you’d care to retract, at least for the moment, your comments about Haskell? They’re nothing more than uninformed assertions offered without proof, which is surprising to see from you as you normally have a more scientific approach.
Regards
Neil
Hello Mark,
thank you very much for your article. Looking at the reactions it seems highly visible.
First some critique.
* I think you are too harsh on Haskell.
* Then I am not happy about some of the terms you use. Is there really a term "functional concurrency"? You are correct that the source decomposes into two syntaxes: one is Erlang syntax, and the other is the syntax of the Erlang pre-processor epp (have a look at lib/stdlib/src/epp.erl; the documentation seems rather poor), but I would not call it a "module language".
* The Prolog syntax is not ugly, IMHO.
* There are 8 primitive types, see [Armstrong 2007], Appendix A.1
* I am glad you did not mention records as a fundamental data structure, as these are tuples handled by epp.
Then a wish. As you may know, the most prominent current Erlang applications are ejabberd, yaws and tsung. I would really like to know (and probably a lot of folks from the XMPP community as well) if it is true, and if so, why Google switched from ejabberd for Google Talk to something else. Can you ask your colleagues to shed light on this? Sorry for that impolite question. 🙂
Richard:
I’ve already ordered a copy of the more recent book.
Neil:
I didn’t assert that Erlang is superior to Haskell – just that in one specific problem domain where I am not satisfied with Haskell’s support, that Erlang may be better.
As I’ve said before: I think that nearly all languages are terrible at distributed programming. (The only languages I can think of that I thought did a good job were Hermes and Obliq, both of which died out a few years ago.) Erlang *might* be better; since I’m still learning it, I haven’t formed an opinion yet.
But I still think that Haskell, like almost everything else out there, doesn’t handle distributed systems well. I need to edit this post when I have time; I’ll reword it to make it clear that I’m talking about distributed systems style concurrency when I criticize Haskell.
To expand on earlier comments about the fun notation, functions are a primitive data type in Erlang. You can define a fun using:
1> F = fun(X) -> X * X end.
#Fun<erl_eval.6.56006484>
2> lists:map(F, [1,2,3]).
[1,4,9]
There's a strange distinction between named functions, which are tied to modules and can't be directly passed to functions, and anonymous functions (funs), which can be. You cast a named function to an anonymous function using:
3> G = fun lists:map/2.
4> G(F, [1,2,3]).
[1,4,9]
The distinction stems from Erlang’s coolest feature, hot code swapping at run time. What happens if your double function is recompiled half way through the map? So, you have to choose between passing a reference to the compiled code, or the name of a function which may be recompiled.
>The only languages I can think of that I thought did a good job were Hermes and Obliq, both of which died out a few years ago.
Bearophile:
To do that properly would take a full post, which I’ll probably do at some point. For now, I’ll give you a quick idea.
Hermes was an ultra-strongly typed language designed at IBM research. The fundamental construct in the language is the process. Everything is a process: a “function” is just a process that uses a particular style of message/response. Any process that follows the function pattern can be invoked synchronously or asynchronously; and the implementation can be written either synchronously or asynchronously. To run a program with threads on a single machine, you’d just run it; to distribute it, you could either run programs on multiple machines and have them connect to each other, or you could provide a map file saying how to assign processes to machines.
Obliq is quite different. It’s a scripting language from the Modula-3 tools designed by Luca Cardelli. It’s an object-oriented language based on object composition. It’s based on using static scoping for security. It’s an amazing language, which I should write about someday.
(The only languages I can think of that I thought did a good job were Hermes and Obliq, both of which died out a few years ago.)
I always have wondered if Obliq had an audience outside of Modula-3 dorks (a group in which I include myself). I think it was an excellent language for the kind of distributed programming people want to be doing now.
Great, I always wanted to know more about a language that has been rumored to have been among the make or break factors for Ericsson telecom.
IIRC it was used to modularize the software, corresponding to an extreme hardware modularization.
Agreed. If you know that it is a Swedish based company, despite spelling “Erik” as the more international “Eric”, the mnemonic is: common swedish names were patronymic a few generations back.
So it’s “Ericsson” from “Erics son” (en: “Eric’s son”), with a typical swedish concatenation.
I think – ve swedizh kooks spel funni. Eushke-beushke-beu, vile I put de Moose in de oven.
(What does moose taste like, anyway? There must be someone who eats it, somewhere on earth.)
OT (or should we take it to the recipe thread? :-):
Um, of course Swedes eat elk. It’s our largest game. And as wolves are but slowly making a come back by migration from Russia and Norway, there are too many of them considering our intense forestry and many unfenced roads. So it’s not an uncommon meat.
Elk taste gamy, but in the nice way. Especially with a spicy sauce and the usual mellowing condiment of lingonberry jelly. My personal favorite is elk burger, as the meat mix and texture is just perfect.
Speaking of texture, I was treated with bear tongue on a semiconductor congress in Finland. Not that tongue(s) is unusual to have in your mouth. But it was decidedly an odd sensation to purposely chew on one.
Okay, elk burger just went on my list of things to try. I've enjoyed bison burger and ostrich burger, but I think I may be allergic to alligator.
I had elk a few months ago, in some nice chops. The sauce was vaguely spicy, though it was not served with any jelly. It was delicious!
In your code you write: two(X) -> 2*X.
But later in text and code refer to it as: twice
Elk? Don’t you mean moose, Torbjörn?
The last few posts in this thread remind me waaaay too much of the opening credits in “Monty Python and the Holy Grail” — off topic and elk-obsessed. Outstaanding! (sic)