I’ve gotten both some comments and some e-mail from people in response to my mini-rant about Erlang’s macros. I started replying in comments, but it got long enough that I thought it made sense to promote it up to a top-level post. It’s not really an Erlang issue, but a general language issue, and my opinions concerning it end up getting into some programming language design and philosophy issues. If, like me, you think that things like that are fun, then read on!
I’ll start at the absolute beginning. What’s a macro?
A macro is a meta-structure in a language which defines a translation of the code to be performed before compiling or running that code. To give a simple example, let’s look at a short C++ program:
#include <iostream.h>

#define f(x) (2*x)

int main() {
  cout << f(3) << endl;
}
“f” is a simple macro. Whenever you write “f(x)” in a program, the pre-processor, which runs before the compiler, will replace it with “(2*x)”. So in the main function, the macro preprocessor replaces “f(3)”, so that the program that gets compiled is actually the following:
#include <iostream.h>

int main() {
  cout << (2*3) << endl;
}
Whatever you put inside of “f(...)” will be put into the replacement. In the case of C, the macros are textual: the compiler makes no attempt to verify that the body of the macro is in any way valid C++ code. It just does a purely textual substitution. This means that it’s possible to write a macro that generates something totally invalid and incomprehensible. If you invoke the above macro with “f(+)”, the preprocessor will expand it to “(2*+)”.
Textual replacement macros are the trickiest and most error-prone. More sophisticated macro systems, like the ones in Lisp, work on some form of parse tree. Lisp code can be represented in terms of the basic list types of Lisp, and the macro system takes advantage of that: a macro describes how to construct code by performing list manipulations. For example, in CommonLisp I could use the following to define a simple “while” loop in terms of the built-in “do” loop:
(defmacro while (test &rest body)
  (append (list 'do nil (list (list 'not test)))
          body))
What this says is: when you see an expression that looks like a call to “while”, bind its first argument, unevaluated, to the parameter “test”; bind a list containing the rest of the arguments to a parameter named “body”; and then execute the macro code using those parameter bindings, generating as a result the list form of a Lisp syntax tree. So, for example, if you wrote “(while (< x 10) (setq x (+ x 1)))”, the macro would evaluate its body with “test” bound to the list “(< x 10)” and “body” bound to the list “((setq x (+ x 1)))”, which would produce “(do () ((not (< x 10))) (setq x (+ x 1)))”, which is a valid Lisp loop.
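To make the expansion concrete, here’s a quick sanity check at a REPL (a sketch; any CommonLisp implementation should behave the same way):

;; The new syntax can be used as if it were built in:
(let ((x 0))
  (while (< x 10)
    (setq x (+ x 1)))
  x)
;; => 10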
The Lisp macro system is amazingly powerful: you’ve got a full Turing-complete meta-programming system available to you for extending the syntax of Lisp, and it’s been used for building a huge variety of amazing things. There’s an object-oriented programming system implemented in Lisp called CLOS whose original implementation used no compiler changes at all – just a lot of skeleton code, with definitions glued together using macros.
So why do I dislike macros so much? I don’t, in theory. The problem is what happens in practice: most programming language macro systems, including the two example ones I described above, stink. What I dislike is how they’re generally built. You can fix most (but not all) of the problems caused by macros if you’ve got a good enough macro system. But because most macro systems rely on simple expansion-and-substitution, they’re prone to creating all sorts of problems. I’ll walk you through a few examples, to give you a sense of what I mean.
The most obvious example of what’s wrong with substitution macros is that they can create all sorts of unfortunate interactions with the evaluation semantics of the system. Macro evaluations don’t work the same as function evaluations, even though they usually look the same. The unfortunate effect of this is that you can get very unexpected semantic results from using them. Here’s a typical example. Suppose I want a macro that doubles its parameter, regardless of type. Any type that supports the “+” operator should be usable with the macro. So if I give it a “3”, it should return “3+3” (which will be evaluated by constant propagation to “6”); if you give it an “A”, it should return “A+A” (which evaluates to “AA”). I can do that in C++ with:
#define double(x) (x+x)
Now, suppose I use that in a program:
#include <iostream.h>

#define double(x) (x+x)

int main() {
  for (int i=0; i < 10; i++) {
    cout << "doubling " << i << " gives " << double(i) << endl;
  }
}
That generates the output:
doubling 0 gives 0
doubling 1 gives 2
doubling 2 gives 4
doubling 3 gives 6
doubling 4 gives 8
doubling 5 gives 10
doubling 6 gives 12
doubling 7 gives 14
doubling 8 gives 16
doubling 9 gives 18
So far, so good. Now suppose I get rid of the for loop, and write it in a C++-ish fashion
using a while loop and the auto-increment operator:
#include <iostream.h>

#define double(x) (x+x)

int main() {
  int i = 0;
  while (i < 10) {
    cout << "doubling " << i << " gives " << double(i++) << endl;
  }
}
If “double” were a function, this would generate the same result as the original
program. But since “double” is a macro, if we run it, we get:
doubling 2 gives 0
doubling 4 gives 4
doubling 6 gives 8
doubling 8 gives 12
doubling 10 gives 16
We get a very odd, unexpected result. That’s because the expansion of “double” turns “i++” into “(i++ + i++)”, incrementing “i” twice each iteration. That’s very typical of C-style macros: you can get very unexpected results, because macros can end up executing parameters either more than once, or not at all, so parameters with any kind of state-effects can cause all sorts of trouble. To make matters worse, the invisibility of macros means that unless you’ve seen the implementation of a library, you don’t know which calls are function calls and which are macro invocations, so you need to check implementations to figure out what kind of parameters you can safely use.
You can make the argument that that’s sloppy programming: that passing a parameter expression that has a side effect is just bad, and shouldn’t be done. That’s not an unreasonable argument, except for the fact that you don’t always know when a parameter expression causes a side effect. I’ve been bitten more than once by things that were described as pure, idempotent functions, but which generated error messages, and had those messages appear multiple times because of an unfortunate macro effect.
But even if you ban side-effecting parameters to all calls, effectively eliminating the issue above, there’s still one major potential source of grief in most macro systems. It’s a bit more subtle, and more serious because of its subtlety. You can use a macro for years with no ill effects, and then get screwed by this. It’s called name capture. A trivial example of it could happen with the example above: “double” is the name of a built-in type in C++, and “double” written as a function-style call is supposed to be a type conversion. But because we’ve implemented it as a macro, the type conversions aren’t going to do what you expect: they’ve been replaced, because the macro system captured the name.
For a more interesting and subtle example of this, we can shift back to Lisp. Suppose you used a macro to implement a simple for loop, and suppose that you did it in a way that introduced a variable in your macro, like the following:
(defmacro for (var init incr bound &rest body)
  `(let ((,var ,init))
     (while (< ,var ,bound)
       ,@body
       (setq old-val ,var)
       (setq ,var (+ old-val ,incr)))))
In here, “old-val” is used as a counter to hold the previous value of the iteration variable. It’s not really needed here, but it allows me to illustrate the problem without getting into an overly complicated example. What happens if you use the “for” macro in a function which already has a variable named “old-val”? What happens is a disaster: the macro “captures” the existing variable, and changes its value. The variable that’s used in the macro-expansion is in the same namespace as the variables in the context where the macro is used, and so it can overlap or collide with existing variables. You’ll never know that this is possible until your program starts mysteriously getting wrong results, because you finally used the macro in a context where the name was already defined. To get around this, Lisp has a bunch of functions that generate new symbols that are guaranteed never to have been seen before, but writing code that’s completely safe, so that it never accidentally captures or shadows an existing variable, is a very tricky skill.
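To see the capture concretely, here’s a minimal sketch (the caller’s variable and value are illustrative, not from any real program):

;; The caller happens to have its own old-val; the expansion of "for"
;; assigns to it behind the caller's back:
(let ((old-val 'precious))
  (for i 0 1 10
    (print i))
  old-val)
;; => 9, not PRECIOUS: the macro clobbered the caller's binding.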
There are ways around the syntactic capture issue, but they’re very rare. In fact, I really only know of one: the hygienic macro system in Scheme. (I’ve seen pretty much exactly the same thing crop up in some other Lisp variants, like Dylan, and even one Prolog variant.) What these systems do is provide a sort of pattern-matching facility that lets you grab parts of the expression and re-arrange them. But variables used in the macro are bound at the point where the macro is declared, not at the point where the macro is used. There’s usually a way to capture something if you specifically want to.
For example, here’s a real Scheme macro for a functional “replace” loop, taken from here:
(define-syntax replace
  (syntax-rules (initially with until just-before)
    ((replace <var> initially <value> with <newvalue> until <done>)
     (let loop ((<var> <value>))
       (if <done>
           <var>
           (loop <newvalue>))))
    ((replace <var> initially <value> with <newvalue> until just-before <done>)
     (let loop ((old #f) (<var> <value>))
       (if <done>
           old
           (loop <var> <newvalue>))))
    ((replace <var1> <var2> initially <value1> <value2> with <newvalue1> <newvalue2> until <done>)
     (let loop ((<var1> <value1>) (<var2> <value2>))
       (if <done>
           (list <var1> <var2>)
           (loop <newvalue1> <newvalue2>))))))
The key to reading this is to understand that the names immediately after “syntax-rules” are going to be keywords in the macro. After that, you have patterns built with the keywords and variables, paired with replacements. Expressions are matched against the patterns in sequence, with the keywords lined up, and the first pattern that matches the expression is used. So, for example, if you provided the above rule with “(replace x initially 1 with (+ x 1) until (> x 10))”, it would match the first pattern, and bind <var> to “x”, <value> to “1”, <newvalue> to “(+ x 1)”, and <done> to “(> x 10)”. The result of the expansion would be:
(let loop ((x 1))
  (if (> x 10)
      x
      (loop (+ x 1))))
But there’s a problem even with hygienic macros, which is the fact that the code you run isn’t quite the code you wrote. When it comes to debugging, profiling, etc., the code that’s actually compiled can be quite different from the original code you wrote. You can’t single-step a debugger through a macro, or profile a macro. And it can be a huge surprise sometimes. To give you a sense of what I mean, here’s what happens when I ask CommonLisp to show me the result of expanding the “while” macro above invoked with “(while (< x 10) (setq x (+ x 1)))”:
(BLOCK NIL
  (LET NIL
    (TAGBODY
      #:G7752
      (IF (NOT (< X 10)) (GO #:G7753))
      (SETQ X (+ X 1))
      (PSETQ)
      (GO #:G7752)
      #:G7753
      (RETURN-FROM NIL (PROGN)))))
Looking at that, you could quite justifiably ask: “What the hell is that?”. The answer is that the “primitive” do-loop is actually also a macro, so macro expansion expanded them both out. And the end result is quite thoroughly incomprehensible. I’d hate to be confronted with that in a debugger! Particularly if the body of my loop were more complicated than a single assignment statement!
The Scheme system comes close to addressing that. Because the macros are so strictly structured, you can get at least some correlation between elements of the macro and elements of the executable code. But the fact remains that the transformations can totally break that. Scheme macros are fully Turing-equivalent – you can do anything in a macro, and most things will result in a mess.
Writing non-capturing macros with gensym is hard in the way that getting scoping right is hard: it’s non-obvious at the beginning, but becomes natural with experience. There is no need for any global analysis. The real problem with gensym is that it still doesn’t protect you against capture *in the enclosing code*. What’s to say that the user didn’t define a lexical function named + in the scope surrounding while? In CL, the package system (and the fact that one can’t portably redefine standard functions, even locally) helps, but it’s a sociological solution, not a technological one. Still, in practice, Not An Issue.
Your CL macro is also gratuitously obfuscated. Quasiquotes are much easier to read (again, the rules are simple and local; you just have to get used to them). Finally, you don’t debug macroexpanded code unless you’re debugging the macro itself. Do you complain that high-level code compiles to barely readable asm? You have access to other ways to debug than stepping. Look at the effects of your function, not its execution: trace, break-inspect/modify-continue, or even the mighty print statement. Stepping is a low-level way to debug that, in my experience, rarely does any good.
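For comparison, here’s a sketch of your “while” macro written with backquote; the template reads almost like the code it produces:

;; Backquote gives a template of the output; the comma splices in the
;; test form, and ,@ splices in the list of body forms:
(defmacro while (test &rest body)
  `(do ()
       ((not ,test))
     ,@body))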
PS: your Lisp code setqs a binding you didn’t create. Is it global? Will it clobber something the user uses (more than just by capturing)? Will it die on threaded code? Will demons shoot out of your nostrils? Nobody knows…
The issue of debugging macros is addressed to some degree in PLT Scheme’s DrScheme environment, in their macro stepper tool. See http://www.ccs.neu.edu/home/ryanc/macro-stepper/macro-stepper.html for an introduction.
It might be fair to note that syntax-rules macros are quite different from, and much more dangerous than, syntax-case macros.
That macro debugger helps you understand how an expression is macroexpanded, similarly to using *macroexpand-hook* or other tracing facilities. It doesn’t help you debug the expansion while it’s running.
I’ve always liked:
#define SIX 1+5
#define NINE 8+1
Then of course,
printf("%d\n", SIX*NINE);
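/* SIX*NINE expands to 1+5*8+1, which precedence turns into 42 */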
=D
Anyways, I’ve yet to utilize the f(x) macroing thing. I’ve always just used them for constants.
“In the case of C, the macros are textual: the compiler makes no attempt to verify that the body of the macro is in any way valid C++ code.”
If we are going to be very picky (and programmers like to be picky, right?), I don’t think that statement is totally true. Usually, the C compiler doesn’t ever get to see the macros (or any statement with a #). The preprocessor is the one that blindly does the search and replace. The preprocessor can be the same executable as the compiler, but not always. An example would be the FORTRAN 77 code I work on. The C preprocessor makes a pass through the code to deal with all the #ifdefs, #defines, and the like and then it goes off to f77.
None of that changes your complaint about the macro system though. I cannot even begin to count how many times I’ve had the preprocessor produce bad Fortran thanks to a careless macro.
The funny thing is, in C++, inline functions and templates essentially *are* macros, at least up until you try to take the address of one. The only thing that is truly missing is the ability to pass arbitrary code blocks as parameters.
I think there is no way around these sorts of problems (well, maybe some ways to deal with examining macros in a debugger), because what macros are trying to do is inherently dangerous.
Nearly all interesting macros are trying to bend their way around the base syntax of the language. Unfortunately, writing and understanding new syntax is just plain harder than writing ordinary code in a language.
I think Scheme points a way towards a cultural fix: have various utilities (like syntax-rules vs syntax-case, and some implementations even offer old-style macros) which offer different points on the power/danger trade-off. You can’t prevent people from writing bad code, but you can make it easier not to.
Unfortunately you can’t do some things in Lisp without macros.
For example, you cannot change order of evaluation without them. So you can’t write #’if as a function, because applicative order will ruin it on, e.g., “(if true (/. x x) ((/. x (x x)) (/. x (x x))))”.
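For instance, here’s a sketch of the problem in CommonLisp (“my-if” is just an illustrative name):

;; As a function, both branch arguments are evaluated before the
;; call, so the "untaken" branch still runs:
(defun my-if (test then else)
  (if test then else))

;; (my-if t 1 (error "boom"))  ; signals BOOM, even though the else
;;                             ; branch is never selected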
Though I personally don’t use macros much, I prefer functions whenever possible.
And on debugging, I agree with Mr. Paul Khuong: C++ compiles to ASM; would you object to that?
I don’t know a whole lot about programming (yet… I’m going to learn a lot more over the next couple years) but I think I follow your argument, for the most part.
So, if macros have all these problems, why use them at all? If you need a function that returns twice its input, why not just write a subroutine? Are macros ever the better choice? If not, why do languages have them at all?
The reason for using macros is that sometimes, rarely, they just can’t be avoided, or avoiding them results in even worse code in terms of maintenance. Another reason is that macros are evaluated at compile time, so the cost of evaluating them is paid only once. For some examples, there are many amazing and useful C++ Boost libraries that use C++ macros and templates as a hackish but quite powerful compile-time language.
And sometimes using macros produces cleaner code, simply because what you’re macro-expanding CAN’T be written as a function.
For instance, I have a piece of code I wrote somewhere that has a macro to define and implement an entire class with a name based on what you give it.
Why did I do that? Because the class it produces is thrown as an exception, so the only truly meaningful thing is the type. But I have a few ways I needed to be able to get information out of the class to actually extract information for debugging should it be thrown when it wasn’t anticipated – so I macro’d the whole thing up to permit me to define a class in two very short macro calls rather than several lines of entirely repeated code, which was more likely to produce a bug than the macro. (Because if you repeat something, and you need to make a change later … you have problems.)
Another example is for some testing I wanted to do. Specifically, I wanted to make sure that a certain small chunk of code would result in an exception – and I wanted to issue an error if that /didn’t/ happen.
Now, I could do what I originally did and set a boolean to false, then start a try block, then do the code, then end the try block, then catch the exception and set the boolean to true, then assert the truth of the boolean outside the catch block.
… but the third time I started writing that, I declared it to be absolutely ridiculous to repeat that completely identical boilerplate code. And in C++, at least for what I was doing, you can’t pass /code/ to a function.
So the choice was either repeat the boilerplate code … with the chance of making an error …
… or using a macro. Now, there’s a few things I did to make the macro safer. For instance, I put everything the macro declares inside an anonymous scope to avoid the capturing problem. (This way, the boolean vanishes after it’s asserted). And since I only use the parameter once, the nasty issue shown with the double example won’t happen.
Summary: I have used and will again use macros when the effort to create and verify a “safe” macro is likely to be less than the effort required to avoid/prevent/track down and fix copy/paste errors, and the things the macro is capturing cannot realistically be done by a function.
Do I like them? No! They’re ugly. But good programming is about two factors: Knowing what is and isn’t ugly, and selecting the least ugly path to your goal. And I stand by my claim that in the cases where I’ve used macros to date, they were the least ugly path available.
(I’ve also done some stuff with #ifdef/#define/#endif to protect against multiple inclusion, but that’s primarily a workaround for the fact that in C++, you can’t define a type multiple times, even if it’s the exact same type each time.)
Hmm. I’ve got a simple solution for some of the Scheme/Lisp macro issues. Why not support both eager and lazy evaluation? The default behavior would continue to be eager evaluation, but if you said
(def-lazy my-if (test-form then-form else-form)
  (cond (test-form then-form)
        (t else-form)))
then you’d get lazy evaluation, and that code would work the same way the “if” special form works. Am I missing something here? I’m not saying it would be easy to add to Scheme/Lisp, but it seems like it would make doing a lot of things much simpler.
I don’t know which is more worrying; the fact that JoshC has a six*nine=42 macro, or the fact that I ran it in my head and now I have a fish in my ear.
Josh: You’re missing the fact that macros can do a lot more than just delay evaluation; they let you generate arbitrary code, and, particularly, they let you create new variable/function bindings. Efficient compiled pattern matching? Macro. CLOS, the CL standard object system? Mostly (very hairy) macros and code generators. Series, a stream-transducer fusion package? Codewalking macro. Syntactic sugar to eliminate that domain-specific boilerplate you’ve written thrice? Might be a good opportunity to macro it away, if you can’t use functions to do so.
Macros do have some issues, obviously, so they shouldn’t be used as one’s primary tool (that’s what [generic] functions are for). However they’re often better than the alternatives. At least, for the performance/code generation angle, statically typed multi-stage programming is more principled. Maybe by mixing together that, call by name and syntactically cheap anonymous functions, you’ll cover nearly all the cases.
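(To be concrete: the delaying itself is the easy part. An ordinary macro that wraps the branches in thunks gives you a lazy “if” today; “lazy-if” is a hypothetical name.)

;; Each branch is packaged as a zero-argument closure; only the
;; chosen closure is ever called, so the other branch is never
;; evaluated:
(defmacro lazy-if (test-form then-form else-form)
  `(funcall (if ,test-form
                (lambda () ,then-form)
                (lambda () ,else-form))))

;; (lazy-if t 'yes (error "never evaluated"))  ; => YES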
The points you raise are valid, but the benefits of using macros outweigh the costs. Being able to generate code is very powerful – and with great power comes great responsibility. Much of Common Lisp’s library is done with macros, including some of its most useful elements, such as with-open-file and restart-bind. Without macros, you would have many more special forms or very contorted code.
The abstractions that can be built using macros are much better than inflating the language definition or forcing programmers to repeat themselves ad nauseam.
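As a sketch of the kind of library macro I mean (“with-resource” is a hypothetical name, but the standard with-open-file follows exactly this pattern):

;; Bundle acquisition, user code, and guaranteed cleanup into one
;; syntactic unit, so callers can't forget the release step:
(defmacro with-resource ((var acquire release) &body body)
  `(let ((,var ,acquire))
     (unwind-protect
          (progn ,@body)
       (funcall ,release ,var))))

;; (with-resource (s (open "/tmp/data.txt") #'close)
;;   (read-line s))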
How can one dislike macros? Without them, we wouldn’t have the International Obfuscated C Code Contest. And the argument from final consequences is such a compelling logical fallacy!
Seriously, though. If you are foolhardy enough to use them, and who isn’t now and again, there are some best practices, such as enclosing parameters and constants in parentheses to avoid precedence problems.
It’s somewhat of an unfair comparison here. C/C++ macros are really apples to Lisp’s oranges; they are handled completely differently. Granted, the inherent issue exists, but your code is somewhat misleading. Why have the setq there? You should use LET if you are creating a new binding, shouldn’t you? In which case your binding will be shadowed at that scope and unaffected. If you replace that code with a more realistic solution, does your complaint still remain? The issue there would seem to be introducing a new binding that definitely will not shadow any other, which Common Lisp has a solution for. Is this solution not adequate for you?
And lastly, iostream.h? Come now, not even standard C++!!
Apy:
C++ and Lisp are two different points along the spectrum of macro systems. My goal was to give a flavor of the full range of macro systems – from the worst kind of primitive textual replacement macros, to the cleaner AST-based macros of CommonLisp, to the beautiful hygienic macros of Scheme.
I admitted right in the text that the Lisp example with the setq was contrived; the real cases where you want to introduce variables inside of a macro are much more complex, and I didn’t want to have to spend time explaining something like a CLOS “define-method” macro. But it illustrates the problem that can happen in the more complex macros.
And again, as I said in the post, CommonLisp does provide a tool for generating new symbols. But using it is complicated, and generating totally new symbols isn’t always what you really want. The Scheme-style system, where macro variables are bound in the scope where the macro is defined unless you specifically go out of your way to do a capture, is (in my opinion at least) vastly superior.
I’m surprised at you, Mark. You just dismissed macros in Lisp as bad because gensyms are tricky, but they’re completely trivial if you use a macro! ^_^
Seriously, have you never heard of with-gensyms? On Lisp is required reading for any lisper, and it has tons of useful utility macros defined in it as it explores the language. All you have to do is wrap any macro code that needs local variables in a with-gensyms and you’re done.
That prevents a macro from accidentally shadowing variables. As for the problem of functions being shadowed *before* you enter the macro, that’s not a macro problem at all! Shadowing core functions is Bad Coding, simple as that, at least in a language like Lisp where such an action is completely unnecessary. (At absolute worst you’d use a generic function to overload the core function.)
Now, as for macros being difficult to debug, that is true. However, it is extremely rare to actually debug all the way down to the core like you did in your example. Usually you use the macroexpand-1 function which only goes down a single level of expansion, so your while loop will turn into the code with a do loop. That doesn’t solve everything, but it certainly kills one complaint.
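For instance, your “for” macro becomes capture-proof with one wrapper. A sketch, assuming the with-gensyms from On Lisp (the alexandria library exports an equivalent):

;; with-gensyms binds old-val, at macroexpansion time, to a fresh
;; uninterned symbol, so the expansion cannot collide with any
;; variable in the caller's code:
(defmacro for (var init incr bound &rest body)
  (with-gensyms (old-val)
    `(let ((,var ,init)
           (,old-val nil))
       (while (< ,var ,bound)
         ,@body
         (setq ,old-val ,var)
         (setq ,var (+ ,old-val ,incr))))))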
Forgive me my ignorance, as I do not believe I have as much experience with Lisp as yourself, but what is complicated about using GENSYM? I imagine the actual implementation of such a function is complicated, but is the usage?
For nitpicking’s sake (unless my reading comprehension is extremely poor this morning): when you admit that your example is contrived, you really just seem to say in the description that “old-val” is not actually needed, which suggests that the way you approached the problem is still the valid solution and you simply need to replace “old-val” with something else. I do not believe that to be the case. I certainly understand the need for a contrived example here, but I think your explanation of what is contrived about it is misleading.
Despite my criticism, I did find the post to be enlightening.
Also, as far as I know, Haskell and OCaml do not have macros, correct? Is this because types in such languages are so powerful (including matching)? If that is the case, are macros (in Scheme/CL) mostly just a mechanism to get around limitations on types and matching? Would Haskell/OCaml benefit from macros?
I’ve got one little complaint about macros. When I’ve worked in macro languages and made a syntax error (or worse, had something go wrong at runtime), I have barely any support whatsoever from the compiler to find out where the problem is (and the same at runtime with the debugger).
I can think of two examples here. One is a C project where we had a few large (50-100 line) macros defined. Developing these macros was hard. They had to be implemented by hand for each platform, so this wasn’t a terribly uncommon task.
The other example is TeX or LaTeX. Syntax errors here are pretty easy to make, and TeX isn’t very helpful: it prints out an error message in terms of the expanded code. Macros expand into other macros which expand into other macros, so the expanded code isn’t terribly helpful.
What can be done about problems like this? From the user’s perspective, how can you prevent problems from occurring, and from the compiler/interpreter side, how could we give the user meaningful information? How do the ‘smart’ macro languages handle these problems?
You may find macroexpand-1 useful. Here it is in sbcl:
* (defmacro while (test &body body)
    `(loop while ,test do (progn ,@body)))
WHILE
* (macroexpand-1 '(while t 'foo))
(LOOP WHILE T DO (PROGN 'FOO))
;; compared to macroexpand...
* (macroexpand '(while t 'foo))
(BLOCK NIL
  (SB-LOOP::LOOP-BODY NIL ((UNLESS T (GO SB-LOOP::END-LOOP))) ((PROGN 'FOO))
                      ((UNLESS T (GO SB-LOOP::END-LOOP))) NIL))
Eric,
You will hate this advice, but if you need to stick with the C family, you could go with C++. Templates solve many of the issues associated with what people were doing with macros in C. Rewriting that bit in C++, or the whole thing, might be worth it if this is a significant issue, but it might not be practical.
Depending on your compiler and debugger, gcc can sometimes emit debugging information that includes macros, although I might be off on this. Check the gcc documentation if you are using it; it might be -g3 -ggdb.
Mark,
I do not agree with you that hygienic macros are evil. Yes, they can complicate debugging because of the mismatch between the source and what gets compiled, but with modern optimizing compilers on modern hardware you do not even need macros to lose that. Also, macros, in the hands of a good programmer, can make source code way more readable (corollary: Java was not designed for use by good programmers).
Even if all macros were evil, you still should convince me that they are more evil than the alternatives.
For example, let’s say that you have code where you want to easily enable or disable logging, or where you want to repeat fairly complex locking code at entry and exit of tens of functions.
As far as I can tell, your options are:
– copy-paste a block of code tens of times, and do not forget to keep all copies in sync if (read: when) changes have to be applied
– write and debug your own generator for the code
– invent a new programming language and write and debug its first compiler
– switch to a different, existing language
– get the job done by writing some ugly, non-hygienic C macros
I maintain that, as soon as time-to-market comes in the picture, in many cases, the last option is the best choice by far.
Mark C., great article, to your points: It is a huge misunderstanding among non-Lispers that you directly manipulate ASTs to generate code. As such, I tend to shy away from putting “parse tree” and “Lisp” in the same sentence when it comes to macros. Also, the diligent macro author can implement very user friendly macros when it comes to error reporting.
Geoff W., to your point: this is an amazingly important point. This is how you “grow the language”. This is what makes Lisp so special!
For my take on hygienic macros in Scheme and what makes them so special, please have a look here.
Now that Lisp is starting to get some great free implementations and tons of libraries for doing practical tasks such as web serving and database access, it has become fashionable among academics to bash Lisp. Academics don’t like practical tools and for a long time Lisp wasn’t really all that practical so they liked Lisp. Witness the current popularity of Haskell in academic circles right now: if Haskell ever becomes suitable for commercial use, you will see them disown it faster than one drops a hot rock.
Reinder,
While your argument is true, in that time-to-market often forces constraints on a project that outweigh some drawbacks of a particular tool, I’m not sure that is a valid argument against Mark’s post, as he is simply detailing the problems with a particular tool and showing how different versions compare. The total cost of avoiding a tool may be greater than the cost of using it, but that doesn’t seem to be what Mark was arguing about.
Paul Khuong wrote:
Unless you care about correctness and ‘macro layering’, this is correct.
Of course, this is like saying ‘unless you care about the truth, the earth is flat and some 6,000 years old.’
Please read the papers and the web pages before you make such silly comments (on macro tools or otherwise) in public about scientific tools.
— Matthias Felleisen
Matthias Felleisen: I was contrasting what PLT’s macro debugger actually does with helping debug *the result of* the macroexpansion process. There is no denying that the macro debugger is more powerful (and less ad hoc) than *macroexpand-hook* and the like. However, what it helps you understand is the expansion process. As far as I understand, the debugging issue raised in the article, and to which Jim Meyer was responding, is about debugging the result of the expansion (its execution), not the expansion process itself: “‘What the hell is that?’ … the end result is quite thoroughly incomprehensible. I’d hate to be confronted with that in a debugger!”
When I see macros, what I see is an incomplete level of indirection (which is both the solution to and the pain of so many computing problems). If I want to generate lots of similar code, I use a small generator to do so. The generator may be a built-in part of the language (Lisp, Python, etc.) or not. If not, then I write the generator, compile it, and run it as part of the build process for creating the working product.
If I can’t debug a macro, and it has risks of unforeseen expansion, and I can’t pass it around as a first-class member of a language, I’d rather not use it. There are good reasons why Java chose not to provide a preprocessor.
Thank you for your explanation. Some of your posts are really good: you write in a very clear way (almost like Isaac Asimov, though sometimes he makes explanations *too* simple), you start from the basic things, and you put things in perspective and contextualize them, often starting from the history of the topic you are explaining. This would be good for writing books too.
C macros are tricky, but in the hands of an expert C coder they are a powerful tool that helps to produce better code, to avoid bugs, to spot them and to fix them in less time: http://users.bestweb.net/~ctips/
Lisp macros: they have been discussed for ages, and this interesting thread contains some of the usual comments about them. One common criticism is missing: macros written by the common programmer (not the ones inside the standard library, etc.) are sometimes complex to understand, and they allow a programmer to personalize her/his code too much, making it less easy for other average programmers to understand and modify. Java is successful because most average programmers can modify the code written by other average programmers.
I am interested in macros from a practical point of view too. Walter is planning to add AST macros to the very good D language (see from page 45 in this document: http://s3.amazonaws.com/dconf2007/WalterAndrei.pdf ). So I hope they will avoid some of the problems you explain (if you have time, you can take a look at that document to see if you like those macros).