Back when I was a student working on my PhD, I specialized in programming languages. Lucky for me I did it a long time ago! According to Wired, if I was working on it now, I’d be out of luck – the problem is already solved!
See, these guys built a new programming language which solves all the problems! I mean, just look how daft all of us programming language implementors are!
Today’s languages were each designed with different goals in mind. Matlab was built for matrix calculations, and it’s great at linear algebra. The R language is meant for statistics. Ruby and Python are good general purpose languages, beloved by web developers because they make coding faster and easier. But they don’t run as quickly as languages like C and Java. What we need, Karpinski realized after struggling to build his network simulation tool, is a single language that does everything well.
See, we’ve been wasting our time, working on languages that are only good for one thing, when if only we’d had a clue, we would have just been smart, and built one perfect language which was good for everything!
How did they accomplish this miraculous task?
Together they fashioned a general purpose programming language that was also suited to advanced mathematics and statistics and could run at speeds rivaling C, the granddaddy of the programming world.
Programmers often use tools that translate slower languages like Ruby and Python into faster languages like Java or C. But that faster code must also be translated — or compiled, in programmer lingo — into code that the machine can understand. That adds more complexity and room for error.
Julia is different in that it doesn’t need an intermediary step. Using LLVM, a compiler developed by University of Illinois at Urbana-Champaign and enhanced by the likes of Apple and Google, Karpinski and company built the language so that it compiles straight to machine code on the fly, as it runs.
Ye bloody gods, but it’s hard to know just where to start ripping that apart.
Let’s start with that last paragraph. Apparently, the guys who designed Julia are geniuses, because they used the LLVM backend for their compiler, eliminating the need for an intermediate language.
That’s clearly a revolutionary idea. I mean, no one has ever tried to do that before – no programming languages except C and C++ (the original targets of LLVM). Except for Ada. And D. And Fortran. And Pure. And Objective-C. And Haskell. And Java. And plenty of others.
And those are just the languages that specifically use the LLVM backend. There are others that use different code generators to generate true binary code.
But hey, let’s ignore that bit, and step back.
Let’s look at what they say about how other people implement programming languages, shall we? The problem with other languages, they allege, is that their implementations don’t actually generate machine code. They translate from a slower language into a faster language. Let’s leave aside the fact that speed is an attribute of an implementation, not a language. (I can show you a CommonLisp interpreter that’s slow as a dog, and I can show you a CommonLisp interpreter that’ll knock your socks off.)
What do the Julia guys actually do? They write a front-end that generates LLVM intermediate code. That is, they don’t generate machine code directly. They translate code written in their programming languages into code written in an abstract virtual machine code. And then they take the virtual machine code, and pass it to the LLVM backend, which translates from virtual code to actual true machine code.
In other words, they’re not doing anything different from pretty much any other compiled language. It’s incredibly rare to see a compiler that actually doesn’t do the intermediate code generation. The only example I can think of at the moment is one of the compilers for Go – and even it uses some intermediates internally.
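In fact, you don’t even have to take my word for it: Julia will happily show you the intermediate code that it generates. Here’s a quick sketch of what I mean (square is just a toy function I made up, and the exact IR varies from version to version):

```julia
# Ask Julia for the LLVM intermediate code it generates for a
# trivial function -- this is the "virtual machine code" that gets
# handed to the LLVM backend, which then produces real machine code.
square(x) = x * x

@code_llvm square(3)
# Prints something along the lines of (abbreviated):
#   define i64 @julia_square(i64) {
#   top:
#     %1 = mul i64 %0, %0
#     ret i64 %1
#   }
```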
Even if Julia never displaces the more popular languages — or if something better comes along — the team believes it’s changing the way people think about language design. It’s showing the world that one language can give you everything.
That said, it isn’t for everyone. Bezanson says it’s not exactly ideal for building desktop applications or operating systems, and though you can use it for web programming, it’s better suited to technical computing. But it’s still evolving, and according to Jonah Bloch-Johnson, a climate scientist at the University of Chicago who has been experimenting with Julia, it’s more robust than he expected. He says most of what he needs is already available in the language, and some of the code libraries, he adds, are better than what he can get from a seasoned language like Python.
So, our intrepid reporter tells us, the glorious thing about Julia is that it’s one language that can give you everything! This should completely change the whole world of programming language design – because us idiots who’ve worked on languages weren’t smart enough to realize that there should be one language that does everything!
And then, in the very next paragraph, he points out that Julia, the great glorious language that’s going to change the world of programming language design by being good at everything, isn’t good at everything!
Jeebus. Just shoot me now.
I’ll finish with a quote that pretty much sums up the idiocy of these guys.
“People have assumed that we need both fast and slow languages,” Bezanson says. “I happen to believe that we don’t need slow languages.”
This sums up just about everything that I hate about what happens when idiots who don’t understand programming languages pontificate about how languages should be designed/implemented.
At the moment, in my day job, I’m doing almost all of my programming in Python. Now, I’m not exactly a huge fan of Python. There’s an awful lot of slapdash and magic about it that drive me crazy. But I can’t really dispute the decision to use it for my project, because it’s a very good choice.
What makes it a good choice? A certain kind of flexibility and dynamism. It’s a great language for splicing together different pieces that come from different places. It’s not the fastest language in the world. But for my purposes, that’s completely irrelevant. If you took a super-duper brilliant, uber-fast language with a compiler that could generate perfectly optimal code every time, it wouldn’t be any faster than my Python program. How can that be?
Because my Python program spends most of its time idle, waiting for something to happen. It’s talking to a server out on a datacenter cluster, sending it requests, and then waiting for them to complete. When they’re done, it looks at the results, and then generates output on a local console. If I had a fast compiler, the only effect it would have is that my program would spend more time idle. If I were pushing my CPU anywhere close to its limits, using less CPU before going idle might be helpful. But it’s not.
The speed of the language doesn’t matter. But by making my job easier – making it easier to write the code – it saves something much more valuable than CPU time. It saves human time. And a human programmer is vastly more expensive than another 100 CPUs.
We don’t specifically need slow languages. But no one sets out to implement a slow language. People implement useful languages. And they make intelligent decisions about where to spend their time. You could implement a machine code generator for Python. It would be an extremely complicated thing to do – but you could do it. (In fact, someone is working on an LLVM front-end for Python! It’s not aimed at Python code like my system, but at the whole community of people who use Python to implement numeric processing code with NumPy.) But what’s the benefit? For most applications, absolutely nothing.
According to the Julia guys, the perfectly rational decision to not dedicate effort to optimization when optimization won’t actually pay off is a bad, stupid idea. And that should tell you all that you need to know about their opinions.
Yet another programming language to end all programming languages. What’s the count on that now? As I recall it from my CS friends in the days of yore, on the list were Algol 68, APL, PL/1, Pascal, Lisp, Ada, C++, Smalltalk, Forth.
My longstanding opinion in the language wars is that languages are tools. The best tool is the one that most easily lets you get done what you want to get done.
To be fair, Lisp, APL, and Smalltalk were never intended to be “be-all and end-all” languages by their designers; they were hyped by others.
At least some of the hype comes from Wired’s end, too, especially whoever added that headline. The article text – “That said, it isn’t for everyone. Bezanson says it’s not exactly ideal for building desktop applications or operating systems, and though you can use it for web programming, it’s better suited to technical computing” – is saying something somewhat more reasonable.
And, for that matter, even the “Man Creates” part of the headline doesn’t make a ton of sense as he’s credited as a “co-creator” right in the top photo caption and the article’s very clear that it’s a collaboration.
They’d probably justify the headline off of the sentence “He and several other computer scientists are building a new language they hope will be suited to practically any task” – but that’s a sort of abstract statement of hope, more “long-term, we don’t want to rule out any potential use cases” than “RULE THEM ALL.”
http://xkcd.com/927/
Usually I love your blog posts and find your blog very interesting reading. But in this case I think you should spend a little more time understanding Julia and the people behind it before posting such strongly worded and judgemental posts. The Wired article grossly misrepresents what they themselves say – most of what you complain about is the hype of the journalist at Wired.
Please see the much more reasonable content at http://julialang.org/ and the discussion about this article on their mailing list: https://groups.google.com/d/topic/julia-dev/LPiZaSB2RPA/discussion
From my perspective as a user of many different programming languages for scientific computing (I’m a physicist), Julia brings the usefulness of Python (which you mention) but with the performance of C/Fortran.
I think Julia’s an interesting project, especially for applications where you now use things like NumPy. But the headline on that story had a tone that made Stefan and collaborators sound almost megalomaniacal when the positions they actually take in the article text are less exciting but more defensible.
This is the problem with Wired just running with whatever headline they think will get the most clicks: you leave readers confused about your interview subjects’ statements (or at least tone) or at best disappointed that the click didn’t deliver what the link text promised.
Anyway, I’m mostly putting this down to a click-optimized-but-not-really-thought-through headline, not anything the Julia authors did (or, for the most part, anything wrong with the body of the post, though perhaps they could’ve presented opinions from a few more folks).
Great response, John. We are pretty unhappy with the headline in particular — apparently the editor added it later, without even asking the author of the piece.
It’s very hard to convey programming language details in an article for a general audience. The truly relevant or novel points tend to be quite subtle.
For our part, we’re working on a language, not telling people how to write their specific applications. Pointing out that many applications are not compute-bound is kind of a non-sequitur. My view is that a compiler writer worries about performance so fewer people have to in the future. This is very different from an application programmer wasting time optimizing non-bottlenecks in their code. If the compiler can help you, great; if not, the effort already put into the compiler doesn’t cost you anything.
That might be true if compiler development were free. But in the real world…
The point of my non-compute-bound example was simple: Python isn’t a slow language for what it’s used for. If Guido van Rossum had started Python with the same goals as you – with the idea that high-quality optimized machine code was essential – Python, as the language it is, would never have existed.
There is no one language that’s perfect for all problems. There is no one way of implementing a language that’s correct for all problem domains.
If I’m working on a numerical computation app – like, say, computational fluid dynamics, which I messed with in grad school – then frankly, I want something *better than* C. C’s aliasing blocks far too many optimizations in highly performant numeric code. In that world, I want the most kick-ass optimizations that you can come up with. I want a compiler that understands floating point math, and how code reorderings will impact floating point errors. I want automatic parallelization and vectorization, and I’d be damned pissed off at a language that couldn’t provide those.
If I’m working in a language that’s gluing a bunch of component subtools together, I don’t give a damn about floating point optimization, or vectorization, or automatic parallelization. What I care about, in that case, is much more the quality of the libraries included with the language, and how easy the language makes it to use them.
If I’m working in a language that’s going to serve dynamic web-pages, then again, I really don’t care how well it can optimize vector operations. What I care about is primarily how well it can multithread/multiprocess, and how good it is at text IO and network communication. I care about how much overhead it introduces per process, because that’s going to affect how many CPUs I’ll need. I care about how easily it can work with MIME, JSON, and XML. But I wouldn’t care how well it can interact with command-line applications on the host operating system.
When you’re building a language, if you actually understand the problem you’re solving, you use that understanding to decide where to put your resources. Putting resources into numerical optimization is a tremendous waste in a language that won’t be used for numerical applications. Putting resources into a deep integration with XML DOM is a tremendous waste in a language that won’t be used for XML processing. Putting resources into highly optimized filesystem access is a waste in a language that isn’t going to be reading or writing files.
Putting resources in the wrong place *does* have a cost. Because it means that those resources – which could have been used elsewhere, to make the language a better tool for the purpose for which it’s designed – are wasted, and that waste has a huge negative impact.
One concrete example of what I mean, and then I’ll shut up.
In my last job, we used Scala for most of our development. As a language, I *love* Scala. It’s a beautiful language, and on a language level, it’s a joy to program in. But the tools for it – they are absolutely wretched. I have never dealt with a compiler as horrifically, painfully slow as scalac. It took nearly an hour to compile a 100,000-line codebase on a tricked-out developer machine – 4 cores, 16 gigabytes of RAM, and an SSD.
The company that implements scalac has been continuing to develop the language. They’ve added a very elegant string interpolation syntax, some extensions to the type system, and a macro system. Another way of stating my argument is that, when it comes down to it, they’re actively harming the language by adding those features. They’ve got a remarkably great language, but they’ve been putting their resources into adding cute but expensive-to-compile frills, while their users are finding that the tool quality is drastically impacting their productivity.
Putting resources in the wrong place isn’t free. It has an impact. In the case of Scala, as much as I love the language, I wouldn’t recommend using it for a real application. The performance of applications written in the language might be fine, but the amount of time that a developer is forced to waste using Scala counteracts the benefits they get from using it. Dedicating resources without considering how the language is really used is killing the language’s value.
I think Julia is a useful experiment with some clever ideas, and that the problem is mostly with the article, and there, mostly with the headline. I’d direct my fire a little to the left – at Wired, not at the devs doing the work.
Except that as you can see here, the quote about “all languages should be fast” isn’t something that was put into their mouth by bad reporting. And to me, that’s the bit that annoyed me the most.
Getting high performance out of a language isn’t free. It shouldn’t be a ding against language designers who understood the domain they were targeting, and decided that for that domain, high performance code generation wasn’t a priority.
No one designs a language to be slow. But smart language designers decide where to focus their effort. And by the standards of numerical computation, that means that many languages are, effectively, slow by design: because they focused on making their language an effective tool for addressing problems in a domain where numerical performance isn’t an issue.
Your comment here assumes that one must use a different language for different tasks. It is then a waste of time to, say, optimize a language that won’t be used for compute-bound code. But that assumption is wrong.
Maybe some people like the 2-language workflow (e.g. python plus bottlenecks in C). That’s fine. But many people don’t.
Ok, yes, compiler development isn’t free. But here you seem to be talking about specific tradeoffs you don’t like — e.g. an expensive-to-compile feature when you care more about compile time than the feature. Of course we could fall into the same trap and start making bad tradeoffs, but that kind of argument doesn’t apply to the project as a whole. Our users do want fast native code; without it they probably wouldn’t care about julia. Perhaps we should have done something else entirely instead of julia? Well, people aren’t machines.
My point is that one *can* use a different language for different tasks, and that languages should be designed for what they’re going to be used for.
For *your application domain*, this kind of performance optimization is critical. And so, *for you*, for the domain that you’re targeting, dedicating resources to specific kinds of performance is important.
But your application domain isn’t the entire world. For other people, for other tasks, other things are important.
My point isn’t, and never was, that designing a flexible language which generates high-performance code is a bad idea. What I object to is the implicit criticism in the “we shouldn’t have slow languages” crack: the languages that you deride as slow are excellent languages, well suited to the domains for which they’re intended.
You’ve made decisions about where to dedicate resources for your language, based on how you expect your language to be used. You’ve clearly thought about those decisions, and prioritized them. You’re not wrong.
But other people have done the same thing for different languages, and come to different conclusions. They’re not wrong either. They’re just trying to satisfy a different need.
That’s fair. I really didn’t mean that those languages are just plain bad. I agree they are excellent for many things. I was just trying to say in a colorful way that we can try to move to a more optimal part of the design space — keep what’s good about those languages without the slowness. Many current language projects have a similar ethos, e.g. Go and Dart.
You say “Python isn’t a slow language for what it’s used for”. With the advent of numpy/scipy this statement is certainly false for many people. You are equating your use with everyone’s.
On one hand, at least some of Python’s speed problems are self-inflicted. Exposing variables as a dictionary limits a lot of optimizations, and I’m hard put to see why it’s worth the cost. Maybe it has some value, but I’m hard put to see it as more than a debugging trick.
I’m curious about your Scala program. It seems that large programs taking an hour to build is a known problem, long mitigated by minimal recompilation, and fortunately the JVM doesn’t have a long link time. Why was it being completely recompiled on a regular basis at all?
I think your point that no one creates a language to be “slow” is a good one.
There’s always going to be tradeoffs, and where you prioritize will generally dictate those tradeoffs.
Yet, I do think fast is always better. If you have two things providing you the same set of benefits, except one is faster, it’s obviously better.
If I take your example again, sure, your Python wouldn’t benefit from being faster, but your energy bill would. Even if the idle time between operations is long, being idle even more is a good thing.
Generally, a language strives for one of two things: performance or productivity.
A compromise is almost always achieved. I think the idea behind Julia is that it tries to compromise less than the languages before it have. It seems to claim it can do that by using modern compilation techniques and letting you code in a convenient way when convenience is needed, and a less convenient way when performance is required.
So, it’s not claiming that others are bad. I think it claims that modern compiler techniques allow it to reduce that compromise, and so it set out to achieve that. Sure, everything else could implement the same modern compiler techniques and reduce their compromise too. Julia is just one of the ones actually doing it, that’s all.
Finally, it claims to improve productivity and performance by making the compromise more fine-grained and fine-tuned to the specific use case. In C you don’t have the option to be more productive. In Python you don’t have the option to be more performant. Your only option as a dev for choosing where to compromise then becomes changing languages. Julia seems to want to give you that power, so that you don’t have to switch languages.
gotta love the alleged profession of journalism.
I wish you’d taken a deeper look at Julia. What’s great about it (or her…) is that it provides as much flexibility and dynamism as possible without sacrificing performance where it is wanted. For example, thanks to JIT compilation, types of variables are inferred everywhere possible, which allows the code to run almost as fast as C if written with some care. On the other hand, if you don’t care about performance in some parts of your code (or in your whole program), you can write it easily, without thinking about types, just like you would in Python. No other language offers this flexibility.
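For instance, something like this rough sketch (the function names are mine, purely for illustration):

```julia
# Written exactly the way you'd write Python: no type annotations.
function mysum(v)
    s = zero(eltype(v))
    for x in v
        s += x
    end
    return s
end

# At this call site the JIT infers that v is a Vector{Float64} and
# compiles a specialized native-code version of the loop for that type.
mysum(rand(1_000_000))

# And if you *want* to pin types down (for dispatch, or documentation),
# you can annotate as much or as little as you like:
mysum_typed(v::Vector{Float64}) = mysum(v)
```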
I wish that people would learn something about other programming languages before making comments about how “no other language offers XXX”.
Languages with type inference, JIT, and optional types are not exactly rare. People have been doing that since (at least) the 1980s in Lisp. It’s been adopted and used in a variety of forms in other languages. In the Lisp family, you can look at Symbolics Lisp machines, which pioneered a bunch of it. Microsoft does it in the CLR. Scala uses optional type declarations with type inference, which gives a similar feel. Dylan had a really interesting take on it. So does Go. Hell, freaking VisualBasic had a form of type inference and JIT a decade ago! Java requires type declarations, but in the internals of the JIT, it does deeper analysis to generate more specific types, and then generates specialized code.
This isn’t some obscure fact only known by hardcore language geeks. It’s a straightforward thing that’s a part of many languages.
I have no doubt you’re correct, and overly general statements are indeed unhelpful.
But it might help to fully appreciate the context of Julia. Many of the people involved or using it come from a scientific computing background, not a programming language design one (except perhaps the creators – I’m not speaking for everyone). From this perspective the only real choices are Matlab/R/Python on the highly useful but “slow” end, or C, Fortran, etc. on the harder-to-use but “fast” end. In this light Julia does offer the best of both worlds – as easy and expressive as numpy or matlab, but as fast as C or Fortran (well, getting there; work on vectorization etc. is in progress).
Granted, that is a far cry from some of the very big general statements in the article, but I hope you can appreciate why so many scientists are excited by Julia, and maybe hype it up in response.
At the same time, I wouldn’t dismiss Julia just because a fan said something hyperbolic. It’s been hard for me to tell much about a language until I tried it; I certainly didn’t “get” Go ’til I’d actually done some nontrivial things with it.
I haven’t tried Julia, but there’s enough on the surface, like the promise of interactivity and conciseness plus computation speed and C interop, for it to be intriguing. Devs need room to experiment and be ambitious for progress to happen, and I don’t think they deserve to get piled on because a publication’s writer and (especially) editor sexed up a story about their work.
Sorry — when I said “this flexibility”, I was thinking of the combination of all the features offered by Julia, not any one of them taken in isolation. Of all the languages you cite, none seems to offer this combination, which makes it particularly suited to scientific computing, but also very good for writing general-purpose programs (and both need to be mixed quite often).
Also, you complain about Scala compilation being slow, and yet you don’t acknowledge the fact that building the Julia compiler on top of LLVM is a good idea, because it avoids wasting developer resources on competing compilers, and offers good performance (both in terms of build time and in terms of supported optimizations).
The Wired article isn’t exactly the best piece of journalism ever, but that doesn’t make Julia a useless language.
Once again, have you actually looked at Lisp?
Back in the day, Guy Steele did some numerical experiments with Lisp. CommonLisp offered optional type declarations, code specialization, vectorization, etc. They were able to get numerical code performance on a par with Fortran, using what is probably *still* the most flexible dynamic programming language ever designed.
I didn’t go out of my way to talk about the decision to use LLVM, because I think it’s transparently obvious that it’s good. That’s why so many people – not just Julia – have done it. (In fact, they’d get better performance by using GCC – GCC still has better optimization than LLVM – but I assume that the GPL issues with GCC blocked that. See, I’m not actually assuming they’re stupid: I’m just criticizing the stupid things that they said.)
In the case of Scala, using an LLVM backend wouldn’t have any impact on the tool speed. The actual code generation phase in scalac is fine; not great, but fine. But the front-end processing – particularly type analysis and implicit resolution – that’s a total disaster.
Once again: the point is not that spending time on numerical performance is a bad thing. It’s a *great* thing if you’re building a language that’s intended for numerical computation. But you shouldn’t be criticizing other language designers for making a perfectly rational decision to put their resources into something else when numerical performance wasn’t their goal.
re “Once again, have you actually looked at Lisp?”
Please see https://github.com/JeffBezanson/femtolisp.
It is a building block of Julia.
Lisp may be dynamic, flexible, and fast, but for a journeyman data munger and analyst such as myself, Julia (and Python) are much more suitable. I think Julia has a lot of promise for the space currently dominated by Matlab, Python/Numpy, and R.
Yes, true enough. But in the case of python there has been a huge push to build libraries (eg, numpy, scipy) around it so it can do scientific computing. And it is quite nice until your problem gets too big, and then it is painful. If Julia fulfills its promise it will solve this problem and be better than the other players in the high level scientific computing arena (python, matlab, R, IDL, mathematica, etc) for many problems. Of course, it has a long ways to go (eg, libraries) but it shows good promise. I, for one, am very happy to see them working on it.
Your rant makes some good points, but the way you wrote it is spiteful and nasty.
It sounds like this is aimed at folks like me (scientists who write the odd bit of special purpose code). The key really will be the libraries.
The great thing about Matlab/python/R is that if I want to do something, 9 times out of 10 a little googling will reveal that someone else has already done it, or at least part of it.
I wouldn’t give R the time of day if it weren’t for the huge number of useful packages.
The fact that writing a GCC frontend is about an order of magnitude harder than writing an LLVM frontend (if your starting position is knowing compiler writing but not knowing either backend) may have had something to do with it.
Oh, and GCC has no support for JIT. That’s a rather important factor, too.
Really? I could swear I remember reading an article about someone building a GCC-based jit. Can’t find the ref now, but I’m >90% sure that I saw that somewhere!
It’s possible that someone did it, but it’s far from “out of the box”.
GCC generates assembly code, which is then fed to an assembler. There are several ways you could get it to generate machine code instead, but it would require effort that’s not being spent implementing the language. LLVM already provides it.
As far as I know none of the Julia devs claim exclusivity over type inference, JIT, etc. I’m sure they’d be happy to cite the lineage of pretty much every feature in Julia in terms of previous languages that had them.
Programming languages are not just bags of features though. Julia is a thoughtfully designed language that combines a nice type system with multiple dispatch in a way that I’ve really enjoyed coding in. As a language with roots in scientific / numerical computing it has really nice semantics for working with arrays. With some care one can also write very efficient code that is much faster than comparable code currently in use by scientists, so that hardly seems like wasted effort to me.
From what I can tell all this vitriol seems to be aimed at the perceived rhetoric supporting Julia, most of which is not coming from Julia devs, or even the Julia community.
Nobody (except hyperbolic Wired journalists) is saying that Julia will replace all other languages, that it is better than them, that it is the one true language, etc. It is an example of a language that seems to occupy a nice sweet spot on a variety of cost-benefit curves, and it’s nice to write both very abstract high-level code and very performant tight loops without context-switching between languages.
Thanks, I had a good laugh 🙂 I’ve seen some crap languages sold as pure gold, but this line of argumentation tops it all, brilliant!
Just shoot me now.
You stay out of this. He doesn’t have to shoot you now.
You really should evaluate the language on its merits, not on this article.
It’s actually a Lisp with syntax. It is homoiconic, and has hygienic macros and higher-order, first-class functions. It is optionally typed.
Its most distinctive feature is a mix of multiple dispatch (think CLOS or Dylan) and parametric types.
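A tiny sketch of what that mix looks like (the Point type and dist function are invented, just to show the flavor):

```julia
# A parametric type: one definition, specialized per element type.
struct Point{T<:Real}
    x::T
    y::T
end

# Multiple dispatch: the method chosen depends on the runtime types
# of *all* the arguments, not just the first one.
dist(a::Point{Float64}, b::Point{Float64}) = sqrt((a.x - b.x)^2 + (a.y - b.y)^2)
dist(a::Point{T}, b::Point{T}) where {T<:Integer} = abs(a.x - b.x) + abs(a.y - b.y)

dist(Point(0.0, 0.0), Point(3.0, 4.0))  # => 5.0 (Euclidean, for floats)
dist(Point(0, 0), Point(3, 4))          # => 7   (Manhattan, for ints)
```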
It relies on BLAS (whichever you want) for linear algebra and libuv (of Node.js fame) for fast, asynchronous socket handling.
It compiles to fast native code on the fly.
The “one language” motto of the article comes from a misunderstanding of one of the main objectives of Julia: being able to prototype your code as easily as in Python, and then optimize the bottlenecks and get close to C speed, in the same language, rather than having to translate your code to C. It already delivers on that point.
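Concretely, that workflow looks something like this (a rough sketch, with made-up code):

```julia
# First pass: a quick, Python-style prototype. No types, no loops.
normalize(v) = v / sqrt(sum(v .^ 2))

# Later, if profiling says this is a bottleneck, you tighten it up
# in place -- explicit loop, in-place update, concrete types --
# instead of rewriting it in C:
function normalize!(v::Vector{Float64})
    s = 0.0
    for x in v
        s += x * x
    end
    n = sqrt(s)
    for i in eachindex(v)
        v[i] /= n
    end
    return v
end
```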
You may also want to have a look at the way it safely handles shelling out: http://julialang.org/blog/2013/04/put-this-in-your-pipe/
The language is moving fast, and it still has some rough edges. Its main weakness at the moment is the relative lack of libraries (there are already 200+ packages), but there’s a built-in FFI that allows calling C and Fortran libraries from Julia without writing any foreign glue code.
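For example, calling straight into libc – a minimal sketch:

```julia
# ccall takes the C function, its return type, a tuple of argument
# types, and the arguments -- no wrapper code and no build step.
n = ccall(:strlen, Csize_t, (Cstring,), "hello")  # => 5
```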
The authors are ambitious and want to disrupt the status quo. I think they’re on the right track.
Give it a try.
On one hand, I’m much more criticizing the article than the language. I really don’t like the grandiose claims about “disrupting the status quo”, because frankly, they just aren’t doing that.
Personally, I happen to think that homoiconicity is completely irrelevant, and macros are downright evil. (I should write a post about it sometime…) In my experience, domain specific languages are virtually never a good thing. There are some rare cases where a brilliant designer manages to make one that’s wonderful; but most of the time, all that they do is obscure things. You know a programming language; you don’t know what gorp someone else added to it. What macros and DSLs mean is that when you sit down with a new piece of code, you don’t know what it means. I hate that.
Honestly, it looks like a fairly typical semi-typed lispy language. That’s very nice, but it’s not revolutionary, and it’s not going to disrupt any status quo.
re. “disrupting the status quo”: I was referring to the current situation of having to use two languages, one for prototyping, and another one to get speed. It will be disrupted if Julia is successful.
Homoiconicity makes metaprogramming easier (which can be useful when you generate bindings for a C library, for example), and there’s more to macros than DSLs (which don’t need macros to exist, to begin with).
Macro invocations are syntactically distinct from function calls (prefixed with @). You know from the get go when you’re dealing with macros.
If I had to write a DSL in Julia, I’d probably rely on the fact that binary operators really are functions that can be overloaded for arbitrary types.
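Something like this sketch, say (the Interval type is invented, purely for illustration):

```julia
# Binary operators in Julia are ordinary generic functions, so a
# "DSL" can often just be a type plus a few operator methods.
struct Interval
    lo::Float64
    hi::Float64
end

import Base: +

+(a::Interval, b::Interval) = Interval(a.lo + b.lo, a.hi + b.hi)

Interval(1.0, 2.0) + Interval(0.5, 0.5)  # => Interval(1.5, 2.5)
```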
> I’m much more criticizing the article than the language
As the post is phrased, you’re criticising “the idiocy of these guys” re: how they don’t understand LLVM, tradeoffs, precedent, etc. Who “these guys” are is not specified, and you suggested it didn’t really matter when I commented that the article wasn’t very good earlier. Aiming your flamethrower, or at least acknowledging some imprecision, is important when it’s set to “high.”
How can you argue
“My point is that one *can* use a different language for different tasks, and that languages should be designed for what they’re going to be used for.”
and
“In my experience, domain specific languages are virtually never a good thing.”
at the same time?
Good luck writing a convex program or representing a differential algebraic equation in a way that a _human_ understands (not in a way that a solver understands) without a DSL.
IMO, DSLs are easy to do wrong, but occupy an extremely important place in software infrastructure. Enterprise software, in particular, often requires the application of business rules, which are best written and maintained by a domain expert. DSLs are almost exactly what you want here.
Plus, of course, there’s a fuzzy line. One person’s scripted application is another person’s DSL. One person’s framework is another person’s EDSL.
I would like to hear why you think hygienic macros are evil (that C-style macros are evil is, of course, not even controversial). You might like to catch up with some of Phil Wadler’s recent work first, such as “A Practical Theory of Language-Integrated Query.”
My impression of Julia so far (I downloaded it today before I saw this article!) is that it’s really a modern SISAL, in much the same way that Java is a modern COBOL. It’s far from revolutionary, but I’m pleased that someone is working in this much-neglected space.
Having said all this, I completely agree with you about the article.
I have no basis for opining on this language discussion–which is quite interesting–but for the love of motherfucken godde, when the goddam motherfucke did the horrible fake word “performant” come to mean “well-performing”????
Julia does not seem to have “sum types” a.k.a. “discriminated unions” a.k.a. “case classes” and pattern matching. Once you’ve had these (F# (my fave), Haskell, Scala, etc.) it starts to seem almost absurd to approach most non-trivial (esp. data-structure) programming problems without them. Pattern-matching add-ons can be had with meta-programming support, but unless it’s baked in, you don’t get the benefit that pervasive use of the pattern gives you.
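The closest built-in substitute I know of is an abstract type with one concrete type per case, using dispatch in place of matching – a sketch with invented types, and note that you get no exhaustiveness checking:

```julia
# The usual Julia stand-in for a sum type: an abstract supertype,
# one concrete type per variant, and dispatch instead of `match`.
abstract type Shape end

struct Circle <: Shape
    r::Float64
end

struct Rect <: Shape
    w::Float64
    h::Float64
end

# One method per "case"; nothing warns you if a case is missing.
area(s::Circle) = pi * s.r^2
area(s::Rect)   = s.w * s.h

area(Circle(1.0))  # => 3.141592653589793
```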