Rob Martin has become a fan of Clojure recently. Nothing wrong with that: Clojure has a lot going for it, and if you’ve never had a chance to write code in Lisp, it’s probably the best way to begin these days.

But then, Rob gets a little too emotional and starts drawing all kinds of dangerous conclusions, such as this one:

Why is functional programming important? Because Moore’s law has started to falter.

It’s not the first time that functional programming has been advocated as the heroic technology that will rescue us from buggy multithreaded code *and* allow our programs to magically scale along with the multiple cores that computers have these days. Concurrency problems? Just pick a functional programming language, any language, and suddenly your code is thread safe and scales automatically.

I find this simplification a bit disappointing coming from technologists, yet I read claims like it at least once a week these days.

If you’ve ever written multi-threaded code, the thought of eight, sixteen, thirty-two, or even more processors running your program should fill you with dread. Writing multi-threaded code correctly is hard! But why is it so hard? Because it is hard to manage the state of variables when more than one CPU has access to them.

First, a nit: when you write multi-threaded code, four processors shouldn’t scare you any more than two. Either your code is thread safe or it’s not. The only thing that changes when you run it on more processors is that you are more likely to flush out the bugs.
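To make the nit concrete, here is a minimal Java sketch of the classic data race (the example and names are mine, not Rob’s). The code below is equally broken on two cores and on thirty-two; the extra cores just make the lost updates show up more reliably.

```java
import java.util.ArrayList;
import java.util.List;

public class RacyCounter {
    // Not volatile, not synchronized: two threads are enough to corrupt it,
    // but more threads on more cores make the loss show up more reliably.
    static int count = 0;

    public static void main(String[] args) throws InterruptedException {
        List<Thread> threads = new ArrayList<>();
        for (int i = 0; i < 8; i++) {
            Thread t = new Thread(() -> {
                for (int j = 0; j < 100_000; j++) {
                    count++; // read-modify-write: not atomic, hence the race
                }
            });
            threads.add(t);
            t.start();
        }
        for (Thread t : threads) {
            t.join();
        }
        // Expected 800000; almost always prints less because updates were lost.
        System.out.println(count);
    }
}
```

The bug is in the code, not in the core count; throwing hardware at it only changes how quickly you notice.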

I’ll agree with Rob on the fact that managing the state of variables shared by more than one CPU is hard, but come on, it’s still not rocket science. As I write this, hundreds of thousands of lines written in C, C++, C#, Java and who knows what other non-functional programming languages are running concurrently, and they are doing just fine.

Java has shown amazing powers of adaptation over the years, and when it comes to concurrency, people like Brian Goetz and Doug Lea and libraries such as java.util.concurrent don’t get the recognition they deserve. Java is also the living proof that you don’t need concurrency support at the language level to be effective: libraries can do just fine.
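As a rough illustration of that library-level support, here is the same counter made safe with nothing but java.util.concurrent: an ExecutorService for the threads and an AtomicLong for the shared state. No language extensions required.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class SafeCounter {
    public static void main(String[] args) throws InterruptedException {
        AtomicLong count = new AtomicLong();
        ExecutorService pool = Executors.newFixedThreadPool(8);
        for (int i = 0; i < 8; i++) {
            pool.submit(() -> {
                for (int j = 0; j < 100_000; j++) {
                    count.incrementAndGet(); // atomic read-modify-write
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        System.out.println(count.get()); // always prints 800000
    }
}
```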

That kind of code is admittedly harder to write than straight imperative code, but can anyone who has looked at Clojure’s STM API (atoms, agents, refs) or at Scala’s and Erlang’s actors say that writing code with these paradigms is that much easier?

To make matters worse, these new paradigms come at a cost that is very often glossed over by their own advocates. When people tell you that “Actors are a share-nothing architecture”, they are lying to you. You share a great deal with actors, just in subtler ways that your mind needs to be very aware of. You get the illusion of automatic thread safety, but you pay the price by having to wrap your head around a fully asynchronous model. It’s not easy. And when you have two actors A and B that are sending messages to an actor C, aren’t they sharing the state of C?
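To see what I mean, here is a deliberately minimal, hand-rolled “actor” in Java: one thread draining a mailbox queue. This is my own sketch, not the API of any actor library. A and B never touch C’s counter directly, yet C’s state is entirely a product of the messages they send it.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class SharedThroughC {
    // A minimal "actor": one thread, one mailbox, private state.
    static class CounterActor extends Thread {
        final BlockingQueue<Integer> mailbox = new LinkedBlockingQueue<>();
        private long state = 0; // private to C, yet shaped by every sender

        @Override public void run() {
            try {
                while (true) {
                    state += mailbox.take(); // process one message at a time
                }
            } catch (InterruptedException e) {
                System.out.println("final state: " + state);
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        CounterActor c = new CounterActor();
        c.start();
        // Actors A and B "share nothing", yet both mutate C's state via messages.
        Thread a = new Thread(() -> { for (int i = 0; i < 1000; i++) c.mailbox.add(1); });
        Thread b = new Thread(() -> { for (int i = 0; i < 1000; i++) c.mailbox.add(-1); });
        a.start(); b.start();
        a.join(); b.join();
        Thread.sleep(100);   // crude: give C time to drain its mailbox
        c.interrupt();       // crude shutdown, good enough for a sketch
        c.join();
    }
}
```

The mailbox serializes access, so there is no data race, but the value of C’s state still depends on what A and B send. The sharing hasn’t disappeared; it has just moved into the message flow.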

Stephan Schmidt tackled this subject not long ago. Read his post and don’t miss the comments, which are very enlightening as well. My takeaway from that discussion is that if there is a silver bullet for concurrent programming, Actors are not it.

Actually, it’s pretty clear to me that there is no silver bullet, and Alex Payne seems to agree. In this post, Alex sends a very powerful message of compromise and inclusion. Blocking or non-blocking I/O? Select or events? Java locking or Actors? Agents or refs?

Anyone who tells you that only one of these approaches works and the others don’t is trying to sell you something.

To quote Alex:

In fact, taking a hybrid approach to concurrency seems to be the way forward if the academy is any indication.

Strive to learn new languages, technologies and paradigms; just don’t fall in love with them.