
You can do it again. It’s not so much remembering. When I say you can 
remember a program exactly, I don’t think that it’s actually remembering. 
But you can do it again. Bill could remember the actual text; I can’t do 
that. But I can certainly remember the structure for quite a long time. 
Seibel: Is Erlang-style message passing a silver bullet for slaying the problem 
of concurrent programming? 
Armstrong: Oh, it’s not. It’s an improvement. It’s a lot better than shared 
memory programming. I think that’s the one thing Erlang has done—it has 
actually demonstrated that. When we first did Erlang, we went to 
conferences and said, “You should copy all your data.” And I think they 
accepted the argument about fault tolerance—the reason you copy all your 
data is to make the system fault tolerant. They said, “It’ll be terribly 
inefficient if you do that,” and we said, “Yeah, it will but it’ll be fault 
tolerant.”
The surprising thing is that it’s more efficient in certain 
circumstances. What we did for reasons of fault tolerance turned out 
to be, in many circumstances, just as efficient or even more efficient than 
sharing.
Then we asked the question, “Why is that?” Because it increased the 
concurrency. When you’re sharing, you’ve got to lock your data when you 
access it. And you’ve forgotten about the cost of the locks. And maybe the 
amount of data you’re copying isn’t that big. If the amount of data you’re 
copying is pretty small and if you’re doing lots of updates and accesses and 
lots of locks, suddenly it’s not so bad to copy everything. And then on the 
multicores, if you’ve got the old sharing model, the locks can stop all the 
cores. You’ve got a thousand-core CPU and one program does a global 
lock—all the thousand cores have got to stop. 
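The share-nothing, copy-everything style Armstrong describes can be sketched in a few lines. This is an illustrative sketch in Python rather than Erlang, and the names (`worker`, `run`, the queue itself) are my own, not from the interview: each worker builds a private result and sends a copy to a collector over a queue, so no thread ever takes a lock on shared data.

```python
import queue
import threading

def worker(worker_id, out):
    # Build a private result, then send an explicit copy of it.
    # The sender keeps no shared reference, so there is nothing to lock
    # -- the Erlang-style "copy all your data" approach.
    data = {"id": worker_id, "total": sum(range(worker_id * 10))}
    out.put(dict(data))

def run(n_workers=4):
    out = queue.Queue()
    threads = [threading.Thread(target=worker, args=(i, out))
               for i in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # The collector is the only reader; each message arrived as an
    # independent copy, so no global lock ever stalled the workers.
    results = {}
    while not out.empty():
        msg = out.get()
        results[msg["id"]] = msg["total"]
    return results

print(run())
```

The shared-memory alternative would have every worker locking one common dictionary on each update; here the only synchronization is inside the queue itself, which is exactly the trade Armstrong is pointing at.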
I’m also very skeptical about implicit parallelism. Your programming 
language can have parallel constructs but if it doesn’t map into hardware 
that’s parallel, if it’s just being emulated by your programming system, it’s 
not a benefit. So there are three types of hardware parallelism. 
There’s pipeline parallelism—so you make a deeper pipeline in the chip so 
you can do things in parallel. Well, that’s once and for all when you design