I’m making an effort to understand esoteric programming language features/buzzwords well enough that I can explain them to others. Some of them are pretty hairy, so I’ll start with one I figured out pretty quickly: Multiple dispatch / multimethods.
First let’s talk about two things that seem similar but are deceptively different: Dynamic single dispatch, and method overloading. Dynamic single dispatch, in most object-oriented languages, is just plain calling the methods that are attached to objects – Number.add() might add its argument to the number, whereas Array.add() might add an item to the array. When you call “add”, you could imagine that there’s an additional, invisible parameter that’s the type of object you called it from.
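Here's a tiny JavaScript sketch of that idea (the class names are made up for illustration) - the same method name does different things depending on the run-time type of the thing before the dot:

class MyNumber {
  constructor(value) { this.value = value; }
  add(x) { return new MyNumber(this.value + x); }   // numeric addition
}

class MyList {
  constructor(items) { this.items = items; }
  add(x) { this.items.push(x); return this; }       // append an item
}

let things = [new MyNumber(4), new MyList([1, 2])];
things.forEach(thing => thing.add(3));   // each call picks add() based on the receiver's run-time type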
In some languages – mostly statically typed ones like Java – you can also overload methods based on the object types of the arguments. For example, Number.add(Number n) might add its argument to the original number, whereas Number.add(String s) might concatenate the string representation of the number to its argument. So overloading the explicit arguments is pretty much just like overriding methods in a subclass, right? The only difference is that the “argument” is in front of the method with a “.” instead of in the argument list?
Wrong, as it turns out. In languages without multiple dispatch – which is most of them – method overriding is based on run-time type, whereas method overloading is based on compile-time type. That sounds very technical, but what it means is both simple and kind of surprising. If you have Number.add(Number myNumber) and Number.add(Integer myInteger), and you declare Number myNumber = new Integer(4) and then call someNumber.add(myNumber), you’re going to get the first version of the method – the one that applies to Numbers – even though your argument’s run-time type was the more specific subclass Integer. In order to use the Integer version of the method, you need to declare Integer myNumber = new Integer(4). I’m sure regular Java programmers are used to this, but it was counterintuitive enough to me that I had to try it out myself in order to really believe it.
In languages with multiple dispatch, such as Common Lisp or Julia, you usually have things called “multimethods”, which are declared basically like normal functions but “overload” based on the run-time type of their arguments. In such a language, you could call “add” on, say, 4 and 4.8 and correctly get the version that adds an Integer to a Float, even if both arguments were stored in variables of type Number.
Of course, you can emulate this feature simply by manually checking the run-time types of all your arguments, but that gets heinous pretty quickly.
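For example, here's roughly what that manual emulation might look like in JavaScript (typeTag, addMethods, and add are all made-up names for this sketch, not a real library) - a lookup table keyed on the run-time types of both arguments:

function typeTag(x) {
  if (Number.isInteger(x)) return "int";
  if (typeof x === "number") return "float";
  return "other";
}

// One entry per combination of argument types - this is the part
// that gets heinous as the number of types and arguments grows.
const addMethods = {
  "int,int":     (a, b) => a + b,
  "int,float":   (a, b) => a + b,
  "float,int":   (a, b) => a + b,
  "float,float": (a, b) => a + b,
};

function add(a, b) {
  const method = addMethods[typeTag(a) + "," + typeTag(b)];
  if (!method) throw new TypeError("no applicable add() method");
  return method(a, b);   // dispatch on the run-time types of *both* arguments
}

add(4, 4.8);   // picks the "int,float" version at run time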
(I think that "generic function" is a synonym for "multimethod"; if there's a distinction, it eludes me.)
Multiple dispatch was easy to explain; next up we have an infamously difficult-to-explain concept: monads. Most explanations start by saying "monads are actually quite simple", and that's only sort of true - they're maybe not as hard to understand as you've heard, but they're moderately complex as programming concepts go. The shortest version I can give: a monad is a wrapper type that comes with two operations - one that wraps a plain value, and one that chains together functions that take a plain value and return a wrapped one.
If this sounds a lot like Promises in JavaScript, that's because Promises are almost a type of monad (their API is slightly stripped down from the technical definition). Also, the newly-added async/await syntax in JavaScript, which lets you use Promises without wanting to tear your own eyes out, is somewhat parallel to the special "do" notation Haskell uses to make monads less clunky.
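To make the parallel concrete, here's the same (hypothetical) bit of JavaScript written both ways - fetchUser and fetchPosts are just stand-ins for any functions that return Promises:

// Stand-ins for any Promise-returning functions (hypothetical).
const fetchUser = id => Promise.resolve({ id, name: "Ada" });
const fetchPosts = user => Promise.resolve([user.name + "'s first post"]);

// Promise chaining: .then() takes a function that may itself return
// another Promise and flattens the result - much like a monad's "bind".
function showPosts(userId) {
  return fetchUser(userId)
    .then(user => fetchPosts(user))
    .then(posts => console.log(posts));
}

// The same logic with async/await, which reads like straight-line code.
async function showPostsAwait(userId) {
  const user = await fetchUser(userId);
  const posts = await fetchPosts(user);
  console.log(posts);
}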
Another esoteric language feature: syntactic macros, known mostly as a feature of Lisp, but also starting to show up in newer languages such as Scala and Julia.
Macros are a type of subroutine that works differently from functions. Or at least, differently from what are usually called "functions" - some people define "function" in such a way that macros count as functions. Normal functions, in most languages, evaluate all of their arguments first, and only then run their body using the resulting values.
That means, for example, that if you write a "println" function, and x=2 and y=3, then println(x+y) will print "5" rather than "2+3" or "x+y".
Syntactic macros, instead, receive their arguments as unevaluated code, and get to decide when - and whether - that code is evaluated, transforming it into other code in the process.
Most macros basically work like normal functions but with the arguments evaluated somewhere in the middle rather than at the beginning - "lazy evaluation" is the technical term for that. I already hinted at two possible syntactic macros - variations on "println" where println(x+y), with x=2 and y=3, prints either "x+y" or "2+3". That's basically just a normal function, but instead of evaluating the arguments right away, you choose when and whether they get evaluated.
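You can fake that last behavior in plain JavaScript by wrapping the argument in a thunk - a zero-argument function - so the callee decides when to evaluate it. This is only an emulation, not a real macro, and printlnValue and printlnLazy are made-up names:

let x = 2, y = 3;

// An ordinary function: the argument is evaluated before the call,
// so all we ever see is the value 5.
function printlnValue(value) {
  console.log(value);              // 5
}
printlnValue(x + y);

// A macro-ish version: we receive the expression unevaluated (as a thunk)
// and choose when - and whether - to run it.
function printlnLazy(thunk) {
  console.log(thunk.toString());   // "() => x + y"
  console.log(thunk());            // 5, evaluated on demand
}
printlnLazy(() => x + y);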
Another example: The way "if" works in most languages is very macro-like - the "if" statement takes two or more arguments: the first one wrapped in (), the second one wrapped in {}, and the remaining ones introduced by a special syntax involving "else." The reason "if" has to work this way is that if "if" worked like a normal function, the second argument - the block of code that's supposed to get executed if the condition is true - would get executed regardless of whether the first argument evaluated to true, which would defeat the point in many cases.
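Here's what an "if" built as an ordinary JavaScript function would have to look like - both branches have to be wrapped in thunks, or they'd both run before the function was even called (myIf is, of course, a made-up name):

function myIf(condition, thenThunk, elseThunk) {
  return condition ? thenThunk() : elseThunk();
}

myIf(
  2 + 2 === 4,
  () => console.log("this branch runs"),
  () => { throw new Error("this branch never runs, because it's wrapped in a thunk"); }
);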
Syntactic macros can be viewed as an extension of the compiler or interpreter - they let you add new syntax rules to the language. This is actually a more general definition than the one I gave - the type of macro I described has syntax that looks like the syntax for a normal function, but it behaves differently. But Lisp has another type of macro called a "reader macro" with different syntax - it behaves more like a quotation mark, changing how the following symbols are parsed.
The two different ways of understanding what a "macro" is can lead to different ways of implementing macros. The Julia and Scala implementations of macros generally follow my first explanation - they have syntax similar to functions, but they evaluate using different rules than normal functions. The macro implementation in the sweet.js project, on the other hand, aims to let the user extend the syntax of JavaScript in any way they want; as such, it's more general and powerful than the Julia or Scala syntax, but it's also much messier.
Because syntactic macros are basically "functions that write code", they usually operate at a higher level than normal functions - potentially more powerful, but harder to write and debug. In general, the less regular the syntax of a language, and the more "syntactic sugar" the language supports, the harder it is to write macros in that language. That's why most friendly, readable languages like Python don't support syntactic macros, while concise, painfully logical languages like Lisp and Scheme do. Some new languages, like Scala and Julia, try for the best of both worlds, and I think the jury's still out on how well they've done.
Macros are a very general concept, but there are a few especially common use cases - things like lazily evaluating arguments, defining new control structures, and generating repetitive boilerplate code - so some languages try to provide those specific pieces of macro functionality without opening up the entire can of worms that macros open.
I think it's undecided at this point whether the best route is to (1) ignore macros entirely, like most languages do, (2) implement most macro use cases as specific language features, without providing general macros, as in C++ or Smalltalk, or (3) implement true syntactic macros, as in Scala or Julia.
I realized that for some people - mostly depending on their main programming language - closures might qualify as an "obscure programming concept" (JavaScript folks should know this already, so go back to sleep.) There's a lot of confusion about lambdas, closures, anonymous functions, first-class functions, and callbacks - the five concepts tend to hang together, and I don't think any languages exist where all five of those things are completely distinct from one another. All of these things could be thought of as types of functions, or as descriptions of characteristics that functions might or might not have: an anonymous function is simply a function without a name; a "lambda" is, for most practical purposes, another word for an anonymous function; a language has first-class functions if functions can be stored in variables and passed around like any other value; a callback is a function you hand to some other code to be called later; and a closure is a function that stores the scope it was created in along with the function itself.
I said that a closure stores the scope it was created in along with the function. What that's usually going to mean is something like this:
let foo = function() {
  let x = 1;                  // "x" is private to this particular call of foo
  let bar = function() {      // reads x from the enclosing scope
    return x;
  };
  let baz = function(a) {     // writes x in the enclosing scope
    x = a;
  };
  return {bar: bar, baz: baz};
};

let f = foo();
let bar = f.bar;
let baz = f.baz;
console.log(bar());           // 1
baz(3);
console.log(bar());           // 3
You see (if you run this in JavaScript) how "bar" and "baz" keep references to "x", even after the call to "foo" that created "x" has finished running? That "x" isn't visible directly from the outer scope, but "bar" and "baz" act as the "getter" and the "setter", which gives you most of the functionality of objects, but with a syntax that revolves around functions rather than methods.
Why is this useful? I can't possibly explain all the applications here, but one common use in JavaScript would be to dynamically create a menu. You might create the menu by passing a variety of options into a function, which returns an object that has hidden references to all the things you passed in.
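A bare-bones sketch of that pattern (createMenu and everything inside it is hypothetical, just to show the shape of the idea):

function createMenu(items, onSelect) {
  let selected = null;        // hidden state, reachable only through the closures below
  return {
    select: function (index) {
      selected = items[index];
      onSelect(selected);
    },
    current: function () {
      return selected;
    }
  };
}

let menu = createMenu(["Open", "Save", "Quit"], choice => console.log("picked " + choice));
menu.select(1);        // prints "picked Save"
menu.current();        // "Save"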
Generally speaking, the more "functional" a language is, the more the five concepts I described in this post will be the default way of doing things. In JavaScript and Lisp, closures are the default way functions work. In Python and Ruby, most functions aren't closures, but there's special syntax to make closures. Some languages, like C or older versions of Java, don't have closures at all. Since I'm primarily a JavaScript programmer, I have trouble imagining life without closures, and it was weird to write about them as if they were an exotic thing that needed explaining.
Ever since I started writing this series, I've dreaded writing about continuations. It's actually not very difficult to explain what continuations are; what's harder to explain is why you would ever want to use them - I've barely ever programmed in a language that supports continuations, so my feel for this is pretty limited.
Strictly speaking, a continuation is a snapshot of the state and control flow of a program at a particular point in time. But when people talk about using continuations, they're almost always talking about using them with a function called "call-with-current-continuation", abbreviated "call/cc." Call/cc is very similar to "goto", with two main differences that make it a little more organized: first, instead of jumping to a label, call/cc hands you the "place to jump back to" as a first-class value - the continuation - which you can store in a variable, pass around, and invoke later like a function; and second, when you invoke it, you can pass values along, and they show up back at the point where the continuation was captured.
So you're basically jumping back in time, but you can also sneak along some information from the future, as arguments when you invoke the goto-like object. Which, if you think about it, is how a function return works: "return" jumps back to wherever the function was called from, and the return value is the information it carries along.
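JavaScript doesn't have call/cc, but you can get a feel for continuations by writing in continuation-passing style, where "the rest of the program" is passed around explicitly as a function. This is only a rough analogy, not real call/cc, and the names here are made up:

// Instead of returning, each function takes an extra argument -
// the continuation - and hands its result to it.
function addCps(a, b, k) { k(a + b); }
function squareCps(n, k) { k(n * n); }

// Compute (2 + 3) squared by threading continuations through.
addCps(2, 3, function (sum) {
  squareCps(sum, function (result) {
    console.log(result);   // 25
  });
});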
This is a very general idea, and supposedly all control constructs - functions, loops, exceptions, and so on - can be implemented with continuations. But most languages already have functions, loops, and exceptions, so it's not clear why you'd introduce a glorified "goto" with all the mess that implies. You'll notice that most modern languages don't support continuations, and not too many people are complaining about that.
On the other hand, Paul Graham wrote several chapters about how to use continuations, so they're probably cool and powerful in the same way syntactic macros are. They're apparently useful for situations where you need to sort of "roll back time" - backtracking constraint solvers, jumping quickly out of deeply recursive functions, websites where you can use the "Back" button, and so on. This is another one of those language features where it's genuinely controversial whether they're worth the hassle.