Somebody once wrote that there’s no more seductive sentence in the English language than, “I want to hear your story,” and maybe they’re right, because often you don’t have to do any more than just say that.

—Mitch Albom

It is not the critic who counts; not the man who points out how the strong man stumbles, or where the doer of deeds could have done them better. The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood, who strives valiantly; who errs and comes short again and again, because there is no effort without error and shortcoming; but who does actually strive to do the deeds; who knows the great enthusiasm, the great devotion, who spends himself in a worthy cause, who at the best knows in the end the triumph of high achievement, and who at the worst, if he fails, at least fails while daring greatly. So that his place shall never be with those cold and timid souls who know neither victory nor defeat.

JavaScript needs macros

JavaScript needs modules? No.

JavaScript needs a macro compiler companion to extend the language with new statements:

    defs export(VarDeclaration v) {
        this.{{v.varName}} = {{v.initValue}};
    }

    defs module(VarName n, Block b) {
        var {{n}} = new function () {{b}};
    }

    defs import(VarName module, Period p, VarName vName) {
        var {{vName}} = {{module}}.{{vName}};
    }

So (~> pronounced “compiles to”):

    >>> export var foo = 3;
    ~> this.foo = 3;

    >>> module X { ... }; 
    ~> var X = new function () { ... }

    >>> import A.B;
    ~> var B = A.B;

And there are your modules, compatible with every browser instantaneously, without having to wait for IE12 to implement any JS dialects.
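For the skeptical: the expanded forms are already valid JavaScript today. Here is a hand-expanded sketch of what the compiler's output would run like (no real macro compiler involved; the names are illustrative):

```javascript
// `module X { export var foo = 3; }` would expand to:
var X = new function () {
  // `export var foo = 3;` inside the module body becomes:
  this.foo = 3;
};

// `import X.foo;` would expand to:
var foo = X.foo;

console.log(foo); // 3
```

The `new function () { ... }` trick gives the module body its own scope while exposing exports as properties of the resulting object.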

Sometimes I think the thalamus gets as much attention as it does because it looks so cool in pictures. Ditto for the hippocampus when it comes to memory.


JSON’s Cinderella Story

In 1999, the chord of ideas we know as JSON was born into our world as part of the JavaScript “scripting” language. This tiny little package of data meta-structure was rarely ever glanced at for most of its childhood. So unimportant was it that it wasn’t completely standardized until its 7th year. That year was 2006, and that year, everything changed.

It is now 2010, and JavaScript has undeniably become a dominant feature of the web.

When I was growing up we had a small glass end table. The glass was round and just laid on top of its stand. The table had been knocked over many times and we began to believe the glass was bullet-proof or indestructible somehow. My earliest memory of this table was when I was around 5 years old. My last memory of it was when I was 14 and proving to my friends that this table cannot be broken.


Just replaced a ton of stuff like this:

function topOffset(obj) {
  var curtop = 0;
  if (obj.offsetParent) {
    do {
      curtop += obj.offsetTop;
    } while (obj = obj.offsetParent);
    return curtop;
  }
  return 0;
}

With this:

function topOffset(obj) {
  return obj ? obj.offsetTop + topOffset(obj.offsetParent) : 0;
}

Feels good.

Here’s a bit more of a mind-cruncher if anyone feels like mental exercise:

  var nextCell = (function f(c, d) {
                    return c && (c.depth <= d ? c : ( && f(, d)));
                  })(, currentCell.depth)

Anomalous motion illusions

One, two, three, and four.

I have always thought Hans Christian Andersen should have written a companion piece to the Emperor’s New Clothes, in which everyone points at the Emperor shouting, in a Nelson-from-The-Simpsons voice, “Ha ha! He’s naked.” And then a lone child pipes up, “No. He’s actually wearing a really fine suit of clothes.” And they all clap hands to their foreheads as they realise they have been duped into something worse than the confidence trick: they have fallen for what E. M. Forster called the lack-of-confidence trick. How much easier it is to distrust, to doubt, to fold the arms and say “Not impressed”. I’m not advocating dumb gullibility, but it has always amused me that those who instinctively dislike Apple for being apparently cool, trendy, design-fixated and so on are the ones who are actually so damned cool and so damned sensitive to stylistic nuance that they can’t bear to celebrate or recognise obvious class, beauty and desire. The fact is that Apple users like me are the uncoolest people on earth: we salivate, dribble, coo, sigh, grin and bubble with delight.
—Stephen Fry (link)

JS Macros

Something I’d love to see written on top of JavaScript:

macro let(Var v, Equals, Exp e, Block body) {
    (function (<v>) <body>)(<e>);
}

macro aif(Exp test, Block body) {
    let it = <test> {
        if (it) <body>
    }
}
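Hand-expanding an `aif` (anaphoric if) per these sketches gives plain JavaScript today; the example below is hypothetical, there is no real macro compiler behind it:

```javascript
// aif (5 > 0) { ... use it ... }
// expands, via the `let` macro, into an immediately-invoked
// function that binds the test's value to `it`:
var label = (function (it) {
  return it ? "positive" : "not positive";
})(5 > 0);

console.log(label); // "positive"
```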
Filed ↓ dreaming

Good code needs few unit tests

> “We write great code here, just look at how many unit tests we have!”

This meme needs to die. Let’s make the formal argument:

  • Units are abstractions which encapsulate a piece of functionality.
  • A unit test covers an intended use-case of the interface a unit exposes.
  • One quality measure of an abstraction is the complexity of its interface (i.e. API size).
  • Another measure is the amount of state the abstraction encapsulates.
  • Good abstractions have simple interfaces and as little state as necessary.

Hence, good abstractions require few unit tests. Conversely, lots of unit tests are a symptom of complex and highly stateful abstractions, which, in turn, are a sign of a low-quality architecture.
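A toy contrast (hypothetical code, not from any real project): a stateless abstraction is covered by a few input/output cases, while a stateful one multiplies the cases that honest coverage has to reach.

```javascript
// Stateless: one input -> one output. A handful of input/output
// cases covers the whole interface.
function add(a, b) {
  return a + b;
}

// Stateful: each call changes what the next call means, so tests
// must cover sequences of calls, not just single invocations.
function Accumulator() {
  this.total = 0;
}
Accumulator.prototype.add = function (n) {
  this.total += n;
  return this.total;
};
Accumulator.prototype.reset = function () {
  this.total = 0;
};
```

The `add` function needs a few examples; `Accumulator` needs tests for `add` after `reset`, repeated `add`s, and every other ordering its state makes meaningful.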


Kay said:

science is a relationship between what we can represent and are able to think about, and “what’s out there”

Hamming said:

There are wavelengths that people cannot see, there are sounds that people cannot hear, and maybe computers have thoughts that people cannot think

If both are right, what does that mean?

Measuring Abstractions, Part 0

Reality is complicated. We use metaphors to organize that complexity into understandable models. When we combine metaphors using common patterns, we call those patterns “abstractions”.

Metaphor -> Abstraction -> Model

If a metaphor is a step, an abstraction is a cross-body lead, and a model is the rumba.

In computing, we use abstractions in an effort to make complicated computations simple and easy to reason about. A good abstraction, like a good dance, is seamless and elegant. Variable scope, function invocation, objects, and actors - these have physical, social, and conscious meaning completely missing from the physical components of a computer.

When expressing programs, we constantly have to decide between the metaphors we choose to represent our intentions. This inevitably leads to what, as modestly as possible, I consider the fundamental question of computing: given a computation, how do we find the right abstraction to model it?

Of course, we can’t answer this without first settling some lemmas. This post is about the first:

How can we measure abstractions?

  1. power
  2. size
  3. clarity


Power

The power of an abstraction is measured by the amount of complexity it lets us ignore when we employ it. Computing has done an excellent job creating ever-more-powerful abstractions, mostly by creating massive compound-abstractions. Consider an ORM command one might write:

Roles.objects.filter(player__league=get_config_value('league_id'), role__rrole='ADM', user=request.user).delete()

In a typical system, a compiler compiles this to Python VM-code, which is interpreted to generate an SQL string, which is sent over a TCP/IP socket (several semesters’ worth of complexity in itself), where the mysqld daemon receives and parses it, compiles it into MySQL instructions, which are interpreted and executed against your hard disk, returning the results as a string passed back over that socket, which is parsed back into a memory structure by the ORM. That’s power.


Size

Apart from being powerful, the above example is also massive. It’s a compound-abstraction, because it uses parsers, compilers, and sockets. I would be very surprised if someone without a graduate degree (or equivalent work experience) could explain the details which have been abstracted away. Even then, it would take considerable effort to deduce the events it causes at the hardware level. This has both benefits and drawbacks, which I won’t get into here.


Clarity

A clear abstraction is one which can disappear when the necessity to look through it arises. Our consciousness is limited in what we can conceptualize at once, but with practice, abstractions become subconscious. Clarity is the measure of how quickly we can expand on the simplified model of the abstraction. Compound abstractions are always less clear than their components.

Further thought

There is a lot more thinking to do here. Compound abstractions have their benefits and drawbacks. If two abstractions are functionally equal, a compound abstraction with fewer parts is “better” than one with more. But this is rarely the case.

Power, size, and clarity aren’t independent, which is a bad sign - a good model should measure things which don’t affect each other.

Another important distinction completely ignored here is the notion of time and the stating of intentions. When does who enter what into the system? The simplest form of this question is the distinction between runtime and compile-time metaphors.

Filed ↓ brain dump

The Black Box Disease

"The smartest people I know disdain abstractions, preferring to speak in concrete specifics. Take Paul Buchheit, the genius behind Gmail. When he talks about building web applications, he doesn’t think about high-level things like the underlying semantic structure of the data — instead he talks about the little “heads” that read data off of the hard disk and how fast they can move."
—Aaron Swartz, The Genius is in the Details

There’s little question that most contemporary software engineers have only a vague understanding of the software and hardware stack they use every day. I think this is a sign of a still-maturing industry, in which “black box” abstractions are training wheels whose advantage will wane with time.

Abstractions have an “opacity”, and the smartest people I know prefer transparent abstractions to opaque ones. In the context of programming systems, this notion instantiates as the distinction between a language which makes it easy to look underneath high-level features and one which hides implementation details away. This is a general explanation of the attractiveness of small-kernel languages, in which most abstractions are implemented in the language itself, available for inspection (and modification). The same applies to large software projects: abstractions which reuse existing metaphors are superior to those which create new ones.

So if we are to be master engineers, we should avoid abstractions which permanently hide details, and instead seek out those which allow us to ignore the details when convenient, but promptly think through the abstraction when necessary.

I left it in Los Angeles.

You may have seen this graphic visualizing the relative speeds of accessing L1 cache, L2 cache, RAM, and a disk seek. Jeff Dean expounds in a presentation including other similar measurements, and Hacker News hypothesizes about other analogies. I’m surprised nobody hit on this one, though:

Just slow things down by a factor of 10^9:

If L1 cache is like remembering something off the top of your head, then L2 cache is like remembering it after thinking for a couple of seconds - or maybe glancing at a cheat sheet next to your monitor.

RAM lookup is like walking down the hall, finding someone who knows the answer, and asking them.

Disk seeking, by comparison, would be like walking from New York to Los Angeles, taking two months to find whatever it is you need, and walking back to New York (assuming you walk 10 hours a day, at 5-6 miles per hour).

That’s a long walk.

(numbers from here)


13700000 sec / (3600 sec/hour) / (24 hours/day) ~ 160 days

(2 * 2776 miles) / (5.5 miles/hour) / (10 hours/day) ~ 100 days

Which leaves you two months of wandering around LA :)
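The same arithmetic as a quick script (the seek time, mileage, and walking-speed figures are the post's own):

```javascript
// ~13.7 million slowed-down seconds per disk seek, spent waiting:
var seekDays = 13700000 / 3600 / 24;   // ~159 days

// NY -> LA round trip: 2776 miles each way, 5.5 mph, 10 hours walking per day:
var walkDays = (2 * 2776) / 5.5 / 10;  // ~101 days

// The difference is the time left over for wandering around LA:
console.log(Math.round(seekDays - walkDays) + " spare days"); // roughly two months
```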