ECMAScript 6

The limits of my language mean the limits of my world.
Ludwig Wittgenstein

For the past few months I’ve been exclusively writing ECMAScript 6 code by taking advantage of transpilation[1] to a currently supported version of JavaScript.

ECMAScript 6, henceforth ES6 and formerly ES.next, is the latest version of the specification. As of August 2014 no new features are being discussed, but details and edge cases are still being sorted out. It’s expected to be completed and published mid-2015.

Adopting ES6 has simultaneously increased my productivity (by making my code more succinct) and eliminated entire classes of bugs by addressing common JavaScript gotchas.

More importantly, however, it’s reaffirmed my belief in an evolutionary approach to language and software design, as opposed to clean-slate recreation.

This should be fairly obvious to you if you’ve been using CoffeeScript, which set out to focus on the good parts of JS and hide the broken ones. ES6 has been able to adopt a lot of CoffeeScript’s great innovations in a non-disruptive way, to such an extent that some have even questioned its role moving forward.

For all intents and purposes, JavaScript has merged CoffeeScript into master. 

I call that a victory for making things and trying them out.
@raganwald

Instead of thoroughly reviewing all the new features, I’ll point out the most interesting ones. To incentivize developers to upgrade, new languages or frameworks need to (1) feature a compelling compatibility story and (2) give you a big enough carrot.

#The module syntax

ES6 introduces syntax for defining modules and declaring dependencies. I emphasize the word syntax because ES6 is not concerned with the actual implementation details of how modules are fetched or loaded.

This further strengthens the interoperability between the different contexts in which JavaScript can be executed.

Consider as an example the simple task of writing a reusable implementation of CRC32 in JavaScript.

Up to now, no guidelines existed for how to actually do this. A common approach is to introduce a function declaration:

function crc32 () {
  // …
}

With the caveat, of course, that it introduces a single fixed global name that other parts of the code will have to refer to. And from the perspective of the code that uses that crc32 function, there’s no way to declare the dependency. One just has to assume the function will exist prior to the code’s interpretation.

With this situation in mind, Node.JS led the way with the introduction of the require runtime function and the module.exports and exports objects. Despite its success in creating a thriving ecosystem of modules, the interoperability possibilities were still somewhat limited.

A common scenario that illustrates this is the generation of browser bundles of modules, with tools like browserify or webpack. These tools are only feasible because they treat require() as syntax, effectively ridding it of its inherent dynamism.

If you’re trying to transport code to the browser, the following is not subject to static analysis and therefore breaks:

require(woot() + '_module.js')

In other words, the packer’s algorithm can’t possibly know what woot() means ahead of time.
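
By contrast, a require whose argument is a plain string literal can be resolved at build time, which is exactly what these bundlers count on:

// statically analyzable: the path is a string literal
require('./crc32_module.js')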

ES6 has introduced the right set of restrictions while accommodating most existing use cases, drawing inspiration from the many informally-specified ad-hoc module systems out there, like jQuery’s $.

The syntax does take some getting used to. The most common pattern for dependency definitions is surprisingly impractical.

The following code:

import crc32 from 'crc32'

works for

export default function crc32(){}

but not for

export function crc32(){}

The latter is considered a named export and requires the { } syntax in the import statement:

import { crc32 } from 'crc32'

In other words, the simplest (and arguably most desirable) form of module definition requires the extra default keyword, or, in its absence, the use of { } when importing.
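
To make the distinction concrete, here’s a sketch of a hypothetical crc32 module that pairs a default export with a named helper (table is made up for illustration), and the single import statement that consumes both:

// crc32.js
export default function crc32(buf){ /* … */ }
export function table(){ /* … */ }

// app.js
import crc32, { table } from './crc32'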

#Destructuring

One of the most common patterns that has emerged in modern JavaScript code is the usage of option objects.

So common is this practice that newly specified browser APIs, like WHATWG’s fetch (a modern substitute for XMLHttpRequest), follow it:

fetch('/users', {
  method: 'POST',
  headers: {
    Accept: 'application/json',
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    first: 'Guillermo',
    last: 'Rauch'
  })
})

The widespread adoption of this pattern has effectively prevented the JavaScript ecosystem from falling into The Boolean Trap.

If said API accepted regular parameters as opposed to an options object, calling fetch would be an exercise in argument order memorization and the typing of the null keyword.

// artistic rendition of a nightmare alternative world
fetch('/users', 'POST', null, null, {
  Accept: 'application/json',
  'Content-Type': 'application/json'
}, null, JSON.stringify({
  first: 'Guillermo',
  last: 'Rauch'
}))

On the implementation side, however, things haven’t looked nearly as pretty. The function’s signature is no longer descriptive of its possible inputs:

function fetch(url, opts){
  // …
}

Usually followed by the manual assignment of defaults to local variables:

opts = opts || {}
var body = opts.body || ''
var headers = opts.headers || {}
var method = opts.method || 'GET'

And unfortunately for us, despite being common, the || practice introduces subtle bugs: it treats every falsy value as missing. In this case opts.body could legitimately be 0, so robust code would most likely look like:

var body = opts.body === undefined ? '' : opts.body

Thanks to destructured parameters we can at once clearly define the parameters, properly set defaults and expose them to the local scope:

function fetch(url, { body = '', method = 'GET', headers = {} }){
  console.log(method) // no `opts.` everywhere!
}

As a matter of fact, defaults can apply to the entire object parameter as well:

function fetch(url, { method = 'GET' } = {}){
  // the second parameter defaults to {}
  // the following will output "GET":
  console.log(method)
}

You can also destructure with the assignment operator as follows:

var { method, body } = opts
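
Renaming and defaults work in assignment position too; a small sketch, reusing the opts object from above:

// bind opts.method to a local `verb`, defaulting to 'GET'
var { method: verb = 'GET', body } = opts
console.log(verb)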

To me, this is reminiscent of the expressiveness granted by with, without the magic or the negative performance implications.

#New Conventions

Some parts of the language have been altogether replaced with better alternatives that’ll quickly become a new default for how you write JavaScript.

I’ll go over some of them.

#let/const over var

Instead of writing var x = y you’ll most likely be writing let x = y. let scopes your variable definition to the block it’s defined in:

if (foo) {
  let x = 5
  setTimeout(function(){

    // x is `5` here
  
  }, 500)
}
// x is not defined here (ReferenceError)

This is especially useful in for and while loops:

for (let i = 0; i < 10; i++) {}
// `i` doesn't exist here.
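
And because let creates a fresh binding on every iteration, it also fixes the classic closure-in-a-loop gotcha:

for (var i = 0; i < 3; i++) {
  setTimeout(() => console.log(i), 0) // logs 3, 3, 3
}

for (let j = 0; j < 3; j++) {
  setTimeout(() => console.log(j), 0) // logs 0, 1, 2
}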

When you want to ensure binding immutability with the same semantics as let, use const instead.
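
Note that const makes the binding immutable, not the value it points to:

const headers = { Accept: 'application/json' }
headers.Accept = 'text/html' // fine: the object itself is still mutable
headers = {} // TypeError: the binding can't be reassigned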

#template strings over concatenation

With the lack of sprintf or similar utilities in the standard JavaScript library, composing strings has always been more painful than it should be.

Template strings make embedding expressions trivial, and they support multiple lines. Simply replace ' or " with `.

let str = `
  Hello ${first}. 
  We are in the year ${new Date().getFullYear()}
`

#class over prototypes

Defining a class was cumbersome and required a deep understanding of the language internals. Even though it’s still obviously useful to grasp the inner workings, the barrier to entry for newcomers was unnecessarily high.

class offers syntactic sugar for defining a constructor function, the methods within prototype, and getters / setters. It also provides prototypal inheritance with syntax alone (no utilities or third-party modules).

class A extends B {
  constructor(){}
  method(){}
  get prop(){}
  set prop(value){}
}

I was initially surprised to learn that classes are not hoisted the way function declarations are. You should therefore think of them as translating roughly to var A = function(){} rather than function A(){}.
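
A quick way to see the difference:

new A() // ReferenceError: A is not accessible before its declaration
class A {}

b() // fine: function declarations are hoisted
function b(){}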

#()=> over function

Not only is (x, y) => {} shorter to write than function (x, y) {}, but this within the function body will most likely refer to what you expect.

The so-called “fat arrow” functions are lexically bound. Consider the example of a method within a class that launches two timers:

class Person {
  constructor(name){
    this.name = name
  }

  timers(){
    setTimeout(function(){
      console.log(this.name)
    }, 100)

    setTimeout(() => {
      console.log(this.name)
    }, 100)
  }
}

To the dismay of newcomers to the language, the first timer (using function) will log undefined. The second one will correctly log the person’s name.
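
Before arrow functions, the usual workarounds were to capture this in a variable or to bind it explicitly; the same method would have read roughly:

class Person {
  constructor(name){
    this.name = name
  }

  timers(){
    var self = this // capture `this` manually
    setTimeout(function(){
      console.log(self.name)
    }, 100)

    setTimeout(function(){
      console.log(this.name) // or bind `this` explicitly
    }.bind(this), 100)
  }
}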

#First-class support for async I/O

Asynchronous code execution has been around for almost the entire history of the language. setTimeout, after all, was introduced around the time JavaScript 1.0 came out.

But arguably, the language didn’t really support it. The return value of function calls that schedule “future work” is normally undefined, or, in the case of setTimeout, a Number.

The introduction of Promise addresses this, and by doing so fills a very large hole of interoperability and composability.

For one, the APIs you’ll encounter become wholly more predictable. As a test of this, consider the new fetch API. How does it work beyond the signature we described? You guessed right: it returns a Promise.
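
Consuming it is then a matter of simple chaining; a minimal sketch, assuming the endpoint responds with JSON:

fetch('/users')
  .then(res => res.json())
  .then(users => console.log(users))
  .catch(err => console.error(err))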

If you’ve used Node.JS in the past, you know that there’s an informal contract that callbacks will follow the signature:

function (err, result){}

Also informally specified is the idea that callbacks will fire only once, and that null (not undefined or false) will be the value passed in the absence of an error. Except this might not always be the case.
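
Promise gives that contract teeth. As a sketch, a Node-style callback API (fs.readFile here) can be wrapped into one; readFileAsync is just an illustrative name:

var fs = require('fs')

function readFileAsync(path){
  return new Promise((resolve, reject) => {
    fs.readFile(path, (err, result) => {
      if (err) return reject(err) // the error channel is explicit
      resolve(result) // a promise settles at most once, by design
    })
  })
}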

#Towards the future

ES6 is gaining a lot of momentum in the ecosystem. Chrome and io.js have already incorporated some of its features. A lot has already been written about it.

But what’s worth pointing out is that this momentum has been largely enabled by transpilation rather than actual support. Great tools have emerged to enable transpiling and polyfilling, and browsers have over time added proper debugging and error reporting support for them (through source maps).

The evolution of the language and its proposed features are outpacing implementation. As mentioned above, Promise is genuinely exciting as a building block alone, but consider the benefits of solving the callback problem once and for all.

ES7 is poised to do this by introducing the possibility of await-ing a promise:

async function uploadAvatar(){
  let user = await getUser()
  user.avatar = await getAvatar()
  return await user.save()
}
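
And since an async function itself returns a Promise, callers can compose it like any other:

uploadAvatar()
  .then(user => console.log('saved', user))
  .catch(err => console.error(err))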

Even though the spec is still in early discussion, the same tool that compiles ES6 to ES5 already enables it.

There’s still substantial work left to do to make sure the process of adopting new language syntax and APIs becomes even more frictionless for those getting started.

But one thing is for certain: we must embrace the moving target.

1. ^ I use the word “transpilation” throughout the article on the basis of its popularity to refer to JavaScript source-to-source compilation. That aside, the merits of the term are technically debatable.