A Robot for Timo

Here at FlightCar Engineering we’re a very small team, and one of us — Timo Zimmermann — works remotely from Heidelberg, Germany. Timo’s an expert in the web framework we use, Django, and is awesome to work with: mixing together good humour, an enjoyment of teaching and learning, and deep technical expertise.

One day a link to Double Robotics got passed around our internal chat room — it’s an unexpected use of Segway technology, putting an iPad on top of a mobile robot and letting a remote participant drive the robot around while video chatting. We keep a video chat with Timo open while at work, so we were pretty interested in this.

There wouldn’t be much point in FlightCar buying one of these robots; our local developers fit around a single desk. Still, it would be useful to be able to video chat with Timo and have him be able to choose which of us to “look” at, as well as being able to join in with office conversations in general. Could we come up with something much simpler that still has most of the advantages of the Double robot in our situation?

I have a little electronics experience (from my time at One Laptop Per Child, as well as a previous fun personal project) and recently received an RFduino from backing its Kickstarter. Alex Fringes and I decided to go ahead and build a basic, stable/unmoving telepresence device as a present for Timo. Here’s what we did:

Parts list

$140 Bescor MP-101 pan head with power supply and remote control
$68 RFduino “teaser kit” + prototyping shield + single AAA battery shield
$29 Rosco 8″ Snake Arm
$13 Rotolight Male 1/4″ to 1/4″ adapter
$15 Grifiti Nootle iPad mini Tripod Mount

Total: $265 USD

I’m not counting the cost of the iPad (the Double Robotics robot costs $2500 and doesn’t include an iPad either), or the tripod we’re putting the Bescor pan head on top of (I had a monopod already, and basic tripods are very cheap), but everything else we used is listed above. Here’s the final result:

How it works

The pan head is easy to control programmatically. It has a 7-pin port on the back, and four of the pins correspond directly to up/down/left/right — to move in a direction, you just apply voltage to that pin until you want to stop. This is a perfect match for an Arduino-style microcontroller; Arduino is a hobbyist electronics platform that makes it easy to cheaply prototype new hardware creations, by giving you I/O pins you can attach wires to and a simple programming interface. Local electronics hacker and Tiny Museum co-founder Steve Pomeroy helped out by determining the pinout and soldering between the remote control port’s pins and our RFduino’s prototyping board, and Alex set to work writing the code that would run on the RFduino and iPads. We ended up with an architecture like this:

So, to expand on the diagram: Timo moves his iPhone; its orientation is sensed and passed on to our local iPad via the nodejs bridge (which exists just to proxy through NAT), and the iPad converts it into a single letter: “r”, “l”, “u”, “d”, or “s” (for stop). The RFduino reads one character at a time over Bluetooth Low Energy and sends a voltage pulse to the appropriate pin. We chose iPhone orientation sensing as the control mechanism at Timo’s end, but you could also use explicit direction buttons, or even something like face detection.
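
To make the protocol concrete, here’s a minimal sketch of that conversion in JavaScript. The real logic lives in the iOS apps; the deadzone value and the sendToRFduino helper are made up for illustration:

// Minimal sketch of the direction-to-letter protocol, assuming pitch
// and roll in degrees; the threshold is an illustrative guess.
function directionLetter(pitch, roll) {
  var DEADZONE = 10; // degrees of tilt to ignore around "level"
  if (pitch > DEADZONE)  return "u"; // tilt forward: pan up
  if (pitch < -DEADZONE) return "d"; // tilt back: pan down
  if (roll > DEADZONE)   return "r"; // tilt right: pan right
  if (roll < -DEADZONE)  return "l"; // tilt left: pan left
  return "s";                        // near level: stop
}

// The local iPad writes one letter at a time to the RFduino over BLE,
// e.g. (hypothetical helper): sendToRFduino(directionLetter(p, r));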

We decided to hide the fact that we were building this from Timo and introduced it to him as a surprise, coincidentally on Valentine’s Day. We love you Timo!

Finally, we’ve put all of the code we wrote — for the RFduino, the nodejs bridge, and the local and remote iOS apps — under an open source license on GitHub, so we’ve shared everything we know about how to build these devices. We’d be very happy if other people can help improve the features we’ve started on and find a cheaper way to build more of these!

(By the way, we’re hiring for a Lead Front End Engineer in Cambridge, MA at the moment!)

More technical talks

Since my blog post arguing that Technical talks should be recorded, I’ve continued to record talks – here are the new recordings since that post, mostly from the Django Boston meetup group:

My award for “best anecdote” goes to Adam Marcus’s talk, which taught me that if you ask 100 Mechanical Turk workers to toss a coin and tell you whether it’s heads or tails, you’ll get approximately 70 heads. Consistently. This either means that everyone’s tossing biased/unfair coins, or (and this is the right answer) that you can’t trust the average Turk worker to actually perform a task that takes a couple of seconds. (Adam Marcus goes on to describe a hierarchy where you start out giving deterministic tasks to multiple workers as cross-checks against each other, and then over time you build relationships with and promote individual workers whose prior output has been proven trustworthy.)

Dell C6100 XS23-SB server

Last week’s laptop review reminds me that I should also write about a new server purchase. (I know, everyone’s moving to cloud computing, and here I am buying a rackmount server to colocate...)

Kelly Sommers has one of the best dev blogs out there, and she recently wrote about a new server she’s installing at home. It turns out that Dell made some dramatically useful servers around four years ago — the server is a slim rackmount size (2U) yet contains four independent nodes, each of which can carry dual Xeon processors, eight RAM banks, and three 3.5″ disks. Dell didn’t sell these via standard markets: they went to large enterprises and governments, and are now off-lease and cheaply available on eBay. They’re called “Dell C6100” servers, and there are two models that are easy to find: XS23-SB, which uses older (LGA771) CPUs and DDR2 RAM; and XS23-TY3, which uses newer LGA1366 CPUs and DDR3. Here’s a Serve the Home article about the two models. (There are also new C6100 models available from Dell directly, but they’re different.)

I got one of these — each of the four nodes has two quad-core Xeon L5420s @ 2.5GHz and 24GB RAM, for a total of 8 CPUs and 96GB RAM for $750 USD. I’ve moved the RAM around a bit to end up with:

CPU | RAM | Disk
2 * L5420 | 32GB | 128GB SSD (btrfs), 1TB (btrfs)
2 * L5420 | 24GB | 3 * 1TB (raid5, ext4)
2 * L5420 | 24GB | 3 * 750GB (raid5, ext4)
2 * L5420 | 16GB | 2 * 1TB (raid1, ext4)

While I think this is a great deal, there are some downsides. These machines were created outside of the standard Dell procedures, and there aren’t any BIOS updates or support documentation available (perhaps Coreboot could help with that?). This is mainly annoying because the BIOS on my XS23-SB (version 1.0.9) is extremely minimal, and there are compatibility issues with some of the disks I’ve tried. A Samsung 840 EVO 128GB SSD is working fine, but my older OCZ Vertex 2 does not, throwing “ata1: lost interrupt” to every command. The 1TB disks I’ve tried (WD Blue, Seagate Barracuda) all work, but the 3TB disk I tried (WD Green) wouldn’t transfer at more than 2MB/sec, even though the same disk does 100MB/sec transfers over USB3, so I have to suspect the SATA controller — it also detected the disk as having 512-byte logical sectors instead of 4k sectors. Kelly says that 2TB disks work for her; perhaps we’re limited to 2TB per drive bay by this problem.

So what am I going to use the machine for? I’ve been running a server (void.printf.net) for ten years now, hosting a few services (like tinderbox.x.org, openetherpad.org and a Tor exit node) for myself and friends. But it’s a Xen VM on an old machine with a small disk (100GB), so the first thing I’ll do is give that machine an upgrade.

While I’m upgrading the hardware, what about the software? Some new technologies have come about since I gave out accounts to friends by just running “adduser”, and I’m going to try using some of them: for starters, LXC and Btrfs.

LXC allows you to “containerize” a process, isolating it from its host environment. When that process is /sbin/init, you’ve just containerized an entire distribution. Not having to provide an entirely separate disk image or RAM reservation for each “virtual host” saves on resources and overhead compared with full virtualization from KVM, VirtualBox or Xen. And Btrfs allows for copy-on-write snapshots, which avoid duplicating data shared between multiple snapshots. So here’s what I did:

$ sudo lxc-create -B btrfs -n ubuntu-base -t ubuntu

The “-B btrfs” has to be specified for initial creation.

$ sudo lxc-clone -s -o ubuntu-base -n guest1

The documentation suggested to me that the -s is unneeded on btrfs, but it’s required — otherwise you get a subvol but not a snapshot.

root@octavius:/home/cjb# btrfs subvol list /
ID 256 gen 144 top level 5 path @
ID 257 gen 144 top level 5 path @home
ID 266 gen 143 top level 256 path var/lib/lxc/ubuntu-base/rootfs
ID 272 gen 3172 top level 256 path var/lib/lxc/guest1/rootfs

We can see that the new guest1 subvol is a Btrfs snapshot:

root@octavius:/home/cjb# btrfs subvol list -s /
ID 272 gen 3172 cgen 3171 top level 256 otime 2014-02-07 21:14:37 path var/lib/lxc/guest1/rootfs

The snapshot appears to take up no disk space at all (as you’d expect for a copy-on-write snapshot) — at least not as seen by df or btrfs filesystem df /. So we’re presumably bounded by RAM, not disk. How many of these base system snapshots could we start at once?

Comparing free before and after starting one of the snapshots with lxc-start shows only a 40MB difference. It’s true that this is a small base system running not much more than an sshd, but still — at 40MB apiece, 32GB of RAM works out to roughly 800 containers, so we could plausibly run upwards of 700 on this machine. Try doing that with VirtualBox!

So, what’s next? You might by now be wondering why I’m not using Docker, which is the hot new thing for Linux containers; especially since Docker 0.8 was just released with experimental Btrfs support. It turns out that Docker’s better at isolating a single process, like a database server (or even an sshd). Containerizing /sbin/init, which they call “machine mode”, is somewhat in conflict with Docker’s strategy and not fully supported yet. I’m still planning to try it out. I need to understand how secure LXC isolation is, too.

I’m also interested in Serf, which combines well with containers — e.g. automatically finding the container that runs a database, or (thanks to Serf’s powerful event hook system) handling horizontal scaling for web servers by simply noticing when new ones exist and adding them to a rotation.

But the first step is to work on a system to provision a new container for a new user — install their SSH key to a user account, regenerate machine host keys, and so on — so that’s what I’ll be doing next.
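
As a rough sketch of what that provisioning step might look like, here’s a hypothetical Node script. lxc-clone, dpkg-reconfigure, and lxc-start are the real tools, but the function and layout are illustrative, and it assumes a recent Node (for execFileSync) running as root:

var execFileSync = require("child_process").execFileSync;
var fs = require("fs");

function provisionContainer(username, sshPublicKey) {
  // Copy-on-write clone of the base container, as above.
  execFileSync("lxc-clone", ["-s", "-o", "ubuntu-base", "-n", username]);

  var rootfs = "/var/lib/lxc/" + username + "/rootfs";
  var sshEtc = rootfs + "/etc/ssh";

  // Remove the cloned host keys and regenerate fresh ones, so each
  // container gets a unique machine identity.
  fs.readdirSync(sshEtc)
    .filter(function (f) { return f.indexOf("ssh_host_") === 0; })
    .forEach(function (f) { fs.unlinkSync(sshEtc + "/" + f); });
  execFileSync("chroot", [rootfs, "dpkg-reconfigure", "openssh-server"]);

  // Install the user's SSH key. (A real version would also chown
  // these files to the container user and set modes 700/600.)
  var sshDir = rootfs + "/home/ubuntu/.ssh";
  fs.mkdirSync(sshDir);
  fs.writeFileSync(sshDir + "/authorized_keys", sshPublicKey + "\n");

  execFileSync("lxc-start", ["-d", "-n", username]);
}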

Fujitsu Lifebook U904 review

I got a new laptop. Linux works reasonably well on it. Here’s my research in case you’re thinking about getting one too.

I wanted my next laptop to be an ultrabook (less than 1″ thick, less than 5 lbs) with a HiDPI/Retina display. That left me looking at the Samsung Ativ Book 9 Plus, Asus Zenbook UX301-LA, Dell XPS 15 Touch, and Fujitsu Lifebook U904.

Model | Screen size, res | RAM, upgradable? | Disk, upgradable? | Weight | Price (approx. USD)
Samsung Ativ Book 9 Plus | 13.3″, 3200×1800 | 4GB, no | 128GB SSD, sort of (uses M.2) | 3.1 lbs | $1320
Asus Zenbook UX301-LA | 13.3″, 2560×1440 | 8GB, no | 2x128GB SSD in RAID 0, yes | 2.6 lbs | $1900
Dell XPS 15 Touch | 15.6″, 3200×1800 | 16GB, no | 1TB HDD + 32GB SSD, yes | 4.6 lbs | $1950
Fujitsu Lifebook U904 | 14″, 3200×1800 | 6GB, yes to 10GB | 500GB HDD + 16GB SSD, yes | 3.1 lbs | $1350

In short, I decided that 13.3″ wasn’t large enough for comfortable viewing (especially at 3200×1800!) or typing, and the Dell was too heavy and bulky, so the Lifebook was the best option for me. (I also liked that the Lifebook has Intel graphics, whereas the Dell has nVidia Optimus.)

Fujitsu Lifebook U904

Some observations about the Lifebook under Ubuntu 13.10:

  • The screen is amazing. Fedora seems to scale its UI for HiDPI but Ubuntu doesn’t — menus are tiny in Ubuntu 13.10. Be warned that the screen is very glossy.

  • Web pages are unreadably tiny by default. You can fix this by setting layout.css.devPixelsPerPx to 1.25 in Firefox, or by setting “Page Zoom” to 120% in Chrome’s Advanced Settings. (Thanks to Alexander Patrakov for pointing me at the Chrome option in the comments.)

  • I’d use the touchscreen more if swipe-to-scroll worked on web pages in Firefox. Haven’t found a way to do that.

  • It’s the first time I’ve been able to have a row of xterms, then a full-length code editor window at comfortable reading size, then an (approximately half-screen-width) web browser all on the screen at once, and it feels very productive.

  • I saw graphics corruption (glitchy icons) on Fedora, both F20 and Rawhide. Ubuntu is fine.

  • The kernel (both Fedora and Ubuntu) always boots with minimum brightness, so you have to bring it back up each time you reboot.

  • Sometimes the mouse pointer doesn’t come back after suspend/resume. The fastest workaround I’ve found is just switching VT away and back — ctrl-alt-f1 followed by ctrl-alt-f7.

  • Sometimes audio doesn’t come back after resume. This doesn’t happen very often, but I haven’t found a way to bring it back other than rebooting.

  • Touchscreen, SD slot, USB3 work great.

  • Flash video makes the fan kick in, and it’s loud enough to be annoying. HTML5 video is fine. The fan’s usually very quiet.

  • While the RAM and disk are user-upgradable, it does require fully opening up the machine — it’s not as simple as removing a few screws on the bottom panel. I haven’t done it myself yet.

  • The onboard HDMI port only supports up to 1920×1080 on external monitors (this is pretty common). There’s an optional port replicator that has a DisplayPort port for higher res displays. If you use external monitors a lot, you might hold out for a laptop with a mini-DisplayPort built in.

  • I really miss my ThinkPad’s trackpoint; I’m going to try a tiling window manager.

Eight years and eight percent: Always giving more

(This is a joint blog post with Madeleine.)

Our tradition continues: to celebrate our eighth year of marriage Madeleine and I are giving 8% of our joint pretax income. (Each year we give 1% more.) This giving is made to organizations which we believe have the most concrete short term “estimated value” for helping others.

As people look forward to making resolutions for the coming year, we hope our own example helps inspire others to give – just as others have inspired us by giving more, despite financial pressures. Those who go ahead of us have blazed a trail we happily follow.

“Path Squiggles” by Dominic Alves

As in previous years, we are guided by the research performed by GiveWell. Efficiency in doing good should matter, and for this reason our money will be going to help the developing world. Money can do more immediate good for the global poor – each dollar can accomplish more – than it can do to ameliorate the lives of those in first-world poverty.

Almost all of our giving this year will go to GiveDirectly. GiveDirectly aims to distribute 90% of the money it receives directly to poor individuals in the developing world. Their methods have been developed in Kenya, where the M-Pesa mobile-phone-based money transfer system facilitates the transfer of cash. GiveDirectly had a great year, with high profile and supportive articles in the New York Times, NPR’s This American Life podcast, and even The Economist. Even better, these articles often introduce one of the central ideas behind GiveWell (which has recommended GiveDirectly as one of three top charities) – that we can try to target donations to do the most good for the most people, and that acknowledging this involves a dramatic rethinking of which charities we choose to support.

“Mobile Phone with Money in Kenya” by Erik (HASH) Hersman

There are many ways to make our lives meaningful. We have been fortunate to grow our family with our first child: a concrete meaning and joy, though a local one. We’ve also been especially fortunate to have had employment (past and present) where our skills are used to improve the world. A third path to meaning – one we hope others will join us in celebrating – is to give, to give more, and to give wisely.

May you find the happiness of giving in the new year!

Technical talks should be recorded

I’ve picked up an interest in JavaScript and HTML5 this year, and have gone to a bunch of great technical talks in Boston. I brought a camera with me and recorded some of them, so you can see them too if you like. Here they are:

Rick Waldron – The Future of JavaScript

Mike Pennisi – Stress Testing Realtime Node.js Apps

Paul Irish – The Mobile Web Is In Deep Trouble

Daniel Rinehart – Debugging Node.js Applications

Ian Johnson – Prototyping data visualizations in d3.js

Kenneth Reitz – Heroku 101

I think these are world-class talks. But if I hadn’t brought my little camera with me and recorded them, they would have been lost. No-one else offered to record them, even though they were popular — the Paul Irish talk had 110 people signed up to attend, and more than that number again waitlisted, unable to go because they wouldn’t fit in the room. So there were more people in Boston who wanted to see the talk and couldn’t than people who did, even before we start counting the rest of the world’s interest in technical talks.

I’m happy that I’m able to help disseminate knowledge from Boston, which has an abundance of incredibly smart people living here or visiting, to wherever in the world you’re reading from now. But I’m also sad, because there are far more talks that I don’t go to here, and I expect most of those aren’t being recorded.

We’re technologists, right? So this should be easy. It’s not like I went to video camera school:

  • The equipment I’m using (Panasonic Lumix G2 camera and Lumix 20mm f/1.7 lens) costs under USD $800. Maybe it could be cheaper; maybe a recent cellphone (HTC One or Galaxy S4?) would be adequate.
  • I use a $20 tripod which is half broken.
  • I don’t use an external audio recorder (just the camera’s microphone) so the audio is noisier than it could be.
  • My camera’s sensor is small so it doesn’t have great low-light performance, and it records 720p instead of 1080p.
  • Sometimes the refresh rate/frequency of the projector is out of sync with the camera and there are strobing colors going across the screen in the final video. I don’t think I can do anything about this on the camera’s side?
  • I don’t do any editing because I don’t have time; I just upload the raw video file to YouTube and use YouTube’s “crop” feature to trim the start and end, that’s it.

I’d really like to know what the right answer is here. Am I overestimating how important it is to record these, and how privileged I am to be somewhere where there’s an interesting talk happening almost every day? Is owning a device that can record HD video for around 90 mins rare, even amongst well-paid developers and designers? If the presenter just recorded a screencast of their laptop with audio from its microphone, is that good enough or is that too boring for a full-length talk?

Might part of the problem be that people don’t know how to find videos of technical talks (I don’t know how anyone would find these unless they were randomly searching YouTube) so there isn’t as much demand as there should be — is there a popular website for announcing new recordings of tech talks somewhere? Maybe I just need to write up a document that describes how to record talks with a minimum of hassle and make sure people see it? Do we need to make a way for someone to signify their interest in having an upcoming talk be recorded, so that a team of volunteer videographers could offer to help with that?

WebRTC without a signaling server

WebRTC is incredibly exciting, and is starting to see significant deployment: it’s available by default in Chrome and Firefox releases now. Most people think of WebRTC as an API for video calling, but there’s a general purpose method for directly sharing data between web browsers (even when they’re behind NAT) in there if you look harder. For example:

  • P does peer-to-peer mesh networking in JavaScript.
  • TowTruck allows you to add collaboration features (collaborative text editing, text chat, voice chat) to websites.
  • PeerCDN forms a network from a site’s visitors, and uses it to offload serving up static content away from the web server and on to the networked peers.
  • The Tor Project is interested in using WebRTC to enable volunteers with JavaScript-enabled web browsers to become on-ramps onto the Tor network for users under censorship, as part of the Flash Proxies project. The idea is that censoring organizations may block the public Tor relays directly, but they can’t easily block every random web browser who might route traffic for those relays over WebRTC, especially if each web browser’s proxy is short-lived.

All of this activity means that we might finally be close to solving — amongst other important world problems — the scourge of xkcd.com/949:


xkcd: File Transfer, used under CC-BY-NC 2.5.

I wanted to experiment with WebRTC and understand its datachannels better, and I also felt like the existing code examples I’ve seen are unsatisfying in a specific way: it’s a peer-to-peer protocol, but the first thing you do (for example, on sites like conversat.io) is have everyone go to the same web server to find each other (this is called “signaling” in WebRTC) and share connection information.

If we’re going to have a peer-to-peer protocol, can’t we use it without all visiting the same centralized website first? Could we instead make a WebRTC app that just runs out of a file:/// path on your local disk, even if it means you have to manually tell the person you’re trying to talk to how to connect to you?

It turns out that we can: I’ve created a serverless-webrtc project on GitHub that decouples the “signaling server” exchange of connection information from the WebRTC code itself. To run the app:

  • download Firefox Nightly.
  • git clone git://github.com/cjb/serverless-webrtc.git
  • load file:///path/to/serverless-webrtc/serverless-webrtc.html

You’ll be asked whether you want to create or join a channel, and then you’re prompted to manually send the first party’s “WebRTC offer” to the second party (for example, over an instant message chat) and then to do the same thing with the second party’s “WebRTC answer” reply back. Once you’ve done that, the app provides text chat and file transfer between peers, all without any web server. (A STUN server is still used to find out your external IP for NAT-busting.)
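
In case it helps to see the shape of that exchange, here’s a minimal sketch of the offerer’s side. It uses the modern standard RTCPeerConnection API for clarity, whereas the project itself targets Firefox Nightly’s prefixed mozRTCPeerConnection, so treat it as illustrative:

var pc = new RTCPeerConnection({
  iceServers: [{ urls: "stun:stun.l.google.com:19302" }] // STUN only, no signaling server
});

var channel = pc.createDataChannel("chat"); // reliable and ordered by default
channel.onmessage = function (e) { console.log("peer says:", e.data); };

pc.onicecandidate = function (e) {
  if (e.candidate === null) {
    // ICE gathering has finished: this JSON blob is the "WebRTC offer"
    // you hand to the other party over IM, e-mail, or carrier pigeon.
    console.log(JSON.stringify(pc.localDescription));
  }
};

pc.createOffer().then(function (offer) { return pc.setLocalDescription(offer); });

// When the "WebRTC answer" comes back the same way, paste it in:
function gotAnswer(answerJSON) {
  pc.setRemoteDescription(new RTCSessionDescription(JSON.parse(answerJSON)));
}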

There are open issues that I’d be particularly happy to receive pull requests for:

#1: The code doesn’t work on Chrome yet. Chrome is behind Firefox as far as DataChannels are concerned — Chrome doesn’t yet have support for binary transfers, or for “reliable” (TCP, not UDP) channels (Firefox does). These are both important for file transfers.

#2: Large file transfers often fail, or even hang the browser, but small transfers seem to work every time. I’m not sure whose code is at fault yet.

#3: File transfers should have a progress bar.
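
For #2 and #3 in particular, chunking seems like the way to go. Here’s a rough sketch of chunked sending with progress reporting; the chunk size and the lack of message framing are illustrative choices, not what the project currently does:

var CHUNK_SIZE = 16 * 1024; // 16KB per DataChannel message

function sendFile(channel, file, onProgress) {
  var offset = 0;
  var reader = new FileReader();

  reader.onload = function () {
    // A real version would also watch channel.bufferedAmount and back
    // off, instead of queueing chunks as fast as it can read them.
    channel.send(reader.result); // one ArrayBuffer chunk
    offset += CHUNK_SIZE;
    onProgress(Math.min(offset / file.size, 1)); // drives a progress bar
    if (offset < file.size) readNext();
  };

  function readNext() {
    reader.readAsArrayBuffer(file.slice(offset, offset + CHUNK_SIZE));
  }
  readNext();
}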

Thanks for reading this far! Here’s to the shared promise of actually being able to use the Internet to directly share files with each other some time soon.

Camera review: Lomography Belair X 6-12

The Belair 6-12 is an interesting new medium format film camera from Lomography. Here’s a mini-review, mixed in with photos from a roll of Velvia 50 that I shot with the camera at La Jolla Shores.

The camera looks amazing on paper — auto-exposure medium format cameras with interchangeable lenses are usually far more expensive, and the closest thing to the 6×12 panoramic format I can think of is the 35mm Hasselblad XPan II.

It’s much harder to get good results from the Belair than the XPan, though. The problems I’ve seen, starting with the most severe:

  • “Infinity focus” isn’t infinity sometimes. This could be a lens calibration problem, or the bellows not extending far enough, or the film not being held with enough tension to stay flat against the film plane. This seems to make the camera useless for landscapes, which is what I’d want to be shooting with a panoramic medium format camera.
  • When you take a shot, the shutter lever is on the front board that the lens is attached to via the bellows, rather than on the same side of the bellows as the camera’s body. This means that your sturdy tripod is keeping the camera’s body still while your finger is moving the lens board around and making your image blurry. This is probably the largest design flaw in the camera, and I thought the lack of reliable infinity was already pretty terrible. So, you actually shoot by putting the camera on the tripod, putting your outstretched index finger along the base of the front lens board to support it, and then using your thumb to activate the shutter extremely gently.
  • The viewfinder isn’t coupled to the lens; you focus blindly by setting the distance between you and the subject and relying on depth of field. This makes the focus problems much worse, because you can’t even tell whether the lens is failing to reach infinity.
  • The camera seems prone to “fat rolls”, where the spool doesn’t stay tight. This sometimes results in light leaks, and is what makes me think that the infinity problem might be about the spool not being flat against the film plane.
  • The autoexposure is inscrutable (you can’t see what shutter speed it’s chosen) and sometimes makes bad decisions.
  • There are two plastic lenses included, a 58mm (21mm at 35mm equivalent on 6×12) ultra wide angle and a 90mm (32mm at 35mm equivalent on 6×12) wide angle. The lens quality is not good. Each lens can be used at f/8 or f/16, with results that I’d describe like this:

58mm f/8 | Extremely soft everywhere, with strong vignetting
58mm f/16 | Still pretty soft
90mm f/8 | Soft, but somewhat usable
90mm f/16 | Usable

You can see that the interchangeable lenses don’t add much if you’re interested in sharpness; you’ll want to stay at 90mm f/16 almost all of the time. (I should point out that Lomography is selling glass lenses for the camera as an upgrade — if you’re willing to spend more money on those, they’ll probably have better performance.)

With all that griping aside, how is it to shoot? It’s pretty fun. I like these shots, although I’ve had other rolls come back with unusably poor focus or exposure. In the shot below, I actually like that the two sides of the image (the seagulls and the cliff) are very soft; they make the photo look more painterly and surreal, which works here.

I can’t recommend buying the Belair, at least at its current price. I took advantage of a 40% preorder discount, and I think some of my problems with focusing might be caused by getting an early production camera that way. While I’m excited to see Lomography trying hard to innovate and keep film alive, the Belair seems to be more of a toy camera than a serious one.

Children in Peru write their own history on Wikipedia

Video link:

Over a million children in Peru have access to an offline Spanish Wikipedia snapshot on their OLPC laptop. The Wikimedia Foundation is e-mailing its supporters a link to a trailer of a documentary called Web that shows the effects of these laptops with Wikipedia on children in the remote Amazonas town of Palestina, Peru. I was involved in creating the Wikipedia snapshot, so it’s very rewarding to see the video.

I especially love that the film shows the children editing Wikipedia as well as browsing it, so that we’re involving new parts of the world in the Internet’s global conversation instead of merely giving our own knowledge to them.

Four of us (three OLPC volunteers and I) worked on this offline Wikipedia snapshot for less than a month in 2008, through ten releases and 190 Git commits, and then shipped it to Peru. It wasn’t something anyone asked us to work on — it just seemed like a good idea, and it remains one of the most important things I’ve worked on in my life. It’s a reminder to always be looking and ready for unexpected opportunities to make a large difference.

The Future of JavaScript

I went to Rick Waldron’s talk at Bocoup on The Future of JavaScript (ES6), and made a video recording. Here it is:

Takeaways from the talk for me:

  • JavaScript will be getting many new features that make it more attractive to write in — e.g. block scoping, weakmaps, sets, rest and spread parameters, default parameters, fat arrow syntax, and many other uses of syntactic sugar that I recognize from CoffeeScript — but it’ll take a few years before we can use them in client-side code reliably. (There’s a small syntax sampler after this list.)
  • But if you’re writing for Node, you can start using them now with node --harmony.
  • Traceur can compile ES6 to ES3.
  • Continuum is a full ES6 VM written in ES3.
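
To make those concrete, here’s a small syntax sampler of the features mentioned above. Engine support varied a lot at the time of the talk, so treat it as a reference for the syntax rather than something every runtime will accept:

// Block scoping: x only exists inside this block.
{
  let x = 42;
}

// Default, rest, and spread parameters:
function greet(name = "world", ...rest) {
  return "hello " + name + (rest.length ? " (+" + rest.length + " more)" : "");
}
greet();                      // "hello world"
greet("Timo", "Alex", "cjb"); // "hello Timo (+2 more)"
var args = ["Timo", "Alex"];
greet(...args);               // spread an array into arguments

// Fat arrow syntax, which also binds `this` lexically:
var double = function (n) { return n * 2; };
var doubleArrow = (n) => n * 2;

// Sets de-duplicate; WeakMaps key off objects without preventing GC:
var seen = new Set([1, 1, 2, 3]);  // contains 1, 2, 3
var meta = new WeakMap();
var obj = {};
meta.set(obj, { visited: true });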