Experimenting with Panamax

Disclosure: This blog post is part of the Panamax Template Contest.

In my blog post about the Dell C6100 server I’ve been using, I mentioned that I run a full LXC userland for each application I deploy, and that I’d like to try out Docker but that this setup conflicts with Docker’s philosophy: a Docker container runs a single process, which makes Docker awkward to use for anything requiring interaction between processes. Here’s an example: this blog runs WordPress with MySQL. With LXC, I create a fresh Ubuntu container for the blog, run apt-get install wordpress, and I’m up and running. Trying to do the same with Docker would leave me with an “orchestration” problem: if I’m supposed to have a separate web server and database server, how will they figure out how to talk to each other?

If the two Docker services run on the same host, you can use docker run --link, which runs one service under a given name and then makes it available to any service it’s linked to. For example, I could name a postgres container db and then run something like docker run --name web --link db:db wordpress. The wordpress container receives environment variables giving connection information for the database host, so as long as you can modify your application to read environment variables when deciding which database host to connect to, you’re all set. (If the two Docker services run on separate hosts, you have an “ambassador” problem to figure out.)
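
Concretely, the pairing looks something like this (a sketch; the variable names follow Docker’s link-alias/exposed-port pattern, and the address shown is just an example):

$ docker run -d --name db postgres
$ docker run -d --name web --link db:db wordpress
# inside "web", the link shows up as environment variables, e.g.:
#   DB_PORT_5432_TCP_ADDR=172.17.0.5
#   DB_PORT_5432_TCP_PORT=5432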

All of which is a long-winded way to say that Panamax is a new piece of open source software that attempts to ameliorate the pain of solving orchestration problems like this one, and I decided to try it out. It’s a web service that you run locally, and it promises a drag-and-drop interface for building out complex multi-tier Docker apps. Here’s what it looks like when pairing a postgres database with a web server running a Django app, WagtailCMS:

The technical setup of Panamax is interesting. It’s distributed as a CoreOS image which you run inside Vagrant and VirtualBox, and your containers are then launched from that CoreOS image. This means that Panamax has no system dependencies other than Vagrant and VirtualBox, so it’s easily usable on Windows, OS X, or any other environment that can’t run Docker directly.

Looking through the templates already created, I noticed an example of combining Rails and Postgres. I like Django, so I decided to give Django and Postgres a try. I found mbentley’s Ubuntu + nginx + uwsgi + Django Docker image on the Docker Hub. Compared to the Rails and Postgres template on Panamax, the Django container lacks database support, but does have support for overlaying your own app into the container, which means you can live-edit your app.

I decided to see if I could combine the best parts of both templates to come up with a Panamax template for hosting arbitrary Django apps, one that supports an external database and offers live-editing. I ended up creating a new Docker image, with the unwieldy name of cjbprime/ubuntu-django-uwsgi-nginx-live. This image is based on mbentley’s, but supports having a Django app overlaid into the container, and will try to install the app’s requirements. You can also link this image to a database server, and syncdb/migrate will be run when the container starts to set things up. If you need to create an admin user, you can do that inside a docker_run.sh file in your app directory.
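
Outside of Panamax, a rough invocation would look something like this (a sketch, not canonical usage; the host path and port mapping here are just my choices):

$ docker run -d --name db postgres
$ docker run -d --name web --link db:db \
    -v /home/cjb/djangoapp:/opt/django/app -p 8123:80 \
    cjbprime/ubuntu-django-uwsgi-nginx-live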

After combining this new Docker image with a Postgres container, I’m very happy with how my django-with-postgres template turned out – I’m able to take an existing Django app, make minor changes using a text editor on my local machine to use environment variables for the database connection, start up the Panamax template, and watch as a database is created (if necessary), dependencies are installed, migrations are run, an admin user is created (if necessary), and the app is launched.  All without using a terminal window at any point in the process.

To show a concrete example, I also made a template that bundles the Wagtail Django CMS demo. It’s equivalent to just using my django-with-postgres container with the wagtaildemo code passed through to the live-editing overlay image (in /opt/django/app), and it brings up wagtaildemo with a postgres DB in a separate container. Here’s what that looks like:

Now that I’ve explained where I ended up, I should talk about how Panamax helped.  Panamax introduced me to Docker concepts (linking between containers, overlaying images) that I hadn’t used before because they seemed too painful, and helped me create something cool that I wouldn’t otherwise have attempted.  There were some frustrations, though.  First, the small stuff:

Underscores in container names

This one should have been in big bold letters at the top of the release notes, I think.  Check this out: unit names with _{a-f}{a-f} in them cause dbus to crash. This is amusing in retrospect, but was pretty inscrutable to debug, and perhaps made worse by the Panamax design: there’s a separate frontend web service and backend API, and when the backend API throws an error, it seems that the web interface doesn’t have access to any more detail on what went wrong. I’m lucky that someone on IRC volunteered the solution straight away.

The CoreOS Journal box occasionally stays black

Doing Docker development depends heavily on being able to see the logs of the running containers to work out why they aren’t coming up as you thought they would.  In Docker-land this is achieved with docker logs -f <cid>, but Panamax brings the logs in to the web interface: remember, the goal is to avoid having to look at the terminal at all.  But sometimes it doesn’t work.  There’s a panamax ssh command to ssh into the CoreOS host and run docker logs there, but that’s breaking the “fourth wall” of Panamax.
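
For the record, the fallback looks like:

$ panamax ssh                 # drops you onto the CoreOS host
$ docker ps                   # find the container ID
$ docker logs -f <cid>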

Progress bar when pulling Docker images

A minor change: it’d be great to be able to see progress when Panamax is pulling down a Docker image. There’s no indicator of progress, which made me think that something had hung or failed. Further, systemd complained about the app failing to start, when it just needed more time for the docker pull to complete.

Out of memory when starting a container

The CoreOS host allocates 1GB RAM for itself: that’s for the Panamax webapp (written in Rails), its API backend, and any containers you write and launch.  I had to increase this to 2GB while developing, by modifying ~/.panamax/.env:

export PMX_VM_MEMORY=2048

Sharing images between the local host and the container

I mentioned that Panamax runs everything from a CoreOS host, and that this drastically reduces the install dependencies.  There’s a significant downside to this design: I want my local machine to share a filesystem and networking with my Docker container, but now there’s a CoreOS virtual machine in the way, so I can’t directly connect from my laptop to the container running Django without hopping through the VM somehow. I want to connect to it for two different reasons:

  1. To have a direct TCP connection from my laptop to the database server, so that I can make database changes if necessary.
  2. To share a filesystem with a container so that I can test my changes live.

Panamax makes the first type of connection reasonably easy. There’s a VirtualBox command for doing port forwarding from the host through to the guest – the guest in this case is the CoreOS host. So we end up doing two stages of port forwarding: Docker forwards port 80 from the Django app out to port 8123 on the CoreOS host, and then VirtualBox forwards port 8123 on my laptop to port 8123 on the CoreOS host. Here’s the command to make it work:

VBoxManage controlvm panamax-vm natpf1 rule1,tcp,,8123,,8123
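
With both forwards in place, a quick sanity check from the laptop should reach the Django container:

$ curl -I http://localhost:8123/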

The filesystem sharing is much trickier – we need to share a consistent view of a single directory between three hosts: again, the laptop, the CoreOS host, and the Docker app. Vagrant has a solution to this: it can share a directory on my laptop into the guest (the CoreOS host) over NFS. That works like this, modifying ~/.vagrant.d/boxes/panamax-coreos-box-367/0/virtualbox/Vagrantfile:

  config.vm.network "private_network", ip: "192.168.50.4"
  config.vm.synced_folder "/home/cjb/djangoapp", "/home/core/django",
    id: "django", :nfs => true, :mount_options => ['nolock,vers=3,udp']

So, we tell Panamax to share /opt/django/app with the CoreOS host as /home/core/django, and then we tell Vagrant to share /home/cjb/djangoapp on my laptop with the CoreOS host as /home/core/django over NFS. After `apt-get install nfs-kernel-server`, trying this leads to a weird error:

exportfs: /home/cjb/djangoapp does not support NFS export

This turns out to be because I’m running ecryptfs for filesystem encryption on my Ubuntu laptop, and nfs-kernel-server can’t export the encrypted FS. To work around it, I mounted a tmpfs for my Django app and used that instead. As far as I know, OS X and Windows don’t have this problem.
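
For anyone else who hits this, the workaround amounts to the following (size and path are whatever suits your app; note that a tmpfs is wiped on reboot, so keep the app in git):

$ sudo mount -t tmpfs -o size=512m tmpfs /home/cjb/djangoapp
$ sudo exportfs -ra           # re-export now that the directory can be NFS-exported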

Summary

Panamax taught me a lot about Docker, and caused me to publish my first two images to the Docker registry, which is more than I expected to gain from trying it out. I’m not sure I’m the target audience – I don’t think I’d want to run production Docker apps under it on a headless server (at least until it’s more stable), which suggests that its main use is as an easy way to experiment with the development of containerized systems. But the friction introduced by the extra CoreOS host seems too great for it to be an awesome development platform for me. I think it’s a solvable problem – if the team can find a way to make the network port forwarding and the filesystem NFS sharing automatic rather than manual, and to work with ecryptfs on Ubuntu, it would make a massive difference.

I am impressed with the newfound ability to help someone launch a database-backed Django app without using any terminal commands, even if they’re on Windows and have no kind of dev environment, and would consider recommending Panamax for someone in that situation. Ultimately, maybe what I’ll get out of Panamax is a demystification of Docker’s orchestration concepts. That’s still a pretty useful experience to have.

Serverless WebRTC, continued

Around a year ago, in WebRTC without a signaling server, I presented a simple app that can start a chat session with another browser without using a local web server (i.e. you just browse to file:///), and without using a signaling server (instead of both going to the same web page to share “offers”, you share them manually, perhaps via IM).

It’s been a busy year for WebRTC! When I released serverless-webrtc, Chrome didn’t support datachannels yet, so the code only worked on Firefox. Now it works in stable releases of both browsers, and is interoperable between the two, for both reliable (TCP-like) and unreliable (UDP-like) transfers. And I’ve just added Node to the mix (so you can do Node—Node / Node—Chrome / Node—Firefox) as well, with the first release of the serverless-webrtc NPM package. Here’s how to try it out:

$ git clone git://github.com/cjb/serverless-webrtc
$ cd serverless-webrtc
$ npm install
$ firefox serverless-webrtc.html &
$ node serverless-webrtc.js
<paste firefox's offer into node, hit return>
<paste node's answer into firefox, click ok>
<you're connected!>

And here’s a screenshot of what that looks like:

I’m able to do this thanks to the wrtc NPM module, which binds the WebRTC Native Code Package (written in C++) to Node, and then exposes a JS API on top of it that looks like the browser’s WebRTC JS API. It’s really impressive work, and the maintainers have been super-friendly.

Next I’d like to unwrap the JS from the node client and make a pure C++ version, because the Tor developers would like “to have two C++ programs that are capable of chatting with each other, after being given an offer and answer manually”, to help investigate WebRTC as a method of relaying Tor traffic.

Finally, a link that isn’t related to this project but is too cool not to mention – Feross Aboukhadijeh has a WebTorrent project to port a full BitTorrent client to the browser, also using WebRTC in a serverless way (with trackerless torrents, and peer-to-peer node introductions).

What would it mean if the next Wikipedia or GitHub (see Yurii Rashkovskii’s GitChain project!) didn’t have to spend tens of millions of dollars each year for servers and bandwidth, and could rely on peer-to-peer interaction? I’d love to find out, and I have a feeling WebRTC is going to show us.

A Robot for Timo

Here at FlightCar Engineering we’re a very small team, and one of us — Timo Zimmermann — works remotely from Heidelberg, Germany. Timo’s an expert in the web framework we use, Django, and is awesome to work with: mixing together good humour, an enjoyment of teaching and learning, and deep technical expertise.

One day a link to Double Robotics got passed around our internal chat room — it’s an unexpected use of Segway technology, putting an iPad on top of a mobile robot and letting a remote participant drive the robot around while video chatting. We keep a video chat with Timo open while at work, so we were pretty interested in this.

There wouldn’t be much point in FlightCar buying one of these robots; our local developers fit around a single desk. Still, it would be useful to be able to video chat with Timo and have him be able to choose which of us to “look” at, as well as being able to join in with office conversations in general. Could we come up with something much simpler that still has most of the advantages of the Double robot in our situation?

I have a little electronics experience (from my time at One Laptop Per Child, as well as a previous fun personal project) and recently received an RFduino from backing its Kickstarter. Alex Fringes and I decided to go ahead and build a basic, stable/unmoving telepresence device as a present for Timo. Here’s what we did:

Parts list

$140 Bescor MP-101 pan head with power supply and remote control
$68 RFduino “teaser kit” + prototyping shield + single AAA battery shield
$29 Rosco 8″ Snake Arm
$13 Rotolight Male 1/4″ to 1/4″ adapter
$15 Grifiti Nootle iPad mini Tripod Mount

Total: $265 USD

I’m not counting the cost of the iPad (the Double Robotics robot costs $2500 and doesn’t include an iPad either), or the tripod we’re putting the Bescor pan head on top of (I had a monopod already, and basic tripods are very cheap), but everything else we used is listed above. Here’s the final result:

How it works

The pan head is easy to control programmatically. It has a 7-pin port on the back, and four of the pins correspond directly to up/down/left/right — to move in a direction, you just apply voltage to that pin until you want to stop. This is a perfect match for an Arduino-style microcontroller; Arduino is a hobbyist electronics platform that makes it easy to cheaply prototype new hardware creations, by giving you I/O pins you can attach wires to and a simple programming interface. Local electronics hacker and Tiny Museum co-founder Steve Pomeroy helped out by determining the pinout and soldering between the remote control port’s pins and our RFduino’s prototyping board, and Alex set to work writing the code that would run on the RFduino and iPads. We ended up with an architecture like this:

So, to expand on the diagram: Timo moves his iPhone; the orientation is sensed and passed on via the nodejs bridge (which exists just to proxy through NAT) to our local iPad, which converts it into a single letter “r”, “l”, “u”, “d”, or “s” (for stop); the RFduino then reads a character at a time over Bluetooth Low Energy and sends a voltage pulse to the appropriate pin. We chose iPhone orientation sensing as the control mechanism at Timo’s end, but you could also use explicit direction buttons, or even something like face detection.

We decided to hide the fact that we were building this from Timo and introduced it to him as a surprise, coincidentally on Valentine’s Day. We love you Timo!

Finally, we’ve put all of the code we wrote — for the RFduino, the nodejs bridge, and the local and remote iOS apps — under an open source license on GitHub, so we’ve shared everything we know about how to build these devices. We’d be very happy if other people can help improve the features we’ve started on and find a cheaper way to build more of these!

(By the way, we’re hiring for a Lead Front End Engineer in Cambridge, MA at the moment!)

More technical talks

Since my blog post arguing that Technical talks should be recorded, I’ve continued to record talks – here are the new recordings since that post, mostly from the Django Boston meetup group:

My award for “best anecdote” goes to Adam Marcus’s talk, which taught me that if you ask 100 Mechanical Turk workers to toss a coin and tell you whether it’s heads or tails, you’ll get approximately 70 heads. Consistently. This either means that everyone’s tossing biased/unfair coins, or (and this is the right answer) that you can’t trust the average Turk worker to actually perform a task that takes a couple of seconds. (Adam Marcus goes on to describe a hierarchy where you start out giving deterministic tasks to multiple workers as cross-checks against each other, and then over time you build relationships with and promote individual workers whose prior output has been proven trustworthy.)

Dell C6100 XS23-SB server

Last week’s laptop review reminds me that I should also write about a new server purchase. (I know, everyone’s moving to cloud computing, and here I am buying a rackmount server to colocate…)

Kelly Sommers has one of the best dev blogs out there, and she recently wrote about a new server she’s installing at home. It turns out that Dell made some dramatically useful servers around four years ago — the server is a slim rackmount size (2U) yet contains four independent nodes, each of which can carry dual Xeon processors, eight RAM banks, and three 3.5″ disks. Dell didn’t sell these via standard markets: they went to large enterprises and governments, and are now off-lease and cheaply available on eBay. They’re called “Dell C6100” servers, and there are two models that are easy to find: XS23-SB, which uses older (LGA771) CPUs and DDR2 RAM; and XS23-TY3, which uses newer LGA1366 CPUs and DDR3. Here’s a Serve the Home article about the two models. (There are also new C6100 models available from Dell directly, but they’re different.)

I got one of these — each of the four nodes has two quad-core Xeon L5420s @ 2.5GHz and 24GB RAM, for a total of 8 CPUs and 96GB RAM for $750 USD. I’ve moved the RAM around a bit to end up with:

CPU        RAM   Disk
2 * L5420  32GB  128GB SSD (btrfs), 1TB (btrfs)
2 * L5420  24GB  3 * 1TB (raid5, ext4)
2 * L5420  24GB  3 * 750GB (raid5, ext4)
2 * L5420  16GB  2 * 1TB (raid1, ext4)

While I think this is a great deal, there are some downsides. These machines were created outside of the standard Dell procedures, and there aren’t any BIOS updates or support documentation available (perhaps Coreboot could help with that?). This is mainly annoying because the BIOS on my XS23-SB (version 1.0.9) is extremely minimal, and there are compatibility issues with some of the disks I’ve tried. A Samsung 840 EVO 128GB SSD is working fine, but my older OCZ Vertex 2 does not, throwing “ata1: lost interrupt” to every command. The 1TB disks I’ve tried (WD Blue, Seagate Barracuda) all work, but the 3TB disk I tried (WD Green) wouldn’t transfer at more than 2MB/sec, even though the same disk does 100MB/sec transfers over USB3, so I have to suspect the SATA controller — it also detected the disk as having 512-byte logical sectors instead of 4k sectors. Kelly says that 2TB disks work for her; perhaps we’re limited to 2TB per drive bay by this problem.

So what am I going to use the machine for? I’ve been running a server (void.printf.net) for ten years now, hosting a few services (like tinderbox.x.org, openetherpad.org and a Tor exit node) for myself and friends. But it’s a Xen VM on an old machine with a small disk (100GB), so the first thing I’ll do is give that machine an upgrade.

While I’m upgrading the hardware, what about the software? Some new technologies have come about since I gave out accounts to friends by just running “adduser”, and I’m going to try using some of them: for starters, LXC and Btrfs.

LXC allows you to “containerize” a process, isolating it from its host environment. When that process is /sbin/init, you’ve just containerized an entire distribution. Not having to provide an entirely separate disk image or RAM reservation for each “virtual host” saves on resources and overhead compared with full virtualization from KVM, VirtualBox or Xen. And Btrfs allows for copy-on-write snapshots, which avoid duplicating data shared between multiple snapshots. So here’s what I did:

$ sudo lxc-create -B btrfs -n ubuntu-base -t ubuntu

The “-B btrfs” has to be specified for initial creation.

$ sudo lxc-clone -s -o ubuntu-base -n guest1

The documentation suggested to me that the -s is unneeded on btrfs, but it’s required — otherwise you get a subvol but not a snapshot.

root@octavius:/home/cjb# btrfs subvol list /
ID 256 gen 144 top level 5 path @
ID 257 gen 144 top level 5 path @home
ID 266 gen 143 top level 256 path var/lib/lxc/ubuntu-base/rootfs
ID 272 gen 3172 top level 256 path var/lib/lxc/guest1/rootfs

We can see that the new guest1 subvol is a Btrfs snapshot:

root@octavius:/home/cjb# btrfs subvol list -s /
ID 272 gen 3172 cgen 3171 top level 256 otime 2014-02-07 21:14:37 path var/lib/lxc/guest1/rootfs

The snapshot appears to take up no disk space at all (as you’d expect for a copy-on-write snapshot) — at least not as seen by df or btrfs filesystem df /. So we’re presumably bounded by RAM, not disk. How many of these base system snapshots could we start at once?

Comparing free before and after starting one of the snapshots with lxc-start shows only a 40MB difference. It’s true that this is a small base system running not much more than an sshd, but still — that suggests we could run upwards of 700 containers on the 32GB machine. Try doing that with VirtualBox!
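
For anyone who wants to repeat the comparison, it’s as simple as:

$ free -m                     # note memory used
$ sudo lxc-start -n guest1 -d
$ free -m                     # roughly 40MB more in use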

So, what’s next? You might by now be wondering why I’m not using Docker, which is the hot new thing for Linux containers; especially since Docker 0.8 was just released with experimental Btrfs support. It turns out that Docker’s better at isolating a single process, like a database server (or even an sshd). Containerizing /sbin/init, which they call “machine mode”, is somewhat in conflict with Docker’s strategy and not fully supported yet. I’m still planning to try it out. I need to understand how secure LXC isolation is, too.

I’m also interested in Serf, which combines well with containers — e.g. automatically finding the container that runs a database, or (thanks to Serf’s powerful event hook system) handling horizontal scaling for web servers by simply noticing when new ones exist and adding them to a rotation.

But the first step is to work on a system to provision a new container for a new user — install their SSH key to a user account, regenerate machine host keys, and so on — so that’s what I’ll be doing next.
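
In outline it’ll probably look something like this (an untested sketch; it assumes Ubuntu’s openssh-server regenerates missing host keys via dpkg-reconfigure, and the user name is just an example):

$ sudo lxc-clone -s -o ubuntu-base -n alice
$ ROOTFS=/var/lib/lxc/alice/rootfs
$ sudo chroot $ROOTFS adduser --disabled-password --gecos "" alice
$ sudo mkdir -p $ROOTFS/home/alice/.ssh
$ sudo cp alice.pub $ROOTFS/home/alice/.ssh/authorized_keys
$ sudo rm -f $ROOTFS/etc/ssh/ssh_host_*                 # drop the cloned host keys
$ sudo chroot $ROOTFS dpkg-reconfigure openssh-server   # generate fresh ones
$ sudo lxc-start -n alice -d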

Fujitsu Lifebook U904 review

I got a new laptop. Linux works reasonably well on it. Here’s my research in case you’re thinking about getting one too.

I wanted my next laptop to be an ultrabook (less than 1″ thick, less than 5 lbs) with a HiDPI/Retina display. That left me looking at the Samsung Ativ Book 9 Plus, Asus Zenbook UX301-LA, Dell XPS 15 Touch, and Fujitsu Lifebook U904.

Model                    | Screen size, res | RAM, upgradable? | Disk, upgradable?             | Weight  | Price (approx. USD)
Samsung Ativ Book 9 Plus | 13.3″, 3200×1800 | 4GB, no          | 128GB SSD, sort of (uses M.2) | 3.1 lbs | $1320
Asus Zenbook UX301-LA    | 13.3″, 2560×1440 | 8GB, no          | 2x128GB SSD in RAID 0, yes    | 2.6 lbs | $1900
Dell XPS 15 Touch        | 15.6″, 3200×1800 | 16GB, no         | 1TB HDD + 32GB SSD, yes       | 4.6 lbs | $1950
Fujitsu Lifebook U904    | 14″, 3200×1800   | 6GB, yes to 10GB | 500GB HDD + 16GB SSD, yes     | 3.1 lbs | $1350

In short, I decided that 13.3″ wasn’t large enough for comfortable viewing (especially at 3200×1800!) or typing, and the Dell was too heavy and bulky, so the Lifebook was the best option for me. (I also liked that the Lifebook has Intel graphics, whereas the Dell has nVidia Optimus.)

Fujitsu Lifebook U904

Some observations about the Lifebook under Ubuntu 13.10:

  • The screen is amazing. Fedora seems to scale its UI for HiDPI but Ubuntu doesn’t — menus are tiny in Ubuntu 13.10. Be warned that the screen is very glossy.

  • Web pages are unreadably tiny by default. You can set layout.css.devPixelsPerPx to 1.25 in Firefox, or set “Page Zoom” to 120% in Chrome’s Advanced Settings, to fix this. (Thanks to Alexander Patrakov for pointing me at the Chrome option in the comments.)

  • I’d use the touchscreen more if swipe-to-scroll worked on web pages in Firefox. Haven’t found a way to do that.

  • It’s the first time I’ve been able to have a row of xterms, then a full-length code editor window at comfortable reading size, then a (approximately half-screen-width) web browser all on the screen at once, and it feels very productive.

  • I saw graphics corruption (glitchy icons) on Fedora, both F20 and Rawhide. Ubuntu is fine.

  • The kernel (both Fedora and Ubuntu) always boots with minimum brightness, so you have to bring it back up each time you reboot.

  • Sometimes the mouse pointer doesn’t come back after suspend/resume. The fastest workaround I’ve found is just switching VT away and back — ctrl-alt-f1 followed by ctrl-alt-f7.

  • Sometimes audio doesn’t come back after resume. This doesn’t happen very often, but I haven’t found a way to bring it back other than rebooting.

  • Touchscreen, SD slot, USB3 work great.

  • Flash video makes the fan kick in, and it’s loud enough to be annoying. HTML5 video is fine. The fan’s usually very quiet.

  • While the RAM and disk are user-upgradable, it does require fully opening up the machine — it’s not as simple as removing a few screws on the bottom panel. I haven’t done it myself yet.

  • The onboard HDMI port only supports up to 1920×1080 on external monitors (this is pretty common). There’s an optional port replicator that has a DisplayPort port for higher res displays. If you use external monitors a lot, you might hold out for a laptop with a mini-DisplayPort built in.

  • I really miss my ThinkPad’s trackpoint; I’m going to try a tiling window manager.

Eight years and eight percent: Always giving more

(This is a joint blog post with Madeleine.)

Our tradition continues: to celebrate our eighth year of marriage Madeleine and I are giving 8% of our joint pretax income. (Each year we give 1% more.) This giving is made to organizations which we believe have the most concrete short term “estimated value” for helping others.

As people look forward to making resolutions for the coming year, we hope our own example helps inspire others to give – just as others have inspired us by giving more, despite financial pressures. Those who go ahead of us have blazed a trail we happily follow.

“Path Squiggles” by Dominic Alves

As in previous years, we are guided by the research performed by GiveWell. Efficiency in doing good should matter, and for this reason our money will be going to help the developing world. Money can do more immediate good for the global poor – each dollar can accomplish more – than it can do to ameliorate the lives of those in first-world poverty.

Almost all of our giving this year will go to GiveDirectly. GiveDirectly aims to distribute 90% of the money it receives directly to poor individuals in the developing world. Their methods have been developed in Kenya, where the M-Pesa mobile-phone-based money transfer system facilitates the transfer of cash. GiveDirectly had a great year, with high-profile and supportive articles in the New York Times, NPR’s This American Life podcast, and even The Economist. Even better, these articles often introduce one of the central ideas behind GiveWell (which has recommended GiveDirectly as one of three top charities) – that we can try to target donations to do the most good for the most people, and that acknowledging this involves a dramatic rethinking of which charities we choose to support.

“Mobile Phone with Money in Kenya” by Erik (HASH) Hersman

There are many ways to make our lives meaningful. We have been fortunate to grow our family with our first child: a concrete meaning and joy, though a local one. We’ve also been especially fortunate to have had employment (past and present) where our skills are used to improve the world. A third path to meaning – one we hope others will join us in celebrating – is to give, to give more, and to give wisely.

May you find the happiness of giving in the new year!

Technical talks should be recorded

I’ve picked up an interest in JavaScript and HTML5 this year, and have gone to a bunch of great technical talks in Boston. I brought a camera with me and recorded some of them, so you can see them too if you like. Here they are:

Rick Waldron – The Future of JavaScript

Mike Pennisi – Stress Testing Realtime Node.js Apps

Paul Irish – The Mobile Web Is In Deep Trouble

Daniel Rinehart – Debugging Node.js Applications

Ian Johnson – Prototyping data visualizations in d3.js

Kenneth Reitz – Heroku 101

I think these are world-class talks. But if I hadn’t brought my little camera with me and recorded them, they would be destroyed. No-one else offered to record them, even though they were popular — the Paul Irish talk had 110 people signed up to attend, and more than the same number again waitlisted who couldn’t go because they wouldn’t fit in the room. So there were more people in Boston who didn’t get to see the talk (but wanted to) than who did, even before we start counting the rest of the world’s interest in technical talks.

I’m happy that I’m able to help disseminate knowledge from Boston, which has an abundance of incredibly smart people living here or visiting, to wherever in the world you’re reading from now. But I’m also sad, because there are far more talks that I don’t go to here, and I expect most of those aren’t being recorded.

We’re technologists, right? So this should be easy. It’s not like I went to video camera school:

  • The equipment I’m using (Panasonic Lumix G2 camera and Lumix 20mm f/1.7 lens) costs under USD $800. Maybe it could be cheaper; maybe a recent cellphone (HTC One or Galaxy S4?) would be adequate.
  • I use a $20 tripod which is half broken.
  • I don’t use an external audio recorder (just the camera’s microphone) so the audio is noisier than it could be.
  • My camera’s sensor is small so it doesn’t have great low-light performance, and it records 720p instead of 1080p.
  • Sometimes the refresh rate/frequency of the projector is out of sync with the camera and there are strobing colors going across the screen in the final video. I don’t think I can do anything about this on the camera’s side?
  • I don’t do any editing because I don’t have time; I just upload the raw video file to YouTube and use YouTube’s “crop” feature to trim the start and end, that’s it.

I’d really like to know what the right answer is here. Am I overestimating how important it is to record these, and how privileged I am to be somewhere where there’s an interesting talk happening almost every day? Is owning a device that can record HD video for around 90 mins rare, even amongst well-paid developers and designers? If the presenter just recorded a screencast of their laptop with audio from its microphone, is that good enough or is that too boring for a full-length talk?

Might part of the problem be that people don’t know how to find videos of technical talks (I don’t know how anyone would find these unless they were randomly searching YouTube) so there isn’t as much demand as there should be — is there a popular website for announcing new recordings of tech talks somewhere? Maybe I just need to write up a document that describes how to record talks with a minimum of hassle and make sure people see it? Do we need to make a way for someone to signify their interest in having an upcoming talk be recorded, so that a team of volunteer videographers could offer to help with that?

WebRTC without a signaling server

WebRTC is incredibly exciting, and is starting to see significant deployment: it’s available by default in Chrome and Firefox releases now. Most people think of WebRTC as an API for video calling, but there’s a general purpose method for directly sharing data between web browsers (even when they’re behind NAT) in there if you look harder. For example:

  • P does peer-to-peer mesh networking in JavaScript.
  • TowTruck allows you to add collaboration features (collaborative text editing, text chat, voice chat) to websites.
  • PeerCDN forms a network from a site’s visitors, and uses it to offload serving up static content away from the web server and on to the networked peers.
  • The Tor Project is interested in using WebRTC to enable volunteers with JavaScript-enabled web browsers to become on-ramps onto the Tor network for users under censorship, as part of the Flash Proxies project. The idea is that censoring organizations may block the public Tor relays directly, but they can’t easily block every random web browser who might route traffic for those relays over WebRTC, especially if each web browser’s proxy is short-lived.

All of this activity means that we might finally be close to solving — amongst other important world problems — the scourge of xkcd.com/949:


xkcd: File Transfer, used under CC-BY-NC 2.5.

I wanted to experiment with WebRTC and understand its datachannels better, and I also felt like the existing code examples I’ve seen are unsatisfying in a specific way: it’s a peer-to-peer protocol, but the first thing you do (for example, on sites like conversat.io) is have everyone go to the same web server to find each other (this is called “signaling” in WebRTC) and share connection information.

If we’re going to have a peer-to-peer protocol, can’t we use it without all visiting the same centralized website first? Could we instead make a WebRTC app that just runs out of a file:/// path on your local disk, even if it means you have to manually tell the person you’re trying to talk to how to connect to you?

It turns out that we can: I’ve created a serverless-webrtc project on GitHub that decouples the “signaling server” exchange of connection information from the WebRTC code itself. To run the app:

  • download Firefox Nightly.
  • git clone git://github.com/cjb/serverless-webrtc.git
  • load file:///path/to/serverless-webrtc/serverless-webrtc.html

You’ll be asked whether you want to create or join a channel, and then you’re prompted to manually send the first party’s “WebRTC offer” to the second party (for example, over an instant message chat) and then to do the same thing with the second party’s “WebRTC answer” reply back. Once you’ve done that, the app provides text chat and file transfer between peers, all without any web server. (A STUN server is still used to find out your external IP for NAT-busting.)

There are open issues that I’d be particularly happy to receive pull requests for:

#1: The code doesn’t work on Chrome yet. Chrome is behind Firefox as far as DataChannels are concerned — Chrome doesn’t yet have support for binary transfers, or for “reliable” (TCP-like, not UDP-like) channels (Firefox does). These are both important for file transfers.

#2: Large file transfers often fail, or even hang the browser, but small transfers seem to work every time. I’m not sure whose code is at fault yet.

#3: File transfers should have a progress bar.

Thanks for reading this far! Here’s to the shared promise of actually being able to use the Internet to directly share files with each other some time soon.

Camera review: Lomography Belair X 6-12

The Belair 6-12 is an interesting new medium format film camera from Lomography. Here’s a mini-review, mixed in with photos from a roll of Velvia 50 that I shot with the camera at La Jolla Shores.

The camera looks amazing on paper — auto-exposure medium format cameras with interchangeable lenses are usually far more expensive, and the closest thing to the 6×12 panoramic format I can think of is the 35mm Hasselblad XPan II.

It’s much harder to get good results from the Belair than the XPan, though. The problems I’ve seen, starting with the most severe:

  • “Infinity focus” isn’t infinity sometimes. This could be a lens calibration problem, or the bellows not extending far enough, or the film not being held with enough tension to stay flat against the film plane. This seems to make the camera useless for landscapes, which is what I’d want to be shooting with a panoramic medium format camera.
  • When you take a shot, the shutter lever is on the front board that the lens is attached to via the bellows, rather than on the same side of the bellows as the camera’s body. This means that your sturdy tripod is keeping the camera’s body still while your finger is moving the lens board around and making your image blurry. This is probably the largest design flaw in the camera, and I thought the lack of reliable infinity was already pretty terrible. So, you actually shoot by putting the camera on the tripod, putting your outstretched index finger along the base of the front lens board to support it, and then using your thumb to activate the shutter extremely gently.
  • The viewfinder isn’t coupled to the lens; you focus blindly by setting the distance between you and the subject and relying on depth of field. This makes the focus problems much worse, because you can’t even tell whether the lens is failing to reach infinity.
  • The camera seems prone to “fat rolls”, where the spool doesn’t stay tight. This sometimes results in light leaks, and is what makes me think that the infinity problem might be about the spool not being flat against the film plane.
  • The autoexposure is inscrutable (you can’t see what shutter speed it’s chosen) and sometimes makes bad decisions.
  • There are two plastic lenses included, a 58mm (21mm at 35mm equivalent on 6×12) ultra wide angle and a 90mm (32mm at 35mm equivalent on 6×12) wide angle. The lens quality is not good. Each lens can be used at f/8 or f/16, with results that I’d describe like this:
58mm f/8 Extremely soft everywhere, with strong vignetting
58mm f/16 Still pretty soft
90mm f/8 Soft, but somewhat usable
90mm f/16 Usable

You can see that the interchangeable lenses don’t add much if you’re interested in sharpness; you’ll want to stay at 90mm f/16 almost all of the time. (I should point out that Lomography is selling glass lenses for the camera as an upgrade — if you’re willing to spend more money on those, they’ll probably have better performance.)

With all that griping aside, how is it to shoot? It’s pretty fun. I like these shots, although I’ve had other rolls come back with unusably poor focus or exposure. In the shot below, I actually like that the two sides of the image (the seagulls and the cliff) are very soft; they make the photo look more painterly and surreal, which works here.

I can’t recommend buying the Belair, at least at its current price. I took advantage of a 40% preorder discount, and I think some of my problems with focusing might be caused by getting an early production camera that way. While I’m excited to see Lomography trying hard to innovate and keep film alive, the Belair seems to be more of a toy camera than a serious one.