Dell C6100 XS23-SB server

Last week’s laptop review reminds me that I should also write about a new server purchase. (I know, everyone’s moving to cloud computing, and here I am buying a rackmount server to colocate…)

Kelly Sommers has one of the best dev blogs out there, and she recently wrote about a new server she’s installing at home. It turns out that Dell made some dramatically useful servers around four years ago — the server is a slim rackmount size (2U) yet contains four independent nodes, each of which can carry dual Xeon processors, eight RAM banks, and three 3.5″ disks. Dell didn’t sell these via standard markets: they went to large enterprises and governments, and are now off-lease and cheaply available on eBay. They’re called “Dell C6100” servers, and there are two models that are easy to find: XS23-SB, which uses older (LGA771) CPUs and DDR2 RAM; and XS23-TY3, which uses newer LGA1366 CPUs and DDR3. Here’s a Serve the Home article about the two models. (There are also new C6100 models available from Dell directly, but they’re different.)

I got one of these — each of the four nodes has two quad-core Xeon L5420s @ 2.5GHz and 24GB RAM, for a total of 8 CPUs and 96GB RAM for $750 USD. I’ve moved the RAM around a bit to end up with:

CPU        RAM   Disk
2 * L5420  32GB  128GB SSD (btrfs), 1TB (btrfs)
2 * L5420  24GB  3 * 1TB (RAID5, ext4)
2 * L5420  24GB  3 * 750GB (RAID5, ext4)
2 * L5420  16GB  2 * 1TB (RAID1, ext4)

While I think this is a great deal, there are some downsides. These machines were created outside of the standard Dell procedures, and there aren’t any BIOS updates or support documentation available (perhaps Coreboot could help with that?). This is mainly annoying because the BIOS on my XS23-SB (version 1.0.9) is extremely minimal, and there are compatibility issues with some of the disks I’ve tried. A Samsung 840 EVO 128GB SSD is working fine, but my older OCZ Vertex 2 does not, throwing “ata1: lost interrupt” to every command. The 1TB disks I’ve tried (WD Blue, Seagate Barracuda) all work, but the 3TB disk I tried (WD Green) wouldn’t transfer at more than 2MB/sec, even though the same disk does 100MB/sec transfers over USB3, so I have to suspect the SATA controller — it also detected the disk as having 512-byte logical sectors instead of 4k sectors. Kelly says that 2TB disks work for her; perhaps we’re limited to 2TB per drive bay by this problem.
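
If you hit something similar, it’s worth comparing how the kernel and the drive each see the disk. Roughly, and assuming the disk shows up as /dev/sdb (a placeholder for whichever device node your bay maps to):

$ cat /sys/block/sdb/queue/logical_block_size /sys/block/sdb/queue/physical_block_size
$ sudo hdparm -I /dev/sdb | grep -i 'sector size'
$ dmesg | grep -iE 'ata|lost interrupt' | tail -20

The first shows what the kernel decided, the second what the drive itself reports, and the third whether the controller is throwing errors.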

So what am I going to use the machine for? I’ve been running a server (void.printf.net) for ten years now, hosting a few services (like tinderbox.x.org, openetherpad.org and a Tor exit node) for myself and friends. But it’s a Xen VM on an old machine with a small disk (100GB), so the first thing I’ll do is give that machine an upgrade.

While I’m upgrading the hardware, what about the software? Some new technologies have come about since I gave out accounts to friends by just running “adduser”, and I’m going to try using some of them: for starters, LXC and Btrfs.

LXC allows you to “containerize” a process, isolating it from its host environment. When that process is /sbin/init, you’ve just containerized an entire distribution. Not having to provide an entirely separate disk image or RAM reservation for each “virtual host” saves on resources and overhead compared with full virtualization from KVM, VirtualBox or Xen. And Btrfs allows for copy-on-write snapshots, which avoid duplicating data shared between multiple snapshots. So here’s what I did:

$ sudo lxc-create -B btrfs -n ubuntu-base -t ubuntu

The “-B btrfs” has to be specified for initial creation.

$ sudo lxc-clone -s -o ubuntu-base -n guest1

The documentation suggested to me that the -s is unneeded on btrfs, but it’s required — otherwise you get a subvol but not a snapshot.

root@octavius:/home/cjb# btrfs subvol list /
ID 256 gen 144 top level 5 path @
ID 257 gen 144 top level 5 path @home
ID 266 gen 143 top level 256 path var/lib/lxc/ubuntu-base/rootfs
ID 272 gen 3172 top level 256 path var/lib/lxc/guest1/rootfs

We can see that the new guest1 subvol is a Btrfs snapshot:

root@octavius:/home/cjb# btrfs subvol list -s /
ID 272 gen 3172 cgen 3171 top level 256 otime 2014-02-07 21:14:37 path var/lib/lxc/guest1/rootfs

The snapshot appears to take up no disk space at all (as you’d expect for a copy-on-write snapshot) — at least not as seen by df or btrfs filesystem df /. So we’re presumably bounded by RAM, not disk. How many of these base system snapshots could we start at once?

Comparing free before and after starting one of the snapshots with lxc-start shows only a 40MB difference. It’s true that this is a small base system running not much more than an sshd, but still — that suggests we could run upwards of 700 containers on the 32GB machine. Try doing that with VirtualBox!
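
If you want to sketch that experiment yourself, it’s just a loop over lxc-clone and lxc-start (the guest names here are placeholders, and you’d obviously stop well before RAM actually runs out):

$ for i in $(seq 2 10); do sudo lxc-clone -s -o ubuntu-base -n guest$i; sudo lxc-start -n guest$i -d; done
$ sudo lxc-ls
$ free -m

Each clone is another copy-on-write snapshot, so disk usage stays essentially flat; free is the number to watch.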

So, what’s next? You might by now be wondering why I’m not using Docker, which is the hot new thing for Linux containers; especially since Docker 0.8 was just released with experimental Btrfs support. It turns out that Docker’s better at isolating a single process, like a database server (or even an sshd). Containerizing /sbin/init, which they call “machine mode”, is somewhat in conflict with Docker’s strategy and not fully supported yet. I’m still planning to try it out. I need to understand how secure LXC isolation is, too.
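
To make the contrast concrete: the idiomatic Docker unit is a single foreground process, so an sshd container looks something like the following (the image name and port mapping are placeholders), while the LXC containers above boot a full init and userland:

$ sudo docker run -d -p 2222:22 my-sshd-image /usr/sbin/sshd -D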

I’m also interested in Serf, which combines well with containers — e.g. automatically finding the container that runs a database, or (thanks to Serf’s powerful event hook system) handling horizontal scaling for web servers by simply noticing when new ones exist and adding them to a rotation.
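
For a flavour of those event hooks: a Serf agent can run an arbitrary script whenever members join or fail, so the web-server rotation example reduces to something like this (the handler script names are made up for illustration):

$ serf agent -node=web1 \
    -event-handler "member-join=/usr/local/bin/add-to-rotation.sh" \
    -event-handler "member-failed,member-leave=/usr/local/bin/remove-from-rotation.sh"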

But the first step is to work on a system to provision a new container for a new user — install their SSH key to a user account, regenerate machine host keys, and so on — so that’s what I’ll be doing next.
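
That provisioning script might look roughly like this sketch, with “alice” and alice.pub standing in for the new user and their public key, and assuming an Ubuntu guest cloned from the Btrfs base image above:

$ sudo lxc-clone -s -o ubuntu-base -n alice
$ R=/var/lib/lxc/alice/rootfs
$ sudo rm $R/etc/ssh/ssh_host_*                    # drop host keys inherited from the base image
$ sudo chroot $R dpkg-reconfigure openssh-server   # generate fresh ones
$ sudo chroot $R adduser --disabled-password --gecos "" alice
$ sudo mkdir -p -m 700 $R/home/alice/.ssh
$ sudo cp alice.pub $R/home/alice/.ssh/authorized_keys
$ sudo chroot $R chown -R alice:alice /home/alice/.ssh
$ sudo lxc-start -n alice -d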

Comments

  1. Haha, sorry Chris, but I just had to comment on this after seeing you got the U904 as well: I asked “mrrackables” about shipping cost to Serbia last year, and had it not been quoted at $900 (and, by virtue of Serbian customs law, I’d have to pay another ~30% in taxes on top of the base price + shipping), I’d be sporting a PowerEdge C6100 as my home server just like you 🙂

    There were others who quoted shipping at a “mere” ~$600, and while this is still a pretty good value when all is accounted for, for that money I can get myself a nicely configured modern Ivy Bridge Xeon server that will actually do all I need (except it won’t be a platform to play with new server technologies on, since it’s not a “cloud server”).

    I guess we’ve got a very similar taste in computers. 🙂

  2. Thank you for sharing your experiences with the C6100; I am actually one of the less lucky people who fell for the deal offered by “mrrackables” on eBay. I wish I had read your blog before getting the box, but I guess it was meant to be.
    I can’t even get it to boot up; I’ve been trying for a long time now. It is not seeing my HDDs; I tried a 2TB (WD Green) and another Seagate HDD (300GB) I had in a Dell Precision T7400, without luck.
    I have also tried to upgrade the BIOSes and BMCs with the firmware from http://poweredgec.com/latest_fw.html, without luck.
    NOTHING WORKS 🙁

    Anybody with a solution, please help!

    • Sorry to hear! Perhaps try some more disks, and make sure you’re installing them properly — do you get green lights on the disk caddies?

      Do you know which BIOS version you have, out of curiosity?

      • Hi, sorry for the late reply; I was hoping to get a notification whenever I got a reply.

        The LEDs just blink and that’s it.

        When I boot, the only info on the BIOS I get is:

        * Phoenix ROM BIOS PLUS Version 1.10 1.0.9

        * [Xanadu-S X7DWT BIOS]

  3. Hi, I bought a C6100 when they were under $900. Fantastic experience. I didn’t expect everything to work. Two of my nodes seem to have a completely different BIOS than the others. Anyways … I was able to set them up just fine.

    These systems are no longer good value in my humble opinion. The cheapest I’ve found are between $1.5K and $2K. This is certainly good for production but a bit steep for a hobby.

    I want to fill out my rack (yes, I bought a rack to house my server) … any suggestions for good machines I can get from ebay at a cheap price? I don’t want to go below the C6100 in terms of generations. Would *really* appreciate any thoughts/suggestions.

    • Mine was $700 a few weeks ago, but I agree that it looks like these were bought in lots of 100 off-lease, and the lots are finally running out. What becomes popular next will probably depend on the scheduling for a large batch becoming off-lease.

      I’d look for alternate ideas on the forum at http://www.servethehome.com/. The Supermicro Superserver 6026TT-HTRF looks like a candidate; same topology as C6100 (2u, 4 nodes, 12 3.5″ disks), and available barebones on eBay.

  4. Also, if you have the 1068E RAID controller and want to update to a recent firmware, use the instructions from this site: http://www.virtualistic.nl/archives/750

    I tried to switch from RAID mode to IT (passthrough) mode, because I wanted to be able to use the TRIM command. But nothing worked. So I ended up using the on-board Intel ports in AHCI mode.

  5. Also, after flashing the BMC firmware, the MAC address will reset to zero.

    If you’re using Debian/Ubuntu, install the IPMI tools first:

    apt-get install ipmitool openipmi -y

    Afterwards, use the script from this site to update the MAC address to something random: http://www.webhostingtalk.com/showthread.php?s=0bc6070eb640aa3eb8db5a06ccc861c9&t=1288192
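
    (Doing it by hand is also roughly three lines: pick a random locally-administered MAC and write it to the BMC. Channel 1 is an assumption here, and some BMC firmwares refuse MAC writes entirely.)

    MAC=$(printf '02:%02x:%02x:%02x:%02x:%02x' $((RANDOM%256)) $((RANDOM%256)) $((RANDOM%256)) $((RANDOM%256)) $((RANDOM%256)))
    ipmitool lan set 1 macaddr $MAC
    ipmitool lan print 1 | grep -i 'mac address'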

    Fan controller firmware:
    I had issues updating, so I didn’t try to force it yet. But I’m gonna try it anyway 🙂

  6. Forgot to also mention this:

    If you can’t save your BMC IP/netmask/gateway in the BIOS and you get a CMOS checksum error at startup, you will need to reflash your BMC and BIOS (maybe even a couple of times). I did it from Windows.

    After this, no more startup errors and everything saves fine.

    Regards

  7. So before you start flashing things, you should check the Service Tag to see if it is an actual Dell. There are a bunch of DCS servers out there that do not have the same firmware; they were custom-built for large clusters. The firmware on the poweredgec.com website is not the right stuff: you will brick your equipment and be left with an expensive, hard-to-get-rid-of ornament.

    Simply go to support.dell.com and put in the Service Tag. If your Service Tag does not show up, then don’t do anything to the firmware.

  8. Pingback: Chris Ball » Experimenting with Panamax

    • λ sudo hdparm -tT /dev/sda
      /dev/sda:
      Timing cached reads: 11060 MB in 2.00 seconds = 5535.46 MB/sec
      Timing buffered disk reads: 632 MB in 3.00 seconds = 210.50 MB/sec

      • Thanks; I’m trying to squeeze some more speed out of these guys as well. Mine seems slow for a SATA 2 bus, stock Dell BIOS.

        # sudo hdparm -tT /dev/sda

        /dev/sda:
        Timing cached reads: 10666 MB in 1.99 seconds = 5353.75 MB/sec
        Timing buffered disk reads: 518 MB in 3.00 seconds = 172.47 MB/sec

        # sudo hdparm -tT /dev/md0

        /dev/md0:
        Timing cached reads: 10712 MB in 1.99 seconds = 5378.60 MB/sec
        Timing buffered disk reads: 482 MB in 3.01 seconds = 160.07 MB/sec

        • Just gonna leave this here: Looks like the stock Dell bios locks the drives in IDE mode. Flashed the Supermicro bios (on a spare chip), switched to AHCI:

          # hdparm -Tt /dev/sda

          /dev/sda:
          Timing cached reads: 12044 MB in 1.99 seconds = 6048.98 MB/sec
          Timing buffered disk reads: 714 MB in 3.00 seconds = 237.66 MB/sec

          # hdparm -Tt /dev/md0

          /dev/md0:
          Timing cached reads: 11170 MB in 1.99 seconds = 5609.74 MB/sec
          Timing buffered disk reads: 740 MB in 3.01 seconds = 246.08 MB/sec
