1. Introduction
FreeBSD on a Supermicro 5017A-EF makes a power-efficient, reliable, quiet and Hypervisor-free server for under a thousand bucks.
[Some of the 5017A-EF’s likely uses happen to match those of a virtual machine. Yet while Hypervisors can be essential (e.g. to effectively use Windows), they have big drawbacks. ESX and Hyper-V both depend on expensive hardware RAID, and both require rigorous security configuration and ongoing patching, especially if any of their VMs are accessible to untrusted people (e.g. VMSA-2014-0005). Hyper-V obviously needs Windows, and the ESX management tools tend to be Windows-centric. For Unix, and especially FreeBSD with ZFS, I think physical computers beat VMs, unless one really needs a Hypervisor’s high availability features.]
FreeBSD 10.1-RELEASE-p0 amd64 was used in this article.
2. Hardware
Here’s how the 5017A-EF comes packed:
This Quick Reference sheet is included.
2.1. Included Parts
The 5017A-EF includes a CSE-504-203B case, X9SBAA-F motherboard and PWS-203-1H power supply.
This 1U rackmount chassis mounts with its front tabs only; it’s not very heavy. All the connectors are on the back.
Here’s another photo of the case, with a PCI card installed.
This case doesn’t come with any way to mount fans or 2.5" disks; special Supermicro brackets must be bought separately (see Optional Parts).
There’s only one SATA activity LED on the front. One LED per port would have made multi-disk configurations a lot more fun to watch.
The front rackmount brackets are removable; they’re just held on with screws.
The case cover’s held on by five tiny screws (two along each side, and one in the back), which makes it tedious to open up. A quick-fastening method like the 5018A-MHN4’s (or even just thumb-screws) would have been nicer.
This Mini-ITX motherboard comes with an Intel Atom S1260 pre-installed. The S1260 has two 2GHz cores and draws only 8.5 Watts. When Hyper-Threading’s enabled, FreeBSD reports four CPUs.
The Atom S1260 can’t run bhyve. If you want something similar that does run bhyve, consider the more powerful Atom C2758. Supermicro makes several computers with this CPU, including the 5018A-MHN4.
Up to 8GB of ECC memory’s supported, which differentiates the 5017A-EF from many other low-power computers, and makes it suitable for ZFS.
This computer supports only USB 3.0, which prevents all but the newest operating systems from being used easily. FreeBSD 10.1-RELEASE works well enough (see details in Installation Notes).
It’s possible to boot over the two onboard Intel i350AM2 GigE ports using PXE or iSCSI. I tried both, and both work, though booting from iSCSI isn’t well supported by many operating systems.
Four SATA 3.0 (6Gb) ports are provided by an onboard Marvell 88SE9230. This controller supports both hybrid software/hardware based RAID and normal JBOD mode, the latter of which should be chosen for ZFS use.
Qty | Type | Location |
---|---|---|
2 | USB 3.0 | External |
1 | VGA | External |
1 | Serial (DB9) | External |
2 | Network (i350AM2) | External |
1 | BMC Network | External |
4 | SATA 3.0 (6 Gbps) | Internal |
1 | SO-DIMM | Internal |
This is a 200 Watt, 80 PLUS "GOLD" power supply. Its fan’s temperature-regulated and spins very slowly when the computer’s in a room-temperature setting.
While spinning slowly, it makes a quiet "growl" sound that seems characteristic of some motors. The growl’s replaced by a clean "whoosh" sound when it speeds up. Yet I’ve heard the "whoosh" only during POST, since even after hours of full load, I wasn’t able to warm the 5017A-EF up enough to increase the fan speed.
I’m guessing that Supermicro used this 200W PSU in such a low-power computer simply to standardize with many of their other 1U rackmounts, since this same PSU’s used in other models (many of which aren’t low-power at all).
2.2. Optional Parts
The hard disk and PCI slot areas partially contend with each other; some combos won’t fit. This is less of a problem with low-profile cards and 2.5" disks; it looks like four 2.5" disks and a low-profile card will fit.
The 5017A-EF doesn’t come with any way to mount 2.5" disks; Supermicro brackets have to be bought separately. With the right bracket(s), it’s possible to mount up to four 2.5" disks, or a mixture of one 3.5" and two 2.5" disks.
MCP-220-00044-0N, shown here, mounts two 2.5" disks (screws are included).
The Supermicro documentation seems to claim that it’s not possible to combine a PCI card with four 2.5" disks. Yet at least with a card this small, it looks possible to me.
The 9.5mm CT128M4SSD2 SSDs fit with room to spare; though I didn’t measure carefully, the bracket looks like it’s made to hold 15mm Enterprise 2.5" disks. It’s held in by four screws which enter from underneath and poke through the plastic sheet that lines the interior like moose antlers.
Here’s how things look with the cables attached.
The 5017A-EF doesn’t come with any place to mount fans, but Supermicro makes optional brackets for this. I think fans are only necessary when conventional hard disks are being used.
MCP-320-81302-0B (shown) adds one fan; there’s another part to add two.
I didn’t wind up installing this, since I went with an SSD-only configuration.
2.3. Non-Supermicro Parts
To the Supermicro parts, I added RAM, two SSDs and a NIC.
This is a single Crucial/Micron 8GB ECC SO-DIMM.
I added two Crucial/Micron 128GB SSDs, which I already had left over from an old project. These are 9.5mm thick.
I added an old Intel PRO/1000 GT (PWLA8391GTBLK) for an extra network port.
2.4. Management
The 5017A-EF has a complete remote management system, including
a remote console, virtual media, sensor outputs and the usual
security hazards.
[This is probably old news to
this article’s readers, yet access to a server’s console
often allows a skilled person to bypass security and achieve root-level
access to the computer’s data. Remote management systems are capable
of exposing this vulnerable condition to untold villainous riffraff. And since
remote management systems are a freebie that comes with hardware, they’re famously buggy.
In short, they should only be plugged in when really necessary, and even
then only on isolated management networks. No comment’s required on ADMIN/ADMIN.]
In short, the remote management system mostly works with FreeBSD 10.1, with one caveat regarding fully remote installs (see Installation Notes).
An independent Ethernet port’s provided for remote management, and its network settings are configured on these BIOS setup program screens.
Point a browser there once it’s configured to see the login screen. The default login’s ADMIN/ADMIN.
Not surprisingly, Java’s required to use the remote console feature. Java 7 on an Ubuntu 14.04 LTS desktop worked fine; it didn’t work with Java 8.
The contents of all the menus should give you a good idea of its remote management capabilities:
The sensor output screen’s interesting. Unfortunately, the power consumption screen didn’t seem to work.
2.5. Power Consumption
State | Consumption |
---|---|
Powered down | 3 Watts |
Running FreeBSD, mostly idle | 18 Watts |
Running FreeBSD, quite busy | 23 Watts |
I measured the 5017A-EF’s power consumption with a Kill A Watt®, in this article’s configuration (two SSDs plus an extra PCI NIC). Two network ports and the remote console’s network port were plugged in during the measurements.
See Performance for unfair, real-world power use and efficiency comparisons with other computers.
2.6. Noise
This computer’s only fan’s in the PSU. It’s temperature-controlled and spins very slowly. With an SSD-only configuration, I wasn’t able to load the computer enough to yield any RPM increase at all.
There’s a quiet "growl" sound coming from the PSU’s fan (see the PSU section for more info).
In a very quiet room, this computer’s audible within a ~ten foot radius, but it’s not annoying. I found that any other ambient noise (like an average desktop computer, a howling Doberman or even a laptop fan) masked the 5017A-EF completely.
2.7. Setup Notes
Before installing FreeBSD, consider updating the BIOS and testing the memory.
My 5017A-EF came with BIOS level 1.0c, and I updated it to the newest level
as a matter of course.
[Firmware updates that go badly can yield an
inconsistent image that prevents the hardware from functioning at all, so vendors
try to discourage people from whimsically updating it. This makes sense. Yet server and
storage hardware that’s going to be relied upon should generally have its firmware
brought up to date at least once when it’s first installed. This initial update
advances the firmware’s state beyond the "barely stable enough to ship" level
that’s often pre-loaded. I’ve spent too many days debugging buggy Emulex HBAs
and Broadcom NICs to feel comfortable running pre-loaded firmware on anything
important. Shopping tip: buy QLogic HBAs and Intel NICs.]
Here’s the setup utility showing BIOS 1.1.
The Supermicro BIOS update program runs in DOS. To use it, I booted DOS from a USB memory stick and used a USB keyboard. Both the keyboard and memory stick worked fine despite the USB 3.0 ports.
It’s a good idea to test memory before relying on it. I’ve seen MemTest86+ report failures after several days of continuous tests, so let it run for a while; I ran mine for about four days.
I’ve had to exchange a number of defective memory boards on warranty over the years. This problem seems to have been on the decline since around 2007; prior to then, it seemed like new memory tested bad at least a third of the time.
I heard from Allan Jude and Kris Moore on their amazing BSDNow show that FreeBSD’s tuned by default to allocate 95% of the computer’s memory to ZFS.
ZFS counts on the memory’s integrity, so having ECC alone’s not enough; it has to be tested.
3. FreeBSD
3.1. Laundry List
3.2. Installation Notes
The Marvell 88SE9230 should be set to JBOD mode for ZFS use. Disks in JBOD mode show up as Free Physical Disks. Here are JBOD configuration and SSD detection screenshots.
FreeBSD 10.0-RELEASE didn’t support this computer’s USB 3.0 ports well enough to do a regular install; a network install was required. Things work much better with 10.1. Here’s what works with each release:
Version | USB Kbd | USB CD | USB Memstick | Remote Kbd | Remote CD |
---|---|---|---|---|---|
10.0-RELEASE | Works | Symptom A | Symptom B | Symptom B | Symptom C |
10.1-RELEASE | Works | Works | Works | Works | Symptom A |
Symptom A: booting from this device worked, but reads began failing during the installation.
Symptom B: unreliable; worked ~10% of the time, sporadically.
Symptom C: it wasn’t possible to boot from this device.
Even with FreeBSD 10.1, the remote management system’s virtual CD drive’s unreliable (Symptom A). While racking the server, remember to leave a USB stick plugged in containing a FreeBSD installation image. This allows one to do future FreeBSD installs remotely, by simply overwriting the USB stick’s content with the new memstick image before rebooting.
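Refreshing that resident USB stick can be sketched as below; the image filename and the da0 device node are assumptions, so verify the real device (e.g. with camcontrol devlist) before writing anything:

```shell
# Build (and review) the dd command that overwrites the resident USB stick
# with a new installer image. The image name and /dev/da0 are assumptions;
# dd destroys whatever the target actually is, so confirm the device first.
img="FreeBSD-10.1-RELEASE-amd64-memstick.img"
target="${TARGET:-/dev/da0}"
cmd="dd if=${img} of=${target} bs=1m"
echo "${cmd}"   # run this by hand once the target is confirmed, then reboot
```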
powerd
Enable powerd when the FreeBSD installer mentions it. When the computer’s idle, powerd gradually lowers its CPU speed. When the CPU becomes busy, it cranks it back up. I was amused to see it ultimately lower the speed down to a nostalgic 75 MHz. Here’s its full output.
load 6%, current freq 75 MHz (21), wanted freq 75 MHz
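If powerd wasn’t enabled during installation, it can be turned on later with an rc.conf fragment like this sketch (the policy flag is my assumption, not the article’s configuration; see powerd(8) for the other modes):

```shell
# /etc/rc.conf -- enable powerd. The -a flag picks the policy on AC power;
# "adaptive" scales the clock with load and is an assumed choice here.
powerd_enable="YES"
powerd_flags="-a adaptive"
```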
The igb driver enables TCP segmentation offloading on the onboard Intel i350AM2 ports by default. Yet TSO’s not compatible with libalias(3)-based NAT, which is used by natd and ipfw.
If you plan to use libalias(3)-based NAT, disable TSO on the associated port with an rc.conf line like this:
ifconfig_igb1="inet 192.168.32.1 netmask 255.255.255.0 -tso"
There’s also a net.inet.tcp.tso sysctl, but disabling it that way would impact all ports, not just the port associated with NAT.
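For broader context, a minimal natd-based NAT setup in rc.conf might look like this sketch; the interface roles (igb0 outside, igb1 inside) and the inside address are assumptions, not the article’s actual configuration:

```shell
# /etc/rc.conf -- minimal natd + ipfw NAT sketch. igb0 as the outside
# interface is an assumption. rc.firewall inserts the divert-to-natd rule
# automatically when natd_enable is YES. Note the -tso on the NAT port.
gateway_enable="YES"
firewall_enable="YES"
firewall_type="open"
natd_enable="YES"
natd_interface="igb0"
ifconfig_igb1="inet 192.168.32.1 netmask 255.255.255.0 -tso"
```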
The 5017A-EF’s CPU power and 8GB ECC memory capacity make it well suited to ZFS. Consider doing a root-on-ZFS install with at least a two-disk mirror; this’ll improve reliability a lot while still keeping the computer under a thousand bucks.
Here’s how my root-on-ZFS installation screen looks.
If the installer fails to write to the disks, you may need to drop to the installer’s shell and run this command to disable boot sector protection. Whether this is necessary depends on what content happens to exist in the boot sectors before the install starts.
sysctl kern.geom.debugflags=0x10
After installing FreeBSD 10.1-RELEASE, compression should already be set to lz4, and no change is required. Check compression’s status like this (z may be zroot instead):
zfs get compression z
If compression’s not enabled, enable it like this:
zfs set compression=on z
ZFS compression can improve performance by reducing the amount of data being passed to and from the disks, and is computationally undemanding.
Ensure that compression’s enabled before storing data in the datasets, because only data written after compression’s enabled will be compressed.
ZFS deduplication monitors data being stored for duplication. When duplication’s detected, the duplicate data’s not stored an additional time; instead, a reference to the original data’s stored.
This can reduce disk space usage a lot for some types of data, such as virtual machine disk images. For instance, if two disk images both contain Windows Server 2012 R2 guests with the same applications, big sections of the two disk images (files) are likely to be identical. The identical blocks are stored only once.
Deduplication uses a lot of memory; ZFS with deduplication enabled needs about 5GB of memory for each 1TB of data. Since the 5017A-EF’s maximum memory capacity is 8GB, deduplication’s only suitable up to about 1.5TB. Yet to reserve ample memory for applications, I wouldn’t enable deduplication on more than 1TB of data.
Since my storage capacity’s only about 111GB, deduplication ought to consume only ~500MB of memory.
# zfs get dedup z
NAME PROPERTY VALUE SOURCE
z dedup off default
# zfs set dedup=on z
# zfs get dedup z
NAME PROPERTY VALUE SOURCE
z dedup on local
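As a sanity check on that memory estimate, the 5GB-per-1TB rule of thumb works out to roughly 5 MB per GB, so for this pool:

```shell
# Dedup table RAM estimate: ~5 GB per TB of pool data, i.e. ~5 MB per GB.
pool_gb=111                 # usable capacity of the mirrored SSDs
ddt_mb=$(( pool_gb * 5 ))   # 5 MB per GB
echo "~${ddt_mb} MB of RAM for dedup metadata"
```

That lands near the ~500MB figure mentioned above.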
3.3. Performance
I compared the 5017A-EF’s speed and efficiency to a menagerie of other computers by running a clean 10.1-RELEASE make buildworld on each.
- Compute [kWh]: The total power consumed during each computer’s make buildworld was reckoned. Needless to say, the computers with lower kWh values are more compute-efficient.
- Resting [Idle W]: The resting power consumption was measured. This value’s most relevant to the 5017A-EF’s likely uses.
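The Compute figure is just average draw integrated over build time. As a rough cross-check for the 5017A-EF (computer c in the table below), assuming an average draw of about 22 W, between the measured 18 W idle and 23 W busy figures:

```shell
# kWh = average watts x hours / 1000; 3:30 build time, assumed ~22 W average.
avg_w=22.3
hours=3.5
kwh=$(awk -v w="$avg_w" -v h="$hours" 'BEGIN { printf "%.3f", w * h / 1000 }')
echo "${kwh} kWh"
```

That works out to 0.078 kWh, agreeing with the measured figure.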
A Kill A Watt® was used to measure power for computers a, b, c, d, e and g. Computer f had integrated power monitoring, and I skipped computer h because I had no way to measure it.
To keep the computers busy, make -jN buildworld was used, where N equalled the number of CPUs detected by FreeBSD. This ran a single clang instance on each CPU. By "CPU," I mean anything FreeBSD detects as a CPU, even if it’s only a Hyper-Threading logical core.
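Picking N can be sketched like this; the getconf fallback is only there so the snippet also runs on non-FreeBSD systems:

```shell
# One build job per CPU FreeBSD reports (Hyper-Threading cores included).
ncpu=$(sysctl -n hw.ncpu 2>/dev/null || getconf _NPROCESSORS_ONLN)
echo "make -j${ncpu} buildworld"
```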
It’s unfair to compare computers a and b to the 5017A-EF, since they’re not
servers; they don’t support ECC memory, and their storage and networking options
are very limited. Yet it’s interesting that they’re so efficient
nonetheless. Also, computers d and f used remote storage systems which
weren’t included in their power measurements.
# | Make | Model | Hypervisor | CPU | Flags | RAM [GB] | Storage | Time [h:mm] | kWh | Idle W |
---|---|---|---|---|---|---|---|---|---|---|
a | Apple | Mac Mini | Fusion | Core i5-3210M | -j2 | 4 | DAS: 1 SSD | 1:21 | 0.031 | 8 |
b | Lenovo | ThinkPad x230 | Player | Core i5-3320M | -j2 | 4 | DAS: 1 SSD | 1:07 | 0.031 | 13 |
c | Supermicro | 5017A-EF | N/A | Atom S1260 | -j4 | 8 | DAS: 2 SSDs | 3:30 | 0.078 | 18 |
d | Supermicro | H8SCM | ESX | Opteron 4228HE | -j4 | 8 | SAN: GigE iSCSI | 0:43 | 0.064 | 38 |
e | Supermicro | 5018A-MHN4 | N/A | Atom C2758 | -j8 | 16 | DAS: 4 HDDs | 0:57 | 0.057 | 54 |
f | Hitachi | CB 2000 [E55A2] | ESX | Xeon X5670 | -j2 | 8 | SAN: 4Gb FC | 1:10 | 0.139 | 104 |
g | Supermicro | H8DM3-2 | N/A | Opteron 2210HE | -j2 | 4 | DAS: 2 HDDs | 1:55 | 0.272 | 110 |
h | HP | DL385 G2 | ESX | Opteron 2220 | -j2 | 8 | DAS: 2 HDDs | 1:35 | N/A | N/A |
4. ESX
In short, ESX doesn’t work on the 5017A-EF, and VMware doesn’t support it.
I tried ESXi 5.5. Though it worked to a large degree (and could even power on and run a VM), it had these symptoms:
- It was unable to save its configuration to the USB memory stick, so everything reverted to post-install defaults following every reboot. I’m guessing this had something to do with the computer’s USB 3.0 controllers not agreeing with ESX.
- ESX showed bizarre CPU load readings, even with no VMs running at all.
Copyright © 2015 Robroy Gregg