Over the weekend I bought and collected two 8th-generation HPE MicroServers. Both have been upgraded to 16GB of ECC memory and have had their stock processors replaced with E3-1240 v2 processors. They also came with 10GbE cards fitted, although I currently have no infrastructure to make use of these.

The main reason for buying them is that we have been having awful problems with our broadband over the last few weeks. Although everything was working fine despite this for a while, there is a lot of stuff running on the router that at times pushes the load average very high, and that cannot be helping. I want to move as much as possible off the router, especially the very I/O-heavy backup system, so it can concentrate on its main job. Longer term, the plan is also to use these servers as a test-bed, instead of making changes on live systems all the time.

They seem to be neat bits of kit. I was a little concerned that the TDP of the replacement processors (69W) is substantially higher than that of the stock processor (35W), and other people on-line have said to stick to very low-wattage processors. However, this system was also available with a higher-wattage (55W - although HP list it as 65W) i3-3240, which is not far off the replacement, and the seller assured me that he has been running these servers hard with these processors in, without any issues and with good temperatures reported by the systems’ sensors.

Setting up

After testing with memtest overnight to satisfy myself that the processors and memory, at least, are functioning correctly, I set about getting one up and running. I expect that I may learn some lessons along the way, so the other is available to set up “properly” and migrate to if I do.

I was going to install VMware ESXi on them; however, there seems to be a consensus on the sites Google finds that KVM is faster for VM creation and start-up, a view echoed by Red Hat, who say running VMs are also faster, and community discussions agree. So I have decided to use KVM instead.
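For reference, getting a KVM/libvirt host going is only a couple of commands. This is a rough sketch assuming a Debian-style system; package names vary between distributions:

    # Check the CPU exposes hardware virtualisation (should print a non-zero count)
    grep -cE 'vmx|svm' /proc/cpuinfo

    # Install KVM, libvirt and the command-line tooling (Debian/Ubuntu package names)
    sudo apt install qemu-kvm libvirt-daemon-system libvirt-clients virtinst

    # Sanity-check that the host is correctly set up for running VMs
    sudo virt-host-validate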

I also have some Docker containers running that I want to migrate off the router. For the time being, I am going to set up a virtual machine to host Docker, as I am uncomfortable with how it takes over the netfilter FORWARD chain and I want my own firewall between it and the physical network, which a VM makes easy (the host’s firewall can be used to protect the VM hosting Docker). I do plan to explore what Docker does in more detail, but that will be the subject of a future blog post.
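To illustrate the idea (these are not my actual rules): with the Docker VM attached via a routed/NATed libvirt network, traffic to it passes through the host’s own FORWARD chain, so the host can restrict what reaches the VM regardless of what Docker does inside it. A hypothetical sketch, assuming the VM’s traffic leaves the host via the tap interface vnet0 and only HTTP/HTTPS should be exposed:

    # Default-deny forwarding on the host, then allow replies and the two
    # services the Docker VM should expose (interface name and ports are examples)
    iptables -P FORWARD DROP
    iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
    iptables -A FORWARD -o vnet0 -p tcp -m multiport --dports 80,443 -j ACCEPT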

iLO

The iLO (HP’s out-of-band management branding) can be configured to use the dedicated port or to share the first physical ethernet port, and in shared mode it can be put onto a VLAN. For the time being, I have left it on the dedicated port; however, this does mean that the two systems will tie up 6 ports on my switch, which I can reduce to 4 if I switch the iLO over to shared mode in the future.

For now, I just configured the switch port onto the management VLAN and gave the iLO a static DHCP assignment on the DHCP server.
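For anyone doing similar, a static assignment is just a host entry on the DHCP server. A hypothetical fragment, assuming ISC dhcpd and with made-up MAC and IP addresses:

    # dhcpd.conf - pin the iLO to a fixed address on the management VLAN
    host microserver1-ilo {
        hardware ethernet 00:11:22:33:44:55;
        fixed-address 192.168.10.21;
    }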

Setting the server name

As both of these systems are second-hand, they came with the previous owner’s host-names configured. Apparently resetting the BIOS and iLO to defaults does not clear this configuration.

Annoyingly, the name is set in many different places; so far I’ve found server-name (BIOS), server-name (iLO), server FQDN (iLO) and iLO host-name.

BIOS

“Server Asset Text” -> “Server Info Text” -> “Server Name”

iLO

“Administration” -> “Access Settings” -> “Server Name”

“Administration” -> “Access Settings” -> “Server FQDN / IP Address”

“Network” -> “iLO Dedicated Network Port” -> “General” -> “iLO Subsystem Name (Hostname)”

The RAID card

The systems come with an HPE Dynamic Smart Array B120i controller built in. There are many complaints about poor performance with this controller, both with VMware ESXi (although some claim workarounds fix the performance problem) and with Linux, even when using HP’s binary drivers and pre-built install images.

Drivers are only provided for Red Hat Enterprise Linux (5, 6 & 7 - no drivers for RHEL 8 at the time of writing), SUSE Linux Enterprise Server and VMware ESXi up to version 6.5 (some people report that installing 6.5 and then applying the update to 6.7 works, but HP do not seem to have released an official install disk image for 6.7 at the time of writing).

It transpires from the manual that this is a software RAID solution implemented in the driver, but with on-disk compatibility with HP’s hardware Smart Array products:

HP Dynamic Smart Array is a RAID solution combining a storage host bus adapter (HBA) and proprietary software components. Eliminating most of the hardware RAID controller components and relocating advanced RAID algorithms from a hardware-based controller into device driver software lowers the total solution cost, while still maintaining comparable RAID protection and full compatibility with Smart Array disk format, configuration utilities, and management/monitoring software.

The consensus on-line is to put the SATA controller into standard AHCI mode, disabling the RAID component, and use OS-level software RAID if required. Some users have installed separate PCI-express RAID cards and plugged the SATA back-plane’s SAS connector into them to work around the problem - certainly an option for the future if storage performance becomes an issue, although it would mean sacrificing the 10GbE cards (there is only 1 slot).
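If I do add a second disk later, OS-level mirroring is straightforward with mdadm. A minimal sketch, with example device names (this destroys any existing data on the disks):

    # Create a two-disk RAID1 array and check that it is resyncing
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
    cat /proc/mdstat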

The setting for disabling the RAID controller is in the BIOS: “System Options” -> “SATA Controller Options” -> “Embedded SATA Configuration” -> “Enable SATA AHCI Support”.

Hard disk

Just to get up and running, I installed an old 2TB (plenty of space for VMs) spinning disk in the first disk bay. My only spare SATA SSD went into my PlayStation so this was the best option I had to hand.

Installing the OS

Ideally I would have just used my PXE boot service at home to kick off the install; however, it has been broken for a while (fixing it is on my long to-do list). The system has a BIOS, despite only being 3-4 years old, so my existing UEFI USB boot stick would not work and I had to dig out a very old one to get it going.

The install was a typically straightforward process. I partitioned the disk into a small(ish) /boot (5GB) and the rest as a physical volume for encryption, on top of which LVM was configured. I have been routinely encrypting all of my systems, to protect against data theft if the systems themselves are physically stolen.
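The installer handles this, but for the record the equivalent by hand looks roughly like the following (device names are examples only):

    # Encrypt the large partition, open it, and turn the mapped device into an LVM PV
    cryptsetup luksFormat /dev/sda2
    cryptsetup open /dev/sda2 sda2_crypt
    pvcreate /dev/mapper/sda2_crypt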

The encrypted volume had a single volume group created on it, named after the host-name (so that it will not conflict with any local volume groups if the disks are put into another machine in the future, e.g. for data recovery). I created logical volumes for swap (32GB), / (5GB), /usr (10GB), /srv (10GB), /var (20GB), /tmp (5GB), /home (20GB, 0 reserved blocks) and /var/lib (500GB). KVM and Docker keep their images under /var/lib (/var/lib/libvirt/images and /var/lib/docker, respectively), hence the very large size. This leaves plenty of unallocated space to expand any of these volumes as required in the future.
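Expressed as commands, that layout is roughly the following (the volume-group name is a stand-in for the real host-name, which I am not reproducing here):

    # Volume group on the encrypted PV, then one LV per mount point
    vgcreate microserver1 /dev/mapper/sda2_crypt
    lvcreate -L 32G  -n swap    microserver1
    lvcreate -L 5G   -n root    microserver1
    lvcreate -L 10G  -n usr     microserver1
    lvcreate -L 10G  -n srv     microserver1
    lvcreate -L 20G  -n var     microserver1
    lvcreate -L 5G   -n tmp     microserver1
    lvcreate -L 20G  -n home    microserver1
    lvcreate -L 500G -n var_lib microserver1
    # /home gets no reserved blocks
    mkfs.ext4 -m 0 /dev/microserver1/home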

I selected a minimal install (or, more specifically, deselected all the optional packages) then used SaltStack to install and configure my usual selection of base packages, get icinga2 (see my previous blog post for generating a new certificate) & munin monitoring it, and get backups running.
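As a flavour of what that looks like, here is a trimmed-down state of the kind I apply (the state and package names are just illustrative):

    # base/packages.sls - install the usual base tooling on every host
    base-packages:
      pkg.installed:
        - pkgs:
          - vim
          - tmux
          - munin-node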

Next steps

The next tasks are:

  1. Decide whether to stick with configuring the network via /etc/network/interfaces or use systemd-networkd or NetworkManager.
  2. Set up an LACP-based LAG between the two Ethernet ports (see the sketch after this list)
  3. Get KVM up and running
  4. Start migrating services from the router
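For item 2, if I stay with ifupdown, the bond would look something like this hypothetical /etc/network/interfaces stanza (it needs the ifenslave package, a matching LAG configured on the switch, and real interface names and addresses in place of the examples):

    # 802.3ad (LACP) bond across the two on-board NICs
    auto bond0
    iface bond0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        gateway 192.168.1.1
        bond-slaves eno1 eno2
        bond-mode 802.3ad
        bond-miimon 100
        bond-lacp-rate fast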