Simon Haslam's Oracle Fusion Middleware blog

White-box Server Build 2014 - Part 1

Happy New Year readers! This post is about building a new "white-box" server to run VMware ESXi, using one of the latest Intel Core i7 "enthusiast" processors and components sourced in the last quarter of 2014 (hence the title). White-box servers are ones you build yourself, rather than buying from a single vendor such as HP or Oracle, typically to get something tailored to your needs at a much lower price.

Requirements

Selecting compatible components is the most challenging part of building a white-box server, especially if, like me, you don't do it very often, since PC parts change very quickly. Component specifications depend very much on your requirements (e.g. games machine, HTPC, home lab server, NAS etc.) so before I go further I'll describe mine.

This will be a build machine, running many virtual machines for O-box test environments. It will run the free version of VMware vSphere ESXi (5.5 initially). I have been running VMware products on various lab servers since the free version of VMware Server in 2006 and, whilst I know there are alternatives, I have no reason to change as I don't need enterprise features (such as vMotion).

With active virtualised environments, over-committing memory doesn't work too well (e.g. Oracle VM / Xen doesn't even allow it), so plenty of memory is a key requirement. Veriton's current build servers each have 32GB, which was a lot of memory when they were new (2008-12) but isn't really enough for building modern production-style Oracle Fusion Middleware platforms like SOA 12c. So 64GB would still be a very significant improvement, and I can see 128GB being useful in the near future.
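
To make the arithmetic concrete, here is a minimal sizing sketch in Python - the per-VM figures are purely illustrative assumptions for a production-style SOA 12c topology, not measurements from my environment:

    # Rough memory sizing sketch for a production-style SOA 12c test topology
    # running as VMs under ESXi. All per-VM allocations are illustrative
    # assumptions, not measured values.
    vm_memory_gb = {
        "admin_server": 4,
        "soa_managed_server_1": 8,
        "soa_managed_server_2": 8,
        "osb_managed_server_1": 6,
        "osb_managed_server_2": 6,
        "web_tier_ohs": 2,
        "database": 12,
        "test_client": 2,
    }

    total_vm_gb = sum(vm_memory_gb.values())   # memory the VMs would like
    hypervisor_overhead_gb = 4                 # rough allowance for ESXi itself

    for host_gb in (32, 64, 128):
        headroom = host_gb - total_vm_gb - hypervisor_overhead_gb
        status = "OK" if headroom >= 0 else "over-committed"
        print(f"{host_gb:>3} GB host: {status} (headroom {headroom} GB)")

With those assumed figures a 32GB host is already over-committed before you add any extra test VMs, 64GB leaves a little headroom, and 128GB leaves plenty.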

On the other hand, over-committing processor cores does work well - in build workloads I find the system is either I/O bound or waiting on a small number of threads. Therefore I don't need lots of cores, but fast ones will help with build times.
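
By contrast, the vCPU over-commitment isn't something I worry about - here's a similarly hedged sketch (the VM line-up and per-VM vCPU counts are again just assumptions) of the sort of vCPU-to-thread ratio a hex-core, Hyper-Threaded host would run at:

    # Illustrative vCPU over-commit ratio for a hex-core, Hyper-Threaded host.
    # The per-VM vCPU counts are assumptions, not a real inventory.
    physical_cores = 6
    logical_threads = physical_cores * 2       # 12 with Hyper-Threading enabled

    vm_vcpus = [2, 2, 2, 4, 4, 2, 2, 1]        # vCPUs assigned to each test VM
    total_vcpus = sum(vm_vcpus)

    print(f"Total vCPUs allocated: {total_vcpus}")
    print(f"Over-commit ratio: {total_vcpus / logical_threads:.1f} vCPUs per logical thread")

A ratio like that is fine for build workloads that are mostly I/O bound; it would matter far more if every VM were CPU-busy at the same time.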

Storage volume requirements aren't huge - backups go to a separate NFS NAS - but IOPS will dictate build performance, so multiple devices may be needed. As it's a fully automated test server there's no need for RAID - everything can be recreated.

Finally, I need multiple NICs to handle the different networks (management, test, cluster etc.). Whilst I could trunk the VLANs, I might as well have the simplicity and the extra bandwidth of multiple NICs, since I know I will only be using GbE.

The relevance of the rest of this post will depend on how close your requirements are to mine.

Components Selected

Now I'll describe the parts I selected and give some of my reasoning in choosing them.

Some of the parts for my Haswell-E X99 build

Processor: Intel Core i7-5820K

This was a pretty easy decision - I wanted to stick with a commodity i7 processor and non-ECC memory for cost reasons, so when I was first thinking about this server at the end of 2013, it was a question of whether to go with:

  • a Haswell processor, like the quad-core i7-4770 3.4GHz released in June 2013, with a maximum of 32GB DDR3 memory,
  • a previous-generation "enthusiast" Ivy Bridge-E processor, like the i7-4820K 3.9GHz, with a maximum of 64GB DDR3 memory, or
  • the upcoming Haswell-E processors with non-ECC DDR4 memory, which the road map/leaks suggested could support 128GB.

At the time it seemed like Haswell-E might be available by the end of June 2014, and I was very busy anyway, so I decided to wait for it, since DDR4 is a significant change to memory too. In the end Intel released the processors at the end of August 2014, but it has taken the quiet time at the end of December for me to find the time to pull the pieces together.

Note that the other option I didn't consider in any detail was building a Haswell-EP Xeon based system with ECC memory. I'd expect this to be much more expensive for comparable performance, though the current price premium for non-ECC DDR4 might reduce the difference for a few months.

Out of the Haswell-E range there are three processors: the hex-core i7-5820K and i7-5930K, and the octo-core i7-5960X - priced around £250, £380 and £650 + VAT respectively. The latter is the money-no-object gaming PC option and I don't need the extra cores. The difference between the i7-5820K and i7-5930K is primarily the number of PCIe lanes (the i7-5820K only supports one x16 slot, whereas the others support two) - presumably this is important if you want to run two high-end graphics cards with perhaps 4 displays. Anyway, the i7-5820K is more than good enough for me (and actually only about 1/3 more expensive than the i7-4770).

Note: the desktop -E processors, like their Xeon server counterparts, don't have any on-board graphics so you'll need a separate graphics processor of some sort (see later).

Motherboard: ASRock X99 Extreme4

Now this decision did take a lot of pondering! I had used an ASRock Z79 Extreme in a previous build and was impressed with its features for the price (e.g. it was one of the few mid-price boards offering VT-d pass-through at the time). However the Haswell-E processors have a new socket (LGA 2011-3) and a new chipset (X99) to go with the new non-ECC DDR4 memory... so there were three new technologies being launched last autumn.

I was worrying about the video too - I hadn't had to think about a separate graphics card before, as I'd only ever bought "proper" servers (which have management controllers that include basic video support) or used desktop processors with on-board graphics. I'm also used to having lights-out management, which would be useful if this server ends up in a co-lo.

Then I came across pre-release specifications for a nice-looking board from Supermicro which includes a management controller (BMC): the C7X99-OCE-F. To cut a long story short, I spoke to Supermicro in October about specifications and availability, was redirected to the people at scan.co.uk, and the general opinion was that it would take "about 2 weeks" to arrive. I wanted to order in mid-December so that I could build the server over the Christmas holidays, so I decided to find something else. Note: today I can see that boston.co.uk have the C7X99-OCE-F in stock, so maybe supply will quickly improve.

I decided to go for the ASRock X99 Extreme4 as it was readily available and I had read about one or two successful builds using it on the forums. The Supermicro board was also about £100 (77%) more expensive than the ASRock, though it does have a second NIC. On the other hand, I was going to install an extra 2-port GbE NIC anyway, and the ASRock has an Ultra M.2 slot (aka NGFF) which I can see being useful for the next SSD storage I buy.

ASRock X99 Extreme4 motherboard 

However most mid-range X99 boards would probably be suitable - for this amount of memory you are looking for the lowest-specification board that has 8 DIMM slots (many have only 4), otherwise you end up paying for features that you don't need (e.g. more advanced audio or wifi) and that probably use more power.

Memory: 8 x 8GB Ballistix Sport DDR4 PC4-19200

Finally for now, the memory. I actually bought the memory in October, wishfully thinking I might find some time to spec and build the server back then! Non-ECC DDR4 prices are still very high and I think it is in relatively short supply - remarkably for technology products, the price today (£666+VAT) is more than the £570 I paid back then! Anyway, I bought two 4x8GB kits (Crucial part number BLS4C8G4D240FSA) - I would have preferred to buy a single, matched 8x8GB kit but one isn't available, and at least all my modules were from the same batch.

Note: 8GB DIMMs are the largest non-ECC DDR4 modules you can get at the moment, and there's only speculation as to when 16GB DIMMs will become available (it being a niche market).

This Ballistix memory is rated at 2400MHz, has respectable 16/16/16 latency and seems to have plenty of over-clocking potential if I wanted to try that. Compatibility-wise it wasn't listed in ASRock's validated memory list, but some Crucial modules were, so this was a bit of a punt. On the other hand, the memory controller is now built into the processor so maybe there will be fewer motherboard compatibility issues in future.

 

These were the biggest decisions to make - I need to wrap up now but will write about the remaining components and build experiences soon.

SPOILER ALERT: this server is up and running VMware ESXi 5.5U2 (see my recent tweets: @simon_haslam).

New Haswell-E white box server is running!
