Virtualised Oracle Database Appliance POC - #2 Unboxing & Connecting Up
24 Jun 2013 by Simon Haslam (in Hardware)
Well, just like all the best blog posts about new mobile phone models, today I bring you... "Oracle Database Appliance X3-2 unboxing!" Whilst technically this may not be the most interesting article in my ODA POC series, we'll see what subtleties we can tease out from just what's on the outside.
On the right you will see the ODA packaging or, more precisely, the three separate boxes it arrives in. In case you needed any further convincing following my previous posts, the X3-2 is two servers and one (or two) storage array(s) loosely connected together, rather than a 2-node blade-style system like the ODA V1.
Looking on the bright side, at least no individual box is all that heavy - compare this to something like an HP blade enclosure, which is 10U tall and certainly not a one-person job to install in a rack without a fancy forklift.
So here's the first box open for one of the servers.
What is most significant in this photo is the access panel just below the component diagram label. This panel gives you access to the hot-swappable fans. To replace a faulty fan whilst keeping the server running you can slide the chassis out whilst still on its rails, by around 40cm (remember that), and take the panel off. All standard data centre stuff.
Here are the cables supplied with the ODA - I think all are 1m long.
The green and yellow ones are crossover cables directly connecting the PCI card ports for the interconnect. Assuming the servers are installed one on top of the other, these 1000mm long cables have to reach between sockets that are 1U, i.e. <45mm, apart. My initial expectation therefore was that very short cables would be supplied: 10cm CAT6 patch cables are easily available. But then when I looked at the X3-2 server's Service Manual, I saw it specifies the components that can be replaced without powering the server down: HDDs (front), power supplies (rear) and fans (top). Sliding the server out far enough to replace the fans therefore needs an extra 40cm of slack. The ODA Service Manual talks about sliding out the server to the "maintenance position", which will be the full depth of the server (~75cm), so, if you're going to do that, along with having a cable management arm (see later), you probably need close to 1m.
The lower photo shows the SAS cables - four for a single-array (or "storage shelf", as Oracle calls it) ODA - which have coloured bands corresponding to labels on the servers/storage to help with connecting up.
In contrast, cable lengths and cable management arms are not concerns you normally have with blade systems - their hot-replaceable components are mounted in the chassis or removable from the front of the blade.
Note: one small detail I do like is that Oracle provides printed labels for you to stick onto the components of your ODA - see below left:
Now, assuming we're unboxed and installed in a rack, we have to connect it all up. The picture you need (above right) is in the ODA Owner's Guide and applicable to the single storage shelf version we have for this POC.
Firstly, ethernet: the green and yellow cables are labelled NET0 and NET1 respectively. As we'll see in a later post, these correspond to eth0 and eth1 in Dom0. The 4 on-board NICs are also labelled NET0 to NET3 - within ODA they provide two separate bonded networks; in the case of this POC one is likely to be public, the other for storage. They correspond to eth2 to eth5 in Dom0. It's a shame there aren't labels stuck over the NET chassis printing to equate the ports to the internal interface numbers for simplicity. I noticed on the full-sized diagram (click the image above for it) that the yellow and green NET numbers are blanked out, so I suspect that someone intended them to be 4 and 5 but the Linux udev rules meant that they were assigned eth0 and eth1... which scuppered that plan! You also have an ethernet port to connect up for each server's ILOM management module.
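Incidentally, if you want to confirm the NET-port-to-ethX mapping on your own Dom0, udev pins each interface name to a NIC's MAC address in its persistent net rules. Here's a minimal sketch of what that file looks like - the MAC addresses below are invented for illustration, so check /etc/udev/rules.d/70-persistent-net.rules on your own ODA for the real ones:

```
# /etc/udev/rules.d/70-persistent-net.rules (illustrative fragment - MACs invented)
# The two interconnect ports on the PCI card ended up as eth0/eth1...
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:21:28:aa:bb:00", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:21:28:aa:bb:01", ATTR{type}=="1", KERNEL=="eth*", NAME="eth1"
# ...while the on-board NICs take the remaining ethN names
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:21:28:aa:bb:02", ATTR{type}=="1", KERNEL=="eth*", NAME="eth2"
```

And if you're still unsure which physical socket an ethX name refers to, `ethtool -p eth2` will blink that port's LED so you can spot it on the chassis.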
Next, the SAS cables. These are pretty self-explanatory - on each of the two servers, each SAS controller card connects to one of the storage shelf's controllers, giving 4 cables in total.
Finally you have 6 power cables to connect to 2 different PDUs, preferably on different phases (if local electrical regulations etc. allow).
The servers do come with cable management arms, but I really don't fancy anyone's chances of routing 2 or 4 SAS cables, 4 or 6 10GbE cables and 1 thinner ethernet cable, plus two power cables, along them - especially for the two-array ODA (i.e. up to 13 cables!). Instead I think I'd be tempted to leave a 1U gap between the servers and arrays, put a filler panel in at the front, and just store all those cables coiled up in there out of harm's way. Certainly cabling up the X3-2 is not pretty compared to the simplicity of the first version of the ODA.
Here are our connections so far. Obviously the cables are pretty cluttered, and yet these are only half the ones we'd have in production! Note: we're waiting for some 10GbE copper switch modules to be provisioned - more of which later.
Finally, a front view. As this is only a temporary installation for a few weeks, the rack rails have not been used - this is why the servers aren't quite aligned, but it does appear that the array will stick out a little too.
So there you are: our ODA is unpacked, installed, powered up and ready for action!
I hope you found this article interesting, but please be sure to follow the detailed instructions in the ODA Owner's Guide (e.g. Release 2.6 or later) if you have an ODA X3-2 of your own to install.
Stay tuned for the next ODA instalment, coming in the next few days...