How To Build Your Own XenServer With RAID


SMS IT Group


Written by Scott G. McCarthy


Rethink Your IT Infrastructure with New Technology

About the author: Scott G. McCarthy is the Director of SMS IT Group in Los Angeles, CA. Mr. McCarthy has been performing PCI and HIPAA audits for well over 9 years and has a 100% pass rate, never having failed an audit to date. He has worked with everyone from small doctors’ offices to Fortune 500 corporations and law firms, and has successfully passed PCI audits for law firms, corporations, and some of the world’s largest banks. Mr. McCarthy can be reached through the SMS IT Group at 213-222-5182.

The Great Philosophical Questions

To start, virtualization can be expensive, especially in this era of slashed IT budgets, when companies are forced to choose between payroll and equipment. Unless you work for a company profiting through this awful economy, you are probably struggling with how to cut costs. When SMS IT Group decided to virtualize its colocation environment, we struggled with costs like everyone else, so we decided to take a fresh approach this time. One of our engineers asked, “Why don’t we do it like Google and build our own servers?” Being so conditioned to pick up the phone and order a Dell, I almost immediately dismissed the idea. After some discussion, we decided to give it a try.

The first question we had to answer was which virtualization platform to run. So we went out and looked at all of them, from VMware to Proxmox to VirtualBox. After a couple of weeks of testing, we decided to go with XenServer. Why? Because XenServer was free, and the quality of the product was at the level we needed. Although it was no VMware, it fit our budget and had most of the features we required. Citrix didn’t shove support contracts down your throat or make you purchase a bloated licensing agreement to start. We actually found the Citrix approach refreshing: start out with the free product, and if you need support or advanced features down the line, you can purchase them.

Pondering the Server – To Build or Buy?

The second question we asked ourselves was what hardware to use. Obviously Dell or HP was a safe bet but demanded one hell of a budget. We also realized that many technologies had changed the server game, including SATA drives, third-party RAID cards, and other features. I also started to ask myself why Google built their own servers when they could simply cut a bulk purchasing deal with Dell. That question started to eat at me. So I sat down and priced out a comparable Dell server against a do-it-yourself clone server. I was shocked at how much we could save building our own servers. On top of that, if we built our own servers, we wouldn’t need the expensive 4-hour support contract, because we would know how to fix them ourselves and have spare parts on hand.

So I ran the numbers and was shocked, I mean absolutely shocked, at how much we could save building our own boxes and not purchasing the support agreements. Yes, we would have to put some time into it; however, once servers are built, they pretty much just run unless a component breaks. And let’s be honest, these days not much really breaks anymore. How hard is it to put a new memory chip on the board or replace a motherboard or power supply? Maybe a half hour of time? An hour at most? After my cost-savings excitement kicked in, I commissioned a project to build a clone server that could run and support XenServer. I still had this voice in the back of my head telling me I was going to regret this decision, but I was determined to push forward. I had to know!

The Decision Is Made – Build It!

So the next step was to figure out what hardware to purchase for our Frankenstein server. Everything was pretty straightforward, but one of the challenges we set for ourselves was to build the server with consumer-grade components. If we were going to make this work, the cost savings had to be significant enough to make up for all the time invested. We were not going to run out and buy a server-grade motherboard, processor, and HP RAID card, because that would defeat the whole purpose. So we ordered everything we needed pretty quickly, with the exception of one component: the RAID card!

XenServer is very flexible and has some great tools built into it. I personally think the management console is outstanding. With that said, when you compare XenServer to its main competitor, VMware, there are many challenges and shortcomings to overcome. One of the biggest challenges SMS has faced is hardware compatibility, specifically related to RAID controllers. XenServer simply has one of the worst Hardware Compatibility Lists in the industry!

A simple search on Google will turn up hundreds of people asking which RAID controllers work with XenServer and how to build a home test environment, or how to build a XenServer box from lower-cost components. Unfortunately, almost none of those posts ever gives a clear answer as to which components work and which don’t. We were stuck on what RAID card to use, and there was no clear answer even after reading through hundreds of posts!

A Change of Plans – Cold Feet

After hitting this RAID card obstacle, I started to worry about whether we were going to pull this off. With our project schedules, clients to attend to, and other responsibilities, I decided to fork the plan in case our XenServer clone experiment failed. We needed a plan B. So I made the decision to order a Dell server in the event our clone experiment failed or took longer than anticipated. Plus, if we ever got our server going, we could test it against a commercially built box, and we desperately needed to upgrade our aging email system.

So we would move forward with two server projects. The first project would continue on with our clone server, and the second would be done with a Dell xxx series server. Since Dell isn’t cutting us a check, I am not going to be an advertisement for their servers in this article. I will just say that the hardware is apples to apples, with the exception of the high-speed drives in the Dell. So we ordered the Dell server, then the clone components, and waited for them to show up. Our goal was now to build two servers and decide which direction we would go long term.

The Nuts and Bolts of the Clone Server

I am going to preface this section by saying the information listed below is the result of a lot of blood, sweat, and tears. Although we spent a great deal of our own time and money figuring this out, I decided to release this article because I didn’t want to see anyone else struggling with XenServer hardware compatibility after reading through hundreds of unanswered posts. Get it together, Citrix! How hard can it possibly be to list some third-party RAID cards that work with XenServer? Seriously!

Now remember how I said that we ordered everything? Yeah, everything except the RAID card for the clone server! We still could not figure out this damn RAID card! One night, browsing around, I ran across a really obscure post about LSI RAID cards working with XenServer even though the cards were not certified for it and LSI made no XenServer drivers. When I went to the LSI website and looked at their models, they all had VMware support and drivers but no XenServer.

Now, in this post I ran across, the guy mentioned that the LSI card would use the built-in Linux drivers and give you the ability to RAID the drives through the Linux driver. Although it sounded good, I was still not totally sold on the idea. HP and Dell RAID cards cost a small fortune and would break the budget of our experiment. So after some contemplation and a roll of the dice, I decided to purchase an LSI 3ware 9650SE-4LPML RAID card. This card supports 4 SATA drives and claims to support Linux. We made the final decision to try SATA drives in our Frankenstein server with the LSI card. So I logged onto Amazon and placed the order.

A few days later, we had everything we needed in front of us. We had all the components to build our own server and a brand new Dell xxx server.

The Easy Route – Load Up The Dell Server

As expected, we took the Dell server out of the box, popped in the USB key for XenServer, and loaded it up. Right away, it saw the RAID array, and the install finished in about 45 minutes. It was absolutely mindless and simple. Our first Dell XenServer was ready to go in less than 2 hours from unboxing to complete software setup. I was starting to rethink my decision. Man, these Dell servers sure aren’t cheap, but they are simple and painless to load: no assembly, no struggling with components, and peace of mind.

But there were still two major problems with the Dell server: it was expensive, and the support contract was not cheap! We were basically at the mercy of Dell to provide spare parts and fix the thing if it ever broke. After dealing with my fair share of drunk techs and guys fresh out of technical school, commercial support contracts have always failed to impress me for what you pay for them.

The Road Less Traveled – The Clone Server

It’s almost laughable to think about building your own server these days. It is so convenient to just purchase one and load it up. They are pretty much a commodity and I guess that’s why so many people simply don’t want to bother building anything anymore. But the Google question still nagged at me like a scorned ex-wife. Why did Google build their own servers? There must be a reason!

So it was time to travel down the road less traveled. We sat down and built out our clone box. We put together the case and components, installed the RAID card, and got ready to fire it up. The build was surprisingly quick, only about an hour. I hit the power button and almost fell out of my chair when the BIOS screen came up and it started reading the XenServer install key!

I knew I wasn’t in the clear yet. The big question was whether the RAID card was going to work, and whether it would be reliable enough to trust in production. So when XenServer got to the provisioning screen, I was a little confused and worried. I saw a choice for the LSI RAID set and then saw the regular 4 drives all listed out separately. My immediate thought was, great; so much for this! But I decided to pick the LSI set and continue on, feeling disappointed. Well, XenServer finished loading up, and I thought there was no way this RAID set worked.

So here was the big test. I was going to pull one of the drives and probably watch the server crash and burn. So I pulled the first drive, and to my total surprise, it was still running. Wait, was this possible? Was it really working? Yes, it was! So I rebooted the box, went back into the LSI tool, and it showed one drive offline and the RAID set in a critical state. So I took a new drive, threw it in the box, and rebuilt the RAID array. Guess what? It actually rebuilt!
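For reference, the same degraded-array checks and rebuild can also be driven from the XenServer console with LSI’s tw_cli utility instead of rebooting into the card’s BIOS tool. This is a generic sketch, not the exact steps we took: it assumes tw_cli is installed, the card shows up as controller c0 with unit u0, and the replacement drive went into port 3; exact syntax can vary by firmware, so check the 3ware CLI guide.

```shell
# Show controller units and drive ports; a failed drive shows the unit as DEGRADED
tw_cli /c0 show
# Status of the RAID unit itself
tw_cli /c0/u0 show
# After hot-swapping a replacement drive, make the controller detect it
tw_cli /c0 rescan
# Kick off the rebuild onto the new drive in port 3 (example port)
tw_cli /c0/u0 start rebuild disk=3
```

Re-running `/c0/u0 show` afterwards reports rebuild progress as a percentage, which is handy for monitoring once the box is racked.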

I am not going to bore you with all the testing details, but I will tell you that we tested the hell out of the RAID card, and the server worked perfectly every time. To my surprise, our clone server actually worked! A week later, we put the server in the colocation facility alongside the new Dell server and loaded VMs on both. I decided to load our new email server on the clone and see how it held up against the Dell.

So How Is The Performance?

Believe it or not, for the past six months our clone server has been running a virtualized email server with hundreds of users, all on XenServer. It also runs 4 other virtual servers, all of which are extremely demanding. After running the clone server for 6 months, it has been wonderful! When you compare it to the Dell, it holds its own. The only difference I can see is the speed of the high-speed drives in the Dell compared to our consumer-grade SATA drives. But you know what? The speed difference is not that noticeable! Had I put high-speed drives in the clone server, the performance would likely be the same.
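If you want to put a rough number on that drive-speed difference yourself, a simple sequential-write test run on each box is one generic way to compare them. This is not the testing methodology we used, just a common technique; the path and sizes below are arbitrary examples.

```shell
# Rough sequential-write throughput check; run the identical command on each server.
# conv=fdatasync flushes data to disk before dd reports, so the page cache
# doesn't inflate the MB/s figure printed on dd's last output line.
dd if=/dev/zero of=/tmp/disk_test.bin bs=1M count=256 conv=fdatasync
# Clean up the test file afterwards
rm -f /tmp/disk_test.bin
```

Larger counts give steadier numbers; just make sure the test file lands on the storage you actually want to measure.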

So What Happened? What Was The Long Term Conclusion?

So it’s been about 6 months since our little experiment, and our clone server is still running at our colo, now supporting our email system, MySQL server, management server, and monitoring server. It holds its own against our high-priced Dell servers. And you know what? We have built 6 more since the experiment. Why, you ask? Why not just stick with the Dell servers?

For several reasons but I will bullet out the main points below:

  • Cheaper: Each server costs about 1/3 of what a new Dell server with the same specifications costs, and that is a big deal. I don’t care who you are, saving money rocks!
  • Peace of Mind: Because we build our servers, we know exactly how they are built, what components we use, and how to fix them when they break. I don’t have to rely on some drunk technician coming out and blowing away my RAID set. Plus, we can stock all the spares we need because the components are cheap due to all the money we saved.
  • Faster Build Time: I can get components to build my own server a hell of a lot faster than ordering a custom Dell and waiting for Dell to build the thing and ship it to me. Dell typically takes 3 weeks to get a server to me and I can build my own in about 5 days.
  • More Customizable: Although commercial server manufacturers give you a lot of options when you order a server, they don’t let you control everything. When you build your own box, you have total control from the memory manufacturer to the motherboard options you want. Yes, this may be going a little too far but if you want some custom options, you can add them.

When I have to decide which server to load my latest VM on, I always feel more comfortable putting it on one of our clone servers. Why? Because I know we can fix them ourselves, and we won’t be in a bind if we don’t renew the Dell support contract or the tech can’t get onsite in 4 hours. We have total control over the servers, and since our servers are the lifeblood of our operation, I sleep better at night knowing our critical apps run on boxes we can fix.

So Why Does Google Do It?

Now I finally get it! Google builds their own boxes because it gives them control, saves them an incredible amount of money, and lets them fix their own components without relying on a third-party vendor who may or may not provide good service. Is building your own servers and running XenServer for everyone? Of course not! It takes time to research the components, a willingness to run components that are not certified for XenServer, and taking responsibility for your own servers when they break. Basically, you own it. Personally, I’m good with that!

After we figured out the RAID array mystery, everything fell into place. We have not purchased a brand-name server since the experiment. We get really good quality servers that run great and are extremely reliable. On top of that, we can fix them quickly and know exactly how to handle RAID issues and hardware problems. I can’t ever see us going back to brand-name boxes.

The Most Important Part – The XenServer Hardware Compatibility Listing

Like I mentioned before, our goal was to build servers on lower-cost, consumer-grade components. Some people may read this and think we are crazy; if you do, I will be happy to walk you through our colocation facility in Downtown Los Angeles and show you these servers chugging away with hundreds of users, running Windows and Linux VMs just as stably as the brand-name boxes.

The point most people forget is that the difference between “consumer” and commercial is not that big anymore; most of it is just marketing. Is a high-end consumer motherboard really that much different from one branded as a “server” motherboard? Of course not. It’s the same thing server manufacturers tried to pull when they would tell you that you had to buy their brand of memory or the server would not run properly. Don’t get so hung up on the labels. A good motherboard is a good motherboard. Its purpose is to run an operating system, and components can’t tell the difference between Windows 7 and Windows Server. It’s all 1s and 0s and mathematical processing at the end of the day.

These servers have proven to be reliable and cheap, and they run XenServer as well as any name-brand box we have ever purchased. To this day, I have not hit one single issue with them, and I always choose to load our most critical VMs on them because I know we can fix them if something goes wrong. I don’t have that same comfort with a brand-name box.

Below are the detailed specifications we used to build our custom server. We still use the same components to this day and never have issues with the servers. You will notice that we chose lower-end processors and some mid-grade components. Again, the servers all run perfectly. You are welcome to use this list to build your own production or test servers as you see fit. I can tell you from experience that this equipment configuration works, as it has been running in production for quite a while.

Of course, don’t use a Netgear NAS when your production environment calls for an EMC SAN. Use your best judgment as to what will work and won’t work for you.

Our Clone Server Specifications
+ RAID CARD/CONTROLLER: 3ware 9650SE-4LPML 256MB PCI Express to SATA II RAID Controller
(Keep in mind this controller supports SATA hard drives – 4 maximum)
(This controller works perfectly with XenServer without loading a custom driver. It works right out of the box with the built-in Linux drivers. We fully tested this controller and it works great with XenServer 6.2)
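If you want to confirm the card was picked up by the stock driver before provisioning storage, a couple of standard Linux commands from the XenServer console will show it. The 3w-9xxx module name is the standard in-kernel Linux driver for the 9650SE family; the exact output will vary by host.

```shell
# Verify the 3ware 9650SE is visible on the PCI bus
lspci | grep -i 3ware
# Check that the stock kernel driver is loaded (listed as 3w_9xxx)
lsmod | grep 3w
# Driver initialization messages, including the detected RAID unit
dmesg | grep -i 3w-9xxx
```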

+ SERVER CASE: NORCO 2U Rack Mount Six Hot-Swappable SATA II, III/SAS 6G Drive Bays Server Chassis RPC-2106
(We like this case because it has 6 SATA drive bays and supports full size and SSD drives)
(A full size ATX power supply WILL NOT fit)

+ SERVER POWER SUPPLY: SeaSonic SS-500L2U 500W Single 2U Server Power Supply – 80PLUS Gold – OEM
(Fits perfectly in the server case. You can also buy a redundant power supply as well for this case)

+ MOTHERBOARD:
(Excellent quality motherboard for servers. Supports AMD FX processors)

+ PROCESSOR: AMD FD6300WMHKBOX FX-6300 6-Core Processor Black Edition
(Believe it or not, AMD processors have come a LONG way. Years ago we would not touch them. Now, we are confident enough to use them in full production server systems. We beat the hell out of them in testing and they are rock-solid processors. I would put this processor up against any Intel processor.)
(AMD also makes an 8 core processor!)

+ MEMORY: Any compatible memory, at least 32 GB
(Just buy something of good quality)

+ HARD DRIVES: Seagate SATA Drives
(We extensively tested WD and Seagate and have never had a Seagate fail)
LINK: Seagate Barracuda 1 TB HDD SATA 6 Gb/s NCQ 64MB Cache 3.5-Inch Internal Bare Drive ST1000DM003

+ ISCSI NAS – NETGEAR ReadyNAS 102 2-Bay Diskless Network Attached Storage (RN10200-100NAS)
(We extensively tested several low-end NAS units for our test environment and found the Netgear NAS to be the most reliable and compatible. Make sure you update the firmware ASAP. THIS IS AN OPTIONAL COMPONENT, and only needed if you want to test iSCSI or need more storage. This unit works perfectly with XenServer as a software iSCSI storage device and even supports CHAP authentication and IP access lists. We love this unit)
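For reference, attaching a unit like this to XenServer as a software iSCSI storage repository is done with the xe CLI. The commands below are a generic sketch: the IP address, IQN, SCSI ID, and CHAP credentials are placeholders you would take from your own NAS configuration and from the sr-probe output.

```shell
# Probe the target to discover its IQN and the SCSI IDs of its LUNs
# (the probe intentionally errors out while printing the available values)
xe sr-probe type=lvmoiscsi device-config:target=192.168.1.50

# Create a shared iSCSI SR with CHAP authentication enabled
xe sr-create name-label="ReadyNAS iSCSI" shared=true type=lvmoiscsi \
  device-config:target=192.168.1.50 \
  device-config:targetIQN=iqn.1994-11.com.netgear:example-target \
  device-config:SCSIid=3600140500000000000000000000000001 \
  device-config:chapuser=xenuser device-config:chappassword=secret
```

The same SR can also be created through the New Storage wizard in XenCenter, which fills in the IQN and SCSI ID for you after probing the target.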

The RAID CONTROLLER – The Most Important Point of This Article

The LSI 3ware 9650SE-4LPML 256MB PCI Express to SATA II RAID Controller is one of the best controllers we have found that works with XenServer 6.2, is not a Dell- or HP-branded controller, and supports SATA drives. If you need a controller for XenServer that supports SATA or SATA SSD drives, this controller is excellent. The best part is that it doesn’t require a custom software driver to be loaded on XenServer during the installation. All you have to do is configure the RAID array in the controller firmware and install XenServer. During the install, you will see the LSI RAID array listed as an option when selecting drives.

DISCLOSURE ABOUT THIS CARD: Two points. First, there is an optional battery backup unit available for the card. If you want to enable the write cache on the card, I highly suggest the battery backup unit; if you don’t need to enable it, skip the battery backup. We don’t use the battery backup in our test environments and it works fine. Second, the card comes with a full-size PCI slot mount. We had to replace the full-size mount with a low-profile mount to fit it in the case. Not a big deal, as you simply remove two screws and put on the new mount, but something to keep in mind.

About SMS IT Group

SMS IT Group is a Los Angeles-based IT consulting group with three divisions: a HIPAA/PCI/Records group, an IT Security group, and an IT Infrastructure group. We work with existing IT departments to assist with projects, and we support companies that choose to outsource part or all of their IT needs. SMS supports both large and small companies, with dedicated experience serving law firms, medical businesses, and doctors. Feel free to call us if you would like to discuss your IT needs. We offer a free consultation and, in some cases, a few free hours of consulting!

Scott G. McCarthy, SMS IT Group