I was looking for a replacement for my dd-wrt router. It has served me well, but it just didn’t have the features I wanted anymore; I needed something a bit more powerful. We use pfSense at my office and I really like it, so I decided to go with that. As I was pricing out a Mini-ITX system to build for it, I quickly discovered that the cost to turn it into a full virtualization host would not be much more: basically an extra $40 for an SSD instead of a thumb drive and an extra $10 for more memory.
With that in mind I ordered all the components I needed and set about building my system. I wasn’t able to use ESXi, which I am familiar with, but I kind of knew going in that would be the case. I actually wanted something I could manage purely from my Mac instead of having to run VMware Fusion to fire up Windows just to manage an ESXi host anyway. The chipset I was looking at (the J1900, since it is quad-core) also seemed to have problems with Xen, so I decided to just use Ubuntu.
I started with Ubuntu 14.04 since it is the most recent LTS release and, truthfully, it worked fine. I rebuilt the system a day later (actually just upgraded it) to 15.04 so I could get the latest KVM and libvirt, which gave me the ability to do live backups. With QEMU/KVM 2.1+ you can snapshot a running VM, back up the base disk image, and then merge the smaller snapshot disk back into the base. In previous versions the data moved the other way: if you had a 4GB disk image, all of that data would be copied from the base disk into the new snapshot disk, and then the base disk would be deleted. That is a lot of extra data copying that I didn’t want to subject my SSD to.
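That newer snapshot-and-commit flow boils down to three commands. A rough sketch, with the domain name, disk target, and paths as placeholders and error handling omitted:

```shell
# Live-backup sketch for KVM/libvirt (needs QEMU 2.1+ for blockcommit).
# Writes go to a temporary overlay while the base image is copied out,
# then the overlay is merged back into the base.
live_backup() {
    vm=$1      # libvirt domain name, e.g. pfsense
    disk=$2    # disk target inside the guest, e.g. vda
    base=$3    # path to the base qcow2 image
    dest=$4    # backup destination folder

    # 1. Redirect new writes into a throwaway overlay file
    virsh snapshot-create-as "$vm" backup-snap \
        --disk-only --atomic --no-metadata

    # 2. The base image is now stable; copy it to the backup folder
    cp "$base" "$dest/"

    # 3. Merge the overlay back into the base and pivot the VM onto it
    #    (the leftover overlay file can be deleted afterward)
    virsh blockcommit "$vm" "$disk" --active --pivot
}
```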
So, hooking the new computer up to my 55″ TV and sitting 4 feet away due to short cables, I set about configuring it just enough to be able to remote in. That may sound like a great way to work on a computer, but at 4 feet a 55″ TV is just a bit too big, especially when you are trying to read text on a screen. Anyway, I did a base Ubuntu Server install with SSH and virtualization enabled in the installer. After that (remotely) I was able to configure the NICs for bridge mode: NIC1/LAN (p3p1) was set up as bridge br0 with a static IP address, and NIC2/WAN (p4p1) was set up as bridge br1 with no IP address.
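For reference, the bridge setup in /etc/network/interfaces looked something like this (the addresses are examples, the interface names are what my board reported, and the bridge-utils package is required):

```
# LAN NIC enslaved to br0, which carries the host's static IP
auto p3p1
iface p3p1 inet manual

auto br0
iface br0 inet static
    address 192.168.1.2       # example host address
    netmask 255.255.255.0
    gateway 192.168.1.1       # example: the pfSense VM's LAN IP
    bridge_ports p3p1
    bridge_stp off

# WAN NIC enslaved to br1, no IP on the host side
auto p4p1
iface p4p1 inet manual

auto br1
iface br1 inet manual
    bridge_ports p4p1
    bridge_stp off
```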
Next I installed lxde and xrdp, which lets me get a graphical desktop on my Mac via the Remote Desktop client. I did not want to have to hook this little server up to the TV and a keyboard/mouse every time I needed the console of a VM. This actually works pretty well: I now have a headless server that I can RDP into and work on the VMs while sitting on my couch.
Okay, so after that was ready I fired up Virtual Machine Manager and created a new VM for pfSense. I gave it 512MB RAM and 4GB storage, which is plenty: it used 1GB for swap, the full install only used about 500MB, and it still has 2.5GB free. I configured two network cards linked to br0 and br1, attached the pfSense 2.2.3 ISO, and fired up the VM. Installation went perfectly smoothly and behaved just like installing on real hardware. After a few minutes of basic configuration (IP address, etc.) I was able to switch to Safari and finish the setup and installation of pfSense. A bonus is that pfSense 2.2+ (maybe earlier, but I think it became fairly stable in 2.2) includes virtIO drivers. This means when I shut down the host, it will cleanly shut down the pfSense VM first before shutting down the host.
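For anyone who prefers the command line, a rough virt-install equivalent of what I clicked through in Virtual Machine Manager would look like this (the disk path and ISO name are placeholders for my setup):

```shell
# Sketch of creating the pfSense VM from the CLI instead of the
# Virtual Machine Manager GUI. Disk path and ISO name are placeholders.
VIRT_INSTALL=${VIRT_INSTALL:-virt-install}  # overridable for a dry run

create_pfsense_vm() {
    "$VIRT_INSTALL" \
        --name pfsense \
        --ram 512 \
        --vcpus 1 \
        --disk path=/var/lib/libvirt/images/pfsense.qcow2,size=4,format=qcow2 \
        --cdrom /var/lib/libvirt/images/pfSense-2.2.3.iso \
        --network bridge=br0,model=rtl8139 \
        --network bridge=br1,model=rtl8139 \
        --graphics vnc
}
```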
Performance on pfSense was good for my use. The NICs used for the VM were emulated RTL8139s (I think), which have pretty high interrupt CPU usage inside the VM. I gave pfSense a single CPU, and at 8Mb/s (my max Internet speed) it was hitting 50% CPU usage. That does not mean it would top out at 16Mb/s; at 1.5Mb/s it was already hitting 20% CPU usage. If I had to guess, I would say it would max out at around 25Mb/s, and a second CPU might double that. My Internet is only 8Mb/s (rural area), so I have no way to test.
I will note that I was able to drop CPU usage to around 5% by switching the NICs to the VirtIO paravirtualized NIC driver. The system worked perfectly that way for non-local traffic; if my laptop tried to connect to the Internet it worked fine. But if the VM host, or another VM running on the host, tried to reach the Internet there were problems. Pings worked fine, UDP was untested, but TCP did not work. A packet trace showed the packets going out just fine and being received by the remote host, but the remote host was ignoring the TCP connection requests. I can’t say why.
Everything looked good in the tcpdump data, but obviously something was being changed that caused issues. As soon as I switched the NIC back to the RTL8139 driver, everything started working again. I don’t know if this is a KVM issue or a pfSense virtIO driver issue. Whenever I next update either of those, I will probably try virtIO again. I might also try using virtIO on just the LAN or just the WAN and see if things still work. I am guessing it has something to do with the LAN port being bridged to other devices, but I am not sure. One last thing that could be tried is one of the other NIC drivers in KVM; there are 4 or 5 of them, and the RTL8139 was just the default.
So at this point I have a $300 machine with 55GB of SSD storage, 3GB of RAM, and 88.5% of the CPU still available. I installed another Ubuntu VM (actually I migrated it from VirtualBox on my Mac Mini), which is what I use at home for Node.js development. As expected, the host didn’t even twitch at having it on there since it is a very low-activity machine.
Now I wanted to get all this stuff backed up. In truth, I actually reached this point on Ubuntu 14.04 and ran into the live-backup issues mentioned above, which is when I upgraded to 15.04 (the virtIO experiments came after the upgrade, so 15.04 doesn’t fix that problem). I also mentioned I have a QNAP with a RAID5 that I use for all my important data and Time Machine backups. I set up a new user on the QNAP, shared a folder via SMB to the VM host, and added an /etc/fstab entry to mount that volume at boot. Easy as pie. Next I needed a simple script to back up the various VMs. I won’t put the full script in here, but you can check out the gist over here to see what it does.
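The fstab entry looks something like this (the host name, share, mount point, and credentials file are placeholders; it needs the cifs-utils package installed):

```
# Mount the QNAP backup share at boot over SMB/CIFS
//qnap/vm-backups  /mnt/backups  cifs  credentials=/root/.smbcred,uid=root,_netdev  0  0
```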
A few warnings about the script: it does NOT handle spaces in paths anywhere. I tried, but in shell script I couldn’t find a way to deal with that. If I rewrote it in Python I could probably handle it, but I don’t have any spaces in my paths right now, so it is what it is. The script takes a path to the root backup folder, the VM (domain) name, and optionally the max number of backups to keep. Since it does full VM backups, you will want to keep the max pretty low.
The script creates a new folder under the root backup path for the VM, then a new folder for the dated backup. All of the VM’s disk images are snapshotted, copied to the backup folder, and then the snapshots are merged back into the base images. It then dumps the XML description of the VM to that folder as well. Finally it looks at all the dated backup folders for that VM and removes the oldest so that only “max backups” are kept.
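The pruning step at the end can be sketched like this; it assumes dated folder names that sort chronologically and, like the real script, no spaces in paths:

```shell
# Keep only the newest $max dated backup folders under $dir.
# Names like 2015-07-21-020000 sort chronologically, so a plain sort
# puts the oldest first; head -n -N (GNU coreutils) prints everything
# except the last N lines, i.e. all folders older than the N we keep.
prune_backups() {
    dir=$1
    max=$2
    ls -1 "$dir" | sort | head -n -"$max" | while read -r old; do
        rm -rf "$dir/$old"
    done
}
```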
That’s it. I run this from a cron job set to e-mail me. Because of the space requirements I only run 2 backups a week; since none of my data is that critical, I will probably move to once a week later. If I have 30GB total of VM data, each backup takes 30GB, so keeping 6 backups means nearly 200GB. Note that when I say 30GB, I mean 30GB of actual USED data: with the qcow2 format, only the used data is stored. In my pfSense example it is a 4GB disk, but the file is only about 550MB: 500MB used by pfSense, plus a little overhead from the 1GB of swap it allocated but has barely touched. If you create a 20GB Linux VM and it only uses 5GB on disk, then only 5GB gets backed up. If, however, you use pre-allocated (raw) disk images, it will back up the full 20GB even if only 5GB is used.
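The cron side is just one entry; mine looks roughly like this (the script name, paths, and address are placeholders):

```
# Run the backup at 2am on Sunday and Wednesday; cron e-mails the output
MAILTO=me@example.com
0 2 * * 0,3  /usr/local/bin/vm-backup.sh /mnt/backups pfsense 2
```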
So, a summary of the performance: this thing works great. It’s fanless, so it is quiet too. I suspect I will run out of RAM before I run out of CPU, so I will probably add another 4GB module at some point. Because I have the QNAP I can use it for VM storage; as my VMs grow I will probably keep just the OS data on the SSD and set up an iSCSI share on the QNAP for data disks. Alternatively, if this thing starts getting heavily used, I could easily upgrade the SSD storage. Let’s face it, you can get a high-end 500GB SSD for under $200 now, and if I were really paranoid about my data I could buy two of them and mirror them. I could also run all the VMs entirely off iSCSI; I do that at work, so I know it works. I just don’t know what kind of performance hit I would take, since neither the QNAP nor this Linux box is a high-end unit.
Anyway, there you have it. VM Host + pfSense for less than $300. I would guess I can get 2 or 3 Linux VMs on that host before I run out of RAM and storage. That should be plenty for my home network.
Components (Total Cost: $288 + tax)
- Mini-Box M350 Mini-ITX Case ($45)
- ADATA 64GB SSD ($50)
- Sabrent AD-LCD12 Power Adapter ($8)
- Crucial 4GB Memory Module ($30)
- Mitac PD11BI CC Mini-ITX Motherboard ($125)
- PicoPSU-90 90W ATX Power Supply ($30)
About the Components
I have used the M350 Mini-ITX case at work many times and it is a great little case. It can be wall mounted, mounted to a VESA plate, set on a desk, whatever you want. It has only a power LED and power button, so it is bare bones. The front plate can be removed (by opening the case, so it is secure) to get to two hidden USB ports. We use these for our Linux-based digital signage; the OS runs off a thumb drive in that front compartment. This case does not allow the use of any PCIe cards, even if your motherboard supports them; there just isn’t room in a case like this. The case does come with mounting hardware for a single SSD. You can buy a second SSD bracket for, I think, $8; both brackets only accept 2.5″ drives.
For my purposes, I don’t really care how reliable the SSD is, so I went for pretty much the cheapest one I could find. I have used ADATA thumb drives in the past and they worked fine, so I figured I would give this a shot. I will be backing up all my data to my RAID-5 NAS anyway, so it isn’t that big a deal.
The power brick is basically a 12v LCD monitor power supply that can deliver up to 80w. That is more than enough; this computer will only draw about 25w under full load.
The motherboard supports up to 8GB of RAM (2 × 4GB), but really 4GB is plenty for my needs. I could have gone cheaper with a 2GB or 1GB module if I were running pure pfSense, but since I wanted to virtualize other things, I wanted more RAM.
The motherboard is new. It is a replacement for, I think, the DN2800CCE motherboard. The embedded J1900 CPU is a quad-core chip running at 2.0GHz. It is basically a Celeron, so it isn’t that great, but it does support virtualization and it is quad-core; the Primate Labs benchmarks come in around 3,000, which is pretty good for the price. For those of you used to ESXi: unfortunately, it will not run on the J1900 chipset. It is possible VMware will make it happen at some point, but since it is not a target of theirs, I’m not holding my breath. I ended up using a minimal Ubuntu 15.04 install as the virtualization host. The primary reason I got this motherboard is the dual LAN, which I needed for the firewall. There is a cheaper motherboard from Gigabyte (GA-J1900N-D3V), but mine came DOA and, looking at the reviews, that was a common occurrence, so I sent it back and got this one instead.
The ATX power supply is a DC-DC converter. It takes the 12v coming in from the power brick and converts it to the 12v, 5v, and whatever else the motherboard needs these days. It also comes with one SATA and one PATA power connector. And it’s freaking tiny.