As solid state drives continue to come down in price, it's easier to justify putting them in your data center, since they provide a significant boost to storage performance. All-flash SANs exist, but unless your SAN is up for replacement or you're starting a new project that requires new storage, you're probably not going to get the capital to rip and replace.
So how can you take advantage of the insanely high performance that solid state drives provide without having to invest in an entirely new storage infrastructure? A couple of companies have set out to answer that question and put solid state drives in your servers to accelerate your storage without having to buy a new SAN.
One of those companies is PernixData. PernixData has built a product that uses solid state drives on the server to accelerate Fibre Channel, iSCSI, and/or FCoE block storage.
Disclosure: This post was written using a beta version of PernixData FVP 1.5. I am not affiliated with PernixData in any way.
What is PernixData?
PernixData officially labels the FVP product as a “Flash Hypervisor.” What it does, at a base level, is act as a storage caching layer on the host for block storage that can accelerate reads and writes. It can share flash amongst hosts in a cluster and is fully compatible with vMotion, HA, and other vSphere features.
Installation
PernixData FVP has two main components: a management application that runs on a Windows server, and host-side multipathing plugins that provide the PernixData features and need to be installed on each host. A SQL Server database is required (a SQL Server Express instance works), and a vCenter account with administrator privileges is also needed.
PernixData's multipathing plugins are enabled as soon as they are installed on the host, so the only additional configuration needed is setting up the flash clusters and choosing the virtual machines or datastores that will take advantage of PernixData.
Overall, installation and configuration are very easy. The documentation is thorough and does a great job of walking users through the process.
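Since building a flash cluster starts with knowing which hosts actually have local flash to contribute, here is a minimal pyVmomi sketch that inventories the local SSDs vCenter sees on each host. This is my own cross-check rather than part of the FVP installer; the vCenter hostname and credentials are placeholders for your environment.

```python
#!/usr/bin/env python
"""Minimal sketch: inventory local SSDs on each host before building FVP
flash clusters. Assumes pyVmomi is installed; the vCenter hostname and
credentials below are placeholders for your own environment."""
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

VCENTER = "vcenter.lab.local"               # placeholder
USER = "administrator@vsphere.local"        # placeholder
PASSWORD = "changeme"                       # placeholder

def main():
    ctx = ssl._create_unverified_context()  # lab only; verify certs in production
    si = SmartConnect(host=VCENTER, user=USER, pwd=PASSWORD, sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.HostSystem], True)
        for host in view.view:
            disks = host.config.storageDevice.scsiLun
            # Keep only local disks that the host has flagged as SSDs.
            ssds = [d for d in disks
                    if isinstance(d, vim.host.ScsiDisk) and d.ssd and d.localDisk]
            print("%s: %d local SSD(s)" % (host.name, len(ssds)))
            for d in ssds:
                size_gb = d.capacity.block * d.capacity.blockSize / (1024.0 ** 3)
                print("  %s  %.0f GB  %s" % (d.canonicalName, size_gb, d.displayName))
        view.Destroy()
    finally:
        Disconnect(si)

if __name__ == "__main__":
    main()
```

Any disk that shows up here as a local SSD is a candidate to hand over to a flash cluster.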
Use
When I was running PernixData in my lab, it was pretty much a maintenance-free product. Once it was put in, it just worked.
So how do you know that PernixData is working and actually accelerating storage? How do you know if your VMs are reading and writing to the local flash drives?
PernixData includes a vCenter plugin that provides great visualization of storage use. Graphs can show information on local flash, network flash, and datastore usage for a virtual machine or a host. These graphs are a much better way to visualize IOPS and latency than the graphs on the vCenter server performance tab.
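If you want to sanity-check those graphs against raw vCenter data, the sketch below pulls the standard virtualDisk latency and IOPS counters for a single VM through the performance manager API. The VM name and connection details are placeholders, and the numbers come from vSphere's own counters rather than anything FVP-specific.

```python
#!/usr/bin/env python
"""Minimal sketch: pull the standard virtualDisk latency and IOPS counters for
one VM from vCenter's performance manager. The VM name and connection details
are placeholders; the counters come from vSphere itself, not the FVP plugin."""
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

VCENTER = "vcenter.lab.local"               # placeholder
USER = "administrator@vsphere.local"        # placeholder
PASSWORD = "changeme"                       # placeholder
VM_NAME = "view-desktop-01"                 # placeholder

# Latency counters are reported in milliseconds, IOPS counters in ops/second.
WANTED = [
    "virtualDisk.totalReadLatency.average",
    "virtualDisk.totalWriteLatency.average",
    "virtualDisk.numberReadAveraged.average",
    "virtualDisk.numberWriteAveraged.average",
]

def main():
    ctx = ssl._create_unverified_context()  # lab only; verify certs in production
    si = SmartConnect(host=VCENTER, user=USER, pwd=PASSWORD, sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        vm = next(v for v in view.view if v.name == VM_NAME)
        view.Destroy()

        perf = content.perfManager
        # Map "group.name.rollup" strings to the numeric counter IDs vCenter uses.
        by_name = {"%s.%s.%s" % (c.groupInfo.key, c.nameInfo.key, c.rollupType): c.key
                   for c in perf.perfCounter}
        by_id = {v: k for k, v in by_name.items()}

        metrics = [vim.PerformanceManager.MetricId(counterId=by_name[n], instance="*")
                   for n in WANTED]
        spec = vim.PerformanceManager.QuerySpec(
            entity=vm, metricId=metrics, intervalId=20, maxSample=15)  # real-time stats
        results = perf.QueryPerf(querySpec=[spec])
        if not results:
            print("No samples returned (is the VM powered on?)")
            return
        for series in results[0].value:
            latest = series.value[-1] if series.value else 0
            print("%s [%s]: %s" % (by_id[series.id.counterId],
                                   series.id.instance or "aggregate", latest))
    finally:
        Disconnect(si)

if __name__ == "__main__":
    main()
```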
Unlike a lot of reviews, this one doesn't include performance graphs showing how it improved storage under load; I didn't run those kinds of tests. If you are interested in performance results that pushed the envelope, check out Luca Dell'Oca's performance testing results.
Other Notes
My home lab is mostly dedicated to running VMware View, and I run a lot of linked clone desktops, which PernixData is compatible with. I was initially unsure how PernixData handled linked clones and whether it was caching the same data multiple times. The explanation I received from Andy Daniel, one of the PernixData SEs, was that data referenced from the linked clone base disk is only cached once.
System Requirements
As long as there is room in your servers for at least one solid state disk, PernixData can be added to the environment. It doesn't require any special hardware and supports SATA, SAS, and PCIe solid state disks. It is supported on ESXi 5.0, 5.1, and, with the latest version, 5.5.
PernixData is storage agnostic. It will work with any block storage SANs or storage devices that may be in your environment. I used it with 4Gb Fibre Channel and a server running OmniOS and saw no issues during my trial.
NFS is not a supported protocol; other products provide similar acceleration features for NFS storage.
When to Use It
There are a couple of areas where I see PernixData being a good option. These include:
- VDI deployments
- Resolving storage performance issues
This is a very attractive option if capital or space is not available to upgrade back-end storage. Based on the most recent pricing I could find, the cost per host is $7,500 for the Enterprise license, with no limits on VMs or flash devices.
I’m used to working in smaller environments, and the finance people I’ve worked with would have an easier time justifying $20,000 in server-side flash than an entirely new array or a tray of solid state drives for an existing array. There is also an SMB bundle that allows for four hosts and 100 VMs.
Final Thoughts
There are a lot of use cases for PernixData, and if you need storage performance without having to add disks or spend significant amounts of capital, it is worth installing the trial to see if it resolves your issues.
I would recommend against using it for "Resolving storage performance issues". Unless you have workloads that are consistently accessing the same data (in which case the array flash would likely already handle it well), you won't see many benefits. We purchased a four-host pack with 400GB Intel S3700 drives to help our ailing EMC SAN. Pernix helped make some applications run faster, but when the SAN was overused, Pernix made no difference. We also experienced some rather alarming issues in our time running it. I believe most of them have been fixed now, but realize you are adding a complex software layer between your hosts and your storage, and that carries some significant risk.
With that said, it is a very well-made product with excellent support, and in the right cases it offers amazing performance.