Orchestrating Exchange with #vCO

Microsoft Exchange is a system that is ideally suited for automation.  It’s in almost every environment.  It has its own add-on to PowerShell that makes it easy to write scripts to handle tasks.  And most of the tasks that administrators perform after setup are rote tasks that are easily automated, such as setting up mailboxes and adding IP addresses to a receive connector.

Why vCenter Orchestrator?

Exchange already contains a robust automation platform with the PowerShell-based Exchange Management Shell.  This platform makes it easy to automate tasks through scripting.  But no matter how well these scripts are written, executing command line tasks can be error-prone if the end users of the scripts aren’t comfortable with a command line.  You may also want to limit input or provide a user-friendly interface for kicking off the script.

So what does that have to do with vCenter Orchestrator?  Orchestrator is an extensible workflow automation tool released by VMware and included with the vCenter Server license.   It supports Windows Remote Management and PowerShell through a plugin.

Start By Building a Jump Box/Scripting Server

Before we jump into configuring Orchestrator to talk to Exchange, we’ll need a Windows Server that we can configure to execute the scripts that Orchestrator will call.  This server should run Windows Server 2008 R2 at a minimum, and you should avoid Server 2012 R2 because the Exchange 2010 PowerShell cmdlets are not compatible with PowerShell 4.0. 

You will need to install the Exchange management tools on this server, and I would recommend a PowerShell IDE such as PowerGUI or Idera PowerShell Pro to aid in troubleshooting and testing.

Orchestrator and Exchange

As I mentioned above, Orchestrator can be used with PowerShell through a plugin.  This plugin uses WinRM to connect to a Windows Server instance to execute PowerShell commands and scripts.   In order to use this plugin, Orchestrator needs to be configured to support Kerberos authentication.

When I was testing out this combination, I was not able to get the Exchange Management Shell to load properly when using WinRM.  I think the issue has to do with Kerberos authentication and WinRM.

When you use WinRM, you’re remoting into another system using PowerShell.  In some ways, it is like Microsoft’s version of SSH – you’re logging into the system and working from a command line. 
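To illustrate the comparison, opening a remote session over WinRM looks like this (the server name below is a placeholder for your own scripting box):

```powershell
# Open an interactive PowerShell remoting session over WinRM,
# much like SSHing into a Unix host. The server name is a placeholder.
Enter-PSSession -ComputerName 'scriptbox.yourdomain.local'
# ...commands entered here run on the remote server...
Exit-PSSession
```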

The Exchange cmdlets add another hop in that process.  When you’re using the Exchange cmdlets, you’re executing those commands on one of your Exchange servers using a web service.  Unfortunately, Kerberos does not work well with multiple hops, so another way to access the remote server is needed.

Another Option is Needed

So if WinRM and the Orchestrator PowerShell plugin don’t work, how can you manage Exchange with Orchestrator?  The answer is using the same remote access technology that is used for network hardware and Unix – SSH.

Since Exchange is Active Directory integrated, we’ll need an SSH server that runs on Windows, is compatible with PowerShell, and, most importantly, supports Active Directory authentication.   There are a couple of options that fit here, such as the paid version of Bitvise, FreeSSHd, and nSoftware’s PowerShell Server.

There is one other catch, though.  Orchestrator has a built-in SSH plugin to support automating tasks over SSH.  However, this plugin does not support cached credentials, and it runs under whatever credentials the workflow is launched under.  One of the reasons that I initially looked at Orchestrator for managing Exchange was to be able to delegate certain tasks to the help desk without having to grant them additional rights on any systems. 

This leaves one option – PowerShell Server.  PowerShell Server has an Orchestrator Plugin that can use a shared credential that is stored in the workflow.  It is limited in some key ways, though, mainly that the plugin doesn’t process output from PowerShell.  Getting information out will require sending emails from PowerShell.
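Because the plugin discards PowerShell output, any results have to leave the script out of band.  A minimal sketch of the email approach, with the SMTP relay and addresses as placeholders for your environment:

```powershell
# Sketch: return script results by email, since the PowerShell Server
# plugin for Orchestrator does not pass output back to the workflow.
# The relay name and addresses below are placeholders for your environment.
$result = Get-Mailbox -Identity 'jdoe' | Format-List | Out-String

Send-MailMessage -SmtpServer 'smtp.yourdomain.local' `
    -From 'orchestrator@yourdomain.local' `
    -To 'helpdesk@yourdomain.local' `
    -Subject 'Orchestrator task output' `
    -Body $result
```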

You will need to install PowerShell Server onto your scripting box and configure it for interactive sessions.

PowerShell Server Settings

Configuring the Exchange Management Shell for PowerShell Server

PowerShell Server supports the Exchange Management shell, but in a limited capacity.  The method that their support page recommends breaks a few cmdlets, and I ran into issues with the commands for configuring resource mailboxes and working with ActiveSync devices. 

One other method for launching the Exchange Management Shell from within your PowerShell SSH session is by using the following commands:

. 'C:\Program Files\Microsoft\Exchange Server\V14\bin\RemoteExchange.ps1'
Connect-ExchangeServer -Auto
If you try that, though, you will receive an error that the screen size could not be changed.  This is due to the commands that run when the Exchange Management Shell loads – it resizes the PowerShell console window and prints a lot of text on the screen.

The screen size change is controlled by a function in the RemoteExchange.ps1 script.  This file is located in the Exchange install directory under V14\Bin.  You need to open this file and comment out line 34, which calls the function that widens the window when the Exchange Management Shell loads.  Once you’ve commented out this line, save the modified file under a new file name in the same folder as the original.
Edit RemoteExchangePS1
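If you would rather not edit the file by hand, the copy-and-comment step can be scripted.  This is a rough sketch that assumes the default Exchange 2010 install path and that line 34 is still the window-resize call in your build – verify both before relying on it:

```powershell
# Copy RemoteExchange.ps1 and comment out the window-resize call on line 34.
# Path and line number assume Exchange 2010 defaults; check your installation.
$bin   = 'C:\Program Files\Microsoft\Exchange Server\V14\bin'
$lines = Get-Content (Join-Path $bin 'RemoteExchange.ps1')
$lines[33] = '# ' + $lines[33]   # arrays are zero-based, so line 34 is index 33
$lines | Set-Content (Join-Path $bin 'RemoteExchange-Modified.ps1')
```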
In order to use this in a PowerShell script with Orchestrator, you will need to add it to each script or into the PowerShell profile for the account that will be executing the script.  The example that I use in my workflows looks like this:

. 'C:\Program Files\Microsoft\Exchange Server\V14\bin\RemoteExchange-Modified.ps1'
Connect-ExchangeServer -Auto

Note: It may be possible to use the method outlined by Derek Schauland in this TechRepublic article in place of modifying the EMS script.  However, I have not tested this technique with Orchestrator.

Putting It All Together

Earlier this month, I talked about this topic on vBrownbag, and I demonstrated two examples of this code in action.  You can watch it here.

One of the examples that I demonstrated during that vBrownbag talk was an employee termination workflow.  I had a request for that workflow and the scripts that the workflow called, so I posted them out on my GitHub site.  The Terminate-DeactivateEmail.ps1 script that is found in the GitHub repository is a working example.

Horizon View 5.3 Part 15 – Horizon View, SSL, and You

Although they may be confusing and require a lot of extra work to set up, SSL certificates play a key role in any VMware environment.  The purpose of the certificates is to secure communications between the clients and the servers as well as between the servers themselves.

Certificates are needed on all of the major components of Horizon View and vCenter.  The certificates that are installed on vCenter Server, Composer, and the Connection Servers can come from an internal certificate authority.  If you are running Security Servers, or if you have a Bring-Your-Own-Device environment, you’ll need to look at purchasing certificates from a public certificate authority.

Setting up a certificate authority is beyond the scope of this article.  If you are interested in learning more about setting up your own public key infrastructure, I’ll refer you to Derek Seaman, the VMware Community’s own SSL guy.  He recently released a three-part series about setting up a two-tier public key infrastructure on Windows Server 2012 R2.  I’ve also found this instruction set to be a good guide for setting up a public key infrastructure on Windows Server 2008 R2.

Improved Certificate Handling

If you’ve worked with previous versions of Horizon View, you know that managing SSL certificates was a pain.  In previous versions, the certificates had to be placed into a Java keystore file.  This changed with View 5.1, and the certificates are now stored in the Windows Certificate Store.  This has greatly improved the process for managing certificates.

Where to Get Certificates

Certificates can be minted on internal certificate authorities or public certificate authorities.  An internal certificate authority exists inside your environment and is something that you manage.  Certificates from these authorities won’t be trusted unless you deploy the root and intermediate certificates to the clients that are connecting.

The certificates used on a security server should be obtained from a commercial certificate vendor such as Thawte, GoDaddy, or Comodo.  One option that I like to use in my lab, and that I’ve used in the past when I didn’t have a budget for SSL certificates, is StartSSL.  They provide free basic SSL certificates.

Generating Certificate Requests for Horizon View Servers

VMware’s method for generating certificates for Horizon View is different than vCenter’s.  When setting up certificate requests for vCenter, you need to use OpenSSL to generate the requests.  Horizon View, on the other hand, uses the built-in Windows certificate tools to generate the certificate requests.  VMware has a good PDF document that walks through generating certificates for Horizon View.  An online version of this article also exists.

Before we can generate a certificate for each server, we need to set up two things.  The first is a certificate template that can be used to issue certificates in our public key infrastructure.  I’m not an expert on public key infrastructure, so I’ll defer to the expert again.

The other thing that we need to create is a CSR configuration file.  This file will be used with certreq.exe to create the certificate signing request.  The template for this file, which is included in the VMware guide, is below.

;----------------- request.inf -----------------

[Version]
Signature="$Windows NT$"

[NewRequest]
Subject = "CN=View_Server_FQDN, OU=Organizational_Unit_Name, O=Organization_Name, L=City_Name, S=State_Name, C=Country_Name" ; replace attributes in this line using example below
KeySpec = 1
KeyLength = 2048
; Can be 2048, 4096, 8192, or 16384.
; Larger key sizes are more secure, but have
; a greater impact on performance.
Exportable = TRUE
FriendlyName = "vdm"
MachineKeySet = TRUE
SMIME = False
PrivateKeyArchive = FALSE
UserProtected = FALSE
UseExistingKeySet = FALSE
ProviderName = "Microsoft RSA SChannel Cryptographic Provider"
ProviderType = 12
RequestType = PKCS10
KeyUsage = 0xa0

[EnhancedKeyUsageExtension]
OID=1.3.6.1.5.5.7.3.1 ; this is for Server Authentication

[RequestAttributes]
; SAN="dns=FQDN_you_require&dns=other_FQDN_you_require"

The subject line contains some fields that we need to fill in with information from our environment.  These fields are:

  • CN=server_fqdn: This part of the Subject string should contain the fully qualified domain name that users will use when connecting to the server. An example for an internal, non-Internet facing server is internalbroker.internaldomain.local.  An internet-facing server should use the web address such as view.externaldomain.com
  • OU=organizational unit: I normally fill in the responsible department, so it would be IT.
  • O=Organization: The name of your company
  • L=City: The city your office is based in
  • S=State: The name of the State that you’re located in.  You should spell out the name of the state since some CAs will not accept abbreviated names.
  • C=Country: The two-letter ISO country code.  The United States, for example, is US.

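For illustration, a filled-in Subject line for a hypothetical internal connection server might look like this (all values are placeholders):

```
Subject = "CN=internalbroker.internaldomain.local, OU=IT, O=Example Corp, L=Minneapolis, S=Minnesota, C=US"
```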
A CSR configuration file will need to be created for each server with a Horizon View Component installed.  vCenter will also need certificates, but there are different procedures for creating and installing vCenter certificates depending on whether you are using the Windows application or the vCenter appliance.

Creating the certificate signing request requires the certreq.exe command line tool.   These steps will need to be performed on each connection server, security server, and View Composer server.  The steps for generating the request are:

  1. Open a command prompt as an Administrator on your View server. Note: This command should be run from the server where you want the certificate to reside.  It can be done from another machine, but it makes it more complicated.
  2. Navigate to the folder where you stored the request.inf file.
  3. Run the following command: certreq.exe -new request.inf server_name_certreq.txt

After the certificate request has been created, it needs to be submitted to the certificate authority in order to have the SSL certificate generated.  The actual process for submitting the CSR is beyond the scope of this article since this process can vary in each environment and with each commercial vendor.

Importing the Certificate

Once the certificate has been generated, it needs to be imported into the server.  The import command is:

certreq.exe -accept certname.cer

This will import the generated certificate into the Windows Certificate Store.

Using the Certificates

Now that we have these freshly minted certificates, we need to put them to work in the View environment.  There are a couple of ways to go about doing this.

1. If you haven’t installed the Horizon View components on the server yet, you will get the option to select your certificate during the installation process.  You don’t need to do anything special to set the certificate up.

2. If you have installed the Horizon View components, and you are using a self-signed certificate or a certificate signed from a different CA, you will need to change the friendly name of the old certificate and restart the Connection Server or Security Server services.

Horizon View requires the certificate to have a friendly name value of vdm.  The template that is posted above sets the friendly name of the new certificate to vdm automatically, but this will conflict with any existing certificates. 

Friendly Name

The steps for changing the friendly name are:

  1. Go to Start –> Run and enter MMC.exe
  2. Go to File –> Add/Remove Snap-in
  3. Select Certificates and click Add
  4. Select Computer Account and click Finish
  5. Click OK
  6. Right click on the old certificate and select Properties
  7. On the General tab, delete the value in the Friendly Name field, or change it to vdm_old
  8. Click OK
  9. Restart the View service on the server
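If you prefer not to click through MMC, the same change can be made from PowerShell using the certificate provider.  A sketch, with the old certificate’s thumbprint as a placeholder:

```powershell
# Rename the old certificate so the new certificate's 'vdm' friendly name wins.
# Replace the thumbprint placeholder with the old certificate's thumbprint.
$thumb = 'OLD_CERT_THUMBPRINT'
$cert  = Get-Item "Cert:\LocalMachine\My\$thumb"
$cert.FriendlyName = 'vdm_old'
# Service name shown is for a Connection Server; verify it in your environment.
Restart-Service -Name 'wsbroker'
```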


Certificates and View Composer

Unfortunately, Horizon View Composer uses a different method of managing certificates.  Although the certificates are still stored in the Windows Certificate store, the process of replacing Composer certificates is a little more involved than just changing the friendly name.

The process for replacing or updating the Composer certificate requires a command prompt and the SVIConfig tool.  SVIConfig is the Composer command line tool.  If you’ve ever had to remove a missing or damaged desktop from your View environment, you’ve used this tool.

The process for replacing the Composer certificate is:

  1. Open a command prompt as Administrator on the Composer server
  2. Change directory to your VMware View Composer installation directory
    Note: The default installation directory is C:\Program Files (x86)\VMware\VMware View Composer
  3. Run the following command: sviconfig.exe -operation=replacecertificate -delete=false
  4. Select the correct certificate from the list of certificates in the Windows Certificate Store
  5. Restart the Composer Service


A Successful certificate swap

At this point, all of your certificates should be installed.  If you open up the View Administrator web page, the dashboard should have all green lights.

If you are using a certificate signed on an internal CA for servers that your end users connect to, you will need to deploy your root and intermediate certificates to each computer.  This can be done through Group Policy for Windows computers.  If you’re using Teradici PCoIP Zero Clients, you can deploy the certificates as part of a policy with the management VM.  If you don’t do this, users will not be able to connect without disabling certificate checking in the client.

Horizon View 5.3 Part 14 – Windows Server Desktops

Technology isn’t the most complicated part of any VDI deployment.  That honor belongs to Microsoft’s VDA licensing – a complex labyrinth of restrictions on how the Windows Desktop OS can be used in a VDI environment.  The VDA program either requires software assurance on Windows devices or a subscription for devices that aren’t covered under SA such as zero clients or employee-owned devices.

The VDA program is a management nightmare, and it has spawned a small movement in the community called #FixVDA to try and get Microsoft to fix the problems with this program.

The licensing for virtualizing Windows Server is much less complicated, and a licensing model for remote desktop access that isn’t dependent upon software assurance already exists.

Note: I am not an expert on Microsoft licensing.  Microsoft does update VDA and other licensing options, so check with your Microsoft Licensing representative before purchasing.  If you want more details about Microsoft’s licensing for 2008 R2 Remote Desktop Services, you can view the licensing brief here.

In previous versions of Horizon View, it was possible, although difficult to configure and unsupported, to use Windows Server 2008 R2 as a desktop OS.  Horizon View 5.3 has added official support for using Windows Server 2008 R2 as a desktop OS.  This opens up desktop virtualization for enterprises and service providers.

Batteries Not Included

Windows Server-based desktops are missing a number of features in View that other versions of Windows are able to take advantage of.  These features are:

  • Virtual Printing (AKA ThinPrint)
  • Multimedia Redirection
  • Persona Management
  • vCOPs for View functionality
  • Local-Mode Support
  • Smart Card SSO
  • UC/Lync APIs and support

ThinPrint can be worked around – either by using Group Policy Preferences for users inside the firewall or by buying the full product from Cortado.  Persona Management can also be worked around by using Roaming Profiles and folder redirection.

If you need smart cards, Lync 2013 support, Local-Mode, or vCOPs for View support, you will still need to pony up for a VDA subscription.

I suspect that more of these features will work in the next version of View as they are fully tested and validated by VMware.

What’s Included Today

It seems like there are a lot of features in View 5.3 that aren’t supported or available with Windows Server 2008 R2 desktops.  So what is included? 

  • PCoIP Access
  • VMware Blast HTML5 Access – Installed separately with the Remote Experience Pack
  • USB and Audio Redirection

That doesn’t sound like much, but it may be worth the tradeoff if it saves on licensing.

Enabling Windows Server Desktop Support

Windows Server Desktop support is not enabled by default in Horizon View 5.3, but it isn’t too hard to enable.  There is one step that needs to be performed inside the View LDAP database to enable support, and the agent needs to be installed from the command line.

To configure View to support Server 2008 R2 desktops, you need to take the following steps:

  1. Connect to the View ADAM (LDAP) Database
  2. Expand dc=vdi, dc=vmware, dc=int
  3. Expand OU=Properties
  4. Expand OU=Global
  5. Right click on CN=Common and select Properties.
  6. Scroll to the attribute named “pae-EnableServerinDesktopMode”
  7. Click the Edit Button
  8. Change the value to 1 and click OK.
  9. Click OK
  10. Close ADSI Edit
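The ADSI Edit steps above can also be scripted from PowerShell on a Connection Server.  A sketch, assuming the default ADAM instance answering locally on port 389:

```powershell
# Enable Windows Server desktop support in the View LDAP (ADAM) database.
# Assumes this runs on a Connection Server with ADAM listening on port 389.
$common = [ADSI]'LDAP://localhost:389/cn=common,ou=global,ou=properties,dc=vdi,dc=vmware,dc=int'
$common.Put('pae-EnableServerinDesktopMode', '1')
$common.SetInfo()
```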

After the View environment has been configured to support Windows Server as a desktop source, the desktop gold image can be configured.  Although the process is mostly the same as Part 11 – Building Your Desktop Golden Images, there are a few key differences.

These differences are:

  • The VMXNET3 network card should be used over the E1000 network card.
  • The Desktop Experience Feature needs to be installed before the View Agent.  This feature is important if you plan to use VMware Blast.
  • The VMware View Agent needs to be installed from the command line in order to force the agent to install in Desktop Mode.  The command text is: VMware-viewagent-x86_64-5.3.0-xxxxx.exe /v"VDM_FORCE_DESKTOP_AGENT=1"


Aside from these differences, a Server 2008 R2 desktop source can be configured the same as a Windows 7 desktop source.

The next post in this series will be on securing the View environment with SSL certificates.

VMware’s New Certification Policy

Over the weekend, VMware quietly announced a new certification policy – existing holders of the VMware Certified Professional and higher certifications would need to recertify within two years of their most recent certification or lose them.  This announcement has caused a bit of an outcry on social media channels as the news spread.

After taking some time to think about it, the policy makes sense.  VMware releases a new version of vSphere every year, and while the new versions are usually marketed as point releases, they contain a lot of changes and additions to the way that the underlying system operates.  There are enough changes that vSphere 5.5 is a different beast than vSphere 5.0.

The policy, which you can read in full here, is that you have to recertify within two years of your most recently passed exam.  If you passed a VCP on January 1st, 2013, you would have until January 1st, 2015 to pass the same VCP, a VCP in another category, or a VCAP exam for your certification to remain valid.

VMware isn’t the first vendor to propose, or implement this sort of policy.  This has been Cisco’s policy for years, although Cisco allows the certification to remain valid for three years instead of two.

VMware’s motivations, as outlined in the announcement, are to ensure that VCP holders are keeping their skills up to date.  Some members of the VMware Certification team have also made comments in #vBrownbag podcasts about wanting to increase the number of people who hold VCAP-level certifications, and the requirement to recertify is one method to encourage that.

My Thoughts

There are other vendors, and entirely other fields, that require certification/license holders to retest, relicense or recertify on a regular basis.  And while someone in IT doesn’t have as much on the line as a medical professional, teacher or a licensed/bonded engineer in the mechanical/structural/electrical/aerospace/etc. disciplines, VMware does have a vested interest in making sure that its certification program retains its value as their products change.

However, this change is far from perfect.  The biggest issue that I have with it is that certifications are only valid for two years.  I think that certifications should be valid for three years, or VMware should at least provide a one-year grace period where someone with a lapsed VCP could take a new exam without having to retake the class.

I also think that there needs to be more of an incentive to go take the VCAP-level exams.  These exams, especially the administration ones, require a lab set up to practice the items on the exam blueprint.  In order to encourage this, I think that VMware should provide anyone who registers for a VCAP-level exam with NFR license keys for the products covered in the exam.

One thing that I think VMware did right, though, is that they granted a one year grace period and removed classroom prerequisites for anyone who holds an older VCP.  This will allow a number of VCP holders to get current without having to sit through classroom training.

Hands-On: The Dell Wyse Cloud Connect

Sometime in the last couple of weeks, $work picked up a Dell Wyse Cloud Connect.  The Cloud Connect is essentially a thin client as a stick – it looks like an oversized thumb drive with an HDMI connection where the USB connection would be.


The old saying goes “Big things come in little packages.”  The package is little, but the only big thing that comes with it is potential.  The idea behind Cloud Connect is very sound, but the execution is lacking.  It is a first generation product, so there is plenty of room for improvement.

Hardware Overview

Cloud Connect packs a good bit of hardware into a very small package.  The system is built around an ARM Cortex-A9 system on a chip with Wireless-N and Bluetooth.  Other features on the device include a Bluetooth connection button for pairing devices, a mini-USB port for power, a Micro-USB port for connecting a peripheral device such as a keyboard or mouse, and a microSD port for expanded storage.  It can hook up to any display with an HDMI port and provide 1080P graphics with some 3D support.

Operating System

Cloud Connect runs Android 4.1 Jelly Bean.  The interface of the device I used was the standard Android interface, and it wasn’t optimized for keyboard and mouse usage.  It was difficult to navigate through the menus when hooked up to a 1080P TV, and I had trouble finding various menus because the icons were too small.  While I love Android, the combination of an older version of the Android OS and an interface that was optimized for touch usage means that there is a lot of room for improvement in this category.


Cloud Connect comes with a few standard apps that are mainly there to allow users to connect to various virtual desktop environments.  Those apps are:

  • Pocket Cloud Standard Edition
  • Citrix Receiver
  • VMware Horizon View Client

The version of the View Client that was installed on the device was version 2.1.  This client was a few releases behind, and I was not able to connect to the Horizon View 5.3 environment in my home lab.   I was unable to update the client to the most recent version, as the Google Play store claimed that the app was not supported on my device.

Another disappointment of this device is that it does not come with the Professional Edition of Wyse PocketCloud.  The standard edition has a reduced feature base – it is limited to one saved connection and can only connect via RDP or VNC.  PocketCloud Professional can utilize the PCoIP protocol for connecting to remote desktops and allows multiple saved connections.


I’m going to turn to the wise sage and critic extraordinaire Jay Sherman to sum up my thoughts on the Wyse Cloud Connect:


Frankly, it just didn’t work.  I wasn’t able to connect to virtual desktops in my environment.  I couldn’t update the old versions of the software to fix those issues, and the interface was painful to navigate because it was the standard Android interface with no skinning or overlay to improve the experience for keyboard and mouse use.

That’s not to say that this device doesn’t have potential or some great use cases.  I can see this being a good option for school computer labs, business travelers who do not want to carry a laptop, or even as a remote access terminal for teleworkers.  It’s just that the negatives for this current version outweigh the potential that this device has.

Recommendations for Improvement

So how can Dell fix some of these shortcomings?  The area that needs the biggest improvement is the user interface.  The standard Android interface works great for touch devices, but it’s not user friendly when the input device is something besides a finger or stylus.  Dell needs to build their own skin so they can optimize the experience for TVs, monitors, and projectors.  That means bigger icons, adding keyboard shortcuts, and making the system menus more accessible.

Addressing the user interface issues would go a long way towards improving this product.  It won’t fix all the issues, though, such as the View Client being listed as incompatible with this device in the Google Play Store.

My Experience With PernixData in the Lab

As solid state drives continue to come down in price, it’s easier to justify putting them in your data center as they provide a significant boost to storage performance.  All-solid-state-drive SANs exist, but unless your SAN is up for replacement or you’re starting a new project that requires new storage, you’re probably not going to get the capital to rip and replace.

So how can you take advantage of the insanely high performance that solid state drives provide without having to invest in an entirely new storage infrastructure?  A couple of companies have set out to answer that question and put solid state drives in your servers to accelerate your storage without having to buy a new SAN.

One of those companies is PernixData.  PernixData has built a product that uses solid state drives on the server to accelerate fibre channel, iSCSI, and/or FCoE block storage.

Disclosure: This post was written using a beta version of PernixData FVP 1.5.  I am not affiliated with PernixData in any way.

What is PernixData?

PernixData officially labels the FVP product as a “Flash Hypervisor.”  What it does, at a base level, is act as a storage caching layer on the host for block storage that can accelerate reads and writes.  It can share flash amongst hosts in a cluster and is fully compatible with vMotion, HA, and other vSphere features.


PernixData FVP has two main components – a management application that runs on a Windows server, and new multipathing plugins, installed on the hosts, that provide the PernixData features.  A SQL Server database is required (a SQL Server Express instance will work), and a vCenter account with administrator privileges is also needed.

PernixData’s multipathing plugins are enabled once they are installed on the host, so the only additional configuration needed is to set up the flash clusters and the virtual machines or datastores that will take advantage of PernixData.

Overall, the installation and configuration is very easy.  The documentation is very thorough and does a great job of walking users through the installation.


When I was running PernixData in my lab, it was pretty much a maintenance-free product.  Once it was put in, it just worked.

So how do you know that PernixData is working and actually accelerating storage?  How do you know if your VMs are reading and writing to the local flash drives?

PernixData includes a vCenter plugin that provides great visualization of storage use.  Graphs can show information on local flash, network flash, and datastore usage for a virtual machine or a host.  These graphs are a much better way to visualize IOPS and latency than the graphs on the vCenter server performance tab.

Host IOPs - 1 Week

Host Latency - 1 week

Unlike a lot of reviews, you won’t see any performance graphs for how it improved storage under load.  I didn’t run any of those types of tests.  If you are interested in performance results that pushed the envelope, check out Luca Dell’Oca’s performance testing results.

Other Notes

My home lab is mostly dedicated to running VMware View, and I run a lot of linked clone desktops.  PernixData is compatible with linked clone desktops.  I was initially confused about how PernixData worked with linked clones, and I wasn’t sure if PernixData was caching the same data multiple times.  The explanation I received from Andy Daniel, one of the PernixData SEs, was that if the data was being referenced from the linked clone base disk, it was only being cached once. 

System Requirements

As long as there is room on your servers for at least one solid state disk, PernixData can be added into the environment.  It doesn’t require any special hardware and supports SATA, SAS, and PCIe solid state disks.  It is supported on ESXi 5.0, 5.1, and, with the latest version, 5.5.

PernixData is storage agnostic.  It will work with any block storage SANs or storage devices that may be in your environment.  I used it with 4GB Fibre Channel and a server running OmniOS and saw no issues during my trial.

NFS is not a supported protocol; there are other products that provide similar features for NFS storage.

When to Use It

There are a couple of areas where I see PernixData being a good option.  These include:

    1. VDI deployments
    2. Resolving storage performance issues

This is a very attractive option if capital or space is not available to upgrade backend storage.  Based on the most recent pricing I could find, the cost per host is $7500 for the Enterprise license with no limits on VMs or Flash devices. 

I’m used to working in smaller environments, and the finance people I’ve worked with would have an easier time justifying $20,000 in server-side flash than an entirely new array or a tray of solid state drives for an existing array.  There is also an SMB bundle that allows for four hosts and 100 VMs.

Final Thoughts

There are a lot of use cases for PernixData, and if you need storage performance without having to add disks or spend significant amounts of capital, it is worth putting the trial in to see if it resolves your issues.