It’s Time To Reconsider My Thoughts on GPUs in VDI…

Last year, I wrote that it was too early to consider GPUs for general VDI use and that they should be reserved only for VDI use cases where they are absolutely required.  There were a number of reasons for this, including user density per GPU, the lack of monitoring and vMotion support, and economics.  That led to a Frontline Chatter podcast discussing the topic in more depth with industry expert Thomas Poppelgaard.

When I wrote that post, I said that there would be a day when GPUs would make sense for all VDI deployments.  That day is coming soon.  There is a killer app that will, in most cases, greatly benefit any user who has access to a GPU.

Last week, I got to spend some time out at NVIDIA’s headquarters in Santa Clara taking part in NVIDIA GRID Days.  GRID Days was a two-day event that included time with the senior management of NVIDIA’s GRID product line along with briefings on the current and future GRID technology.

Disclosure: NVIDIA paid for my travel, lodging, and some of my meals while I was out in Santa Clara.  This has not influenced the content of this post.

The killer app that will drive GPU adoption in VDI environments is Blast Extreme.  Blast Extreme is the new protocol being introduced in VMware Horizon 7 that utilizes H.264 as the codec for the desktop experience.  The benefit of using H.264 over other codecs is that many devices include hardware for encoding and decoding H.264 streams.  This includes almost every video card made in the last decade.

So what does this have to do with VDI?

When a user is logged into a virtual desktop or is using a published application on an RDSH server, the desktop and applications they’re interacting with are rendered, captured, encoded into a stream of data, and then transported over the network to the client.  Normally, this encoding happens in software and consumes CPU cycles.  (PCoIP has hardware offload in the form of APEX cards, but those only handle the encoding phase; rendering still happens elsewhere.)
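To make those moving parts concrete, here is a toy sketch of that per-frame flow.  Every name in it is made up for this post, and zlib simply stands in for a software H.264 encoder; it is not how Horizon, PCoIP, or Blast Extreme is actually implemented.

    # Illustrative only: a toy per-frame remote display pipeline.
    # All names are hypothetical; zlib stands in for a software H.264 encoder.
    import socket
    import zlib


    def render_frame(width: int, height: int) -> bytes:
        """Stand-in for the rendered desktop: a raw RGB framebuffer."""
        return bytes(width * height * 3)


    def encode_frame(raw: bytes) -> bytes:
        """Software encode on the CPU; this is where the cycles go."""
        return zlib.compress(raw)


    def transmit_frame(sock: socket.socket, payload: bytes) -> None:
        """Ship the encoded frame to the client, length-prefixed."""
        sock.sendall(len(payload).to_bytes(4, "big") + payload)


    def stream_desktop(sock: socket.socket, frames: int = 60) -> None:
        for _ in range(frames):
            raw = render_frame(1920, 1080)   # render
            encoded = encode_frame(raw)      # capture + encode (CPU-bound here)
            transmit_frame(sock, encoded)    # transport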

When GPUs are available to virtual desktops or RDSH/XenApp servers, the rendering and encoding tasks can be pushed into the GPU, where dedicated and optimized hardware takes over.  This reduces the CPU overhead of each desktop and can lead to a snappier user experience.  NVIDIA’s testing has also shown that Blast Extreme with GPU offload uses less bandwidth and has lower latency than PCoIP.
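You can get a feel for the offload effect outside of Horizon with ffmpeg, which exposes both a software H.264 encoder (libx264) and NVIDIA’s hardware NVENC encoder (h264_nvenc).  This rough sketch assumes a Unix-like machine, an NVIDIA GPU, and an ffmpeg build with NVENC support; it encodes the same synthetic 1080p stream both ways and compares the CPU time burned.  It is an analogy for what Blast Extreme does with GRID, not a Blast Extreme benchmark.

    # Compare CPU time for software vs. NVENC H.264 encoding of the same
    # synthetic 1080p/30 stream. Assumes ffmpeg with NVENC and a Unix-like OS
    # (the resource module is not available on Windows).
    import resource
    import subprocess

    SOURCE = "testsrc=duration=10:size=1920x1080:rate=30"


    def child_cpu_seconds() -> float:
        """Total CPU time consumed by finished child processes."""
        usage = resource.getrusage(resource.RUSAGE_CHILDREN)
        return usage.ru_utime + usage.ru_stime


    def encode(codec: str) -> float:
        before = child_cpu_seconds()
        subprocess.run(
            ["ffmpeg", "-v", "error", "-f", "lavfi", "-i", SOURCE,
             "-c:v", codec, "-f", "null", "-"],
            check=True,
        )
        return child_cpu_seconds() - before


    for codec in ("libx264", "h264_nvenc"):
        print(f"{codec}: {encode(codec):.1f} CPU-seconds")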

Note: These aren’t my numbers, and I haven’t had a chance to validate these findings in my lab.  When Horizon 7 is released, I plan to do similar testing of my own comparing PCoIP and Blast Extreme in both LAN and WAN environments.
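For the bandwidth side of that comparison, something as simple as polling the client’s NIC counters once a second during a session would probably be a reasonable starting point.  This sketch uses the third-party psutil package (pip install psutil); run the same workload over each protocol and compare the logs.

    # Rough sketch: sample client network throughput once a second while a
    # PCoIP or Blast Extreme session is running, and print the deltas.
    import time

    import psutil


    def sample_bandwidth(duration_s: int = 300, interval_s: int = 1) -> None:
        last = psutil.net_io_counters()
        for _ in range(duration_s // interval_s):
            time.sleep(interval_s)
            now = psutil.net_io_counters()
            rx_kbps = (now.bytes_recv - last.bytes_recv) * 8 / 1000 / interval_s
            tx_kbps = (now.bytes_sent - last.bytes_sent) * 8 / 1000 / interval_s
            print(f"rx: {rx_kbps:8.1f} kbps   tx: {tx_kbps:8.1f} kbps")
            last = now


    if __name__ == "__main__":
        sample_bandwidth()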

If I use Blast Extreme and I install GRID cards in my hosts, I gain two tangible user experience benefits.  Users now have access to a GPU, which many applications, especially Microsoft Office and most web browsers, tap into for processing and rendering.  And they gain the benefit of using that same GPU to encode the H.264 streams that Blast Extreme uses, potentially lowering the bandwidth and latency of their sessions.  Overall, this translates into significant improvements in the virtual desktop and published application experience*.

Many of the limitations of vGPU still exist: there is no vMotion support, and performance analytics are not fully exposed to the guest OS.  But density has improved significantly with the new M6 and M60 cards.  So while it may not be cost-effective to retrofit GPUs into existing Horizon deployments, GPUs are now worth considering for new Horizon 7 deployments.

*Caveat: If users are on a high-latency network connection, or if the connection has a lot of contention, your results may vary.