This morning, NVIDIA announced the latest version of its graphics virtualization stack, NVIDIA GRID 5.0. This release continues the trend that NVIDIA started two years ago when it separated the GRID software stack from the Tesla data center GPUs in the GRID 2.0 release.
GRID 5.0 adds several new key features to the GRID product line. Along with these new features, NVIDIA is also adding a new Tesla card and rebranding the Virtual Workstation license SKU.
Quadro Virtual Data Center Workstation
Previous versions of GRID contained profiles designed for workstations and high-end professional applications. These profiles, which ended in a Q, provided Quadro-level features for the most demanding applications. They also required the GRID Virtual Workstation license.
NVIDIA has decided to rebrand the professional series capabilities of GRID to better align with its professional visualization line of products. The GRID Virtual Workstation license will now be called the Quadro Virtual Data Center Workstation license. This change helps differentiate the Virtual PC and Virtual Apps features, which are geared towards knowledge workers, from the professional series capabilities.
Tesla P6
The Tesla P6 is the Pascal-generation successor to the Maxwell-generation M6, a GPU purpose-built for blade servers. In addition to using a Pascal-generation GPU, the P6 increases the framebuffer to 16GB and can now support up to 16 users per blade, which provides more value to customers who want to adopt GRID for VDI on their blade platforms.
Pascal Support for GRID
The next-generation GRID software adds support for the Pascal-generation Tesla cards. The new cards supported in GRID 5.0 are the Tesla P4, P6, P40, and P100.
The P40 is the designated successor to the M60. It is a single-GPU board with 24GB of framebuffer. The increased framebuffer allows for a 50% increase in density: the P40 can handle up to 24 users per board, compared to 16 users per M60.
Edit for Clarification: The comparison between the M60 and the P40 was done using the 1GB GRID profiles. The M60 can support up to 32 users per board when each VM is assigned 512MB of framebuffer, but that option is not available in GRID 5.0.
On the other end of the scale is the P4, a small-form-factor Pascal GPU with 8GB of framebuffer. Unlike the larger Tesla boards, it runs at 75W, so it does not require supplemental power. This makes it well suited to cloud and rack-dense computing environments.
In addition to better performance, the Pascal cards have a few key advantages over the previous-generation Maxwell cards. First, there is no need to use the gpumodeswitch utility to convert a Pascal board from compute mode to graphics mode. A manual step is still required to disable ECC memory on Pascal boards, but this is built into the nvidia-smi utility. These changes streamline the GRID deployment process for Pascal boards.
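For example, assuming nvidia-smi is available on the host, ECC can be disabled for all GPUs with a single command, which takes effect after the next reboot:

nvidia-smi -e 0

A specific board can be targeted by adding the -i flag with the GPU's index, e.g. nvidia-smi -i 0 -e 0.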
The second advantage involves hardware-level preemption support. In previous generations of GRID, CUDA support was only available with the 8Q profile, which dedicated an entire GPU to a single VM. Hardware preemption enables the Pascal cards to support CUDA on all profiles.
To understand why hardware preemption is required, we have to look at how GRID shares GPU resources. GRID uses round-robin time slicing to share the GPU among multiple VMs, and each VM gets a set amount of time on the GPU. When the time slice expires, the GPU moves on to the next VM. When the GPU is rendering graphics to be displayed on screen, the round-robin method works well because the GPU can typically complete all of the work in the allotted time slice. CUDA jobs, however, pose a challenge because they can take hours to complete. Without the ability to preempt a running job, CUDA jobs could fail when the time slice expired.
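To make the difference concrete, here is a toy Python sketch of round-robin time slicing. This is purely illustrative, not NVIDIA's scheduler: a short rendering job fits within its time slice either way, but a long CUDA job only survives if it can be preempted and resumed.

# Toy model of GPU time slicing. Each job needs some total
# amount of GPU time; the scheduler hands out fixed time
# slices in round-robin order.
TIME_SLICE = 2  # arbitrary units of GPU time

def run_round_robin(jobs, preemption):
    """jobs maps a job name to its remaining units of work."""
    remaining = dict(jobs)
    while remaining:
        for name in list(remaining):
            if remaining[name] <= TIME_SLICE:
                # The job fits in its slice and completes normally.
                print(name, "finished")
                del remaining[name]
            elif preemption:
                # Save the job's progress and resume it next turn.
                remaining[name] -= TIME_SLICE
                print(name, "preempted with", remaining[name], "units left")
            else:
                # Without preemption, work that outlives the time
                # slice is lost and the job fails.
                print(name, "FAILED: exceeded its time slice")
                del remaining[name]

print("-- without preemption --")
run_round_robin({"frame-render": 1, "cuda-job": 7}, preemption=False)
print("-- with preemption --")
run_round_robin({"frame-render": 1, "cuda-job": 7}, preemption=True)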
Preemption support on Pascal cards allows VMs with any virtual workstation profile to access CUDA features. This lets high-end applications use smaller Quadro vDWS profiles instead of requiring an entire GPU to be dedicated to a single user.
Fixed Share Round Robin Scheduling
As mentioned above, GRID uses round-robin time slicing to share the GPU across multiple VMs. If a VM doesn't have any work for the GPU, it is skipped and the time slice is given to the next VM in line. This keeps the GPU from sitting idle while other VMs can utilize it, but it also means that some VMs may get more access to the GPU than others.
NVIDIA is adding a new scheduler option in GRID 5.0 called the Fixed Share scheduler, which grants each VM placed on the GPU an equal share of its resources. Time slices are still used, but if a VM does not have any jobs for the GPU to execute, the GPU is idled during that VM's time slice.
As VMs are placed onto, or removed from, a GPU, the share of resources available to each VM is recalculated, and shares are redistributed to ensure that all VMs get equal access.
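NVIDIA's vGPU documentation describes selecting the scheduler per host by setting the RmPVMRL registry key in the graphics driver. As a rough sketch, on ESXi that looks something like the command below; the 0x11 value for the Fixed Share scheduler is an assumption here, so confirm the exact value against the GRID 5.0 documentation:

esxcli system module parameters set -m nvidia -p "NVreg_RegistryDwords=RmPVMRL=0x11"

The host needs a reboot for the module parameter to take effect.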
Enhanced Monitoring
GRID 5.0 adds new monitoring capabilities to the GRID platform. One of the new features is per-application monitoring: administrators can now view GPU utilization on a per-application basis using the nvidia-smi tool, which makes it possible to see exactly how much of the GPU's resources each application is using.
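As one example of what this looks like, nvidia-smi can enable per-process accounting and then report usage per application. This is a minimal sketch; the exact query fields, and whether accounting mode is the mechanism GRID 5.0 uses for its per-application monitoring, should be verified against NVIDIA's documentation:

nvidia-smi -am 1
nvidia-smi --query-accounted-apps=pid,gpu_utilization,mem_utilization,max_memory_usage --format=csv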
License Enforcement
In previous versions of GRID, the license server essentially acted as an auditing tool. A license was required for GRID, but the GRID features would continue to function even if the licensed quantity was exceeded. GRID 5.0 changes that. Licensing is now enforced, and if a license is not available, the GRID features will not function: users will get reduced graphics quality when they sign in to their desktops.
Because licensing is now enforced, the license server has built-in HA functionality. A secondary license server can be specified in the configuration of both the license server and the driver, and if the primary is unavailable, the driver will fall back to the secondary.
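On the driver side, for example, this is set in the guest's gridd.conf licensing file. A minimal sketch with placeholder hostnames might look like the following, where BackupServerAddress names the secondary server (verify the key names against the GRID 5.0 licensing guide):

ServerAddress=grid-license-1.example.com
BackupServerAddress=grid-license-2.example.com
ServerPort=7070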
Other Announced Features
Two GRID 5.0 features were previously announced at Citrix Synergy back in May. The first is Citrix Director support for monitoring GRID. The second is beta live migration support for XenServer.