What’s New–Horizon 7.0.2 #VMworld2016

VMware has kept a fairly steady release cadence for the Horizon Suite, with a new point release every three to six months.  These releases don't just correct bugs in the software – they add new features that help close the gap with Citrix.

The next release of Horizon doesn’t disappoint.  Despite being a dot-dot release, Horizon 7.0.2 is packed with improvements.

Some of the highlights of the release are:

Blast Improvements

  • Further enhancements to the protocol
  • Improvements in the GPU-encode/decode that significantly lower bandwidth and latency
  • Improvements in the JPG/PNG codec to reduce bandwidth utilization by 6x
  • vRealize Operations integration with Blast Extreme.  I can now see Blast statistics in the vROps console
  • UEM Smart Policies Integration with Blast.  I can now use the same PCoIP smart policies to control the Blast protocol.  This enhancement also allows administrators to set per-device policies, so I can set different policies for Windows, Mac, Android, and iOS.
  • A Raspberry Pi client

3D Graphics

  • NVIDIA M10 support for high-density graphics acceleration use cases
  • Intel vDGA support on the Skylake platform using 1:1 PCIe pass-through

Horizon RDSH

VMware has continued to close the feature gap with Citrix XenApp, and the latest release checks off a few more boxes.    The main features in this release are:

  • Real-time Audio/Video support for RDSH
  • USB Redirection for RDSH on servers running Windows Server 2012 R2
  • Parameter Passthrough to RDSH Apps – this allows administrators to create custom links that pass parameters through to the application, such as command-line switches or authentication tokens, on launch.

Remote Experience

  • Expanded Windows OS support, including support for Windows 10 LTSB, Anniversary Update, and Pro virtual desktops
  • Flash Redirection is now GA.  This allows Flash content to be redirected to the local endpoint for rendering, providing a better experience.
  • Windows Media Redirection support for Windows 10 and Server 2016
  • Windows Media MMR support for Linux-based thin clients
  • Client Drive Redirection is now supported on port 443.  Enhancements have also been made to improve performance on high-latency networks and to speed up file and folder listings
  • DPI synchronization on native Windows clients to ensure crisp rendering of the remote session
  • Enhanced clipboard with support for Microsoft Word and Excel
  • Clipboard size increased to 10 MB
  • Ability to link one smart card to multiple accounts

HTML Access Improvements

  • Time Zone Sync
  • File transfer between remote desktop and endpoint using web client
  • RTAV support for desktops and apps

What’s New in NVIDIA GRID August 2016

Over the last year, the great folks over at NVIDIA have been very busy.  Last year at this time, they announced the M6 and M60 cards, bringing the Maxwell architecture to GRID, adding support for blade server architectures, and introducing the software licensing model for the drivers.  In March, GRID 3.0 was announced, and it was largely a fix for the new licensing model.

Today, NVIDIA announced the August 2016 release of GRID.  This is the latest edition of the GRID software stack, and it coincides with the general availability of the high-density M10 card that supports up to 64 users.

So aside from the hardware, what’s new in this release?

The big addition to the GRID product line is monitoring.  In previous versions of GRID, there was a limited amount of performance data that any of the NVIDIA monitoring tools could see.  NVIDIA SMI, the hypervisor component, could only really report on the GPU core temperature and wattage, and the NVIDIA WMI counters on Windows VMs could only see framebuffer utilization.

The GRID software now exposes more performance metrics from the host and the guest VM level.  These metrics include discovery of the vGPU types currently in use on the physical card as well as utilization statistics for 3D, encode, and decode engines from the hypervisor and guest VM levels.  These stats can be viewed using the NVIDIA-SMI tool in the hypervisor or by using NVIDIA WMI in the guest OS.  This will enable 3rd-party monitoring tools, like Liquidware Stratusphere UX, to extract and analyze the performance data.  The NVIDIA SDK has been updated to provide API access to this data.
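
The vGPU-specific counters are surfaced through NVIDIA's own tools and SDK, but as a rough illustration of how a monitoring script could scrape basic GPU utilization on a host or guest, here is a minimal Python sketch that shells out to nvidia-smi.  This is an assumption-laden example rather than NVIDIA code: it only uses standard nvidia-smi query fields and assumes nvidia-smi is on the PATH.

    # Minimal sketch: poll basic GPU utilization via nvidia-smi.
    # Only standard nvidia-smi query fields are used; the vGPU-level
    # counters described above come from NVIDIA's tools and SDK.
    import csv
    import io
    import subprocess

    FIELDS = ["name", "utilization.gpu", "utilization.memory",
              "memory.used", "memory.total"]

    def query_gpus():
        out = subprocess.run(
            ["nvidia-smi",
             "--query-gpu=" + ",".join(FIELDS),
             "--format=csv,noheader,nounits"],
            check=True, capture_output=True, text=True).stdout
        reader = csv.reader(io.StringIO(out), skipinitialspace=True)
        return [dict(zip(FIELDS, row)) for row in reader]

    if __name__ == "__main__":
        for gpu in query_gpus():
            print(gpu)

A third-party monitoring product would do something similar through the updated SDK or the WMI counters rather than parsing command-line output.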

Monitoring was one of the missing pieces in the GRID stack, and the latest release addresses this.  It's now possible to see how the GPU's resources are being used and whether the correct profiles are being assigned.

The latest GRID release supports the M6, M60, and M10 cards and is available to customers who have an active software support contract with NVIDIA.  Unfortunately, the 1st-generation K1 and K2 cards are not supported.

Horizon 7.0 Part 7–Installing Composer

The last couple of posts have dealt with preparing the environment to install Horizon 7.0.  We’ve covered prerequisites, design considerations, preparing Active Directory, and even setting up the service accounts that will be used for accessing services and databases.

Now it's time to actually install and configure the Horizon View components.  These tasks will be completed in the following order:

  • Install Horizon Composer
  • Install Horizon Connection Servers
  • Configure the Environment for the first time
  • Install and Configure Remote Access Components

One note that I want to point out is that the installation process for most components has not changed significantly from previous versions.  If you’ve installed Horizon 6.x, this process will look very familiar to you.

Before we can install Composer, we need to create an ODBC Data Source to connect to the Composer database.  The database and the account for accessing the database were created in Part 6.  Composer can be installed once the ODBC data source has been created.

Composer can either be installed on your vCenter Server or on a separate Windows Server.  The first option is only available if you are using the Windows version of vCenter.  This walkthrough assumes that Composer is being installed on a separate server.

Service Account

Part 6 covers the steps for creating the Composer service account that will be used to connect Composer to vCenter.  This account will require local administrator rights on the server prior to installing Composer.

Creating the ODBC Data Source

Unfortunately, the Composer installer does not create the ODBC Data Source as part of the installation, so it will need to be created by hand before Composer can be successfully installed.  The View Composer database doesn't require any special settings in the ODBC setup, so this step is pretty easy.

The SQL Server Native Client is not bundled with the Composer installation.  Prior to configuring the ODBC Data Source, the SQL Server Native Client for your version of SQL Server will need to be installed.  The Native Client for common versions of SQL Server can be downloaded from Microsoft's website.

The SQL Server Native Client was discontinued after SQL Server 2012, and it was replaced by the Microsoft ODBC Driver for SQL Server.  I do not know if this driver is supported with Composer, and I do not have a SQL Server 2014 database server to test with.

Once the Native Client is installed, you can begin creating the ODBC Data Source.

Note: The ODBC DSN setup can be launched from within the installer, but I prefer to create the data source before starting the installer.  The steps for creating the data source are the same whether you launch the ODBC setup from the start menu or in the installer.

1. Go to Start –> Administrative Tools –> Data Sources (ODBC).  On Windows Server 2012 R2, go to Start –> All Programs –> ODBC Data Sources (64-bit)

2. Click on the System DSN tab.


3. Click Add.

4. Select the correct SQL Server Native Client and click Finish.  If your database is on SQL Server 2008 R2, the native client will be version 10.0, and if it is on SQL Server 2012 or later, the correct version of the native client is 11.0. This will launch the wizard that will guide you through setting up the data source.

5. When the Create a New Data Source wizard launches, you will need to enter a name for the data source, a description, and the name of the SQL Server where the database resides.  If you have multiple instances on your SQL Server, it should be entered as ServerName\InstanceName.  Click Next to continue.


6. Select SQL Server Authentication.  Enter the SQL Server username and password that you created in Part 6.  Click Next to continue.


7. Change the default database to the viewComposer database that you created in Part 6.  Click Next to continue.


8. Click Test Data Source to verify that your settings are correct.


9. If your database settings are correct, the test will report TESTS COMPLETED SUCCESSFULLY.  If it does not, verify that you have entered the correct username and password and that your login has the appropriate permissions on the database.  Click OK to return to the previous window.


10. Click OK to close the Data Source Administrator and return to the desktop.
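
If you want to sanity-check the new DSN outside of the ODBC administrator, a few lines of Python can confirm that the server, database, and SQL login all line up.  This is an optional sketch, not part of the Composer install, and the DSN name, login, and password below are placeholders for whatever you created above; it assumes the pyodbc module is available.

    # Optional sanity check: connect through the System DSN created above.
    # "ViewComposer", "composer_user", and the password are placeholders --
    # substitute the DSN name and SQL login you actually created.
    import pyodbc

    conn = pyodbc.connect("DSN=ViewComposer;UID=composer_user;PWD=YourPassword",
                          autocommit=True)
    row = conn.cursor().execute("SELECT DB_NAME(), @@SERVERNAME").fetchone()
    print(f"Connected to database {row[0]} on server {row[1]}")
    conn.close()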


Installing Horizon Composer

Once the database connection has been set up, Composer can be installed.  The steps for installing Composer are:

1.  Launch the Horizon 7 Composer installer.

2.  If .Net Framework 3.5 SP1 is not installed, you will be prompted to install the feature before continuing. Note: Windows Server 2012 R2 does not contain the binaries for the .Net 3.5 feature, and you need to choose an alternate source path before installing.  Please see this article from Microsoft.

3.  Click Next to continue.


4.  Accept the license agreement and click Next.


5.  Select the destination folder where Composer will be installed.


6. Configure Composer to use the ODBC data source that you set up.  You will need to enter the data source name, SQL login, and password before continuing.


7. After the data source has been configured, you will need to select the port that Composer will use for communicating with the Horizon Connection Servers. 


8. Click Use an existing SSL certificate, and then click Choose.  Select the certificate and click OK.  Click Next.


9. Click Install to start the installation.


10. Once the installation is finished, you will be prompted to restart your computer.


So now that Composer is installed, what can we do with it?  Not much at the moment.  A connection server is required to configure and use Composer for linked clone desktops, and the next post in this series will cover how to install that Connection Server.

You Got Your Nutanix in My UCS #NTC

In the 1980s, there was a commercial for Reese’s Peanut Butter cups that described the product with the following lines: “You got your chocolate in my peanut butter.  You got your peanut butter in my chocolate.”

That tagline also describes the combination of hyperconverged infrastructure and the Cisco UCS converged platform.

By combining storage and compute into the same nodes, hyperconverged infrastructure offers many benefits to customers.  These include highly performant storage, simplified management, and a reduced footprint in the datacenter.  But networking has remained an island unto itself – it's still its own silo in data centers with hyperconverged infrastructure.

Cisco’s UCS provides similar benefits by converging compute and network – simplified policy-based management combined with high performance compute and networking.

So what happens when you combine them?

You get a best-of-breed platform that brings the fully converged stack to life.  Network, storage, and compute are unified in a Nutanix-powered hyperconverged platform with policy-based hardware management through UCSM.

Today, Nutanix announced support for UCS C-series rackmount servers.  Unlike previous platform expansions, Nutanix on Cisco UCS will not be an OEM partnership.  It will be a meet-in-the-channel approach: customers buy Cisco UCS hardware and Nutanix licensing separately, and the integration takes place when the system is installed at the customer site.

Nutanix has certified some Cisco UCS C-series platforms, and they provide the certified Bill of Materials to simplify ordering with your preferred channel partner.  Nutanix is not supporting Cisco B-series blades.

The combination of Nutanix and UCS brings yet another powerful combination to customers who want to utilize the best-of-breed technologies in their data centers.

NVIDIA GRID Community Advisors Program Inaugural Class

This morning, NVIDIA announced the inaugural class of the GRID Community Advisors program.  As described in the announcement blog, the program “brings together the talents of individuals who have invested significant time and resources to become experts in NVIDIA products and solutions. Together, they give the entire NVIDIA GRID ecosystem access to product management, architects and support managers to help ensure we build the right products.”

I’m honored, and excited, to be a part of the inaugural class of the GRID Community Advisors Program along with several big names in the end-user computing and graphics virtualization fields.  The other members of this 20-person class are:

  • Durukan Artik – Dell, Turkey
  • Barry Coombs – ComputerWorld, UK
  • Tony Foster – EMC, USA, @wonder_nerd
  • Ronald Grass – Citrix, Germany
  • Richard Hoffman – Entisys, USA, @Rich_T_Hoffman
  • Magnar Johnson – Independent Consultant, Norway, @magnarjohnsen
  • Ben Jones – Ebb3, UK, @_BenWJones
  • Philip Jones – Independent Consultant, USA, @P2Vme
  • Arash Keissami – dRaster, USA, @akeissami
  • Tobias Kreidl – Northern Arizona University, USA, @tkreidl
  • Andrew Morgan – Zinopy/ControlUp, Ireland, @andyjmorgan
  • Rasmus Raun-Nielsen – Conecto A/S, Denmark, @RBRConecto
  • Soeren Reinertsen – Siemens Wind Power, Denmark
  • Marius Sandbu – BigTec / Exclusive Networks, Norway, @msandbu
  • Barry Schiffer – SLTN Inter Access, Netherlands, @barryschiffer
  • Kanishk Sethi – Koenig Solutions, India, @kanishksethi
  • Ruben Spruijt – Atlantis Computing, Netherlands, @rspruijt
  • Roy Textor – Textor IT, Germany, @RoyTextor
  • Bernhard (Benny) Tritsch – Independent Consultant, Germany, @drtritsch

Thank you to Rachel Berry for organizing this program and NVIDIA for inviting me to participate.

Horizon 7.0 Part 6–Service Accounts and Databases

Back in Part 4, I mentioned that Horizon requires a couple of service accounts to function properly.  One of these accounts is for accessing vCenter to provision and manage the virtual machines that users will connect to.  The other service account manages computer accounts within Active Directory, and it is only required if you are using Horizon Composer or Instant Clones.

In addition to these two service accounts, two database accounts may need to be created for the Horizon Composer database and the Horizon Events Database.  Edit: The supported database matrix has changed significantly since Horizon 6.2.  Please validate that your database is compatible by checking the VMware Product Interoperability Matrix.

It’s important to build these accounts with the principle of least privilege in mind.  These accounts should not have more rights than they need.  So while the easy way out would be to give these accounts vCenter Administrator, Domain Administrator, and SQL Server or Oracle SysAdmin rights, that is not a good idea, as these accounts could potentially be compromised.

vCenter Service Account

The first account that needs to be created is a service account that Horizon will use for accessing vCenter.  Horizon uses this account for provisioning new virtual desktops and performing power operations.  The service account should be a standard Active Directory domain user account without any additional administrator-level rights on the domain or on the vCenter server.

There are a couple of different ways to configure your Horizon environment, so the actual rights required in vCenter will vary.  The specific permissions that are required can be found in the Configuring User Accounts for vCenter Server and View Composer section of the Horizon 7 documentation.

A new role will need to be created within vCenter in order to assign the appropriate permissions.  To create a new role in the vCenter Web Client, you need to go to Administration –> Roles from the main page.  This will bring up the roles page, and we can create a new role from here by clicking on the green plus sign.


For the purposes of this walkthrough, I’ll be setting up my service account with permissions to deploy linked clone desktops using Horizon Composer.  The permissions that need to be assigned to our new role are:

  • Datastore – Allocate Space, Browse Datastore, Low Level File Operations
  • Folder – Create Folder, Delete Folder
  • Virtual Machine – Configuration (All Items), Inventory (All Items), Snapshot Management (All Items), Interaction (Power On, Power Off, Reset, Suspend), Provisioning (Customizing, Deploy Template, Read Customization Spec, Clone Virtual Machine, Allow Disk Access)
  • Resource – Assign Virtual Machine to Resource Pool, Migrate Powered-Off Virtual Machine
  • Global – Enable Methods, Disable Methods, System Tag, Act As vCenter (see Note 1)
  • Network – All
  • Host – Configuration: Advanced Settings (see Note 1)

Note 1: The Act As vCenter and Host Advanced Settings privileges are only needed if the View Storage Accelerator is used.  If this feature is not used, these permissions are not required.

After the role has been created, we will need to assign permissions for our vCenter Server service account to the vCenter root.  To do this from the roles screen, you will need to go back to the vCenter Web Client Home screen and take the following steps:

  1. Select vCenter
  2. Select vCenter Servers under Inventory Lists
  3. Select the vCenter that you wish to grant permissions on
  4. Click on the Manage Tab
  5. Click Permissions
  6. Click the Green Plus Sign to add a new permission
  7. Select the role for Horizon Composer
  8. Add the Domain User who should be assigned the role
  9. Click OK.
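
If you prefer to script this, the same role creation and permission assignment can be done through the vSphere API.  The sketch below uses pyvmomi; treat it as a hedged example rather than a definitive implementation – the privilege IDs shown cover only part of the table above, and the vCenter address, credentials, role name, and service account are placeholders.

    # Sketch: create a Horizon role and grant it to the service account at
    # the vCenter root using pyvmomi. The privilege IDs are a partial,
    # illustrative set -- verify the full list against the Horizon docs.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    PRIVS = [
        "Datastore.AllocateSpace", "Datastore.Browse", "Datastore.FileManagement",
        "Folder.Create", "Folder.Delete",
        "VirtualMachine.Interact.PowerOn", "VirtualMachine.Interact.PowerOff",
        "VirtualMachine.Interact.Reset", "VirtualMachine.Interact.Suspend",
        "Resource.AssignVMToPool", "Resource.ColdMigrate",
        "Network.Assign",
    ]

    si = SmartConnect(host="vcenter.example.com",          # placeholder
                      user="administrator@vsphere.local",  # placeholder
                      pwd="VMware1!",                       # placeholder
                      sslContext=ssl._create_unverified_context())
    try:
        content = si.RetrieveContent()
        authz = content.authorizationManager

        # Create the role with the selected privileges.
        role_id = authz.AddAuthorizationRole(name="Horizon Service", privIds=PRIVS)

        # Grant the role to the service account at the vCenter root and propagate.
        perm = vim.AuthorizationManager.Permission(
            principal="DOMAIN\\svc-horizon-vcenter",  # placeholder service account
            group=False, roleId=role_id, propagate=True)
        authz.SetEntityPermissions(entity=content.rootFolder, permission=[perm])
    finally:
        Disconnect(si)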


Horizon Events Database Account

The Events Database is a repository for events that happen within the Horizon environment.  Some examples of events that are recorded include logon and logoff activity and Composer errors.

The Events Database requires a Microsoft SQL Server or Oracle database server, and it should be installed on an existing production database server.  There are two parts to configuring the events database.  The first part, creating the database and the database user, needs to be done in SQL Server Management Studio before the event database can be configured in Horizon Administrator.  The steps for configuring Horizon to use the Events database will happen in another post.

Note: Horizon also supports sending event data off to a syslog server.  This can be used in place of an events database.  Configuring a syslog server is beyond the scope of this article.

To set up the database, follow these steps:

1. Open SQL Server Management Studio and log in with an account that has permissions to create users and databases.

2. Expand Security –> Logins.

3. Right-click on Logins and Select New Login…


4. Enter the SQL Login Name and Password and then click OK.


5. Expand Databases.

6. Right-click on Databases and select New Database.

7. Enter the database name.  Select the database user that you created above as the database owner.  Click OK to create the database.


Note: SQL Server named instances are configured to use dynamic ports by default.  This means that SQL Server may use a new port every time the service is restarted.  The events database does not support dynamic ports, so a static port will need to be configured and the SQL instance restarted prior to configuring the events database in Horizon.  For instructions on how to configure a static port in SQL Server, please see this article.
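
For reference, the same login and database can also be created with a couple of T-SQL statements instead of clicking through Management Studio.  The sketch below is optional and uses placeholder names, a placeholder password, and an administrative connection string; it assumes the pyodbc module and the SQL Server Native Client are installed.

    # Optional: create the events database login and database with T-SQL.
    # Server, driver, names, and passwords are placeholders -- adjust them
    # to match your environment.
    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={SQL Server Native Client 11.0};SERVER=sqlserver.example.com;"
        "DATABASE=master;UID=sql_admin;PWD=AdminPassword", autocommit=True)
    cur = conn.cursor()
    cur.execute("CREATE LOGIN HorizonEventsUser WITH PASSWORD = 'ChangeMe123!'")
    cur.execute("CREATE DATABASE HorizonEvents")
    # Make the new login the owner of the new database, mirroring step 7.
    cur.execute("ALTER AUTHORIZATION ON DATABASE::HorizonEvents TO HorizonEventsUser")
    conn.close()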

Active Directory Provisioning Account

The Active Directory Provisioning Service account is used by Horizon to manage the computer accounts that are created for Instant Clone and Linked Clone desktops.

This account can be created as a standard domain user, and it should not have domain administrator or account operator rights – it only needs a select group of permissions on the OU (or OUs) where the virtual desktop computer accounts will be placed.

After this account has been created, you need to delegate permissions to it on the OU (or OUs) where your VDI desktops will be placed.  If you use a structure like the one I outlined in Part 4, you only need to delegate permissions on the top-level OU; permission inheritance, if turned on, will apply them to any child or grandchild objects beneath it.

Note:  If inheritance is not turned on, you will need to check the Apply to All Child Objects checkbox before applying the permissions.

The permissions that need to be delegated on the OU are:

  • Create Computer Objects
  • Delete Computer Objects
  • Write All Properties
  • Reset Password

Note: Although granting this account Domain Administrator or Account Operator permissions may seem like an easy way to grant it the permissions it needs, it will grant a number of other permissions that are not needed and could pose a security risk if that account is compromised.  Only the required permissions should be granted in a production environment.

Horizon Composer Service Account

The last two accounts that need to be set up are for Horizon Composer.  These accounts are only required if you plan on using Composer and linked clone desktops.

I recommend two accounts for Composer.  These accounts are:

1. A Composer Service Account – This service account is used by Horizon to connect to Composer.  It is a standard Active Directory user account that requires administrator rights on the Composer server.  This account is only required if Composer is not installed on the vCenter Server.

2. A Horizon Composer Database User – This service account is a local SQL Server user account and is required if the SQL Server database is located on a remote server.  If SQL Server is installed on the Composer Server, Windows authentication can be used.

Configuring the Composer Database and Database Service Account

Like the Event database above, Composer requires its own database.  This database is used to keep track of linked clones, replicas, and pending recompose operations.

The steps below will walk through setting up the Composer database.  If your Composer database is located on a separate server, you will have to use SQL authentication, and the steps for creating the SQL user are included.

Note: If your Composer database is located on the same server as the Composer service, you can use Windows Authentication for accessing the database.

1. Log into your database server and open SQL Server Management Studio.


2. Log in as a user with administrator rights on SQL Server.

3. Create a new SQL Login by expanding Security –> Logins.  Right click on Logins and select New Login.


4. Enter a login name such as HorizonComposerDB or HorizonComposerUser, select SQL Server Authentication, and enter a password twice.  You may also need to disable Enforce Password Expiration or Enforce Password Policy depending on your environment.  Click OK to create the account.  Note: Check with your DBA on password policy settings and requirements.  In the absence of existing policies, I recommend disabling Password Expiration and Password Policy requirements on this account because an expired SQL User password will break the environment.  There is a VMware KB on how to change the database user password, but I would recommend avoiding that issue entirely.


5. After the SQL login is created, you need to create an empty database.  To create the database, right click on the database folder and select New Database.


6. In the database name field, enter a name such as HorizonComposer.  This will be the name of the database.  To select an owner for the database, click on the … button and search for the database user account you created above.  Click OK to create the database.


You will have a blank database that you can use for Composer after you click OK.
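
As with the events database, the Composer login and database can be created with T-SQL instead of the GUI.  This hedged sketch mirrors steps 3 through 6, including the password policy options discussed in step 4; the names and password are placeholders, so check them against your own standards before using anything like this.

    # Optional T-SQL equivalent of steps 3-6 above, including the
    # CHECK_POLICY / CHECK_EXPIRATION options from step 4. All names and
    # passwords are placeholders.
    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={SQL Server Native Client 11.0};SERVER=sqlserver.example.com;"
        "DATABASE=master;UID=sql_admin;PWD=AdminPassword", autocommit=True)
    cur = conn.cursor()
    cur.execute("CREATE LOGIN HorizonComposerDB WITH PASSWORD = 'ChangeMe123!', "
                "CHECK_POLICY = OFF, CHECK_EXPIRATION = OFF")
    cur.execute("CREATE DATABASE HorizonComposer")
    cur.execute("ALTER AUTHORIZATION ON DATABASE::HorizonComposer TO HorizonComposerDB")
    conn.close()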

Configuring Composer to use this database will be covered during the Composer installation.

This wraps up all of the prerequisites for the environment.  In the next couple of posts, I will be covering the installation and configuration of VMware Horizon.

#GRIDDays Followup – Understanding NVIDIA GRID vGPU Part 1

Author Note: This post has been a few months in the making.  While GRIDDays was back in March, I’ve had a few other projects that have kept this on the sidelines until now.  This is Part 1.  Part 2 will be coming at some point in the future.  I figured 1200 words on this was good enough for one chunk.

The general rule of thumb is that if a virtual desktop requires some dedicated hardware – examples include serial devices, hardware license dongles, and physical cards – it’s probably not a good fit to be virtualized.  This was especially true of workloads that required high-end 3D acceleration.  If a virtual workload required 3D graphics, multiple high-end Quadro cards had to be installed in the server and then passed through to the virtual machines that required them.

Since pass-through GPUs can’t be shared amongst VMs, this design doesn’t scale well.  There is a limit to the number of cards I can install in a host, and that limits the number of 3D workloads I can run.  If I need more, I have to add hosts.  It also limits flexibility in the environment, as VMs with pass-through hardware can’t easily be moved to a new host when maintenance is needed or a hardware failure occurs.

NVIDIA created the GRID products to address the challenges of GPU virtualization.  GRID technology combines purpose-built graphics hardware, software, and drivers to allow multiple virtual machines to access a GPU. 

I’ve always wondered how it worked, and how it ensured that all configured VMs had equal access to the GPU.  I had the opportunity to learn about the technology and the underlying concepts a few weeks ago at NVIDIA GRID Days. 

Disclosure: NVIDIA paid for my travel, lodging, and some of my meals while I was out in Santa Clara.  This has not influenced the content of this post.

Note:  All graphics in this post are courtesy of NVIDIA.

How it Works – Hardware Layer

So how does a GRID card work?  In order to understand it, we have to start with the hardware.  A GRID card is a PCIe card with multiple GPUs on the board.  The hardware includes the same features that many of the other NVIDIA products have, including framebuffer (often referred to as video memory), graphics compute cores, and hardware dedicated to video encode and decode.


Interactions between an operating system and a PCIe hardware device happen through the base address register.  Base address registers are used to hold memory addresses used by a physical device.  Virtual machines don’t have full access to the GPU hardware, so they are allocated a subset of the GPU’s base address registers for communication with the hardware.  This is called a virtual BAR. 


Access to the GPU Base Address Registers, and by extension the Virtual BAR, is handled through the CPU’s Memory Management Unit.  The MMU handles the translation of the virtual BAR memory addresses into the corresponding physical memory addresses used by the GPU’s BAR.  The translation is facilitated by page tables managed by the hypervisor.

The benefit of the virtual BAR and hardware-assisted translation is security.  VMs can only access the registers that they are assigned, and they cannot access any locations outside of their virtual BAR.


The architecture described above – assigning a virtual base address register space that corresponds to a subset of the physical base address registers – allows multiple VMs to securely share one physical hardware device.  But that’s only one part of the story.  How does work actually get from the guest OS driver to the GPU?  And how does the GPU actually manage workloads from multiple VMs?

When the NVIDIA driver submits a job or workload to the GPU, it gets placed into a channel.  A channel is essentially a queue or a line that is exposed through each VM’s virtual BAR.  Each GPU has a fixed number of channels available, and channels are allocated to each VM by dividing the total number of channels by the number of users that can utilize a profile.  So if I’m using a profile that can support 16 VMs per GPU, each VM would get 1/16th of the channels. 

When a virtual desktop user opens an application that requires resources on the GPU, the NVIDIA driver in the VM will dedicate a channel to that application.  When that application needs the GPU to do something, the NVIDIA driver will submit that job to channels allocated to the application on the GPU through the virtual BAR.


So now that the job is queued up for execution, something needs to get it into the GPU.  That job is handled by the scheduler.  The scheduler will move work from active channels onto the GPU engines.  The GPU has four engines for handling different tasks – graphics compute, video encode, video decode, and copy.  The GPU engines are timeshared (more on that below), and they execute jobs in parallel.

When active jobs are placed on an engine, they are executed sequentially.  When a job is completed, the NVIDIA driver is signaled that the work has been completed, and the scheduler loads the next job onto the engine to begin processing.


Scheduling

There are two types of scheduling in the computing world – sequential and parallel.  When sequential scheduling is used, a single processor executes each job that it receives in order.  When it completes one job, it moves on to the next.  This allows a single fast processor to move quickly through jobs, but a complex job can cause a backup and delay the execution of waiting jobs.

Parallel scheduling uses multiple processors to execute jobs at the same time.  When a job on one processor completes, it moves the next job in line onto the processor.  Individually, these processors are too slow to handle a complex job.  But they prevent a single job from clogging the pipeline.

A good analogy is the checkout lane at a department store.  The cashier (and register) is the processor, and each customer is a job that needs to be executed.  Customers are queued up in line, and as the cashier finishes checking out one customer, the next customer in the queue moves up.  The cashier can usually process customers efficiently and keep the line moving, but if a customer with 60 items walks into the 20-items-or-less lane, it backs up the line and prevents others from checking out.

This example works for parallel execution as well.  Imagine that same department store at Christmas.  Every cash register is open, and there is a person at the front of the line directing where people go.  This person is the scheduler, and they place customers (jobs) on registers (GPU engines) as soon as each register has finished with its previous customer.

Graphics Scheduling

So how does GRID ensure that all VMs have equal access to the GPU engines?  How does it prevent one VM from hogging all the resources on a particular engine?

The answer comes in the way that the scheduler works.  The scheduler uses a method called round-robin time slicing.  Round-robin time slicing works by giving each channel a small amount of time on a GPU engine.  The channel has exclusive access to the GPU engine until the timeslice expires or until there are no more work items in the channel.

If all of the work in a channel is completed before the timeslice expires, any spare cycles are redistributed to other channels or VMs.  This ensures that the GPU isn’t sitting idle while jobs are queued in other channels.
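
To make the behavior concrete, here is a toy Python model of round-robin time slicing – it is not NVIDIA’s implementation, just a sketch of the idea described above, with made-up time-slice and job-cost numbers.

    # Toy model of round-robin time slicing across vGPU channels on one
    # engine. Purely illustrative; the numbers and structure are invented.
    from collections import deque

    TIME_SLICE = 2  # arbitrary units of engine time per turn

    # Each channel holds a queue of pending job costs (same arbitrary units).
    channels = {
        "VM1": deque([3, 1]),
        "VM2": deque([6]),
        "VM3": deque([1]),
    }

    ring = deque(channels)  # round-robin order of channel owners
    clock = 0
    while any(channels.values()):
        vm = ring[0]
        ring.rotate(-1)
        budget = TIME_SLICE
        # A channel keeps the engine until its slice expires or it runs out
        # of work; an empty channel yields its turn immediately, so unused
        # time is effectively given back to the other channels.
        while budget > 0 and channels[vm]:
            work = min(budget, channels[vm][0])
            channels[vm][0] -= work
            budget -= work
            clock += work
            if channels[vm][0] == 0:
                channels[vm].popleft()
                print(f"t={clock}: {vm} finished a job")

Running the model shows short jobs from one VM completing in between slices of a longer job from another VM, which is the fairness property the scheduler is designed to provide.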

The next part of the Understanding vGPU series will cover memory management on the GRID cards.