You Got Your Nutanix in My UCS #NTC

In the 1980s, there was a commercial for Reese’s Peanut Butter cups that described the product with the following lines: “You got your chocolate in my peanut butter.  You got your peanut butter in my chocolate.”

That tagline could just as easily describe hyperconverged infrastructure and the Cisco UCS converged platform.

By combining storage and compute into the same nodes, a hyperconverged infrastructure offers many benefits for customers.  These include highly performant storage, simplified management, and a reduced footprint in the datacenter.  But networking has remained an island unto itself, and it is still its own silo in data centers with hyperconverged infrastructure.

Cisco’s UCS provides similar benefits by converging compute and network – simplified policy-based management combined with high performance compute and networking.

So what happens when you combine them?

You get a best-of-breed platform that brings the fully converged stack to life.  Network, storage, and compute are unified in a Nutanix-powered hyperconverged platform running with policy-based hardware management through UCSM.

Today, Nutanix announced support for UCS C-series rackmount servers.  Unlike the previous platform expansions, Nutanix on Cisco UCS will not be an OEM partnership.  It will be a meet-in-the-channel approach where customers buy Cisco UCS hardware and Nutanix licensing separately, and the integration takes place when the system is installed at the customer site.

Nutanix has certified some Cisco UCS C-series platforms, and it provides a certified Bill of Materials to simplify ordering with your preferred channel partner.  Nutanix is not supporting Cisco B-series blades.

The combination of Nutanix and UCS brings yet another powerful combination to customers who want to utilize the best-of-breed technologies in their data centers.

NVIDIA GRID Community Advisors Program Inaugural Class

This morning, NVIDIA announced the inaugural class of the GRID Community Advisors program.  As described in the announcement blog, the program “brings together the talents of individuals who have invested significant time and resources to become experts in NVIDIA products and solutions. Together, they give the entire NVIDIA GRID ecosystem access to product management, architects and support managers to help ensure we build the right products.”

I’m honored, and excited, to be a part of the inaugural class of the GRID Community Advisors Program along with several big names in the end-user computing and graphics virtualization fields.  The other members of this 20-person class are:

  • Durukan Artik – Dell, Turkey
  • Barry Coombs – ComputerWorld, UK
  • Tony Foster – EMC, USA, @wonder_nerd
  • Ronald Grass – Citrix, Germany
  • Richard Hoffman – Entisys, USA, @Rich_T_Hoffman
  • Magnar Johnson – Independent Consultant, Norway, @magnarjohnsen
  • Ben Jones – Ebb3, UK, @_BenWJones
  • Philip Jones – Independent Consultant, USA, @P2Vme
  • Arash Keissami – dRaster, USA, @akeissami
  • Tobias Kreidl – Northern Arizona University, USA, @tkreidl
  • Andrew Morgan – Zinopy/ControlUp, Ireland, @andyjmorgan
  • Rasmus Raun-Nielsen – Conecto A/S, Denmark, @RBRConecto
  • Soeren Reinertsen – Siemens Wind Power, Denmark
  • Marius Sandbu – BigTec / Exclusive Networks, Norway, @msandbu
  • Barry Schiffer – SLTN Inter Access, Netherlands, @barryschiffer
  • Kanishk Sethi – Koenig Solutions, India, @kanishksethi
  • Ruben Spruijt – Atlantis Computing, Netherlands, @rspruijt
  • Roy Textor – Textor IT, Germany, @RoyTextor
  • Bernhard (Benny) Tritsch – Independent Consultant, Germany, @drtritsch

Thank you to Rachel Berry for organizing this program and NVIDIA for inviting me to participate.

Horizon 7.0 Part 6–Service Accounts and Databases

Back in Part 4, I mentioned that Horizon requires a few service accounts to function properly.  One of these accounts is used for accessing vCenter to provision and manage the virtual machines that users will connect to.  The other service account manages computer accounts within Active Directory, and it is only required if you are using Horizon Composer or Instant Clones.

In addition to these two service accounts, two database accounts may need to be created for the Horizon Composer database and the Horizon Events Database.  Edit: The supported database matrix has changed significantly since Horizon 6.2.  Please validate that your database is compatible by checking the VMware Product Interoperability Matrix.

It’s important to build these accounts with the principle of least privilege in mind.  These accounts should not have more rights than they need.  So while the easy way out would be to give these accounts vCenter Administrator, Domain Administrator, and SQL Server or Oracle SysAdmin rights, that would not be a good idea because these accounts could potentially be compromised.

vCenter Service Account

The first account that needs to be created is a service account that Horizon will use for accessing vCenter.  Horizon uses this account for provisioning new virtual desktops and performing power operations.  The service account should be a standard Active Directory domain user account without any additional administrator-level rights on the domain or on the vCenter server.

There are a couple of different ways to configure your Horizon environment, so the actual rights required in vCenter will vary.  The specific permissions that are required can be found in the Configuring User Accounts for vCenter Server and View Composer section of the Horizon 7 documentation.

A new role will need to be created within vCenter in order to assign the appropriate permissions.  To create a new role in the vCenter Web Client, you need to go to Administration –> Roles from the main page.  This will bring up the roles page, and we can create a new role from here by clicking on the green plus sign.


For the purposes of this walkthrough, I’ll be setting up my service account with permissions to deploy linked clone desktops using Horizon Composer.  The permissions that need to be assigned to our new role are:

  • Datastore: Allocate Space, Browse Datastore, Low Level File Operations
  • Folder: Create Folder, Delete Folder
  • Virtual Machine: Configuration (All Items), Inventory (All Items), Snapshot Management (All Items), Interaction (Power On, Power Off, Reset, Suspend), Provisioning (Customizing, Deploy Template, Read Customization Spec, Clone Virtual Machine, Allow Disk Access)
  • Resource: Assign Virtual Machine to Resource Pool, Migrate Powered-Off Virtual Machine
  • Global: Enable Methods, Disable Methods, System Tag, Act As vCenter (see Note 1)
  • Network: All
  • Host: Configuration (Advanced Settings, see Note 1)

Note 1: Act As vCenter and Host Advanced Settings are only needed if the View Storage Accelerator is used.  If that feature is not used, these permissions are not required.
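
If you prefer to script this step, the role can also be created through the vSphere API.  The snippet below is a minimal sketch using pyVmomi rather than the vSphere Web Client; the vCenter address, credentials, and role name are placeholders, and the privilege IDs shown are examples that should be checked against the list above and the Horizon documentation for your version.

# Hypothetical sketch: create the Horizon service account role via the vSphere API.
from pyVim.connect import SmartConnect, Disconnect
import ssl

ctx = ssl._create_unverified_context()            # lab shortcut; use valid certificates in production
si = SmartConnect(host="vcenter.example.com",     # placeholder vCenter address
                  user="administrator@vsphere.local",
                  pwd="VMware1!",
                  sslContext=ctx)

# A few example privilege IDs that correspond to entries in the list above.
privileges = [
    "Datastore.AllocateSpace",
    "Datastore.Browse",
    "Datastore.FileManagement",   # Low Level File Operations
    "Folder.Create",
    "Folder.Delete",
    # ...add the remaining Virtual Machine, Resource, Global, Network, and Host privileges...
]

auth_mgr = si.content.authorizationManager
role_id = auth_mgr.AddAuthorizationRole(name="Horizon Service Account", privIds=privileges)
print("Created role with ID", role_id)
Disconnect(si)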

After the role has been created, we will need to assign permissions for our vCenter Server service account to the vCenter root.  To do this from the roles screen, you will need to go back to the vCenter Web Client Home screen and take the following steps:

  1. Select vCenter
  2. Select vCenter Servers under Inventory Lists
  3. Select the vCenter that you wish to grant permissions on
  4. Click on the Manage Tab
  5. Click Permissions
  6. Click the Green Plus Sign to add a new permission
  7. Select the role for Horizon Composer
  8. Add the Domain User who should be assigned the role
  9. Click OK.


Horizon Events Database Account

The Events Database is a repository for events that happen within the Horizon environment.  Some examples of events that are recorded include logon and logoff activity and Composer errors.

The Events Database requires a Microsoft SQL Server or Oracle database server, and it should be installed on an existing production database server.  There are two parts to configuring the events database.  The first part, creating the database and the database user, needs to be done in SQL Server Management Studio before the events database can be configured in Horizon Administrator.  The steps for configuring Horizon to use the Events database will be covered in another post.

Note: Horizon also supports sending event data off to a syslog server.  This can be used in place of an events database.  Configuring a syslog server is beyond the scope of this article.

To set up the database, follow these steps:

1. Open SQL Server Management Studio and log in with an account that has permissions to create users and databases.

2. Expand Security –> Logins.

3. Right-click on Logins and Select New Login…


4. Enter the SQL Login Name and Password and then click OK.


5. Expand Databases.

6. Right-click on Databases and select New Database.

7. Enter the database name.  Select the database user that you created above as the database owner.  Click OK to create the database.


Note: SQL Server named instances are configured to use dynamic ports by default.  This means that SQL Server may use a different port every time the service is restarted.  The events database does not support dynamic ports, so a static port will need to be configured, and the SQL instance restarted, prior to configuring the events database in Horizon.  For instructions on how to configure a static port in SQL Server, please see this article.
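
If you would rather script the database creation than click through SQL Server Management Studio, the same steps can be expressed as T-SQL.  The sketch below runs that T-SQL through Python and pyodbc purely as an illustration; the server name, database name, login name, password, and ODBC driver string are placeholders, and your DBA's naming and password standards should take precedence.

# Hypothetical sketch: create the Events database login and database with T-SQL via pyodbc.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=sqlserver.example.com;"        # placeholder database server
    "Trusted_Connection=yes;",
    autocommit=True,                       # CREATE DATABASE cannot run inside a transaction
)
cur = conn.cursor()

# Steps 2-4: create the SQL login
cur.execute("CREATE LOGIN HorizonEventsUser WITH PASSWORD = 'ChangeMe-123!';")

# Steps 5-7: create the empty database and make the new login its owner
cur.execute("CREATE DATABASE HorizonEvents;")
cur.execute("ALTER AUTHORIZATION ON DATABASE::HorizonEvents TO HorizonEventsUser;")

conn.close()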

Active Directory Provisioning Account

The Active Directory Provisioning Service account is used by Horizon to manage the computer accounts that are created for Instant Clone and Linked Clone desktops.

This account can be created as a standard domain user, and it should not have domain administrator or account operator rights – it only needs a select group of permissions on the OU (or OUs) where the virtual desktop computer accounts will be placed.

After this account has been created, you need to delegate permissions to it on the OU (or OUs) where your VDI desktops will be placed.  If you use a structure like the one I outlined in Part 4, you only need to delegate permissions on the top-level OU, and permission inheritance, if turned on, will apply them to any child or grandchild objects beneath it.

Note:  If inheritance is not turned on, you will need to check the Apply to All Child Objects checkbox before applying the permissions.

The permissions that need to be delegated on the OU are:

  • Create Computer Objects
  • Delete Computer Objects
  • Write All Properties
  • Reset Password

Note: Although granting this account Domain Administrator or Account Operator permissions may seem like an easy way to grant it the permissions it needs, it will grant a number of other permissions that are not needed and could pose a security risk if that account is compromised.  Only the required permissions should be granted in a production environment.

Horizon Composer Service Account

The last two accounts that need to be set up are for Horizon Composer.  These accounts are only required if you plan on using Composer and linked clone desktops.

I recommend two accounts for Composer.  These accounts are:

1. A Composer Service Account – This service account is used by Horizon to connect to Composer.  It is a standard Active Directory user account that requires administrator rights on the Composer server.  This account is only required if Composer is not installed on the vCenter Server.

2. A Horizon Composer Database User – This service account is a local SQL Server user account and is required if the SQL Server database is located on a remote server.  If SQL Server is installed on the Composer Server, Windows authentication can be used.

Configuring the Composer Database and Database Service Account

Like the Events database above, Composer requires its own database.  This database is used to keep track of linked clones, replicas, and pending recompose operations.

The steps below will walk through setting up the Composer database.  If your Composer database is located on a separate server, you will have to use SQL authentication, and the steps for creating the SQL user are included.

Note: If your Composer database is located on the same server as the Composer service, you can use Windows Authentication for accessing the database.

1. Log into your database server and open SQL Server Management Studio.


2. Log in as a user with administrator rights on SQL Server.

3. Create a new SQL Login by expanding Security –> Logins.  Right click on Logins and select New Login.


4. Enter a login name such as HorizonComposerDB or HorizonComposerUser, select SQL Server Authentication, and enter a password twice.  You may also need to disable Enforce Password Expiration or Enforce Password Policy depending on your environment.  Click OK to create the account.  Note: Check with your DBA on password policy settings and requirements.  In the absence of existing policies, I recommend disabling Password Expiration and Password Policy requirements on this account because an expired SQL User password will break the environment.  There is a VMware KB on how to change the database user password, but I would recommend avoiding that issue entirely.


5. After the SQL login is created, you need to create an empty database.  To create the database, right click on the database folder and select New Database.


6. In the database name field, enter a name such as HorizonComposer.  This will be the name of the database.  To select an owner for the database, click on the … button and search for the database user account you created above.  Click OK to create the database.


You will have a blank database that you can use for Composer after you click OK.

Configuring Composer to use this database will be covered during the Composer installation.

This wraps up all of the prerequisites for the environment.  In the next couple of sections, I will be covering the installation and configuration of VMware Horizon.

#GRIDDays Followup – Understanding NVIDIA GRID vGPU Part 1

Author Note: This post has been a few months in the making.  While GRIDDays was back in March, I’ve had a few other projects that have kept this on the sidelines until now.  This is Part 1.  Part 2 will be coming at some point in the future.  I figured 1200 words on this was good enough for one chunk.

The general rule of thumb is that if a desktop requires some dedicated hardware – examples include serial devices, hardware license dongles, and physical cards – it’s probably not a good fit to be virtualized.  This was especially true of workloads that required high-end 3D acceleration.  If a virtual workload required 3D graphics, multiple high-end Quadro cards had to be installed in the server and then passed through to the virtual machines that required them.

Since pass-through GPUs can’t be shared amongst VMs, this design doesn’t scale well.  There is a limit to the number of cards I can install in a host, and that limits the number of 3D workloads I can run.  If I need more, I have to add hosts.  It also limits flexibility in the environment, as VMs with pass-through hardware can’t easily be moved to a new host if maintenance is needed or a hardware failure occurs.

NVIDIA created the GRID products to address the challenges of GPU virtualization.  GRID technology combines purpose-built graphics hardware, software, and drivers to allow multiple virtual machines to access a GPU. 

I’ve always wondered how it worked, and how it ensured that all configured VMs had equal access to the GPU.  I had the opportunity to learn about the technology and the underlying concepts a few weeks ago at NVIDIA GRID Days. 

Disclosure: NVIDIA paid for my travel, lodging, and some of my meals while I was out in Santa Clara.  This has not influenced the content of this post.

Note:  All graphics in this post are courtesy of NVIDIA.

How it Works – Hardware Layer

So how does a GRID card work?  In order to understand it, we have to start with the hardware.  A GRID card is a PCIe card with multiple GPUs on the board.  The hardware includes the same features that many of the other NVIDIA products have including framebuffer (often referred to as video memory), graphics compute cores, and hardware dedicated to video encode and decode. 


Interactions between an operating system and a PCIe hardware device happen through the base address register.  Base address registers are used to hold memory addresses used by a physical device.  Virtual machines don’t have full access to the GPU hardware, so they are allocated a subset of the GPU’s base address registers for communication with the hardware.  This is called a virtual BAR. 


Access to the GPU Base Address Registers, and by extension the Virtual BAR, is handled through the CPU’s Memory Management Unit.  The MMU handles the translation of the virtual BAR memory addresses into the corresponding physical memory addresses used by the GPU’s BAR.  The translation is facilitated by page tables managed by the hypervisor.

The benefit of the virtual BAR and hardware-assisted translation is security.  VMs can only access the registers that they are assigned, and they cannot access any locations outside of their virtual BAR.
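
To make that a little more concrete, here is a toy illustration (my own sketch, not NVIDIA code) of how a per-VM virtual BAR can map onto a slice of the physical BAR and why accesses outside the assigned range are rejected.  The base address and slice size are made up for the example.

# Toy model of virtual BAR to physical BAR translation.
PHYS_BAR_BASE = 0xF0000000
SLICE_SIZE = 0x01000000                  # pretend each VM gets a 16 MB window

def make_virtual_bar(vm_index):
    phys_base = PHYS_BAR_BASE + vm_index * SLICE_SIZE
    def translate(offset):
        # A VM can only touch offsets inside its own window.
        if not 0 <= offset < SLICE_SIZE:
            raise PermissionError("access outside this VM's virtual BAR")
        return phys_base + offset
    return translate

vm0 = make_virtual_bar(0)
vm1 = make_virtual_bar(1)
print(hex(vm0(0x10)))    # 0xf0000010
print(hex(vm1(0x10)))    # 0xf1000010 - same virtual offset, different physical register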


The architecture described above – assigning each VM a virtual base address register space that corresponds to a subset of the physical base address registers – allows multiple VMs to securely share one physical hardware device.  But that’s only one part of the story.  How does work actually get from the guest OS driver to the GPU?  And how does the GPU actually manage workloads from multiple VMs?

When the NVIDIA driver submits a job or workload to the GPU, it gets placed into a channel.  A channel is essentially a queue or a line that is exposed through each VM’s virtual BAR.  Each GPU has a fixed number of channels available, and channels are allocated to each VM by dividing the total number of channels by the number of users that can utilize a profile.  So if I’m using a profile that can support 16 VMs per GPU, each VM would get 1/16th of the channels. 

When a virtual desktop user opens an application that requires resources on the GPU, the NVIDIA driver in the VM will dedicate a channel to that application.  When that application needs the GPU to do something, the NVIDIA driver will submit that job to channels allocated to the application on the GPU through the virtual BAR.


So now that the application’s work is queued up, something needs to get it into the GPU for execution.  That job is handled by the scheduler.  The scheduler moves work from active channels onto the GPU engines.  The GPU has four engines for handling different tasks – graphics compute, video encode, video decode, and a copy engine.  The GPU engines are timeshared (more on that below), and they execute jobs in parallel.

When active jobs are placed on an engine, they are executed sequentially.  When a job is completed, the NVIDIA driver is signaled that the work has been completed, and the scheduler loads the next job onto the engine to begin processing.


Scheduling

There are two types of scheduling in the computing world – sequential and parallel.  When sequential scheduling is used, a single processor executes each job that it receives in order.  When it completes that job, it moves onto the next.  This can allow a single fast processor to quickly move through jobs, but complex jobs can cause a backup and delay the execution of waiting jobs.

Parallel scheduling uses multiple processors to execute jobs at the same time.  When a job on one processor completes, it moves the next job in line onto the processor.  Individually, these processors are too slow to handle a complex job.  But they prevent a single job from clogging the pipeline.

A good analogy to this would be the checkout lane at a department store.  The cashier (and register) is the processor, and each customer is a job that needs to be executed.  Customers are queued up in line, and as the cashier finishes checking out one customer, the next customer in the queue is moved up.  The cashier can usually process customers efficiently and keep the line moving, but if a customer with 60 items walks into the 20-items-or-less lane, it will back up the line and prevent others from checking out.

This example works for parallel execution as well.  Imagine that same department store at Christmas.  Every cash register is open, and there is a person at the front of the line directing where people go.  This person is the scheduler, and they are placing customers (jobs) on registers  (GPU engines) as soon as they have finished with their previous customer.

Graphics Scheduling

So how does GRID ensure that all VMs have equal access to the GPU engines?  How does it prevent one VM from hogging all the resources on a particular engine?

The answer comes in the way that the scheduler works.  The scheduler uses a method called round-robin time slicing.  Round-robin time slicing works by giving each channel a small amount of time on a GPU engine.  The channel has exclusive access to the GPU engine until the timeslice expires or until there are no more work items in the channel.

If all of the work in a channel is completed before the timeslice expires, any spare cycles are redistributed to other channels or VMs.  This ensures that the GPU isn’t sitting idle while jobs are queued in other channels.
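
To illustrate that behavior, here is a small round-robin time-slicing simulation.  This is a conceptual sketch only, not NVIDIA’s scheduler; the channel names, job costs, and timeslice value are invented for the example.

# Toy round-robin time-sliced scheduler.
from collections import deque

def round_robin(channels, timeslice):
    """channels maps a channel name to a deque of job costs in arbitrary time units."""
    order = deque(channels.keys())
    while any(channels.values()):
        name = order[0]
        order.rotate(-1)                 # this channel goes to the back for the next pass
        jobs = channels[name]
        budget = timeslice
        # The channel keeps the engine until its timeslice expires or it has no more work;
        # an empty channel gives the engine up immediately, so no cycles are wasted on it.
        while jobs and budget > 0:
            run = min(jobs[0], budget)
            budget -= run
            if run == jobs[0]:
                jobs.popleft()           # job finished
            else:
                jobs[0] -= run           # job preempted, resumes on its next turn
            print(f"{name}: ran {run} units")

round_robin({"vm1": deque([3, 1]), "vm2": deque([5]), "vm3": deque()}, timeslice=2)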

The next part of the Understanding vGPU series will cover memory management on the GRID cards.

Horizon 7.0 Part 5–SSL Certificates

SSL certificates are an important part of all Horizon environments.  They’re used to secure communications from client to server as well as between the various servers in the environment.  Improperly configured or maintained certificate authorities can bring an environment to its knees – if a Connection Server cannot verify the authenticity of a certificate, such as when the revocation list from an offline root CA has expired, communications between servers will break down.  This also impacts client connectivity – by default, the Horizon client will not connect to Connection Servers, Security Servers, or Access Points unless users change the SSL settings.

Most of the certificates that you will need for your environment will need to be minted off of an internal certificate authority.  If you are using a security server to provide external access, you will need to acquire a certificate from a public certificate authority.  If you’re building a test lab or don’t have the budget for a commercial certificate from one of the major certificate providers, you can use a free certificate authority such as StartSSL or a low-cost certificate provider such as NameCheap. 

Prerequisites

Before you can begin creating certificates for your environment, you will need to have a certificate authority infrastructure set up.  Microsoft has a great 2-Tier PKI walkthrough on TechNet. 

Note: If you use the walkthrough to set up your PKI environment, you will need to alter the configuration file to remove the AlternateSignatureAlgorithm=1 line.  This feature does not appear to be supported on vCenter and can cause errors when importing certificates.

Once your environment is set up, you will want to create a template for all certificates used by VMware products.  Derek Seaman has a good walkthrough on creating a custom VMware certificate template. 

Note: Although a custom template isn’t required, I like to create one per Derek’s instructions so all VMware products are using the same template.  If you are unable to do this, you can use the web certificate template for all Horizon certificates.

Creating The Certificate Request

Horizon 7.0 handles certificates on the Windows Server-based components the same way as Horizon 6.x and Horizon 5.3.  Certificates are stored in the Windows certificate store, so the best way of generating certificate requests is to use the certreq.exe certificate tool.  This tool can also be used to submit the request to a local certificate authority and accept and install a certificate after it has been issued.

Certreq.exe can use a custom INF file to create the certificate request.  This INF file contains all of the parameters that the certificate request requires, including the subject, the certificate’s friendly name, whether the private key can be exported, and any subject alternate names that the certificate requires.

If you plan to use Subject Alternate Names on your certificates, I highly recommend reviewing this article from Microsoft.  It goes over how to create a certificate request file for SAN certificates.

A certificate request INF file that you can use as a template is below.  To use this template, copy and save the text below into a text file, change the fields to match your environment, and save it as a .inf file.

;----------------- request.inf -----------------
[Version]

Signature="$Windows NT$"

[NewRequest]

Subject = "CN=<Server Name>, OU=<Department>, O=<Company>, L=<City>, S=<State>, C=<Country>" ; replace the attributes in this line using the examples below
KeySpec = 1
KeyLength = 2048
; Can be 2048, 4096, 8192, or 16384.
; Larger key sizes are more secure, but have
; a greater impact on performance.
Exportable = TRUE
FriendlyName = "vdm"
MachineKeySet = TRUE
SMIME = False
PrivateKeyArchive = FALSE
UserProtected = FALSE
UseExistingKeySet = FALSE
ProviderName = "Microsoft RSA SChannel Cryptographic Provider"
ProviderType = 12
RequestType = PKCS10
KeyUsage = 0xa0

[EnhancedKeyUsageExtension]

OID=1.3.6.1.5.5.7.3.1 ; this is for Server Authentication

[Extensions]

2.5.29.17 = "{text}"
_continue_ = "dns=<DNS Short Name>&"
_continue_ = "dns=<Server FQDN>&"
_continue_ = "dns=<Alternate DNS Name>&"

[RequestAttributes]

CertificateTemplate = VMware-SSL

;-----------------------------------------------

Note: When creating a certificate, the state or province should not be abbreviated.  For instance, if you are in Wisconsin, the full state name should be used in place of the two-letter postal abbreviation.

Note:  Country names should be abbreviated using the ISO 3166 2-character country codes.

The command to generate the certificate request is:

certreq.exe -new <request.inf> <certificaterequest.req>

Submitting the Certificate Request

Once you have a certificate request, it needs to be submitted to the certificate authority.  The process for doing this can vary greatly depending on the environment and/or the third-party certificate provider that you use. 

If your environment allows it, you can use the certreq.exe tool to submit the request and retrieve the newly minted certificate.  The command for doing this is:

certreq.exe -submit -config "<ServerName\CAName>" "<CertificateRequest.req>" "<CertificateResponse.cer>"

If you use this method to submit a certificate, you will need to know the server name and the CA’s canonical name in order to submit the certificate request.

Accepting the Certificate

Once the certificate has been generated, it needs to be imported into the server.  The import command is:

certreq.exe -accept "<CertificateResponse.cer>"

This will import the generated certificate into the Windows Certificate Store.

Using the Certificates

Now that we have these freshly minted certificates, we need to put them to work in the Horizon environment.  There are a couple of ways to go about doing this.

1. If you haven’t installed the Horizon Connection Server components on the server yet, you will get the option to select your certificate during the installation process.  You don’t need to do anything special to set the certificate up.

2. If you have installed the Horizon components, and you are using a self-signed certificate or a certificate signed from a different CA, you will need to change the friendly name of the old certificate and restart the Connection Server or Security Server services.

Horizon requires the Connection Server certificate to have a friendly name value of vdm.  The template that is posted above sets the friendly name of the new certificate to vdm automatically, but this will conflict with any existing certificate that uses the same friendly name.


The steps for changing the friendly name are:

  1. Go to Start –> Run and enter MMC.exe
  2. Go to File –> Add/Remove Snap-in
  3. Select Certificates and click Add
  4. Select Computer Account and click Finish
  5. Click OK
  6. Right click on the old certificate and select Properties
  7. On the General tab, delete the value in the Friendly Name field, or change it to vdm_old
  8. Click OK
  9. Restart the Horizon service on the server


Certificates and Horizon Composer

Unfortunately, Horizon Composer uses a different method of managing certificates.  Although the certificates are still stored in the Windows Certificate store, the process of replacing Composer certificates is a little more involved than just changing the friendly name.

The process for replacing or updating the Composer certificate requires a command prompt and the SVIConfig tool.  SVIConfig is the Composer command line tool.  If you’ve ever had to remove a missing or damaged desktop from your Horizon environment, you’ve used this tool.

The process for replacing the Composer certificate is:

  1. Open a command prompt as Administrator on the Composer server
  2. Change directory to your VMware Horizon Composer installation directory
    Note: The default installation directory is C:\Program Files (x86)\VMware\VMware View Composer
  3. Run the following command: sviconfig.exe -operation=replacecertificate -delete=false
  4. Select the correct certificate from the list of certificates in the Windows Certificate Store
  5. Restart the Composer Service


At this point, all of your certificates should be installed.  If you open up the Horizon Administrator web page, the dashboard should have all green lights.  If you do not see all green lights, you may need to check the health of your certificate environment to ensure that the Horizon servers can check the validity of all certificates and that a CRL hasn’t expired.

If you are using a certificate signed on an internal CA for servers that your end users connect to, you will need to deploy your root and intermediate certificates to each computer.  This can be done through Group Policy for Windows computers or by publishing the certificates in the Active Directory certificate store.  If you’re using Teradici PCoIP Zero Clients, you can deploy the certificates as part of a policy with the management VM.  If you don’t deploy the root and intermediate certificates, users will not be able to connect without disabling certificate checking in the Horizon client.

Access Point Certificates

Unlike Connection Servers and Composer, Horizon Access Point does not run on Windows.  It is a Linux-based virtual appliance, and it requires a different certificate management technique.  The certificate request and private key file for the Access Point should be generated with OpenSSL, and the certificate chain needs to be concatenated into a single file.  The certificate is also installed during, or shortly after, deployment using Chris Halstead’s fling or the appliance REST API. 
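
For readers who want to see what that OpenSSL-style workflow looks like, here is a minimal sketch that generates a private key and a SAN certificate request using Python’s cryptography library instead of the openssl command line.  The hostnames and file names are placeholders, and this is only an illustration of the general approach rather than the exact procedure for the Access Point.

# Hypothetical sketch: generate a key and CSR for the Access Point appliance.
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([
        x509.NameAttribute(NameOID.COMMON_NAME, "view.example.com"),   # placeholder FQDN
        x509.NameAttribute(NameOID.ORGANIZATION_NAME, "Example Corp"),
    ]))
    .add_extension(x509.SubjectAlternativeName([
        x509.DNSName("view.example.com"),
        x509.DNSName("view"),
    ]), critical=False)
    .sign(key, hashes.SHA256())
)

# Write the private key and the request out as PEM files.
with open("accesspoint.key", "wb") as f:
    f.write(key.private_bytes(serialization.Encoding.PEM,
                              serialization.PrivateFormat.TraditionalOpenSSL,
                              serialization.NoEncryption()))
with open("accesspoint.csr", "wb") as f:
    f.write(csr.public_bytes(serialization.Encoding.PEM))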

Because this is an optional component, I won’t go into too much detail on creating and deploying certificates for the Access Point in this section.  I will cover that in more detail in the section on deploying the Access Point appliance later in the series.

Introducing StacksWare

Who uses this application?  How often are they using it?  Are we getting business value out of the licensing we purchased?  Is our licensing right-sized for our usage or environment?  How do we effectively track application usage in the era of non-persistent virtual desktops with per-user application layers?

These are common questions that the business asks when planning for an end-user computing environment, when planning to add licenses, or when maintenance and support are up for renewal.  They aren’t easy questions to answer either, and while IT can usually see where applications are installed, it’s not always easy to see who is using them and  how often they are being used.

The non-persistent virtual desktop world also has its own challenges.  The information required to effectively troubleshoot problems and issues is lost the minute the user logs out of the desktop.

Most tools that cover the application licensing compliance and monitoring space are difficult to set up and use.

I ran into StacksWare in the emerging vendors section at last year’s VMworld, and their goal is to address these challenges by providing an easy-to-use tool that offers real-time insight into application usage.  The company was started as a research project at Stanford University that was sponsored by Sanjay Poonen of VMware, and it has received $2 million in venture capital funding from Greylock and Lightspeed.

StacksWare is a cloud-based application that uses an on-premises virtual appliance to track endpoints, users, and applications.  The on-premises portion integrates with vSphere and Active Directory to retrieve information about the environment, and data is presented in a cloud-based web interface.

It offers an HTML5 interface with graphs that update in real time.  The interface is both fast and attractive, without a lot of clutter or distraction.  It’s also very well organized, and the layout makes it easy to quickly find the information that you’re looking for.


Caption: The StacksWare main menu located on the left side of the interface.


Caption: Application statistics for a user account in my lab.


Caption: Application list and usage statistics.

StacksWare can track both individual application usage and application suite usage.  So rather than having to compile the list of users who are using Word, Excel, and PowerPoint individually, you can track usage by the version of the Microsoft Office Suite that you have installed.  You can also assign license counts, licensing type, and costs associated with licensing.  StacksWare can track this information and generate licensing reports that can be used for compliance or budgetary planning.


Caption: The license management screen

One thing I like about StacksWare’s on-premises appliance is that it is built on Docker.  Upgrading the code on the local appliance is as simple as rebooting it, and it will download and install any updated containers as part of the bootup process.  This simplifies the updating process and ensures that customers get access to the latest code without any major hassles.

One other nice feature of StacksWare is the ability to track sessions and activity within sessions.  StacksWare can show me what applications I’m launching and when I am launching them in my VDI session. 


Caption: Application details for a user session.

For more information on StacksWare, you can check out their website at www.stacksware.com.  You can also see some of their newer features in action over at this YouTube video.  StacksWare will also be at BriForum and VMworld this year, and you can check them out in the vendor solutions area.

Full-Stack Engineering Starts With A Full-Stack Education

A few weeks ago, my colleague Eric Shanks wrote a good piece called “So You Wanna Be A Full Stack Engineer…”  In the post, Eric talks about some of the things you can expect when you get into a full-stack engineering role.

You may be wondering what exactly a full-stack engineer is.  It’s definitely a hot buzzword.  But what does someone in a full-stack engineering role do?  What skills do they need to have? 

The closest definition I can find for someone who isn’t strictly a developer is “an engineer who is responsible for an entire application stack including databases, application servers, and hardware.”  This is an overly broad definition, and it may include things like load balancers, networking, and storage.  It can also include software development.  The specifics of the role will vary by environment – one company’s full-stack engineer may be more of an architect while another may have a hands-on role in managing all aspects of an application or environment.  The size and scope of the environment play a role too – a sysadmin or developer in a smaller company may be a full-stack engineer by default because the team is small.

However that role shakes out, Eric’s post gives a good idea of what someone in a full-stack role can expect.

Sounds fun, doesn’t it?  But how do you get there? 

In order to be a successful full-stack engineer, you need to get a full-stack education.  A full-stack education is a broad, but shallow, education in multiple areas of IT.    Someone in a full-stack engineering role needs to be able to communicate effectively with subject-matter experts in their environments, and that requires knowing the fundamentals of each area and how to identify, isolate, and troubleshoot issues. 

A full-stack engineer would be expected to have some knowledge of the following areas:

  • Virtualization: Understanding the concepts of a hypervisor, how it interacts with networking and storage, and how to troubleshoot using some basic tools for the platform
  • Storage: Understanding the basic storage concepts for local and remote (SAN/NAS) storage and how it integrates with the environment.  How to troubleshoot basic storage issues.
  • Network: Understand the fundamentals of TCP/IP, how the network interacts with the environment, and the components that exist on the network.  How to identify different types of networking issues using various built-in tools like ping, traceroute, and telnet.
    Networking, in particular, is a very broad topic that has many different layers where problems can occur.  Large enterprise networks can also be very complex when VLANs, routing, load balancers and firewalls are considered.  A full-stack engineer wouldn’t need to be an expert on all of the different technologies that make up the network, but one should have enough networking experience to identify and isolate where potential problems can occur so they can communicate with the various networking subject matter experts to resolve the issue.
  • Databases: Understand how the selected database technology works, how to query the database, and how to troubleshoot performance issues using built-in or 3rd-party database tools.  The database skillset can be domain-specific, as there are different types of databases (SQL vs. No-SQL vs. column-based vs. proprietary), differences in the query languages between SQL database vendors (Oracle, MS SQL Server, and PostgreSQL), and differences in how applications interact with the database.
  • Application Servers: Understand how to deploy, configure, and troubleshoot the application servers.  Two of the most important things to know are the location of the log files and how the application interacts with the database (i.e. can the application automatically reconnect to the database, or does it need to be restarted).  This tier can also include middleware that sits between the application servers and the database.  The skills required for managing the middleware tier will depend greatly on the application, the underlying operating system, and how it is designed.
  • Operating Systems: Understand the basics of the operating system, where to find the logs, and how to measure performance of the system.
  • Programming: Understand the basics of software development and the tools that developers use for maintaining code and tracking bugs.  Not all full-stack engineers develop code, but one might need to check out production-ready code to deploy it into development and production environments.  Programming skills are also useful for automating tasks, and this can lead to faster operations and fewer mistakes.

Getting A Full-Stack Education

So how does one get exposure and experience in all of the areas above?

There is no magic bullet, and much of the knowledge required for a full-stack engineering role is gained through time and experience.  There are a few ways you can help yourself, though.

  • Get a theory-based education: Get into an education program that covers the fundamental IT skills from a theory level.  Many IT degree programs are hands-on with today’s hot technologies, and they may not cover the business and technical theory that will be valuable across different technology silos.  Find ways to apply that theory outside of the classroom with side projects or in the lab.
  • Create learning opportunities: Find ways to get exposure to new technology.  This can be anything from building a home lab and learning technology hands-on to shadowing co-workers when they perform maintenance.
  • Be Curious: Don’t be afraid to ask questions, pick up a book, or read a blog on something you are interested in and want to learn.

Horizon 7.0 Part 4–Active Directory Design Considerations

Traditionally, virtual desktops run Windows, and the servers that provide the virtual desktop infrastructure services also run on Windows.  Because of the heavy reliance on Windows, Active Directory plays a huge role in Horizon environments.  Even Linux desktops, which are a new option, can be configured for Active Directory and utilize the Horizon user’s AD credentials for Single Sign-On. 

When you’re planning a new Horizon deployment, or re-architecting an existing deployment, the design of your Active Directory environment is a critical element that needs to be considered.  How you organize your virtual desktops, templates, and security groups impacts Group Policy, helpdesk delegation rights, and Horizon Composer.

Some Active Directory objects need to be configured before any Horizon View components are installed.  Some of these objects require special configuration either in Active Directory or inside vCenter.  The Active Directory objects that need to be set up are:

  • An organizational unit structure for Horizon Desktops and desktop templates 
  • Basic Group Policy Objects for the different organizational units
  • An organizational unit for Microsoft RDS servers if published apps or RDSH desktops are deployed

Optionally, you may want to set up an organizational unit for any security groups that might be used for entitling access to the Horizon View desktop pools.  This can be useful for organizing those groups and/or delegating access to Help Desk or other staff who don’t need Account Operator or Domain Administrator rights.

Creating an Organizational Unit for Horizon Desktops

The first thing that we need to do to prepare Active Directory for a Horizon deployment is to create an organizational unit structure for Horizon View desktops.  This OU structure will hold all of the desktops created and used by Horizon View.  A separate OU structure within your Active Directory environment is important because you will want to apply different group policies to your Horizon desktops than to your regular desktops.  There are also specific permissions that you will need to delegate to the Horizon Composer and/or Instant Clones Administrator service account.

There are a lot of ways that you can set up an Active Directory OU structure for Horizon.  My preferred organizational method is described below.


View Desktops is a top-level OU (i.e. one that sits in the root of the domain).  I like to set up this OU for two reasons.  One is that it completely segregates my VDI desktops from my non-VDI desktops and servers.  The other is that it gives me one place to apply group policy that should apply to all VDI desktops.

I create three child OUs under the View Desktops OU to separate persistent desktops, non-persistent desktops, and desktop templates.  This allows me to apply different group policies to the different types of desktops.  For instance, you may want to disable Windows Updates and use Persona Management on non-persistent desktops but allow Windows Updates on the desktop templates.

You don’t need to create all three OUs.  If your environment consists entirely of Persistent desktops, you don’t need an OU for non-persistent desktops.  The opposite is true as well.

Finally, I tend to create use-case-specific OUs for pools that require additional Group Policy options above and beyond the top-level OU.  These grandchild OUs are completely optional.  If there is no need to set any custom policy for a specific use case, then they don’t need to be created.  However, if a grandchild OU is needed, a separate desktop pool will need to be created for it because each desktop pool is assigned to a single OU.  Adding additional pools can add management overhead to a VDI environment.

I’ve found that there is less of a need for these use-case specific OUs as I’ve learned more about modern UEM tools like RES and VMware User Environment Manager.  These tools can be a scalpel that allow administrators to dynamically apply context-aware policies and settings to specific users or groups without having to create additional pools or OUs for Group Policy configurations.
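
If you build environments often, the OU structure above can also be scripted.  The snippet below is a minimal sketch using the ldap3 Python library; the domain controller, credentials, domain name, and OU names are placeholders that should be adapted to your environment.

# Hypothetical sketch: create the Horizon OU structure with ldap3.
from ldap3 import Server, Connection, NTLM

server = Server("dc01.example.com")                        # placeholder domain controller
conn = Connection(server, user="EXAMPLE\\svc-ou-admin",    # placeholder account with rights to create OUs
                  password="ChangeMe-123!",
                  authentication=NTLM, auto_bind=True)

base = "DC=example,DC=com"
conn.add(f"OU=View Desktops,{base}", "organizationalUnit")
for child in ("Persistent Desktops", "Non-Persistent Desktops", "Desktop Templates"):
    conn.add(f"OU={child},OU=View Desktops,{base}", "organizationalUnit")

conn.unbind()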

Creating an Organizational Unit for RDS Servers

Horizon 6.0 added PCoIP support for multi-user desktops running on Windows Server with the Remote Desktop Session Host role.  These new abilities also added support for remote application publishing.

RDSH servers need to be handled differently than virtual desktops.  They’re managed differently, and some features, such as Persona Management, are not available on RDS servers.

If application remoting or multi-user desktops are going to be deployed, an organizational unit for RDS servers should be created underneath your base servers organizational unit.  Since RDSH servers are often heavily locked down through Group Policy, I also recommend creating an RDSH Maintenance Mode OU.  This OU is where RDSH servers can be placed when administrators need to remove restrictive group policies, such as those blocking the command prompt or MSI installers, in order to perform maintenance on the server.

Horizon Group Policy Objects

Horizon contains a number of custom group policy objects that can be used for configuring features like Persona Management and optimizing the PCoIP protocol.  The number of Group Policy objects and templates is the same as what was available in Horizon 6.

Unfortunately, most of the Group Policy templates are distributed as ADM files.  There are a number of drawbacks to ADM files in modern Active Directory environments.  The main one is that you cannot store the Group Policy files in the Central Store.

If you plan on using the Group Policy templates, it’s a good idea to convert them into the ADMX format.  I had previously written about converting the View Group Policy templates into the ADMX format and the reasons for converting here.

Horizon Service Accounts

Horizon requires a service account for accessing vCenter to provision new virtual machines.  If Horizon Composer or Instant Clones are used, a second service account is needed to create and manage computer accounts in Active Directory for the clones.  I will cover setting up those accounts in a future section.

In the next section, I’ll cover SSL certificates for Horizon servers.

Horizon 7.0 Part 3–Desktop Design Considerations

Whether it is Horizon, XenDesktop, or a cloud-based Desktop-as-a-Service provider, the implementation of a virtual desktop and/or published applications environment requires a significant time investment during the design phase.  If care isn’t taken, the wrong design could be put into production, and the costs of fixing it could easily outweigh the benefits of implementing the solution.

So before we move into installing the actual components for a Horizon environment, we’ll spend the next two posts on design considerations.  This post, Part 3, will discuss design considerations for the Horizon virtual desktops, and Part 4 will discuss design considerations for Active Directory.

Virtual desktop environments are all about the end user and what they need.  So before you go shopping for storage arrays and servers, you need to start looking at your desktops.

There are four types of desktops in Horizon 7:

  • Full Clone Desktops – Each desktop is a full virtual machine deployed from a template and managed as an independent virtual machine.
  • Linked Clone Desktops – A linked clone is a desktop that shares its virtual disks with a central replica desktop, and any changes are written to its own delta disk.  Linked clones can be recomposed when the base template is updated or refreshed to a known good state at periodic intervals.  This feature requires Horizon Composer.
  • Instant Clone Desktops – Instant Clone desktops are new to Horizon 7, and they are built off of the VMfork technology introduced with vSphere 6.0.  Instant Clones are essentially a rapid clone of a running virtual machine with extremely fast customization.
  • Remote Desktop Session Host Pools – Horizon 6 expanded RDSH support to include PCoIP support and application remoting.  When RDSH desktops and/or application remoting are used, multiple users are logged into servers that host user sessions.  This feature requires Windows Server 2008 R2 or Server 2012 R2 with the RDSH features enabled.

There are two desktop assignment types for desktop pools:

  • Dedicated Assignment – users are assigned to a particular desktop during their first login, and they will be logged into this desktop on all subsequent logins.
  • Floating Assignment – users are temporarily assigned to a desktop on each login.  On logout, the desktop will be available for other users to log into.  A user may not get the same desktop on each login.

Understanding Use Cases

When you design a virtual desktop environment, you have to design around the use cases.  Use cases are the users, applications, peripherals, and how they are used to complete a task, and they define many of the requirements in the environment.  The requirements of the applications help determine the type of desktops that are used and how they are assigned to users.

Unless you have some overriding constraints or requirements imposed upon your virtual desktop project, the desktop design choices that you make will influence and/or drive your subsequent purchases.   For instance, if you’re building virtual desktops to support CAD users, blade servers aren’t an option because high-end graphics cards will be needed, and if you want/need full clone desktops, you won’t invest in a storage array that doesn’t offer deduplication.

Other factors that may impact the use cases or the desktop design decisions include existing management tools, security policies, and other policies.

Once you have determined your use cases and the impacts that the use cases have on desktop design, you’ll be able to put together a design document with the following items:

  • Number of linked clone base images and/or full clone templates
  • Number and type of desktop pools
  • Number of desktops per pool
  • Number of Connection Servers needed
  • The remote access delivery method

If you’re following the methodology that VMware uses in their design exams, your desktop design document should provide you with your conceptual and logical designs.

The conceptual and logical designs, built on details from the use cases, will influence the infrastructure design.  This phase would cover the physical hardware to run the virtual desktop environment, the network layer, storage fabric, and other infrastructure services such as antivirus.

The desktop design document will have a heavy influence on the decisions that are made when selecting components to implement Horizon 7.  The components that are selected need to support and enable the type of desktop environment that you want to run.

In part four, we will cover Active Directory design for Horizon environments.

Horizon 7.0 Part 2–Horizon Requirements

In order to deliver virtual desktops to end users, a Horizon environment requires multiple components working together in concert.  Most of the components that Horizon relies upon are VMware products, but some of the components, such as the database and Active Directory, are 3rd-party products.

The Basics

The smallest Horizon environment only requires four components to serve virtual desktops to end users: ESXi, vCenter, a View Connection Server, and Active Directory.  The hardware for this type of environment doesn’t need to be anything special, and one server with direct attached storage and enough RAM could support a few users.

All Horizon environments, from the simple one above to a complex multi-site Cloud Pod environment, are built on this foundation.  The core of this foundation is the View Connection Server.

Connection Servers are the brokers for the environment.  They handle desktop provisioning, user authentication, and access.  They also manage connections to multi-user desktops and published applications.

There are four types of Connection Server roles, and all four roles have the same requirements.  These roles are:

  • Standard Connection Server – The first Connection Server installed in the environment.
  • Replica Connection Server – Additional Connection Servers that replicate from the standard connection server
  • Security Server – A stripped down version of the Connection Server, designed to sit in the DMZ and proxy traffic to the Connection Servers.  A Security Server must be “paired” with a Connection Server.
  • Enrollment Server – A new role introduced in Horizon 7.  The Enrollment Server is used to facilitate the new True SSO feature.

The requirements for a Connection Server are:

  • 1 CPU, 4 vCPUs recommended
  • Minimum 4GB RAM, 10GB recommended if 50 or more users are connecting
  • Windows Server 2008 R2 or Windows Server 2012 R2
  • Joined to an Active Directory domain
  • Static IP Address

Note: The requirements for the  Security Server  and Enrollment Server are the same as the requirements for Connection Server.  Security Servers do not need to be joined to an Active Directory domain.

Aside from the latest version of the View Connection Server, the other core requirements are:

ESXi – ESXi is required for hosting the virtual machines.  The versions of ESXi that are supported by Horizon 7 can be found in the VMware compatibility matrix.  ESXi 5.0 Update 1 and newer, excluding ESXi 5.5 vanilla, are currently supported.  However, ESXi 6.0 Update 1 and newer are required for Instant Clones.

vCenter Server – The versions of vCenter that are supported by Horizon 7 can be found in the VMware compatibility matrix.  vCenter Server 5.0 Update 1 and newer, excluding vCenter 5.5 vanilla, are currently supported, and vCenter 6.0 Update 1 and newer are required to support Instant Clones.  The vCenter Server Appliance and the Windows vCenter Server application are supported.

Active Directory – An Active Directory environment is required to handle user authentication to virtual desktops, and the domain must be set to at least the Server 2008 functional level.  Group Policy is used for configuring parts of the environment, including desktop settings, roaming profiles, user data redirection, UEM, and the remoting protocol.   

Advanced Features

Horizon View has a lot of features, and many of those features require additional components to take advantage of them.  These components add options like secure remote access, profile management, and linked-clone desktops.

Secure Remote Access – There are a couple of options for providing secure remote access to virtual desktops and published applications.  Traditionally, remote access has been provided by the Horizon Security Server.  The Security Server is a stripped down version of the connection server that is designed to be deployed into a DMZ.  It also requires each server to be paired with a Connection Server.

There are two other remote access options.  The first is the Horizon Access Point.  The access point comes from the Horizon Air platform, and it was introduced in the on-premises solution in Horizon 6.2.2.  The Access Point is a hardened Linux appliance that is designed to be managed like a cloud appliance, and it serves the same function as the Security Server.  Unlike the Security Server, the Access Point does not need to be paired with a Connection Server.

Both the Security Server and the Access Point can be load balanced for high availability.

The other remote access option is the Horizon proxy built into the F5 APM module.  The APM module combines load balancing and rule-based secure remote access.  It can also replace the portal feature in vIDM.

Linked-Clone Desktops – Linked Clones are virtual machines that share a set of parent disks.  They are ideal for some virtual desktop environments because they can provide a large number of desktops without having to invest in new storage technologies, and they can reduce the amount of work that IT needs to do to maintain the environment.  Linked Clones are enabled by Horizon Composer.

The requirements for Horizon Composer are:

  • 2 vCPUs, 4 vCPUs recommended 
  • 4 GB RAM, 8GB required for deployments of 50 or more desktops
  • Windows Server 2008 R2 or Server 2012 R2
  • Database server – supported databases include Oracle and Microsoft SQL Server.  Please check the compatibility matrix for specific versions and service packs.
  • Static IP Address

Horizon Composer also requires a database.  The database requirements can be found in the VMware Product Interoperability Matrix.  The currently supported databases include SQL Server 2014 (RTM and SP1), SQL Server 2012 (SP2), and Oracle 12c Release 1.

Networking Requirements – Horizon requires a number of ports to be opened to allow the various components of the infrastructure to communicate.  The best source for showing all of the ports required by the various components is the VMware Horizon 7 Network Ports diagram.  It’s available in PDF format from here.

Other Components:  The Horizon Suite includes a number of tools to provide administrators with a full-fledged ecosystem for managing their virtual end-user computing environments.  These tools are App Volumes, User Environment Manager, VMware Identity Manager (vIDM), and vRealize Operations for Horizon.  The requirements for these tools will be covered in their respective sections.