VMware Horizon and Horizon Cloud Enhancements – Part 1

This morning, VMware announced enhancements to both the on-premises Horizon Suite and Horizon Cloud product sets.  Although there are a lot of additions to all products in the Suite, the VMware blog post did not go too in-depth into many of the new features that you’ll be seeing in the upcoming releases.

VMware Horizon 7.5

Let’s start with the biggest news in the blog post – the announcement of Horizon 7.5.  Horizon 7.5 brings several new, long-awaited, features with it.  Some of these features are:

  1. Support for Horizon on VMC (VMware on AWS)
  2. The “Just-in-Time” Management Platform (JMP)
  3. Horizon 7 Extended Service Branch (ESB)
  4. Instant Clone improvements, including support for the new vSphere 6.7 Instant Clone APIs
  5. Support for IPv4/IPv6 Mixed-Mode Operations
  6. Cloud-Pod Architecture support for 200K Sessions
  7. Support for Windows 10 Virtualization-Based Security (VBS) and vTPM on Full Clone Desktops
  8. RDSH Host-based GPO Support for managing protocol settings

I’m not going to touch on all of these items.  I think the first four are the most important for this portion of the suite.

Horizon on VMC

Horizon on VMC is a welcome addition to the Horizon portfolio.  Unlike Citrix, the traditional VMware Horizon product has not had a good cloud story because it has been tightly coupled to the VMware SDDC stack.  By enabling VMC support for Horizon, customers can now run virtual desktops in AWS, or utilize VMC as a disaster recovery option for Horizon environments.

Full clone desktops will be the only desktop type supported in the initial release of Horizon on VMC.  Instant Clones will be coming in a future release, but some additional development work will be required since Horizon will not have the same access to vCenter in VMC as it has in on-premises environments.  I’m also hearing that Linked Clones and Horizon Composer will not be supported in VMC.

The initial release of Horizon on VMC will only support core Horizon, the Unified Access Gateway, and VMware Identity Manager.  Other components of the Horizon Suite, such as UEM, vRealize Operations, and App Volumes have not been certified yet (although there should be nothing stopping UEM from working in Horizon on VMC because it doesn’t rely on any vSphere components).  Security Server, Persona Management, and ThinApp will not be supported.

Horizon Extended Service Branches

Under the current release cadence, VMware targets one Horizon 7 release per quarter.  The current support policy for Horizon states that a release only continues to receive bug fixes and security patches until the next point release has been available for at least 60 days.  Let’s break that down to make it a little easier to understand.

  1. VMware will support any version of Horizon 7.x for the lifecycle of the product.
  2. If you are currently running the latest Horizon point release (ex. Horizon 7.4), and you find a critical bug/security issue, VMware will issue a hot patch to fix it for that version.
  3. If you are running Horizon 7.4, and Horizon 7.5 has been out for less than 60 days when you find a critical bug/security issue, VMware will issue a hot patch to fix it for that version.
  4. If you are running Horizon 7.4, and Horizon 7.5 has been out for more than 60 days when you find a critical bug/security issue, the fix for the bug will be applied to Horizon 7.5 or later, and you will need to upgrade to receive the fix.

In larger environments, Horizon upgrades can be non-trivial efforts that enterprises may not undertake every quarter.  There are also some verticals, such as healthcare, where core business applications are certified against specific versions of a product, and upgrading or moving away from that certified version can impact support or support costs for key business applications.

With Horizon 7.5, VMware is introducing a long-term support bundle for the Horizon Suite.  This bundle will be called the Extended Service Branch (ESB), and it will contain Horizon 7, App Volumes, User Environment Manager, and Unified Access Gateway.  The ESB will have two years of active support from its release date, during which it will receive hot fixes, and each ESB will receive three service packs containing critical bug and security fixes and support for new Windows 10 releases.  A new ESB will be released approximately every twelve months.

Each ESB branch will support approximately 3-4 Windows 10 builds, including any recent LTSC builds.  That means the Horizon 7.5 ESB release will support the Windows 10 1709, 1803, 1809, and 1809 LTSC builds.

This packaging is nice for enterprise organizations that want to limit the number of Horizon upgrades they apply in a year or that require long-term support for core business applications.  I see this being popular in healthcare environments.

Extended Service Branches do not require any additional licensing, and customers will have the option to adopt either the current release cadence or the extended service branch when implementing their environment.

JMP

The Just-in-Time Management Platform, or JMP, is a new component of the Horizon Suite.  The intention is to bring together Horizon, Active Directory, App Volumes, and User Environment Manager to provide a single portal for provisioning instant clone desktops, applications, and policies to users.  JMP also brings a new, HTML5 interface to Horizon.

I’m a bit torn on the concept.  I like the idea behind JMP and providing a portal for enabling user self-provisioning.  But I’m not sure building that portal into Horizon is the right place for it.  A lot of organizations use Active Directory Groups as their management layer for Horizon Desktop Pools and App Volumes.  There is a good reason for doing it this way.  It’s easy to audit who has desktop or application access, and there are a number of ways to easily generate reports on Active Directory Group membership.

Many customers that I talk to are also attempting to standardize their IT processes around an ITSM platform that includes a Service Catalog.  The most common one I run across is ServiceNow.  The customers that I’ve talked to that want to implement self-service provisioning of virtual desktops and applications often want to do it in the context of their service catalog and approval workflows.

It’s not clear right now if JMP will include an API that will allow customers to integrate it with an existing service catalog or service desk tool.  If it does include an API, then I see it being an important part of automated, self-service end-user computing solutions.  If it doesn’t, then it will likely end up as yet another user interface, and the development cycles would have been better spent on improving the Horizon and App Volumes APIs.

Not every customer will be utilizing a service catalog, ITSM tool and orchestration. For those customers, JMP could be an important way to streamline IT operations around virtual desktops and applications and provide them some benefits of automation.

Instant Clone Enhancements

The release of vSphere 6.7 brought with it new Instant Clone APIs.  The new APIs bring features to VMFork that seem new to pure vSphere Admins but have been available to Horizon for some time, such as vMotion.  The new APIs are why Horizon 7.4 does not support vSphere 6.7 for Instant Clone desktops.

Horizon 7.5 will support the new vSphere 6.7 Instant Clone APIs.  It is also backward compatible with the existing vSphere 6.0 and 6.5 Instant Clone APIs.

There are some other enhancements coming to Instant Clones as well.  Instant Clones will now support vSGA and Soft3D.  These settings can be configured in the parent image.  And if you’re an NVIDIA vGPU customer, more than one vGPU profile will be supported per cluster when GPU Consolidation is turned on.  NVIDIA GRID can only run a single profile per discrete GPU, so this feature will be great for customers that have Maxwell-series boards, especially the Tesla M10 high-density board that has four discrete GPUs.  However, I’m not sure how beneficial it will be for customers that adopt Pascal-series or Volta-series Tesla cards, as these only have a single discrete GPU per board.  There may be some additional design considerations that need to be worked out.

Finally, there is one new Instant Clone feature for VSAN customers.  Before I explain the feature, I need to explain how Horizon utilizes VMFork and Instant Clone technology.  Horizon doesn’t just utilize VMFork – it adds its own layers of management on top of it to overcome the limitations of the first-generation technology.  This is how Horizon was able to support Instant Clone vMotion when the standard VMFork could not.

This additional layer of management also allows VMware to do other cool things with Horizon Instant Clones without having to make major changes to the underlying platform.  One of the new features that is coming in Horizon 7.5 for VSAN customers is the ability to use Instant Clones across cluster boundaries.

For those who aren’t familiar with VSAN, it is VMware’s software-defined storage product.  The storage boundary for VSAN aligns with the ESXi cluster, so I’m not able to stretch a VSAN datastore between vSphere clusters.  So if I’m running a large EUC environment using VSAN, I may need multiple clusters to meet the needs of my user base.  And unlike 3-tier storage, I can’t share VSAN datastores between clusters.  Under the current setup in Horizon 7.4, I would need to have a copy of my gold/master/parent image in each cluster.

Due to some changes made in Horizon 7.5, I can now share an Instant Clone gold/master/parent image across VSAN clusters without having to make a copy of it in each cluster first.  I don’t have too many specific details on how this will work, but it could significantly reduce the management burden of large, multi-cluster Horizon environments on VSAN.

Blast Extreme Enhancements

The addition of Blast Extreme Adaptive Transport, or BEAT as it’s commonly known, provided an enhanced session remoting experience when using Blast Extreme.  It also required users and administrators to configure which transport they wanted to use in the client, and this could lead to less than optimal user experience for users who frequently moved between locations with good and bad connectivity.

Horizon 7.5 adds some automation and intelligence to BEAT with a feature called Blast Extreme Network Intelligence.  NI will evaluate network conditions on the client side and automatically choose the correct Blast Extreme transport to use.  Users will no longer have to make that choice or make changes in the client.  As a result, the Excellent, Typical, and Poor options are being removed from future versions of the Horizon client.

Another major enhancement coming to Blast Extreme is USB Redirection Port Consolidation.  Currently, USB redirection utilizes a side channel that requires an additional port to be opened in any external-facing firewalls.  Starting in Horizon 7.5, customers will have the option to utilize USB redirection over ports 443/8443 instead of the side channel.

Performance Tracker

The last item I want to cover in this post is Performance Tracker.  Performance Tracker, a tool that Pat Lee demonstrated at VMworld last year, presents session performance metrics to end users.  It supports both Blast Extreme and PCoIP, and it provides information such as session latency, frames per second, and the Blast Extreme transport type, and it can help with troubleshooting connectivity issues between the Horizon Agent and the Horizon Client.

Part 2

As you can see, there is a lot of new stuff in Horizon 7.5.  We’ve hit 1900 words in this post just talking about what’s new in Horizon.  We haven’t touched on client improvements, Horizon Cloud, App Volumes, UEM or Workspace One Intelligence yet.  So we’ll have to break those announcements into another post that will be coming in the next day or two.

Horizon 6 and Profile Management? It’s Not the Big Deal That Some Are Making It Out To be…

When VMware announced Horizon 6 last month, there was a lot of excitement because the Horizon Suite was finally beefing up their Remote Desktop Session Host component to support PCoIP and application publishing.  Shortly after that announcement, word leaked out that Persona Management would not be available for RDSH sessions and published applications.

There seems to be this big misconception that without Persona Management, there will be no way to manage user settings, and companies that wish to overcome this shortcoming will need to utilize a 3rd-party product for profile management.  A lot of that misconception revolves around the idea that there should only be one user profile and set of application settings that apply to the user regardless of what platform the user logs in on.

Disclosure: Horizon 6 is still in beta.  I am not a member of the beta testing team, and I have not used Horizon 6.

There are two arguments being made about the lack of unified profile management in Horizon 6.  The first argument is that without some sort of profile management, users won’t be able to save their application settings.  The article quotes a systems administrator who says, “If you rely on linked clones and want to use [RDSH] published apps, it won’t remember your settings.”

This is not correct.  When setting up an application publishing environment using RDSH, a separate user profile is created for each user on the server, or servers, where the applications are hosted.  That user profile is separate from the user profile that is used when logging into the desktop.  In order to ensure that those settings follow the user if they log into a different server, technologies like roaming profiles and folder redirection are used to store user settings in a central network location.

This ability isn’t an add-on feature, and the ability to do roaming profiles and folder redirection are included with Active Directory as options that can be configured using Group Policy.  Active Directory is a requirement for Horizon environments, so the ability to save and roam settings exists without having to invest in additional products.
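
As a quick, hypothetical illustration (the user, server, and share names below are placeholders), pointing a user at a roaming profile share can even be done from PowerShell in a line or two; the RDSH-specific profile path is normally set through Group Policy or the user account’s Remote Desktop Services tab instead:

# Hypothetical example - point a user at a roaming profile share.
# Requires the ActiveDirectory module (RSAT); all names are placeholders.
Import-Module ActiveDirectory
Set-ADUser -Identity "jdoe" -ProfilePath "\\fileserver\profiles$\jdoe"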

The other argument revolves around the idea of a “single, unitary profile” that will be used in both RDSH sessions for application publishing and virtual desktops.  There are a couple of reasons why this should not be considered a holdup for deploying Horizon 6:

  1. Microsoft’s best practices for roaming profiles (2003 version, 2008 R2 version, RDS Blog) do not recommend using the same profile across multiple platforms, and Microsoft recommends using a different roaming profile for each RDSH farm or platform.
  2. Citrix’s best practices for User Profile Manager, the application that the article above references as providing a single profile across application publishing and virtual desktops, do not recommend using the same profile for multiple platforms or across different versions of Windows.

There are a couple of reasons for this.  The main reason is that there are settings in desktop profiles that don’t apply to servers and vice versa or across different generations of Windows.  There is also the possibility of corruption if a profile is used in multiple places at the same time, and one server can easily overwrite changes to the profile.

Although there may be some cases where application settings may need to roam between an RDSH session and a virtual desktop session, I haven’t encountered any cases where that would be important.  That doesn’t mean those cases don’t exist, but I don’t see a scenario where this would hold up adopting a platform like the article above suggests.

Horizon View 5.3 Part 15 – Horizon View, SSL, and You

Although they may be confusing and require a lot of extra work to set up, SSL certificates play a key role in any VMware environment.  The purpose of the certificates is to secure communications between the clients and the servers as well as between the servers themselves.

Certificates are needed on all of the major components of Horizon View and vCenter.  The certificates that are installed on vCenter Server, Composer, and the Connection Servers can come from an internal certificate authority.  If you are running Security Servers, or if you have a Bring-Your-Own-Device environment, you’ll need to look at purchasing certificates from a public certificate authority.

Setting up a certificate authority is beyond the scope of this article.  If you are interested in learning more about setting up your own public key infrastructure, I’ll refer you to Derek Seaman, the VMware Community’s own SSL guy.  He recently released a three-part series about setting up a two-tier public key infrastructure on Windows Server 2012 R2.  I’ve also found this instruction set to be a good guide for setting up a public key infrastructure on Windows Server 2008 R2.

Improved Certificate Handling

If you’ve worked with previous versions of Horizon View, you know that managing SSL certificates was a pain.  In those versions, the certificates had to be placed into a Java keystore file.  This changed with View 5.1, and the certificates are now stored in the Windows Certificate Store, which has greatly improved the process for managing them.

Where to Get Certificates

Certificates can be minted on internal certificate authorities or public certificate authorities.  An internal certificate authority exists inside your environment and is something that you manage.  Certificates from these authorities won’t be trusted unless you deploy the root and intermediate certificates to the clients that are connecting.

The certificates used on a security server should be obtained from a commercial certificate vendor such as Thawte, GoDaddy, or Comodo.  One option that I like to use in my lab, and that I’ve used in the past when I didn’t have a budget for SSL certificates, is StartSSL.  They provide free basic SSL certificates.

Generating Certificate Requests for Horizon View Servers

VMware’s method for generating certificates for Horizon View is different from the one used for vCenter.  When setting up certificate requests for vCenter, you need to use OpenSSL to generate the requests.  Horizon View, on the other hand, uses the built-in Windows certificate tools to generate the certificate requests.  VMware has a good PDF document that walks through generating certificates for Horizon View, and an online version of this article also exists.

Before we can generate a certificate for each server, we need to set up two things.  The first is a certificate template that can be used to issue certificates in our public key infrastructure.  I’m not an expert on public key infrastructure, so I’ll defer to the expert again.

The other thing that we need to create is a CSR configuration file.  This file will be used with certreq.exe to create the certificate signing request.  The template for this file, which is included in the VMware guide, is below.

;----------------- request.inf -----------------
[Version]

Signature="$Windows NT$"

[NewRequest]

Subject = "CN=View_Server_FQDN, OU=Organizational_Unit_Name, O=Organization_Name, L=City_Name, S=State_Name, C=Country_Name" ; replace attributes in this line using example below
KeySpec = 1
KeyLength = 2048
; Can be 2048, 4096, 8192, or 16384.
; Larger key sizes are more secure, but have
; a greater impact on performance.
Exportable = TRUE
FriendlyName = "vdm"
MachineKeySet = TRUE
SMIME = False
PrivateKeyArchive = FALSE
UserProtected = FALSE
UseExistingKeySet = FALSE
ProviderName = "Microsoft RSA SChannel Cryptographic Provider"
ProviderType = 12
RequestType = PKCS10
KeyUsage = 0xa0

[EnhancedKeyUsageExtension]

OID=1.3.6.1.5.5.7.3.1 ; this is for Server Authentication

[RequestAttributes]

; SAN="dns=FQDN_you_require&dns=other_FQDN_you_require"
;-----------------------------------------------

The subject line contains some fields that we need to fill in with information from our environment.  These fields are:

  • CN=server_fqdn: This part of the Subject string should contain the fully qualified domain name that users will use when connecting to the server. An example for an internal, non-Internet-facing server is internalbroker.internaldomain.local.  An internet-facing server should use the external web address, such as view.externaldomain.com.
  • OU=organizational unit: I normally fill in the responsible department, so it would be IT.
  • O=Organization: The name of your company
  • L=City: The city your office is based in
  • S=State: The name of the State that you’re located in.  You should spell out the name of the state since some CAs will not accept abbreviated names.
  • C=Country: The two-letter ISO country code.  The United States, for example, is US.
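
Putting those fields together, a completed Subject line for a hypothetical internet-facing Connection Server might look like the example below (every value is a placeholder).  If you need Subject Alternative Names, remember to remove the leading semicolon from the SAN line in the template, since the semicolon comments it out.

Subject = "CN=view.externaldomain.com, OU=IT, O=Example Corporation, L=Appleton, S=Wisconsin, C=US"
SAN="dns=view.externaldomain.com&dns=view"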

A CSR configuration file will need to be created for each server with a Horizon View Component installed.  vCenter will also need certificates, but there are different procedures for creating and installing vCenter certificates depending on whether you are using the Windows application or the vCenter appliance.

Creating the certificate signing request requires the certreq.exe command-line tool.  These steps will need to be performed on each server that will receive a certificate.  The steps for generating the request are:

  1. Open a command prompt as an Administrator on your View server. Note: This command should be run from the server where you want the certificate to reside.  It can be done from another machine, but it makes it more complicated.
  2. Navigate to the folder where you stored the request.inf file.
  3. Run the following command: certreq.exe -new request.inf server_name_certreq.txt

After the certificate request has been created, it needs to be submitted to the certificate authority in order to have the SSL certificate generated.  The actual process for submitting the CSR is beyond the scope of this article since this process can vary in each environment and with each commercial vendor.

Importing the Certificate

Once the certificate has been generated, it needs to be imported into the server.  The import command is:

certreq.exe -accept certname.cer

This will import the generated certificate into the Windows Certificate Store.
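
If you want to double-check that the certificate landed in the local machine store before moving on, a quick PowerShell one-liner will list what’s there along with the friendly names:

Get-ChildItem Cert:\LocalMachine\My | Select-Object Subject, FriendlyName, NotAfter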

Using the Certificates

Now that we have these freshly minted certificates, we need to put them to work in the View environment.  There are a couple of ways to go about doing this.

1. If you haven’t installed the Horizon View components on the server yet, you will get the option to select your certificate during the installation process.  You don’t need to do anything special to set the certificate up.

2. If you have installed the Horizon View components, and you are using a self-signed certificate or a certificate signed from a different CA, you will need to change the friendly name of the old certificate and restart the Connection Server or Security Server services.

Horizon View requires the certificate to have a friendly name value of vdm.  The template that is posted above sets the friendly name of the new certificate to vdm automatically, but this will conflict with any existing certificates. 

[Image 1: Friendly Name]

The steps for changing the friendly name are:

  1. Go to Start –> Run and enter MMC.exe
  2. Go to File –> Add/Remove Snap-in
  3. Select Certificates and click Add
  4. Select Computer Account and click Finish
  5. Click OK
  6. Right click on the old certificate and select Properties
  7. On the General tab, delete the value in the Friendly Name field, or change it to vdm_old
  8. Click OK
  9. Restart the View service on the server
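
If you’d rather script the rename than click through the MMC, the certificate objects returned by PowerShell’s Cert: drive write friendly name changes back to the store.  A rough sketch follows; the thumbprint is a placeholder for your old certificate’s thumbprint:

# Rename the old certificate so it no longer conflicts with the new "vdm" certificate.
$oldCert = Get-ChildItem Cert:\LocalMachine\My | Where-Object { $_.Thumbprint -eq "OLD_CERT_THUMBPRINT" }
$oldCert.FriendlyName = "vdm_old"
# Then restart the Connection Server or Security Server services as described above.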


Certificates and View Composer

Unfortunately, Horizon View Composer uses a different method of managing certificates.  Although the certificates are still stored in the Windows Certificate store, the process of replacing Composer certificates is a little more involved than just changing the friendly name.

The process for replacing or updating the Composer certificate requires a command prompt and the SVIConfig tool.  SVIConfig is the Composer command line tool.  If you’ve ever had to remove a missing or damaged desktop from your View environment, you’ve used this tool.

The process for replacing the Composer certificate is:

  1. Open a command prompt as Administrator on the Composer server
  2. Change directory to your VMware View Composer installation directory
    Note: The default installation directory is C:\Program Files (x86)\VMware\VMware View Composer
  3. Run the following command: sviconfig.exe -operation=replacecertificate -delete=false
  4. Select the correct certificate from the list of certificates in the Windows Certificate Store
  5. Restart the Composer Service

[Image 3: A successful certificate swap]

At this point, all of your certificates should be installed.  If you open up the View Administrator web page, the dashboard should have all green lights.

If you are using a certificate signed on an internal CA for servers that your end users connect to, you will need to deploy your root and intermediate certificates to each computer.  This can be done through Group Policy for Windows computers.  If you’re using Teradici PCoIP Zero Clients, you can deploy the certificates as part of a policy with the management VM.  If you don’t do this, users will not be able to connect without disabling certificate checking in the client.

Windows 8.1 Win-X Menu and Roaming Profiles

One of the new features of Horizon View 5.3 is support for Windows 8.1, and I used 8.1 as my desktop OS of choice as I’ve worked through installing View in my home lab.  After all, why not test the latest version of the desktop platform with the latest supported version of Microsoft Windows?

Like all new OSes, it has its share of issues.  Although I’m not sure that anyone is looking to do a widespread deployment of 8.1 just yet, there is an issue that could possibly hold up any deployment if roaming profiles are needed.

When Microsoft replaced the Start Menu with Metro in Windows 8, they kept something similar to the old Start menu that could be accessed by pressing Win+X.  This menu, shown below, retained a layout that was similar to the start menu and could be used to access various systems management utilities that were hidden by Metro.

[Image: the Windows 8.1 Win+X menu]

The folder for the WinX menu is stored in the local appdata section of the Windows 8.1 user profile, so it isn’t included as part of the roaming profile.  Normally this wouldn’t be a big deal, but there seems to be a bug that doesn’t recreate this folder on login for users with roaming profiles.

While this doesn’t “break” Windows, it does make it inconvenient for power users. 

This won’t be an issue for persistent VDI environments where the user always gets the same desktop or where roaming profiles aren’t used.  However, it could pose some issues to non-persistent VDI environments.

Unfortunately, there aren’t many alternatives to roaming profiles on Windows 8.1.  Unlike the old Start Menu, there is no option to use folder redirection on the WinX folder.  VMware’s Persona Management doesn’t support this version of Windows yet, and even though the installer presents it as an option, the feature doesn’t actually install.  If Persona Management were supported, this issue could be resolved by turning on the option to roam the local appdata folder.
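
One possible workaround, which I haven’t tested in my lab, would be to repopulate the folder at logon from the default profile.  This assumes the default profile on the image still contains a populated WinX folder.  A rough logon-script sketch:

# Hypothetical logon script - recreate the WinX folder if the roaming profile is missing it.
$winx = Join-Path $env:LOCALAPPDATA "Microsoft\Windows\WinX"
If (-not (Test-Path $winx)) {
    Copy-Item "C:\Users\Default\AppData\Local\Microsoft\Windows\WinX" -Destination $winx -Recurse
}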

The current version of Liquidware Labs’ ProfileUnity product does provide beta support for Windows 8.1, but I haven’t tried it in my lab yet to see how ProfileUnity works with 8.1.

The last option, and the one that many end users would probably appreciate, is to move away from the Metro-style interface entirely with a program like Start8 or Classic Shell.  These programs replace the Metro Start Menu with the classic Start Menu from earlier versions of Windows. 

I’ve used Classic Shell in my lab.  It’s an open source program that is available for free, and it includes ADMX files for managing the application via group policy.  It also works with roaming profiles, and it might be a good way to move forward with Windows 8/8.1 without having to retrain users.

Patch Tuesday VDI Pains? We’ve got a script for that…Part 2

In my last post, I discussed the thought process that went into the Patch Tuesday script that we use at $work for updating our Linked-Clone parent VMs. In this post, I will dive deeper into the code itself, including the tools that we need to execute this script.

Prerequisites

There are only two prerequisites for running this script: one on the machine that will execute the script and one on the linked-clone parent VMs. The prerequisite for the machine executing the script is PowerCLI.

As I mentioned in the last post, the Windows Update PowerShell Module will need to be deployed on each of the linked-clone parents that you wish to update with this script. I use Group Policy Preferences to deploy these files to the same folder on each machine. This has two benefits: the files are deployed to the same spot automatically via policy, and any updates I make to the centrally stored copy will propagate via policy. Because this module will be invoked using Invoke-VMScript, I’m not sure how it will work if it is called directly from a network share.

The Windows Update PowerShell Module has one main limitation that will need to be worked around. That limitation is that the Windows Update API cannot be called remotely, so PowerShell remoting cannot be used to launch a script using this module. It’s fairly simple to work around this, though, by using the Invoke-VMScript cmdlet.

This script will need to be executed in a 32-bit PowerShell session in order to use Invoke-VMScript to install Windows Updates. As of PowerCLI 5.1, VIX was a 32-bit only component, and the Invoke-VMScript and Wait-VMTools commands will not work if used in a 64-bit PowerShell window.
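
On a 64-bit Windows server, the 32-bit PowerShell host lives under SysWOW64, so launching the script looks something like this (the script path and name are placeholders):

C:\Windows\SysWOW64\WindowsPowerShell\v1.0\powershell.exe -File "C:\Scripts\Update-ParentVMs.ps1"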

Parameters

There are a number of parameters that will control how this script is executed. The parameters and their descriptions are below.

  • vCServer – vCenter Server
  • Folder – vCenter folder where templates are located
  • ParentVM – Name of a specific Linked-Clone Parent VM that you want to update
  • SnapshotType – Type of updates that are being installed. This is used for building the snapshot name string. Defaults to Windows Updates
  • Administrator – Account with local administrator privileges on the ParentVM. If using a domain account, use the domain\username format for account names. Mandatory parameter
  • Password – Password of the local administrator account. Mandatory parameter

The Administrator and Password parameters are mandatory, and the script will not run without these parameters being defined. The account that is used in this section should have local administrator rights on the target linked-clone parent VMs as administrator permissions will be required when executing the Windows Updates section.

There are two parameters for determining which VMs are updated. You can choose to update all of the Linked-Clone parents that exist within a vCenter folder with the -Folder parameter, or you can use a comma-separated list of VMs with the -ParentVM parameter.

Finally, if your environment is small, I would recommend setting defaults for the vCServer, Folder, and SnapShotType parameters. Having a few default values coded in will make executing this script easier.
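
Putting it all together, a run against a folder of parent VMs would look something like this (the script name and all values are placeholders):

.\Update-ParentVMs.ps1 -vCServer vcenter.domain.local -Folder "Parent VMs" -SnapshotType "Windows Updates" -Administrator domain\svc_patching -Password "P@ssw0rd!"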

Executing the Script

The first thing that the script does is build the string that will be used for the snapshot name. The basic naming scheme for snapshots in the $work environment is the month, the value of the SnapshotType parameter, and the date installed. So if we’re doing Windows Updates for the month of September, the snapshot name string would look like “September Windows Update 9-xx-2013.”
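
Building that string boils down to something like this (a simplified sketch, not the exact code from the script):

# Simplified sketch of building the snapshot name string.
$Date = Get-Date
$SnapshotName = "$($Date.ToString('MMMM')) $SnapshotType $($Date.ToString('M-d-yyyy'))"
# Example result: "September Windows Updates 9-10-2013" when SnapshotType is left at its default.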

After the snapshot name is created, the script will take the following actions:

  1. Power-on the VM
  2. Execute the Windows Update script
  3. Reboot the VM
  4. Wait for the VM to come up, then shut it down
  5. Take Snapshot

Step 1 – Power On the VM

Powering on the VM is probably the easiest step in this whole script. It’s a simple Start-VM command to get it powered up, and then the output from this command is piped into Wait-VMTools. Wait-VMTools is used here to wait for the VMware Tools to be registered as running before continuing onto the next step, which will rely heavily on the tools.
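
A minimal version of this step might look like the line below.  Note that in the PowerCLI documentation the tools-wait cmdlet appears as Wait-Tools, so adjust the name to whatever your PowerCLI version provides:

# Power on the parent VM and wait for VMware Tools to report in before continuing.
Start-VM -VM $vm | Wait-Tools -TimeoutSeconds 300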

Step 2 – Execute the Windows Update Script

Once the VMware Tools are ready, we can continue onto the next step – installing Windows Updates. To do this, the script will use the Invoke-VMScript cmdlet to execute a command from the Windows Update PowerShell Module from within the VM. In order to execute this successfully, the Invoke-VMScript cmdlet will need to use credentials that have local administrator rights on the linked-clone parent VM.

The command for this section will look something like this:

Invoke-VMScript -VM $vm -ScriptText "Path\to\script\get-wuinstall.ps1 -acceptall" -GuestUser $Administrator -GuestPassword $Password

This section of the script will take longer to run than any other section.

Step 3 – Reboot the VM

Once the updates have finished installing, we need to reboot the VM so they take effect. There is an AutoReboot switch as part of the Get-WUInstall script that is run to install the updates, but it doesn’t seem to work correctly when using the Invoke-VMScript cmdlet. This job also needs to watch the status of the VM and VMware Tools, as we’ll need to know the status of both in order to know when to shut the VM down and snapshot it.

Rebooting the VM is fairly simple, and it just uses the Restart-GuestVM cmdlet.

Checking the status of VMware Tools during the shutdown phase of the reboot, though, is difficult. The normal method of checking status is to use the Wait-VMTools cmdlet, but if you pipe the output from Restart-GuestVM to Wait-VMTools, it will immediately move on to the next section because it shows that the tools are up and online. So this script needs a different method for checking the status of VMware Tools, and for that, a custom function will need to be written.

The custom function will use Get-VM and Get-View to return the status of VMware Tools. The results will be put into a Do-Until loop to watch for a change in the status before continuing on to the next section. We’ll run this Do-Until loop twice – once to check that the tools are shut down, and then once to see that the tools come back up.

The code for checking the status of the VMware Tools is:

$toolstatus = (Get-VM $vm | % { Get-View $_.ID } | Select-Object @{ Name="ToolsStatus"; Expression={$_.guest.toolsstatus}}).ToolsStatus
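
Wrapped in the two Do-Until loops, the whole check looks roughly like this (the 10-second polling interval is arbitrary):

# Wait for the tools to go down as the guest reboots...
Do {
    Start-Sleep -Seconds 10
    $toolstatus = (Get-VM $vm | % { Get-View $_.ID } | Select-Object @{ Name="ToolsStatus"; Expression={$_.guest.toolsstatus}}).ToolsStatus
} Until ($toolstatus -ne "toolsOk")

# ...and then wait for them to come back up after the reboot.
Do {
    Start-Sleep -Seconds 10
    $toolstatus = (Get-VM $vm | % { Get-View $_.ID } | Select-Object @{ Name="ToolsStatus"; Expression={$_.guest.toolsstatus}}).ToolsStatus
} Until ($toolstatus -eq "toolsOk")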

Step 4 – Shut down the VM

Once the reboot completes and the VMware Tools show that they are ready, it is time to shut down the linked-clone parent VM. The Shutdown-GuestVM cmdlet is used for this. After the shutdown is initiated, the script checks to see that the VM is fully powered off before moving on to the final step.
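
A rough sketch of this step is below.  I’ve written it with Stop-VMGuest, which is how the guest shutdown cmdlet appears in the PowerCLI documentation, so adjust the name to match your PowerCLI version:

# Shut the guest down cleanly and wait for the VM to power off before snapshotting.
Stop-VMGuest -VM $vm -Confirm:$false
Do {
    Start-Sleep -Seconds 10
    $powerstate = (Get-VM $vm).PowerState
} Until ($powerstate -eq "PoweredOff")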

Step 5 – Snapshot the VM

The final step of this process is to take a snapshot of the updated linked-clone parent. This snapshot will be used for the linked clones in VMware View during the next recompose operation. The snapshot name that the script put together as its first action will be used here. The command for taking the snapshot is New-Snapshot -VM vmname -Name $snapshotname.

Wrapup

If you’re only updating one VM, then the script will disconnect from vCenter and end. If you have multiple VMs that need updating, then it will loop through and start at step 1 for the next linked-clone parent, and it will continue until all of the desktops have been updated.

The code for this script and others is available on github.

Patch Tuesday VDI Pains? We’ve got a script for that…Part 1

Having a non-persistent VDI environment doesn’t mean that Patch Tuesday is no longer a pain. In fact, it may mean more work for the poor VDI administrator who needs to make sure that all the Parent VMs are updated for a recompose. Some of the techniques for addressing patching, such as wake-on-LAN and scheduled installs, don’t necessarily apply to VDI Parent VMs, and there is the additional step of snapshotting to prepare those parent VMs for deployment. And if an extended patch management solution like Solarwinds Patch Manager (formerly EminentWare) or Shavlik SCUPdates is not available, you’ll still need to manage updates for products that are frequent security risks, like Adobe Flash and Java.

So how can you streamline this process and make the process easier for the VDI administrator? There are a couple of ways to address the various pain points of patching VDI desktops for both Windows Updates and for other applications that might be installed in your environment.

This will be part 1 of 2 posts on how to automate updates for Linked-Clone parent VMs. This post will cover the process of updating, and the second post will dive into the code.

Patch Tuesday Process

Before we can start to address the various pain points, let’s look at how Patch Tuesday works for a non-Persistent linked-clone VDI environment and how it differs from a normal desktop environment. In a normal desktop environment, you can schedule Windows Updates to install after hours and use a Wake-on-LAN tool to make sure that every desktop is powered on to receive those updates and reboot automatically through Group Policy.

That procedure doesn’t apply for Linked-Clone desktops, and some additional orchestration is required to get the Linked-Clone parent VMs patched and ready for recompose. When patching Linked-Clone desktop images, you need to do the following:

  1. Power-on the Linked-Clone Parent VMs.
  2. Log in to install Windows and other updates
  3. Reboot the machine
  4. Repeat steps two and three as necessary
  5. Once all updates are installed, shut down the VM
  6. Take a snapshot

Adobe seems to have selected Microsoft’s Patch Tuesday (the 2nd Tuesday of the month) as their patch release date. Oracle, however, does not release on the same cycle, so updates to Java would require a second round of updates and recomposes if Java is needed in your environment.

If you have more than a few linked-clone parent VMs in your environment, there is a significant time commitment involved in keeping them up-to-date. Even if you do your updates concurrently and do other things while the updates are installing, there are still parts that have to be done by hand such as power operations, snapshotting, and installing 3rd-party updates that aren’t handled through WSUS.

One Patch Source to Rule Them All

Rather than downloading and running updates on each Linked-Clone parent VM or relying on the auto-update utilities that come with some of these smaller applications and plugins, we’ve standardized on one primary delivery mechanism at $work – Microsoft WSUS. WSUS handles the majority of the patches we deploy during the month, and it has a suite of reports built around it to track updates and the computers they’re installed, or fail to install, on. This makes it the perfect centerpiece for patch management in the environment at $work.

But WSUS doesn’t handle Adobe, Java, or other updates natively. WSUS 3.0 introduced an API that a number of patch management products use to add 3rd-party updates to the system. One of these products is an open-source solution called Local Updates Publisher.

Local Updates Publisher is a solution that allows an administrator to take a software update, be it an EXE, MSI, or MSP file, repackage it, and deploy it through WSUS. Additional rules can be built around this package to determine which machines are eligible for the update, and those updates can be approved, rejected, and/or targeted to various deployment groups right from within the application. It will also accept SCUP catalogs as an update source.

There is a bit of manual work involved with this method as some of the applications that are frequently updated do not come with SCUP catalogs – primarily Java. Adobe provides SCUP catalogs for Reader and Flash. There is a 3rd party SCUP catalog that does contain Java and other open-source applications from PatchMyPC.net (note – I have not used this product), and there are other options such as Solarwinds Patch Manager and Shavlik.

Having one centralized patch source will make it easier to automate patch installation.

Automating Updates

Once there is a single source for providing both Microsoft and 3rd-party updates, the work on automating the installation of updates can begin. Automating the vSphere side of things will be done in PowerCLI, so the Windows Update solution should also use PowerShell. This leaves two options – POSHPaig, a hybrid solution that uses PowerShell to generate VBScript to run the updates, and the Windows Update PowerShell Module. POSHPaig is a good tool, but in my experience, it is more of a GUI product that works with multiple machines, while the Windows Update PowerShell Module is geared more toward scripted interactions.

The Windows Update PowerShell Module is a free module developed by Microsoft MVP Michal Gadja. It is a collection of local commands that will connect to the default update source – either WSUS or Microsoft Update – download all applicable updates for the system, and automatically install them. The module will need to be stored locally on the Linked-Clone Parent VMs. I use Group Policy Preferences to load the module onto each machine as it ensures that the files will be loaded into the same place and updates will propagate automatically.
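
Run locally on a parent VM, the core of the update pass is roughly the following; the module path is a placeholder, and switch names can vary between versions of the module:

# Load the locally deployed module and install everything that's approved for this machine.
Import-Module "C:\Scripts\PSWindowsUpdate\PSWindowsUpdate.psm1"
Get-WUInstall -AcceptAll -IgnoreReboot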

One of the limits of the Windows Update API is that it cannot be called remotely, so the commands from this module will not work with PowerShell Remoting. There is another way to remotely call this script, though. The Invoke-VMScript cmdlet can be used to launch this script through VMware Tools. In order to use Invoke-VMScript, the VIX API will need to be installed and the script run in a 32-bit instance of PowerShell.

On the Next Episode…

I didn’t originally plan on breaking this into two parts. But this is starting to run a little long. Rather than trying to cram everything into one post, I will be breaking this up into two parts and cover the script and some of the PowerShell/PowerCLI gotchas that came up when testing it out.

Updated Script – Start-Recompose.ps1

I will be giving my first VMUG presentation on Thursday, September 26th  at the Wisconsin VMUG meeting in Appleton, WI.  The topic of my presentation will be three scripts that we use in our VMware View environment to automate routine and time consuming tasks.

One of the scripts that I will be including in my presentation is the Start-Recompose script that I posted a few weeks ago.  I’ve made some updates to address a few things that I’ve always wanted to improve with this script.  I’ll be posting about the other two scripts this week.

These improvements are:

  • Getting pool information directly from the View LDAP datastore instead of using the Get-Pool cmdlet (a rough sketch of that query follows this list)
  • Checking for space on the Replica volume before scheduling the Recompose operation
  • Adding email and event logging alerts
  • The ability to recompose just one pool if multiple pools share the same base image.
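
For reference, reading pool objects straight out of the ADAM (LDAP) instance on a Connection Server looks roughly like the sketch below.  The DN and object class are from memory rather than from the script itself, so verify them against your own environment:

# Rough sketch - list desktop pool objects from the View LDAP datastore on a Connection Server.
$root = [ADSI]"LDAP://localhost:389/OU=Server Groups,DC=vdi,DC=vmware,DC=int"
$searcher = New-Object System.DirectoryServices.DirectorySearcher($root, "(objectClass=pae-ServerPool)")
$searcher.FindAll() | ForEach-Object { $_.Properties["cn"] }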

The updated script will still need to be run from the View Connection Server as it requires the View PowerCLI cmdlets.  The vSphere PowerCLI cmdlets and the Quest AD cmdlets will also need to be available.  A future update will probably remove the need for the Quest cmdlets, but I didn’t feel like reinventing the wheel at the time.

The script can be downloaded from github here.

VMware View Pool Recompose PowerCLI Script

Edit – I’ve updated this script recently.  The updated version includes some additional features, such as checking to make sure there is enough space on the replica volume for a successful clone.  You can read more about it here: http://www.seanmassey.net/2013/09/updated-script-start-recomposeps1.html

One of the many hats that I wear at $work is the administration of our virtual desktop environment that is built on VMware View.  Although the specific details of our virtual desktop environment may be covered in another post, I will provide a few details here for background.  Our View environment has about 240 users and 200 desktops, although we only have about 150 people logged in at a given time.  It is almost 100% non-persistent, and the seven desktops that are set up as persistent are only set up that way due to an application licensing issue; they refresh on logout like our non-persistent desktops.

Two of the primary tasks involved with that are managing snapshots and scheduling pool recompose operations as part of our patching cycle.  I wish I could say that it was a set monthly cycle, but a certain required plugin…*cough* Java *cough*…and one application that we use for web conferencing seem to break and require an update every time there is a slight update to Adobe Air.  There is also the occasional request from a department that is a priority and falls outside of the normal update cycle, such as an application that needs to be added on short notice.

Our 200 desktops are grouped into sixteen desktop pools, and there are seven Parent VMs that are used as the base images for these sixteen pools.  That seems like a lot given the total number of desktops that we have, but there are business reasons for all of these, including restrictions on remote access, department applications that don’t play nicely with ThinApp, and restrictions on the number of people from certain departments that can be logged in at one time.

Suffice it to say that with sixteen pools to schedule recompose actions for as part of the monthly patch cycle, it can get rather tedious and time consuming to do it through the VMware View Administrator.  That is where PowerShell comes in.  View ships with a set of PowerCLI cmdlets, and these can be run from any connection broker in your environment.  You can execute the script remotely, but the script file will need to be placed on your View Connection Broker.

I currently schedule this script to run using the Community Edition of the JAMS Job Scheduler, but I will be looking at using vCenter Orchestrator in the future to tie in automation of taking and removing snapshots.

The inspiration for, and bits of, this script originally came from Greg Carriger.  You can view his work on his blog.  My version does not take or remove any snapshots, and my version will work with multiple pools that are based on the same Parent VM.  The full script is available to download here.

Prerequisites:

  • PowerCLI 5.1 or greater installed on the Connection Broker
  • View PowerCLI snapin
  • PowerShell 2.0 or greater

View requires the full snapshot path in order to update the pool and do a recompose, so one of the first things that needs to be done is build the snapshot path.  This can be a problem if you’re not very good at cleaning up old snapshots (like I am…although I have a script for that now too).  That issue can be solved with the code below.

Function Build-SnapshotPath
{
    Param($ParentVM)

    ## Create the snapshot path
    $Snapshots = Get-Snapshot -VM $ParentVM
    $SnapshotPath = ""

    ForEach ($Snapshot in $Snapshots)
    {
        $SnapshotName = $Snapshot.Name
        $SnapshotPath = $SnapshotPath + "/" + $SnapshotName
    }

    Return $SnapshotPath
}
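
Calling the function is then a one-liner, assuming $ParentVM already holds the parent VM name:

$SnapshotPath = Build-SnapshotPath $ParentVM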

Once you have your snapshot path constructed, you need to identify the pools that are based on the Parent VM.

$Pools = Get-Pool | Where {$_.ParentVMPath -like "*$ParentVM*"}

A simple foreach loop can be used to iterate through and update your list of pools once you know which pools you need to update.  This section of code will update the default snapshot used for desktops in the pool, schedule the recompose operation, and write out to the event log that the operation was scheduled.

Stop on Error is set to false because this script is intended to be run overnight, and View can, and will, stop a recompose operation over the slightest error.  That can leave desktops stuck in a halted state and inaccessible when staff come in to work the following morning.

ForEach ($Pool in $Pools)
{
    $PoolName = $Pool.Pool_ID
    $ParentVMPath = $Pool.ParentVMPath

    # Update the base image for the pool
    Update-AutomaticLinkedClonePool -pool_id $PoolName -parentVMPath $ParentVMPath -parentSnapshotPath $SnapshotPath

    ## Recompose
    ## Stop on Error is set to false. This will allow the pool to continue recompose operations after hours if a single VM encounters an error rather than leaving the recompose tasks in a halted state.
    Get-DesktopVM -pool_id $PoolName | Send-LinkedCloneRecompose -schedule $Time -parentVMPath $ParentVMPath -parentSnapshotPath $SnapshotPath -forceLogoff:$true -stopOnError:$false

    Write-EventLog -LogName Application -Source "VMwareView" -EntryType Information -EventID 9000 -Message "Pool $PoolName will start to recompose at $Time using $SnapshotName."
}