Horizon 8.0 Part 10: Deploying the Unified Access Gateway

And we’re back…this week with the final part of deploying a Horizon 2006 environment – deploying the Unified Access Gateway to enable remote access to desktops.

Before we go into the deployment process, let’s dive into the background on the appliance.

The Unified Access Gateway (also abbreviated as UAG) is a purpose-built virtual appliance that is designed to be the remote access component for VMware Horizon and Workspace ONE.  The appliance is hardened for deployment in a DMZ scenario, and it is designed to only pass authorized traffic from authenticated users into a secure network.

As of Horizon 2006, the UAG is the primary remote access component for Horizon.  This wasn’t always the case – previous Horizon releases used the Horizon Security Server.  The Security Server was a Windows Server running a stripped-down version of the Horizon Connection Server, and this component was deprecated and removed in Horizon 2006.

The UAG has some benefits over the Security Server.  First, it does not require a Windows license.  The UAG is built on Photon, VMware’s lightweight Linux distribution, and it is distributed as an appliance.  Second, the UAG is not tightly coupled to a connection server, so you can use a load balancer between the UAG and the Connection Server to eliminate single points of failure.

And finally, multifactor authentication is validated on the UAG in the DMZ.  When multi-factor authentication is enabled, users are prompted for that second factor first, and they are only prompted for their Active Directory credentials if this authentication is successful.  The UAG can utilize multiple forms of MFA, including RSA, RADIUS, and SAML-based solutions, and setting up MFA on the UAG does not require any changes to the connection servers.

There have also been a couple of 3rd-party options that could be used with Horizon. I won’t be covering any of the other options in this post.

If you want to learn more about the Unified Access Gateway, including a deeper dive on its capabilities, sizing, and deployment architectures, please check out the Unified Access Gateway Architecture guide on VMware Techzone.

Deploying the Unified Access Gateway

There are two main ways to deploy the UAG.  The first is a manual deployment where the UAG’s OVA file is manually deployed through vCenter, and then the appliance is configured through the built-in Admin interface.  The second option is the PowerShell deployment method, where a PowerShell script and OVFTool are used to automatically deploy the OVA file, and the appliance’s configuration is injected from an INI file during deployment.

Typically, I prefer using the PowerShell deployment method.  This method consists of a PowerShell deployment script and an INI file that contains the configuration for each appliance that you’re deploying.  I like the PowerShell script over deploying the appliance through vCenter because the appliance is ready to use on first boot.  It also allows administrators to track all configurations in a source control system such as GitHub, which provides both documentation for the configuration and change tracking.  This method makes it easy to redeploy or upgrade the Unified Access Gateway because I can rerun the script with my config file and the new OVA file.

The PowerShell script requires the OVF Tool to be installed on the server or desktop where the PowerShell script will be executed.  The latest version of the OVF Tool can be downloaded from MyVMware.  PowerCLI is not required when deploying the UAG as OVF Tool will be deploying the appliance and injecting the configuration.

The zip file that contains the PowerShell scripts includes sample templates for different use cases.  This includes Horizon use cases with RADIUS and RSA-based multifactor authentication.  You can also find the reference guide for all options here.

If you haven’t deployed a UAG before, are implementing a new feature on the UAG, or you’re not comfortable creating the INI configuration file from scratch, then you can use the manual deployment method to configure your appliance and then export the configuration in the INI file format that the PowerShell deployment method can consume.  This exported configuration only contains the appliance’s Workspace ONE or Horizon configuration – you would still have to add in your vSphere and SSL Certificate configuration.

You can export the configuration from the UAG admin interface.  It is the last item in the Support Settings section.

UAG-Config-Export

One other thing that can trip people up when creating their first UAG deployment file is the deployment path used by OVFTool.  This is not always straightforward, and vCenter has some “hidden” objects that need to be included in the path.  OVFTool can be used to discover the path where the appliance will be deployed.

You can use OVFTool to connect to your vCenter with a partial path, and then view the objects in that location.  It may require multiple connection attempts with OVFTool to build out the path.  You can see an example of this over at the VMwareArena blog on how to export a VM with OVFTool or in question 8 in the troubleshooting section of the Using PowerShell to Deploy the Unified Access Gateway guide.
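
As a rough illustration (the credentials and inventory names here are made up), the discovery process looks something like this.  Each attempt with a partial path returns a list of possible completions that you can append for the next attempt:

ovftool "vi://administrator%40vsphere.local@vcenter.example.com/"
(lists the datacenters in the vCenter inventory)
ovftool "vi://administrator%40vsphere.local@vcenter.example.com/Lab-Datacenter/host/"
(lists the hosts and clusters under the hidden host folder)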

Before deploying the UAG, we need to get some prerequisites in place.  These are:

  1. Download the Unified Access Gateway OVA file, PowerShell deployment script zip file, and the latest version of OVFTool from MyVMware.
  2. Right click on the PowerShell zip file and select Properties.
  3. Click Unblock.  This step is required because the file was downloaded from the Internet and is untrusted by default, which can prevent the scripts from executing after we unzip them.
  4. Extract the contents of the downloaded ZIP file to a folder on the system where the deployment script will be run.  The ZIP file contains multiple files, but we will only be using the uagdeploy.ps1 script file and the uagdeploy.psm1 module file.  The other scripts are used to deploy the UAG to Hyper-V, Azure, and AWS EC2.  The zip file will also contain a number of default templates.  When deploying the access points for Horizon, I recommend starting with the UAG2-Advanced.ini template.  This template provides the most options for configuring Horizon remote access and networking.  Once you have the UAG deployed successfully, I recommend copying the relevant portions of the SecurID or RADIUS auth templates into your working AP template.  This allows you to test remote access and your DMZ networking and routing before adding in MFA.
  5. Before we start filling out the template for our first access point, there are some things we’ll need to do to ensure a successful deployment. These steps are:
    1. Ensure that the OVF Tool is installed on your deployment machine.
    2. Locate the UAG’s OVA file and record the full file path.  The OVA file can be placed on a network share.
    3. We will need a copy of the certificate, including any intermediate and root CA certificates, and the private key in PFX or PEM format.  Place these files into a folder on the local system or a network share and record the full path.  If you are using PEM files, the certificate files should be concatenated so that the certificate and any CA certificates in the chain are in one file, and the private key should not have a password on it.  If you are using PFX files, you will be prompted for a password when deploying the UAG.
    4. We need to create the path to the vSphere resources that OVF Tool will use when deploying the appliance.  This path looks like: vi://user:PASSWORD@vcenter.fqdn.or.IP/DataCenter Name/host/Host or Cluster Name/.  OVF Tool is case sensitive, so make sure that the datacenter name and host or cluster names are entered as they are displayed in vCenter.

      The uppercase PASSWORD in the OVFTool string is a variable that prompts the user for a password before deploying the appliance.  If you are automating your deployment, you can replace this with the password for the service account that will be used for deploying the UAG.

      Note: I don’t recommend saving the service account password in the INI files. If you plan to do this, remember best practices around saving passwords in plaintext files and ensure that your service account only has the required permissions for deploying the UAG appliances.


    5. Generate the passwords that you will use for the appliance Root and Admin passwords.
    6. Get the SSL Thumbprint for the certificate on your Connection Server or load balancer that is in front of the connection servers.
  6. Fill out the template file.  The file has comments for documentation, so it should be pretty easy to fill out.  You will need to have a valid port group for all three networks, even if you are only using the OneNic deployment option.  A trimmed-down example INI is shown after this list.
  7. Save your INI file as <UAGName>.ini in the same directory as the deployment scripts.
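
To give you an idea of what a finished file looks like, here is a heavily trimmed sketch of a OneNic Horizon configuration.  Every name, path, address, and thumbprint below is a placeholder, and the real templates contain many more options:

[General]
name=UAG1
source=C:\UAG\euc-unified-access-gateway-20.06.0.0.ova
target=vi://administrator@vsphere.local:PASSWORD@vcenter.example.com/Lab-Datacenter/host/Lab-Cluster
ds=Datastore1
deploymentOption=onenic
netInternet=DMZ-Network
netManagementNetwork=DMZ-Network
netBackendNetwork=DMZ-Network
pfxCerts=C:\UAG\uag1.pfx

[Horizon]
proxyDestinationUrl=https://horizon.internal.example.com
proxyDestinationUrlThumbprints=sha1=1a:2b:3c:4d:5e:6f:70:81:92:a3:b4:c5:d6:e7:f8:09:1a:2b:3c:4d
blastExternalUrl=https://uag1.example.com:8443
pcoipExternalUrl=203.0.113.10:4172
tunnelExternalUrl=https://uag1.example.com:443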

There is one change that we will need to configure on our Connection Servers before we deploy the UAGs – disabling the Blast and PCoIP secure gateways.  If these are not disabled, the UAG will attempt to tunnel the user protocol session traffic through the Connection Server, and users will get a black screen instead of a desktop.

The steps for disabling the gateways are:

  1. Log into your Connection Server admin interface.
  2. Go to Settings -> Servers -> Connection Servers
  3. Select your Connection Server and then click Edit.
  4. Uncheck the following options:
    1. Use Secure Tunnel to Connect to machine
    2. Use PCoIP Secure Gateway for PCoIP connections to machine
  5. Under Blast Secure Gateway, select Use Blast Secure Gateway for only HTML Access connections to machine.  This option may reduce the number of certificate prompts that users receive if using the HTML5 client to access their desktop.
  6. Click OK.

Connection Server Settings

Once all of these tasks are done, we can start deploying the UAGs.  The steps are:

  1. Open PowerShell and change to the directory where the deployment scripts are stored.
  2. Run the deployment script.  The syntax is .\uagdeploy.ps1 -iniFile <uagname>.ini
  3. Enter the appliance root password twice.
  4. Enter the admin password twice.  This password is optional; however, if one is not configured, the REST API and Admin interface will not be available.
    Note: The UAG Deploy script has parameters for the root and admin passwords.  These can be used to reduce the number of prompts after running the script, as shown in the example after these steps.
  5. If RADIUS is configured in the INI file, you will be prompted for the RADIUS shared secret.
  6. After the script opens the OVA and validates the manifest, it will prompt you for the password for accessing vCenter.  Enter it here.
  7. If a UAG with the same name is already deployed, it will be powered off and deleted.
  8. The appliance OVA will be deployed.  When the deployment is complete, the appliance will be powered on and get an IP address from DHCP.
  9. The appliance configuration defined in the INI file will be injected into the appliance and applied during the bootup.  It may take a few minutes for configuration to be completed.
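
As an example, a scripted run that skips the interactive password prompts might look like the following.  The -rootPwd and -adminPwd parameter names should be verified against the help in the version of the script that you download:

.\uagdeploy.ps1 -iniFile UAG1.ini -rootPwd Passw0rd! -adminPwd Passw0rd!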


Testing the Unified Access Gateway

Once the appliance has finished its deployment and self-configuration, it needs to be tested to ensure that it is operating properly.  The best way that I’ve found for doing this is to use a mobile device, such as a smartphone or cellular-enabled tablet, to access the environment using the Horizon mobile app.  If everything is working properly, you should be prompted to sign in, and desktop pool connections should be successful.

If you are not able to sign in, or you can sign in but not connect to a desktop pool, the first thing to check is your firewall rules.  Validate that TCP and UDP ports 443, 8443, and 4172 are open between the Internet and your Unified Access Gateway.  You may also want to check your Connection Server configuration and ensure that the Secure Tunnel, PCoIP Secure Gateway, and Blast Secure Gateway are configured as described above.

If you’re deploying your UAGs with multiple NICs and your desktops live in a different subnet than your UAGs and/or your Connection Servers, you may need to statically define routes.  The UAG typically has the default route set on the Internet or external interface, so it may not have routes to the desktop subnets unless they are statically defined.  An example of a route configuration may look like the following:

routes1 = 192.168.2.0/24 192.168.1.1,192.168.3.0/24 192.168.1.1

If you need to make a routing change, the best way to handle it is to update the ini file and then redeploy the appliance.

Once deployed and tested, your Horizon infrastructure is configured, and you’re ready to start having users connect to the environment.

Minimal Touch VDI Image Building With MDT, PowerCLI, and Chocolatey

Recently, Mark Brookfield posted a three-part series on the process he uses for building Windows 10 images in HobbitCloud (Part 1, Part 2, Part 3). Mark has put together a great series of posts that explain the tools and the processes that he is using in his lab, and it has inspired me to revisit this topic and talk about the process and tooling I currently use in my lab and the requirements and decisions that influenced this design.

Why Automate Image Building?

Hand-building images is a time-intensive process.  It is also potentially error-prone as it is easy to forget applications and specific configuration items, requiring additional work or even building new images depending on the steps that were missed.  Incremental changes that are made to templates may not make it into the image building documentation, requiring additional work to update the image after it has been deployed.

Automation helps solve these challenges.  Once the process is nailed down, you can expect consistent results on every build.  If you need to make incremental changes to the image, you can add them to your build sequence so they aren’t forgotten when building the next image.

Tools in My Build Process

When I started researching my image build process back in 2017, I was looking to find a way to save time and provide consistent results on each build.  I wanted a tool that would allow me to build images with little interaction with the process on my part.  But it also needed to fit into my lab.  The main tools I looked at were Packer with the JetBrains vSphere plugin and Microsoft Deployment Toolkit (MDT).

While Packer is an incredible tool, I ended up selecting MDT as the main tool in my process.  My reason for selecting MDT has to do with NVIDIA GRID.  The vSphere plugin for Packer does not currently support provisioning machines with a vGPU, so using this tool would have required manual post-deployment work.

One nice feature of MDT is that it can utilize a SQL Server database for storing details about registered machines such as the computer name, the OU where the computer object should be placed, and the task sequence to run when booting into MDT.  This allows a new machine to be provisioned in a zero-touch fashion, and the database can be populated from PowerShell.

Unlike Packer, which can create and configure the virtual machine in vCenter, MDT only handles the operating system deployment.  So I needed some way to create and configure the VM in vCenter with a vGPU profile.  The best method of doing this is PowerCLI.  While there are no native cmdlets for managing vGPUs or other Shared PCI objects in PowerCLI, there are ways to utilize vSphere extension data to add a vGPU profile to a VM.
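
There’s no native cmdlet for this, but here is a rough sketch of the extension data approach, assuming a hypothetical VM name and a GRID M10 profile.  The profile string must match one advertised by your hosts, and the VM must be powered off:

# Attach an NVIDIA vGPU profile to a powered-off VM via the vSphere API
$vm = Get-VM -Name "WIN10-GOLD01"
$backing = New-Object VMware.Vim.VirtualPCIPassthroughVmiopBackingInfo
$backing.Vgpu = "grid_m10-1b"    # vGPU profile name from the host
$device = New-Object VMware.Vim.VirtualPCIPassthrough
$device.Backing = $backing
$change = New-Object VMware.Vim.VirtualDeviceConfigSpec
$change.Operation = "add"
$change.Device = $device
$spec = New-Object VMware.Vim.VirtualMachineConfigSpec
$spec.DeviceChange = @($change)
$vm.ExtensionData.ReconfigVM($spec)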

While MDT can install applications as part of a task sequence, I wanted something a little more flexible.  Typically, when a new version of an application is added, the way I had structured my task sequences required them to be updated to utilize the newer version.  The reason for this is that I wasn’t using Application Groups for certain applications that were going into the image, mainly the agents that were being installed, as I wanted to control the install order and manage reboots. (Yes…I may have been using this wrong…)

I wanted to reduce my operational overhead when applications were updated so I went looking for alternatives.  I ended up settling on using Chocolatey to install most of the applications in my images, with applications being hosted in a private repository running on the free edition of ProGet.
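
As a quick illustration, pointing Chocolatey at a private ProGet feed and installing a package from it looks like this.  The feed URL and package name are placeholders for whatever you host in your repository:

choco source add -n=internal -s="https://proget.example.com/nuget/internal-choco/"
choco install horizon-agent -y --source=internal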

My Build Process Workflow

My build workflow consists of six steps with one branch.  These steps are:

  1. Create a new VM in vCenter
  2. Configure VM options such as memory reservations and video RAM
  3. GPU Flag Only – Add a virtual GPU with the correct profile to the VM.
  4. Identify the task sequence that will be used.  There are different task sequences for GPU and non-GPU machines, and logic in the script builds the task sequence name based on parameters passed when the script is run.
  5. Create a new computer entry in the MDT database.  This includes the computer name, MAC address, task sequence name, role, and a few other variables.  This step is performed in PowerShell using the MDTDB PowerShell module.
  6. Power on the VM.  This is done using PowerCLI.  The VM will PXE boot into a Windows PE environment configured to point to my MDT server.  (A condensed sketch of steps 5 and 6 follows.)
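
Here’s a condensed, hypothetical sketch of steps 5 and 6.  The server names, MAC address, and settings keys are illustrative, so check the MDTDB module’s documentation for the exact parameters it supports:

Import-Module .\MDTDB.psm1
Connect-MDTDatabase -sqlServer "sql.example.com" -database "MDT"
# Register the new machine so MDT knows its name and task sequence
New-MDTComputer -macAddress "00:50:56:AA:BB:CC" -description "WIN10-GOLD01" -settings @{ OSDComputerName="WIN10-GOLD01"; TaskSequenceID="W10-GPU" }
# Power on the VM with PowerCLI; it PXE boots into the MDT task sequence
Start-VM -VM "WIN10-GOLD01"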

Build Process

After the VM is powered on and boots to Windows PE, the rest of the process is hands off. All of the MDT prompts, such as the prompt for a computer name or the task sequence, are disabled, and the install process relies on the database for things like computer name and task sequence.

From this point forward, it takes about forty-five minutes to an hour to complete the task sequence.  MDT installs Windows 10 and any drivers like the VMXNET3 driver, installs Windows Updates from an internal WSUS server, installs any agents or applications such as VMware Tools, the Horizon Agent, and the UEM/DEM agent, silently runs the OSOT tool, and stamps the registry with the image build date.

Future Direction and Process Enhancements

While this process works well today, it is a bit cumbersome.  Each new Windows 10 release requires a new task sequence for version control.  It is also difficult to work tools like the OSDeploy PowerShell scripts by David Segura (used for slipstreaming updates into a Windows 10 WIM) into the process.  While there are ways to automate MDT, I’d rather invest time in automating builds using Packer.

There are a couple of post-deployment steps that I would like to integrate into my build process as well. I would like to utilize Pester to validate the image build after it completes, and then if it passes, execute a shutdown and VM snapshot (or conversion to template) so it is ready to be consumed by Horizon. My plan is to utilize a tool like Jenkins to orchestrate the build pipeline and do something similar to the process that Mark Brookfield has laid out.

The ideal process that I am working towards will have multiple workflows to manage various aspects of the process.  Some of these are:

1. A process for automatically creating updated Windows 10 ISOs with the latest Windows Updates using the OSDeploy PowerShell module.

2. A process for creating Chocolatey package updates and submitting them to my ProGet repository for applications managed by Chocolatey.

3. A process to build new images when Windows 10 or key applications (such as VMware Tools, the Horizon Agent, or NVIDIA Drivers) are updated. This process will ideally use Packer as the build tool to simplify management. The main dependency for this step is adding NVIDIA GRID support for the JetBrains Packer vSphere Plug-in.

So this is what I’m doing for image builds in my lab, and the direction I’m planning to go.

VMware Horizon and Horizon Cloud Enhancements – Part 1

This morning, VMware announced enhancements to both the on-premises Horizon Suite and Horizon Cloud product sets.  Although there are a lot of additions to all products in the Suite, the VMware blog post did not go too in-depth into many of the new features that you’ll be seeing in the upcoming editions.

VMware Horizon 7.5

Let’s start with the biggest news in the blog post – the announcement of Horizon 7.5.  Horizon 7.5 brings several new, long-awaited features with it.  Some of these features are:

  1. Support for Horizon on VMC (VMware on AWS)
  2. The “Just-in-Time” Management Platform (JMP)
  3. Horizon 7 Extended Service Branch (ESB)
  4. Instant Clone improvements, including support for the new vSphere 6.7 Instant Clone APIs
  5. Support for IPv4/IPv6 Mixed-Mode Operations
  6. Cloud-Pod Architecture support for 200K Sessions
  7. Support for Windows 10 Virtualization-Based Security (VBS) and vTPM on Full Clone Desktops
  8. RDSH Host-based GPO Support for managing protocol settings

I’m not going to touch on all of these items.  I think the first four are the most important for this portion of the suite.

Horizon on VMC

Horizon on VMC is a welcome addition to the Horizon portfolio.  Unlike Citrix, the traditional VMware Horizon product has not had a good cloud story because it has been tightly coupled to the VMware SDDC stack.  By enabling VMC support for Horizon, customers can now run virtual desktops in AWS, or utilize VMC as a disaster recovery option for Horizon environments.

Full clone desktops will be the only desktop type supported in the initial release of Horizon on VMC.  Instant Clones will be coming in a future release, but some additional development work will be required since Horizon will not have the same access to vCenter in VMC as it has in on-premises environments.  I’m also hearing that Linked Clones and Horizon Composer will not be supported in VMC.

The initial release of Horizon on VMC will only support core Horizon, the Unified Access Gateway, and VMware Identity Manager.  Other components of the Horizon Suite, such as UEM, vRealize Operations, and App Volumes have not been certified yet (although there should be nothing stopping UEM from working in Horizon on VMC because it doesn’t rely on any vSphere components).  Security Server, Persona Management, and ThinApp will not be supported.

Horizon Extended Service Branches

Under the current release cadence, VMware targets one Horizon 7 release per quarter.  The current support policy for Horizon states that a release only continues to receive bug fixes and security patches if a new point release hasn’t been available for at least 60 days.  Let’s break that down to make it a little easier to understand.

  1. VMware will support any version of Horizon 7.x for the lifecycle of the product.
  2. If you are currently running the latest Horizon point release (ex. Horizon 7.4), and you find a critical bug/security issue, VMware will issue a hot patch to fix it for that version.
  3. If you are running Horizon 7.4, and Horizon 7.5 has been out for less than 60 days when you find a critical bug/security issue, VMware will issue a hot patch to fix it for that version.
  4. If you are running Horizon 7.4, and Horizon 7.5 has been out for more than 60 days when you find a critical bug/security issue, the fix for the bug will be applied to Horizon 7.5 or later, and you will need to upgrade to receive the fix.

In larger environments, Horizon upgrades can be non-trivial efforts that enterprises may not undertake every quarter.  There are also some verticals, such as healthcare, where core business applications are certified against specific versions of a product, and upgrading or moving away from that certified version can impact support or support costs for key business applications.

With Horizon 7.5, VMware is introducing a long-term support bundle for the Horizon Suite.  This bundle will be called the Extended Service Branch (ESB), and it will contain Horizon 7, App Volumes, User Environment Manager, and Unified Access Gateway.  The ESB will have 2 years of active support from release date where it will receive hot fixes, and each ESB will receive three service packs with critical bug and security fixes and support for new Windows 10 releases.  A new ESB will be released approximately every twelve months.

Each ESB branch will support approximately 3-4 Windows 10 builds, including any recent LTSC builds.  That means the Horizon 7.5 ESB release will support the Windows 10 1709, 1803, 1809, and 1809 LTSC builds.

This packaging is nice for enterprise organizations that want to limit the number of Horizon upgrades they apply in a year or require long-term support for core business applications.  I see this being popular in healthcare environments.

Extended Service Branches do not require any additional licensing, and customers will have the option to adopt either the current release cadence or the extended service branch when implementing their environment.

JMP

The Just-in-Time Management Platform, or JMP, is a new component of the Horizon Suite.  The intention is to bring together Horizon, Active Directory, App Volumes, and User Environment Manager to provide a single portal for provisioning instant clone desktops, applications, and policies to users.  JMP also brings a new, HTML5 interface to Horizon.

I’m a bit torn on the concept.  I like the idea behind JMP and providing a portal for enabling user self-provisioning.  But I’m not sure building that portal into Horizon is the right place for it.  A lot of organizations use Active Directory Groups as their management layer for Horizon Desktop Pools and App Volumes.  There is a good reason for doing it this way.  It’s easy to audit who has desktop or application access, and there are a number of ways to easily generate reports on Active Directory Group membership.

Many customers that I talk to are also attempting to standardize their IT processes around an ITSM platform that includes a Service Catalog.  The most common one I run across is ServiceNow.  The customers that I’ve talked to that want to implement self-service provisioning of virtual desktops and applications often want to do it in the context of their service catalog and approval workflows.

It’s not clear right now if JMP will include an API that will allow customers to integrate it with an existing service catalog or service desk tool.  If it does include an API, then I see it being an important part of automated, self-service end-user computing solutions.  If it doesn’t, then it will likely be yet another user interface, and the development cycles would have been better spent on improving the Horizon and App Volumes APIs.

Not every customer will be utilizing a service catalog, ITSM tool and orchestration. For those customers, JMP could be an important way to streamline IT operations around virtual desktops and applications and provide them some benefits of automation.

Instant Clone Enhancements

The release of vSphere 6.7 brought with it new Instant Clone APIs.  The new APIs bring features to VMFork that seem new to pure vSphere admins but have been available to Horizon for some time, such as vMotion.  The new APIs are why Horizon 7.4 does not support vSphere 6.7 for Instant Clone desktops.

Horizon 7.5 will support the new vSphere 6.7 Instant Clone APIs.  It is also backward compatible with the existing vSphere 6.0 and 6.5 Instant Clone APIs.

There are some other enhancements coming to Instant Clones as well.  Instant Clones will now support vSGA and Soft3D.  These settings can be configured in the parent image.  And if you’re an NVIDIA vGPU customer, more than one vGPU profile will be supported per cluster when GPU Consolidation is turned on.  NVIDIA GRID can only run a single profile per discrete GPU, so this feature will be great for customers that have Maxwell-series boards, especially the Tesla M10 high-density board that has four discrete GPUs.  However, I’m not sure how beneficial it will be for customers that adopt Pascal-series or Volta-series Tesla cards, as these only have a single discrete GPU per board.  There may be some additional design considerations that need to be worked out.

Finally, there is one new Instant Clone feature for VSAN customers.  Before I explain the feature, I need to explain how Horizon utilizes VMFork and Instant Clone technology.  Horizon doesn’t just utilize VMFork – it adds its own layers of management on top of it to overcome the limitations of the first generation technology.  This is how Horizon was able to support Instant Clone vMotion when standard VMFork could not.

This additional layer of management also allows VMware to do other cool things with Horizon Instant Clones without having to make major changes to the underlying platform.  One of the new features that is coming in Horizon 7.5 for VSAN customers is the ability to use Instant Clones across cluster boundaries.

For those who aren’t familiar with VSAN, it is VMware’s software-defined storage product.  The storage boundary for VSAN aligns with the ESXi cluster, so I’m not able to stretch a VSAN datastore between vSphere clusters.  So if I’m running a large EUC environment using VSAN, I may need multiple clusters to meet the needs of my user base.  And unlike 3-tier storage, I can’t share VSAN datastores between clusters.  Under the current setup in Horizon 7.4, I would need to have a copy of my gold/master/parent image in each cluster.

Due to some changes made in Horizon 7.5, I can now share an Instant Clone gold/master/parent image across VSAN clusters without having to make a copy of it in each cluster first.  I don’t have too many specific details on how this will work, but it could significantly reduce the management burden of large, multi-cluster Horizon environments on VSAN.

Blast Extreme Enhancements

The addition of Blast Extreme Adaptive Transport, or BEAT as it’s commonly known, provided an enhanced session remoting experience when using Blast Extreme.  It also required users and administrators to configure which transport they wanted to use in the client, and this could lead to a less than optimal user experience for users who frequently moved between locations with good and bad connectivity.

Horizon 7.5 adds some automation and intelligence to BEAT with a feature called Blast Extreme Network Intelligence.  NI will evaluate network conditions on the client side and automatically choose the correct Blast Extreme transport to use.  Users will no longer have to make that choice or make changes in the client.  As a result, the Excellent, Typical, and Poor options are being removed from future versions of the Horizon client.

Another major enhancement coming to Blast Extreme is USB Redirection Port Consolidation.  Currently, USB redirection utilizes a side channel that requires an additional port to be opened in any external-facing firewalls.  Starting in Horizon 7.5, customers will have the option to utilize USB redirection over ports 443/8443 instead of the side channel.

Performance Tracker

The last item I want to cover in this post is Performance Tracker.  Performance Tracker is a tool that Pat Lee demonstrated at VMworld last year, and it presents session performance metrics to end users.  It supports both Blast Extreme and PCoIP, and it provides information such as session latency, frames per second, and the Blast Extreme transport type, and it can help with troubleshooting connectivity issues between the Horizon Agent and the Horizon Client.

Part 2

As you can see, there is a lot of new stuff in Horizon 7.5.  We’ve hit 1900 words in this post just talking about what’s new in Horizon.  We haven’t touched on client improvements, Horizon Cloud, App Volumes, UEM or Workspace One Intelligence yet.  So we’ll have to break those announcements into another post that will be coming in the next day or two.

Horizon 6 and Profile Management? It’s Not the Big Deal That Some Are Making It Out To be…

When VMware announced Horizon 6 last month, there was a lot of excitement because the Horizon Suite was finally beefing up their Remote Desktop Session Host component to support PCoIP and application publishing.  Shortly after that announcement, word leaked out that Persona Management would not be available for RDSH sessions and published applications.

There seems to be this big misconception that without Persona Management, there will be no way to manage user settings, and companies that wish to overcome this shortcoming will need to utilize a 3rd party product for profile management.  A lot of that misconception revolves around the idea that there should only be one user profile and set of applications settings that apply to the user regardless of what platform the user logs in on.

Disclosure: Horizon 6 is still in beta.  I am not a member of the beta testing team, and I have not used Horizon 6.

There are two arguments being made about the lack of unified profile management in Horizon 6.  The first argument is that without some sort of profile management, users won’t be able to save their application settings.  One article making this argument quotes a systems administrator who says, “If you rely on linked clones and want to use [RDSH] published apps, it won’t remember your settings.”

This is not correct.  When setting up an application publishing environment using RDSH, a separate user profile is created for each user on the server, or servers, where the applications are hosted.  That user profile is separate from the user profile that is used when logging into the desktop.  In order to ensure that those settings follow the user if they log into a different server, technologies like roaming profiles and folder redirection are used to store user settings in a central network location.

This isn’t an add-on feature – roaming profiles and folder redirection are included with Active Directory as options that can be configured using Group Policy.  Active Directory is a requirement for Horizon environments, so the ability to save and roam settings exists without having to invest in additional products.

The other argument revolves around the idea of a “single, unitary profile” that will be used in both RDSH sessions for application publishing and virtual desktops.  There are a couple of reasons why this should not be considered a holdup for deploying Horizon 6:

  1. Microsoft’s best practices for roaming profiles (2003 version, 2008 R2 version, RDS Blog) do not recommend using the same profile across multiple platforms, and Microsoft recommends using a different roaming profile for each RDSH farm or platform.
  2. Citrix’s best practices for User Profile Manager, the application that the article above references as providing a single profile across application publishing and virtual desktops, do not recommend using the same profile for multiple platforms or across different versions of Windows.

There are a couple of reasons for this.  The main reason is that there are settings in desktop profiles that don’t apply to servers and vice versa or across different generations of Windows.  There is also the possibility of corruption if a profile is used in multiple places at the same time, and one server can easily overwrite changes to the profile.

Although there may be some cases where application settings may need to roam between an RDSH session and a virtual desktop session, I haven’t encountered any cases where that would be important.  That doesn’t mean those cases don’t exist, but I don’t see a scenario where this would hold up adopting a platform like the article above suggests.

Horizon View 5.3 Part 15 – Horizon View, SSL, and You

Although they may be confusing and require a lot of extra work to set up, SSL certificates play a key role in any VMware environment.  The purpose of the certificates is to secure communications between the clients and the servers as well as between the servers themselves.

Certificates are needed on all of the major components of Horizon View and vCenter.  The certificates that are installed on vCenter Server, Composer, and the Connection Servers can come from an internal certificate authority.  If you are running Security Servers, or if you have a Bring-Your-Own-Device environment, you’ll need to look at purchasing certificates from a public certificate authority.

Setting up a certificate authority is beyond the scope of this article.  If you are interested in learning more about setting up your own public key infrastructure, I’ll refer you to Derek Seaman, the VMware community’s own SSL guy.  He recently released a three-part series about setting up a two-tier public key infrastructure on Windows Server 2012 R2.  I’ve also found this instruction set to be a good guide for setting up a public key infrastructure on Windows Server 2008 R2.

Improved Certificate Handling

If you’ve worked with previous versions of Horizon View, you know that managing SSL certificates was a pain.  In previous versions, the certificates had to be placed into a Java keystore file.  This changed with View 5.1, and the certificates are now stored in the Windows Certificate Store.  This has greatly improved the process for managing certificates.

Where to Get Certificates

Certificates can be minted on internal certificate authorities or public certificate authorities.  An internal certificate authority exists inside your environment and is something that you manage.  Certificates from these authorities won’t be trusted unless you deploy the root and intermediate certificates to the clients that are connecting.

The certificates used on a security server should be obtained from a commercial certificate vendor such as Thawte, GoDaddy, or Comodo.  One option that I like to use in my lab, and that I’ve used in the past when I didn’t have a budget for SSL certificates, is StartSSL.  They provide free basic SSL certificates.

Generating Certificate Requests for Horizon View Servers

VMware’s method for generating certificates for Horizon View is different than vCenter’s.  When setting up certificate requests for vCenter, you need to use OpenSSL to generate the requests.  Unlike vCenter, Horizon View uses the built-in Windows certificate tools to generate the certificate requests.  VMware has a good PDF document that walks through generating certificates for Horizon View.  An online version of this article also exists.

Before we can generate a certificate for each server, we need to set up two things.  The first is a certificate template that can be used to issue certificates in our public key infrastructure.  I’m not an expert on public key infrastructure, so I’ll defer to the expert again.

The other thing that we need to create is a CSR configuration file.  This file will be used with certreq.exe to create the certificate signing request.  The template for this file, which is included in the VMware guide, is below.

;----------------- request.inf -----------------
[Version]

Signature="$Windows NT$"

[NewRequest]

Subject = "CN=View_Server_FQDN, OU=Organizational_Unit_Name, O=Organization_Name, L=City_Name, S=State_Name, C=Country_Name" ; replace attributes in this line using example below
KeySpec = 1
KeyLength = 2048
; Can be 2048, 4096, 8192, or 16384.
; Larger key sizes are more secure, but have
; a greater impact on performance.
Exportable = TRUE
FriendlyName = "vdm"
MachineKeySet = TRUE
SMIME = False
PrivateKeyArchive = FALSE
UserProtected = FALSE
UseExistingKeySet = FALSE
ProviderName = "Microsoft RSA SChannel Cryptographic Provider"
ProviderType = 12
RequestType = PKCS10
KeyUsage = 0xa0

[EnhancedKeyUsageExtension]

OID=1.3.6.1.5.5.7.3.1 ; this is for Server Authentication

[RequestAttributes]

; SAN="dns=FQDN_you_require&dns=other_FQDN_you_require"
;-----------------------------------------------

The subject line contains some fields that we need to fill in with information from our environment.  These fields are:

  • CN=server_fqdn: This part of the Subject string should contain the fully qualified domain name that users will use when connecting to the server.  An example for an internal, non-Internet facing server is internalbroker.internaldomain.local.  An Internet-facing server should use the web address, such as view.externaldomain.com.
  • OU=organizational unit: I normally fill in the responsible department, so it would be IT.
  • O=Organization: The name of your company
  • L=City: The city your office is based in
  • S=State: The name of the State that you’re located in.  You should spell out the name of the state since some CAs will not accept abbreviated names.
  • C=Country: The two-letter ISO country code.  The United States, for example, is US.
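
For example, a completed subject line for a hypothetical Internet-facing server would look like this:

Subject = "CN=view.externaldomain.com, OU=IT, O=Example Corp, L=Chicago, S=Illinois, C=US"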

A CSR configuration file will need to be created for each server with a Horizon View Component installed.  vCenter will also need certificates, but there are different procedures for creating and installing vCenter certificates depending on whether you are using the Windows application or the vCenter appliance.

Creating the certificate signing request requires the certreq.exe command line tool.  These steps will need to be performed on each connection server and security server.  The steps for generating the request are:

  1. Open a command prompt as an Administrator on your View server. Note: This command should be run from the server where you want the certificate to reside.  It can be done from another machine, but it makes it more complicated.
  2. Navigate to the folder where you stored the request.inf file.
  3. Run the following command: certreq.exe -new request.inf server_name_certreq.txt

After the certificate request has been created, it needs to be submitted to the certificate authority in order to have the SSL certificate generated.  The actual process for submitting the CSR is beyond the scope of this article since this process can vary in each environment and with each commercial vendor.

Importing the Certificate

Once the certificate has been generated, it needs to be imported into the server.  The import command is:

certreq.exe -accept certname.cer

This will import the generated certificate into the Windows Certificate Store.

Using the Certificates

Now that we have these freshly minted certificates, we need to put them to work in the View environment.  There are a couple of ways to go about doing this.

1. If you haven’t installed the Horizon View components on the server yet, you will get the option to select your certificate during the installation process.  You don’t need to do anything special to set the certificate up.

2. If you have installed the Horizon View components, and you are using a self-signed certificate or a certificate signed from a different CA, you will need to change the friendly name of the old certificate and restart the Connection Server or Security Server services.

Horizon View requires the certificate to have a friendly name value of vdm.  The template that is posted above sets the friendly name of the new certificate to vdm automatically, but this will conflict with any existing certificates. 

Friendly Name

The steps for changing the friendly name are:

  1. Go to Start –> Run and enter MMC.exe
  2. Go to File –> Add/Remove Snap-in
  3. Select Certificates and click Add
  4. Select Computer Account and click Finish
  5. Click OK
  6. Right click on the old certificate and select Properties
  7. On the General tab, delete the value in the Friendly Name field, or change it to vdm_old
  8. Click OK
  9. Restart the View service on the server


Certificates and View Composer

Unfortunately, Horizon View Composer uses a different method of managing certificates.  Although the certificates are still stored in the Windows Certificate store, the process of replacing Composer certificates is a little more involved than just changing the friendly name.

The process for replacing or updating the Composer certificate requires a command prompt and the SVIConfig tool.  SVIConfig is the Composer command line tool.  If you’ve ever had to remove a missing or damaged desktop from your View environment, you’ve used this tool.

The process for replacing the Composer certificate is:

  1. Open a command prompt as Administrator on the Composer server
  2. Change directory to your VMware View Composer installation directory
    Note: The default installation directory is C:\Program Files (x86)\VMware\VMware View Composer
  3. Run the following command: sviconfig.exe -operation=replacecertificate -delete=false
  4. Select the correct certificate from the list of certificates in the Windows Certificate Store
  5. Restart the Composer Service

A successful certificate swap

At this point, all of your certificates should be installed.  If you open up the View Administrator web page, the dashboard should have all green lights.

If you are using a certificate signed on an internal CA for servers that your end users connect to, you will need to deploy your root and intermediate certificates to each computer.  This can be done through Group Policy for Windows computers.  If you’re using Teradici PCoIP Zero Clients, you can deploy the certificates as part of a policy with the management VM.  If you don’t do this, users will not be able to connect without disabling certificate checking in the client.

Windows 8.1 Win-X Menu and Roaming Profiles

One of the features of Horizon View 5.3 is support for Windows 8.1, and I used this as my desktop OS of choice as I worked through installing View in my home lab.  After all, why not test the latest version of a desktop platform with the latest supported version of Microsoft Windows.

Like all new OSes, it has its share of issues.  Although I’m not sure that anyone is looking to do a widespread deployment of 8.1 just yet, there is an issue that could possibly hold up any deployment if roaming profiles are needed.

When Microsoft replaced the Start Menu with Metro in Windows 8, they kept something similar to the old Start menu that could be accessed by pressing Win+X.  This menu, shown below, retained a layout that was similar to the start menu and could be used to access various systems management utilities that were hidden by Metro.

image

The folder for the WinX menu is stored in the local appdata section of the Windows 8.1 user profile, so it isn’t included as part of the roaming profile.  Normally this wouldn’t be a big deal, but there seems to be a bug that doesn’t recreate this folder on login for users with roaming profiles.

While this doesn’t “break” Windows, it does make it inconvenient for power users. 

This won’t be an issue for persistent VDI environments where the user always gets the same desktop or where roaming profiles aren’t used.  However, it could pose some issues to non-persistent VDI environments.

Unfortunately, there aren’t many alternatives to roaming profiles on Windows 8.1.  Unlike the old Start Menu, there is no option to use folder redirection on the WinX folder.  VMware’s Persona Management doesn’t support this version of Windows yet, and even though the installer allows it as an option, it doesn’t actually install.  If Persona Management was supported, this issue could be resolved by turning on the feature to roam the local appdata folder.

The current version of Liquidware Labs’ ProfileUnity product does provide beta support for Windows 8.1, but I haven’t tried it in my lab yet to see how ProfileUnity works with 8.1.

The last option, and the one that many end users would probably appreciate, is to move away from the Metro-style interface entirely with a program like Start8 or Classic Shell.  These programs replace the Metro Start Menu with the classic Start Menu from earlier versions of Windows. 

I’ve used Classic Shell in my lab.  It’s an open source program that is available for free, and it includes ADMX files for managing the application via group policy.  It also works with roaming profiles, and it might be a good way to move forward with Windows 8/8.1 without having to retrain users.

Patch Tuesday VDI Pains? We’ve got a script for that…Part 2

In my last post, I discussed the thought process that went into the Patch Tuesday script that we use at $work for updating our linked-clone parent VMs.  In this post, I will dive deeper into the code itself, including the tools that we need to execute this script.

Prerequisites

There are only two prerequisites for actually running this script: one is needed on the machine that will execute the script, and one will need to be installed on the linked-clone parent VMs.  The server prerequisite is PowerCLI.

As I mentioned in the last post, the Windows Update PowerShell Module will need to be deployed on each of the linked-clone parents that you wish to update with this script.  I use Group Policy Preferences to deploy these files to the same folder on each machine.  This has two benefits – the files are deployed to the same spot automatically via policy, and any updates I make to the centrally stored version will propagate via policy.  Because this module will be invoked using Invoke-VMScript, I’m not sure how it will work if it is called directly from a network share.

The Windows Update PowerShell Module has one main limitation that will need to be worked around. That limitation is that the Windows Update API cannot be called remotely, so PowerShell remoting cannot be used to launch a script using this module. It’s fairly simple to work around this, though, by using the Invoke-VMScript cmdlet.

This script will need to be executed in a 32-bit PowerShell session in order to use Invoke-VMScript to install Windows Updates.  As of PowerCLI 5.1, VIX was a 32-bit only component, and the Invoke-VMScript and Wait-Tools cmdlets will not work if used in a 64-bit PowerShell window.
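
On a 64-bit version of Windows, the 32-bit PowerShell executable lives under SysWOW64, so the script (the name here is a placeholder) can be launched in a 32-bit session like this:

C:\Windows\SysWOW64\WindowsPowerShell\v1.0\powershell.exe -File .\Update-ParentVMs.ps1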

Parameters

There are a number of parameters that will control how this script is executed. The parameters and their descriptions are below.

vCServer – vCenter Server
Folder – vCenter Folder where templates are located
ParentVM – Name of a specific Linked-Clone ParentVM that you want to update
SnapshotType – Type of updates that are being installed. This is used for building the snapshot name string. Defaults to Windows Updates
Administrator – Account with local administrator privileges on the ParentVM. If using a domain account, use the domain\username format for account names. Mandatory parameter
Password – Password of the local administrator account. Mandatory parameter

The Administrator and Password parameters are mandatory, and the script will not run without these parameters being defined. The account that is used in this section should have local administrator rights on the target linked-clone parent VMs as administrator permissions will be required when executing the Windows Updates section.

There are two parameters for determining which VMs are updated. You can choose to update all of the Linked-Clone parents that exist within a vCenter folder with the -Folder parameter, or you can use a comma-separated list of VMs with the -ParentVM parameter.

Finally, if your environment is small, I would recommend setting defaults for the vCServer, Folder, and SnapShotType parameters. Having a few default values coded in will make executing this script easier.
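
As an example, a run against all of the parent VMs in a vCenter folder might look like this.  The script name is a placeholder for whatever you’ve named your copy:

.\Update-ParentVMs.ps1 -vCServer vcenter.example.com -Folder "Parent VMs" -Administrator "EXAMPLE\svc-patching" -Password "P@ssw0rd"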

Executing the Script

The first thing that the script does is build the string that will be used for the snapshot name. The basic naming scheme for snapshots in the $work environment is “Month SnapshotType Parameter Date installed.” So if we’re doing Windows Updates for the month of September, the snapshot name string would look like “September Windows Update 9-xx-2013.”
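
A sketch of how that string could be assembled in PowerShell from the SnapshotType parameter:

$SnapshotName = "$((Get-Date).ToString('MMMM')) $SnapshotType $((Get-Date).ToString('M-d-yyyy'))"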

After the snapshot name is created, the script will take the following actions:

  1. Power-on the VM
  2. Execute the Windows Update script
  3. Reboot the VM
  4. Wait for the VM to come up, then shut it down
  5. Take Snapshot

Step 1 – Power On the VM

Powering on the VM is probably the easiest step in this whole script.  It’s a simple Start-VM command to get it powered up, and then the output from this command is piped into Wait-Tools.  Wait-Tools is used here to wait for the VMware Tools to be registered as running before continuing on to the next step, which will rely heavily on the tools.
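
As a one-line sketch, that pipeline looks like this (the timeout value is arbitrary):

Start-VM -VM $vm | Wait-Tools -TimeoutSeconds 300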

Step 2 – Execute the Windows Update Script

Once the VMware Tools are ready, we can continue onto the next step – installing Windows Updates. To do this, the script will use the Invoke-VMScript cmdlet to execute a command from the Windows Update PowerShell Module from within the VM. In order to execute this successfully, the Invoke-VMScript cmdlet will need to use credentials that have local administrator rights on the linked-clone parent VM.

The command for this section will look something like this:

Invoke-VMScript -VM $vm -ScriptText "Path\to\script\get-wuinstall.ps1 -acceptall" -GuestUser $Administrator -GuestPassword $Password

This section of the script will take longer to run than any other section.

Step 3 – Reboot the VM

Once the updates have finished installing, we need to reboot the VM so they take effect.  There is an AutoReboot switch as part of the Get-WUInstall script that is run to install the updates, but it doesn’t seem to work correctly when using the Invoke-VMScript cmdlet.  This job also needs to watch the status of the VM and VMware Tools, as we’ll need to know the status of both in order to know when to shut the VM down and snapshot it.

Rebooting the VM is fairly simple, and it just uses the Restart-VMGuest cmdlet.

Checking the status of VMware Tools during the shutdown phase of the reboot, though, is difficult.  The normal method of checking status is to use the Wait-Tools cmdlet, but if you pipe the output from Restart-VMGuest to Wait-Tools, it will immediately move on to the next section because it shows that the tools are up and online.  So this script needs a different method for checking the status of VMware Tools, and for that, a custom function will need to be written.

The custom function will use Get-VM and Get-View to return the status of VMware Tools.  The results will be put into a Do-Until loop to watch for a change in the status before continuing on to the next section.  We’ll run this Do-Until loop twice – once to check that the tools are shut down, and then once to see that the tools come back up.

The code for checking the status of the VMware Tools is:

$toolstatus = (Get-VM $vm | % { Get-View $_.ID } | Select-Object @{ Name="ToolsStatus"; Expression={$_.Guest.ToolsStatus}}).ToolsStatus
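
Wrapped in the two Do-Until loops described above, the wait logic looks something like this sketch.  The toolsNotRunning and toolsOk strings are standard values of the vSphere API’s ToolsStatus property:

# Wait for the tools to go down as the VM reboots
do {
    Start-Sleep -Seconds 5
    $toolstatus = (Get-VM $vm | % { Get-View $_.ID }).Guest.ToolsStatus
} until ($toolstatus -eq "toolsNotRunning")
# Then wait for the tools to come back up after the reboot
do {
    Start-Sleep -Seconds 5
    $toolstatus = (Get-VM $vm | % { Get-View $_.ID }).Guest.ToolsStatus
} until ($toolstatus -eq "toolsOk")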

Step 4 – Shut down the VM

Once the reboot completes and the VMware Tools show that they are ready, it is time to shut down the linked-clone parent VM.  The Shutdown-VMGuest cmdlet is used for this.  After the shutdown is initiated, the script checks to see that the VM is fully powered off before moving on to the final step.

Step 5 – Snapshot the VM

The final step of this process is to take a snapshot of the updated linked-clone parent.  This snapshot will be used for the linked clones in VMware View during the next recompose operation.  The snapshot name that the script put together as its first action will be used here.  The command for taking the snapshot is New-Snapshot -VM vmname -Name $snapshotname.

Wrapup

If you’re only updating one VM, then the script will disconnect from vCenter and end.  If you have multiple VMs that need updating, then it will loop through and start at step 1 for the next linked-clone parent, and it will continue until all of the desktops have been updated.

The code for this script and others is available on GitHub.

Patch Tuesday VDI Pains? We’ve got a script for that…Part 1

Having a non-persistent VDI environment doesn’t mean that Patch Tuesday is no longer a pain. In fact, it may mean more work for the poor VDI administrator who needs to make sure that all the parent VMs are updated for a recompose. Some of the techniques for addressing patching, such as wake-on-LAN and scheduled installs, don’t necessarily apply to VDI parent VMs, and there is the additional step of snapshotting to prepare those parent VMs for deployment. And if an extended patch management solution like SolarWinds Patch Manager (formerly EminentWare) or Shavlik SCUPdates is not available, you’ll still need to manage updates for security-risk products like Adobe Flash and Java.

So how can you streamline this process and make it easier for the VDI administrator? There are a couple of ways to address the various pain points of patching VDI desktops for both Windows Updates and for other applications that might be installed in your environment.

This will be part 1 of 2 posts on how to automate updates for Linked-Clone parent VMs. This post will cover the process of updating, and the second post will dive into the code.

Patch Tuesday Process

Before we can start to address the various pain points, let’s look at how Patch Tuesday works for a non-persistent linked-clone VDI environment and how it differs from a normal desktop environment. In a normal desktop environment, you can schedule Windows Updates to install after hours, use a wake-on-LAN tool to make sure that every desktop is powered on to receive those updates, and have the machines reboot automatically through Group Policy.

That procedure doesn’t apply for Linked-Clone desktops, and some additional orchestration is required to get the Linked-Clone parent VMs patched and ready for recompose. When patching Linked-Clone desktop images, you need to do the following (a rough PowerCLI sketch follows the list):

  1. Power on the Linked-Clone Parent VMs
  2. Log in to install Windows and other updates
  3. Reboot the machine
  4. Repeat steps two and three as necessary
  5. Once all updates are installed, shut down the VM
  6. Take a snapshot
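
Strung together in PowerCLI, the process looks roughly like the outline below. This is only a sketch – $parentVMs, $updateCommand, and $guestCred are placeholders – and the actual script will be covered in part 2:

ForEach ($parent in $parentVMs) {
    Start-VM -VM $parent | Out-Null                # 1. Power on the Parent VM
    Wait-Tools -VM $parent                         #    and wait for the guest to be ready
    # 2. Install updates inside the guest (repeat with step 3 as necessary)
    Invoke-VMScript -VM $parent -ScriptText $updateCommand -GuestCredential $guestCred
    Restart-VMGuest -VM $parent -Confirm:$false    # 3. Reboot the machine
    # ...wait for VMware Tools to come back up...
    Shutdown-VMGuest -VM $parent -Confirm:$false   # 5. Shut down the VM
    # ...wait for the power off to complete...
    New-Snapshot -VM $parent -Name "Updates $(Get-Date -Format 'MM-dd-yyyy')"   # 6. Take a snapshot
}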

Adobe seems to have selected Microsoft’s Patch Tuesday (the 2nd Tuesday of the month) as their patch release date. Oracle, however, does not release on the same cycle, so if Java is needed in your environment, its updates will require a second round of patching and recomposing.

If you have more than a few Linked-Clone parent VMs in your environment, there is a significant time commitment involved in keeping them up-to-date. Even if you run your updates concurrently and do other things while the updates are installing, there are still parts that have to be done by hand, such as power operations, snapshotting, and installing 3rd-party updates that aren’t handled through WSUS.

One Patch Source to Rule Them All

Rather than dealing with downloading and running the update on each Linked-Clone parent VM, or relying on the auto-update utility that comes with some of these smaller applications and plugins, we’ve standardized on one primary delivery mechanism at $work – Microsoft WSUS. WSUS handles the majority of the patches we deploy during the month, and it has a suite of built-in reports to track updates and the computers they’re installed, or fail to install, on. This makes it the perfect centerpiece for patch management in the environment at $work.

But WSUS doesn’t handle Adobe, Java, or other updates natively. WSUS 3.0 introduced an API that a number of patch management products use to add 3rd-party updates to the system. One of these products is an open-source solution called Local Updates Publisher.

Local Updates Publisher is a solution that allows an administrator to take a software update, be it an EXE, MSI, or MSP file, repackage it, and deploy it through WSUS. Additional rules can be built around this package to determine which machines are eligible for the update, and those updates can be approved, rejected, and/or targeted to various deployment groups right from within the application. It will also accept SCUP catalogs as an update source.

There is a bit of manual work involved with this method, as some of the applications that are frequently updated do not come with SCUP catalogs – primarily Java. Adobe provides SCUP catalogs for Reader and Flash. There is a 3rd-party SCUP catalog from PatchMyPC.net that does contain Java and other open-source applications (note – I have not used this product), and there are other options such as Solarwinds Patch Manager and Shavlik.

Having one centralized patch source will make it easier to automate patch installation.

Automating Updates

Once there is a single source providing both Microsoft and 3rd-party updates, work on automating the installation of updates can begin. The vSphere side of things will be automated with PowerCLI, so the Windows Update solution should also use PowerShell. This leaves two options – PoshPAIG, a hybrid solution that uses PowerShell to generate VBScript to run the updates, and the Windows Update PowerShell Module. PoshPAIG is a good tool, but in my experience it is more of a GUI product for working with multiple machines, while the Windows Update PowerShell Module is geared more toward scripted interactions.

The Windows Update PowerShell Module is a free module developed by Microsoft MVP Michal Gajda. It is a collection of local commands that will connect to the default update source – either WSUS or Microsoft Update – download all applicable updates for the system, and automatically install them. The module will need to be stored locally on the Linked-Clone Parent VMs. I use Group Policy Preferences to load the module onto each machine, as it ensures that the files will be loaded into the same place and that updates to the module will propagate automatically.

One of the limits of the Windows Update API is that it cannot be called remotely, so the commands from this module will not work with PowerShell Remoting. There is another way to call this script remotely, though: the Invoke-VMScript cmdlet can be used to launch it through VMware Tools. In order to use Invoke-VMScript, the VIX API will need to be installed, and the script will need to run in a 32-bit instance of PowerShell.
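
A minimal sketch of that call, assuming the module is already on the guest under its usual PSWindowsUpdate name and $guestCred holds a local administrator credential:

# Run the module's update command inside the guest through VMware Tools
$updateCommand = 'Import-Module PSWindowsUpdate; Get-WUInstall -AcceptAll -IgnoreReboot'
Invoke-VMScript -VM $vm -ScriptText $updateCommand -GuestCredential $guestCred -ScriptType PowerShell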

On the Next Episode…

I didn’t originally plan on breaking this into two parts, but this post is starting to run a little long. Rather than trying to cram everything into one post, I will cover the script itself, along with some of the PowerShell/PowerCLI gotchas that came up when testing it out, in the next post.

Updated Script – Start-Recompose.ps1

I will be giving my first VMUG presentation on Thursday, September 26th, at the Wisconsin VMUG meeting in Appleton, WI. The topic of my presentation will be three scripts that we use in our VMware View environment to automate routine and time-consuming tasks.

One of the scripts that I will be including in my presentation is the Start-Recompose script that I posted a few weeks ago.  I’ve made some updates to address a few things that I’ve always wanted to improve.  I’ll be posting about the other two scripts this week.

These improvements are:

  • Getting pool information directly from the View LDAP datastore instead of using the Get-Pool cmdlet
  • Checking for space on the Replica volume before scheduling the Recompose operation (sketched after this list)
  • Adding email and event logging alerts
  • The ability to recompose just one pool if multiple pools share the same base image.
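
As a rough illustration of the space check, something along these lines would work on recent PowerCLI builds, where the datastore object exposes a FreeSpaceGB property. $ReplicaDatastore, $RequiredGB, $AlertAddress, $FromAddress, and $SmtpServer are hypothetical placeholders; the real script pulls its pool details from the View LDAP datastore:

# Bail out of the recompose if the replica volume is low on space
$datastore = Get-Datastore -Name $ReplicaDatastore
If ($datastore.FreeSpaceGB -lt $RequiredGB) {
    # Assumes the "VMware View" event log source is already registered
    Write-EventLog -LogName Application -Source "VMware View" -EntryType Warning -EventId 9001 -Message "Not enough free space on $ReplicaDatastore to recompose."
    Send-MailMessage -To $AlertAddress -From $FromAddress -SmtpServer $SmtpServer -Subject "Recompose skipped" -Body "Not enough free space on $ReplicaDatastore."
    Return
}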

The updated script will still need to be run from the View Connection Server as it requires the View PowerCLI cmdlets.  The vSphere PowerCLI cmdlets and the Quest AD cmdlets will also need to be available.  A future update will probably remove the need for the Quest cmdlets, but I didn’t feel like reinventing the wheel at the time.

The script can be downloaded from GitHub here.

VMware View Pool Recompose PowerCLI Script

Edit – I’ve updated this script recently.  The updated version includes some additional features, such as checking to make sure there is enough space on the replica volume for a successful clone.  You can read more about it here: http://www.seanmassey.net/2013/09/updated-script-start-recomposeps1.html

One of the many hats that I wear at $work is the administration of our virtual desktop environment, which is built on VMware View.  Although the specific details of our virtual desktop environment may be covered in another post, I will provide a few details here for background.  Our View environment has about 240 users and 200 desktops, although we only have about 150 people logged in at a given time.  It is almost 100% non-persistent; the seven desktops that are set up as persistent are only that way due to an application licensing issue, and they refresh on logout like our non-persistent desktops.
Two of the primary tasks involved with that are managing snapshots and scheduling pool recompose operations as part of our patching cycle.  I wish I could say that it was a set monthly cycle, but a certain required plugin…*cough* Java *cough*…and one application that we use for web conferencing seem to break and require an update every time there is a slight update to Adobe Air.  There is also the occasional priority request from a department that falls outside of the normal update cycle, such as an application that needs to be added on short notice.
Our 200 desktops are grouped into sixteen desktop pools, and there are seven Parent VMs that are used as the base images for these sixteen pools.  That seems like a lot given the total number of desktops that we have, but there are business reasons for all of these, including restrictions on remote access, department applications that don’t play nicely with ThinApp, and restrictions on the number of people from certain departments that can be logged in at one time.
Suffice it to say that with sixteen pools to schedule recompose actions for as part of the monthly patch cycle, it can get rather tedious and time-consuming to do it through the VMware View Administrator.  That is where PowerShell comes in.  View ships with a set of PowerCLI cmdlets, and these can be run from any connection broker in your environment.  You can execute the script remotely, but the script file will need to be placed on your View Connection Broker.
I currently schedule this script to run using the Community Edition of the JAMS Job Scheduler, but I will be looking at using vCenter Orchestrator in the future to tie in automation of taking and removing snapshots.
The original inspiration for, and bits of, this script come from Greg Carriger.  You can view his work on his blog.  My version does not take or remove any snapshots, and it will work with multiple pools that are based on the same Parent VM.  The full script is available to download here.
Prerequisites:

  • PowerCLI 5.1 or greater installed on the Connection Broker
  • View PowerCLI snapin
  • PowerShell 2.0 or greater
View requires the full snapshot path in order to update the pool and do a recompose, so one of the first things that needs to be done is to build the snapshot path.  This can be a problem if you’re not very good at cleaning up old snapshots (like I am…although I have a script for that now too).  That issue can be solved with the code below.
Function Build-SnapshotPath
{
    Param($ParentVM)

    ## Create the snapshot path by walking the snapshot chain in order
    $Snapshots = Get-Snapshot -VM $ParentVM
    $SnapshotPath = ""

    ForEach ($Snapshot in $Snapshots)
    {
        $SnapshotName = $Snapshot.name
        $SnapshotPath = $SnapshotPath + "/" + $SnapshotName
    }

    Return $SnapshotPath
}
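
Calling the function is straightforward; for example, assuming $ParentVM holds the parent VM’s name:

$SnapshotPath = Build-SnapshotPath -ParentVM $ParentVM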

Once you have the snapshot path constructed, you need to identify the pools that are based on that Parent VM.

$Pools = Get-Pool | Where {$_.ParentVMPath -like "*$ParentVM*"}
A simple ForEach loop can be used to iterate through and update your list of pools once you know which pools you need to update.  This section of code will update the default snapshot used for desktops in the pool, schedule the recompose operation, and write out to the event log that the operation was scheduled.
Stop on Error is set to false because this script is intended to be run overnight, and View can, and will, stop a recompose operation over the slightest error.  This can leave desktops stuck in a halted state and inaccessible when staff come in to work the following morning.
ForEach ($Pool in $Pools)
{
    $PoolName = $Pool.Pool_ID
    $ParentVMPath = $Pool.ParentVMPath

    # Update the base image for the pool
    Update-AutomaticLinkedClonePool -pool_id $PoolName -parentVMPath $ParentVMPath -parentSnapshotPath $SnapshotPath

    ## Recompose
    ## Stop on Error set to false. This will allow the pool to continue recompose operations after hours if a single VM encounters an error rather than leaving the recompose tasks in a halted state.
    Get-DesktopVM -pool_id $PoolName | Send-LinkedCloneRecompose -schedule $Time -parentVMPath $ParentVMPath -parentSnapshotPath $SnapshotPath -forceLogoff:$true -stopOnError:$false

    Write-EventLog -LogName Application -Source "VMware View" -EntryType Information -EventID 9000 -Message "Pool $PoolName will start to recompose at $Time using $snapshotname."
}