VDI in the Time of Frequent Windows 10 Upgrades

The longevity of Windows 7, and Windows XP before that, has spoiled many customers and enterprises.  It provided IT organizations with a stable base to build their end-user computing infrastructures and applications on, and it provided users with a consistent experience.  The update model was fairly well known – a major service pack rolling up previous updates and feature enhancements would come out after about a year.

Whether this stability was good for organizations is debatable.  It certainly came with trade-offs, security of the endpoint being the primary one.

The introduction of Windows 10 has changed that model, and Microsoft continues to refine it.  Microsoft now releases two major “feature updates” for Windows 10 each year, and each update is only supported for about 18 months.  Microsoft calls this the “Windows as a Service” model, and it consists of two production-ready semi-annual release channels – a targeted deployment used with pilot users to test applications, and a broad deployment that replaces the “Current Branch for Business” option for enterprises.

Gone are the days when the end user’s desktop will run the same operating system for its entire life cycle.

(Note: While there is still a long-term servicing branch, Microsoft has repeatedly stated that this branch is suited for appliances and “machinery” that should not receive frequent feature updates, such as ATMs and medical equipment.)

To facilitate this new delivery model, Microsoft has refined its in-place operating system upgrade technology.  While in-place upgrades have been possible for years with previous versions of Windows, they were often flaky: settings wouldn’t port over properly, applications would refuse to run, and other strange errors would crop up.  That’s mostly a thing of the past when working with physical Windows 10 endpoints.

Virtual desktops, however, don’t seem to handle in-place upgrades well.  Virtual desktops often rely on additional agents to deliver the desktop remotely to users, and the in-place upgrade process can break these agents or cause other unexpected behavior.  The upgrades also have a tendency to reinstall Windows Modern Applications that have been removed and to reset settings (although Microsoft is supposed to be working on those items).

If Windows 10 feature release upgrades can break, or at least require significant rework of, existing VDI images, what is the best method for handling them in a VDI environment?

I see two main options.  The first is to manually uninstall the VDI agents from the parent VMs, take a snapshot, and then do an in-place upgrade.  After the upgrade is complete, the VDI agents would need to be reinstalled on the machine.  In my opinion, this option has a couple of drawbacks.

First, it requires a significant amount of time.  While there are a number of steps that could be automated, validating the image after the upgrade would still require an administrator.  Someone would have to log in to validate that all settings were carried over properly and that Modern Applications were not reinstalled.  This may become a significant time sink if I have multiple parent desktop images.

Second, this process wouldn’t scale well.  If I have a large number of parent images, or a large estate of persistent desktops, I have to build a workflow to remove agents, upgrade Windows, and reinstall agents after the upgrade.  Not only do I have to test this workflow thoroughly, but I still have to test my desktops to ensure that the upgrade didn’t break any applications.
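That said, if I did go down this path, much of the hypervisor-side work could still be scripted.  Below is a minimal sketch of that workflow for a single parent VM, assuming VMware PowerCLI and an MSI-based VDI agent; the server name, VM name, and product GUID are placeholders, and the in-guest steps are shown as comments rather than finished automation.

```powershell
# Outline of the uninstall / snapshot / upgrade / reinstall flow (placeholder names).
Connect-VIServer -Server "vcenter.example.com"

$parent = Get-VM -Name "Win10-Parent"

# 1. Inside the guest: uninstall the VDI agent (example for an MSI-based agent).
#    msiexec /x {AGENT-PRODUCT-GUID} /qn /norestart

# 2. From the hypervisor: capture a rollback point before the in-place upgrade.
New-Snapshot -VM $parent -Name "Pre-FeatureUpdate" -Description "Before the Windows 10 in-place upgrade"

# 3. Inside the guest: run the feature update silently from mounted media.
#    setup.exe /auto upgrade /quiet /noreboot

# 4. After validation: reinstall the VDI agent and snapshot again for the desktop pool.
```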

The second option, in my view, is to rebuild the desktop image when each new version of Windows 10 is released.  This ensures that I have a clean OS and application installation with every new release, and it requires less testing to validate because I don’t have to check what broke during the upgrade process.

One of the main drawbacks to this approach is that image building is a time-consuming process.  This is where automated deployments can be helpful.  Tools like the Microsoft Deployment Toolkit can help administrators build their parent images, including any agents and required applications, automatically as part of a task sequence.  With this type of toolkit, an administrator can automate the build process so that when a new version of Windows 10 is released, or a core desktop component like the Horizon or XenDesktop agent is updated, the image will include the latest software the next time a new build is started.

(Note: MDT is not the only tool in this category.  It is, however, the one I’m most familiar with.  It’s also the tool that Trond Haavarstein, @XenAppBlog, used for his Automation Framework Tool.)
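To give a rough idea of what that automation looks like, the MDT PowerShell snap-in (the same commands the Deployment Workbench generates) can import a newly released Windows 10 build into the deployment share so the next build picks it up.  The drive name, share path, and destination folder below are placeholders.

```powershell
# Minimal sketch: import a new Windows 10 build into an existing MDT deployment share.
Add-PSSnapin Microsoft.BDD.PSSnapIn
New-PSDrive -Name "DS001" -PSProvider MDTProvider -Root "D:\DeploymentShare"

# Assumes the new Windows 10 installation media is mounted on E:\
Import-MDTOperatingSystem -Path "DS001:\Operating Systems" -SourcePath "E:\" `
    -DestinationFolder "Windows 10 x64 1709" -Verbose
```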

Let’s take this one step further.  As an administrator, I would be doing a new Windows 10 build every six months to a year to ensure that my virtual desktop images remain on a supported version of Windows.  At some point, I’ll want to do more than just automate the Windows installation, so that the end result – a fully configured, deployment-ready virtual desktop – is available at the push of a button.  This can include things like bringing the image into Citrix Provisioning Services or shutting it down and taking a snapshot for VMware Horizon.
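For the Horizon case, those final steps boil down to a few PowerCLI calls: gracefully shut the parent VM down, wait for it to power off, and take the snapshot that the desktop pool will use.  The VM and snapshot names here are placeholders.

```powershell
# Shut the parent VM down and snapshot it so the new image can be pushed to a pool.
$parent = Get-VM -Name "Win10-Parent"

Shutdown-VMGuest -VM $parent -Confirm:$false

# Wait for the guest OS to finish shutting down before taking the snapshot.
while ((Get-VM -Name "Win10-Parent").PowerState -ne "PoweredOff") {
    Start-Sleep -Seconds 10
}

New-Snapshot -VM $parent -Name ("Build-" + (Get-Date -Format "yyyyMMdd")) `
    -Description "Automated post-build snapshot for Horizon"
```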

Virtualization has allowed for significant automation in the data center.  Tools like VMware PowerCLI and the Nutanix REST API make it easy for administrators to deploy and manage virtual machines with a few lines of PowerShell.  Using these same tools, I can create an empty virtual machine shell, take details from it such as the name and MAC address, and inject them into my MDT database along with a task sequence and role.  When I power the VM on, it will automatically boot to MDT and start the task sequence that has been assigned.
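A sketch of what that could look like is below.  The VM sizing, names, and database details are placeholders, and the MDT piece assumes the community MDTDB PowerShell module (Connect-MDTDatabase / New-MDTComputer); a direct SQL insert or the MDT web services would work just as well.

```powershell
# Create an empty VM shell, grab its MAC address, and register it in the MDT database.
Connect-VIServer -Server "vcenter.example.com"

$vm = New-VM -Name "Win10-Parent-New" -VMHost "esxi01.example.com" -Datastore "Datastore01" `
             -NumCpu 2 -MemoryGB 4 -DiskGB 60 -NetworkName "Desktop-Network" -GuestId "windows9_64Guest"

$mac = (Get-NetworkAdapter -VM $vm).MacAddress

# Assumes the community MDTDB module is available; the task sequence ID is a placeholder.
Import-Module MDTDB
Connect-MDTDatabase -sqlServer "sql01.example.com" -database "MDT"
New-MDTComputer -macAddress $mac -description $vm.Name -settings @{ OSInstall = 'YES'; TaskSequenceID = 'W10-BUILD' }

# Power on: the VM PXE-boots into MDT and runs the assigned task sequence.
Start-VM -VM $vm
```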

This is bringing “Infrastructure as Code” concepts to end-user computing, and the results should make it easier for administrators to test and deploy the latest versions of Windows 10 while reducing their management overhead.

I’m in the process of working through the last bits to automate the VM creation and integration with MDT, and I hope to have something to show in the next couple of weeks.


Getting Started with VMware UEM

One of the most important aspects of any end-user computing environment is user experience, and a big part of user experience is managing the user’s Windows and application preferences.  This is especially true in non-persistent environments and published application environments where the user may not log into the same machine each time.

So why is this important?  A big part of a user’s experience on any desktop is maintaining their customizations.  Users invest time into personalizing their environment by setting a desktop background, creating an Outlook signature, or configuring applications to connect to the correct datasets, and the ability to retain these settings makes users more productive because they don’t have to recreate them every time they log in or open an application.

User settings portability is nothing new.  Microsoft Roaming Profiles have been around for a long time.  But Roaming Profiles also have limitations, such as casting a wide net by moving the entire profile (or the AppData\Roaming folder on newer versions of Windows) or being tied to specific versions of Windows.

VMware User Environment Manager, or UEM for short, is one of a few 3rd-party user environment management tools that can provide a lighter-weight solution than Roaming Profiles.  UEM can both manage the user’s personalization of the environment by capturing Windows and application settings and apply settings to the desktop or RDSH session based on the user’s context.  This can include things like setting up network drives and printers, applying Horizon Smart Policies to control various Horizon features, and acting as a Group Policy replacement for per-user settings.

UEM Components

VMware UEM has four main components:

  • UEM Management Console – The central console for managing the UEM configuration
  • UEM Agent – The local agent installed on the virtual desktop, RDSH server, or physical machine
  • Configuration File Share – Network File Share where UEM configuration data is stored
  • User Data File Share – Network File Share where user data is stored.  Depending on the environment and the options used, this can be multiple file shares.

The UEM Console is the central management tool for UEM.  The console does not require a database; anything configured in the console is saved as a text file on the configuration file share.  The agent reads these configuration files from the configuration share during logon and logoff.  It captures application and Windows settings when an application is closed or when the user logs off, and it stores them on the user data share as ZIP files.

The UEM Agent also includes a couple of optional tools: a Self-Service Tool, which allows users to restore application configurations from a backup, and an Application Migration Tool.  The Application Migration Tool allows UEM to convert settings from one version of an application to another when the vendor uses different registry keys and AppData folders for each version.  Microsoft Office is the primary use case for this feature, although other applications may require it as well.

UEM also includes a couple of additional tools to assist administrators with maintaining the environment.  The first of these is the Application Profiler tool.  This tool runs on a desktop or an RDSH server in place of the UEM Agent.  Administrators can use it to create UEM profiles for applications by running the application and tracking where it writes its settings.  It can also be used to create default settings that are applied to an application when a user launches it, which can reduce the time it takes to get users’ applications configured for the first time.

The other support tool is the Helpdesk Support Tool, which allows helpdesk agents or other IT support staff to restore a backup of a user’s settings archive.

Planning for a UEM Deployment

There are a few questions you need to ask when deploying UEM.

  1. How many configuration shares will I have, and where will they be placed? – In multi-site environments, I may need multiple configuration shares so the configs are placed near the desktop environments.
  2. How many user data shares will I need, and where will they be placed? – This is another factor in multi-site environments.  It is also a factor in how I design my overall user data file structure if I’m using other features like folder redirection.  Do I want to keep all my user data together to make it easier to manage and back up, or do I want to place it on multiple file shares?
  3. Will I be using file replication technology? What replication technology will be used? – A third consideration for multi-site environments.  How am I replicating my data between sites?
  4. What URL/Name will be used to access the shares? – Will some sort of global namespace, like a DFS Namespace, be used to provide a single name for accessing the shares?  Or will each server be accessed individually?  This can have some implications around configuring Group Policy and how users are referred to the nearest file server.
  5. Where will I run the management console?  Who will have access to it?
  6. Will I configure UEM to create backup copies of user settings?  How many backup copies will be created?

These are the main questions that come up from an infrastructure and architecture perspective, and they influence how the UEM file shares and Group Policy objects will be configured.

Since UEM does not require a database, and the agent only reads and writes files on the network shares at logon, logoff, and application launch or close rather than holding them open, planning for multi-site deployments is relatively straightforward.
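As a quick preview of the infrastructure side, the two file shares themselves are simple to stand up.  The sketch below uses placeholder paths, share names, and groups; the exact NTFS and share permissions VMware recommends are documented in the UEM installation guide and should be used in place of these illustrative ones.

```powershell
# Create the UEM configuration and user data shares on a Windows file server (placeholder names).
New-Item -ItemType Directory -Path "D:\UEMConfig", "D:\UEMProfiles"

# Configuration share: administrators manage it, users only need to read it.
New-SmbShare -Name "UEMConfig" -Path "D:\UEMConfig" `
    -FullAccess "DOMAIN\UEM Admins" -ReadAccess "DOMAIN\Domain Users"

# User data share: the agent writes each user's settings archives (ZIP files) here.
New-SmbShare -Name "UEMProfiles" -Path "D:\UEMProfiles" `
    -FullAccess "DOMAIN\UEM Admins" -ChangeAccess "DOMAIN\Domain Users"
```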

In the next post, I’ll talk about deploying the UEM supporting infrastructure.