Integrating Microsoft Azure MFA with VMware Unified Access Gateway 3.8

One of the common questions I see is around integrating VMware Horizon with Microsoft Azure MFA. Natively, Horizon only supports RSA and RADIUS-based multifactor authentication solutions. While it is possible to configure Azure MFA to utilize RADIUS, it requires Network Policy Server (NPS) and the Azure MFA NPS extension for the integration (another option existed – Azure MFA Server – but that is no longer available for new deployments as of July 2019).

Earlier this week, VMware released Horizon 7.11 with Unified Access Gateway 3.8. The new UAG contains a pretty cool new feature – the ability to utilize SAML-based multifactor authentication solutions.  SAML-based multifactor authentication allows Horizon to consume a number of modern cloud-based solutions.  This includes Microsoft’s Azure MFA solution.

And…you don’t have to use TrueSSO in order to implement this.

If you’re interested in learning about how to configure Unified Access Gateways to utilize Okta for MFA, as well as tips around creating web links for Horizon applications that can be launched from an MFA portal, you can read the operational tutorial that Andreano Lanusso wrote.  It is currently available on the VMware Techzone site.

Prerequisites

Before you can configure Horizon to utilize Azure MFA, there are a few prerequisites that will need to be in place.

First, you need to have licensing that allows your users to utilize the Azure MFA feature.  Microsoft bundles this into their Office 365 and Microsoft 365 licensing SKUs as well as their free version of Azure Active Directory.

Note: Not all versions of Azure MFA have the same features and capabilities. I have only tested with the full version of Azure MFA that comes with the Azure AD Premium P1 license.  I have not tested with the free tier or MFA for Office 365 feature-level options.

Second, you will need to make sure that you have Azure AD Connect installed and configured so that users are syncing from the on-premises Active Directory into Azure Active Directory.  You will also need to enable Azure MFA for users or groups of users and configure any MFA policies for your environment.
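
If you handle the per-user MFA state from PowerShell, the legacy MSOnline module can flip it on for a synced account.  This is a minimal sketch, assuming the MSOnline module is installed; the UPN shown is a placeholder, and many tenants will prefer to require MFA through Conditional Access policies instead.

```powershell
# Minimal sketch: enable per-user Azure MFA with the legacy MSOnline module.
# The UPN below is a placeholder for a synced user in your tenant.
Import-Module MSOnline
Connect-MsolService

# Build a StrongAuthenticationRequirement object that marks MFA as Enabled.
$mfa = New-Object -TypeName Microsoft.Online.Administration.StrongAuthenticationRequirement
$mfa.RelyingParty = "*"
$mfa.State        = "Enabled"

# Apply the requirement to the user account.
Set-MsolUser -UserPrincipalName "jdoe@example.com" -StrongAuthenticationRequirements @($mfa)
```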

If you want to learn more about configuring the cloud-based version of Azure MFA, you can view the Microsoft documentation here.

There are a few URLs that we will need when configuring single sign-on in Azure AD.  These URLs are:

  • Your Horizon portal URL – https://horizon.uag.url/portal
  • Your Horizon SAML SSO URL – https://horizon.uag.url/portal/samlsso

Case sensitivity matters here.  If you put caps in the SAML URL, you may receive errors when uploading your metadata file.

Configuring Horizon UAGs as a SAML Application in Azure AD

The first thing we need to do is create an application in Azure Active Directory.  This will allow the service to act as a SAML identity provider for Horizon.  The steps for doing this are:

  1. Sign into your Azure Portal.  If you just have Office 365, you do have Azure Active Directory, and you can reach it from the Office 365 Portal Administrator console.
  2. Go into the Azure Active Directory blade.
  3. Click on Enterprise Applications.
  4. Click New Application.
  5. Select Non-Gallery Application.
  6. Give the new application a name.
  7. Click Add.
  8. Before we can configure our URLs and download metadata, we need to assign users to the app.  Click 1. Assign Users and Groups
  9. Click Add User
  10. Click where it says Users and Groups – None Selected.
  11. Select the Users or Groups that will have access to Horizon. There is a search box at the top of the list to make finding groups easier in large environments.
    Note: I recommend creating a large group to nest your Horizon user groups in to simplify setup.
  12. Click Add.
  13. Click Overview.
  14. Click 2. Set up single sign on.
  15. In the section labeled Basic SAML Configuration, click the pencil in the upper right corner of the box. This will allow us to enter the URLs we use for our SAML configuration.
  16. Enter the following items.  Please note that the URL paths are case sensitive, and putting in PORTAL, Portal, or SAMLSSO will prevent this from being set up successfully:
    1. In the Identifier (Entity ID) field, enter your portal URL.  It should look like this:
      https://horizon.uag.url/portal
    2. In the Reply URL (Assertion Consumer Service URL) field, enter your UAG SAML SSO URL.  It should look like this:
      https://horizon.uag.url/portal/samlsso
    3. In the Sign on URL field, enter your UAG SAML SSO URL.  It should look like this:
      https://horizon.uag.url/portal/samlsso
  17. Click Save.
  18. Review your user attributes and claims, and adjust as necessary for your environment. Horizon 7 supports logging in with a user principal name, so you may not need to change anything.
  19. Click the download link for the Federation XML Metadata file.

We will use our metadata file in the next step to configure our identity provider on the UAG.

Once the file is downloaded, the Azure AD side is configured.
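
If you want to sanity-check the file before uploading it to the UAG, the identity provider’s entity ID is easy to pull out with PowerShell.  This is a quick sketch, and the file name and path are placeholders for wherever you saved the metadata:

```powershell
# Quick check of the downloaded Azure AD federation metadata (path is a placeholder).
[xml]$metadata = Get-Content -Path 'C:\Temp\AzureAD-FederationMetadata.xml'

# The entity ID should look like https://sts.windows.net/<tenant-id>/
$metadata.EntityDescriptor.entityID
```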

Configuring the UAG

Once we have completed the Azure AD configuration, we need to configure our UAGs to utilize SAML for multifactor authentication.

In order to do these steps, you will need to have an admin password set on the UAG appliance in order to access the Admin interface.  I recommend doing the initial configuration and testing on a non-production appliance.  Once testing is complete, you can either manually apply the settings to the production UAGs or download the configuration INI file and copy the SAML configuration into the production configuration files for deployment.
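
If you use the PowerShell deployment script that ships with the UAG download, the SAML settings can ride along in the same INI file you already maintain.  The snippet below is only a sketch of that redeployment step – the INI section and key names for the SAML identity provider vary between UAG versions, so copy them from an INI exported from your working test appliance rather than typing them by hand.

```powershell
# Illustrative only: redeploy a UAG from an INI file that already contains the
# exported SAML settings. uagdeploy.ps1 ships with the UAG download; the paths
# and INI file name below are placeholders.
Set-Location 'C:\UAG\uagdeploy'
.\uagdeploy.ps1 -iniFile '.\uag-prod01.ini'
```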

Note: You can configure SAML on the UAGs even if you aren’t using TrueSSO.  If you are using this feature, you may need to make some configuration changes on your connection servers.  I do not use TrueSSO in my lab, so I have not tested Azure MFA on the UAGs with TrueSSO.

The steps for configuring the UAG are:

  1. Log into the UAG administrative interface.
  2. Click Configure Manually.
  3. Go to the Identity Bridging Settings section.
  4. Click the gear next to Upload Identity Provider Metadata.
  5. Leave the Entity ID field blank.  This will be generated from the metadata file you upload.
  6. Click Select.
  7. Browse to the path where the Azure metadata file you downloaded in the last section is stored.  Select it and click Open.
  8. If desired, enable the Always Force SAML Auth option.
    Note: SAML-based MFA acts differently than RADIUS and RSA authentication. The default behavior has you authenticate with the provider, and the provider places an authentication cookie on the machine. Subsequent logins may redirect users from Horizon to the cloud MFA site, but they may not be forced to reauthenticate. Enabling the Always Force SAML Auth option makes SAML-based cloud MFA providers behave similarly to the existing RADIUS and RSA-based multifactor solutions by requiring reauthentication on every login. Please also be aware that things like Conditional Access Policies in Azure AD and Azure AD-joined Windows 10 devices may impact the behavior of this solution.
  9. Click Save.
  10. Go up to Edge Services Settings and expand that section.
  11. Click the gear icon next to Horizon Edge Settings.
  12. Click the More button to show all of the Horizon Edge configuration options.
  13. In the Auth Methods field, select one of the two options to enable SAML:
    1. If you are using TrueSSO, select SAML
    2. If you are not using TrueSSO, select SAML and Passthrough
  14. Select the identity provider that will be used.  For Azure MFA, this will be the one labeled https://sts.windows.net
  15. Click Save.

SAML authentication with Azure MFA is now configured on the UAG, and you can start testing.

User Authentication Flows when using SAML

Compared to RADIUS and RSA, user authentication behaves a little differently when using SAML-based MFA.  When a user connects to a SAML-integrated environment, they are not prompted for their RADIUS or RSA credentials right away.

After connecting to the Horizon environment, the user is redirected to the website for their authentication solution.  They will be prompted to authenticate with this solution with their primary and secondary authentication options.  Once this completes, the Horizon client will reopen, and the user will be prompted for their Active Directory credentials.

You can configure the UAG to use the same username for Horizon as the one that is used with Azure AD, but the user will still be prompted for a password unless TrueSSO is configured.

Configuring SAML with Workspace ONE for Avi Networks

Earlier this year, VMware closed the acquisition of Avi Networks.  Avi Networks provides an application delivery controller solution designed for the multi-cloud world. While many ADC solutions aggregate the control plane and data plane on the same appliance, Avi Networks takes a different approach.  They utilize a management appliance for the control plane and multiple service engine appliances that handle load balancing, web application firewall, and other services for the data plane.

Integrating Avi Networks with Workspace ONE Access

The Avi Networks Controller appliance offers multiple options for integrating the management console into enterprise environments for authentication management.  One of the options that is available is SAML.  This enables integration with Workspace ONE Access and the ability to take advantage of the App Catalog, network access restrictions, and step-up authentication when administrators sign in.

Before I walk through the steps for integrating Avi Networks into Workspace ONE Access via SAML, I want to thank my colleague Nick Robbins.  He provided most of the information that enabled this integration to be set up in my lab environments and this blog post.  Thank you, Nick!

There are three options that can be selected for the URL when configuring SAML integration for Avi Networks.  The first option is to use the cluster VIP address, which is a shared IP address used by all management nodes when they are clustered.  The second option is to use a fully-qualified domain name.  With either of these options, the SSO URL and entity ID used in the SAML configuration are generated automatically by the system.  The third option is to use a user-provided entity ID.

For this walkthrough, we are going to use a fully-qualified domain name.

Prerequisites

Before we can begin configuring SAML integration, there are a few things we need to do.

First, we need to make sure a DNS record is in place for our Avi Controller.  This will be used for the fully-qualified domain name that is used when signing into our system.

Second, we need to get the Workspace One Access IDP metadata.  Avi does not import this automatically from a link to the idp.xml file, so we need to download the file.  The steps for retrieving the metadata are:

  1. Log into your Workspace One Access administrator console.
  2. Go to the App Catalog.
  3. Click Settings.
  4. Under SaaS Apps, click SAML Metadata.
  5. Right-click on Identity Provider Metadata and select Save Link As.  Save the file as idp.xml.
  6. Open the idp.xml file in your favorite text editor.  We will need to copy this into the Avi SAML configuration in the next step.
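
If you want to double-check what you are about to paste, the relevant values in idp.xml can be pulled out with a couple of lines of PowerShell.  A small sketch, with a placeholder path:

```powershell
# Read the Workspace ONE Access IdP metadata and show the entity ID and SSO endpoints.
# The path below is a placeholder for wherever you saved idp.xml.
[xml]$idp = Get-Content -Path 'C:\Temp\idp.xml'

$idp.EntityDescriptor.entityID
$idp.EntityDescriptor.IDPSSODescriptor.SingleSignOnService | Select-Object Binding, Location
```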

Avi Networks Configuration

The first thing that needs to be done is to configure an authentication profile to support SAML on the Avi Networks controller.  The steps for this are:

  1. Log into your Avi Networks controller as your administrative user.
  2. Go to Templates -> Security -> Auth Profile.
  3. Click Create to create a new profile.
  4. Provide a name for the profile in the Name field.
  5. Under Type, select SAML.
  6. Copy the Workspace ONE SAML IDP information into the IDP Metadata field.  This information is located in the idp.xml file that we saved in the previous section.
  7. Select Use DNS FQDN.
  8. Fill in your organizational details.
  9. Enter the fully-qualified domain name that will be used for the SAML configuration in the FQDN field.
  10. Click Save.

Next, we will need to collect some of our service provider metadata.  Avi Networks does not generate an XML file that can be imported into Workspace ONE Access, so we will need to enter our metadata manually.  There are three things we need to collect:

  • Entity ID
  • SSO URL
  • Signing Certificate

We will get the Entity ID and SSO URL from the Service Provider Settings screen.  Although this screen also has a field for the signing certificate, it doesn’t seem to populate anything in my lab, so we will have to get the certificate information from the SSL/TLS Certificate tab.

The steps for getting into the Service Provider Settings are:

  1. Go to Templates -> Security -> Auth Profile.
  2. Find the authentication profile that you created.
  3. Click on the Verify box on the far right side of the screen.  This is the square box with a question mark in it.
  4. Copy the Entity ID and SSO URL and paste them into your favorite text editor.  We will be using these in the next step.
  5. Close the Service Provider Settings screen by clicking the X in the upper right-hand corner.

Next, we need to get the signing certificate.  This is the System-Default-Portal-Cert.  The steps to get it are:

  1. Go to Templates -> Security -> SSL/TLS Certificates.
  2. Find the System-Default-Portal-Cert.
  3. Click the Export button.  This is the circle with the down arrow on the right side of the screen.
  4. The certificate information is in the lower box labeled certificate.
  5. Click the Copy to Clipboard button underneath the certificate box.
  6. Paste the certificate in your favorite text editor.  We will also need this in the next step.
  7. Click Done to close the Export Certificate screen.

Configuring the Avi Networks Application Catalog item in Workspace One Access

Now that we have our SAML profile created in the Avi Networks Controller, we need to create our Workspace ONE catalog entry.  The steps for this are:

  1. Log into your Workspace One Access admin interface.
  2. Go to the Catalog tab.
  3. Click New to create a new App Catalog entry.
  4. Provide a name for the new Avi Networks entry in the App Catalog.
  5. If you have an icon to use, click Select File and upload the icon for the application.
  6. Click Next.
  7. Enter the following details.  For the next couple of steps, you need to remain on the Configuration screen.  Don’t click next until you complete all of the configuration items:
    1. Authentication Type: SAML 2.0
    2. Configuration Type: Manual
    3. Single Sign-On URL: Use the single sign-on URL that you copied from the Avi Networks Service Provider Settings screen.
    4. Recipient URL: Same as the Single Sign-On URL
    5. Application ID: Use the Entity ID setting that you copied from the Avi Networks Service Provider Settings screen.
    6. Username Format: Unspecified
    7. Username Value: ${user.email}
    8. Relay State URL: FQDN or IP address of your appliance
  8. Expand Advanced Properties and enter the following values:
    1. Sign Response: Yes
    2. Sign Assertion: Yes
    3. Copy the value of the System-Default-Portal-Cert certificate that you copied in the previous section into the Request Signature field.
    4. Application Login URL: FQDN or IP address of your appliance.  This will enable SP-initiated login workflows.
  9. Click Next.
  10. Select an Access Policy to use for this application.  This will determine the rules used for authentication and access to the application.
  11. Click Next.
  12. Review the summary of the configuration.
  13. Click Save and Assign.
  14. Select the users or groups that will have access to this application and the deployment type.
  15. Click Save.

Enabling SAML Authentication in Avi Networks

In the last couple of steps, we created our SAML profile in Avi Networks and a SAML catalog item in Workspace One Access.  However, we haven’t actually turned SAML on yet or assigned any users to roles.  In this next section, we will enable SAML and grant superuser rights to SAML users.

Note: It is possible to configure more granular role-based access control by adding application parameters into the Workspace One Access catalog item and then mapping those parameters to different roles in Avi Networks.  This walkthrough will just provide a simple setup, and deeper RBAC integration may be covered in a future post.

  1. Log into your Avi Networks Management Console.
  2. Go to Administration -> Settings -> Authentication/Authorization.
  3. Click the pencil icon to edit the Authentication/Authorization settings.
  4. Under Authentication, select Remote.
  5. Under Auth Profile, select the SAML profile that you created earlier.
  6. Make sure the Allow Local User Login box is checked.  If this box is not checked, and there is a configuration issue, you will not be able to log back into the controller.
  7. Click Save.
  8. After saving the authentication settings, some new options will appear in the Authentication/Authorization screen to enable role mapping.
  9. Click New Mapping.
  10. For Attribute, select Any.
  11. Check the box labelled Super User.
  12. Click Save.

SAML authentication is now configured on the Avi Networks Management appliance.

Testing SAML Authentication and Troubleshooting

So now that we have our authentication profiles configured in both Avi Networks and Workspace One Access, we need to test it to ensure our admin users can sign in.  There are two tests that should be run.  The first is launching Avi Networks from the Workspace One Access app catalog, and the second is doing an SP-initiated login by going to your Avi Networks URL.

In both cases, you should see a Workspace One Access authentication screen for login before being redirected to the Avi Networks management console.

In my testing, however, I had some issues in one of my labs where I would get a JSON error when attempting SAML authentication.  If you see this error, and you validate that all of your settings match, then reboot the appliance.  This solved the issue in my lab.

If SAML authentication breaks, and you need to gain access to the appliance management interface with a local account, then you need to provide a different URL.  That URL is https://avi-management-fqdn-or-ip/#!/login?local=1.

Minimal Touch VDI Image Building With MDT, PowerCLI, and Chocolatey

Recently, Mark Brookfield posted a three-part series on the process he uses for building Windows 10 images in HobbitCloud (Part 1, Part 2, Part 3). Mark has put together a great series of posts that explain the tools and the processes that he is using in his lab, and it has inspired me to revisit this topic and talk about the process and tooling I currently use in my lab and the requirements and decisions that influenced this design.

Why Automate Image Building?

Hand-building images is a time-intensive process.  It is also potentially error-prone as it is easy to forget applications and specific configuration items, requiring additional work or even building new images depending on the steps that were missed.  Incremental changes that are made to templates may not make it into the image building documentation, requiring additional work to update the image after it has been deployed.

Automation helps solve these challenges and provide consistent results.  Once the process is nailed down, you can expect consistent results on every build.  If you need to make incremental changes to the image, you can add them into your build sequence so they aren’t forgotten when building the next image.

Tools in My Build Process

When I started researching my image build process back in 2017, I was looking to find a way to save time and provide consistent results on each build.  I wanted a tool that would allow me to build images with little interaction with the process on my part.  But it also needed to fit into my lab.  The main tools I looked at were Packer with the JetBrains vSphere plugin and Microsoft Deployment Toolkit (MDT).

While Packer is an incredible tool, I ended up selecting MDT as the main tool in my process.  My reason for selecting MDT has to do with NVIDIA GRID.  The vSphere Plugin for Packer does not currently support provisioning machines with a vGPU, so using this tool would have required manual post-deployment work.

One nice feature of MDT is that it can utilize a SQL Server database for storing details about registered machines such as the computer name, the OU where the computer object should be placed, and the task sequence to run when booting into MDT.  This allows a new machine to be provisioned in a zero-touch fashion, and the database can be populated from PowerShell.

Unlike Packer, which can create and configure the virtual machine in vCenter, MDT only handles the operating system deployment.  So I needed some way to create and configure the VM in vCenter with a vGPU profile.  The best method of doing this is using PowerCLI.  While there are no native cmdlets for managing vGPUs or other Shared PCI objects in PowerCLI, there are ways to utilize vSphere extension data to add a vGPU profile to a VM.
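
To give an idea of what that looks like, here is a minimal PowerCLI sketch that adds a Shared PCI device backed by an NVIDIA vGPU profile to a powered-off VM through the vSphere API objects.  The VM name and the grid_p4-2q profile string are placeholders, and this is the general extension data approach rather than a supported cmdlet, so test it against a throwaway VM first.

```powershell
# Minimal sketch: attach a vGPU profile to a powered-off VM via the vSphere API.
# The VM name and vGPU profile string below are placeholders.
$vm          = Get-VM -Name 'WIN10-IMG-01'
$vgpuProfile = 'grid_p4-2q'

# Build a Shared PCI device backed by the NVIDIA vGPU profile.
$backing      = New-Object VMware.Vim.VirtualPCIPassthroughVmiopBackingInfo
$backing.Vgpu = $vgpuProfile

$device         = New-Object VMware.Vim.VirtualPCIPassthrough
$device.Backing = $backing
$device.Key     = -100   # negative key = new device

$deviceChange           = New-Object VMware.Vim.VirtualDeviceConfigSpec
$deviceChange.Operation = 'add'
$deviceChange.Device    = $device

$spec              = New-Object VMware.Vim.VirtualMachineConfigSpec
$spec.DeviceChange = @($deviceChange)

# Reconfigure the VM to add the new Shared PCI device.
$vm.ExtensionData.ReconfigVM_Task($spec) | Out-Null
```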

While MDT can install applications as part of a task sequence, I wanted something a little more flexible.  The way I had structured my task sequences meant that whenever a new version of an application was added, the task sequences had to be updated to utilize the newer version.  The reason for this is that I wasn’t using Application Groups for certain applications that were going into the image, mainly the agents that were being installed, as I wanted to control the install order and manage reboots. (Yes…I may have been using this wrong…)

I wanted to reduce my operational overhead when applications were updated so I went looking for alternatives.  I ended up settling on using Chocolatey to install most of the applications in my images, with applications being hosted in a private repository running on the free edition of ProGet.
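
To give a flavor of what an application step looks like with this approach, the task sequence ends up running commands along these lines.  The feed URL and package name are placeholders for whatever you host in your ProGet repository:

```powershell
# Install an application from a private Chocolatey feed hosted on ProGet.
# The package name and source URL below are placeholders.
choco install horizon-agent --source 'https://proget.lab.local/nuget/internal-choco/' -y --no-progress
```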

My Build Process Workflow

My build workflow consists of 7 steps with one branch.  These steps are:

  1. Create a new VM in vCenter
  2. Configure VM options such as memory reservations and video RAM
  3. GPU Flag Only – Add a virtual GPU with the correct profile to the VM.
  4. Identify the task sequence that will be used.  There are different task sequences for GPU and non-GPU machines, and logic in the script builds the task sequence name.  Various parameters passed when running the script drive this logic.
  5. Create a new computer entry in the MDT database.  This includes the computer name, MAC address, task sequence name, role, and a few other variables.  This step is performed in PowerShell using the MDTDB PowerShell module.
  6. Power on the VM. This is done using PowerCLI. The VM will PXE boot to a Windows PE environment configured to point to my MDT server.
  7. MDT runs the assigned task sequence to deploy and configure Windows.
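
Steps 1 through 6 are driven by a single PowerShell script in my lab.  The sketch below shows the general shape of that script using PowerCLI and the MDTDB module; the server names, VM settings, task sequence ID, and database details are placeholders, and the MDTDB parameter names may differ slightly depending on the version of the module you have.

```powershell
# Sketch of the provisioning script: create the VM, register it in the MDT
# database, then power it on to PXE boot into MDT. All names are placeholders.
Import-Module VMware.PowerCLI
Import-Module MDTDB   # Michael Niehaus's MDT database module

Connect-VIServer -Server 'vcenter.lab.local' | Out-Null

# Steps 1-2: create an empty VM and set resource options (non-GPU branch shown).
$vm = New-VM -Name 'WIN10-IMG-01' -VMHost 'esxi01.lab.local' -Datastore 'vsanDatastore' `
             -NumCpu 2 -MemoryGB 8 -DiskGB 60 -GuestId 'windows9_64Guest' -NetworkName 'VDI-Network'
Get-VMResourceConfiguration -VM $vm | Set-VMResourceConfiguration -MemReservationGB 8 | Out-Null

# Steps 4-5: register the machine in the MDT database so the deployment is zero-touch.
$mac = ($vm | Get-NetworkAdapter | Select-Object -First 1).MacAddress
Connect-MDTDatabase -SqlServer 'mdt-sql.lab.local' -Database 'MDT'
New-MDTComputer -MacAddress $mac -Description $vm.Name -Settings @{
    OSDComputerName = $vm.Name
    TaskSequenceID  = 'W10-1909-NOGPU'   # the real script builds this name from parameters
}

# Step 6: power on; the VM PXE boots into Windows PE and runs the task sequence.
Start-VM -VM $vm | Out-Null
```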

Build Process

After the VM is powered on and boots to Windows PE, the rest of the process is hands off. All of the MDT prompts, such as the prompt for a computer name or the task sequence, are disabled, and the install process relies on the database for things like computer name and task sequence.

From this point forward, it takes about forty-five minutes to an hour to complete the task sequence. MDT installs Windows 10 and any drivers like the VMXNET3 driver, installs Windows Updates from an internal WSUS server, installs any agents or applications, such as VMware Tools, the Horizon Agent, and the UEM DEM agent, silently runs the OSOT tool, and stamps the registry with the image build date.
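
The registry stamp at the end of the task sequence is nothing fancy – it is just a run-command step that writes a value along the lines of the sketch below.  The key path and value name are placeholders for whatever convention you use:

```powershell
# Stamp the image with its build date (key path and value name are placeholders).
$key = 'HKLM:\SOFTWARE\LabImageInfo'
New-Item -Path $key -Force | Out-Null
New-ItemProperty -Path $key -Name 'ImageBuildDate' -Value (Get-Date -Format 'yyyy-MM-dd') -PropertyType String -Force | Out-Null
```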

Future Direction and Process Enhancements

While this process works well today, it is a bit cumbersome. Each new Windows 10 release requires a new task sequence for version control. It is also difficult to work tools like the OSDeploy PowerShell scripts by David Segura (used for slipstreaming updates into a Windows 10 WIM) into the process. While there are ways to automate MDT, I’d rather invest time in automating builds using Packer.

There are a couple of post-deployment steps that I would like to integrate into my build process as well. I would like to utilize Pester to validate the image build after it completes, and then if it passes, execute a shutdown and VM snapshot (or conversion to template) so it is ready to be consumed by Horizon. My plan is to utilize a tool like Jenkins to orchestrate the build pipeline and do something similar to the process that Mark Brookfield has laid out.

The ideal process that I am working towards will have multiple workflows to manage various aspects of the process. Some of these are:

1. A process for automatically creating updated Windows 10 ISOs with the latest Windows Updates using the OSDeploy PowerShell module.

2. A process for creating Chocolatey package updates and submitting them to my ProGet repository for applications managed by Chocolatey.

3. A process to build new images when Windows 10 or key applications (such as VMware Tools, the Horizon Agent, or NVIDIA Drivers) are updated. This process will ideally use Packer as the build tool to simplify management. The main dependency for this step is adding NVIDIA GRID support for the JetBrains Packer vSphere Plug-in.
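
For the second item, the packaging side is already scriptable with standard Chocolatey commands, so the workflow mostly needs to be wrapped in a pipeline.  A rough sketch, assuming a nuspec already exists and using placeholder package, feed, and API key values:

```powershell
# Build and publish an updated Chocolatey package to a private ProGet feed.
# The package folder, version, feed URL, and API key are placeholders.
Set-Location 'C:\Packages\horizon-agent'
choco pack .\horizon-agent.nuspec
choco push .\horizon-agent.7.11.0.nupkg --source 'https://proget.lab.local/nuget/internal-choco/' --api-key 'xxxxxxxxxxxx'
```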

So this is what I’m doing for image builds in my lab, and the direction I’m planning to go.

Horizon 7 Administration Console Changes

Over the last couple of releases, VMware has included an HTML5-based Horizon Console for managing Horizon 7.  Each release has brought this console closer to feature parity with the Flash-based Horizon Administrator console that is currently used by most administrators.

With the end-of-life date rapidly approaching for Adobe Flash, and some major browsers already making Flash more difficult to enable and use, there will be some changes coming to Horizon Administration.

  • The HTML5 console will reach feature parity with the Flash-based Horizon Administrator in the next release.  This includes a dashboard, which is one of the major features missing from the HTML5 console.  Users will be able to access the HTML5 console using the same methods that are used with the current versions of Horizon 7.
  • In the releases that follow the next Horizon release, users connecting to the current Flash-based console will get a page that provides them a choice to either go to the HTML5 console or continue to the Flash-based console.  This is similar to the landing page for vCenter where users can choose which console they want to use.

More information on the changes will be coming as the next version of Horizon is released.

More Than VDI…Let’s Make 2019 The Year of End-User Computing

It seems like the popular joke question at the beginning of every year is “Is this finally the year of VDI?”  The answer, of course, is always no.

Last week, Johan Van Amersfoort wrote a blog post about the virtues of VDI technology with the goal of making 2019 the “Year of VDI.”  Johan made a number of really good points about how the technology has matured to be able to deliver to almost every use case.

And today, Brian Madden published a response.  In his response, Brian stated that while VDI is a mature technology that works well, it is just a small subset of the broader EUC space.

I think both Brian and Johan make good points. VDI is a great set of technologies that have matured significantly since I started working with it back in 2011.  But it is just a small subset of what the EUC space has grown to encompass.

And since the EUC space has grown, I think it’s time to put the “Year of VDI” meme to bed and, in its place, start talking about 2019 as the “Year of End-User Computing.”

When I say that we should make 2019 the “Year of End-User Computing,” I’m not referring to some tipping point where EUC solutions become nearly ubiquitous. EUC projects, especially in large organizations, require a large time investment for discovery, planning, and testing, so you can’t just buy one and call it a day.

I’m talking about elevating the conversation around end-user computing so that as we go into the next decade, businesses can truly embrace the power and flexibility that smartphones, tablets, and other mobile devices offer.

Since the new year is only a few weeks away, and the 2019 project budgets are most likely allocated, conversations you have around any new end-user computing initiatives will likely be for 2020 and beyond.

So how can you get started with these conversations?

If you’re in IT management or managing end-user machines, you should start taking stock of your management technologies and remote access capabilities.  Then talk to your users.  Yes…talk to the users.  Find out what works well, what doesn’t, and what capabilities they’d like to have.  Talk to the data center teams and application owners to find out what is moving to the cloud or a SaaS offering.  And make sure you have a line of communication open with your security team because they have a vested interest in protecting the company and its data.

If you’re a consultant or service provider organization, you should be asking your customers about their end-user computing plans and talking to the end-user computing managers. It’s especially important to have these conversations when your customers talk about moving applications out to the cloud because moving the applications will impact the users, and as a trusted advisor, you want to make sure they get it right the first time.  And if they already have a solution, make sure the capabilities of that solution match the direction they want to go.

End-users are the “last mile of IT.” They’re at the edges of the network, consuming the resources in the data center. At the same time, life has a tendency to pull people away from the office, and we now have the technology to bridge the work-life gap.  As applications are moved from the on-premises data center to the cloud or SaaS platforms, a solid end-user computing strategy is critical to delivering business-critical services while providing those users with a consistently good experience.

Rubrik 5.0 “Andes” – A Refreshing Expansion

Since they came out of stealth in 2015, Rubrik has significantly expanded the features and capabilities of their core product.  They have had 13 major releases and added features for cloud providers and multi-tenant environments, as well as Polaris, a software-as-a-service platform that provides enhanced cloud features and global management, and Radar, a service that detects and protects against ransomware attacks.

Today, Rubrik is announcing their 14th major release – Andes 5.0.  The Andes release builds on top of Rubrik’s feature rich platform to further expand the capabilities of the product.  It expands support for both on-premises mission critical applications as well as cloud native applications, and it extends or enhances existing product features.

Key features of this release are:

Enhanced Oracle Protection

Oracle database backup support was introduced in the Rubrik 4.0 Alta release, and it was basically a scripted RMAN backup to a Rubrik managed volume.  The Rubrik team has been hard at work enhancing this feature.

Rubrik is introducing a connector agent that can be installed on Oracle hosts or RAC nodes.  This connector will be able to discover instances and databases automatically, allowing SLAs to be applied directly to the hosts or the databases.

Simplified administration of Oracle backups isn’t the only Oracle enhancement in the Andes release.  The popular Live Mount feature has now been extended to Oracle environments.  If you’re not familiar with Live Mount, it is the ability to run a virtual machine or database directly from the backup.  This is useful for test and development environments or retrieving a single table or row that was accidentally dropped from a database.

Point-in-time recovery of Oracle environments is another new Oracle enhancement.  This feature allows Oracle administrators to restore their database to a specific point in time.  Rubrik will orchestrate the recovery of the database and replay log files to reach the specified point in time.

SAP HANA Protection

SAP HANA is the in-memory database that drives many SAP implementations.  In Andes 5.0, Rubrik offers an SAP-certified HANA backup solution that utilizes SAP’s BackInt APIs for HANA data protection.  This solution integrates with HANA Studio and SAP Cockpit.  The SAP HANA protection feature also supports point-in-time recovery and log management features.

HANA protection relies on another new feature of Andes called Elastic App Service.  Elastic App Service is a managed volume mounted on the Rubrik CDM, and it provides the same SLA-driven policies that other Rubrik objects get.

Microsoft SQL Server Enhancements

Rubrik has supported Microsoft SQL Server backups since the 3.0 release, and there has been a steady stream of enhancements to this feature.  The Andes release is no different, and it adds two major SQL Server backup features.

The first is the introduction of Changed Block Tracking for SQL Server databases. This feature will act similarly to the CBT function provided in VMware vSphere.  The benefit of this feature is that the Rubrik backup service can now look at the database change file to determine what blocks need to be backed up rather than scanning the database for changes, allowing for a shorter backup window and reduced overhead on the SQL Server host.

Another SQL Server enhancement is group Volume Shadow Copy Service (VSS) snapshots.  Rubrik utilizes Microsoft’s VSS SQL Writer Service to provide a point-in-time copy of the database.  The SQL Writer Service does this by freezing all operations on, or quiescing, the database to take a VSS snapshot.  Once the snapshot is completed, the database resumes operations while Rubrik performs any backup operations against the snapshot.  This process needs to be repeated on each individual database that Rubrik backs up, and this can lead to lengthy backup windows when there are multiple databases on each SQL Server.

Group VSS snapshots allow Rubrik to protect multiple databases on the same server with one VSS snapshot action.  Databases that are part of the same SLA group will have their VSS snapshots taken and processed at the same time.  This essentially parallelizes backup operations for that SLA group.  The benefits of this are a reduction in SQL Server backup times and the ability to perform backups more frequently.

Windows Bare-Metal Recovery

Rubrik started off as a virtualization backup product.  However, there are still large workloads that haven’t been virtualized.  While Rubrik supported some physical backups, such as SQL Server database backups, it never supported full backup and recovery of physical Windows Servers.  This meant that it couldn’t fully support all workloads in the data center.

The Andes 5.0 release introduces the ability to protect workloads and data that reside on physical Windows Servers.  This is done with the same level of simplicity as all other virtualized and physical database workloads.

Physical Windows backup is done through the existing Rubrik Backup Service that is used for database workloads.  The initial backup is a full system backup that is saved to a VHDX file, and all subsequent backups utilize changed block tracking to only back up the changes to the volumes.

Restoring to bare metal isn’t fully automated, but it seems fairly straightforward.  The host server boots to a WinPE environment, mounts a Live Mount of the Windows Volume snapshots, and then runs a PowerShell script to restore the volumes. Once the restore is complete, the server can be rebooted to the normal boot drive.

This option is not only good for backing up and protecting physical workloads, but it can also be used for P2V and P2C (or physical-to-cloud) migrations.

The Windows BMR feature only supports Windows Server 2008 R2, Server 2012 R2, and Server 2016.  It does not support Windows 7 or Windows 10.

SLA Policy Enhancements

Setting up backup policies inside of Rubrik is fairly simple.  You create an SLA domain, you set the frequency and retention period of backup points, and you apply that policy to virtual machines, databases, or other objects.

But what if you need more control over when certain backups are taken?  There may be policies in place that determine when certain kinds of backups need to occur.

Andes 5.0 introduces Advanced SLA Policy Configuration. This is an optional feature that enables administrators to not only specify the frequency and retention period of a backup point, but also to specify when those backups take place.

For example, my policy may dictate that I need to take my monthly backup on the last day of each month.  Under Rubrik’s normal scheduling engine, I can only specify a monthly backup.  I can’t create a schedule that is only applied on the last day of the month.  With Advanced SLA Policy Configuration, that monthly backup can be pinned to the last day of the month.

Office365 Backup

Office365 is quickly replacing on-premises Exchange and SharePoint servers as organizations move to the Software-as-a-Service model. While Microsoft provides tools to help retain data, it is possible to permanently delete data. There are also scenarios where it is not easy to move data – such as migrating to a new Office365 tenant.

Starting with the Andes 5.0 release, Rubrik will support backup and recovery of Office365 email and calendar objects through the Polaris platform. Polaris will act as the control plane for Office365 backup operations, and it will utilize the customer’s own Azure cloud storage to host the backup data and the search index.

SLAs can be applied to individual users or to all users in a tenant.  When it is applied to all users, new users and mailboxes will automatically inherit the SLA so they are protected as soon as they are created.

The Office365 protection feature allows for individual items, folders, or entire mailboxes to be recovered.  These items can be restored to the original mailbox location or exported to another user’s mailbox.

Other Enhancements

The Andes 5.0 release is a very large release, and I’m scratching the surface of what’s being included.  Some other key highlights of this release are:

  • NAS Direct Archive – Direct backup of NAS filesets into the Cloud
  • Live Mount VMDKs from Snapshots
  • Improved vCenter Recovery – Can recover directly to ESXi host
  • EPIC EHR Database Backup on Pure Storage
  • Snapshot Retention Enhancements
  • Support for RSA Multi-factor Authentication
  • API Tokens for Authentication
  • Cloud Archive Consolidation

Thoughts

This is another impressive release from Rubrik.  There are a number of long-awaited feature enhancements in this release, and they continue to add new features at a rapid pace.

#DW3727KU – The Digital Workspaces Showcase Keynote Live Blog

In a few minutes, the Digital Workspace Showcase keynote will take place. This keynote will show the future of end-user computing. I will be updating this blog as they make announcements and perform demonstrations.

4:32 PM – Room is pretty full. Looks like we are running a few minutes behind while everyone takes their seats.

4:34 PM – The keynote is starting with a video about EUC issues. Some laughter in the crowd.

4:35 PM – Shankar Iyer and Noah Wasmer take the stage. They’re talking about the history of EUC at VMware. Noah is talking about the business transformation that VMware EUC can provide to a variety of use cases.

4:38 PM – Companies with engaged workforces earn 147% more per share than their non-engaged competitors.

4:39 PM – Horizon cloud services are available in over 25 regions across AWS, Azure, and IBM Softlayer. Workspace ONE is processing over 450 BILLION events per month.

4:41 PM – CIOs are saying that they can’t recruit talent unless they upgrade their end-user computing infrastructure.

4:43 PM – Workspace ONE is the platform that will unify, abstract, and reduce device silos. There are five key pillars around reducing digital silos:

  • Employee Experience
  • Modern Management
  • Virtualization
  • Insights
  • Automation

There are three core ideas to bring an intelligence-driven digital workspace to life. These three pillars are built on a foundation of intelligence and automation.

4:45 PM – Shankar announces the first Employees-First Award to highlight a customer that brings digital transformation to their employees. Adobe Systems wins the award.

4:49 PM – Shawn Bass, VMware EUC CTO, takes the stage to talk about Redefining Modern Management of End-User Computing.

  • The first announcement is Dell “Ready to Work” Solutions. Brett Hansen, VP at Dell, joins Shawn on stage.
  • The first point that they are discussing is the ability to manage Dell hardware with Workspace ONE.
  • Dell is providing factory integration of Workspace ONE so users can receive a laptop directly from Dell, already integrated with Workspace ONE, so they can boot it and have their applications provisioned as if they were a mobile device. Large applications are preloaded in the factory. This sounds like the existing Dell factory process with Workspace ONE preinstalled and registered.
  • In order to prevent untrusted applications from being run on the machine, Workspace ONE will integrate with Device Guard to prevent untrusted applications from running. Trusted applications can be downloaded and run through the Workspace ONE portal.

4:57 PM – Announcing Windows 10 Industry Baselines. These are prepopulated templates with policies configured to meet various industry baselines. Baselines can be updated and modified by administrators. This solution provides 100% GPO coverage and 100% modern policy management coverage.

  • Device Update Readiness is an automation capability that will let IT fully automate the process of application compatibility testing. Workspace ONE Intelligence will detect applications that are blocking the deployment of the latest version of Windows and allow IT to automatically send alerts to the developers.
  • CVE Vulnerability Remediation is a Workspace ONE Intelligence service. It pulls a CVE database into Intelligence, provides information about the vulnerability, and provides the ability to automate the approval of a patch, deployment of the patch, and alerting the security team that it is being proactively addressed.

5:05 PM – Windows 10 isn’t the only ecosystem that is being updated. Enhancements are coming to MacOS, Android, Chrome, Google Glass, and Rugged/IoT.

5:05 PM – Changes to work styles require changes to IT security.

  • Zero-trust security is a principle that states the device should never be trusted. Workspace ONE can help create a zero-trust environment. Workspace ONE allows for a defense-in-depth strategy where security can be applied at multiple layers.
  • The partnership with Okta enables IT to set policies that can prevent users from accessing Okta applications unless the device is managed. When attempting to access the application, WS ONE will perform a device check before sending the user to Okta for authentication.
  • Workspace ONE Trust Networks allows security tools to integrate with Workspace ONE Intelligence. This allows Workspace ONE to automate actions to prevent the user from introducing security risks into the environment.
  • VMware is also announcing four new Trust Networks partners – Checkpoint, Palo Alto Networks, Trend Micro, and zScaler.

5:15 PM – Shikha Mittal and Angela Ge take the stage to discuss modernization of Windows Application Delivery.

  • Intelligence and automation are being added into the Horizon control plane. This is built into the Horizon Cloud Control Plane. A cloud connector will be available to enable automation and intelligence for on-premises Horizon environments. Horizon is also available on VMC, and the cloud connector enables management of these environments as well. Horizon Cloud on Softlayer and Horizon Cloud on Azure IaaS are managed directly from the Horizon Cloud Management Console.
  • The Horizon Cloud Management Console allows administrators to view all of their Horizon environments, both on-premises and in the cloud, and perform management actions against them. It also allows administrators to provision both Horizon on VMC and Horizon Cloud on Azure.

5:25 PM – The Workspace ONE agent can be installed on Horizon desktops, and when the desktop is provisioned, it becomes a managed device. This enables VMware UEM policies to be applied to Horizon desktops as well as view intelligence about the desktop, applications, and security posture of the entire physical and virtual desktop estate.

5:30 PM – Announcing the Workspace ONE Intelligent Hub – combining the Workspace ONE app and the Airwatch Agent. Workspace ONE Intelligent Hub enables workflow-driven activities with integrations into other enterprise systems like Service Now, an internal people directory, and a notifications page where the user can keep track of tickets and alerts from application notifications.

5:36 PM – Shawn wraps up the keynote by announcing the EUC Beta Program. You can learn more at https://goo.gl/wZmXqK

    VMworld Vegas Tips and Tricks

    VMworld is only a few weeks away.  Like the last two VMworlds, VMworld 2018 will be held at the Mandalay Bay Conference Center in Las Vegas.  This will be the last year that VMworld is at Mandalay Bay – it should make a return to San Francisco’s Moscone Center for 2019.

    Whether you’re a seasoned pro or attending VMworld for the first time, there are a few things you should know for getting the most out of your VMworld experience.

    1. Wear Comfortable, Broken-In Shoes – You will be doing A LOT of walking. And I mean a lot.  If you track your steps, you will probably find that you do over 20,000 steps each day.  And when you’re not walking, you will probably be spending a lot of time on your feet.  Having a comfortable pair of walking shoes is key to surviving the week.  Make sure you break these shoes in before you go to Vegas.
    2. Lighten Your Load – If your backpack is anything like mine, it’s filled with most things that we think we need on a day-to-day basis.  This could be an extra power supply, dongles and adapters for projectors, spare whiteboard markers, or whatever else ends up in our backpacks.  That can be a lot of extra weight that you carry around.  You won’t need most of this for VMworld.  Clean out your backpack before you go and leave the extra stuff at home.  If you plan to bring electronics with you that you won’t carry every day, make sure you take advantage of the safe in your hotel room to keep them secure.
    3. Spend Time in the Community Areas and Solutions Exchange – VMworld is about the sessions, right?  Nope.  While the sessions are important, don’t fill your entire schedule with back-to-back sessions and talks.  You will want to spend time exploring the solutions exchange to talk to vendors and in the community areas.  The Blogger Tables and the vBrownbag Community Stage are great places to meet others.
    4. Join Twitter – If you’re not already on Twitter, make sure you join it for VMworld.  There is a lot going on, and you can keep up with sessions and after-hours activities by tracking various hashtags like #VMworld and #VMworld3Word.  It’s also a great way to meet people.
    5. Go Outside – Yes, Vegas is hot.  But you’ll be spending most of the day indoors breathing recycled and air-conditioned air.  Step outside, even if it’s for 15 minutes, and get some fresh air.
    6. Be Safe – There is a lot to do in Vegas, but if you step out at night to explore the town, make sure you’re safe.  The usual tourism rules apply.  Don’t carry any more cash than you need to, keep your wallet and cell phone in your front pocket, and be aware of your surroundings.

    Getting Started With UEM Part 2: Laying The Foundation – File Services

    In my last post on UEM, I discussed the components and key considerations that go into deploying VMware UEM.  UEM is made up of multiple components that rely on a common infrastructure of file shares and Group Policy to manage the user environment, and in this post, we will cover how to deploy the file share infrastructure.

    There are two file shares that we will be deploying.  These file shares are:

    • UEM Configuration File Share
    • UEM User Data Share

    Configuration File Share

    The first of the two UEM file shares is the configuration file share.  This file share holds the configuration data used by the UEM agent that is installed in the virtual desktops or RDSH servers.

    The UEM configuration share contains a few important subfolders.  These subfolders are created by the UEM management console during its initial setup, and they align with various tabs in the UEM Management Console.  We will discuss this more in a future article on using the UEM Management console.

    • General – This is the primary subfolder on the configuration share, and it contains the main configuration files for the agent.
    • FlexRepository – This subfolder under General contains all of the settings configured on the “User Environment” tab.  The settings in this folder tell the UEM agent how to configure policies such as Application Blocking, Horizon Smart Policies, and ADMX-based settings.

    Administrators can create their own subfolders for organizing application and Windows personalization settings.  These are created in the Personalization tab, and when a folder is created in the UEM Management Console, it is also created on the configuration share.  Some folders that I use in my environment are:

    • Applications – This is the first subfolder underneath the General folder.  This folder contains the INI files that tell the UEM agent how to manage application personalization.  The Applications folder makes up one part of the “Personalization” tab.
    • Windows Settings – This folder contains the INI files that tell the UEM agent how to manage the Windows environment personalization.  The Windows Settings folder makes up the other part of the Personalization tab.

    Some environments are a little more complex, and they require additional configuration sets for different use cases.  UEM can create a silo for specific settings that should only be applied to certain users or groups of machines.  A silo can have any folder structure you choose to set up – it can be a single application configuration file or it can be an entire set of configurations with multiple sub-folders.  Each silo also requires its own Group Policy configuration.

    User Data File Share

    The second UEM file share is the user data file share.  This file share holds the user data that is managed by UEM.  This is where any captured application profiles are stored. It can also contain other user data that may not be managed by UEM such as folders managed by Windows Folder Redirection.  I’ve seen instances where the UEM User Data Share also contained other data to provide a single location where all user data is stored.

    The key thing to remember about this share is that it is a user data share.  These folders belong to the user, and they should be secured so that other users cannot access them.  IT administrators, system processes such as antivirus and backup engines, and, if allowed by policy, the helpdesk should also have access to these folders to support the environment.

    User application settings data is stored on the share.  This consists of registry keys and files and folders from the local user profile.  When this data is captured by the UEM agent, it is compressed in a zip file before being written out to the network.  The user data folder also can contain backup copies of user settings, so if an application gets corrupted, the helpdesk or the user themselves can easily roll back to the last configuration.

    UEM also allows log data to be stored on the user data share.  The log contains information about activities that the UEM agent performs during logon, application launch and close, and logoff, and it provides a wealth of troubleshooting information for administrators.

    UEM Shared Folder Replication

    VMware UEM is perfect for multi-site end-user computing environments because it only reads settings and data at logon and writes back to the share at user logoff.  If FlexDirect is enabled for applications, it will also read during an application launch and write back when the last instance of the application is closed.  This means that it is possible to replicate UEM data to other file shares, and the risk of file corruption is low because files are rarely locked.

    Both the UEM Configuration Share and the UEM User Data share can be replicated using various file replication technologies.

    DFS Namespaces

    As environments grow or servers are retired, this UEM data may need to be moved to new locations.  Or it may need to exist in multiple locations to support multiple sites.  In order to simplify the configuration of UEM and minimize the number of changes that are required to Group Policy or other configurations, I recommend using DFS Namespaces to provide a single namespace for the file shares.  This allows users to use a single path to access the file shares regardless of their location or the servers that the data is located on.
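
    If your namespace servers are Windows-based, the DFSN PowerShell module can stand this up quickly.  A small sketch, using placeholder domain, namespace, server, and share names (the underlying SMB shares must already exist):

    ```powershell
    # Create a domain-based DFS namespace with folders for the two UEM shares.
    # The domain, namespace, server, and share names below are placeholders.
    New-DfsnRoot -Path '\\corp.example.com\UEM' -TargetPath '\\fs01.corp.example.com\UEM' -Type DomainV2

    New-DfsnFolder -Path '\\corp.example.com\UEM\Configuration' -TargetPath '\\fs01.corp.example.com\UEMConfiguration'
    New-DfsnFolder -Path '\\corp.example.com\UEM\UserData'      -TargetPath '\\fs01.corp.example.com\UserData'
    ```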

    UEM Share Permissions

    It’s not safe to assume that everyone is using Windows-based file servers to provide file services in their environment.  Because of that, setting up network shares is beyond the scope of this post.  The process of creating the share and applying security varies based on the device hosting the share.

    The required Share and NTFS/File permissions are listed in the table below. These contain the basic permissions that are required to use UEM.  The share permissions required for the HelpDesk tool are not included in the table.

    Share: UEMConfiguration

      Share Permissions:
        Administrators: Full Control
        UEM Admins: Change
        Authenticated Users: Read

      NTFS Permissions:
        Administrators: Full Control
        UEM Admins: Full Control
        Authenticated Users: Read and Execute

    Share: UserData

      Share Permissions:
        Administrators: Full Control
        UEM Admins: Full Control
        Authenticated Users: Change

      NTFS Permissions:
        Administrators: Full Control
        UEM Admins: Full Control
        Authenticated Users (This folder only): Read and Execute, Create Folders/Append Data
        Creator Owner (Subfolders and files only): Full Control
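
    On a Windows file server, the shares and the permissions above can be created with New-SmbShare and icacls.  This is a minimal sketch, assuming placeholder paths under D:\Shares, a domain named CORP, and a group called UEM Admins; adjust the names and rights to match the table and your environment:

    ```powershell
    # Create the two UEM shares with the share permissions from the table above.
    # Paths, the CORP domain, and the 'UEM Admins' group are placeholders.
    New-Item -ItemType Directory -Path 'D:\Shares\UEMConfiguration','D:\Shares\UserData' -Force | Out-Null

    New-SmbShare -Name 'UEMConfiguration' -Path 'D:\Shares\UEMConfiguration' `
        -FullAccess 'BUILTIN\Administrators' -ChangeAccess 'CORP\UEM Admins' -ReadAccess 'Authenticated Users'

    New-SmbShare -Name 'UserData' -Path 'D:\Shares\UserData' `
        -FullAccess 'BUILTIN\Administrators', 'CORP\UEM Admins' -ChangeAccess 'Authenticated Users'

    # NTFS permissions on the user data folder: users can create their own folder,
    # and only the folder owner gets full control of what they create.
    icacls 'D:\Shares\UserData' /grant 'Authenticated Users:(RX,AD)'
    icacls 'D:\Shares\UserData' /grant 'CREATOR OWNER:(OI)(CI)(IO)F'
    ```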

    Wrapup and Next Steps

    This post just provided a basic overview of the required UEM file shares and user permissions.  If you’re planning to do a multi-site environment or have multiple servers, this would be a good time to configure replication.

    The next post in this series will cover the setup and initial configuration of the UEM management infrastructure.  This includes setting up the management console and configuring Group Policy.

    Moving to the Cloud? Don’t Forget End-User Experience

    The cloud has a lot to offer IT departments.  It provides the benefits of virtualization in a consumption-based model, and it allows new applications to quickly be deployed while waiting for, or even completely forgoing, on-premises infrastructure.  This can provide a better time-to-value and greater flexibility for the business.  It can help organizations reduce, or eliminate, their on-premises data center footprint.

    But while the cloud has a lot of potential to disrupt how IT manages applications in the data center, it also has the potential to disrupt how IT delivers services to end users.

    In order to understand how cloud will disrupt end-user computing, we first need to look at how organizations are adopting the cloud.  We also need to look at how the cloud can change application development patterns, and how that will change how IT delivers services to end users.

    The Current State of Cloud

    When people talk about cloud, they’re usually talking about three different types of services.  These services, and their definitions, are:

    • Infrastructure-as-a-Service: Running virtual machines in a hosted, multi-tenant virtual data center.
    • Platform-as-a-Service: A subscription service that lets developers build applications without having to build the supporting infrastructure.  The platform can include some combination of web services, application runtime services (like .NET or Java), databases, message bus services, and other managed components.
    • Software-as-a-Service: A subscription to an application that is hosted and managed by the vendor.

The best analogy for explaining this compares the different cloud offerings to different ways of getting pizza, as shown in the graphic below from episerver.com:

[Image: Pizza-as-a-Service comparison graphic]

    Image retrieved from: http://www.episerver.com/learn/resources/blog/fred-bals/pizza-as-a-service/

    So what does this have to do with End-User Computing?

Today, it seems like enterprises that are adopting cloud are going in one of two directions.  The first is migrating their data centers into infrastructure-as-a-service offerings with some platform-as-a-service mixed in.  The other is replacing applications with software-as-a-service options.  The former means migrating your applications to Azure or AWS EC2; the latter means replacing on-premises services with options like ServiceNow or Microsoft Office 365.

    Both options can present challenges to how enterprises deliver applications to end-users.  And the choices made when migrating on-premises applications to the cloud can greatly impact end-user experience.

    The challenges around software-as-a-service deal more with identity management, so this post will focus on migrating on-premises applications to the cloud.

    Know Thy Applications – Infrastructure-As-A-Service and EUC Challenges

    Infrastructure-as-a-Service offerings provide IT organizations with virtual machines running in a cloud service.  These offerings provide different virtual machines optimized for different tasks, and they provide the flexibility to meet the various needs of an enterprise IT organization.  They allow IT organizations to bring their on-premises business applications into the cloud.

Win32 applications are the lifeblood of many businesses.  Whether they are commercial or developed in house, these applications are often critical to some portion of a business process.  Many of them were never designed with high availability or the cloud in mind, and the developer and/or the source code may be long gone.  Or they might not be easily replaced because they are deeply integrated into critical processes or other enterprise systems.

Many Win32 applications have clients that expect to connect to local servers.  When you move those servers to a remote data center, including the cloud, it can introduce problems that make the application nearly unusable.  Common problems that users encounter are longer application load times, increased transaction times, and reports taking longer to preview and/or print.

These problems make employees less productive, and that has a direct impact on the efficiency and profitability of the business.
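
To see why these problems show up, it helps to remember that a chatty two-tier client pays the network round-trip time on every sequential query it issues.  The query count and latencies in the sketch below are assumptions for illustration, not measurements, but they show how quickly round trips come to dominate a task.

```python
"""Back-of-the-envelope sketch: how round-trip time dominates a chatty
two-tier application.  All numbers are illustrative assumptions."""

SEQUENTIAL_QUERIES = 1500  # e.g. queries issued while opening one screen or report
SERVER_TIME_S = 0.5        # time the database spends doing actual work

for label, rtt_ms in [("LAN (same building)", 0.5),
                      ("Metro (a few miles)", 5),
                      ("Cloud region (cross-country)", 35)]:
    # Each sequential query waits one full round trip before the next can start.
    total_s = SERVER_TIME_S + SEQUENTIAL_QUERIES * (rtt_ms / 1000)
    print(f"{label:30} ~{total_s:6.1f} s per task")
```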

    A few jobs ago, I was working for a company that had its headquarters, local office, and data center co-located in the same building.  They also had a number of other regional offices scattered across our state and the country.  The company had grown to the point where they were running out of space, and they decided to split the corporate and local offices.  The corporate team moved to a new building a few miles away, but the data center remained in the building.

    Many of the corporate employees were users of a two-tier business application, and the application client connected directly to the database server.  Moving users of a fat client application a few miles down the road from the database server had a significant impact on application performance and user experience.  Application response suffered, and user complaints rose.  Critical business processes took longer, and productivity suffered as a result.

More bandwidth was procured.  That didn’t solve the issue, and IT was sent scrambling for a new solution.  Eventually, the issues were addressed with a solution that was already in use in other areas of the business: placing the core applications on Windows Terminal Services and providing users at the corporate office with a published desktop containing their required applications.

    This solution solved their user experience and application performance problems.  But it required other adjustments to the server environment, business process workflows, and how users interact with the technology that enables them to work.  It took time for users to adjust to the changes.  Many of the issues were addressed when the business moved everything to a colocation facility a hundred miles away a few months later.

    Ensuring Success When Migrating Applications to the Cloud

The business has said it’s time to move some applications to the cloud.  How do you ensure the move is a success and meets the business and technical requirements of each application, while making sure an angry mob of users doesn’t show up at your office with torches and pitchforks?

The first thing is to understand your application portfolio.  That understanding goes beyond having visibility into what applications you have in your environment and how they work from a technical perspective.  You need a holistic view of your applications, and you need to keep the following questions in mind:

    • Who uses the application?
    • What do the users do in the application?
    • How do the users access the application?
    • Where does it fit into business processes and workflows?
    • What other business systems does the application integrate with?
    • How is that integration handled?

    Applications rarely exist in a vacuum, and making changes to one not only impacts the users, but it can impact other applications and business processes as well.

    By understanding your applications, you will be able to build a roadmap of when applications should migrate to the cloud and effectively mitigate any impacts to both user experience and enterprise integrations.

The second thing is to test extensively.  The testing needs to go beyond functional testing that confirms the application runs on the server images built by the cloud providers; it also needs to include thorough user experience and user acceptance testing.  This may include spending time with users, stopwatch in hand, comparing how long tasks take on the cloud-hosted systems versus the on-premises systems.
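
Where a task can be driven programmatically, a simple timing harness can supplement the stopwatch.  In the sketch below, run_task() is a placeholder standing in for whatever transaction you can automate; anything you cannot automate still needs a human doing the timing.

```python
"""Minimal sketch of a timing harness for comparing on-premises and
cloud-hosted systems.  run_task() is a placeholder for the transaction
under test (a report query, a screen load via an API, and so on)."""
import statistics
import time

def run_task(environment: str) -> None:
    # Placeholder: call the application or API under test here.
    time.sleep(0.1 if environment == "on-prem" else 0.3)

def measure(environment: str, runs: int = 20) -> None:
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        run_task(environment)
        samples.append(time.perf_counter() - start)
    p95 = sorted(samples)[int(runs * 0.95) - 1]
    print(f"{environment:10} median {statistics.median(samples):.2f}s  p95 {p95:.2f}s")

for env in ("on-prem", "cloud"):
    measure(env)
```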

If application performance isn’t up to user standards and has a significant impact on productivity, you may need to investigate solutions for bringing users closer to the cloud-hosted applications, such as Citrix, VMware Horizon Cloud, or Amazon WorkSpaces and AppStream.  These solutions bring users closer to the applications and can give them an on-premises-like experience in the cloud.

The third thing is to plan ahead.  Having a roadmap and knowing your application portfolio enable you to plan for when you need capacity or specific features to support users, and they can guide your architecture and product selection.  You don’t want to get three years into a five-year migration and find out that the solution you selected doesn’t have the features required for a use case, or that the environment wasn’t architected to support the number of users.

When planning to migrate applications from your on-premises data centers to an infrastructure-as-a-service offering, it’s important to know your applications and take end-user experience into account.  It’s important to test and understand how those applications perform when the application servers and databases are remote from the application client.  If you don’t, you not only anger your users, you also make them less productive and the business less profitable overall.