Horizon 8.0 Part 2: Horizon Requirements

In order to deliver virtual desktops to end users, a Horizon environment requires multiple components working in concert.  Most of the components that Horizon relies upon are VMware products, but some, such as the database and Active Directory, are third-party products.

The Basics

The smallest Horizon environment requires only three components to deliver a remote desktop session to end users: a desktop, a View Connection Server, and Active Directory.  Technically speaking, you do not need vCenter or ESXi, as the Horizon Agent can be installed on physical desktops.

Many environments, though, are built on vSphere, and the virtual infrastructure for this type of environment doesn’t need to be anything special.  For small proofs of concept or upgrade testing, one server with direct-attached storage and enough RAM could support a few users.

All Horizon environments, from the simple one above to a complex multi-site Cloud Pod environment, are built on this foundation.  The core of this foundation is the View Connection Server.

Connection Servers handle desktop provisioning and user authentication, broker user sessions to desktops, and manage connections to multi-user desktops and published applications.

There are three roles that can be installed using the Connection Server installer, and all three roles have the same requirements.  These roles are:

  • Standard Connection Server – The first Connection Server installed in the environment.
  • Replica Connection Server – Additional Connection Servers that replicate their configuration from the Standard Connection Server.
  • Enrollment Server – Introduced in Horizon 7, this role facilitates the True SSO feature in conjunction with Workspace ONE Access and a local certificate authority.

The requirements for a Connection Server are:

  • 1 vCPU required, 4 vCPUs recommended
  • 4GB RAM minimum, 10GB recommended if 50 or more users are connecting
  • Windows Server 2012 R2 or newer
  • Joined to an Active Directory domain
  • Static IP Address

Note: The requirements for the Enrollment Server are the same as the requirements for Connection Server.

Aside from the Connection Server itself, the requirements for the rest of the environment are:

ESXi – ESXi is required for hosting the virtual machines.  The versions of ESXi that are supported by Horizon 2006 can be found in the VMware compatibility matrix.

vCenter Server – The versions of vCenter that are supported by Horizon 2006 can be found in the VMware compatibility matrix.

Active Directory – An Active Directory environment is required to handle user authentication to virtual desktops, and the domain must be set to at least the Server 2012 R2 functional level.  Group Policy is used for configuring parts of the environment, including desktop settings, user data redirection, DEM, and the remoting protocol.

Advanced Features

Horizon has a lot of features, and many of them require additional components.  These components add options like secure remote access, profile management, and instant clone desktops.

Secure Remote Access – The options for delivering secure remote access with Horizon have been simplified in Horizon 2006.  Traditionally, remote access had been provided by the Horizon Security Server, but this feature has been removed.  The Unified Access Gateway replaces the Security Server for all remote access functionality.

Networking Requirements – Horizon requires a number of ports to be opened to allow communication between the user’s endpoint and the remote desktop as well as communication between the management components.  The best source for showing all of the ports required by the various components is the VMware Horizon Network Ports diagram.  The Network Ports diagram can be found on TechZone.

Instant Clone Desktops – Instant Clones are a rapid desktop provisioning model.  With instant clones, desktops are created when the user signs in and are provisioned and ready to use within seconds.  When the user signs out, the desktop is destroyed.  Instant clones allow for elastic capacity and rolling image upgrades.  They support both floating and dedicated desktops.

One new feature of Horizon 2006 is a change to the Instant Clone provisioning model.  In Horizon 7, Instant Clones relied on a tree of VMs, including a powered-on parent VM on each host in the cluster, from which all of the desktops are forked.  An additional deployment model is being added in Horizon 2006 that delivers the benefits of Instant Clones without a parent VM consuming resources on each host.

Other Components – The Horizon Suite includes a number of tools to provide administrators with a full-fledged ecosystem for managing their virtual end-user computing environments.  These tools are App Volumes, Dynamic Environment Manager, and an on-premises version of Workspace ONE Access.

Horizon subscription licensing, including Horizon Universal Licensing, includes the Horizon Service and its associated cloud features (Cloud Monitoring Service, Image Management Service, and Universal Broker) as well as an entitlement to ControlUp for user experience monitoring.  The Horizon subscription SKUs are required for running Horizon on cloud-based SDDCs like VMware Cloud on AWS, Azure VMware Solution, and Google Cloud VMware Engine.  These licenses also allow customers to utilize Horizon Cloud on Microsoft Azure and Horizon Cloud on IBM Cloud.

Horizon 8.0 Part 1: Introduction

The last time VMware released a new major version of Horizon was back in 2016.  In the four years since Horizon 7 was released, there have been significant additions to the core product, including HTML5-based management and support consoles, major enhancements to the Blast protocol and the Instant Clone provisioning model, the introduction of an Extended Service Branch for long-term support, and new client redirection features to support access to local drives, Skype for Business, and multimedia redirection.

Today, VMware has released the next major release of VMware Horizon.  Horizon 8, also known as Horizon 2006, brings several changes to the platform.  Some of these changes are large changes that bring new functionality to the platform, and other changes are deprecating or removing obsolete features.

Some of the features that are being deprecated or removed, and their replacements, are:

Deprecated/Removed Feature → Replacement

  • Linked Clones and Composer* → Instant Clones (available in all desktop SKUs)
  • Persistent Disks* and Persona Management → DEM Standard/Enterprise and App Volumes User Writable Volumes
  • Windows 7, Windows 8.1, and Server 2008 support → Windows 10 and Server 2012 R2 and newer
  • JMP Server → Multi-Cloud Assignments (part of Horizon subscription)
  • Horizon Administrator (Flex/Flash-based) → Horizon Console (HTML5)
  • ThinPrint → VMware Integrated Printing
  • Security Server → Unified Access Gateway
  • vRealize Operations for Horizon → Cloud Monitoring Service and ControlUp entitlement (part of Horizon subscription)

*Note: Linked Clones, Composer, and Persistent Disks are deprecated.  All other features listed have been removed from Horizon 2006.

Some of these changes have been in the works for a while.  Instant Clones have been around since Horizon 7 was released in 2016, and they have seen significant improvements with every release.  As part of Horizon 2006, Instant Clones will no longer be restricted to Horizon Enterprise and Horizon Apps Advanced.  Unified Access Gateway has been the designated Security Server replacement for a while now.

One of the most visible changes that comes with Horizon 8 is a change in branding and versioning.  Horizon is moving to a naming scheme that encodes the year and month of release in the version number, in line with how many other vendors brand their products.  This will make it easier to keep track of when a version was released.  The first release of Horizon 8 is version 2006 (June 2020).

Some of the other changes that are included with Horizon 2006 are:

  • Expanded REST API that includes new primitives for managing entitlements and inventory items such as desktop pools and RDSH farms (a quick sketch follows this list).
  • A new Instant Clone provisioning model that frees up resources on hosts by removing the instant clone parent VM that is deployed on each host.
  • Built-in digital watermarking tool to help protect intellectual property in virtual desktops
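
To get a feel for the new API, here is a minimal sketch that authenticates to a Connection Server and lists desktop pools from PowerShell.  The server URL and credentials are placeholders, and the endpoint paths are taken from the Horizon 2006 REST API documentation, so verify them against the API Explorer for your release:

    # Authenticate to the Horizon REST API and request an access token
    $cs   = "https://cs01.corp.local"   # hypothetical Connection Server URL
    $body = @{ username = "svc-api"; password = "VMware1!"; domain = "corp" } | ConvertTo-Json
    $auth = Invoke-RestMethod -Method Post -Uri "$cs/rest/login" -Body $body -ContentType "application/json"

    # Use the returned token as a Bearer token to list desktop pools
    $headers = @{ Authorization = "Bearer $($auth.access_token)" }
    $pools = Invoke-RestMethod -Method Get -Uri "$cs/rest/inventory/v1/desktop-pools" -Headers $headers
    $pools | Select-Object id, name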

This is not an exhaustive list, so please be sure to check out the release blog.  There will be more content explaining these features on TechZone in the coming days.

Series Overview

If you noticed the title, this is part 1 of a new series on Horizon.  The first part of this series will focus on basic Horizon architecture and setup.  After that, I hope to move into more advanced topics as time allows, including those that were not covered in my last series (or were left unfinished), such as App Volumes, DEM, and RDSH.

Deep Dive – How Horizon Utilizes Active Directory

Microsoft Active Directory is the backbone of almost every enterprise network. It is also a very complex system, and large, multi-site organizations can have incredibly complex environments that stretch across multiple Active Directory forests.

I was recently on a support escalation with one of our service provider partners. The escalation revolved around integrating Horizon into a complex Active Directory environment that involved multiple Active Directory forests connected over a trust. While both Horizon and Active Directory were working properly, the design of these particular Active Directory environments caused issues that manifested in Horizon and other applications.

Active Directory

Before talking about how Horizon utilizes Active Directory, I want to do a little level setting. I won’t go into a full overview of Active Directory. This is a very large topic that can, and has, filled books, and Microsoft has some very good documentation on their public documentation site.

One Active Directory design concept that is important for Horizon deployments, especially large deployments where resource forests may be used, is Sites. Active Directory Sites are part of the logical representation of the physical network. They map physical IP space to logical network locations, and they serve multiple purposes in an Active Directory environment. One key role that sites fill is helping clients locate the closest computer that is providing a service. This includes domain controllers.

Windows has a built-in process for locating domain controllers. This process is part of the NetLogon service. During startup, the computer’s NetLogon service detects the site that the computer is located in. The site name is stored in the registry. During logon, NetLogon will use the site name to query for DNS SRV records to locate the domain controller for that site. This process is outlined in this Microsoft blog post. It gets more complicated when you have multiple forests as the site lookup is based on the domain membership of the computer, not the user.
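
If you want to see this process in action, the nltest utility that ships with Windows exposes the DC Locator. A quick sketch, run from an elevated command prompt or PowerShell session (the domain name is a placeholder):

    # Show the Active Directory site this computer has been assigned to
    nltest /dsgetsite

    # Show the domain controller that the DC Locator process selected for a domain
    nltest /dsgetdc:corp.local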

How Horizon Interacts with Active Directory

So what does this have to do with Horizon and how it interacts with Active Directory?

When you set up a new Horizon pod, you’re not required to do any Active Directory setup. The Horizon Connection Server services run in the context of the local system account, and they utilize built-in processes to identify the domain.

The Windows NetLogon service includes processes to retrieve information about the local Active Directory environment, and there are Win32 APIs, such as DsGetDcName, that allow applications to trigger this process. Horizon utilizes these APIs to discover the local domain and any trusted domains. The Windows DC Locator process will identify the closest domain controller to the site, and any queries against the domain will be targeted to that domain controller using the system’s Active Directory account. (Note: Write operations, such as creating computer accounts for Instant Clones, will not use the computer account credentials.)

If the Connection Server is not able to determine the site that it is in, then it will use any domain controller that is returned when querying DNS, and the DC Locator process will continue to query for domain controllers on a regular basis.

When it comes to integrating with Active Directory, Horizon isn’t doing anything special. We’re just building on top of what Microsoft has in Windows Server.

Troubleshooting

If AD sites are not set up properly, you may see performance issues, especially in network scenarios where Horizon cannot reach the domain controller that DNS is pointing them to.

These issues can include Active Directory user and group search results taking a long time to return, issues with user authentication, and issues with computer accounts for provisioned machines. This may also impact user login experience and site-aware services like file shares fronted by DFS Namespaces. These issues are mainly seen in large Active Directory environments with many sites, or in environments with trusts between forests, where sites are not properly set up or maintained.

So how do you troubleshoot Horizon issues with Active Directory? This Microsoft blog post provides a good starting point. You will need to use NetLogon debugging and the nltest command-line tool to see which Active Directory site your servers are members of and which domain controllers are being resolved when the DC Locator process runs.
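
As a reference, the commands below enable and disable NetLogon debug logging, which writes DC Locator activity to C:\Windows\debug\netlogon.log. This sketch uses the debug flag value documented by Microsoft; confirm it against the article above before using it in production:

    # Enable verbose NetLogon logging (run from an elevated prompt)
    nltest /dbflag:0x2080ffff

    # Review C:\Windows\debug\netlogon.log, then disable logging when finished
    nltest /dbflag:0x0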

This can get a little more complicated in cloud deployments, large enterprises or service provider scenarios where resource forests are being used. Site names become very important in these scenarios as the computer will use the local domain site name when searching for domain controllers across trusts. Fixing Active Directory issues in these environments may require site topology changes.

Best Practices

Horizon utilizes native Windows features when integrating with Active Directory. It’s important to have a solid Active Directory architecture and site topology to ensure good performance and user experience. This means having sites defined and subnets assigned to the correct site.

A well-defined site topology becomes very important in environments where a resource forest, connected to the on-premises Active Directory environment with a trust, will be used as the site names must match in both Active Directory environments for the DC Locator process to work properly. Active Directory design needs to be a part of the Horizon design process to avoid issues after deployment.

The Virtual Horizon Lab – February 2020

It’s been a while since I’ve done a home lab update.  In fact, the last one was over four years ago. William Lam’s home lab project and an upcoming appearance on “Hello from My Home Lab” with Lindy Collier have convinced me that it’s time to do an update.

My lab has both changed and grown since that last update.  Some of this was driven by vSphere changes – vSphere 6.7 required new hardware to replace my old R710s.  Changing requirements, new technology, and replacing broken equipment have also driven lab changes at various points.

My objectives have changed a bit too.  At the time of my last update, there were four key technologies and capabilities that I wanted in my lab.  These have changed as my career and my interests have changed, and my lab has evolved with it as well.  Today, my lab primarily focuses on end-user computing, learning Linux and AI, and running Minecraft servers for my kids.

vSphere Overview

The vSphere environment is probably the logical place to start.  My vSphere environment now consists of two vCenter Servers – one for my compute workloads and one for my EUC workloads.  The compute vCenter has two clusters – a four-node cluster for general compute workloads and a one-node cluster for backup.  The EUC vCenter has a single two-node cluster for running desktop workloads.

Both environments run vSphere 6.7U3 and utilize the vCenter Server virtual appliance.  The EUC cluster utilizes vSAN and Horizon.  I don’t currently have NSX-T or vRealize Operations deployed, but those are on the roadmap to be redeployed.

Compute Overview

My lab has grown a bit in this area since the last update, and this is where the most changes have happened.

Most of my 11th-generation Dell servers have been replaced, and I only have a single R710 left.  They were initially replaced by Cisco C220 M3 rackmounts, but I’ve since switched back to Dell.  I preferred the Dell servers due to cost, availability, and the HTML5-based remote management in the iDRACs.  Here are the specs for each of my clusters:

Compute Cluster – 4 Dell PowerEdge R620s with the following specs:

The R620s each have a 10GbE network card, but these cards are for future use.

Backup Cluster – 1 Dell PowerEdge R710 with the following specs:

This server is configured with local storage for my backup appliance.  This storage is provided by 1TB SATA SSDs.

VDI Cluster – 2 Dell PowerEdge R720s with the following specs:

  • 2x Intel Xeon E5-2630 Processors
  • 96 GB RAM
  • NVIDIA Tesla P4 Card

Like the R620s, the R720s each have 10GbE networking available.

I also have an R730; however, it is not currently being used in the lab.

Network Overview

When I last wrote about my lab, I was using a pair of Linksys SRW2048 switches.  I’ve since replaced these with a pair of 48-port Cisco Catalyst 3560G switches.  One of the switches has PoE, and the other is a standard switch.  In addition to switching, routing has been enabled on these switches, and they act as the core router in the network.  HSRP is configured for redundancy.  These uplink to my firewall. Traffic in the lab is segregated into multiple VLANs, including a DMZ environment.

I use Ubiquiti AC-Lite APs for my home Wi-Fi.  The newer ones support standard PoE, which is provided by one of the Cisco switches.  The UniFi management console is installed on a Linux VM running in the lab.

For network services, I have a pair of Pi-hole appliances.  These appliances run as virtual machines in the lab. I also have Avi Networks deployed for load balancing.

Storage Overview

There are two main options for primary storage in the lab.  Most primary storage is provided by Synology.  I’ve upgraded my Synology DS1515+ to a DS1818+.  The Synology appliance has four 4TB WD Red drives for capacity and four SSDs.  Two of the SSDs are used for a high-performance datastore, and the other two are used as a read-write cache for my primary datastore.  The array presents NFS-backed datastores to the VMware environment, and it also presents CIFS file shares.

vSAN is the other form of primary storage in the lab.  The vSAN environment is an all-flash deployment in the VDI cluster, and it serves up storage for VDI workloads.

The Cloud

With the proliferation of cloud providers and cloud-based services, it’s inevitable that cloud services work their way into home lab setups. My lab is no exception.

I use a couple of different cloud services in operating my lab across a couple of SaaS and cloud providers. These include:

  • Workspace ONE UEM and Workspace ONE Access
  • Office 365 and Azure – integrated with Workspace ONE through Azure AD
  • Amazon Web Services – management integrated into Workspace ONE Access, S3 as an offsite repository for backups
  • Atlassian Cloud – Jira and Confluence Free Tier integrated into Workspace ONE with Atlassian Access

Plans Going Forward

Home lab environments are dynamic, and they need to change to meet the technology and education needs of their users. My lab is no different, and I’m planning on growing my lab and its capabilities over the next year.

Some of the things I plan to focus on are:

  • Adding 10 GbE capability to the lab. I’m looking at some Mikrotik 24-port 10GbE SFP+ switches.
  • Upgrading my firewall
  • Implementing NSX-T
  • Deploying VMware Tunnel to securely publish out services like Code-Server
  • Putting my R730 back into production
  • Expanding my knowledge around DevOps and building pipelines to find ways to bring this to EUC
  • Working with Horizon Cloud Services and Horizon 7

Installing and Configuring the NVIDIA GRID License Server on CentOS 7.x

The release of NVIDIA GRID 10 included a new version of the GRID license server.  Rather than do an in-place upgrade of the existing Windows-based license servers in my lab, I decided to rebuild them on CentOS.

Prerequisites

In order to deploy the NVIDIA GRID license server, you will need two servers.  The license servers should be deployed in a highly available architecture because the features enabled by the GRID drivers will not function if a license cannot be checked out.  These servers should be fully patched.  All of my CentOS boxes run without a GUI, and all of the install steps will be done through the console, so you will need SSH access to the servers.

The license servers only require 2 vCPUs and 4GB of RAM for most environments.  The license server component runs on Tomcat, so we will need to install Java and the Tomcat web server as part of the install.  Newer versions of Java default to IPv6, so if you are not using IPv6 in your environment, you will need to disable it on the server.  If you don’t, the license server will not be listening on any IPv4 addresses.  While there are other ways to change Java’s default behavior, I find it easier to just disable IPv6 since I do not use it in my environment.

The documentation for the license server can be found on the NVIDIA docs site at https://docs.nvidia.com/grid/ls/2019.11/grid-license-server-user-guide/index.html

Installing the Prerequisites

First, we need to prepare the servers by installing and configuring our prerequisites.  We need to disable IPv6, install Java and Tomcat, and configure the Tomcat service to start automatically.

If you are planning to deploy the license servers in a highly available configuration, you will need to perform all of these steps on both servers.

The first step is to disable IPv6.  As mentioned above, Java appears to default to IPv6 for networking in recent releases on Linux.

The steps to do this are:

  1. Open the sysctl.conf file with the following command (substitute your preferred editor for nano).

    sudo nano /etc/sysctl.conf

  2. Add the following two lines at the end of the file:

    net.ipv6.conf.all.disable_ipv6 = 1
    net.ipv6.conf.default.disable_ipv6 = 1

  3. Save the file.
  4. Reboot to allow the changes to take effect.

Note: There are other ways to prevent Java from defaulting to IPv6.  These methods usually involve passing parameters to the JVM when the application launches, such as -Djava.net.preferIPv4Stack=true.  I selected this method because it was the easiest route to implement and I do not use IPv6 in my lab.

After the system reboots, the install can proceed.  The next steps are to install and configure Java and Tomcat.

  1. Install Java and Tomcat using the following commands:

    sudo yum install -y java tomcat tomcat-webapps

  2. Enable the Tomcat service so that it starts automatically on reboot.

    sudo systemctl enable tomcat.service

  3. Start Tomcat.

    sudo systemctl start tomcat.service

Finally, we will want to configure our JAVA_HOME variable.  The license server includes a command line tool, nvidialsadmin, that can be used to configure password authentication for the license server management console, and that tool requires a JAVA_HOME variable to be configured.  These steps will create the variable for all users on the system.

  1. Run the following command to see the path to the Java install:

    sudo alternatives --config java

  2. Copy the path to the Java folder, which is in parentheses.  Do not include anything after “jre/”.
  3. Create a Bash plugin for Java with the following command:

    sudo nano /etc/profile.d/java.sh

  4. Add the following lines to the file:

    export JAVA_HOME=(Your Path to Java)
    export PATH=$PATH:$JAVA_HOME/bin

  5. Save the file.
  6. Reboot the system.
  7. Test to verify that the JAVA_HOME variable is set up properly

    echo $JAVA_HOME

Installing the NVIDIA License Server

Now that the prerequisites are configured, the NVIDIA license server software can be installed.  The license server binaries are stored on the NVIDIA Enterprise Licensing portal, and they will need to be downloaded on another machine and copied over using a tool like WinSCP.

The steps for installing the license server, once the installer has been copied to the servers, are:

  1. Set the binary to be executable.

    chmod +x setup.bin

  2. Run the setup program in console mode.

    sudo ./setup.bin -i console

  3. The first screen is a EULA that will need to be accepted.  To scroll down through the EULA, press Enter until you get to the EULA acceptance.
  4. Press Y to accept the EULA.
  5. When prompted, enter the path for the Tomcat WebApps folder.  On CentOS, this path is:
    /usr/share/tomcat
  6. When prompted, press 1 to enable firewall rules for the license server.  This will open the license server port on TCP 7070.
    Since this is a headless server, the management port on TCP 8080 will also need to be opened.  This will be done in a later step.
  7. Press Enter to install.
  8. When the install completes, press enter to exit the installer.

After the install completes, the management port firewall rules will need to be configured.  While the management interface can be secured with usernames and passwords, this is not configured out of the box.  The normal recommendation is to use the browser on the local machine to set the configuration, but since this is a headless machine, that’s not available either.  For this step, I’m applying the rules to an internal zone and restricting access to the management port to the IP address of my management machine.  The steps for this are:

  1. Create a firewall rule for port TCP port 8080.

    sudo firewall-cmd --permanent --zone=internal --add-port=8080/tcp

  2. Create a firewall rule for the source IP address.

    sudo firewall-cmd --permanent --zone=internal --add-source=Management-Host-IP/32

  3. Reload the firewall daemon so the new rules take effect:

    sudo firewall-cmd --reload

Configuring the License Server For High Availability

Once the firewall rules for accessing the management port are in place, the server configuration can begin.  These steps will consist of configuring the high availability features.  Registering the license servers with the NVIDIA Licensing portal and retrieving and applying licenses will not be handled in this step.

In order to set the license servers up for high availability, you will need two servers running the same version of the license server software.  You will also need to identify which servers will be the primary and secondary servers in the infrastructure.

  1. Open a web browser on your management machine and go to http://<primary license server hostname or IP>:8080/licserver
  2. Click on Configuration
  3. In the License Generation section, fill in the following details:
    1. Backup URI:
      http://<secondary license server hostname or IP>:7070/fne/bin/capability
    2. Main URI:
      http://<primary license server hostname or IP>:7070/fne/bin/capability
  4. In the Settings for server to server sync between License servers section, fill in the following details:
    1. Synchronization to fne enabled: True
    2. Main FNE Server URI:
      http://<primary license server hostname or IP>:7070/fne/bin/capability
  5. Click Save.
  6. Open a new browser window or tab and go to http://<secondary license server hostname or IP>:8080/licserver
  7. Click on Configuration
  8. In the License Generation section, fill in the following details:
    1. Backup URI:
      http://<secondary license server hostname or IP>:7070/fne/bin/capability
    2. Main URI:
      http://<primary license server hostname or IP>:7070/fne/bin/capability
  9. In the Settings for server to server sync between License servers section, fill in the following details:
    1. Synchronization to fne enabled: True
    2. Main FNE Server URI:
      http://<primary license server hostname or IP>:7070/fne/bin/capability
  10. Click Save.

Summary

After completing the high availability setup section, the license servers are ready for the license file.  In order to generate and install this, the two license servers will need to be registered with the NVIDIA licensing service.  The steps to complete those tasks will be covered in a future post.

Integrating Rubrik Andes 5.1 with Workspace ONE Access

Early in December, Rubrik released the latest version of their core data protection platform – Andes 5.1. One of the new features in this release is support for SAML identity providers.  SAML integration provides new capabilities to service providers and large enterprises by enabling integration into enterprise networks without having to integrate directly into Active Directory.

Rubrik also supports multi-factor authentication, but the only method supported out of the box is RSA SecurID.  SAML integration enables enterprises to utilize other forms of multi-factor authentication, including RADIUS-based services and Azure MFA.  It also allows for other security policies to be implemented including device-based compliance checks.

Prerequisites

Before we can begin configuring SAML integration, there are a few things we need to do.  These prerequisites are similar to the Avi Networks SAML setup, but we won’t need to open the Workspace ONE Access metadata file in a text editor.

First, we need to make sure a DNS record is in place for our Rubrik environment.  This will be used for the fully-qualified domain name that is used when signing into our system.

Second, we need to get the Workspace One Access IdP metadata.  Rubrik does not import this automatically by providing a link to the idp.xml file, so we need to download the file.  The steps for retrieving the metadata are:

  1. Log into your Workspace One Access administrator console.
  2. Go to App Catalog.
  3. Click Settings.
  4. Under SaaS Apps, click SAML Metadata.
  5. Right-click on Identity Provider Metadata and select Save Link As.  Save the file as idp.xml.

Rubrik SAML Configuration

Once the prerequisites are taken care of, we can start the SAML configuration on the Rubrik side.  This consists of generating the Rubrik SAML metadata and uploading the Workspace ONE metadata file.

  1. Log into your Rubrik Appliance.
  2. Go to the Gear icon in the upper right corner and select Users.
  3. Select Identity Providers.
  4. Click Add Identity Provider.
  5. Provide a name in the Identity Provider Name field.
  6. Click the folder icon next to the Identity Provider Metadata field.
  7. Upload the idp.xml file we saved in the last step.
  8. Select the Service Provider Host Address option.  This can be a DNS name or the cluster floating IP, depending on your environment configuration.  For this setup, we will be using a DNS name.
  9. Enter the DNS name in the field.
  10. Click Download Rubrik Metadata.
  11. Click Add.
  12. Open the Rubrik Metadata file in a text editor.  We will need this in the next step.

Workspace ONE Configuration

Now that the Rubrik side is configured, we need to create our Workspace ONE catalog entry.  The steps for this are:

  1. Log into your Workspace One Access administrator panel.
  2. Go to the Catalog tab.
  3. Click New to create a new App Catalog entry.
  4. Provide a name for the new Rubrik entry in the App Catalog.
  5. If you have an icon to use, click Select File and upload the icon for the application.
  6. Click Next.
  7. In the Authentication Type field, select SAML 2.0
  8. In Configuration, select URL/XML
  9. Copy the contents of the Rubrik Metadata XML file.
  10. Paste them into the URL/XML textbox.
  11. Scroll down to the Advanced Properties section.
  12. Expand Advanced Properties.
  13. Click the toggle switch under Sign Assertion
  14. Click Next.
  15. Select an Access Policy to use for this application. This will determine the rules used for authentication and access to the application.
  16. Click Next.
  17. Review the summary of the configuration.
  18. Click Save and Assign.
  19. Select the users or groups that will have access to this application.
  20. Click Save.

Authorizing SAML Users in Rubrik

The final configuration step is to authorize Workspace ONE users within Rubrik and assign them to a role.  This step only works with individual users.  While testing, I couldn’t find a way to have it accept users based on a group or SAML attribute.

The steps for authorizing Workspace ONE users are:

  1. Log into your Rubrik Appliance.
  2. Go to the Gear icon in the upper right corner and select Users.
  3. Select Users and Groups.
  4. Click Grant Authorization.
  5. Select the directory.
  6. Select User and enter the username that the user will use when signing into Workspace ONE.
  7. Click Continue.
  8. Select the role to assign to the user and click Assign.
  9. The SAML user has been authorized to access the Rubrik appliance through SSO.

Testing SAML Authentication and Troubleshooting

So now that we have our authentication profiles configured in both Rubrik and Workspace One Access, we need to test them to ensure our admin users can sign in.  In order to test access, you need to sign out of your Rubrik appliance.  When you return to the login screen, you’ll see that it has changed slightly: there will be a large “Sign in with SSO” button above the username field.  When pressed, users will be directed to Workspace ONE and authenticated.

While Rubrik may be listed in the Workspace ONE Access App Catalog, launching from the app catalog will just bring you to the login page.  I could not figure out how to get IdP-initiated logins to work, and some of my testing resulted in error pages that showed metadata errors.

Integrating Microsoft Azure MFA with VMware Unified Access Gateway 3.8

One of the common questions I see is around integrating VMware Horizon with Microsoft Azure MFA. Natively, Horizon only supports RSA SecurID and RADIUS-based multifactor authentication solutions. While it is possible to configure Azure MFA to utilize RADIUS, it requires Network Policy Server and a special plugin for the integration. (Another option existed – Azure MFA Server – but that is no longer available for new implementations as of July 2019.)

Earlier this week, VMware released Horizon 7.11 with Unified Access Gateway 3.8. The new UAG contains a pretty cool new feature – the ability to utilize SAML-based multifactor authentication solutions.  SAML-based multifactor authentication allows Horizon to consume a number of modern cloud-based solutions, including Microsoft’s Azure MFA.

And…you don’t have to use True SSO in order to implement this.

If you’re interested in learning about how to configure Unified Access Gateways to utilize Okta for MFA, as well as tips around creating web links for Horizon applications that can be launched from an MFA portal, you can read the operational tutorial that Andreano Lanusso wrote.  It is currently available on the VMware Techzone site.

Prerequisites

Before you can configure Horizon to utilize Azure MFA, there are a few prerequisites that will need to be in place.

First, you need to have licensing that allows your users to utilize the Azure MFA feature.  Microsoft bundles this into their Office 365 and Microsoft 365 licensing SKUs as well as their free version of Azure Active Directory.

Note: Not all versions of Azure MFA have the same features and capabilities. I have only tested with the full version of Azure MFA that comes with the Azure AD Premium P1 license.  I have not tested with the free tier or MFA for Office 365 feature-level options.

Second, you will need to make sure that you have Azure AD Connect installed and configured so that users are syncing from the on-premises Active Directory into Azure Active Directory.  You will also need to enable Azure MFA for users or groups of users and configure any MFA policies for your environment.

If you want to learn more about configuring the cloud-based version of Azure MFA, you can view the Microsoft documentation here.

There are a few URLs that we will need when configuring single sign-on in Azure AD.  Based on the configuration steps below, these URLs are:

  • https://<horizon uag fqdn>/portal – used as the identifier (entity ID)
  • https://<horizon uag fqdn>/portal/samlsso – used as the reply URL and sign-on URL

Case sensitivity matters here.  If you put caps in the SAML URL, you may receive errors when uploading your metadata file.

Configuring Horizon UAGs as a SAML Application in Azure AD

The first thing we need to do is create an application in Azure Active Directory.  This will allow the service to act as a SAML identity provider for Horizon.  The steps for doing this are:

  1. Sign into your Azure Portal.  If you just have Office 365, you do have Azure Active Directory, and you can reach it from the Office 365 Portal Administrator console.
  2. Go into the Azure Active Directory blade.
  3. Click on Enterprise Applications.
  4. Click New Application.
  5. Select Non-Gallery Application.
  6. Give the new application a name.
  7. Click Add.
  8. Before we can configure our URLs and download metadata, we need to assign users to the app.  Click “1. Assign users and groups”.
  9. Click Add User.
  10. Click where it says Users and Groups – None Selected.
  11. Select the Users or Groups that will have access to Horizon. There is a search box at the top of the list to make finding groups easier in large environments.
    Note: I recommend creating a large group to nest your Horizon user groups in to simplify setup.
  12. Click Add.
  13. Click Overview.
  14. Click “2. Set up single sign-on”.
  15. In the section labeled Basic SAML Configuration, click the pencil in the upper right corner of the box.  This will allow us to enter the URLs we use for our SAML configuration.
  16. Enter the following items.  Please note that the URL paths are case sensitive; entering PORTAL, Portal, or SAMLSSO will prevent this from being set up successfully:
    1. In the Identifier (Entity ID) field, enter your portal URL.  It should look like this:
      https://horizon.uag.url/portal
    2. In the Reply URL (Assertion Consumer Service URL) field, enter your UAG SAML SSO URL.  It should look like this:
      https://horizon.uag.url/portal/samlsso
    3. In the Sign on URL field, enter your UAG SAML SSO URL.  It should look like this:
      https://horizon.uag.url/portal/samlsso
  17. Click Save.
  18. Review your user attributes and claims, and adjust as necessary for your environment. Horizon 7 supports logging in with a user principal name, so you may not need to change anything.
  19. Click the download link for the Federation XML Metadata file.

We will use our metadata file in the next step to configure our identity provider on the UAG.

Once the file is downloaded, the Azure AD side is configured.

Configuring the UAG

Once we have completed the Azure AD configuration, we need to configure our UAGs to utilize SAML for multifactor authentication.

In order to do these steps, you will need to have an admin password set on the UAG appliance to access the admin interface.  I recommend doing the initial configuration and testing on a non-production appliance.  Once testing is complete, you can either manually apply the settings to the production UAGs or download the configuration INI file and copy the SAML configuration into the production configuration files for deployment.

Note: You can configure SAML on the UAGs even if you aren’t using True SSO.  If you are using this feature, you may need to make some configuration changes on your connection servers.  I do not use True SSO in my lab, so I have not tested Azure MFA on the UAGs with True SSO.

The steps for configuring the UAG are:

  1. Log into the UAG administrative interface.
  2. Click Configure Manually.
  3. Go to the Identity Bridging Settings section.
  4. Click the gear next to Upload Identity Provider Metadata.
  5. Leave the Entity ID field blank.  This will be generated from the metadata file you upload.
  6. Click Select.
  7. Browse to the path where the Azure metadata file you downloaded in the last section is stored.  Select it and click Open.
  8. If desired, enable the Always Force SAML Auth option.
    Note: SAML-based MFA acts differently than RADIUS and RSA authentication. The default behavior has you authenticate with the provider, and the provider places an authentication cookie on the machine. Subsequent logins may redirect users from Horizon to the cloud MFA site, but they may not be forced to reauthenticate. Enabling the Always Force SAML Auth option makes SAML-based cloud MFA providers behave similarly to the existing RADIUS and RSA-based multifactor solutions by requiring reauthentication on every login. Please also be aware that things like Conditional Access policies in Azure AD and Azure AD-joined Windows 10 devices may impact the behavior of this solution.
  9. Click Save.
  10. Go up to Edge Services Settings and expand that section.
  11. Click the gear icon next to Horizon Edge Settings.
  12. Click the More button to show all of the Horizon Edge configuration options.
  13. In the Auth Methods field, select one of the two options to enable SAML:
    1. If you are using True SSO, select SAML.
    2. If you are not using True SSO, select SAML and Passthrough.
  14. Select the identity provider that will be used.  For Azure MFA, this will be the one labeled https://sts.windows.net
  15. Click Save.

SAML authentication with Azure MFA is now configured on the UAG, and you can start testing.

User Authentication Flows when using SAML

Compared to RADIUS and RSA, user authentication behaves a little differently when using SAML-based MFA.  When a user connects to a SAML-integrated environment, they are not prompted for their RADIUS or RSA credentials right away.

After connecting to the Horizon environment, the user is redirected to the website for their authentication solution.  They will be prompted to authenticate with this solution with their primary and secondary authentication options.  Once this completes, the Horizon client will reopen, and the user will be prompted for their Active Directory credentials.

You can configure the UAG to use the same username for Horizon as the one that is used with Azure AD, but the user will still be prompted for a password unless True SSO is configured.

Configuring SAML with Workspace ONE for AVI Networks

Earlier this year, VMware closed the acquisition of Avi Networks.  Avi Networks provides an application delivery controller solution designed for the multi-cloud world. While many ADC solutions aggregate the control plane and data plane on the same appliance, Avi Networks takes a different approach.  They utilize a management appliance for the control plane and multiple service engine appliances that handle load balancing, web application firewall, and other services for the data plane.

Integrating Avi Networks with Workspace ONE Access

The Avi Networks Controller appliance offers multiple options for integrating the management console into enterprise environments for authentication management.  One of the options that is available is SAML.  This enables integration with Workspace ONE Access and the ability to take advantage of the App Catalog, network access restrictions, and step-up authentication when administrators sign in.

Before I walk through the steps for integrating Avi Networks into Workspace ONE Access via SAML, I want to thank my colleague Nick Robbins.  He provided most of the information that enabled this integration to be set up in my lab environments and this blog post.  Thank you, Nick!

There are three options that can be selected for the URL when configuring SAML integration for Avi Networks.  The first option is to use the cluster VIP address, a shared IP address that is used by all management nodes when they are clustered.  The second option is to use a fully-qualified domain name.  With either of these options, the SSO URL and entity ID that are used in the SAML configuration are automatically generated by the system.  The third option is to use a user-provided entity ID.

For this walkthrough, we are going to use a fully-qualified domain name.

Prerequisites

Before we can begin configuring SAML integration, there are a few things we need to do.

First, we need to make sure a DNS record is in place for our Avi Controller.  This will be used for the fully-qualified domain name that is used when signing into our system.

Second, we need to get the Workspace One Access IdP metadata.  Avi does not import this automatically by providing a link to the idp.xml file, so we need to download the file.  The steps for retrieving the metadata are:

  1. Log into your Workspace One Access administrator console.
  2. Go to App Catalog.
  3. Click Settings.
  4. Under SaaS Apps, click SAML Metadata.
  5. Right-click on Identity Provider Metadata and select Save Link As.  Save the file as idp.xml.
  6. Open the idp.xml file in your favorite text editor.  We will need to copy this into the Avi SAML configuration in the next step.

Avi Networks Configuration

The first thing that needs to be done is to configure an authentication profile to support SAML on the Avi Networks controller.  The steps for this are:

  1. Log into your Avi Networks controller as your administrative user.
  2. Go to Templates -> Security -> Auth Profile.
  3. Click Create to create a new profile.
  4. Provide a name for the profile in the Name field.
  5. Under Type, select SAML.
  6. Copy the Workspace ONE SAML IdP information into the idp Metadata field.  This information is located in the idp.xml file that we saved in the previous section.
  7. Select Use DNS FQDN
  8. Fill in your organizational details.
  9. Enter the fully-qualified domain name that will be used for the SAML configuration in the FQDN field.
  10. Click Save

Next, we will need to collect some of our service provider metadata.  Avi Networks does not generate an XML file that can be imported into Workspace ONE Access, so we will need to enter our metadata manually.  There are three things we need to collect:

  • Entity ID
  • SSO URL
  • Signing Certificate

We will get the Entity ID and SSO URL from the Service Provider Settings screen.  Although this screen also has a field for the signing certificate, it doesn’t seem to populate anything in my lab, so we will have to get the certificate information from the SSL/TLS Certificates tab.

The steps for getting into the Service Provider Settings are:

  1. Go to Templates -> Security -> Auth Profile.
  2. Find the authentication profile that you created.
  3. Click on the Verify box on the far right side of the screen.  This is the square box with a question mark in it.
  4. Copy the Entity ID and SSO URL and paste them into your favorite text editor.  We will be using these in the next step.
  5. Close the Service Provider Settings screen by clicking the X in the upper right-hand corner.

Next, we need to get the signing certificate.  This is the System-Default-Portal-Cert.  The steps to get it are:

  1. Go to Templates -> Security -> SSL/TLS Certificates.
  2. Find the System-Default-Portal-Cert.
  3. Click the Export button.  This is the circle with the down arrow on the right side of the screen.
  4. The certificate information is in the lower box labeled certificate.
  5. Click the Copy to Clipboard button underneath the certificate box.
  6. Paste the certificate in your favorite text editor.  We will also need this in the next step.
  7. Click Done to close the Export Certificate screen.

Configuring the Avi Networks Application Catalog item in Workspace One Access

Now that we have our SAML profile created in the Avi Networks Controller, we need to create our Workspace ONE catalog entry.  The steps for this are:

  1. Log into your Workspace One Access admin interface.
  2. Go to the Catalog tab.
  3. Click New to create a new App Catalog entry.
  4. Provide a name for the new Avi Networks entry in the App Catalog.
  5. If you have an icon to use, click Select File and upload the icon for the application.
  6. Click Next.
  7. Enter the following details.  For the next couple of steps, you need to remain on the Configuration screen.  Don’t click next until you complete all of the configuration items:
    1. Authentication Type: SAML 2.0
    2. Configuration Type: Manual
    3. Single Sign-On URL: Use the single sign-on URL that you copied from the Avi Networks Service Provider Settings screen.
    4. Recipient URL: Same as the Single Sign-On URL
    5. Application ID: Use the Entity ID setting that you copied from the Avi Networks Service Provider Settings screen.
    6. Username Format: Unspecified
    7. Username Value: ${user.email}
    8. Relay State URL: FQDN or IP address of your appliance
  8. Expand Advanced Properties and enter the following values:
    1. Sign Response: Yes
    2. Sign Assertion: Yes
    3. Copy the value of the System-Default-Portal-Cert certificate that you copied in the previous section into the Request Signature field.
    4. Application Login URL: FQDN or IP address of your appliance.  This will enable SP-initiated login workflows.
  9. Click Next.
  10. Select an Access Policy to use for this application.  This will determine the rules used for authentication and access to the application.
  11. Click Next.
  12. Review the summary of the configuration.
  13. Click Save and Assign.
  14. Select the users or groups that will have access to this application and the deployment type.
  15. Click Save.

Enabling SAML Authentication in Avi Networks

In the last couple of steps, we created our SAML profile in Avi Networks and a SAML catalog item in Workspace One Access.  However, we haven’t actually turned SAML on yet or assigned any users to roles.  In this next section, we will enable SAML and grant superuser rights to SAML users.

Note: It is possible to configure more granular role-based access control by adding application parameters to the Workspace One Access catalog item and then mapping those parameters to different roles in Avi Networks.  This walkthrough provides a simple setup; deeper RBAC integration may be covered in a future post.

  1. Log into your Avi Networks Management Console.
  2. Go to Administration -> Settings -> Authentication/Authorization.
  3. Click the pencil icon to edit the Authentication/Authorization settings.
  4. Under Authentication, select Remote.
  5. Under Auth Profile, select the SAML profile that you created earlier.
  6. Make sure the Allow Local User Login box is checked.  If this box is not checked and there is a configuration issue, you will not be able to log back into the controller.
  7. Click Save.
  8. After saving the authentication settings, some new options will appear in the Authentication/Authorization screen to enable role mapping.
  9. Click New Mapping.
  10. For Attribute, select Any.
  11. Check the box labelled Super User.
  12. Click Save.

SAML authentication is now configured on the Avi Networks Management appliance.

Testing SAML Authentication and Troubleshooting

So now that we have our authentication profiles configured in both Avi Networks and Workspace One Access, we need to test them to ensure our admin users can sign in.  There are two tests that should be run.  The first is launching Avi Networks from the Workspace One Access app catalog, and the second is doing an SP-initiated login by going to your Avi Networks URL.

In both cases, you should see a Workspace One Access authentication screen for login before being redirected to the Avi Networks management console.

In my testing, however, I had some issues in one of my labs where I would get a JSON error when attempting SAML authentication.  If you see this error and you have validated that all of your settings match, reboot the appliance.  This solved the issue in my lab.

If SAML authentication breaks, and you need to gain access to the appliance management interface with a local account, then you need to provide a different URL.  That URL is https://avi-management-fqdn-or-ip/#!/login?local=1.

Minimal Touch VDI Image Building With MDT, PowerCLI, and Chocolatey

Recently, Mark Brookfield posted a three-part series on the process he uses for building Windows 10 images in HobbitCloud (Part 1, Part 2, Part 3). Mark has put together a great series of posts that explain the tools and the processes that he is using in his lab, and it has inspired me to revisit this topic and talk about the process and tooling I currently use in my lab and the requirements and decisions that influenced this design.

Why Automate Image Building?

Hand-building images is a time-intensive process.  It is also potentially error-prone, as it is easy to forget applications and specific configuration items, requiring additional work or even a new image build depending on the steps that were missed.  Incremental changes that are made to templates may not make it into the image-building documentation, requiring additional work to update the image after it has been deployed.

Automation helps solve these challenges and provides consistent results.  Once the process is nailed down, you can expect consistent results on every build.  If you need to make incremental changes to the image, you can add them to your build sequence so they aren’t forgotten when building the next image.

Tools in My Build Process

When I started researching my image build process back in 2017, I was looking for a way to save time and provide consistent results on each build.  I wanted a tool that would allow me to build images with little interaction on my part, but it also needed to fit into my lab.  The main tools I looked at were Packer with the JetBrains vSphere plugin and Microsoft Deployment Toolkit (MDT).

While Packer is an incredible tool, I ended up selecting MDT as the main tool in my process.  My reason for selecting MDT has to do with NVIDIA GRID: the vSphere plugin for Packer does not currently support provisioning machines with a vGPU, so using that tool would have required manual post-deployment work.

One nice feature of MDT is that it can utilize a SQL Server database for storing details about registered machines such as the computer name, the OU where the computer object should be placed, and the task sequence to run when booting into MDT.  This allows a new machine to be provisioned in a zero-touch fashion, and the database can be populated from PowerShell.
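
As an illustration, pre-staging a machine with the MDTDB module looks something like the sketch below. The SQL Server name, database name, MAC address, and task sequence ID are placeholders from my lab, and the available settings depend on how the MDT database and customsettings.ini are configured:

    # Connect to the MDT deployment database using the MDTDB PowerShell module
    Import-Module .\MDTDB.psm1
    Connect-MDTDatabase -sqlServer "mdt-sql01" -database "MDTDB"

    # Register the new VM by MAC address with its computer name and task sequence
    New-MDTComputer -macAddress "00:50:56:AA:BB:CC" -description "WIN10-GPU-01" -settings @{
        OSDComputerName = "WIN10-GPU-01"
        TaskSequenceID  = "W10-GPU"
    }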

Unlike Packer, which can create and configure the virtual machine in vCenter, MDT only handles the operating system deployment.  So I needed some way to create and configure the VM in vCenter with a vGPU profile.  The best method of doing this is using PowerCLI.  While there are no native commandlets for managing vGPUs or other Shared PCI objects in PowerCLI, there are ways to utilize vSphere extension data to add a vGPU profile to a VM.
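
A rough sketch of that approach is below. The VM name and the grid_p4-4q profile string are examples from my lab; the vGPU profiles available to you depend on your card and GRID driver version:

    # Build a device config spec that adds a shared PCI (vGPU) device to an existing VM
    $vm = Get-VM -Name "WIN10-GPU-01"

    $spec = New-Object VMware.Vim.VirtualMachineConfigSpec
    $deviceSpec = New-Object VMware.Vim.VirtualDeviceConfigSpec
    $deviceSpec.Operation = "add"

    $device = New-Object VMware.Vim.VirtualPCIPassthrough
    $device.Backing = New-Object VMware.Vim.VirtualPCIPassthroughVgpuBackingInfo
    $device.Backing.Vgpu = "grid_p4-4q"   # example vGPU profile for a Tesla P4
    $deviceSpec.Device = $device

    $spec.DeviceChange = $deviceSpec

    # Reconfigure the VM through the vSphere API extension data
    $vm.ExtensionData.ReconfigVM($spec)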

While MDT can install applications as part of a task sequence, I wanted something a little more flexible.  The way I had structured my task sequences, each new version of an application required the task sequences to be updated to use the newer version.  The reason for this is that I wasn’t using Application Groups for certain applications that were going into the image, mainly the agents being installed, as I wanted to control the install order and manage reboots. (Yes…I may have been using this wrong…)

I wanted to reduce my operational overhead when applications were updated, so I went looking for alternatives.  I ended up settling on Chocolatey to install most of the applications in my images, with packages hosted in a private repository running on the free edition of ProGet.
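
For reference, installing an agent from a private feed is a one-liner; the package name and ProGet feed URL below are placeholders:

    # Install an application from an internal ProGet-hosted Chocolatey feed
    choco install horizon-agent -y --source "https://proget.lab.local/nuget/internal-choco/"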

My Build Process Workflow

My build workflow consists of the following steps, with one branch for GPU-enabled machines:

  1. Create a new VM in vCenter
  2. Configure VM options such as memory reservations and video RAM
  3. GPU Flag Only – Add a virtual GPU with the correct profile to the VM.
  4. Identify the task sequence that will be used.  There are different task sequences for GPU and non-GPU machines, and logic in the script builds the task sequence name from parameters passed in when the script is run.
  5. Create a new computer entry in the MDT database.  This includes the computer name, MAC address, task sequence name, role, and a few other variables.  This step is performed in PowerShell using the MDTDB PowerShell module.
  6. Power on the VM. This is done using PowerCLI. The VM will PXE boot to a Windows PE environment configured to point to my MDT server.

Build Process

After the VM is powered on and boots to Windows PE, the rest of the process is hands off. All of the MDT prompts, such as the prompt for a computer name or the task sequence, are disabled, and the install process relies on the database for things like computer name and task sequence.

From this point forward, it takes about forty-five minutes to an hour to complete the task sequence. MDT installs Windows 10 and any drivers like the VMXNET3 driver, installs Windows Updates from an internal WSUS server, installs any agents or applications, such as VMware Tools, the Horizon Agent, and the DEM agent, silently runs the OSOT tool, and stamps the registry with the image build date.
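
The registry stamp at the end is a handy way to identify which build a desktop came from. A hypothetical version of that final task sequence step, with a made-up key path:

    # Record the image build date in the registry as the last task sequence step
    $key = "HKLM:\SOFTWARE\ImageBuildInfo"
    if (-not (Test-Path $key)) { New-Item -Path $key -Force | Out-Null }
    New-ItemProperty -Path $key -Name "BuildDate" -Value (Get-Date -Format "yyyy-MM-dd") -PropertyType String -Force | Out-Null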

Future Direction and Process Enhancements

While this process works well today, it is a bit cumbersome. Each new Windows 10 release requires a new task sequence for version control. It is also difficult to work tools like the OSDeploy PowerShell scripts by David Segura (used for slipstreaming updates into a Windows 10 WIM) into the process. While there are ways to automate MDT, I’d rather invest time in automating builds using Packer.

There are a couple of post-deployment steps that I would like to integrate into my build process as well. I would like to utilize Pester to validate the image build after it completes, and then if it passes, execute a shutdown and VM snapshot (or conversion to template) so it is ready to be consumed by Horizon. My plan is to utilize a tool like Jenkins to orchestrate the build pipeline and do something similar to the process that Mark Brookfield has laid out.
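
The validation step could be as simple as a handful of Pester tests run inside the completed image. A minimal sketch, assuming the agents listed above are what need checking:

    # Hypothetical Pester checks that required agents made it into the image
    Describe "Image validation" {
        It "has VMware Tools installed" {
            Get-Service -Name "VMTools" -ErrorAction SilentlyContinue | Should -Not -BeNullOrEmpty
        }
        It "has the Horizon Agent installed" {
            Test-Path "C:\Program Files\VMware\VMware View\Agent" | Should -Be $true
        }
    }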

The ideal process that I am working towards will have multiple workflows to manage various aspects of the process. Some of these are:

1. A process for automatically creating updated Windows 10 ISOs with the latest Windows Updates using the OSDeploy PowerShell module.

2. A process for creating Chocolatey package updates and submitting them to my ProGet repository for applications managed by Chocolatey.

3. A process to build new images when Windows 10 or key applications (such as VMware Tools, the Horizon Agent, or NVIDIA drivers) are updated. This process will ideally use Packer as the build tool to simplify management. The main dependency for this step is adding NVIDIA GRID support to the JetBrains Packer vSphere plugin.

So this is what I’m doing for image builds in my lab, and the direction I’m planning to go.

Horizon 7 Administration Console Changes

Over the last couple of releases, VMware has included an HTML5-based Horizon Console for managing Horizon 7.  Each release has brought this console closer to feature parity with the Flash-based Horizon Administrator console that is currently used by most administrators.

With the end-of-life date rapidly approaching for Adobe Flash, and some major browsers already making Flash more difficult to enable and use, there will be some changes coming to Horizon Administration.

  • The HTML5 console will reach feature parity with the Flash-based Horizon Administrator in the next release.  This includes a dashboard, which is one of the major features missing from the HTML5 console.  Users will be able to access the HTML5 console using the same methods that are used with the current versions of Horizon 7.
  • In the releases that follow the next Horizon release, users connecting to the current Flash-based console will get a page that provides them a choice to either go to the HTML5 console or continue to the Flash-based console.  This is similar to the landing page for vCenter where users can choose which console they want to use.

More information on the changes will be coming as the next version of Horizon is released.