Deep Dive – How Horizon Utilizes Active Directory

Microsoft Active Directory is the backbone of almost every enterprise network. It is also a complex system, and large, multi-site organizations can have environments that stretch across multiple Active Directory forests.

I was recently on a support escalation with one of our service provider partners. The escalation revolved around integrating Horizon into a complex Active Directory environment that involved multiple Active Directory forests connected over a trust. While both Horizon and Active Directory were working properly, the design of these particular Active Directory environments caused issues that manifested in Horizon and other applications.

Active Directory

Before talking about how Horizon utilizes Active Directory, I want to do a little level setting. I won’t go into a full overview of Active Directory. This is a very large topic that can fill, and has filled, books, and Microsoft has some very good documentation on their public documentation site.

One Active Directory design concept that is important for Horizon deployments, especially large deployments where resource forests may be used, is Sites. Active Directory Sites are part of the logical representation of the physical network. They map physical IP space to logical network locations, and they serve multiple purposes in an Active Directory environment. One key role that sites fill is helping clients locate the closest computer that is providing a service. This includes domain controllers.

Windows has a built-in process for locating domain controllers. This process is part of the NetLogon service. During startup, the computer’s NetLogon service detects the site that the computer is located in. The site name is stored in the registry. During logon, NetLogon will use the site name to query for DNS SRV records to locate the domain controller for that site. This process is outlined in this Microsoft blog post. It gets more complicated when you have multiple forests as the site lookup is based on the domain membership of the computer, not the user.
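
You can see the same records that NetLogon queries by asking DNS directly. For example (the domain and site names below are placeholders for your own environment):

    nslookup -type=SRV _ldap._tcp.dc._msdcs.corp.example.com
    nslookup -type=SRV _ldap._tcp.Default-First-Site-Name._sites.dc._msdcs.corp.example.com

The first query returns every domain controller registered for the domain, while the second returns only the domain controllers that cover the named site.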

How Horizon Interacts with Active Directory

So what does this have to do with Horizon and how it interacts with Active Directory?

When you set up a new Horizon pod, you’re not required to do any Active Directory setup. The Horizon Connection Server services run in the context of the local system account, and they utilize built-in processes to identify the domain.

The Windows NetLogon service includes processes to retrieve information about the local Active Directory environment, and there are Win32 APIs that allow applications to trigger this process. Horizon utilizes these APIs to discover the local domain and any trusted domains. The Windows DC Locator process will identify the closest domain controller to the site, and any queries against the domain will be targeted to that domain controller using the system’s Active Directory account. (Note: Write operations, such as creating computer accounts for Instant Clones, will not use the computer account credentials; they use the domain credentials configured in Horizon for provisioning.)

If the Connection Server is not able to determine the site that it is in, then it will use any domain controller that is returned when querying DNS, and the DC Locator process will continue to query for domain controllers on a regular basis.

When it comes to integrating with Active Directory, Horizon isn’t doing anything special. We’re just building on top of what Microsoft has in Windows Server.

Troubleshooting

If AD sites are not set up properly, you may see performance issues, especially in network scenarios where Horizon cannot reach the domain controller that DNS is pointing it to.

These issues can include Active Directory user and group search results taking a long time to return, issues with user authentication, and issues with computer accounts for provisioned machines. This may also impact user login experience and site-aware services like file shares fronted by DFS Namespaces. These issues are mainly seen in large Active Directory environments with many sites, or in environments with trusts between forests, where sites are not properly set up or maintained.

So how do you troubleshoot Horizon issues with Active Directory? This Microsoft blog post provides a good starting point. You will need to use NetLogon debugging and the nltest command line tool to see what Active Directory site your servers are a member of and what domain controllers are being resolved when the DC Locator process runs.
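
As a starting sketch, these are the commands I typically run from an elevated prompt on the Connection Server (the domain name is a placeholder). The last command enables verbose NetLogon debug logging, which is written to %windir%\debug\netlogon.log:

    nltest /dsgetsite
    nltest /dsgetdc:corp.example.com
    nltest /dclist:corp.example.com
    nltest /dbflag:0x2080ffff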

This can get a little more complicated in cloud deployments, large enterprises or service provider scenarios where resource forests are being used. Site names become very important in these scenarios as the computer will use the local domain site name when searching for domain controllers across trusts. Fixing Active Directory issues in these environments may require site topology changes.

Best Practices

Horizon utilizes native Windows features when integrating with Active Directory. It’s important to have a solid Active Directory architecture and site topology to ensure good performance and user experience. This means having sites defined and subnets assigned to the correct site.

A well-defined site topology becomes very important in environments where a resource forest, connected to the on-premises Active Directory environment with a trust, will be used as the site names must match in both Active Directory environments for the DC Locator process to work properly. Active Directory design needs to be a part of the Horizon design process to avoid issues after deployment.

The Virtual Horizon Lab – February 2020

It’s been a while since I’ve done a home lab update.  In fact, the last one was over four years ago. William Lam’s home lab project and appearing on a future episode of “Hello from My Home Lab” with Lindy Collier have convinced me that it’s time to do an update.

My lab has both changed and grown since that last update.  Some of this was driven by vSphere changes – vSphere 6.7 required new hardware to replace my old R710s.  Changing requirements, new technology, and replacing broken equipment have also driven lab changes at various points.

My objectives have changed a bit too.  At the time of my last update, there were four key technologies and capabilities that I wanted in my lab.  These have changed as my career and my interests have changed, and my lab has evolved with it as well.  Today, my lab primarily focuses on end-user computing, learning Linux and AI, and running Minecraft servers for my kids.

vSphere Overview

The vSphere environment is probably the logical place to start.  My vSphere environment now consists of two vCenter Servers – one for my compute workloads and one for my EUC workloads.  The compute vCenter has two clusters – a 4-node cluster for general compute workloads and a 1-node cluster for backup.  The EUC vCenter has a single 2-node cluster for running desktop workloads.

Both environments run vSphere 6.7U3 and utilize the vCenter Server virtual appliance.  The EUC cluster utilizes VSAN and Horizon.  I don’t currently have NSX-T or vRealize Operations deployed, but those are on the roadmap to be redeployed.

Compute Overview

My lab has grown a bit in this area since the last update, and this is where the most changes have happened.

Most of my 11th generation Dell servers have been replaced, and I only have a single R710 left.  They were initially replaced by Cisco C220 M3 rackmounts, but I’ve switched back to Dell.  I preferred the Dell servers due to cost, availability, and HTML5-based remote management in the iDRACs.  Here are the specs for each of my clusters:

Compute Cluster – 4 Dell PowerEdge R620s with the following specs:

The R620s each have a 10GbE network card, but these cards are for future use.

Backup Cluster – 1 Dell PowerEdge R710 with the following specs:

This server is configured with local storage for my backup appliance.  This storage is provided by 1TB SSD SATA drives.

VDI Cluster – 2 Dell PowerEdge R720s with the following specs:

  • 2x Intel Xeon E5-2630 Processors
  • 96 GB RAM
  • NVIDIA Tesla P4 Card

Like the R620s, the R720s each have 10GbE networking available.

I also have an R730, however, it is not currently being used in the lab.

Network Overview

When I last wrote about my lab, I was using a pair of Linksys SRW2048 switches.  I’ve since replaced these with a pair of 48-port Cisco Catalyst 3560G switches.  One of the switches has PoE, and the other is a standard switch.  In addition to switching, routing has been enabled on these switches, and they act as the core router in the network.  HSRP is configured for redundancy.  These uplink to my firewall. Traffic in the lab is segregated into multiple VLANs, including a DMZ environment.

I use Ubiquiti AC-Lite APs for my home wifi.  The newer ones support standard PoE, which is provided by one of the Cisco switches.  The Unifi management console is installed on a Linux VM running in the lab.

For network services, I have a pair of PiHole appliances.  These appliances are running as virtual machines in the lab. I also have AVI Networks deployed for load balancing.

Storage Overview

There are two main options for primary storage in the lab.  Most primary storage is provided by Synology.  I’ve updated my Synology DS1515+ to a DS1818+.  The Synology appliance has four 4TB WD RED drives for capacity and four SSDs.  Two of the SSDs are used for a high-performance datastore, and the other two are used as a read-write cache for my primary datastore.  The array presents NFS-backed datastores to the VMware environment, and it also presents CIFS for file shares.

VSAN is the other form of primary storage in the lab.  The VSAN environment is an all-flash deployment in the VDI cluster, and it is used for serving up storage for VDI workloads.

The Cloud

With the proliferation of cloud providers and cloud-based services, it’s inevitable that cloud services work their way into home lab setups. My lab is no exception.

I use a couple of different cloud services in operating my lab across a couple of SaaS and cloud providers. These include:

  • Workspace ONE UEM and Workspace ONE Access
  • Office 365 and Azure – integrated with Workspace ONE through Azure AD
  • Amazon Web Services – management integrated into Workspace ONE Access, S3 as an offsite repository for backups
  • Atlassian Cloud – Jira and Confluence Free Tier integrated into Workspace ONE with Atlassian Access

Plans Going Forward

Home lab environments are dynamic, and they need to change to meet the technology and education needs of the users. My lab is no different, and I’m planning on growing my lab and its capabilities over the next year.

Some of the things I plan to focus on are:

  • Adding 10 GbE capability to the lab. I’m looking at some Mikrotik 24-port 10GbE SFP+ switches.
  • Upgrading my firewall
  • Implementing NSX-T
  • Deploying VMware Tunnel to securely publish out services like Code-Server
  • Putting my R730 back into production
  • Expanding my knowledge around DevOps and building pipelines to find ways to bring this to EUC
  • Working with Horizon Cloud Services and Horizon 7

Installing and Configuring the NVIDIA GRID License Server on CentOS 7.x

The release of NVIDIA GRID 10 included a new version of the GRID license server.  Rather than do an in-place upgrade of my existing Windows-based license servers that I was using in my lab, I decided to rebuild them on CentOS.

Prerequisites

In order to deploy the NVIDIA GRID license server, you will need two servers.  The license servers should be deployed in a highly-available architecture since the features enabled by the GRID drivers will not function if a license cannot be checked out.  These servers should be fully patched.  All of my CentOS boxes run without a GUI. All of the install steps will be done through the console, so you will need SSH access to the servers.

The license servers only require 2 vCPU and 4GB of RAM for most environments.  The license server component runs on Tomcat, so we will need to install Java and the Tomcat web server.  We will do that as part of our install.  Newer versions of Java default to IPv6, so if you are not using this technology in your environment, you will need to disable IPv6 on the server.  If you don’t, the license server will not be listening on any IPv4 addresses. While there are other ways to change Java’s default behavior, I find it easier to just disable IPv6 since I do not use it in my environment.

The documentation for the license server can be found on the NVIDIA docs site at https://docs.nvidia.com/grid/ls/2019.11/grid-license-server-user-guide/index.html

Installing the Prerequisites

First, we need to prepare the servers by installing and configuring our prerequisites.  We need to disable IPv6, install Java and Tomcat, and configure the Tomcat service to start automatically.

If you are planning to deploy the license servers in a highly available configuration, you will need to perform all of these steps on both servers.

The first step is to disable IPv6.  As mentioned above, Java appears to default to IPv6 for networking in recent releases on Linux.

The steps to do this are:

  1. Open the sysctl.conf file with the following command (substitute your preferred editor for nano).

    sudo nano /etc/sysctl.conf

  2. Add the following two lines at the end of the file:

    net.ipv6.conf.all.disable_ipv6 = 1
    net.ipv6.conf.default.disable_ipv6 = 1

  3. Save the file.
  4. Reboot to allow the changes to take effect.

Note: There are other ways to prevent Java from defaulting to IPv6.  These methods usually involve making changes to the application parameters when Java launches.  I selected this method because it was the easiest route to implement and I do not use IPv6 in my lab.
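
If you would rather not disable IPv6 system-wide, another option (which I have not tested in my lab) is to tell the JVM to prefer IPv4.  On CentOS, the Tomcat package typically reads JAVA_OPTS from /etc/tomcat/tomcat.conf, so a line like the following should have the same effect; verify the file path for your install:

    JAVA_OPTS="-Djava.net.preferIPv4Stack=true"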

After the system reboots, the install can proceed.  The next steps are to install and configure Java and Tomcat.

  1. Install Java and Tomcat using the following commands:

    sudo yum install -y java tomcat tomcat-webapps

  2. Enable the tomcat service so that it starts automatically on reboot.

    sudo systemctl enable tomcat.service

  3. Start Tomcat.

    sudo systemctl start tomcat.service
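
Before moving on, it is worth a quick check that Tomcat is running and answering on its default port.  The curl command should return an HTTP response header if Tomcat is up:

    sudo systemctl status tomcat.service
    curl -I http://localhost:8080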

Finally, we will want to configure our JAVA_HOME variable.  The license server includes a command line tool, nvidialsadmin, that can be used to configure password authentication for the license server management console, and that tool requires a JAVA_HOME variable to be configured.  These steps will create the variable for all users on the system.

  1. Run the following command to see the path to the Java install:

    sudo alternatives --config java

  2. Copy the path to the Java folder, which is in parentheses.  Do not include anything after "jre/".
  3. Create a Bash plugin for Java with the following command:

    sudo nano /etc/profile.d/java.sh

  4. Add the following lines to the file:

    export JAVA_HOME=(Your Path to Java)
    export PATH=$PATH:$JAVA_HOME/bin

  5. Save the file.
  6. Reboot the system.
  7. Test to verify that the JAVA_HOME variable is set up properly

    echo $JAVA_HOME

Installing the NVIDIA License Server

Now that the prerequisites are configured, the NVIDIA license server software can be installed.  The license server binaries are stored on the NVIDIA Enterprise Licensing portal, and they will need to be downloaded on another machine and copied over using a tool like WinSCP.

The steps for installing the license server once the installer has been copied to the servers are:

  1. Set the binary to be executable.

    chmod +x setup.bin

  2. Run the setup program in console mode.

    sudo ./setup.bin -i console

  3. The first screen is a EULA that will need to be accepted.  To scroll down through the EULA, press Enter until you get to the EULA acceptance.
  4. Press Y to accept the EULA.
  5. When prompted, enter the path for the Tomcat WebApps folder.  On CentOS, this path is:
    /usr/share/tomcat
  6. When prompted, press 1 to enable firewall rules for the license server.  This will open the license server port on TCP 7070.
    Since this is a headless server, the management port on TCP 8080 will also need to be opened.  This will be done in a later step.
  7. Press Enter to install.
  8. When the install completes, press enter to exit the installer.
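
As a quick sanity check before touching the firewall, you can confirm that the license service is listening on the expected ports (TCP 7070 for licensing and TCP 8080 for the Tomcat management interface):

    sudo ss -tlnp | grep -E ':7070|:8080'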

After the install completes, the management port firewall rules will need to be configured.  While the management interface can be secured with usernames and passwords, this is not configured out of the box.  The normal recommendation is to just use the browser on the local machine to set the configuration, but since this is a headless machine, that’s not available either. For this step, I’m applying the rules to an internal zone and restricting access to the management port to the IP address of my management machine.  The steps for this are:

  1. Create a firewall rule for port TCP port 8080.

    sudo firewall-cmd --permanent --zone=internal --add-port=8080/tcp

  2. Create a firewall rule for the source IP address.

    sudo firewall-cmd --permanent --zone=internal --add-source=Management-Host-IP/32

  3. Reload the firewall daemon so the new rules take effect:

    sudo firewall-cmd --reload
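
You can verify that the new rules are in place with:

    sudo firewall-cmd --zone=internal --list-all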

Configuring the License Server For High Availability

Once the firewall rules for accessing the management port are in place, the server configuration can begin.  These steps will consist of configuring the high availability features.  Registering the license servers with the NVIDIA Licensing portal and retrieving and applying licenses will not be handled in this step.

In order to set the license servers up for high availability, you will need two servers running the same version of the license server software.  You will also need to identify which servers will be the primary and secondary servers in the infrastructure.

  1. Open a web browser on your management machine and go to http://<primary license server hostname or IP>:8080/licserver
  2. Click on Configuration
  3. In the License Generation section, fill in the following details:
    1. Backup URI:
      http://<secondary license server hostname or IP>:7070/fne/bin/capability
    2. Main URI:
      http://<primary license server hostname or IP>:7070/fne/bin/capability
  4. In the Settings for server to server sync between License servers section, fill in the following details:
    1. Synchronization to fne enabled: True
    2. Main FNE Server URI:
      http://<primary license server hostname or IP>:7070/fne/bin/capability
  5. Click Save.
  6. Open a new browser window or tab and go to http://<secondary license server hostname or IP>:8080/licserver
  7. Click on Configuration
  8. In the License Generation section, fill in the following details:
    1. Backup URI:
      http://<secondary license server hostname or IP>:7070/fne/bin/capability
    2. Main URI:
      http://<primary license server hostname or IP>:7070/fne/bin/capability
  9. In the Settings for server to server sync between License servers section, fill in the following details:
    1. Synchronization to fne enabled: True
    2. Main FNE Server URI:
      http://<primary license server hostname or IP>:7070/fne/bin/capability
  10. Click Save.

Summary

After completing the high availability setup section, the license servers are ready for the license file.  In order to generate and install this, the two license servers will need to be registered with the NVIDIA licensing service.  The steps to complete those tasks will be covered in a future post.

Integrating Rubrik Andes 5.1 with Workspace ONE Access

Early in December, Rubrik released the latest version of their core data protection platform – Andes 5.1. One of the new features in this release is support for SAML identity providers.  SAML integration provides new capabilities to service providers and large enterprises by enabling integration into enterprise networks without having to directly integrate into Active Directory.

Rubrik also supports multi-factor authentication, but the only method supported out of the box is RSA SecurID.  SAML integration enables enterprises to utilize other forms of multi-factor authentication, including RADIUS-based services and Azure MFA.  It also allows for other security policies to be implemented including device-based compliance checks.

Prerequisites

Before we can begin configuring SAML integration, there are a few things we need to do.  These prerequisites are similar to the Avi Networks SAML setup, but we won’t need to open the Workspace ONE Access metadata file in a text editor.

First, we need to make sure a DNS record is in place for our Rubrik environment.  This will be used for the fully-qualified domain name that is used when signing into our system.

Second, we need to get the Workspace One Access IDP metadata.  Rubrik does not import this automatically by providing a link to the idp.xml file, so we need to download this file.  The steps for retrieving the metadata are:

  1. Log into your Workspace One Access administrator console.
  2. Go to App Catalog
  3. Click Settings
  4. Under SaaS Apps, click SAML Metadata
  5. Right click on Identity Provider Metadata and select Save Link As.  Save the file as idp.xml
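
If you prefer to grab the metadata from the command line, it is normally published at the standard Workspace ONE Access path shown below.  Treat the path as an assumption and substitute your own tenant FQDN:

    curl -o idp.xml https://access.example.com/SAAS/API/1.0/GET/metadata/idp.xml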

Rubrik SAML Configuration

Once the prerequisites are taken care of, we can start the SAML configuration on the Rubrik side.  This consists of generating the Rubrik SAML metadata and uploading the Workspace ONE metadata file.

  1. Log into your Rubrik Appliance.
  2. Go to the Gear icon in the upper right corner and select Users
  3. Select Identity Providers
  4. Click Add Identity Provider
  5. Provide a name in the Identity Provider Name field.
  6. Click the folder icon next to the Identity Provider Metadata field.
  7. Upload the idp.xml file we saved in the last step.
  8. Select the Service Provider Host Address Option.  This can be a DNS Name or the cluster floating IP depending on your environment configuration.  For this setup, we will be doing a DNS Name.
  9. Enter the DNS name in the field.
  10. Click Download Rubrik Metadata.
  11. Click Add.
  12. Open the Rubrik Metadata file in a text editor.  We will need this in the next step.

Workspace ONE Configuration

Now that the Rubrik side is configured, we need to create our Workspace ONE catalog entry.  The steps for this are:

  1. Log into your Workspace One Access administrator panel.
  2. Go to the Catalog tab.
  3. Click New to create a new App Catalog entry.
  4. Provide a name for the new Rubrik entry in the App Catalog.
  5. If you have an icon to use, click Select File and upload the icon for the application.
  6. Click Next.
  7. In the Authentication Type field, select SAML 2.0
  8. In Configuration, select URL/XML
  9. Copy the contents of the Rubrik Metadata XML file.
  10. Paste them into the URL/XML textbox.
  11. Scroll down to the Advanced Properties section.
  12. Expand Advanced Properties.
  13. Click the toggle switch under Sign Assertion
  14. Click Next.
  15. Select an Access Policy to use for this application. This will determine the rules used for authentication and access to the application.
  16. Click Next.
  17. Review the Summary of the Configuration
  18. Click Save and Assign
  19. Select the users or groups that will have access to this application
  20. Click Save.

Authorizing SAML Users in Rubrik

The final configuration step is to authorize Workspace ONE users within Rubrik and assign them to a role.  This step only works with individual users.  While testing, I couldn’t find a way to have it accept users based on a group or SAML attribute.

The steps for authorizing Workspace ONE users are:

  1. Log into your Rubrik Appliance.
  2. Go to the Gear icon in the upper right corner and select Users
  3. Select Users and Groups
  4. Click Grant Authorization
  5. Select the directory.
  6. Select User and enter the username that the user will use when signing into Workspace ONE.
  7. Click Continue.
  8. Select the role to assign to the user and click Assign.
  9. The SAML user has been authorized to access the Rubrik appliance through SSO.

Testing SAML Authentication and Troubleshooting

So now that we have our authentication profiles configured in both Rubrik and Workspace One Access, we need to test it to ensure our admin users can sign in.  In order to test access, you need to sign out of your Rubrik appliance.  When you return to the login screen, you’ll see that it has changed slightly, and there will be a large “Sign in with SSO” button above the username field.  When pressed, users will be directed to Workspace ONE and authenticated.

While Rubrik may be listed in the Workspace ONE Access App Catalog, launching from the app catalog will just bring you to the login page.  I could not figure out how to get IdP-initiated logins to work, and some of my testing resulted in error pages that showed metadata errors.

Integrating Microsoft Azure MFA with VMware Unified Access Gateway 3.8

One of the common questions I see is around integrating VMware Horizon with Microsoft Azure MFA. Natively, Horizon only supports RSA and RADIUS-based multifactor authentication solutions. While it is possible to configure Azure MFA to utilize RADIUS, it requires Network Policy Server and a special plugin for the integration (Another option existed – Azure MFA Server, but that is no longer available for new implementations as of July 2019).

Earlier this week, VMware released Horizon 7.11 with Unified Access Gateway 3.8. The new UAG contains a pretty cool new feature – the ability to utilize SAML-based multifactor authentication solutions.  SAML-based multifactor authentication allows Horizon to consume a number of modern cloud-based solutions.  This includes Microsoft’s Azure MFA solution.

And…you don’t have to use TrueSSO in order to implement this.

If you’re interested in learning about how to configure Unified Access Gateways to utilize Okta for MFA, as well as tips around creating web links for Horizon applications that can be launched from an MFA portal, you can read the operational tutorial that Andreano Lanusso wrote.  It is currently available on the VMware Techzone site.

Prerequisites

Before you can configure Horizon to utilize Azure MFA, there are a few prerequisites that will need to be in place.

First, you need to have licensing that allows your users to utilize the Azure MFA feature.  Microsoft bundles this into their Office 365 and Microsoft 365 licensing SKUs as well as their free version of Azure Active Directory.

Note: Not all versions of Azure MFA have the same features and capabilities. I have only tested with the full version of Azure MFA that comes with the Azure AD Premium P1 license.  I have not tested with the free tier or MFA for Office 365 feature-level options.

Second, you will need to make sure that you have Azure AD Connect installed and configured so that users are syncing from the on-premises Active Directory into Azure Active Directory.  You will also need to enable Azure MFA for users or groups of users and configure any MFA policies for your environment.

If you want to learn more about configuring the cloud-based version of Azure MFA, you can view the Microsoft documentation here.

There are a few URLs that we will need when configuring single sign-on in Azure AD.  These URLs are built from the public FQDN of your Unified Access Gateway:

  • Identifier (Entity ID): https://<horizon uag fqdn>/portal
  • Reply URL and Sign on URL: https://<horizon uag fqdn>/portal/samlsso

Case sensitivity matters here.  If you put caps in the SAML URL, you may receive errors when uploading your metadata file.

Configuring Horizon UAGs as a SAML Application in Azure AD

The first thing we need to do is create an application in Azure Active Directory.  This will allow the service to act as a SAML identity provider for Horizon.  The steps for doing this are:

  1. Sign into your Azure Portal.  If you just have Office 365, you do have Azure Active Directory, and you can reach it from the Office 365 Portal Administrator console.
  2. Go into the Azure Active Directory blade.
  3. Click on Enterprise Applications.
  4. Click New Application.
  5. Select Non-Gallery Application.
  6. Give the new application a name.
  7. Click Add.
  8. Before we can configure our URLs and download metadata, we need to assign users to the app.  Click 1. Assign Users and Groups
  9. Click Add User
  10. Click where it says Users and Groups – None Selected.
  11. Select the Users or Groups that will have access to Horizon. There is a search box at the top of the list to make finding groups easier in large environments.
    Note: I recommend creating a large group to nest your Horizon user groups in to simplify setup.
  12. Click Add.
  13. Click Overview.
  14. Click 2. Set up single sign on.
  15. In the section labeled Basic SAML Configuration, click the pencil in the upper right corner of the box. This will allow us to enter the URLs we use for our SAML configuration.
  16. Enter the following items.  Please note that the URL paths are case sensitive, and putting in PORTAL, Portal, or SAMLSSO will prevent this from being set up successfully:
    1. In the Identifier (Entity ID) field, enter your portal URL.  It should look like this:
      https://horizon.uag.url/portal
    2. In the Reply URL (Assertion Consumer Service URL) field, enter your uag SAML SSO URL.  It should look like this:
      https://horizon.uag.url/portal/samlsso
    3. In the Sign on URL, enter your uag SAML SSO URL.  It should look like this:
      https://horizon.uag.url/portal/samlsso
  17. Click Save.
  18. Review your user attributes and claims, and adjust as necessary for your environment. Horizon 7 supports logging in with a user principal name, so you may not need to change anything.
  19. Click the download link for the Federation XML Metadata file.

We will use our metadata file in the next step to configure our identity provider on the UAG.

Once the file is downloaded, the Azure AD side is configured.

Configuring the UAG

Once we have completed the Azure AD configuration, we need to configure our UAGs to utilize SAML for multifactor authentication.

In order to do these steps, you will need to have an admin password set on the UAG appliance in order to access the Admin interface.  I recommend doing the initial configuration and testing on a non-production appliance.  Once testing is complete, you can either manually apply the settings to the production UAGs or download the configuration INI file and copy the SAML configuration into the production configuration files for deployment.

Note: You can configure SAML on the UAGs even if you aren’t using TrueSSO.  If you are using this feature, you may need to make some configuration changes on your connection servers.  I do not use TrueSSO in my lab, so I have not tested Azure MFA on the UAGs with TrueSSO.

The steps for configuring the UAG are:

  1. Log into the UAG administrative interface.
  2. Click Configure Manually.
  3. Go to the Identity Bridging Settings section.
  4. Click the gear next to Upload Identity Provider Metadata.
  5. Leave the Entity ID field blank.  This will be generated from the metadata file you upload.
  6. Click Select.
  7. Browse to the path where the Azure metadata file you downloaded in the last section is stored.  Select it and click Open.
  8. If desired, enable the Always Force SAML Auth option.
    Note: SAML-based MFA acts differently than RADIUS and RSA authentication. The default behavior has you authenticate with the provider, and the provider places an authentication cookie on the machine. Subsequent logins may redirect users from Horizon to the cloud MFA site, but they may not be forced to reauthenticate. Enabling the Always Force SAML Auth option makes SAML-based cloud MFA providers behave similarly to the existing RADIUS and RSA-based multifactor solutions by requiring reauthentication on every login. Please also be aware that things like Conditional Access Policies in Azure AD and Azure AD-joined Windows 10 devices may impact the behavior of this solution.
  9. Click Save.
  10. Go up to Edge Services Settings and expand that section.
  11. Click the gear icon next to Horizon Edge Settings.
  12. Click the More button to show all of the Horizon Edge configuration options.
  13. In the Auth Methods field, select one of the two options to enable SAML:
    1. If you are using TrueSSO, select SAML
    2. If you are not using TrueSSO, select SAML and Passthrough
  14. Select the identity provider that will be used.  For Azure MFA, this will be the one labeled https://sts.windows.net
  15. Click Save.

SAML authentication with Azure MFA is now configured on the UAG, and you can start testing.

User Authentication Flows when using SAML

Compared to RADIUS and RSA, user authentication behaves a little differently when using SAML-based MFA.  When a user connects to a SAML-integrated environment, they are not prompted for their RADIUS or RSA credentials right away.

After connecting to the Horizon environment, the user is redirected to the website for their authentication solution.  They will be prompted to authenticate with this solution with their primary and secondary authentication options.  Once this completes, the Horizon client will reopen, and the user will be prompted for their Active Directory credentials.

You can configure the UAG to use the same username for Horizon as the one that is used with Azure AD, but the user will still be prompted for a password unless TrueSSO is configured.

Configuring SAML with Workspace ONE for AVI Networks

Earlier this year, VMware closed the acquisition of Avi Networks.  Avi Networks provides an application delivery controller solution designed for the multi-cloud world. While many ADC solutions aggregate the control plane and data plane on the same appliance, Avi Networks takes a different approach.  They utilize a management appliance for the control plane and multiple service engine appliances that handle load balancing, web application firewall, and other services for the data plane.

Integrating Avi Networks with Workspace ONE Access

The Avi Networks Controller appliance offers multiple options for integrating the management console into enterprise environments for authentication management.  One of the options that is available is SAML.  This enables integration into Workspace ONE Access and the ability to take advantage of the App Catalog, network access restrictions and step-up authentication when administrators sign in.

Before I walk through the steps for integrating Avi Networks into Workspace ONE Access via SAML, I want to thank my colleague Nick Robbins.  He provided most of the information that enabled this integration to be set up in my lab environments and this blog post.  Thank you, Nick!

There are three options that can be selected for the URL when configuring SAML integration for Avi Networks.  The first option is to use the cluster VIP address.  This is a shared IP address that is used by all management nodes when they are clustered.  The second option is to use a fully-qualified domain name.

These options determine the SSO URL and entity ID that are used in the SAML configuration, and they are automatically generated by the system.

The third option is to use a user-provided entity ID.

For this walkthrough, we are going to use a fully-qualified domain name.

Prerequisites

Before we can begin configuring SAML integration, there are a few things we need to do.

First, we need to make sure a DNS record is in place for our Avi Controller.  This will be used for the fully-qualified domain name that is used when signing into our system.

Second, we need to get the Workspace One Access IDP metadata.  Avi does not import this automatically by providing a link to the idp.xml file, so we need to download this file.  The steps for retrieving the metadata are:

  1. Log into your Workspace One Access administrator console.
  2. Go to App Catalog
  3. Click Settings
  4. Under SaaS Apps, click SAML Metadata
  5. Right click on Identity Provider Metadata and select Save Link As.  Save the file as idp.xml
  6. Open the idp.xml file in your favorite text editor.  We will need to copy this into the Avi SAML configuration in the next step.

Avi Networks Configuration

The first thing that needs to be done is to configure an authentication profile to support SAML on the Avi Networks controller.  The steps for this are:

  1. Log into your Avi Networks controller as your administrative user.
  2. Go to Templates -> Security -> Auth Profile.
  3. Click Create to create a new profile.
  4. Provide a name for the profile in the Name field.
  5. Under Type, select SAML.

  6. Copy the Workspace ONE SAML idp information into the idp Metadata field.  This information is located in the idp.xml file that we saved in the previous section.
  7. Select Use DNS FQDN
  8. Fill in your organizational details.
  9. Enter the fully-qualified domain name that will be used for the SAML configuration in the FQDN field.
  10. Click Save

Next, we will need to collect some of our service provider metadata.  Avi Networks does not generate an xml file that can be imported into Workspace ONE Access, so we will need to enter our metadata manually.  There are three things we need to collect:

  • Entity ID
  • SSO URL
  • Signing Certificate

We will get the Entity ID and SSO URL from the Service Provider Settings screen.  Although this screen also has a field for the signing certificate, it doesn’t seem to populate anything in my lab, so we will have to get the certificate information from the SSL/TLS Certificate tab.

The steps for getting into the Service Provider Settings are:

  1. Go to Templates -> Security -> Auth Profile.
  2. Find the authentication profile that you created.
  3. Click on the Verify box on the far right side of the screen.  This is the square box with a question mark in it.
  4. Copy the Entity ID and SSO URL and paste them into your favorite text editor.  We will be using these in the next step.
  5. Close the Service Provider Settings screen by clicking the X in the upper right-hand corner.

Next, we need to get the signing certificate.  This is the System-Default-Portal-Cert.  The steps to get it are:

  1. Go to Templates -> Security -> SSL/TLS Certificates.
  2. Find the System-Default-Portal-Cert.
  3. Click the Export button.  This is the circle with the down arrow on the right side of the screen.
  4. The certificate information is in the lower box labeled certificate.
  5. Click the Copy to Clipboard button underneath the certificate box.
  6. Paste the certificate in your favorite text editor.  We will also need this in the next step.
  7. Click Done to close the Export Certificate screen.

Configuring the Avi Networks Application Catalog item in Workspace One Access

Now that we have our SAML profile created in the Avi Networks Controller, we need to create our Workspace ONE catalog entry.  The steps for this are:

  1. Log into your Workspace One Access admin interface.
  2. Go to the Catalog tab.
  3. Click New to create a new App Catalog entry.
  4. Provide a name for the new Avi Networks entry in the App Catalog.
  5. If you have an icon to use, click Select File and upload the icon for the application.
  6. Click Next.
  7. Enter the following details.  For the next couple of steps, you need to remain on the Configuration screen.  Don’t click next until you complete all of the configuration items:
    1. Authentication Type: SAML 2.0
    2. Configuration Type: Manual
    3. Single Sign-On URL: Use the single sign-on URL that you copied from the Avi Networks Service Provider Settings screen.
    4. Recipient URL: Same as the Single Sign-On URL
    5. Application ID: Use the Entity ID setting that you copied from the Avi Networks Service Provider Settings screen.
    6. Username Format: Unspecified
    7. Username Value: ${user.email}
    8. Relay State URL: FQDN or IP address of your appliance
  8. Expand Advanced Properties and enter the following values:
    1. Sign Response: Yes
    2. Sign Assertion: Yes
    3. Copy the value of the System-Default-Portal-Cert certificate that you copied in the previous section into the Request Signature field.
    4. Application Login URL: FQDN or IP address of your appliance.  This will enable SP-initiated login workflows.
  9. Click Next.
  10. Select an Access Policy to use for this application.  This will determine the rules used for authentication and access to the application.
  11. Click Next.
  12. Review the summary of the configuration.
  13. Click Save and Assign
  14. Select the users or groups that will have access to this application and the deployment type.
  15. Click Save.

Enabling SAML Authentication in Avi Networks

In the last couple of steps, we created our SAML profile in Avi Networks and a SAML catalog item in Workspace One Access.  However, we haven’t actually turned SAML on yet or assigned any users to roles.  In this next section, we will enable SAML and grant superuser rights to SAML users.

Note: It is possible to configure more granular role-based access control by adding application parameters into the Workspace One Access catalog item and then mapping those parameters to different roles in Avi Networks.  This walkthrough will just provide a simple setup, and deeper RBAC integration will be covered in a possible future post.

  1. Log into your Avi Networks Management Console.
  2. Go to Administration -> Settings -> Authentication/Authorization
  3. Click the pencil icon to edit the Authentication/Authorization settings.
  4. Under Authentication, select Remote.
  5. Under Auth Profile, select the SAML profile that you created earlier.
  6. Make sure the Allow Local User Login box is checked.  If this box is not checked, and there is a configuration issue, you will not be able to log back into the controller.
  7. Click Save.
  8. After saving the authentication settings, some new options will appear in the Authentication/Authorization screen to enable role mapping.
  9. Click New Mapping.
  10. For Attribute, select Any
  11. Check the box labelled Super User
  12. Click Save.

SAML authentication is now configured on the Avi Networks Management appliance.

Testing SAML Authentication and Troubleshooting

So now that we have our authentication profiles configured in both Avi Networks and Workspace One Access, we need to test it to ensure our admin users can sign in.  There are two tests that should be run.  The first is launching Avi Networks from the Workspace One Access app catalog, and the second is doing an SP-initiated login by going to your Avi Networks URL.

In both cases, you should see a Workspace One Access authentication screen for login before being redirected to the Avi Networks management console.

In my testing, however, I had some issues in one of my labs where I would get a JSON error when attempting SAML authentication.  If you see this error, and you validate that all of your settings match, then reboot the appliance.  This solved the issue in my lab.

If SAML authentication breaks, and you need to gain access to the appliance management interface with a local account, then you need to provide a different URL.  That URL is https://avi-management-fqdn-or-ip/#!/login?local=1.

Minimal Touch VDI Image Building With MDT, PowerCLI, and Chocolatey

Recently, Mark Brookfield posted a three-part series on the process he uses for building Windows 10 images in HobbitCloud (Part 1, Part 2, Part 3). Mark has put together a great series of posts that explain the tools and the processes that he is using in his lab, and it has inspired me to revisit this topic and talk about the process and tooling I currently use in my lab and the requirements and decisions that influenced this design.

Why Automate Image Building?

Hand-building images is a time-intensive process.  It is also potentially error-prone as it is easy to forget applications and specific configuration items, requiring additional work or even building new images depending on the steps that were missed.  Incremental changes that are made to templates may not make it into the image building documentation, requiring additional work to update the image after it has been deployed.

Automation helps solve these challenges and provide consistent results.  Once the process is nailed down, you can expect consistent results on every build.  If you need to make incremental changes to the image, you can add them into your build sequence so they aren’t forgotten when building the next image.

Tools in My Build Process

When I started researching my image build process back in 2017, I was looking to find a way to save time and provide consistent results on each build.  I wanted a tool that would allow me to build images with little interaction with the process on my part.  But it also needed to fit into my lab.  The main tools I looked at were Packer with the JetBrains vSphere plugin and Microsoft Deployment Toolkit (MDT).

While Packer is an incredible tool, I ended up selecting MDT as the main tool in my process.  My reason for selecting MDT has to do with NVIDIA GRID.  The vSphere Plugin for Packer does not currently support provisioning machines with vGPU, so using this tool would have required manual post-deployment work.

One nice feature of MDT is that it can utilize a SQL Server database for storing details about registered machines such as the computer name, the OU where the computer object should be placed, and the task sequence to run when booting into MDT.  This allows a new machine to be provisioned in a zero-touch fashion, and the database can be populated from PowerShell.

Unlike Packer, which can create and configure the virtual machine in vCenter, MDT only handles the operating system deployment.  So I needed some way to create and configure the VM in vCenter with a vGPU profile.  The best method of doing this is using PowerCLI.  While there are no native cmdlets for managing vGPUs or other Shared PCI objects in PowerCLI, there are ways to utilize vSphere extension data to add a vGPU profile to a VM.

While MDT can install applications as part of a task sequence, I wanted something a little more flexible.  Typically, when a new version of an application is added, the way I had structured my task sequences required them to be updated to utilize the newer version.  The reason for this is that I wasn’t using Application Groups for certain applications that were going into the image, mainly the agents that were being installed, as I wanted to control the install order and manage reboots. (Yes…I may have been using this wrong…)

I wanted to reduce my operational overhead when applications were updated so I went looking for alternatives.  I ended up settling on using Chocolatey to install most of the applications in my images, with applications being hosted in a private repository running on the free edition of ProGet.
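
As a rough illustration, an agent install step in a task sequence boils down to a single command like the one below.  The package name and feed URL are placeholders for my internal ProGet repository:

    choco install horizon-agent -y --source="https://proget.lab.local/nuget/internal"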

My Build Process Workflow

My build workflow consists of the following steps, with one branch for GPU-enabled machines:

  1. Create a new VM in vCenter
  2. Configure VM options such as memory reservations and video RAM
  3. GPU Flag Only – Add a virtual GPU with the correct profile to the VM.
  4. Identify Task Sequence that will be used.  There are different task sequences for GPU and non-GPU machines and logic in the script to create the task sequence name. Various parameters that are called when running the script help define the logic.
  5. Create a new computer entry in the MDT database.  This includes the computer name, MAC address, task sequence name, role, and a few other variables.  This step is performed in PowerShell using the MDTDB PowerShell module.
  6. Power on the VM. This is done using PowerCLI. The VM will PXE boot to a Windows PE environment configured to point to my MDT server.

Build Process

After the VM is powered on and boots to Windows PE, the rest of the process is hands off. All of the MDT prompts, such as the prompt for a computer name or the task sequence, are disabled, and the install process relies on the database for things like computer name and task sequence.

From this point forward, it takes about forty-five minutes to an hour to complete the task sequence. MDT installs Windows 10 and any drivers like the VMXNET3 driver, installs Windows Updates from an internal WSUS server, installs any agents or applications, such as VMware Tools, the Horizon Agent, and the UEM DEM agent, silently runs the OSOT tool, and stamps the registry with the image build date.
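
The registry stamp at the end of the sequence is nothing fancy; a minimal sketch of that step looks like this (the key and value names are my own convention, not something MDT provides):

    reg add "HKLM\SOFTWARE\ImageBuild" /v BuildDate /t REG_SZ /d "%DATE%" /f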

Future Direction and Process Enhancements

While this process works well today, it is a bit cumbersome. Each new Windows 10 release requires a new task sequence for version control. It is also difficult to work tools like the OSDeploy PowerShell scripts by David Segura (used for slipstreaming updates into a Windows 10 WIM) into the process. While there are ways to automate MDT, I’d rather invest time in automating builds using Packer.

There are a couple of post-deployment steps that I would like to integrate into my build process as well. I would like to utilize Pester to validate the image build after it completes, and then if it passes, execute a shutdown and VM snapshot (or conversion to template) so it is ready to be consumed by Horizon. My plan is to utilize a tool like Jenkins to orchestrate the build pipeline and do something similar to the process that Mark Brookfield has laid out.

The ideal process that I am working towards will have multiple workflows to manage various aspects of the process. Some of these are:

1. A process for automatically creating updated Windows 10 ISOs with the latest Windows Updates using the OSDeploy PowerShell module.

2. A process for creating Chocolatey package updates and submitting them to my ProGet repository for applications managed by Chocolatey.

3. A process to build new images when Windows 10 or key applications (such as VMware Tools, the Horizon Agent, or NVIDIA Drivers) are updated. This process will ideally use Packer as the build tool to simplify management. The main dependency for this step is adding NVIDIA GRID support for the JetBrains Packer vSphere Plug-in.

So this is what I’m doing for image builds in my lab, and the direction I’m planning to go.

Horizon 7 Administration Console Changes

Over the last couple of releases, VMware has included an HTML5-based Horizon Console for managing Horizon 7.  Each release has brought this console closer to feature parity with the Flash-based Horizon Administrator console that is currently used by most administrators.

With the end-of-life date rapidly approaching for Adobe Flash, and some major browsers already making Flash more difficult to enable and use, there will be some changes coming to Horizon Administration.

  • The HTML5 console will reach feature parity with the Flash-based Horizon Administrator in the next release.  This includes a dashboard, which is one of the major features missing from the HTML5 console.  Users will be able to access the HTML5 console using the same methods that are used with the current versions of Horizon 7.
  • In the releases that follow the next Horizon release, users connecting to the current Flash-based console will get a page that provides them a choice to either go to the HTML5 console or continue to the Flash-based console.  This is similar to the landing page for vCenter where users can choose which console they want to use.

More information on the changes will be coming as the next version of Horizon is released.

Using Amazon RDS with Horizon 7 on VMware Cloud on AWS

Since I joined VMware back in November, I’ve spent a lot of time working with VMware Cloud on AWS – particularly around deploying Horizon 7 on VMC in my team’s lab.  One thing I hadn’t tried until recently was utilizing Amazon RDS with Horizon.

No, we’re not talking about the traditional Remote Desktop Session Host role. This is the Amazon Relational Database Service, and it will be used as the Event Database for Horizon 7.

After building out a multisite Horizon 7.8 deployment in our team lab, we needed a database server for the Horizon Events Database.  Rather than deploy and maintain a SQL Server in each lab, I decided to take advantage of one of the benefits of VMware Cloud on AWS and use Amazon RDS as my database tier.

This isn’t the first time I’ve used native Amazon services with Horizon 7.  I’ve previously written about using Amazon Route 53 with Horizon 7 on VMC.

Before we begin, I want to call out that this might not be 100% supported.  I can’t find anything in the documentation, KB58539, or the readme files that explicitly state that RDS is a supported database platform.  RDS is also not listed in the Product Interoperability Matrix.  However, SQL Server 2017 Express is supported, and there are minimal operational impacts if this database experiences an outage.

What Does a VDI Solution Need With A Database Server?

VMware Horizon 7 utilizes a SQL Server database for tracking user session data such as logins and logouts and auditing administrator activities that are performed in the Horizon Administrator console. Unlike on-premises environments where there are usually existing database servers that can host this database, deploying Horizon 7 on VMware Cloud on AWS would require a new database server for this service.

Amazon RDS is a database-as-a-service offering built on the AWS platform. It provides highly scalable and performant database services for multiple database engines including Postgres, Microsoft SQL Server and Oracle.

Using Amazon RDS for the Horizon 7 Events Database

There are a couple of steps required to prepare our VMware Cloud on AWS infrastructure to utilize native AWS services. While the initial deployment includes connectivity to a VPC that we define, there is still some networking that needs to be put into place to allow these services to communicate. We’ll break this work down into three parts:

  1. Preparing the VMC environment
  2. Preparing the AWS VPC environment
  3. Deploying and Configuring RDS and Horizon

Preparing the VMC Environment

The first step is to prepare the VMware Cloud on AWS environment to utilize native AWS services. This work takes place in the VMware Cloud on AWS management console and consists of two main tasks. The first is to document the availability zone that our VMC environment is deployed in. Native Amazon services should be deployed in the same availability zone to reduce any networking costs. The second is to configure firewall rules on the VMC Compute Gateway to allow traffic to pass to the VPC.

The steps for preparing the VMC environment are:

  1. Log into https://cloud.vmware.com
  2. Click Console
  3. In the My Services section, select VMware Cloud on AWS
  4. In the Software-Defined Data Centers section, find the VMware Cloud on AWS environment that you are going to manage and click View Details.
  5. Click the Networking and Security tab.
  6. In the System menu, click Connected VPC. This will display information about the Amazon account that is connected to the environment.
  7. Find the VPC subnet. This will tell you what AWS Availability Zone the VMC environment is deployed in. Record this information as we will need it later.

Now that we know which Availability Zone we will be deploying our database into, we will need to create our firewall rules. The firewall rules will allow our Connection Servers and other VMs to connect to any native Amazon services that we deploy into our connected VPC.

This next section picks up from the previous steps, so you should be in the Networking and Security tab of the VMC console. The steps for configuring our firewall rules are:

  1. In the Security Section, click on Gateway Firewall.
  2. Click Compute Gateway
  3. Click Add New Rule
  4. Create the new firewall rule by filling in the following fields:
    1. In the Name field, provide a descriptive name for the firewall rule.
    2. In the Source field, click Select Source. Select the networks or groups and click Save.
      Note: If you do not have any groups, or you don’t see the network you want to add to the firewall, you can click Create New Group to create a new Inventory Group.
    3. In the Destination field, click Select Destination. Select the Connected VPC Prefixes option and click Save.
    4. In the Services field, click Select Services. Select Any option and click Save.
    5. In the Applied To field, remove the All Interfaces option and select VPC Interfaces.
  5. Click Publish to save and apply the firewall rule.

There are two reasons that the VMC firewall rule is configured this way. First, Amazon assigns IP addresses at service creation, so the destination addresses aren’t known until the services are deployed. Second, this firewall rule can be reused for other AWS services, and access to those services can be controlled using AWS Security Groups instead.

The VMC gateway firewall does allow for more granular rule sets. They are just not going to be utilized in this walkthrough.

Preparing the AWS Environment

Now that the VMC environment is configured, the RDS service needs to be provisioned. There are a couple of steps to this process.

First, we need to configure a security group that will be used for the service.

  1. Log into your Amazon Console.
  2. Change to the region where your VMC environment is deployed.
  3. Go into the VPC management interface. This is done by going to Services and selecting VPC.
  4. Select Security Groups
  5. Click Create Security Group
  6. Give the security group a name and description.
  7. Select the VPC where the RDS Services will be deployed.
  8. Click Create.
  9. Click Close.
  10. Select the new Security Group.
  11. Click the Inbound Rules tab.
  12. Click Edit Rules
  13. Click Add Rule
  14. Fill in the following details:
    1. Type – Select MS SQL
    2. Source – Select Custom and enter the IP Address or Range of the Connection Servers in the next field
    3. Description – Description of the server or network
    4. Repeat as Necessary
  15. Click Save Rules

This security group will allow our Connection Servers to access the database services that are being hosted in RDS, as shown in the sketch below.
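
If you prefer to script this step, here is a minimal boto3 sketch of the same security group setup. The region, VPC ID, and Connection Server network are placeholder values for illustration; substitute the details of your connected VPC.

    # Minimal sketch: create a security group that allows the Horizon
    # Connection Servers to reach SQL Server (TCP 1433) in the connected VPC.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-west-2")  # region of the VMC SDDC

    # Placeholder VPC ID - use the VPC that is connected to your SDDC
    sg = ec2.create_security_group(
        GroupName="horizon-events-db",
        Description="Allow Horizon Connection Servers to reach RDS SQL Server",
        VpcId="vpc-0123456789abcdef0",
    )

    # Inbound rule: MS SQL (TCP 1433) from the Connection Server network
    ec2.authorize_security_group_ingress(
        GroupId=sg["GroupId"],
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 1433,
            "ToPort": 1433,
            "IpRanges": [{
                "CidrIp": "192.168.110.0/24",  # placeholder Connection Server subnet
                "Description": "Horizon Connection Servers",
            }],
        }],
    )
    print("Created security group:", sg["GroupId"])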

Once the security group is created, the RDS instance can be deployed. The steps for deploying the RDS instance are below, and a scripted sketch of the same deployment follows them:

  1. Log into your Amazon Console.
  2. Change to the region where your VMC environment is deployed.
  3. Go into the RDS management interface. This is done by going to Services and selecting RDS.
  4. Click Create Database.
  5. Select Microsoft SQL Server.
  6. Select the version of SQL Server that will be deployed. For this walkthrough, SQL Server Express will be used.

    Note: There is a SQL Server Free Tier offering that can be used if this database will only be used for the Events Database. The Free Tier offering is only available with SQL Server Express. If you only want to use the Free Tier offering, select the Only enable options eligible for RDS Free Tier Usage option.

  7. Click Next.
  8. Specify the details for the RDS Instance.
    1. Select License Model, DB Engine Version, DB instance class, Time Zone, and Storage.
      Note: Not all options are available if RDS Free Tier is being utilized.
    2. Provide a DB Instance Identifier. This must be unique for all RDS instances you own in the region.
    3. Provide a master username. This will be used for logging into the SQL Server instance with administrative rights.
    4. Provide and confirm the master user password.
    5. Click Next.
  9. Configure the Networking and Security Options for the RDS Instance.
      1. Select the VPC that is attached to your VMC instance.
      2. Select No under Public Accessibility.
        Note: This refers to access to the RDS instance via a public IP address. You can still access the RDS instance from VMC since routing rules and firewall rules will allow the communication.
      3. Select the Availability Zone that the VMC tenant is deployed in.
      4. Select Choose Existing VPC Security Groups
      5. Remove the default security group by clicking the X.
      6. Select the security group that was created for accessing the RDS instance.

  10. Select Disable Performance Insights.
  11. Select Disable Auto Minor Version Upgrade.
  12. Click Create Database.

Once Create Database is clicked, the deployment process starts. This takes a few minutes to provision. After provisioning completes, the Endpoint URL for accessing the instance will be available in the RDS Management Console. It’s also important to validate that the instance was deployed in the correct availability zone. While testing this process, some database instances were created in an availability zone that was different from the one selected during the provisioning process.

Make sure you copy your Endpoint URL. You will need this in the next step to configure the database and Horizon.
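
The same deployment can also be scripted. Here is a minimal boto3 sketch under a few assumptions: a DB subnet group already exists for the connected VPC, and the identifiers, credentials, instance class, and availability zone are placeholders that you would replace with your own values.

    # Minimal sketch of the RDS deployment described above.
    # All names, credentials, and IDs are placeholders for illustration.
    import boto3

    rds = boto3.client("rds", region_name="us-west-2")  # region of the VMC SDDC

    rds.create_db_instance(
        DBInstanceIdentifier="horizon-events",
        Engine="sqlserver-ex",                     # SQL Server Express
        LicenseModel="license-included",
        DBInstanceClass="db.t2.micro",             # Free Tier eligible class
        AllocatedStorage=20,                       # GB
        MasterUsername="horizonadmin",
        MasterUserPassword="Replace-With-A-Real-Password1!",
        DBSubnetGroupName="connected-vpc-subnets",       # assumed to already exist
        VpcSecurityGroupIds=["sg-0123456789abcdef0"],    # group created earlier
        AvailabilityZone="us-west-2a",             # same AZ as the VMC SDDC
        PubliclyAccessible=False,
        EnablePerformanceInsights=False,
        AutoMinorVersionUpgrade=False,
    )

    # Wait for the instance, then confirm the endpoint and availability zone.
    rds.get_waiter("db_instance_available").wait(DBInstanceIdentifier="horizon-events")
    db = rds.describe_db_instances(DBInstanceIdentifier="horizon-events")["DBInstances"][0]
    print(db["Endpoint"]["Address"], db["Endpoint"]["Port"], db["AvailabilityZone"])

Printing the endpoint and availability zone at the end is a convenient way to capture the Endpoint URL and to double-check that the instance landed in the zone you selected.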

Creating the Horizon Events Database

The RDS instance that was provisioned in the last step is an empty SQL Server instance. There are no databases or SQL Server user accounts, and these will need to be created in order to use this server with Horizon. A tool like SQL Server Management Studio is required to complete these steps, and we will be using SSMS for this walkthrough. The instance must be accessible from the machine that has the database management tools installed.
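
Before launching SSMS, it can be worth confirming that the RDS endpoint is reachable through the VMC firewall rule and the AWS security group. A quick check from the Connection Server or the management machine might look like this (the endpoint is a placeholder; use the Endpoint URL you copied earlier):

    import socket

    # Placeholder endpoint - use the Endpoint URL copied from the RDS console
    endpoint = "horizon-events.xxxxxxxxxxxx.us-west-2.rds.amazonaws.com"

    try:
        # RDS SQL Server listens on TCP 1433 by default
        with socket.create_connection((endpoint, 1433), timeout=5):
            print("TCP 1433 is reachable - firewall and security group look correct")
    except OSError as err:
        print("Cannot reach the RDS instance:", err)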

The Horizon Events Database does not utilize Windows Authentication, so a SQL Server login will be required in addition to the database itself. That login needs db_owner rights on the database so Horizon can provision the tables when the Events Database is configured for the first time.

The steps for configuring the database server are below, followed by a scripted sketch of the same commands:

  1. Log into the new RDS instance using SQL Server Management Studio with the Master Username and Password.
  2. Right Click on Databases
  3. Select New Database
  4. Enter HorizonEventsDB in the Database Name Field.
  5. Click OK.
  6. Expand Security.
  7. Right click on Logins and select New Login.
  8. Enter a username for the database.
  9. Select SQL Server Authentication
  10. Enter a password.
  11. Uncheck Enforce Password Policy
  12. Change the Default Database to HorizonEventsDB
  13. In the Select A Page section, select User Mapping
  14. Check the box next to HorizonEventsDB
  15. In the Database Role Membership section, select db_owner
  16. Click OK
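
For anyone who would rather script this, here is a minimal Python/pyodbc sketch of the same work. The driver name, endpoint, login name, and passwords are placeholders, and autocommit is enabled because CREATE DATABASE cannot run inside a transaction.

    # Minimal sketch of the SSMS steps above, run as the RDS master user.
    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=horizon-events.xxxxxxxxxxxx.us-west-2.rds.amazonaws.com,1433;"
        "UID=horizonadmin;PWD=Replace-With-A-Real-Password1!",
        autocommit=True,  # CREATE DATABASE cannot run inside a transaction
    )
    cur = conn.cursor()

    # Create the Events Database
    cur.execute("CREATE DATABASE HorizonEventsDB")

    # SQL Server login with the password policy check disabled and the
    # new database set as its default
    cur.execute(
        "CREATE LOGIN horizonevents WITH PASSWORD = 'Another-Placeholder1!', "
        "CHECK_POLICY = OFF, DEFAULT_DATABASE = HorizonEventsDB"
    )

    # Map the login into the database and grant db_owner so Horizon can
    # provision its tables on first configuration
    cur.execute("USE HorizonEventsDB")
    cur.execute("CREATE USER horizonevents FOR LOGIN horizonevents")
    cur.execute("ALTER ROLE db_owner ADD MEMBER horizonevents")

    conn.close()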

Configuring Horizon to Utilize RDS for the Events Database

Now that the RDS instance has been set up and configured, Horizon can be configured to use it for the Events Database. The steps for configuring this are:

  1. Log into Horizon Administrator.
  2. Expand View Configuration
  3. Click on Event Configuration
  4. Click Edit
  5. Enter the Database Server, Database Name, Username, and Password and click OK.

Benefits of Using RDS with Horizon 7

Combining VMware Horizon 7 with Amazon RDS is just one example of how you can utilize native Amazon services with VMware Cloud on AWS. This allows organizations to get the best of both worlds – easily consumed cloud services backing enterprise applications, on a platform that requires few changes to the applications themselves or to operational processes.

Utilizing native AWS services like RDS has additional benefits for EUC environments. When deploying Horizon 7 on VMware Cloud on AWS, the management infrastructure is typically deployed in the Software Defined Datacenter alongside the desktops. By utilizing native AWS services, resources that would otherwise be reserved for and consumed by servers can now be utilized for desktops.


More Than VDI…Let’s Make 2019 The Year of End-User Computing

It seems like the popular joke question at the beginning of every year is “Is this finally the year of VDI?”  The answer, of course, is always no.

Last week, Johan Van Amersfoort wrote a blog post about the virtues of VDI technology with the goal of making 2019 the “Year of VDI.”  Johan made a number of really good points about how the technology has matured to be able to deliver to almost every use case.

And today, Brian Madden published a response.  In his response, Brian stated that while VDI is a mature technology that works well, it is just a small subset of the broader EUC space.

I think both Brian and Johan make good points. VDI is a great set of technologies that have matured significantly since I started working with it back in 2011.  But it is just a small subset of what the EUC space has grown to encompass.

And since the EUC space has grown, I think it’s time to put the “Year of VDI” meme to bed and, in its place, start talking about 2019 as the “Year of End-User Computing.”

When I say that we should make 2019 the “Year of End-User Computing,” I’m not referring to some tipping point where EUC solutions become nearly ubiquitous. EUC projects, especially in large organizations, require a large time investment for discovery, planning, and testing, so you can’t just buy one and call it a day.

I’m talking about elevating the conversation around end-user computing so that as we go into the next decade, businesses can truly embrace the power and flexibility that smartphones, tablets, and other mobile devices offer.

Since the new year is only a few weeks away and the 2019 project budgets are most likely already allocated, any conversations you have around new end-user computing initiatives will likely be for 2020 and beyond.

So how can you get started with these conversations?

If you’re in IT management or managing end-user machines, you should start taking stock of your management technologies and remote access capabilities.  Then talk to your users.  Yes…talk to the users.  Find out what works well, what doesn’t, and what capabilities they’d like to have.  Talk to the data center teams and application owners to find out what is moving to the cloud or a SaaS offering.  And make sure you have a line of communication open with your security team because they have a vested interest in protecting the company and its data.

If you’re a consultant or service provider organization, you should be asking your customers about their end-user computing plans and talking to the end-user computing managers. It’s especially important to have these conversations when your customers talk about moving applications out to the cloud because moving the applications will impact the users, and as a trusted advisor, you want to make sure they get it right the first time.  And if they already have a solution, make sure the capabilities of that solution match the direction they want to go.

End-users are the “last mile of IT.” They’re at the edges of the network, consuming the resources in the data center. At the same time, life has a tendency to pull people away from the office, and we now have the technology to bridge the work-life gap.  As applications are moved from the on-premises data center to the cloud or SaaS platforms, a solid end-user computing strategy is critical to delivering business-critical services while providing those users with a consistently good experience.