More Than VDI…Let’s Make 2019 The Year of End-User Computing

It seems like the popular joke question at the beginning of every year is “Is this finally the year of VDI?”  The answer, of course, is always no.

Last week, Johan Van Amersfoort wrote a blog post about the virtues of VDI technology with the goal of making 2019 the “Year of VDI.”  Johan made a number of really good points about how the technology has matured to be able to deliver to almost every use case.

And today, Brian Madden published a response.  In his response, Brian stated that while VDI is a mature technology that works well, it is just a small subset of the broader EUC space.

I think both Brian and Johan make good points. VDI is a great set of technologies that have matured significantly since I started working with it back in 2011.  But it is just a small subset of what the EUC space has grown to encompass.

And since the EUC space has grown, I think it’s time to put the “Year of VDI” meme to bed and, in its place, start talking about 2019 as the “Year of End-User Computing.”

When I say that we should make 2019 the “Year of End-User Computing,” I’m not referring to some tipping point where EUC solutions become nearly ubiquitous. EUC projects, especially in large organizations, require a large time investment for discovery, planning, and testing, so you can’t just buy one and call it a day.

I’m talking about elevating the conversation around end-user computing so that as we go into the next decade, businesses can truly embrace the power and flexibility that smartphones, tablets, and other mobile devices offer.

Since the new year is only a few weeks away, and the 2019 project budgets are most likely allocated, conversations you have around any new end-user computing initiatives will likely be for 2020 and beyond.

So how can you get started with these conversations?

If you’re in IT management or managing end-user machines, you should start taking stock of your management technologies and remote access capabilities.  Then talk to your users.  Yes…talk to the users.  Find out what works well, what doesn’t, and what capabilities they’d like to have.  Talk to the data center teams and application owners to find out what is moving to the cloud or a SaaS offering.  And make sure you have a line of communication open with your security team because they have a vested interest in protecting the company and its data.

If you’re a consultant or service provider organization, you should be asking your customers about their end-user computing plans and talking to the end-user computing managers. It’s especially important to have these conversations when your customers talk about moving applications out to the cloud because moving the applications will impact the users, and as a trusted advisor, you want to make sure they get it right the first time.  And if they already have a solution, make sure the capabilities of that solution match the direction they want to go.

End users are the “last mile of IT.” They’re at the edges of the network, consuming the resources in the data center. At the same time, life has a tendency to pull people away from the office, and we now have the technology to bridge the work-life gap.  As applications are moved from the on-premises data center to the cloud or SaaS platforms, a solid end-user computing strategy is critical to delivering business-critical services while providing users with a consistently good experience.

Rubrik 5.0 “Andes” – A Refreshing Expansion

Since coming out of stealth in 2015, Rubrik has significantly expanded the features and capabilities of their core product.  They have shipped 13 major releases and added features for cloud providers and multi-tenant environments, along with Polaris, a software-as-a-service platform that provides enhanced cloud features and global management, and Radar, a service that detects and protects against ransomware attacks.

Today, Rubrik is announcing their 14th major release – Andes 5.0.  The Andes release builds on top of Rubrik’s feature-rich platform to further expand the capabilities of the product.  It expands support for both on-premises mission-critical applications and cloud-native applications, and it extends or enhances existing product features.

Key features of this release are:

Enhanced Oracle Protection

Oracle database backup support was introduced in the Rubrik 4.0 Alta release, and it was basically a scripted RMAN backup to a Rubrik-managed volume.  The Rubrik team has been hard at work enhancing this feature.

Rubrik is introducing a connector agent that can be installed on Oracle hosts or RAC nodes.  This connector will be able to discover instances and databases automatically, allowing SLAs to be applied directly to the hosts or to individual databases.

Simplified administration of Oracle backups isn’t the only Oracle enhancement in the Andes release.  The popular Live Mount feature has now been extended to Oracle environments.  If you’re not familiar with Live Mount, it is the ability to run a virtual machine or database directly from the backup.  This is useful for test and development environments or retrieving a single table or row that was accidentally dropped from a database.

Point-in-time recovery of Oracle environments is another new Oracle enhancement.  This feature allows Oracle administrators to restore their database to a specific point in time.  Rubrik will orchestrate the recovery of the database and replay log files to reach the specified point in time.

SAP HANA Protection

SAP HANA is the in-memory database that drives many SAP implementations.  In Andes 5.0, Rubrik offers an SAP-certified HANA backup solution that utilizes SAP’s BackInt APIs for HANA data protection.  This solution integrates with HANA Studio and SAP Cockpit.  The SAP HANA protection feature also supports point-in-time recovery and log management features.

HANA protection relies on another new feature of Andes called Elastic App Service.  Elastic App Service is a managed volume mounted on the Rubrik CDM, and it provides the same SLA-driven policies that other Rubrik objects get.

Microsoft SQL Server Enhancements

Rubrik has supported Microsoft SQL Server backups since the 3.0 release, and there has been a steady stream of enhancements to this feature.  The Andes release is no different, and it adds two major SQL Server backup features.

The first is the introduction of Changed Block Tracking for SQL Server databases. This feature will act similarly to the CBT function provided in VMware vSphere.  The benefit of this feature is that the Rubrik backup service can now look at the database change file to determine what blocks need to be backed up rather than scanning the database for changes, allowing for a shorter backup window and reduced overhead on the SQL Server host.

Another SQL Server enhancement is group Volume Shadow Copy Service (VSS) snapshots.  Rubrik utilizes Microsoft’s VSS SQL Writer Service to provide a point-in-time copy of the database.  The SQL Writer Service does this by freezing all operations on, or quiescing, the database to take a VSS snapshot.  Once the snapshot is completed, the database resumes operations while Rubrik performs any backup operations against the snapshot.  This process needs to be repeated on each individual database that Rubrik backs up, and this can lead to lengthy backup windows when there are multiple databases on each SQL Server.

Group VSS snapshots allow Rubrik to protect multiple databases on the same server with a single VSS snapshot operation.  Databases that are part of the same SLA group will have their VSS snapshots taken and processed at the same time.  This essentially parallelizes backup operations for that SLA group.  The benefits of this are a reduction in SQL Server backup times and the ability to perform backups more frequently.

Windows Bare-Metal Recovery

Rubrik started off as a virtualization backup product.  However, there are still large workloads that haven’t been virtualized.  While Rubrik supported some physical backups, such as SQL Server database backups, it never supported full backup and recovery of physical Windows Servers.  This meant that it couldn’t fully support all workloads in the data center.

The Andes 5.0 release introduces the ability to protect workloads and data that reside on physical Windows Servers.  This is done with the same level of simplicity as all other virtualized and physical database workloads.

Physical Windows backup is done through the existing Rubrik Backup Service that is used for database workloads.  The initial backup is a full system backup that is saved to a VHDX file, and all subsequent backups utilize changed block tracking to only backup the changes to the volumes.

Restoring to bare metal isn’t fully automated, but it seems fairly straightforward.  The host server boots to a WinPE environment, mounts a Live Mount of the Windows Volume snapshots, and then runs a PowerShell script to restore the volumes. Once the restore is complete, the server can be rebooted to the normal boot drive.

This option is not only good for backing up and protecting physical workloads, but it can also be used for P2V and P2C (or physical-to-cloud) migrations.

The Windows BMR feature only supports Windows Server 2008 R2, Server 2012 R2, and Server 2016.  It does not support Windows 7 or Windows 10.

SLA Policy Enhancements

Setting up backup policies inside of Rubrik is fairly simple.  You create an SLA domain, you set the frequency and retention period of backup points, and you apply that policy to virtual machines, databases, or other objects.

But what if you need more control over when certain backups are taken?  There may be policies in place that determine when certain kinds of backups need to occur.

Andes 5.0 introduces Advanced SLA Policy Configuration. This is an optional feature that enables administrators not only to specify the frequency and retention period of a backup point, but also to specify when those backups take place.

For example, my policy may dictate that I need to take my monthly backup on the last day of each month.  Under Rubrik’s normal scheduling engine, I can only specify a monthly backup.  I can’t create a schedule that is only applied on the last day of the month.  With Advanced SLA Policy Configuration, I can schedule that monthly backup for the last day of the month.

Office365 Backup

Office365 is quickly replacing on-premises Exchange and SharePoint servers as organizations move to the Software-as-a-Service model. While Microsoft provides tools to help retain data, it is possible to permanently delete data. There are also scenarios where it is not easy to move data – such as migrating to a new Office365 tenant.

Starting with the Andes 5.0 release, Rubrik will support backup and recovery of Office365 email and calendar objects through the Polaris platform. Polaris will act as the control plane for Office365 backup operations, and it will utilize the customer’s own Azure cloud storage to host the backup data and the search index.

SLAs can be applied to individual users or to all users in a tenant.  When it is applied to all users, new users and mailboxes will automatically inherit the SLA so they are protected as soon as they are created.

The Office365 protection feature allows for individual items, folders, or entire mailboxes to be recovered.  These items can be restored to the original mailbox location or exported to another user’s mailbox.

Other Enhancements

The Andes 5.0 release is a very large release, and I’m scratching the surface of what’s being included.  Some other key highlights of this release are:

  • NAS Direct Archive – Direct backup of NAS filesets into the Cloud
  • Live Mount VMDKs from Snapshots
  • Improved vCenter Recovery – Can recover directly to ESXi host
  • EPIC EHR Database Backup on Pure Storage
  • Snapshot Retention Enhancements
  • Support for RSA Multi-factor Authentication
  • API Tokens for Authentication
  • Cloud Archive Consolidation

Thoughts

This is another impressive release from Rubrik.  There are a number of long-awaited feature enhancements in this release, and they continue to add new features at a rapid pace.

#DW3727KU – The Digital Workspaces Showcase Keynote Live Blog

In a few minutes, the Digital Workspace Showcase keynote will take place. This keynote will show the future of end-user computing. I will be updating this blog as they make announcements and perform demonstrations.

4:32 PM – Room is pretty full. Looks like we are running a few minutes behind while everyone takes their seats.

4:34 PM – The keynote is starting with a video about EUC issues. Some laughter in the crowd.

4:35 PM – Shankar Iyer and Noah Wasmer take the stage. They’re talking about the history of EUC at VMware. Noah is talking about the business transformation that VMware EUC can provide to a variety of use cases.

4:38 PM – Companies with engaged workforces earn 147% more per share than their non-engaged competitors.

4:39 PM – Horizon cloud services are available in over 25 regions across AWS, Azure, and IBM SoftLayer. Workspace ONE is processing over 450 BILLION events per month.

4:41 PM – CIOs are saying that they can’t recruit talent unless they upgrade their end-user computing infrastructure.

4:43 PM – Workspace ONE is the platform that will unify, abstract, and reduce device silos. There are five key pillars around reducing digital silos:

  • Employee Experience
  • Modern Management
  • Virtualization
  • Insights
  • Automation
There are three core ideas to bring an intelligence-driven digital workspace to life. These three pillars are built on a foundation of intelligence and automation.

4:45 PM – Shankar announces the first Employees-First Award to highlight a customer that brings digital transformation to their employees. Adobe Systems wins the award.

4:49 PM – Shawn Bass, VMware EUC CTO, takes the stage to talk about Redefining Modern Management of End-User Computing.

The first announcement is Dell “Ready to Work” Solutions. Brett Hansen, VP at Dell, joins Shawn on stage.

The first point they are discussing is the ability to manage Dell hardware with Workspace ONE.

Dell is providing factory integration of Workspace ONE: users can receive a laptop directly from Dell with Workspace ONE already integrated, boot it, and have their applications provisioned as if it were a mobile device. Large applications are preloaded in the factory. This sounds like the existing Dell factory process with Workspace ONE preinstalled and registered.

To prevent untrusted applications from running on the machine, Workspace ONE will integrate with Device Guard. Trusted applications can be downloaded and run through the Workspace ONE portal.

4:57 PM – Announcing Windows 10 Industry Baselines. These are prepopulated templates with policies configured to meet various industry baselines. Baselines can be updated and modified by administrators. This solution provides 100% GPO coverage and 100% modern policy management coverage.

Device Update Readiness is an automation capability that will let IT fully automate the process of application compatibility testing. Workspace ONE Intelligence will detect applications that are blocking the deployment of the latest version of Windows and allow IT to automatically send alerts to the developers.

CVE Vulnerability Remediation is a Workspace ONE Intelligence service. It pulls a CVE database into Intelligence, provides information about each vulnerability, and provides the ability to automate the approval of a patch, deployment of the patch, and alerting the security team that it is being proactively addressed.

5:05 PM – Windows 10 isn’t the only ecosystem being updated. Enhancements are coming to macOS, Android, Chrome, Google Glass, and Rugged/IoT.

5:05 PM – Changes to work styles require changes to IT security.

Zero-trust security is a principle that states the device should never be trusted. Workspace ONE can help create a zero-trust environment, and it allows for a defense-in-depth strategy where security can be applied at multiple layers.

The partnership with Okta enables IT to set policies that can prevent users from accessing Okta applications unless the device is managed. When a user attempts to access an application, Workspace ONE will perform a device check before sending the user to Okta for authentication.

Workspace ONE Trust Networks allows security tools to integrate with Workspace ONE Intelligence. This allows Workspace ONE to automate actions to prevent the user from introducing security risks into the environment.

VMware is also announcing four new Trust Networks partners – Check Point, Palo Alto Networks, Trend Micro, and Zscaler.

5:15 PM – Shikha Mittal and Angela Ge take the stage to discuss modernization of Windows application delivery.

Intelligence and automation are being added to the Horizon Cloud control plane. A cloud connector will be available to bring automation and intelligence to on-premises Horizon environments. Horizon is also available on VMC, and the cloud connector enables management of those environments as well. Horizon Cloud on SoftLayer and Horizon Cloud on Azure IaaS are managed directly from the Horizon Cloud Management Console.

The Horizon Cloud Management Console allows administrators to view all of their Horizon environments, both on-premises and in the cloud, and perform management actions against them. It also allows administrators to provision both Horizon on VMC and Horizon Cloud on Azure.

5:25 PM – The Workspace ONE agent can be installed on Horizon desktops, and when a desktop is provisioned, it becomes a managed device. This enables VMware UEM policies to be applied to Horizon desktops, and it provides intelligence about the desktops, applications, and security posture of the entire physical and virtual desktop estate.

5:30 PM – Announcing the Workspace ONE Intelligent Hub, which combines the Workspace ONE app and the AirWatch Agent. Workspace ONE Intelligent Hub enables workflow-driven activities with integrations into other enterprise systems like ServiceNow, an internal people directory, and a notifications page where the user can keep track of tickets and alerts from application notifications.

5:36 PM – Shawn wraps up the keynote by announcing the EUC Beta Program. You can learn more at https://goo.gl/wZmXqK

VMworld Vegas Tips and Tricks

    VMworld is only a few weeks away.  Like the last two VMworlds, VMworld 2018 will be held at the Mandalay Bay Conference Center in Las Vegas.  This will be the last year that VMworld is at Mandalay Bay – it should make a return to San Francisco’s Moscone Center for 2019.

    Whether you’re a seasoned pro or attending VMworld for the first time, there are a few things you should know for getting the most out of your VMworld experience.

    1. Wear Comfortable, Broken-In Shoes – You will be doing A LOT of walking. And I mean a lot.  If you track your steps, you will probably find that you do over 20,000 steps each day.  And when you’re not walking, you will probably be spending a lot of time on your feet.  Having a comfortable pair of walking shoes is key to surviving the week.  Make sure you break these shoes in before you go to Vegas.
    2. Lighten Your Load – If your backpack is anything like mine, it’s filled with most things that we think we need on a day-to-day basis.  This could be an extra power supply, dongles and adapters for projectors, spare whiteboard markers, or whatever else ends up in our backpacks.  That can be a lot of extra weight that you carry around.  You won’t need most of this for VMworld.  Clean out your backpack before you go and leave the extra stuff at home.  If you plan to bring electronics with you that you won’t carry every day, make sure you take advantage of the safe in your hotel room to keep them secure.
    3. Spend Time in the Community Areas and Solutions Exchange – VMworld is about the sessions, right?  Nope.  While the sessions are important, don’t fill your entire schedule with back-to-back sessions and talks.  You will want to spend time exploring the solutions exchange to talk to vendors and in the community areas.  The Blogger Tables and the vBrownbag Community Stage are great places to meet others.
    4. Join Twitter – If you’re not already on Twitter, make sure you join it for VMworld.  There is a lot going on, and you can keep up with sessions and after-hours activities by tracking various hashtags like #VMworld and #VMworld3Word.  It’s also a great way to meet people.
    5. Go Outside – Yes, Vegas is hot.  But you’ll be spending most of the day indoors breathing recycled and air-conditioned air.  Step outside, even if it’s only for 15 minutes, and get some fresh air.
    6. Be Safe – There is a lot to do in Vegas, but if you step out at night to explore the town, make sure you’re safe.  The usual tourism rules apply.  Don’t carry any more cash than you need to, keep your wallet and cell phone in your front pocket, and be aware of your surroundings.

    Getting Started With UEM Part 2: Laying The Foundation – File Services

    In my last post on UEM, I discussed the components and key considerations that go into deploying VMware UEM.  UEM is made up of multiple components that rely on a common infrastructure of file shares and Group Policy to manage the user environment, and in this post, we will cover how to deploy the file share infrastructure.

    There are two file shares that we will be deploying.  These file shares are:

    • UEM Configuration File Share
    • UEM User Data Share

    Configuration File Share

    The first of the two UEM file shares is the configuration file share.  This file share holds the configuration data used by the UEM agent that is installed in the virtual desktops or RDSH servers.

    The UEM configuration share contains a few important subfolders.  These subfolders are created by the UEM management console during its initial setup, and they align with various tabs in the UEM Management Console.  We will discuss this more in a future article on using the UEM Management Console.

    • General – This is the primary subfolder on the configuration share, and it contains the main configuration files for the agent.
    • FlexRepository – This subfolder under General contains all of the settings configured on the “User Environment” tab.  The settings in this folder tell the UEM agent how to configure policies such as Application Blocking, Horizon Smart Policies, and ADMX-based settings.

    Administrators can create their own subfolders for organizing application and Windows personalization settings.  These are created in the Personalization tab, and when a folder is created in the UEM Management Console, it is also created on the configuration share.  Some folders that I use in my environment are:

    • Applications – This is the first subfolder underneath the General folder.  This folder contains the INI files that tell the UEM agent how to manage application personalization.  The Applications folder makes up one part of the “Personalization” tab.
    • Windows Settings – This folder contains the INI files that tell the UEM agent how to manage the Windows environment personalization.  The Windows Settings folder makes up the other part of the Personalization tab.
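
    Putting that together, the top of the configuration share might look something like the layout below.  The share path is just a placeholder, and Applications and Windows Settings are the example folders from my environment.

      \\fileserver\UEMConfig
      └── General
          ├── FlexRepository       (settings from the User Environment tab)
          ├── Applications         (application personalization INI files)
          └── Windows Settings     (Windows personalization INI files)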

    Some environments are a little more complex, and they require additional configuration sets for different use cases.  UEM can create a silo for specific settings that should only be applied to certain users or groups of machines.  A silo can have any folder structure you choose to set up – it can be a single application configuration file or it can be an entire set of configurations with multiple sub-folders.  Each silo also requires its own Group Policy configuration.

    User Data File Share

    The second UEM file share is the user data file share.  This file share holds the user data that is managed by UEM.  This is where any captured application profiles are stored. It can also contain other user data that may not be managed by UEM such as folders managed by Windows Folder Redirection.  I’ve seen instances where the UEM User Data Share also contained other data to provide a single location where all user data is stored.

    The key thing to remember about this share is that it is a user data share.  These folders belong to the user, and they should be secured so that other users cannot access them.  IT administrators, system processes such as antivirus and backup engines, and, if allowed by policy, the helpdesk should also have access to these folders to support the environment.

    User application settings data is stored on the share.  This consists of registry keys and files and folders from the local user profile.  When this data is captured by the UEM agent, it is compressed in a zip file before being written out to the network.  The user data folder also can contain backup copies of user settings, so if an application gets corrupted, the helpdesk or the user themselves can easily roll back to the last configuration.

    UEM also allows log data to be stored on the user data share.  The log contains information about activities that the UEM agent performs during logon, application launch and close, and logoff, and it provides a wealth of troubleshooting information for administrators.

    UEM Shared Folder Replication

    VMware UEM is well suited for multi-site end-user computing environments because it only reads settings and data at logon and writes back to the share at logoff.  If FlexDirect is enabled for applications, it will also read during an application launch and write back when the last instance of the application is closed.  This means that it is possible to replicate UEM data to other file shares, and the risk of file corruption is low because files are only locked for brief periods.

    Both the UEM Configuration Share and the UEM User Data share can be replicated using various file replication technologies.
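
    One option, if these shares live on Windows file servers, is DFS Replication.  The snippet below is a minimal sketch of replicating the configuration share between two servers; the group, server, and path names are placeholders, and the same pattern applies to the user data share.

      # Create a replication group and a replicated folder for the UEM configuration share
      New-DfsReplicationGroup -GroupName "UEM-Config"
      New-DfsReplicatedFolder -GroupName "UEM-Config" -FolderName "UEMConfig"

      # Add both file servers to the group and connect them
      Add-DfsrMember -GroupName "UEM-Config" -ComputerName "FS01","FS02"
      Add-DfsrConnection -GroupName "UEM-Config" -SourceComputerName "FS01" -DestinationComputerName "FS02"

      # Point each member at its local copy of the data; FS01 seeds the authoritative copy
      Set-DfsrMembership -GroupName "UEM-Config" -FolderName "UEMConfig" -ComputerName "FS01" -ContentPath "D:\UEMConfig" -PrimaryMember $true
      Set-DfsrMembership -GroupName "UEM-Config" -FolderName "UEMConfig" -ComputerName "FS02" -ContentPath "D:\UEMConfig"

    DFS Replication is only one option – storage-level or NAS replication works just as well if that is what you already have.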

    DFS Namespaces

    As environments grow or servers are retired, this UEM data may need to be moved to new locations.  Or it may need to exist in multiple locations to support multiple sites.  In order to simplify the configuration of UEM and minimize the number of changes that are required to Group Policy or other configurations, I recommend using DFS Namespaces to provide a single namespace for the file shares.  This allows users to use a single path to access the file shares regardless of their location or the servers that the data is located on.
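
    As a sketch, a domain-based namespace for the two UEM shares can be created with the DFS-N PowerShell cmdlets.  The domain, server, and share names below are placeholders and assume the underlying shares already exist.

      # Create a domain-based namespace root for UEM
      New-DfsnRoot -Path "\\corp.local\UEM" -TargetPath "\\FS01\UEM" -Type DomainV2

      # Publish the configuration and user data shares under the namespace
      New-DfsnFolder -Path "\\corp.local\UEM\Config" -TargetPath "\\FS01\UEMConfig"
      New-DfsnFolder -Path "\\corp.local\UEM\UserData" -TargetPath "\\FS01\UserData"

      # Add a second target for each folder so another server or site can serve the same path
      New-DfsnFolderTarget -Path "\\corp.local\UEM\Config" -TargetPath "\\FS02\UEMConfig"
      New-DfsnFolderTarget -Path "\\corp.local\UEM\UserData" -TargetPath "\\FS02\UserData"

    Group Policy and the UEM agent would then reference \\corp.local\UEM\Config and \\corp.local\UEM\UserData instead of an individual server name.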

    UEM Share Permissions

    It’s not safe to assume that everyone is using Windows-based file servers to provide file services in their environment.  Because of that, setting up network shares is beyond the scope of this post.  The process of creating the share and applying security varies based on the device hosting the share.

    The required Share and NTFS/File permissions are listed in the table below. These contain the basic permissions that are required to use UEM.  The share permissions required for the HelpDesk tool are not included in the table.

    UEMConfiguration share
        Share permissions:
            Administrators: Full Control
            UEM Admins: Change
            Authenticated Users: Read
        NTFS permissions:
            Administrators: Full Control
            UEM Admins: Full Control
            Authenticated Users: Read and Execute

    UserData share
        Share permissions:
            Administrators: Full Control
            UEM Admins: Full Control
            Authenticated Users: Change
        NTFS permissions:
            Administrators: Full Control
            UEM Admins: Full Control
            Authenticated Users (This folder only): Read and Execute, Create Folders/Append Data
            Creator Owner (Subfolders and files only): Full Control
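
    If your shares do end up on a Windows file server, a rough PowerShell sketch of the permissions in the table might look like the example below.  The paths, share names, and the “UEM Admins” group are placeholders; adjust them to your own naming standards.

      # Configuration share – admins manage it, users only need to read it
      New-SmbShare -Name "UEMConfig" -Path "D:\UEMConfig" `
          -FullAccess "BUILTIN\Administrators" -ChangeAccess "CORP\UEM Admins" -ReadAccess "Authenticated Users"
      # BUILTIN\Administrators normally already has Full Control on the NTFS side
      icacls "D:\UEMConfig" /grant "CORP\UEM Admins:(OI)(CI)F" "Authenticated Users:(OI)(CI)RX"

      # User data share – users can create their own folders but should not browse other users' data
      New-SmbShare -Name "UserData" -Path "D:\UserData" `
          -FullAccess "BUILTIN\Administrators","CORP\UEM Admins" -ChangeAccess "Authenticated Users"
      icacls "D:\UserData" /grant "CORP\UEM Admins:(OI)(CI)F"
      # This folder only: Read and Execute plus Create Folders/Append Data
      icacls "D:\UserData" /grant "Authenticated Users:(RX,AD)"
      # Subfolders and files only: the user who created the folder gets Full Control
      icacls "D:\UserData" /grant "CREATOR OWNER:(OI)(CI)(IO)F"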

    Wrapup and Next Steps

    This post just provided a basic overview of the required UEM file shares and user permissions.  If you’re planning to do a multi-site environment or have multiple servers, this would be a good time to configure replication.

    The next post in this series will cover the setup and initial configuration of the UEM management infrastructure.  This includes setting up the management console and configuring Group Policy.

    Moving to the Cloud? Don’t Forget End-User Experience

    The cloud has a lot to offer IT departments.  It provides the benefits of virtualization in a consumption-based model, and it allows new applications to quickly be deployed while waiting for, or even completely forgoing, on-premises infrastructure.  This can provide a better time-to-value and greater flexibility for the business.  It can help organizations reduce, or eliminate, their on-premises data center footprint.

    But while the cloud has a lot of potential to disrupt how IT manages applications in the data center, it also has the potential to disrupt how IT delivers services to end users.

    In order to understand how cloud will disrupt end-user computing, we first need to look at how organizations are adopting the cloud.  We also need to look at how the cloud can change application development patterns, and how that will change how IT delivers services to end users.

    The Current State of Cloud

    When people talk about cloud, they’re usually talking about three different types of services.  These services, and their definitions, are:

    • Infrastructure-as-a-Service: Running virtual machines in a hosted, multi-tenant virtual data center.
    • Platform-as-a-Service: Allows developers to build applications on top of subscription-based platform services without having to build the supporting infrastructure.  The platform can include some combination of web services, application runtime services (like .NET or Java), databases, message bus services, and other managed components.
    • Software-as-a-Service: Subscription to a vendor hosted and managed application.

    The best analogy for explaining this is comparing the different cloud offerings to different types of pizza restaurants, as shown in the graphic below from episerver.com:

    [Image: Pizza-as-a-Service comparison]

    Image retrieved from: http://www.episerver.com/learn/resources/blog/fred-bals/pizza-as-a-service/

    So what does this have to do with End-User Computing?

    Today, it seems like enterprises that are adopting cloud are going in one of two directions.  The first is migrating their data centers into infrastructure-as-a-service offerings with some platform-as-a-service mixed in.  The other direction is replacing applications with software-as-a-service options.  The former means migrating your applications to Azure or AWS EC2; the latter means replacing on-premises services with options like ServiceNow or Microsoft Office 365.

    Both options can present challenges to how enterprises deliver applications to end-users.  And the choices made when migrating on-premises applications to the cloud can greatly impact end-user experience.

    The challenges around software-as-a-service deal more with identity management, so this post will focus on migrating on-premises applications to the cloud.

    Know Thy Applications – Infrastructure-As-A-Service and EUC Challenges

    Infrastructure-as-a-Service offerings provide IT organizations with virtual machines running in a cloud service.  These offerings provide different virtual machines optimized for different tasks, and they provide the flexibility to meet the various needs of an enterprise IT organization.  They allow IT organizations to bring their on-premises business applications into the cloud.

    The lifeblood of many businesses is Win32 applications.  Whether they are commercial or developed in house, these applications are often critical to some portion of a business process.  Many of these applications were never designed with high availability or the cloud in mind, and the developer and/or the source code may be long gone.  Or they might not be easily replaced because they are deeply integrated into critical processes or other enterprise systems.

    Many Win32 applications have clients that expect to connect to local servers.  But when you move those servers to a remote data center, including the cloud, it can introduce problems that make the application nearly unusable.  Common problems that users encounter are longer application load times, increased transaction times, and reports taking longer to preview and/or print.
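
    Rough, illustrative numbers show why: a chatty client that makes 500 sequential calls to render a screen spends about half a second waiting on the network at 1 ms of round-trip latency, but roughly 20 seconds at 40 ms.  Nothing about the application changed – only the distance between the client and the server did.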

    These problems make employees less productive, and that has an impact on the efficiency and profitability of the business.

    A few jobs ago, I was working for a company that had its headquarters, local office, and data center co-located in the same building.  They also had a number of other regional offices scattered across our state and the country.  The company had grown to the point where they were running out of space, and they decided to split the corporate and local offices.  The corporate team moved to a new building a few miles away, but the data center remained in the building.

    Many of the corporate employees were users of a two-tier business application, and the application client connected directly to the database server.  Moving users of a fat client application a few miles down the road from the database server had a significant impact on application performance and user experience.  Application response suffered, and user complaints rose.  Critical business processes took longer, and productivity suffered as a result.

    More bandwidth was procured. That didn’t solve the issue, and IT was sent scrambling for a new solution.  Eventually, these issues were addressed with a solution that was already in use for other areas of the business – placing the core applications into Windows Terminal Services and providing users at the corporate office with a published desktop that delivered their required applications.

    This solution solved their user experience and application performance problems.  But it required other adjustments to the server environment, business process workflows, and how users interact with the technology that enables them to work.  It took time for users to adjust to the changes.  Many of the issues were addressed when the business moved everything to a colocation facility a hundred miles away a few months later.

    Ensuring Success When Migrating Applications to the Cloud

    The business has said it’s time to move some applications to the cloud.  How do you ensure it’s a success and meets the business and technical requirements of that application while making sure an angry mob of users doesn’t show up at your office with torches and pitchforks?

    The first thing is to understand your application portfolio.  That understanding goes beyond having visibility into what applications you have in your environment and how those applications work from a technical perspective.  You need a holistic view of your applications, and you should keep the following questions in mind:

    • Who uses the application?
    • What do the users do in the application?
    • How do the users access the application?
    • Where does it fit into business processes and workflows?
    • What other business systems does the application integrate with?
    • How is that integration handled?

    Applications rarely exist in a vacuum, and making changes to one not only impacts the users, but it can impact other applications and business processes as well.

    By understanding your applications, you will be able to build a roadmap of when applications should migrate to the cloud and effectively mitigate any impacts to both user experience and enterprise integrations.

    The second thing is to test it extensively.  The testing needs to be more extensive than functional testing to ensure that the application will run on the server images built by the cloud providers, and it needs to include extensive user experience and user acceptance testing.  This may include spending time with users measuring tasks with a stop-watch to compare how long tasks take in cloud-hosted systems versus on-premises systems.

    If application performance isn’t up to user standards and has a significant impact on productivity, you may need to start investigating solutions for bringing users closer to the cloud-hosted applications.  This includes solutions like Citrix, VMware Horizon Cloud, or Amazon WorkSpaces and AppStream.  These solutions bring users closer to the applications, and they can give users an on-premises-like experience in the cloud.

    The third thing is to plan ahead.  Having a roadmap and knowing your application portfolio enables you to plan for when you need capacity or specific features to support users, and it can guide your architecture and product selection.  You don’t want to get three years into a five year migration and find out that the solution you selected doesn’t have the features you require for a use case or that the environment wasn’t architected to support the number of users.

    When planning to migrate applications from your on-premises datacenters to an infrastructure-as-a-service offering, it’s important to know your applications and take end-user experience into account.   It’s important to test, and understand, how these applications perform when the application servers and databases are remote to the application client.  If you don’t, you not only anger your users, but you also make them less productive and less profitable overall.


    VDI in the Time of Frequent Windows 10 Upgrades

    The longevity of Windows 7, and Windows XP before that, has spoiled many customers and enterprises.  It provided IT organizations with a stable base to build their end-user computing infrastructures and applications on, and users were provided with a consistent experience.  The update model was fairly well known – a major service pack with all updates and feature enhancements would come out after about one year.

    Whether this stability was good for organizations is debatable.  It certainly came with trade-offs, security of the endpoint being the primary one.

    The introduction of Windows 10 has changed that model, and Microsoft is continuing to refine it.  Microsoft is now releasing two major “feature updates” for Windows 10 each year, and each update is only supported for about 18 months.  Microsoft calls this the “Windows as a Service” model, and it consists of two production-ready semi-annual release channels – a targeted deployment channel used by pilot users to test applications, and a broad deployment channel that replaces the “Current Branch for Business” option for enterprises.

    Gone are the days where the end user’s desktop will have the same operating system for its entire life cycle.

    (Note: While there is still a long-term servicing branch, Microsoft has repeatedly stated that this branch is suited for appliances and “machinery” that should not receive frequent feature updates such as ATMs and medical equipment.)

    In order to facilitate this new delivery model, Microsoft has refined their in-place operating system upgrade technology.  While it has been possible to do this for years with previous versions of Windows, it was often flaky.  Settings wouldn’t port over properly, applications would refuse to run, and other weird errors would crop up.  That’s mostly a thing of the past when working with physical Windows 10 endpoints.

    Virtual desktops, however, don’t seem to handle in-place upgrades well.  Virtual desktops often utilize various additional agents to deliver desktops remotely to users, and the in-place upgrade process can break these agents or cause otherwise unexpected behavior.  The upgrade also has a tendency to reinstall Windows Modern Applications that have been removed or to reset settings (although Microsoft is supposed to be working on those items).

    If Windows 10 feature release upgrades can break, or at least require significant rework of, existing VDI images, what is the best method for handling them in a VDI environment?

    I see two main options.  The first is to manually uninstall the VDI agents from the parent VMs, take a snapshot, and then do an in-place upgrade.  After the upgrade is complete, the VDI agents would need to be reinstalled on the machine.  In my opinion, this option has a couple of drawbacks.

    First, it requires a significant amount of time.  While there are a number of steps that could be automated, validating the image after the upgrade would still require an administrator.  Someone would have to log in to validate that all settings were carried over properly and that Modern Applications were not reinstalled.  This may become a significant time sink if I have multiple parent desktop images.

    Second, this process wouldn’t scale well.  If I have a large number of parent images, or a large estate of persistent desktops, I have to build a workflow to remove agents, upgrade Windows, and reinstall agents after the upgrade.  Not only do I have to test this workflow significantly, but I still have to test my desktops to ensure that the upgrade didn’t break any applications.

    The second option, in my view, is to rebuild the desktop image when each new version of Windows 10 is released.  This ensures that you have a clean OS and application installation with every new release, and it would require less testing to validate because I don’t have to check to see what broke during the upgrade process.

    One of the main drawbacks to this approach is that image building is a time-consuming process.  This is where automated deployments can be helpful.  Tools like the Microsoft Deployment Toolkit can help administrators build their parent images, including any agents and required applications, automatically as part of a task sequence.  With this type of toolkit, an administrator can automate their build process so that when a new version of Windows 10 is released, or a core desktop component like the Horizon or XenDesktop agent is updated, the image will have the latest software the next time a new build is started.

    (Note: MDT is not the only tool in this category.  It is, however, the one I’m most familiar with.  It’s also the tool that Trond Haavarstein, @XenAppBlog, used for his Automation Framework Tool.)

    Let’s take this one step further.  As an administrator, I would be doing a new Windows 10 build every 6 months to a year to ensure that my virtual desktop images remain on a supported version of Windows.  At some point, I’ll want to do more than just automate the Windows installation so that my end result, a fully configured virtual desktop that is deployment ready, is available at the push of a button.  This can include things like bringing it into Citrix Provisioning Services or shutting it down and taking a snapshot for VMware Horizon.

    Virtualization has allowed for significant automation in the data center.  Tools like VMware PowerCLI and the Nutanix REST API make it easy for administrators to deploy and manage virtual machines using a few lines of PowerShell.   Using these same tools, I can also take details from this virtual machine shell, such as the name and MAC address, and inject them into my MDT database along with a Task Sequence and role.  When I power the VM on, it will automatically boot to MDT and start the task sequence that has been defined.
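
    As a rough sketch of what that glue could look like, the example below uses PowerCLI to create the VM shell and Michael Niehaus’s community MDTDB PowerShell module to register it in the MDT database.  Treat it as pseudocode: the server names, datastore, task sequence ID, and database details are all placeholders, and your MDT database columns and module version may differ.

      # Create an empty VM shell for the new parent image (PowerCLI)
      Connect-VIServer -Server "vcenter.corp.local"
      $vm = New-VM -Name "W10-1809-Parent" -VMHost "esxi01.corp.local" -Datastore "VDI-DS01" `
                   -NumCpu 2 -MemoryGB 4 -DiskGB 60 -NetworkName "VDI-Build" -GuestId "windows9_64Guest"

      # Grab the MAC address that vSphere assigned to the new VM
      $mac = (Get-NetworkAdapter -VM $vm).MacAddress

      # Register the VM in the MDT database so it PXE boots straight into the right task sequence
      Import-Module "C:\Scripts\MDTDB.psm1"
      Connect-MDTDatabase -SqlServer "mdt-sql.corp.local" -Database "MDT"
      New-MDTComputer -MacAddress $mac -Description $vm.Name -Settings @{
          OSInstall       = 'YES'
          OSDComputerName = $vm.Name
          TaskSequenceID  = 'W10-1809-HORIZON'
      }

      # Power on the VM; it boots to MDT and the task sequence builds the image unattended
      Start-VM -VM $vm

    Once the task sequence finishes, the same script could shut the guest down and take a snapshot for Horizon (Shutdown-VMGuest and New-Snapshot) or hand the machine off to Citrix Provisioning Services, completing the push-button workflow described above.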

    This is bringing “Infrastructure as Code” concepts to end-user computing, and the results should make it easier for administrators to test and deploy the latest versions of Windows 10 while reducing their management overhead.

    I’m in the process of working through the last bits to automate the VM creation and integration with MDT, and I hope to have something to show in the next couple of weeks.


    Getting Started with VMware UEM

    One of the most important aspects of any end-user computing environment is user experience, and a big part of user experience is managing the user’s Windows and application preferences.  This is especially true in non-persistent environments and published application environments where the user may not log into the same machine each time.

    So why is this important?  A big part of a user’s experience on any desktop is maintaining their customizations.  Users invest time into personalizing their environment by setting a desktop background, creating an Outlook signature, or configuring applications to connect to the correct datasets, and the ability to retain these settings makes users more productive because they don’t have to recreate them every time they log in or open the application.

    User settings portability is nothing new.  Microsoft Roaming Profiles have been around for a long time.  But Roaming Profiles also have limitations, such as casting a large net by moving the entire profile (or the App Data roaming folder on newer versions of Windows) or being tied to specific versions of Windows.

    VMware User Environment Manager, or UEM for short, is one of a few 3rd-party user environment management tools that can provide a lighter-weight solution than Roaming Profiles.  UEM can manage both the user’s personalization of the environment by capturing Windows and application settings as well as apply settings to the desktop or RDSH session based on the user’s context.  This can include things like setting up network drives and printers, Horizon Smart Policies to control various Horizon features, and acting as a Group Policy replacement for per-user settings.

    UEM Components

    There are four main components for VMware UEM.  The components are:

    • UEM Management Console – The central console for managing the UEM configuration
    • UEM Agent – The local agent installed on the virtual desktop, RDSH server, or physical machine
    • Configuration File Share – Network File Share where UEM configuration data is stored
    • User Data File Share – Network File Share where user data is stored.  Depending on the environment and the options used, this can be multiple file shares.

    The UEM Console is the central management tool for UEM.  The console does not require a database, and anything that is configured in the console is saved as a text file on the configuration file share.  The agent consumes these configuration files from the configuration share during logon and logoff.  It saves the application or Windows settings configuration when the application is closed or when the user logs off, and it stores them on the user data share as a ZIP file.

    The UEM Agent also includes a few other optional tools.  These are a Self-Service Tool, which allows users to restore application configurations from a backup, and an Application Migration Tool.  The Application Migration Tool allows UEM to convert settings from one version of an application to another when the vendor uses different registry keys and AppData folders for different versions.  Microsoft Office is the primary use case for this feature, although other applications may require it as well.

    UEM also includes a couple of additional tools to assist administrators with maintaining the environment.  The first of these tools is the Application Profiler Tool.  This tool runs on a desktop or an RDSH server in lieu of the UEM Agent.  Administrators can use this tool to create UEM profiles for applications, and it does this by running the application and tracking where the application writes its settings.  It can also be used to create default settings that are applied to an application when a user launches it, which can reduce the amount of time it takes to get users’ applications configured for the first time.

    The other support tool is the Helpdesk Support Tool, which allows helpdesk agents or other IT support staff to restore a backup of a user’s settings archive.

    Planning for a UEM Deployment

    There are a couple of questions you need to ask when deploying UEM.

    1. How many configuration shares will I have, and where will they be placed? – In multisite environments, I may need multiple configuration shares so the configs are placed near the desktop environments.
    2. How many user data shares will I need, and where will they be placed?  – This is another factor in multi-site environments.  It is also a factor in how I design my overall user data file structure if I’m using other features like folder redirection.  Do I want to keep all my user data together to make it easier to manage and back up, or do I want to place it on multiple file shares?
    3. Will I be using file replication technology? What replication technology will be used? – A third consideration for multi-site environments.  How am I replicating my data between sites?
    4. What URL/Name will be used to access the shares? – Will some sort of global namespace, like a DFS Namespace, be used to provide a single name for accessing the shares?  Or will each server be accessed individually?  This can have some implications around configuring Group Policy and how users are referred to the nearest file server.
    5. Where will I run the management console?  Who will have access to it?
    6. Will I configure UEM to create backup copies of user settings?  How many backup copies will be created?

    These are the main questions that come up from an infrastructure and architecture perspective, and they influence how the UEM file shares and Group Policy objects will be configured.

    Since UEM does not require a database, and it does not hold files on a network share open for the duration of a session, planning for multi-site deployments is relatively straightforward.

    In the next post, I’ll talk about deploying the UEM supporting infrastructure.

    Configuring a Headless CentOS Virtual Machine for NVIDIA GRID vGPU #blogtober

    When IT administrators think of GPUs, the first thing that comes to mind for many is gaming.  But GPUs also have business applications.  They’re mainly found in high-end workstations to support graphics-intensive applications like 3D CAD and medical imaging.

    But GPUs will have other uses in the enterprise.  Many of the emerging technologies, such as artificial intelligence and deep learning, utilize GPUs to perform compute operations.  These will start finding their way into the data center, either as part of line-of-business applications or as part of IT operations tools.  This could also allow the business to utilize GRID environments after hours for other forms of data processing.

    This guide will show you how to build headless virtual machines that can take advantage of NVIDIA GRID vGPU for GPU compute and CUDA.  In order to do this, you will need to have a Pascal Series NVIDIA Tesla card such as the P4, P40, or P100 and the GRID 5.0 drivers.  The GRID components will also need to be configured in your hypervisor, and you will need to have the GRID drivers for Linux.

    I’ll be using CentOS 7.x for this guide.  My base CentOS configuration is a minimal install with no graphical shell and a few additional packages like Nano and Open VM Tools.  I use Bob Planker’s guide for preparing my VM as a template.

    The steps for setting up a headless CentOS VM with GRID are:

    1. Deploy your CentOS VM.  This can be from an existing template or installed from scratch.  This VM should not have a graphical shell installed, or it should be in a run mode that does not execute the GUI.
    2. Attach a GRID profile to the virtual machine by adding a shared PCI device in vCenter.  The selected profile will need to be one of the Virtual Workstation profiles, and these all end with a Q.
    3. GRID requires a 100% memory reservation.  When you add an NVIDIA GRID shared PCI device, there will be an associated prompt to reserve all system memory.
    4. Update the VM to ensure all applications and components are the latest version using the following command:
      yum update -y
    5. In order to build the GRID driver for Linux, you will need to install a few additional packages.  Install the EPEL repository first (the dkms package comes from EPEL), and then install the build dependencies with the following commands:
      yum install -y epel-release
      yum install -y dkms libstdc++.i686 gcc kernel-devel
    6. Copy the Linux GRID drivers to your VM using a tool like WinSCP.  I generally place the files in /tmp.
    7. Make the driver package executable with the following command:
      chmod +x NVIDIA-Linux-x86_64-384.73-grid.run
    8. Execute the driver package.  When we execute this, we will also add the --dkms flag to support Dynamic Kernel Module Support.  This will enable the system to automatically recompile the driver whenever a kernel update is installed.  The command to run the driver install is:
      bash ./NVIDIA-Linux-x86_64-384.73-grid.run --dkms
    9. When prompted to register the kernel module sources with DKMS, select Yes and press Enter.
    10. You may receive an error about the installer not being able to locate the X Server path.  It is safe to ignore this error, so select OK and continue.
    11. Install the 32-bit Compatibility Libraries by selecting Yes and pressing Enter.
    12. At this point, the installer will start to build the DKMS module and install the driver.
    13. After the install completes, you will be prompted to use the nvidia-xconfig utility to update your X Server configuration.  X Server should not be installed because this is a headless machine, so select No and press Enter.
    14. The install is complete.  Press Enter to exit the installer.
    15. To validate that the NVIDIA drivers are installed and running properly, run nvidia-smi to get the status of the video card.
    16. Next, we’ll need to configure GRID licensing.  We’ll need to create the GRID licensing file from a template supplied by NVIDIA with the following command:
      cp  /etc/nvidia/gridd.conf.template  /etc/nvidia/gridd.conf
    17. Edit the GRID licensing file using the text editor of your choice.  I prefer Nano, so the command I would use is:
      nano  /etc/nvidia/gridd.conf
    18. Fill in the ServerAddress and BackupServerAddress fields with the fully-qualified domain name or IP addresses of your licensing servers.
    19. Set the FeatureType to 2 to configure the system to retrieve a Virtual Workstation license.  The Virtual Workstation license is required to support the CUDA features for GPU Compute.  (A sample gridd.conf excerpt is shown after this list.)
    20. Save the license file.
    21. Restart the GRID Service with the following command:
      service nvidia-gridd restart
    22. Validate that the machine retrieved a license with the following command:
      grep gridd /var/log/messages
    23. Download the NVIDIA CUDA Toolkit.
      wget https://developer.nvidia.com/compute/cuda/9.0/Prod/local_installers/cuda_9.0.176_384.81_linux-run
    24. Make the toolkit installer executable.
      chmod +x cuda_9.0.176_384.81_linux-run
    25. Execute the CUDA Toolkit installer.
      bash cuda_9.0.176_384.81_linux-run
    26. Accept the EULA.
    27. You will be prompted to download the CUDA Driver.  Press N to decline the new driver. This driver does not match the NVIDIA GRID driver version, and it will break the NVIDIA setup.  The GRID driver in the VM has to match the GRID software that is installed in the hypervisor.
    28. When prompted to install the CUDA 9.0 toolkit, press Y.
    29. Accept the Default Location for the CUDA toolkit.
    30. When prompted to create a symlink at /usr/local/cuda, press Y.
    31. When prompted to install the CUDA 9.0 samples, press Y.
    32. Accept the default location for the samples.
    33. Reboot the virtual machine.
    34. Log in and run nvidia-smi again.  Validate that you get the table output similar to step 15.  If you do not receive this, and you get an error, it means that you likely installed the driver that is included with the CUDA toolkit.  If that happens, you will need to start over.
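
    For reference, here is roughly what the relevant lines of /etc/nvidia/gridd.conf look like after steps 18 and 19.  The license server names are placeholders for your own environment.

      # /etc/nvidia/gridd.conf (excerpt)
      ServerAddress=grid-license-1.corp.local
      BackupServerAddress=grid-license-2.corp.local
      # 2 = Virtual Workstation license, required for CUDA/GPU compute
      FeatureType=2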

    At this point, you have a headless VM with the NVIDIA Drivers and CUDA Toolkit installed.  So what can you do with this?  Just about anything that requires CUDA.  You can experiment with deep learning frameworks like Tensorflow, build virtual render nodes for tools like Blender, or even use Matlab for GPU compute.