What’s New – Horizon 7.0

(Edit: Updated to include a Blast Extreme feature I missed.)

Last week, VMware announced App Volumes 3.0.  It was a taste of the bigger announcements to come in today’s Digital Enterprise event.  And it is a huge announcement.  Just a few short months after unveiling Horizon 6.2, VMware has managed to put together another major Horizon release.  Horizon 7.0 brings some significant enhancements and new features to the end-user computing space, including one long-awaited feature.

Before I talk about the new features, I highly recommend that you register for VMware’s Digital Enterprise event if you have not done so yet.  They will be covering a lot of the features of the new Horizon Suite offerings in the webinar.  You can register at http://www.vmware.com/digitalenterprise?src=sc_569fec388f2c9&cid=70134000000Nz2D.

So without further ado, let’s talk about Horizon 7’s new features.

Instant Clones

Instant Clones were debuted during the Day 2 Keynote at VMworld 2014.  After receiving a lot of hype as the future of desktop provisioning, they kind of faded into the background for a while.  I’m pleased to announce that Horizon 7 will feature Instant Clones as a new desktop provisioning method.

Instant Clones utilize VMware’s vmFork technology to rapidly provision desktop virtual machines from a running, quiesced parent virtual desktop.  Instant clones share both the memory and the disk of the parent virtual machine, and this technology can provide customized, domain-joined desktops quickly, as they are needed.  These desktops are destroyed when the user logs off, and if a new desktop is needed, it is cloned from the parent when a user requests it.  Instant clones also enable administrators to create elastic pools that can expand or shrink the number of available desktops based on demand.

Although they might not be suited for all use cases, there are several benefits to using instant clones over linked clones.  These are:

  • Faster provisioning – Instant Clones provision in seconds compared to minutes for linked clones
  • No Boot Storms – The parent desktop is powered on, and all instant clones are created in a powered-on state
  • Simplified Administration – No need to perform refresh or recompose operations to maintain desktops.
  • No need to use View Composer

Although instant clones were not available as a feature in Horizon 6.2, it was possible to test out some of the concepts behind the technology using the PowerCLI extensions fling.  Although I can’t validate all of the points above, my experiences after playing with the fling show that provisioning is significantly faster and boot storms are avoided.
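
If you want to poke at the underlying vmFork technology yourself, the fling exposes it through a pair of PowerCLI cmdlets.  The sketch below is how I remember the workflow from the fling documentation – the module, cmdlet, and parameter names may differ slightly in the version you download, and the VM name and guest credentials are placeholders – so treat it as illustrative rather than a supported procedure.

# Load the PowerCLI Extensions fling module and connect to vCenter
# (module name per the fling download; adjust to match your install)
Import-Module VMware.VimAutomation.Extensions
Connect-VIServer vcenter.lab.local

# Prime the running, quiesced parent VM so it can be forked
$parent = Get-VM -Name "Win10-Parent"
Enable-InstantCloneVM -VM $parent -GuestUser "administrator" -GuestPassword "P@ssw0rd"

# Fork ten child desktops from the parent; each child shares the parent's
# memory and disk and customizes itself as it powers on
1..10 | ForEach-Object {
    New-InstantCloneVM -ParentVM $parent -Name ("IC-Desktop-{0:D2}" -f $_)
}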

There are some limitations to instant clones in this release.  These limitations may preclude them from being used in some environments today.  These limitations are:

  • RDSH servers are not currently supported
  • Floating desktop pools only.  No support for dedicated assignment pools.
  • 2000 desktops maximum
  • Single vCenter and single VLAN only
  • Limited 3D support – no support for vGPU or vDGA, limited support for vSGA.
  • VSAN or VMFS datastores only.  NFS is not supported.

Desktop personalization for instant clones is handled using App Volumes user-writable volumes and UEM.

Blast Extreme

VMware introduced HTML5 desktop access using the Blast protocol in Horizon 5.2 back in 2013.  This provided another method for accessing virtual desktops and, later, published applications.  But it had a few deficiencies as well – it used port 8443, was feature-limited compared to PCoIP, and was not very bandwidth-efficient.

The latest version of Horizon adds a new protocol for desktop access – Blast Extreme.  Blast Extreme is a new protocol that is built to provide better multimedia experiences while using less bandwidth to deliver the content.  It is optimized for mobile devices and can provide better battery life compared to the existing Horizon protocols.


Most importantly, Blast Extreme has feature parity with PCoIP.  It supports all of the options and features available today including client drive redirection, USB, unified communications, and local printing.

Unlike the original Blast, Blast Extreme is not strictly a web-only protocol.  It can be used with the new Windows, Mac OS X, Linux, and mobile device clients, and it works over the standard HTTPS port.  This simplifies access and allows users to connect from many locations where ports 8443 and 8172 are blocked.

Blast Extreme is a dual-stack protocol.  That means that it will work over both TCP and UDP.  UDP is the preferred communications method, but if that is not available, it will fall back to TCP-based connections.

Smart Policies

What if your use case calls for disabling copy and paste or local printing when users log in from home?  Or what if you want to apply a different PCoIP profile based on the branch office users are connecting from?  In previous versions of Horizon, this would require a different pool for each use case with configurations handled either in the base image or Group Policy.  This could be cumbersome to set up and administer.

Horizon 7 introduces Smart Policies.  Smart Policies utilize the UEM console to create a set of policies that control desktop behavior based on a number of factors, including the groups the user is a member of and the user’s location.  These policies are evaluated and applied whenever a user logs in or reconnects.  Smart Policies can control a number of desktop capabilities, including client drive redirection, clipboard redirection, and printing, and they can also control or restrict which applications can be run.

Enhanced 3D Support

Horizon 6.1 introduced vGPU and improved the support for workloads that require 3D acceleration.  vGPU is limited, however, to NVIDIA GRID GPUs.

Horizon 7 includes expanded support for 3D graphics acceleration, and customers are no longer restricted to NVIDIA.  AMD S7150 series cards are supported in a multi-user vDGA configuration that appears to be very similar to vGPU.  Intel Iris Pro GPUs are also supported for vDGA on a 1:1 basis.

Cloud Pod Architecture

Cloud Pod Architecture has been expanded to support 10 Horizon pods in four sites.  This enables up to 50,000 user sessions.

Entitlement support has also been expanded – home site assignment can be set for nested AD security groups.

Other enhancements include improved failover support, which automatically redirects users to available resources in other sites when they are not available in the preferred site, and full integration with vIDM.

Other Enhancements

Other enhancements in Horizon 7 include:

  • Unified Management Console for App Volumes, UEM, and monitoring.  The new management console also includes a REST API to support automating management tasks.
  • A new SSO service that integrates vIDM, Horizon, Active Directory, and a certificate authority.
  • Improvements to the Access Point appliance.
  • Improved printer performance
  • Scanner and Serial redirection support for Windows 10
  • URL Content redirection
  • Flash Redirection (Tech Preview)
  • Scaled Resolution for Windows Clients with high DPI displays
  • HTML Access 4.0 – Supports Linux, Safari on iOS, and F5 APM

Thoughts

Horizon 7 provides another leap in Horizon’s capabilities, and VMware continues to reach parity with, or exceed, the feature sets of its competition.

Home Lab Update

Back in October of 2014, I wrote a post about the (then) current state of my home lab.  My lab has grown a lot since then, and I’ve started building a strategy around my lab to cover technologies that I wanted to learn and the capabilities I would need to accomplish those learning goals.

I’ve also had some rather spectacular failures in the last year.  Some of these failures have been actual lab failures that have impacted the rest of the home network.  Others have been buying failures – equipment that appeared to meet my needs and was extremely cheap but ended up having extra costs that made it unsuitable in the long run.

Home Lab 1.0

I’ve never really had a strategy when it comes to my home lab.  Purchasing new hardware happened when I either outgrew something and needed capacity or to replace broken equipment.  If I could repurpose it, an older device would be “promoted” from running an actual workload to providing storage or some other dedicated service.

But this became unsustainable when I switched over to a consulting role.  There were too many things I needed, or wanted, to learn and try out that would require additional capacity.  My lab also had a mishmash of equipment, and I wanted to standardize on specific models.  This has two benefits – I can easily ensure that I have a standard set of capabilities across all components of the lab and it simplifies both upgrades and management.

The other challenge I wanted to address as I developed a strategy was separating the “home network” from the lab.  While there would still be some overlap, such as wireless and Internet access, it was possible to take down my entire network when I had issues in my home lab.  This actually happened on one occasion last August when the vDS in my lab corrupted itself and brought everything down.

The key technologies that I wanted to focus on with my lab are:

  1. End-User Computing:  I already use my lab for the VMware Horizon Suite.  I want to expand my VDI knowledge to include Citrix. I also want to spend time on persona management and application layering technologies like Liquidware Labs, Norskale, and Unidesk.
  2. Automation: I want to extend my skillset to include automation.  Although I have vRO deployed in my lab, I have never touched things like vRealize Automation and Puppet.  I also want to spend more time on PowerShell DSC and integrating it into vRO/vRA.  Another area I want to dive back into is automating Horizon environments – I haven’t really touched this subject since 2013.
  3. Containers: I want to learn more about Docker and the technologies surrounding it including Kubernetes, Swarm, and other technology in this stack.  This is the future of IT.
  4. Nutanix: Nutanix has a community edition that provides their hyperconverged storage technology along with the Acropolis Hypervisor.  I want to have a single-node Nutanix CE cluster up and running so I can dive deeper into their APIs and experiment with their upcoming Citrix integration.  At some point, I will probably expand that cluster to three nodes and use it for a home “private cloud” that my kids can deploy Minecraft servers into.

There are also a couple of key capabilities that I want in my lab.  These are:

  1. Remote Power Management:  This is the most important factor when it comes to my compute nodes.  I don’t want to have them running 24×7, but at the same time, I don’t want to have to call up my wife and have her turn things on when I’m traveling.  The compute nodes I buy need to have some sort of integrated remote management that does not require an external IP KVM or Wake-on-LAN, preferably one with an API (see the racadm sketch after this list).
  2. Redundancy: I’m trying to avoid single points of failure whenever possible.  Since much of my equipment is off-lease or used, I want to make sure that a single failure doesn’t take everything down.  I don’t have redundancy on all components – my storage, for instance, is a single Synology device due to budget constraints.  Network and compute, however, are redundant.  Future lab roadmaps will address storage redundancy through hyperconverged offerings like ScaleIO and Nutanix CE.
  3. Flexibility: My lab needs to be able to shift between a number of different technologies.  I need to be able to jump from EUC to Cloud to containers without having to tear things down and rebuild them.  While my lab is virtualized, I will need to have the capacity to build and maintain these environments in a powered-off state.
  4. Segregation: A failure in the lab should not impact key home network services such as wireless and Internet access.
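
To give a concrete picture of what that first requirement looks like in practice, the Dell iDRACs in my lab can be driven entirely from the racadm command line.  The address and credentials below are placeholders, and this is a rough sketch of the idea rather than part of any automated workflow:

# Power a lab host on remotely through its iDRAC before a study session
racadm -r 192.168.10.21 -u root -p "MyiDRACPassword" serveraction powerup

# Check the power state before shutting the lab down for the night
racadm -r 192.168.10.21 -u root -p "MyiDRACPassword" serveraction powerstatus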

What’s in Home Lab 1.0

The components of my lab are:

Compute

With one exception, I’ve standardized my compute tier on Dell 11th Generation servers.  I went with these particular servers because there are a number of off-lease boxes on eBay, and you can usually find a good deal on servers that come with large amounts of RAM.  RAM prices are also fairly low, and other components like iDRACs are readily available.

I have also standardized on the following components in each server:

  • iDRAC Enterprise for Remote Management
  • Broadcom 5709 Dual-Port Gigabit Ethernet
  • vSphere 6 Update 1 with the Host Client and Synology NFS Plugin installed

I have three vSphere clusters in my lab.  These clusters are:

  • Management Cluster
  • Workload Cluster
  • vGPU Cluster

The Management cluster consists of two PowerEdge R310s.  These servers have a single Xeon X3430 processor and 24GB of RAM.  This cluster is not built yet because I’ve had some trouble locating compatible RAM – the fairly common 2Rx4 DIMMs do not work with this server.  I think I’ve found some 2Rx8 or 4Rx8 DIMMs that should work.  The management cluster uses standard switches, and each host has a standard switch for Storage and a standard switch for all other traffic.

The Workload cluster consists of two PowerEdge R710s.  These servers have a pair of Xeon E5520 processors and 96GB of RAM.  My original plan was to outfit each host with 72GB of RAM, but I ended up with more because I had a bunch of 8GB DIMMs left over from my failed R310 upgrades, and I didn’t want to pay return shipping or restocking fees.  The Workload cluster is configured with a vSphere Distributed Switch for storage, a vDS for VM traffic, and a standard switch for management and vMotion traffic.

The vGPU cluster is the only cluster that doesn’t follow the hardware standards.  The server is a Dell PowerEdge R730 with 32GB of RAM.  The server is configured with the Dell GPU enablement kit and currently has an NVIDIA GRID K1 card installed.

My Nutanix CE box is a PowerEdge R610 with 32GB of RAM.

Storage

The storage tier of my lab consists of a single Synology Diskstation 1515+.  It has four 2 TB WD Red drives in a RAID 10 and a single SSD acting as a read cache.  A single 2TB datastore is presented to my ESXi hosts using NFS.  The Synology also has a couple of CIFS shares for things like user profiles and network file shares.
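
Presenting that NFS export to each host is a one-liner with PowerCLI.  This is a minimal sketch – the datastore name, export path, and IP address are placeholders for my actual values:

# Mount the Synology NFS export on every host in the lab
Get-VMHost | ForEach-Object {
    New-Datastore -VMHost $_ -Nfs -Name "Synology-NFS01" -NfsHost "192.168.20.10" -Path "/volume1/vmware"
}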

Network

The network tier consists of a Juniper SRX100 firewall and a pair of Linksys SRW2048 switches.  The switches are not stacked but have similar configurations for redundancy.  Each server and the Synology are connected into both fabrics.

I have multiple VLANs on my network to segregate different types of traffic.  Storage, vMotion, and management traffic are all on their own VLANs.  Other VLANs are dedicated to different types of VM traffic.

That’s the overall high-level view of the current state of my home lab.  One component I haven’t spent much time on so far is my Horizon design.  I will cover that in depth in an upcoming post.

A Look Back at 2015

The end of the year is just a few days away.  It’s time to take a look back at 2015 and a look ahead at 2016.

Year in Review

I got to experience a lot of new things and participate in some great opportunities in 2015.  Highlights include:

  • Presented at the first North Central Wisconsin VMUG meeting
  • Wrote for Virtualization Review
  • Made a career change and joined Ahead as a Data Center Engineer
  • Attended Virtualization Field Day in June as a delegate
  • Was selected to be part of the VMware EUC vExperts group
  • Rebranded my blog and changed the URL from seanmassey.net to thevirtualhorizon.com

Goals

When I wrote my 2014 Year in Review post, I had also set three goals for 2015.  Those goals were:

  1. Get my VCDX
  2. Make a career change and go to a VAR/partner or vendor
  3. Find a better work/life/other balance

I accomplished two of these three goals.  In April, I made the move to Ahead, a consulting firm based out of Chicago.  This move has also enabled me to have a better work/life/other balance – when I’m home, I can now pick up my son from school.

I haven’t started on my VCDX yet, and this goal is sitting in the waiting queue.  There are a couple of reasons for this.  First, there were some large areas in my selected design that I would have had to fictionalize.  Secondly, and more importantly, there will be other opportunities to do a design based on an actual client.  I plan to keep this on my goals list and revisit it in 2016.

Although obtaining my VCDX will be my main goal for 2016, I have a few other smaller goals that I plan to work towards as well:

  • Write More – Although it can be time-consuming, I like writing.  The thing is, I like it as a hobby.  Writing professionally was an interesting experience, but took a lot of the fun out of blogging.  I would like to get back into the habit of blogging on a regular basis in 2016.
  • Expand my Skillsets – I’d like to spend more time learning the private cloud and automation toolkits, especially things like Puppet, Chef, and OpenStack.  I’d also like to spend more time on HyperConverged solutions like Nutanix.  I plan on expanding my lab to be able to dabble in this more. 

Blog Statistics

I didn’t do as much blogging in 2015 as I did in 2014.  There are a few reasons for this.  First, I passed on participating in the Virtual Design Master 30-in-30 challenge this year.  Second, a lot of content I would have written for my blog earlier in the year was directed to Virtualization Review instead, so I did not have a lot of original stuff to write.

I normally don’t care about blog stats, but I think it’s a fun exercise to take a look back at the year and compare it briefly to previous years.  As of December 27th, I had written 21 blog posts.  This was down from 90 posts in 2014.  Page views are about the same.  I ended 2014 with 151,862 page views by 55,471 visitors.  Year-to-date in 2015, I have had 151,862 page views by 64,618 visitors.


And a Big Thank You Goes Out to…

I’m not on this journey alone, and another great year wouldn’t have been possible without the vCommunity.  A few people I’d like to call out are:

  • My wife Laura for weathering the transition to a consulting role
  • Brian Suhr – who has enabled me to take the next steps in my career
  • Jarian Gibson and Andrew Morgan
  • The entire team at Ahead for being some of the smartest people I’ve ever worked with and always having time to help someone out with questions
  • Stephen Foskett and Tom Hollingsworth for inviting me to participate in Virtualization Field Day

A Day of Giving Thanks

Today, the United States celebrates Thanksgiving.  It’s a day that we come together with our families to eat a little turkey, watch some football, and give thanks for the good things in our lives. 

I have a lot to be thankful for this year.  Some of the things I’m thankful for are:

1. An amazing and supportive family.

2. An awesome and challenging job with some of the smartest people I know.

3. A great community that enables passionate IT professionals to come together and share with each other.  Although I might only see people at VMUGs and conferences, I’ve come to consider many people friends.

4.  A Bears victory over the Packers…at Lambeau.

I hope everyone has a great Thanksgiving.

Horizon EUC Access Point Configuration Script

Horizon 6.2 included a new feature when it was launched in early September – the EUC Access Gateway.  This product is a hardened Linux appliance that has all of the features of the Security Server without the drawbacks of having to deploy Windows Servers into your DMZ.  It will also eventually support Horizon Workspace/VMware Identity Manager.

This new Horizon component embodies the “cattle philosophy” of virtual machine management.  If it stops working properly, or a new version comes out, it is meant to be disposed of and redeployed.  To facilitate this, the appliance is configured and managed using a REST API.

Unfortunately, working with this REST API isn’t exactly user-friendly, especially if you’re only deploying one or two of these appliances.  The API is also the only way to manage the appliance – there is no VAMI interface or SSH access.
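
Working with the API directly from PowerShell looks roughly like the snippet below.  This is a hedged sketch – the port and resource path are assumptions based on the appliance documentation at the time, so verify them against your release, and note that handling of the appliance’s self-signed certificate is omitted:

# Admin password set during deployment (placeholder value)
$AdminPassword = "P@ssw0rd"

# Build a Basic Auth header for the appliance's admin account
$pair = "admin:" + $AdminPassword
$headers = @{ Authorization = "Basic " + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes($pair)) }

# Retrieve the current appliance configuration (port and path are assumptions for this version)
Invoke-RestMethod -Uri "https://10.1.1.2:9443/rest/v1/config/settings" -Headers $headers -Method Get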

I’ve put together a PowerShell script that simplifies and automates the configuration of the EUC Access Gateway appliances.  You can download the script from my GitHub site.

The script has the following functions:

  • Get the appliance’s current Horizon View configuration
  • Set the appliance’s Horizon View configuration
  • Download the log bundle for troubleshooting

There are also placeholder parameters for configuring vIDM (which will be supported in future releases) and uploading SSL certificates.

The syntax for this script’s main features looks like this:

Set-EUCGateway -appliancename 10.1.1.2 -adminpassword P@ssw0rd -GetViewConfig

Set-EUCGateway -appliancename 10.1.1.2 -adminpassword P@ssw0rd -SetViewConfig -ViewEnablePCoIP -ViewPCoIPExternalIP 10.1.1.3 -ViewDisableBlast

Set-EUCGateway -appliancename 10.1.1.2 -adminpassword P@ssw0rd -GetLogBundle -LogBundleFolder c:\temp

If you have any issues deploying a config, use the script to download a log bundle and open the admin.log file.  This file will tell you what configuration element was rejected.

I want to point out one troubleshooting note that my testers and I both experienced when developing this script.  The REST API does not work until an admin password is set on the appliance.  One thing we discovered is that there were times when the password would not be set despite one being provided during the deployment.  If this happens, the script will fail when you try to get a config, set a config, or download the log bundle.

When this happens, you either need to delete the appliance and redeploy it or log into the appliance through the vSphere console and manually set the admin password.

Finally, I’d like to thank Andrew Morgan and Jarian Gibson for helping test this script and providing feedback that greatly improved the final product.

EUC5404 – Deliver High Performance Desktops with VMware Horizon and NVIDIA GRID vGPU

Notes from EUC5404.

Reasons for 3D Graphics

  • Distributed Workforces with Large Datasets – harder to share
  • Contractors/3rd-party workers that need revocable access – worried about data leakage and corporate security

Engineering firm gained 70% productivity improvements for CATIA users by implementing VDI – slide only shows 20%

Windows 7 drives 3D graphics, Aero needs 3D.  Newer versions of Windows and new web browsers do even more.

History of 3D Graphics in Horizon

  • Soft3D was first
  • vSGA – shared a graphics card amongst VMs, limited to productivity and lightweight use
  • vDGA – hardwires a card to a virtual machine
  • GRID vGPU – Mediated Pass-thru, covers the middle space between vSGA and vDGA

vGPU defined – Shared access to physical GPU on a GRID card, gets access to native NVIDIA drivers

vGPU has official support statements from application vendors

Product Announcement – 3D graphics on RDSH

vGPU does not support vMotion, but it does support HA and DRS placement

Upgrade Path to Horizon vGPU

If you already have GRID cards and are using vDGA or vSGA, there is an upgrade path to vGPU.

Steps:

  • Upgrade to vSphere 6.0
  • Upgrade Horizon to 6.1 or newer
  • Install NVIDIA VIBs on the host (see the esxcli sketch after this list)
  • Upgrade VMs to hardware version 11
  • Set vGPU profiles
  • Install drivers in VMs
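
For the VIB installation step above, the GRID vGPU Manager bundle gets installed from the ESXi command line with the host in maintenance mode.  The file name below is a placeholder – use the bundle that matches your GRID driver release:

# Install the NVIDIA vGPU Manager VIB (file name is a placeholder)
esxcli software vib install -v /vmfs/volumes/datastore1/NVIDIA-vGPU-VMware_ESXi_6.0_Host_Driver.vib

# Reboot the host, then confirm the driver is loaded
esxcli software vib list | grep -i nvidia
nvidia-smi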

vGPU has Composer Support

GRID Profiles set in vCenter

Two settings to configure – one in vCenter (vGPU Profiles) and one in Horizon

GRID 2.0 – bringing Maxwell to GRID

More users, Linux Support

Moving to Platform – software on top of hardware instead of dedicated product line for GRID

GRID 2.0 is hardware plus software.  Changing from being a driver into a platform and software with additional features

Licensing is changing. Licensed user groups.

Grid Editions

vMotion not coming today – much more complicated problem to solve


GRID Use Cases

Virtual PC – business users who expect great perf, AutoCAD, Photoshop

Virtual Workstation – Siemens, Solidworks, CATIA, REVIT

Virtual Workstation Extended – Very high end.  Autodesk Maya

 

High-Perf VDI is not the same as your regular VDI

  • Density goes down, CPU/Memory/IOPS/Rich Graphics capabilities go up
  • Workloads are different than traditional VDI

Hardware Recommendations

  • vSphere 6.0 Required
  • VM must be HW version 11
  • 2-8 vCPUs, at least 4 for Power Users
  • Minimum 4GB RAM
  • 64-bit OS

Required Components in VMs:

  • VM Tools
  • View Agent
  • NVIDIA Driver

Use the VMware OS Optimization Tool fling.  Users can see up to 40% in resource savings.

Sizing Rich Graphics – Storage

Storage still critical factor in performance

CAD users can demand more than 1TB of storage per desktop

Size and performance matter now

Storage Options:

  • Virtual SAN – SSD based local storage
  • Or All-Flash based SANs

Bringing Rich 3D into Production

  • Establish End-User Acceptance Criteria to verify that User Experience is acceptable
  • Have end users test applications and daily tasks
  • Time how long it takes to complete tasks

VAPP5483 – Virtualizing Active Directory the Right Way

Notes from VAPP5483 – Virtualizing Active Directory the Right Way

Active Directory Overview

Windows Active Directory multi-master replication conundrum

Writes originate from any DC

Changes must converge

  • Eventually
  • preferably on time

Why virtualize Active Directory

  • Virtualization is mainstream at this point
  • Active Directory is fully supported in virtual environments
  • Active Directory is virtualization friendly -> Distributed multi-master model, low resource requirements
  • Domain Controllers are interchangeable -> if one breaks, it can be replaced. Cattle, not pets
  • Physical domain controllers waste compute resources

Common Objections to DC Virtualization

  • Fear of the stolen VMDK -> no different than stolen server or backup tape
  • Privilege Escalation -> vCenter privileges are separate
  • Have to keep certain roles physical -> no technical reason for this, can seize or move roles if needed
  • Deviates from standards/build process -> helps standardization
  • Time Keeping in VMs is hard -> Presenters agree

Time Sync Issues

Old way – VMs get time from ESXi

Changed to use Windows time tools

KB 1189 -> time sync with host still happens on vMotion or Guest OS reboot

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1189

Demo -> moving PDC emulator to host with bad clock

If time on host is more than 1 year old, NTP cannot update or fix the time

How do we determine the correct time?

Ask ESXi host?

This could be OK if…

  • Host times are always right
  • CMOS doesn’t go bad
  • Rogue operations don’t happen
  • Security is a thing other people worry about

Reality – Stuff happens…

vSphere default behavior corrects time on the PDC emulator

Can cause a lot of issues in impacted Windows Forests

Preventing Bad Time Sync

  • Ensure hardware clock is correct
  • Configure reliable NTP
  • Disable DRS on PDCe
  • Use Host-Guest Affinity for PDCes
  • Advanced Settings to disable Time Sync –> KB 1189 (see the PowerCLI sketch after this list)
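
The advanced settings from KB 1189 can be pushed to the PDC emulator VM with PowerCLI.  A minimal sketch, assuming the VM name below and that the setting list matches your vSphere version (check the KB article):

# Disable the "catch-up" time sync events on the PDCe VM per KB 1189
$vm = Get-VM -Name "DC01-PDCe"
$settings = @{
    "tools.syncTime"                 = "0"
    "time.synchronize.continue"      = "0"
    "time.synchronize.restore"       = "0"
    "time.synchronize.resume.disk"   = "0"
    "time.synchronize.shrink"        = "0"
    "time.synchronize.tools.startup" = "0"
}
foreach ($name in $settings.Keys) {
    New-AdvancedSetting -Entity $vm -Name $name -Value $settings[$name] -Force -Confirm:$false
}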

Best Practices

Don’t use WAN for Auth –  Place domain controllers locally

Distribute FSMO Roles

Use Effective RBAC – don’t cross roles unless needed, give rights only to trusted operators

To P2V or Not – don’t do it unless you hate yourself

Use Anti-Affinity Rules -> don’t put DCs on the same hosts, use host rules to place important DCs

Sizing

vCPU – under 10K users, 1 vCPU, over that, start with 2 vCPU

RAM – database server, database is held in RAM, more RAM is better, perfmon counter shows cache usage

Networking – VMXNET3

Storage – Space that it needs plus room to grow

DNS –

70% of issues are DNS issues

AD requires effective DNS

DNS solution – doesn’t matter if Windows or Appliance, but must be AD-Aware

Avoid pointing DNS to itself, otherwise DNS cannot start

Virtual Disk -> Caching MS KB 888794

Preventing USN Rollback

AD is distributed directory service, relies on clock-based replication

Each DC keeps track of all transactions and tags them with a GUID

If a DC is snapshotted and rolled back, local DC will believe it is right, but all others will know it is bad and refuse to replicate with it. This is called USN rollback

Demo USN rollback

If you have 2008 R2 and below DCs, they will stop replicating. Both will still advertise as domain controllers

VM-Generation ID – exposes counter to guest

  • 2012 and newer. Operating system level feature and must be supported by hypervisor
  • vSphere 5.0 Update 2 and newer
  • Attribute is tracked in local copy of database on local domain controller, triggered by snapshots and snapshot rollback

Provides protection against USN rollback

Invented specifically for virtual domain controllers, allows for cloning of domain controllers

Demo – Clone a Domain Controller

Domain Controller must have software and services that support cloning – agents have to support cloning

Do NOT hot clone a domain controller. It must be in a powered-off state

Do not clone a DC that holds FSMO roles

Can Clone the PDCe, must power up reference domain controller before powering on clone

DNS must work

Do not sysprep the system
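
The cloning prep itself is done with a couple of PowerShell cmdlets on the source DC (Windows Server 2012 and newer).  A minimal sketch – the clone name and addresses are placeholders, and the source DC also has to be a member of the Cloneable Domain Controllers group:

# Check for installed software and services that do not support cloning
Get-ADDCCloningExcludedApplicationList

# Generate DCCloneConfig.xml for the new clone (names and addresses are placeholders)
New-ADDCCloneConfigFile -CloneComputerName "DC02" -Static -IPv4Address "192.168.1.11" -IPv4SubnetMask "255.255.255.0" -IPv4DefaultGateway "192.168.1.1" -IPv4DNSResolver "192.168.1.10"

Once the config file is in place, the source DC is powered off and cloned, and the reference DC is powered back up before the clone (per the notes above).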

DC Safeguard allows a DC that has been reverted/restored to function as a DC

How it works:

  • VM Generation ID checked on DC boot, when a snapshot is created,  or when the VM is reverted to an old snapshot.  VM Generation-ID on VM is checked against the copy in the local database.
  • If it differs, RID Pool dumped and new RID pool issued
  • When Generation ID has changed, AD will detect it and remediate it
  • RID pool discarded, get new RID Pool and objects are re-replicated. VM essentially becomes a new DC

What’s New in VMware Horizon 6.2–User Experience

One of the areas where Horizon 6.2 has a lot of improvements is in the User Experience category.  The new version adds new features as well as brings a few older features out of tech preview.

Client Drive Redirection for VDI and RDSH

Client Drive redirection for Windows was in Tech Preview in Horizon 6.1.1.  It officially comes out of Tech Preview in Horizon 6.2, and it is now supported on both Windows and Mac clients.  It is also available as a tech preview for Linux clients.

This feature, when installed on the virtual desktop, allows users to remotely access files and data that might be stored on their local PC.  It utilizes compression and encryption when transferring files from the endpoint into the virtual desktop or server.

Windows 10 Support

Although Windows 10 was officially supported on vSphere 6 on Day 1, it wasn’t supported in Horizon.  Virtual desktops built on Windows 10 would work, but there were limits to what you could do, and other components of the Horizon Suite were not designed to work with or support it.

Horizon 6.2 has full support for Windows 10.  The Horizon Agent and Client are supported.  This includes Smart Card authentication support.

Windows 10 is only supported when running ESXi 5.5 Update 3 or ESXi 6.0 Update 1.

File Type Associations for Published Apps

There are times when I may want to allow a user to launch an application or work with files without installing the required applications on their machines.  In these cases, the user would then have to log into Horizon, launch the application, and then navigate to the network location where the file was stored.

But what if I could register a file handler in Windows that would allow me to double click on that file and have it launch the remote application automatically?  Horizon 6.2 now adds this capability.

In order to improve the user experience when opening files remotely, a data compression algorithm is utilized when transferring the files up to the remote host.  This transfer is also protected with SHA-256 encryption when clients are accessing the remote application over the Internet.

Mac OS X and iOS Support

Horizon Client 3.5 will be supported on OS X 10.11 and iOS 9.

Biometric Authentication

The Horizon Client for iOS will support biometric authentication.  This feature will allow users to store their credentials in Keychain and utilize their fingerprints to sign into their virtual desktops or published applications.  Administrators can also define policies for who can use this feature from within the Horizon Administrator console.

This feature is only supported with Horizon 6.2 when using Horizon Client 3.5.  The mobile device must also be running iOS 8 or iOS 9.

What’s New in VMware Horizon 6.2–3D Graphics

3D graphics are becoming increasingly important in virtual desktop environments.  While a number of high-end applications and use cases, such as CAD and medical imaging, require 3D graphics, modern applications are increasingly turning to the GPU to offload some processing.  These days, most web browsers, Microsoft Office, and even Windows are utilizing the GPU to assist with rendering and other tasks.

VMware has been slowly adding 3D support to Horizon.  Initially, this was limited to dedicating GPUs to a virtual machine or sharing the GPU through hypervisor-level components.  Horizon 6.1 added  NVIDIA’s vGPU to provide better shared GPU access.

Horizon 6.2 includes a significant number of improvements to virtual 3D acceleration.  In fact, most of the improvements are in this category.

NVIDIA GRID 2.0

NVIDIA announced the next generation of GRID on Sunday afternoon.  For more information, see my write-up on it here.

vDGA for AMD GPUs

AMD/ATI graphics cards were supported on virtual desktops in vSphere 5.x and Horizon 5.x.  This did not carry over to Horizon 6.  AMD support has been reintroduced in Horizon 6.2 for vDGA.

3D Support for RDS Hosted Applications

RDS desktops and published applications will now support both vDGA and vGPU when utilizing supported NVIDIA graphics cards.  3D acceleration is supported on RDSH servers running Windows Server 2008 R2 and Windows Server 2012.

Linux Desktop vSGA and vGPU Support

When Linux desktops were introduced in Horizon 6.1.1, they only supported vDGA for 3D graphics.  This limited Linux to a few specific use cases.

Horizon 6.2 adds significant support for 3D acceleration.  Both vSGA and vGPU are now available when utilizing supported NVIDIA graphics cards.

Linux desktops with vGPU will be able to utilize OpenGL 2.1, 3.x, and 4.x, while desktops with vSGA will be limited to OpenGL 2.1.

4K Resolution Support

4K content is extremely high-resolution content, and more of it will appear as 4K displays start to come down in price.  These displays, which have a resolution of 3840×2160, are useful in situations where high-resolution imaging is needed.

Horizon 6.2 will support in-guest resolutions up to 3840×2160.  In order to achieve this, Horizon Agent 6.2 is needed in the guest, and the client must be connecting with Horizon Client 3.5.

The guest operating system must be running Windows.  A Windows 7 virtual desktop can support up to three 4K monitors when running on a VM with HW version 11 and with Aero disabled.  Windows 7 machines with Aero enabled, or Windows 8 desktops running on HW version 10, can support a single 4K monitor.

Please note that this is for in-guest display resolutions.  Clients that have a 4K display with High DPI scaling are not supported at this time.

What’s New in VMware Horizon 6.2 – RDSH and Application Publishing

Publishing applications from RDSH servers was one of the big additions to Horizon 6.0.  Horizon 6.2 greatly expands on this feature set, and it offers many new capabilities under the covers to improve the management of the environment.

Cloud Pod Support for Applications

Horizon’s Cloud Pod for multi-datacenter architectures has been expanded to include support for RDSH-published applications.  Users can now be entitled to an application once and access it across Horizon pods and/or datacenters.


Enhanced RDSH Load Balancing

The load balancing and user placement algorithms have been enhanced in Horizon 6.2 to ensure that users do not get placed on an already overloaded server.  There are two main improvements that enable this:

1. The load balancing algorithm utilizes Perfmon counters to determine which hosts are optimal for starting new sessions.  The View agent runs a script to collect system performance data, and it reports back to the connection servers with a recommendation based on the system’s current performance.  A server placement order is calculated based on the data that the View Agents return.

2. Application anti-affinity rules will look at the number of instances of an application running on an RDSH host.  If the number of instances of a particular application is higher than a predefined value, user connections will be directed to another host.  Application anti-affinity rules process after the server placement order has been determined.

There are a couple of things to be aware of with the new load balancing algorithms.  First, they only apply to new sessions, so if a user already has a session on an RDSH server, they will be reconnected to that session and be able to launch any application, even if it violates an anti-affinity rule.

Application anti-affinity rules also do not apply to RDSH desktop sessions.

Linked-Clone Support and Horizon Composer for RDSH

If you had wanted to build an RDSH Farm for Horizon 6.0, you would have had to build, deploy, and manage each server manually.  There was no built-in way for managing server images or server updates.  This could also be an inefficient use of storage.

Horizon 6.2 changes this.  Composer now supports linked-clone RDSH servers.  This brings the benefits of linked-clone desktops, such as automated pool builds, single image management, and system and application consistency, to server-based computing.