Horizon 7.0 Part 3–Desktop Design Considerations

Whether it is Horizon, XenDesktop, or a cloud-based Desktop-as-a-Service provider, the implementation of a virtual desktop and/or published applications environment requires a significant time investment during the design phase.  If care isn’t taken, the wrong design could be put into production, and the costs of fixing it could easily outweigh the benefits of implementing the solution.

So before we move into installing the actual components for a Horizon environment, we’ll spend the next two posts on design considerations.  This post, Part 3, will discuss design considerations for the Horizon virtual desktops, and Part 4 will discuss design considerations for Active Directory.

Virtual desktop environments are all about the end user and what they need.  So before you go shopping for storage arrays and servers, you need to start looking at your desktops.

There are four types of desktops in Horizon 7:

  • Full Clone Desktops – Each desktop is a full virtual machine deployed from a template and managed as an independent virtual machine.
  • Linked Clone Desktops – A linked clone is a desktop that shares its virtual disks with a central replica desktop, and any changes are written to its own delta disk.  Linked clones can be recomposed when the base template is updated or refreshed to a known good state at periodic intervals.  This feature requires Horizon Composer.
  • Instant Clone Desktops – Instant Clone desktops are new to Horizon 7, and they are built off of the VMfork technology introduced with vSphere 6.0.  Instant Clones are essentially a rapid clone of a running virtual machine with extremely fast customization.
  • Remote Desktop Session Host Pools – Horizon 6 expanded RDSH support to include PCoIP support and application remoting.  When RDSH desktops and/or application remoting are used, multiple users are logged into servers that host user sessions.  This feature requires Windows Server 2008 R2 or Server 2012 R2 with the RDSH features enabled.

There are two desktop assignment types for desktop pools:

  • Dedicated Assignment – users are assigned to a particular desktop during their first login, and they will be logged into this desktop on all subsequent logins.
  • Floating Assignment – users are temporarily assigned to a desktop on each login.  On logout, the desktop will be available for other users to log into.  A user may not get the same desktop on each login.

Understanding Use Cases

When you design a virtual desktop environment, you have to design around the use cases.  Use cases are the users, applications, and peripherals, and how they are used to complete a task, and they define many of the requirements in the environment.  The requirements of the users and their applications drive the type of desktops that are used and how they are assigned to users.

Unless you have some overriding constraints or requirements imposed upon your virtual desktop project, the desktop design choices that you make will influence and/or drive your subsequent purchases.   For instance, if you’re building virtual desktops to support CAD users, blade servers aren’t an option because high-end graphics cards will be needed, and if you want/need full clone desktops, you won’t invest in a storage array that doesn’t offer deduplication.

Other factors that may impact the use cases or the desktop design decisions include existing management tools, security policies, and other policies.

Once you have determined your use cases and the impacts that the use cases have on desktop design, you’ll be able to put together a design document with the following items:

  • Number of linked clone base images and/or full clone templates
  • Number and type of desktop pools
  • Number of desktops per pool
  • Number of Connection Servers needed
  • The remote access delivery method

If you’re following the methodology that VMware uses in their design exams, your desktop design document should provide you with your conceptual and logical designs.

The conceptual and logical designs, built on details from the use cases, will influence the infrastructure design.  This phase would cover the physical hardware to run the virtual desktop environment, the network layer, storage fabric, and other infrastructure services such as antivirus.

The desktop design document will have a heavy influence on the decisions that are made when selecting components to implement Horizon 7.  The components that are selected need to support and enable the type of desktop environment that you want to run.

In part four, we will cover Active Directory design for Horizon environments.

Horizon 7.0 Part 2–Horizon Requirements

In order to deliver virtual desktops to end users, a Horizon environment requires multiple components working together in concert.  Most of the components that Horizon relies upon are VMware products, but some of the components, such as the database and Active Directory, are 3rd-party products.

The Basics

The smallest Horizon environment only requires four components to serve virtual desktops to end users: ESXi, vCenter, a View Connection Server, and Active Directory.  The hardware for this type of environment doesn’t need to be anything special, and one server with direct attached storage and enough RAM could support a few users.

All Horizon environments, from the simple one above to a complex multi-site Cloud Pod environment, are built on this foundation.  The core of this foundation is the View Connection Server.

Connection Servers are the brokers for the environment.  They handle desktop provisioning, user authentication, and access, and they also manage connections to multi-user desktops and published applications.

There are four types of Connection Server roles, and all four roles have the same requirements.  These roles are:

  • Standard Connection Server – The first Connection Server installed in the environment.
  • Replica Connection Server – Additional Connection Servers that replicate from the standard connection server
  • Security Server – A stripped down version of the Connection Server, designed to sit in the DMZ and proxy traffic to the Connection Servers.  A Security Server must be “paired” with a Connection Server.
  • Enrollment Server – A new role introduced in Horizon 7.  The Enrollment Server is used to facilitate the new True SSO feature.

The requirements for a Connection Server are:

  • 1 CPU, 4 vCPUs recommended
  • Minimum 4GB RAM, 10GB recommended if 50 or more users are connecting
  • Windows Server 2008 R2 or Windows Server 2012 R2
  • Joined to an Active Directory domain
  • Static IP Address

Note: The requirements for the Security Server and the Enrollment Server are the same as the requirements for the Connection Server.  Security Servers do not need to be joined to an Active Directory domain.
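Before running the Connection Server installer, it can be useful to sanity-check a candidate server against these requirements.  The snippet below is a minimal PowerShell sketch using built-in Windows cmdlets; it assumes Windows Server 2012 R2 or newer (Get-CimInstance and Get-NetIPAddress are not available on a default Server 2008 R2 install), and the thresholds simply mirror the list above.

# Minimal sketch: check a prospective Connection Server against the requirements above.
$cs  = Get-CimInstance Win32_ComputerSystem
$os  = Get-CimInstance Win32_OperatingSystem
$ram = [math]::Round($cs.TotalPhysicalMemory / 1GB, 1)

"OS:            $($os.Caption)"
"Domain joined: $($cs.PartOfDomain)"
"RAM (GB):      $ram  (4 GB minimum, 10 GB recommended for 50+ users)"
"Logical CPUs:  $($cs.NumberOfLogicalProcessors)  (4 vCPUs recommended)"

# A DHCP-assigned address suggests the static IP requirement is not met yet.
Get-NetIPAddress -AddressFamily IPv4 |
    Where-Object { $_.PrefixOrigin -eq 'Dhcp' } |
    ForEach-Object { "Warning: $($_.IPAddress) on $($_.InterfaceAlias) is DHCP-assigned" }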

In addition to the latest version of the View Connection Server, the requirements for the rest of the core components are:

ESXi – ESXi is required for hosting the virtual machines.  The versions of ESXi that are supported by Horizon 7 can be found in the VMware compatibility matrix.  ESXi 5.0 Update 1 and newer, excluding ESXi 5.5 vanilla, are currently supported.  However, ESXi 6.0 Update 1 and newer are required for Instant Clones.

vCenter Server – The versions of vCenter that are supported by Horizon 7 can be found in the VMware compatibility matrix.  vCenter Server 5.0 Update 1 and newer, excluding vCenter 5.5 vanilla, are currently supported, and vCenter 6.0 Update 1 and newer are required to support Instant Clones.  The vCenter Server Appliance and the Windows vCenter Server application are supported.

Active Directory – An Active Directory environment is required to handle user authentication to virtual desktops, and the domain must be set to at least the Server 2008 functional level.  Group Policy is used for configuring parts of the environment, including desktop settings, roaming profiles, user data redirection, UEM, and the remoting protocol.   

Advanced Features

Horizon View has a lot of features, and many of those features require additional components to take advantage of them.  These components add options like secure remote access, profile management, and linked-clone desktops.

Secure Remote Access – There are a couple of options for providing secure remote access to virtual desktops and published applications.  Traditionally, remote access has been provided by the Horizon Security Server.  The Security Server is a stripped down version of the connection server that is designed to be deployed into a DMZ.  It also requires each server to be paired with a Connection Server.

There are two other remote access options.  The first is the Horizon Access Point.  The access point comes from the Horizon Air platform, and it was introduced in the on-premises solution in Horizon 6.2.2.  The Access Point is a hardened Linux appliance that is designed to be managed like a cloud appliance, and it serves the same function as the Security Server.  Unlike the Security Server, the Access Point does not need to be paired with a Connection Server.

Both the Security Server and the Access Point can be load balanced for high availability.

The other remote access option is the Horizon proxy built into the F5 APM module.  The APM module combines load balancing and rule-based secure remote access.  It can also replace the portal feature in vIDM.

Linked-Clone Desktops – Linked Clones are virtual machines that share a set of parent disks.  They are ideal for some virtual desktop environments because they can provide a large number of desktops without having to invest in new storage technologies, and they can reduce the amount of work that IT needs to do to maintain the environment.  Linked Clones are enabled by Horizon Composer.

The requirements for Horizon Composer are:

  • 2 vCPUs, 4 vCPUs recommended 
  • 4 GB RAM, 8GB required for deployments of 50 or more desktops
  • Windows Server 2008 R2 or Server 2012 R2
  • Database server – supported databases include Oracle and Microsoft SQL Server.  Please check the compatibility matrix for specific versions and service packs.
  • Static IP Address

Horizon Composer also requires a database.  The database requirements can be found in the VMware Product Interoperability Matrix.  The current requirements include SQL Server 2014 (RTM and SP1), SQL Server 2012 (SP2) and Oracle 12c Release 1.
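Composer connects to its database through a 64-bit system ODBC DSN that you create before running the installer.  The lines below are a rough sketch of creating one with the built-in Add-OdbcDsn cmdlet (available on Server 2012 and newer; it also assumes the SQL Server Native Client is installed).  The DSN name, SQL server, and database below are placeholders, not values from this post.

# Rough sketch: create the 64-bit system DSN the Composer installer asks for.
# The DSN name, SQL server, and database below are placeholders.
Add-OdbcDsn -Name "ViewComposerDB" -DriverName "SQL Server Native Client 11.0" `
    -DsnType System -Platform 64-bit `
    -SetPropertyValue @("Server=sql01.corp.local", "Database=ViewComposer")

# Quick connectivity check against the database server on the default SQL port.
Test-NetConnection -ComputerName sql01.corp.local -Port 1433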

Networking Requirements – Horizon requires a number of ports to be opened to allow the various components of the infrastructure to communicate.  The best source for showing all of the ports required by the various components is the VMware Horizon 7 Network Ports diagram, which VMware publishes in PDF format.
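If you just want to spot-check connectivity from a client network, something like the PowerShell below can help.  It is a minimal sketch, assuming PowerShell 4 or newer; the server name is a placeholder, and it only touches a few of the common client-facing ports (HTTPS, PCoIP, and Blast), so the diagram remains the authoritative list.

# Spot-check a few common client-facing Horizon ports (not an exhaustive list).
# cs01.corp.local is a placeholder for a Connection Server or Security Server.
$server = "cs01.corp.local"
foreach ($port in 443, 4172, 8443) {
    $result = Test-NetConnection -ComputerName $server -Port $port -WarningAction SilentlyContinue
    "{0} TCP {1} reachable: {2}" -f $server, $port, $result.TcpTestSucceeded
}
# PCoIP and Blast also use UDP on 4172/8443, which Test-NetConnection does not test.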

Other Components:  The Horizon Suite includes a number of tools to provide administrators with a full-fledged ecosystem for managing their virtual end-user computing environments.  These tools are App Volumes, User Environment Manager, VMware Identity Manager (vIDM), and vRealize Operations for Horizon.  The requirements for these tools will be covered in their respective sections.

Horizon 7.0 Part 1–Introduction

I realize that this series might seem like it’s a little late.  After all, Horizon 7.0 has been out for a few months now.  But between a very large writing project and wanting to take a few weeks off from writing, it’s time to get started with the comprehensive Horizon 7.0 series.

There have been a lot of updates and new features added to Horizon 7.0, and I covered most of those updates in a post back in February after the initial Horizon 7 announcement.  The major features are:

  • Instant Clones – New Provisioning Method
  • Blast Extreme – New Remoting Protocol
  • UEM Smart Policies – New Context Aware Policy Management

Those are just what I consider the major features that will impact most deployments.  There are a lot of other improvements as well, including improvements to scalability through CloudPod, security through True SSO, and new client redirection features to support additional use cases.

Unlike my previous series, I plan to go beyond installing the core Horizon 7.0 components.  This year, I hope to cover Access Point, RDSH, UEM, vRealize Operations for Horizon, and the latest version of App Volumes.  No, that won’t be App Volumes 3.0.  But I could cover that too.

Stay tuned.  There will be much more to come.

What’s New in NVIDIA GRID Licensing

When GRID 2.0 was announced at VMworld 2015, it included a licensing component for the driver and software portion of the product.  NVIDIA has recently revised and simplified the licensing model.  They have also added a subscription-based model for customers that don’t want to buy a perpetual license and pay for support on a yearly basis.

Major Changes

There are a few major changes to the licensing model.  The first is that the Virtual Workstation Extended licensing tier has been deprecated, and the features from this level have been added into the Virtual Workstation licensing tier.  This means that high-end features, such as dedicating an entire GPU to a VM and CUDA support, are now available in the Virtual Workstation licensing tier.

The second major change is a licensing SKU for XenApp and Published Applications.  In the early version of GRID 2.0, licensing support for XenApp and Horizon Published Applications was complicated.  The new model provides for per-user licensing for server-based computing.

The third major change is a change to how the license is enforced.  In the original incarnation of GRID 2.0, a license server was required for utilizing the GRID 2.0 features.  That server handled license enforcement, and if it wasn’t available, or there were no licenses available, the desktops were not usable.  In the latest license revision, the license has shifted to EULA enforcement.  The license server is still required, but it is now used for reporting and capacity planning.

The final major change is the addition of a subscription-based licensing model.  This new model allows organizations to purchase licenses as they need them without having to make a large capital outlay.  The subscription price includes software support.  Subscriptions can also be purchased in multi-year blocks, so an organization can pay for three years at one time.

One major difference between the perpetual and subscription models is what happens when support expires.  In the perpetual model, you own the license, so if you allow support to expire, you can still use the licensed features.  However, you will not be able to get software updates.  In the subscription model, the licensed features are no longer available as soon as the subscription expires.

The new pricing for GRID 2.0 is:

Name | Perpetual Licensing | Subscription Licensing (yearly)

Virtual Apps | $20 + $5 SUMS | $10
Virtual PC | $100 + $25 SUMS | $50
Virtual Workstation | $450 + $100 SUMS | $250

Software support (SUMS) for the first year is not included when you purchase a perpetual license, but purchasing the first year of support is required when buying perpetual licenses.  A license is also required if you plan to use direct pass-through with a GRID card.
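To put the two models side by side using the list prices above: a Virtual PC perpetual license runs $100 up front plus $25 per year in SUMS, or $175 over three years, while the subscription is $50 per year, or $150 over the same three years.  The perpetual license keeps working after support lapses and the subscription does not, so the break-even point depends on how long you plan to run the deployment and whether you want to stay current on software updates.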

It’s Time To Reconsider My Thoughts on GPUs in VDI…

Last year, I wrote that it was too early to consider GPUs for general VDI use and that they should be reserved only for VDI use cases where they are absolutely required.  There were a number of reasons for this, including user density per GPU, the lack of monitoring and vMotion support, and economics.  That led to a Frontline Chatter podcast discussing this topic in more depth with industry expert Thomas Poppelgaard.

When I wrote that post, I said that there would be a day when GPUs would make sense for all VDI deployments.  That day is coming soon.  There is a killer app that will greatly benefit all users (in certain cases) who have access to a GPU.

Last week, I got to spend some time out at NVIDIA’s Headquarters in Santa Clara taking part in NVIDIA GRID Days.  GRID Days was a two day event interacting with the senior management of NVIDIA’s GRID product line along with briefings on the current and future technology in GRID.

Disclosure: NVIDIA paid for my travel, lodging, and some of my meals while I was out in Santa Clara.  This has not influenced the content of this post.

The killer app that will drive GPU adoption in VDI environments is Blast Extreme.  Blast Extreme is the new protocol being introduced in VMware Horizon 7 that utilizes H.264 as the codec for the desktop experience.  The benefit of using H.264 over other codecs is that many devices include hardware for encoding and decoding H.264 streams.  This includes almost every video card made in the last decade.

So what does this have to do with VDI?

When a user is logged into a virtual desktop or is using a published application on an RDSH server, the desktop and applications that they’re interacting with are rendered, captured, and encoded into a stream of data, and then transported over the network to the client.  Normally, this encoding happens in software and uses CPU cycles.  (PCoIP has hardware offload in the form of APEX cards, but these only handle the encoding phase; rendering still happens elsewhere.)

When GPUs are available to virtual desktops or RDSH/XenApp servers, the rendering and encoding tasks can be pushed into the GPU where dedicated and optimized hardware can take over these tasks.  This reduces the amount of CPU overhead on each desktop, and it can lead to snappier user experience.  NVIDIA’s testing has also shown that Blast Extreme with GPU offload uses less bandwidth and has lower latency compared to PCoIP.

Note: These aren’t my numbers, and I haven’t had a chance to validate these findings in my lab.  When Horizon 7 is released, I plan to do similar testing of my own comparing PCoIP and Blast Extreme in both LAN and WAN environments.

If I use Blast Extreme, and I install GRID cards in my hosts, I gain two tangible user experience benefits.  Users now have access to a GPU, which many applications, especially Microsoft Office and most web browsers, tap into for processing and rendering.  And they gain the benefits of using that same GPU to encode the H.264 streams that Blast Extreme uses, potentially lowering the bandwidth and latency of their session.  This, overall, translates into significant improvements in their virtual desktop and published applications experience*.

Many of the limitations of vGPU still exist.  There is no vMotion support, and performance analytics are not fully exposed to the guest OS.  But density has improved significantly with the new M6 and M60 cards.  So while it may not be cost effective to retrofit GPUs into existing Horizon deployments, GPUs are now worth considering for new Horizon 7 deployments.

*Caveat: If users are on a high latency network connection, or if the connection has a lot of contention, you may have different results.

What’s New – Horizon 7.0

(Edit: Updated to include a Blast Extreme feature I missed.)

Last week, VMware announced App Volumes 3.0.  It was a taste of the bigger announcements to come in today’s Digital Enterprise event.  And it is a huge announcement.  Just a few short months after unveiling Horizon 6.2, VMware has managed to put together another major Horizon release.  Horizon 7.0 brings some significant enhancements and new features to the end-user computing space, including one long-awaited feature.

Before I talk about the new features, I highly recommend that you register for VMware’s Digital Enterprise event if you have not done so yet.  They will be covering a lot of the features of the new Horizon Suite offerings in the webinar.  You can register at http://www.vmware.com/digitalenterprise?src=sc_569fec388f2c9&cid=70134000000Nz2D.

So without further ado, let’s talk about Horizon 7’s new features.

Instant Clones

Instant Clones debuted during the Day 2 Keynote at VMworld 2014.  After receiving a lot of hype as the future of desktop provisioning, they kind of faded into the background for a while.  I’m pleased to announce that Horizon 7 will feature Instant Clones as a new desktop provisioning method.

Instant Clones utilize VMware’s vmFork technology to rapidly provision desktop virtual machines from a running and quiesced parent virtual desktop.  Instant clones share both the memory and the disk of the parent virtual machine, and this technology can provide customized and domain joined desktops quickly as they are needed.  These desktops are destroyed when the user logs off, and if a new desktop is needed, it will be cloned from the parent when requested by a user.  Instant clones also enable administrators to create elastic pools that can expand or shrink the number of available desktops based on demand.

Although they might not be suited for all use cases, there are a couple of benefits to using instant clones over linked clones.  These are:

  • Faster provisioning – Instant Clones provision in seconds compared to minutes for linked clones
  • No Boot Storms – The parent desktop is powered on, and all instant clones are created in a powered-on state
  • Simplified Administration – No need to perform refresh or recompose operations to maintain desktops.
  • No need to use View Composer

Although instant clones were not available as a feature in Horizon 6.2, it was possible to test out some of the concepts behind the technology using the PowerCLI extensions fling.  Although I can’t validate all of the points above, my experiences after playing with the fling show that provisioning is significantly faster and boot storms are avoided.

There are some limitations to instant clones in this release.  These limitations may preclude them from being used in some environments today.  These limitations are:

  • RDSH servers are not currently supported
  • Floating desktop pools only.  No support for dedicated assignment pools.
  • 2000 desktops maximum
  • Single vCenter and single VLAN only
  • Limited 3D support – no support for vGPU or vDGA, limited support for vSGA.
  • VSAN or VMFS datastores only.  NFS is not supported.

Desktop personalization for instant clones is handled using App Volumes User Writable drives and UEM.

Blast Extreme

VMware introduced HTML5 desktop access using the Blast protocol in Horizon 5.2 back in 2013.  This provided another method for accessing virtual desktops and, later, published applications.  But it had a few deficiencies as well – it used port 8443, was feature limited compared to PCoIP, and was not very bandwidth efficient.

The latest version of Horizon adds a new protocol for desktop access – Blast Extreme.  Blast Extreme is a new protocol that is built to provide better multimedia experiences while using less bandwidth to deliver the content.  It is optimized for mobile devices and can provide better battery life compared to the existing Horizon protocols.


Most importantly, Blast Extreme has feature parity with PCoIP.  It supports all of the options and features available today including client drive redirection, USB, unified communications, and local printing.

Unlike the original Blast, Blast Extreme is not strictly a web-only protocol.  It can be used with the new Windows, MacOS, Linux, and mobile device clients, and it works over the standard HTTPS port (443).  This simplifies access and allows users to connect in many locations where ports 8443 and 8172 are blocked.

Blast Extreme is a dual-stack protocol.  That means that it will work over both TCP and UDP.  UDP is the preferred communications method, but if that is not available, it will fall back to TCP-based connections.

Smart Policies

What if your use case calls for disabling copy and paste or local printing when users log in from home?  Or what if you want to apply a different PCoIP profile based on the branch office users are connecting from?  In previous versions of Horizon, this would require a different pool for each use case, with configurations handled either in the base image or Group Policy.  This could be cumbersome to set up and administer.

Horizon 7 introduces Smart Policies.  Smart Policies utilize the UEM console to create a set of policies that control desktop behavior based on a number of factors, including the user’s group membership and location, and they are evaluated and applied whenever a user logs in or reconnects.  Smart Policies can control a number of capabilities of the desktop, including client drive redirection, clipboard redirection, and printing, and they can also control or restrict which applications can be run.

Enhanced 3D Support

Horizon 6.1 introduced vGPU and improved the support for workloads that require 3D acceleration.  vGPU is limited, however, to NVIDIA GRID GPUs.

Horizon 7 includes expanded support for 3D graphics acceleration, and customers are no longer restricted to NVIDIA.  AMD S7150 series cards are supported in a multi-user vDGA configuration that appears to be very similar to vGPU.  Intel Iris Pro GPUs are also supported for vDGA on a 1:1 basis.

Cloud Pod Architecture

Cloud Pod Architecture has been expanded to support 10 Horizon pods in four sites.  This enables up to 50,000 user sessions.

Entitlement support has also been expanded – home site assignment can be set for nested AD security groups.

Other enhancements include improved failover support, which automatically redirects users to available resources in other sites when resources are not available in the preferred site, and full integration with vIDM.

Other Enhancements

Other enhancements in Horizon 7 include:

  • Unified Management Console for App Volumes, UEM, and monitoring.  The new management console also includes a REST API to support automating management tasks.
  • A new SSO service that integrates vIDM, Horizon, Active Directory, and a certificate authority.
  • Improvements to the Access Point appliance.
  • Improved printer performance
  • Scanner and Serial redirection support for Windows 10
  • URL Content redirection
  • Flash Redirection (Tech Preview)
  • Scaled Resolution for Windows Clients with high DPI displays
  • HTML Access 4.0 – Supports Linux, Safari on iOS, and F5 APM

Thoughts

Horizon 7 provides another leap in Horizon’s capabilities, and VMware continues to reach parity or exceed the feature sets of their competition.

Home Lab Update

Back in October of 2014, I wrote a post about the (then) current state of my home lab.  My lab has grown a lot since then, and I’ve started building a strategy around my lab to cover technologies that I wanted to learn and the capabilities I would need to accomplish those learning goals.

I’ve also had some rather spectacular failures in the last year.  Some of these failures have been actual lab failures that have impacted the rest of the home network.  Others have been buying failures – equipment that appeared to meet my needs and was extremely cheap but ended up having extra costs that made it unsuitable in the long run.

Home Lab 1.0

I’ve never really had a strategy when it comes to my home lab.  Purchasing new hardware happened when I either outgrew something and needed capacity or to replace broken equipment.  If I could repurpose it, an older device would be “promoted” from running an actual workload to providing storage or some other dedicated service.

But this became unsustainable when I switched over to a consulting role.  There were too many things I needed, or wanted, to learn and try out that would require additional capacity.  My lab also had a mishmash of equipment, and I wanted to standardize on specific models.  This has two benefits – I can easily ensure that I have a standard set of capabilities across all components of the lab and it simplifies both upgrades and management.

The other challenge I wanted to address as I developed a strategy was separating out the “home network” from the lab.  While there would still be some overlap, such as wireless and Internet access, it was possible to take down my entire network when I had issues in my home lab.  This actually happened on one occasion last August when the vDS in my lab corrupted itself and brought everything down.

The key technologies that I wanted to focus on with my lab are:

  1. End-User Computing:  I already use my lab for the VMware Horizon Suite.  I want to expand my VDI knowledge to include Citrix. I also want to spend time on persona management and application layering technologies like Liquidware Labs, Norskale, and Unidesk.
  2. Automation: I want to extend my skillset to include automation.  Although I have vRO deployed in my lab, I have never touched things like vRealize Automation and Puppet.  I also want to spend more time on PowerShell DSC and integrating it into vRO/vRA.  Another area I want to dive back into is automating Horizon environments – I haven’t really touched this subject since 2013.
  3. Containers: I want to learn more about Docker and the technologies surrounding it including Kubernetes, Swarm, and other technology in this stack.  This is the future of IT.
  4. Nutanix: Nutanix has a community edition that provides their hyperconverged storage technology along with the Acropolis Hypervisor.  I want to have a single-node Nutanix CE cluster up and running so I can dive deeper into their APIs and experiment with their upcoming Citrix integration.  At some point, I will probably expand that cluster to three nodes and use it for a home “private cloud” that my kids can deploy Minecraft servers into.

There are also a couple of key capabilities that I want in my lab.  These are:

  1. Remote Power Management:  This is the most important factor when it comes to my compute nodes.  I don’t want to have them running 24×7, but at the same time, I don’t want to have to call up my wife and have her turn things on when I’m traveling.  The compute nodes I buy need to have some sort of integrated remote management capability, preferably one with an API, that does not require an external IP KVM or Wake-on-LAN.  (A rough example of what this enables is sketched after this list.)
  2. Redundancy: I’m trying to avoid single-points of failure whenever possible.  Since much of my equipment is off-lease or used, I want to make sure that a single failure doesn’t take everything down.  I don’t have redundancy on all components – my storage, for instance, is a single Synology device due to budget constraints.  Network and Compute, however, are redundant.  Future lab roadmaps will address storage redundancy through hyperconverged offerings like ScaleIO and Nutanix CE.
  3. Flexibility: My lab needs to be able to shift between a number of different technologies.  I need to be able to jump from EUC to Cloud to containers without having to tear things down and rebuild them.  While my lab is virtualized, I will need to have the capacity to build and maintain these environments in a powered-off state.
  4. Segregation: A failure in the lab should not impact key home network services such as wireless and Internet access.
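As an example of what integrated remote management enables (the sketch referenced in the first item above), the following uses Dell’s racadm utility to check and set power state through each host’s iDRAC.  The iDRAC addresses and credentials are placeholders, racadm needs to be installed on the machine running the loop, and this is an illustration rather than a hardened script.

# Rough sketch: power lab hosts on through their iDRACs with remote racadm.
# The IP addresses and credentials below are placeholders.
$idracs = "192.168.10.21", "192.168.10.22"

foreach ($idrac in $idracs) {
    # Check the current power state, then power the host on if it is off.
    $status = racadm -r $idrac -u root -p "calvin" serveraction powerstatus
    if ($status -match "OFF") {
        racadm -r $idrac -u root -p "calvin" serveraction powerup
    }
}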

What’s in Home Lab 1.0

The components of my lab are:

Compute

Aside from one exception, I’ve standardized my compute tier on Dell 11th Generation servers.  I went with these particular servers because there are a number of off-lease boxes on eBay, and you can usually find good deals on servers that come with large amounts of RAM.  RAM prices are also fairly low, and other components like iDRACs are readily available.

I have also standardized on the following components in each server:

  • iDRAC Enterprise for Remote Management
  • Broadcom 5709 Dual-Port Gigabit Ethernet
  • vSphere 6 Update 1 with the Host Client and Synology NFS Plugin installed

I have three vSphere clusters in my lab.  These clusters are:

  • Management Cluster
  • Workload Cluster
  • vGPU Cluster

The Management cluster consists of two PowerEdge R310s.  These servers have a single Xeon X3430 processor and 24GB of RAM.  This cluster is not built yet because I’ve had some trouble locating compatible RAM – the fairly common 2Rx4 DIMMs do not work with this server.  I think I’ve found some 2Rx8 or 4Rx8 DIMMs that should work.  The management cluster uses standard switches, and each host has a standard switch for Storage and a standard switch for all other traffic.
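For reference, standing up that standard-switch layout with PowerCLI looks roughly like the sketch below.  It assumes a connected PowerCLI session, and the host name, vmnic, and VLAN IDs are placeholders rather than my actual values.

# Rough sketch: build the management-cluster standard switches described above.
$vmhost = Get-VMHost -Name "mgmt01.lab.local"

# One vSwitch dedicated to NFS storage traffic...
$storageSwitch = New-VirtualSwitch -VMHost $vmhost -Name "vSwitch1" -Nic "vmnic1"
New-VirtualPortGroup -VirtualSwitch $storageSwitch -Name "Storage-NFS" -VLanId 20

# ...and port groups for everything else on the default vSwitch0.
$trafficSwitch = Get-VirtualSwitch -VMHost $vmhost -Name "vSwitch0"
New-VirtualPortGroup -VirtualSwitch $trafficSwitch -Name "VM-Traffic" -VLanId 10
New-VirtualPortGroup -VirtualSwitch $trafficSwitch -Name "vMotion" -VLanId 30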

The Workload cluster consists of two PowerEdge R710s.  These servers have a pair of Xeon E5520 processors and 96GB of RAM.   My original plan was to upgrade each host to 72GB of RAM, but I had a bunch of 8GB DIMMs from my failed R310 upgrades, and I didn’t want to pay return shipping or restocking fees.  The Workload cluster is configured with a virtual distributed switch for storage, a vDS for VM traffic, and a standard switch for management and vMotion traffic.

The vGPU cluster is the only cluster that doesn’t follow the hardware standards.  The server is a Dell PowerEdge R730 with 32GB of RAM.  The server is configured with the Dell GPU enablement kit and currently has an NVIDIA GRID K1 card installed.

My Nutanix CE box is a PowerEdge R610 with 32GB of RAM.

Storage

The storage tier of my lab consists of a single Synology Diskstation 1515+.  It has four 2 TB WD Red drives in a RAID 10 and a single SSD acting as a read cache.  A single 2TB datastore is presented to my ESXi hosts using NFS.  The Synology also has a couple of CIFS shares for things like user profiles and network file shares.
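Mounting that NFS export on each host with PowerCLI is a one-liner per host.  The sketch below assumes a connected PowerCLI session; the datastore name, Synology IP, and export path are placeholders.

# Rough sketch: mount the Synology NFS export on every host in the lab.
foreach ($vmhost in Get-VMHost) {
    New-Datastore -VMHost $vmhost -Nfs -Name "Synology-NFS01" `
        -NfsHost "192.168.20.10" -Path "/volume1/vmware"
}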

Network

The network tier consists of a Juniper SRX100 firewall and a pair of Linksys SRW2048 switches.  The switches are not stacked but have similar configurations for redundancy.  Each server and the Synology are connected into both fabrics.

I have multiple VLANs on my network to segregate different types of traffic.  Storage, vMotion, and management traffic are all on their own VLANs.  Other VLANs are dedicated to different types of VM traffic.

That’s the overall high-level view of the current state of my home lab.  One component I haven’t spent much time on so far is my Horizon design.  I will cover that in depth in an upcoming post.

A Look Back at 2015

The end of the year is just a few days away.  It’s time to take a look back at 2015 and a look ahead at 2016.

Year in Review

I got to experience a lot of new things and participate in some great opportunities in 2015.  Highlights include:

  • Presented at the first North Central Wisconsin VMUG meeting
  • Wrote for Virtualization Review
  • Made a career change.  I went to Ahead as a Data Center Engineer
  • Attended Virtualization Field Day in June as a delegate
  • Was selected to be part of the VMware EUC vExperts group
  • Rebranded my blog.  I changed the URL from seanmassey.net to thevirtualhorizon.com

Goals

When I wrote my 2014 Year in Review post, I had also set three goals for 2015.  Those goals were:

  1. Get my VCDX
  2. Make a career change and go to a VAR/partner or vendor
  3. Find a better work/life/other balance

I accomplished 2/3rds of these goals.  In April, I made the move to Ahead, a consulting firm based out of Chicago.  This move has also enabled me to have a better work/life/other balance – when I’m home, I can now pick up my son from school.

I haven’t started on my VCDX yet, and this goal is sitting in the waiting queue.  There are a couple of reasons for this.  First, there were some large areas in my selected design that I would have had to fictionalize.  Secondly, and more importantly, there will be other opportunities to do a design based on an actual client.  I plan to keep this on my goals list and revisit it in 2016.

Although obtaining my VCDX will be my main goal for 2016, I have a few other smaller goals that I plan to work towards as well:

  • Write More – Although it can be time-consuming, I like writing.  The thing is, I like it as a hobby.  Writing professionally was an interesting experience, but took a lot of the fun out of blogging.  I would like to get back into the habit of blogging on a regular basis in 2016.
  • Expand my Skillsets – I’d like to spend more time learning the private cloud and automation toolkits, especially things like Puppet, Chef, and OpenStack.  I’d also like to spend more time on HyperConverged solutions like Nutanix.  I plan on expanding my lab to be able to dabble in this more. 

Blog Statistics

I didn’t do as much blogging in 2015 as I did in 2014.  There are a few reasons for this.  First, I passed on participating in the Virtual Design Master 30-in-30 challenge this year.  Second, a lot of content I would have written for my blog earlier in the year was directed to Virtualization Review instead, so I did not have a lot of original stuff to write.

I normally don’t care about blog stats, but I think it’s a fun exercise to take a look back at the year and compare it briefly to previous years.  As of December 27th, I had written 21 blog posts.  This was down from 90 posts in 2014.  Page views are about the same.  I ended 2014 with 151,862 page views by 55,471 visitors.  Year-to-date in 2015, I have had 151,862 page views by 64,618 visitors.


And a Big Thank You Goes Out to…

I’m not on this journey alone, and another great year wouldn’t have been possible without the vCommunity.  A few people I’d like to call out are:

  • My wife Laura for weathering the transition to a consulting role
  • Brian Suhr – who has enabled me to take the next steps in my career
  • Jarian Gibson and Andrew Morgan
  • The entire team at Ahead for being some of the smartest people I’ve ever worked with and always having time to help someone out with questions
  • Stephen Foskett and Tom Hollingsworth for inviting me to participate in Virtualization Field Day

A Day of Giving Thanks

Today, the United States celebrates Thanksgiving.  It’s a day that we come together with our families to eat a little turkey, watch some football, and give thanks for the good things in our lives. 

I have a lot to be thankful for this year.  Some of the things I’m thankful for are:

1. An amazing and supportive family.

2. An awesome and challenging job with some of the smartest people I know.

3. A great community that enables passionate IT professionals to come together and share with each other.  Although I might only see people at VMUGs and conferences, I’ve come to consider many people friends.

4.  A Bears victory over the Packers…at Lambeau.

I hope everyone has a great Thanksgiving.

Horizon EUC Access Point Configuration Script

Horizon 6.2 included a new feature when it was launched in early September – the EUC Access Gateway.  This product is a hardened Linux appliance that has all of the features of the Security Server without the drawbacks of having to deploy Windows Servers into your DMZ.  It will also eventually support Horizon Workspace/VMware Identity Manager.

This new Horizon component embraces the “cattle philosophy” of virtual machine management.  If it stops working properly, or a new version comes out, it’s meant to be disposed of and redeployed.  To facilitate this, the appliance is configured and managed using a REST API.

Unfortunately, working with this REST API isn’t exactly user friendly, especially if you’re only deploying one or two of these appliances.  This API is also the only way to manage the appliance, and it does not have a VAMI interface or SSH access.
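If you want to see what’s happening under the hood, a raw call with Invoke-RestMethod looks roughly like the sketch below.  The appliance admin API listens on port 9443 over HTTPS; the resource path shown is based on my appliance version and should be treated as an assumption, so verify it against the API documentation for your release.  You may also need to relax certificate validation if the appliance is still using its self-signed certificate.

# Rough sketch: read the appliance's current configuration straight from the REST API.
# The appliance address and password are placeholders; the resource path may vary by version.
$appliance = "10.1.1.2"
$cred = New-Object System.Management.Automation.PSCredential `
            ("admin", (ConvertTo-SecureString "P@ssw0rd" -AsPlainText -Force))

Invoke-RestMethod -Uri "https://${appliance}:9443/rest/v1/config/settings" -Method Get -Credential $cred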

I’ve put together a PowerShell script that simplifies and automates the configuration of the EUC Access Gateway Appliances.  You can download the script off of my Github site.

The script has the following functions:

  • Get the appliance’s current Horizon View configuration
  • Set the appliance’s Horizon View configuration
  • Download the log bundle for troubleshooting

There are also placeholder parameters for configuring vIDM (which will be supported in future releases) and uploading SSL certificates.

The syntax for this script’s main features look like:

Set-EUCGateway -appliancename 10.1.1.2 -adminpassword P@ssw0rd -GetViewConfig

Set-EUCGateway -appliancename 10.1.1.2 -adminpassword P@ssw0rd -SetViewConfig -ViewEnablePCoIP -ViewPCoIPExternalIP 10.1.1.3 -ViewDisableBlast

Set-EUCGateway -appliancename 10.1.1.2 -adminpassword P@ssw0rd -GetLogBundle -LogBundleFolder c:\temp

If you have any issues deploying a config, use the script to download a log bundle and open the admin.log file.  This file will tell you what configuration element was rejected.

I want to point out one troubleshooting note that my testers and I both experienced when developing this script.  The REST API does not work until an admin password is set on the appliance.  One thing we discovered is that there were times when the password would not be set despite one being provided during the deployment.  If this happens, the script will fail when you try to get a config, set a config, or download the log bundle.

When this happens, you either need to delete the appliance and redeploy it or log into the appliance through the vSphere console and manually set the admin password.

Finally, I’d like to thank Andrew Morgan and Jarian Gibson for helping test this script and providing feedback that greatly improved the final product.