Patch Tuesday VDI Pains? We’ve got a script for that…Part 1

Having a non-persistent VDI environment doesn’t mean that Patch Tuesday is no longer a pain. In fact, it may mean more work for the poor VDI administrator who needs to make sure that all the Parent VMs are updated for a recompose. Some of the techniques for addressing patching, such as Wake-on-LAN and scheduled installs, don’t necessarily apply to VDI Parent VMs, and there is the additional step of snapshotting to prepare those Parent VMs for deployment. And if an extended patch management solution like SolarWinds Patch Manager (formerly EminentWare) or Shavlik SCUPdates is not available, you’ll still need to manually manage updates for security-risk products like Adobe Flash and Java.

So how can you streamline this process and make life easier for the VDI administrator? There are a couple of ways to address the various pain points of patching VDI desktops, both for Windows Updates and for other applications that might be installed in your environment.

This will be part 1 of 2 posts on how to automate updates for Linked-Clone parent VMs. This post will cover the process of updating, and the second post will dive into the code.

Patch Tuesday Process

Before we can start to address the various pain points, let’s look at how Patch Tuesday works for a non-Persistent linked-clone VDI environment and how it differs from a normal desktop environment. In a normal desktop environment, you can schedule Windows Updates to install after hours and use a Wake-on-LAN tool to make sure that every desktop is powered on to receive those updates and reboot automatically through Group Policy.

That procedure doesn’t apply for Linked-Clone desktops, and some additional orchestration is required to get the Linked-Clone parent VMs patched and ready for recompose. When patching Linked-Clone desktop images, you need to do the following:

  1. Power on the Linked-Clone Parent VMs.
  2. Log in to install Windows and other updates.
  3. Reboot the machine.
  4. Repeat steps two and three as necessary.
  5. Once all updates are installed, shut down the VM.
  6. Take a snapshot.
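The power and snapshot steps above map almost one-to-one onto PowerCLI cmdlets. Here is a minimal sketch of the power-on, shutdown, and snapshot portions; the VM name and snapshot naming convention are hypothetical, and an existing Connect-VIServer session is assumed:

```powershell
# Sketch: power operations and snapshotting for a Linked-Clone Parent VM.
# Assumes VMware Tools is installed in the guest; the VM name and
# snapshot name below are examples only.
$ParentVM = Get-VM -Name "Win7-Parent"

# Step 1: power on the Parent VM
Start-VM -VM $ParentVM -Confirm:$false

# ...steps 2-4 (installing updates and rebooting) happen inside the guest...

# Step 5: gracefully shut down the guest OS and wait for power-off
Stop-VMGuest -VM $ParentVM -Confirm:$false
while ((Get-VM -Name $ParentVM.Name).PowerState -ne "PoweredOff") {
    Start-Sleep -Seconds 15
}

# Step 6: take a snapshot named for the patch cycle
New-Snapshot -VM $ParentVM -Name ("Patched-" + (Get-Date -Format "yyyy-MM-dd"))
```

The polling loop matters because Stop-VMGuest only requests a guest shutdown and returns immediately; taking the snapshot before the VM is fully powered off would capture a running machine.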

Adobe seems to have selected Microsoft’s Patch Tuesday (the 2nd Tuesday of the month) as their patch release date. Oracle, however, does not release on the same cycle, so Java would require a second round of updates and recomposes if it is needed in your environment.

If you have more than a few linked-clone parent VMs in your environment, there is a significant time commitment involved in keeping them up-to-date. Even if you do your updates concurrently and do other things while the updates are installing, there are still parts that have to be done by hand such as power operations, snapshotting, and installing 3rd-party updates that aren’t handled through WSUS.

One Patch Source to Rule Them All

Rather than dealing with downloading and running the update on each Linked-Clone parent VM, or relying on the auto-update utility that comes with some of these smaller applications and plugins, we’ve standardized on one primary delivery mechanism at $work – Microsoft WSUS. WSUS handles the majority of the patches we deploy during the month, and it has a suite of built-in reports to track updates and the computers they’re installed on, or fail to install on. This makes it the perfect centerpiece for patch management in the environment at $work.

But WSUS doesn’t handle Adobe, Java, or other updates natively. WSUS 3.0 introduced an API that a number of patch management products use to add 3rd-party updates to the system. One of these products is an open-source solution called Local Updates Publisher.

Local Updates Publisher is a solution that allows an administrator to take a software update, be it an EXE, MSI, or MSP file, repackage it, and deploy it through WSUS. Additional rules can be built around this package to determine which machines are eligible for the update, and those updates can be approved, rejected, and/or targeted to various deployment groups right from within the application. It will also accept SCUP catalogs as an update source.

There is a bit of manual work involved with this method as some of the applications that are frequently updated do not come with SCUP catalogs – primarily Java. Adobe provides SCUP catalogs for Reader and Flash. There is a 3rd party SCUP catalog that does contain Java and other open-source applications from PatchMyPC.net (note – I have not used this product), and there are other options such as Solarwinds Patch Manager and Shavlik.

Having one centralized patch source will make it easier to automate patch installation.

Automating Updates

Once there is a single source providing both Microsoft and 3rd-party updates, work on automating the installation of updates can begin. Automating the vSphere side of things will be done in PowerCLI, so the Windows Update solution should also use PowerShell. This leaves two options – POSHPaig, a hybrid solution that uses PowerShell to generate VBScript to run the updates, and the Windows Update PowerShell Module. POSHPaig is a good tool, but in my experience, it is more of a GUI product that works with multiple machines, while the Windows Update PowerShell Module is geared more toward scripted interactions.

The Windows Update PowerShell Module is a free module developed by Microsoft MVP Michal Gajda. It is a collection of local commands that will connect to the default update source – either WSUS or Microsoft Update – download all applicable updates for the system, and automatically install them. The module will need to be stored locally on the Linked-Clone Parent VMs. I use Group Policy Preferences to load the module onto each machine, as it ensures that the files will be placed in the same location and updates will propagate automatically.

One of the limits of the Windows Update API is that it cannot be called remotely, so the commands from this module will not work with PowerShell Remoting. There is another way to remotely call this script, though. The Invoke-VMScript cmdlet can be used to launch the script through VMware Tools. In order to use Invoke-VMScript, the VIX API will need to be installed and the script run in a 32-bit instance of PowerShell.
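A minimal sketch of that Invoke-VMScript call might look like the following. The guest credentials, VM name, and module path are all hypothetical, and the exact cmdlet name from the update module (Get-WUInstall here) should be checked against the module version you download:

```powershell
# Sketch: kicking off the Windows Update module inside the guest via
# VMware Tools. Assumes an existing Connect-VIServer session; guest
# credentials, VM name, and the module path are examples only.
$GuestScript = @"
Import-Module 'C:\Scripts\PSWindowsUpdate\PSWindowsUpdate.psd1'
Get-WUInstall -AcceptAll -AutoReboot
"@

Invoke-VMScript -VM (Get-VM -Name "Win7-Parent") `
    -ScriptText $GuestScript `
    -ScriptType PowerShell `
    -GuestUser "DOMAIN\PatchAdmin" `
    -GuestPassword "P@ssw0rd"
```

Because the update API must run locally, the script text executes entirely inside the guest; Invoke-VMScript just carries it over the VMware Tools channel and returns the console output.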

On the Next Episode…

I didn’t originally plan on breaking this into two parts, but this post is starting to run a little long. Rather than trying to cram everything into one post, the second part will cover the script and some of the PowerShell/PowerCLI gotchas that came up when testing it out.

Finding and Remediating Horizon View Desktops with the Wrong Snapshot/Image

I want you to imagine this scenario for a moment.  You schedule an overnight recompose operation for some of your linked-clone desktop pools.  You go to bed knowing that your desktops will have that shiny new image in the morning.  But when you wake up, you have a warning from your monitoring system or the vCheck script showing that the datastore your replica volumes run on is running out of space.  When you log in to check, you have a couple of extra replicas that shouldn’t be there.  There is no easy way to tell which pools, or desktops, are the issue because the replica names are GUIDs that do not relate to the pools.

I’ve been in this spot many times.  Sometimes, a recompose fails with a recoverable error.  Other times, a few desktops didn’t recompose because they were refreshed sometime between when I scheduled the recompose and when it actually kicked off.  And on some very rare occasions, there was a desktop that clung to its image like a security blanket, and it wouldn’t update until it was deleted.

When this happens, there is currently only one way to find out which pools or desktops are the problem.  In order to properly diagnose and correct this, an administrator would need to log into View Administrator, open each pool individually, go into Inventory, and select the View Composer details.  While it is a powerful management interface, View Administrator is not exactly the swiftest tool, and it can take a while for this information to load if you have a large number of pools and/or desktops per pool.
There had to be a better, and automated, way to handle this.  And there is.

View PowerCLI

The preferred method for automating Horizon View deployments is the View PowerCLI cmdlets.  These cmdlets are included on the View Connection Servers.  The two cmdlets that look like they would be useful for resolving this issue are Get-Pool and Get-DesktopVM.

However, this won’t quite work.  The output from the Get-DesktopVM cmdlet does not provide any information about the image or snapshot that the linked clones are using.  Even if we could find the desktops using View PowerCLI, there does not seem to be any way to remotely delete a linked-clone desktop using the View PowerCLI cmdlets, so this is not the solution.  Removing the linked-clone desktops directly from vCenter is not an option either – bad things happen when you delete a linked clone behind View’s back.

Horizon View and LDAP

There is another way, though.  Before that option can be explored, we have to understand how Horizon View stores its configuration data.  The main database for Horizon View is an LDAP directory.  This directory contains all of the information about the Horizon View environment, such as pools, desktop VMs, Active Directory users, and even ThinApp applications that are deployed through View Administrator.

Since this is an LDAP directory, we can directly query for the information that we need, and we’re not limited to PowerShell.  Any scripting or programming language that can talk to an LDAP directory can be used.  I use PowerShell with the Quest Active Directory Cmdlets because that is the language that I’m comfortable with.

If you’re not familiar with LDAP, there are some terms that might not make sense.  One term that I will be using is attribute.  An attribute is like a file, and this is where a data value is stored.  A group of attributes is called an object-class.  Object-classes are like folders, and there can be multiple levels of object-classes in the directory schema, or layout.
There are two object-classes that we are going to be concerned with.  Those two object-classes are:

  • pae-ServerPool –  This object-class defines the desktop pools.
  • pae-Server –  This object-class defines the desktops.

Each desktop pool has a pae-MemberDN attribute.  This attribute contains the list of distinguished names for the desktops in that pool, and it will assist our search for desktops.
The desktop pool and desktop VM records also contain a field that holds the vCenter snapshot ID.  This field, called pae-SVIVmSnapshotMOID in both the pae-ServerPool (desktop pool) and pae-Server (desktop VM) object-classes, is the one that will be used to compare each desktop VM to the standard that has been applied to the pool.
In order to find the desktops that aren’t using the correct image/snapshot, we need to compare the values in these two fields.  If they don’t match, the desktop will be added to an object called DesktopExceptions that will be used for later processing.

Once we’ve identified all the desktops that need to be remediated, we need to find a method for removing the desktop so View will recreate it.  There are no native commands to do this, but there is a method provided by Andre Leibovici on his blog myvirtualcloud.net.  This method allows us to set the desktop state remotely through the LDAP datastore.  For our desktops that are using the wrong snapshots, the state will need to be set to “Deleting.”  Once we do this, the desktops will be deleted and recreated automatically.

These steps are wrapped up in the script below.  Since this talks directly to the LDAP directory for View and avoids using the View PowerCLI cmdlets, this can be run from any server with PowerShell and the Quest AD Cmdlets.

This code is also available on my GitHub site.

       

            <#
.SYNOPSIS
   Get-DesktopExceptions is a script that will locate VMware View Linked Clones that are not using the correct snapshot/image.  The script also contains an option to remediate any non-compliant desktops by deleting them and letting View recreate them.
.DESCRIPTION
   Get-DesktopExceptions will look in the View LDAP datastore to find the snapshot IDs used by the desktops and the pool. It compares these values to find any desktops that do not match the pool.  If the -Remediate switch is selected, the script will then remove them.  In order to run this script, the Quest Active Directory Cmdlets will need to be installed.
.PARAMETER ConnectionServer
   The View Connection server that you want to run this script against.
.PARAMETER Remediate
   Delete desktops that do not have the correct snapshots
.PARAMETER EmailRcpt
   Person or group who should receive the email report
.PARAMETER SMTPServer
   Email server
.EXAMPLE
   Get-DesktopExceptions -ConnectionServer connection.domain.com -Remediate -EmailRcpt user@domain.com -SMTPServer smtp.domain.com
#>
param($ConnectionServer,[switch]$Remediate,$EmailRcpt,$SMTPServer)

Function Send-Email
{
 Param([string]$SMTPBody,[string]$SMTPSubject = "View Snapshot Compliance Report",[string]$SMTPTo,$SMTPServer)
Send-MailMessage -To $SMTPTo -Body $SMTPBody -Subject $SMTPSubject -SmtpServer $SMTPServer -From "Notifications_noreply@gbdioc.org" -BodyAsHtml
}

Function Get-Pools
{
param($ConnectionServer)

$PoolList = @()

$arrIncludedProperties = "cn,name,pae-DisplayName,pae-MemberDN,pae-SVIVmParentVM,pae-SVIVmSnapshot,pae-SVIVmSnapshotMOID".Split(",")
$pools = Get-QADObject -Service $ConnectionServer -DontUseDefaultIncludedProperties -IncludedProperties $arrIncludedProperties -LdapFilter "(objectClass=pae-ServerPool)" -SizeLimit 0 | Sort-Object "pae-DisplayName" | Select-Object cn, Name, "pae-DisplayName", "pae-SVIVmParentVM", "pae-SVIVmSnapshot", "pae-SVIVmSnapshotMOID", "pae-MemberDN"

ForEach($pool in $pools)
{
$obj = New-Object PSObject -Property @{
           "cn" = $pool.cn
           "name" = $pool.name
           "DisplayName" = $pool."pae-DisplayName"
           "MemberDN" = $pool."pae-MemberDN"
           "SVIVmParentVM" = $pool."pae-SVIVmParentVM"
           "SVIVmSnapshot" = $pool."pae-SVIVmSnapshot"
           "SVIVmSnapshotMOID" = $pool."pae-SVIVmSnapshotMOID"
    }
$PoolList += $obj
}
Return $PoolList
}

Function Get-Desktop
{
param($MemberDN, $ConnectionServer)

$arrIncludedProperties = "cn,name,pae-DisplayName,pae-MemberDN,pae-SVIVmParentVM,pae-SVIVmSnapshot,pae-SVIVmSnapshotMOID".Split(",")
$Desktop = Get-QADObject -Service $ConnectionServer -DontUseDefaultIncludedProperties -IncludedProperties $arrIncludedProperties -LdapFilter "(&(objectClass=pae-Server)(distinguishedName=$MemberDN))" -SizeLimit 0 | Sort-Object "pae-DisplayName" | Select-Object Name, "pae-DisplayName", "pae-SVIVmParentVM" , "pae-SVIVmSnapshot", "pae-SVIVmSnapshotMOID"

Return $Desktop
}

$DesktopExceptions = @()
$pools = Get-Pools -ConnectionServer $ConnectionServer

ForEach($pool in $pools)
{
$MemberDNs = $pool.memberdn
 ForEach($MemberDN in $MemberDNs)
 {
 $Desktop = Get-Desktop -MemberDN $MemberDN -ConnectionServer $ConnectionServer

 If($Desktop."pae-SVIVmSnapshotMOID" -ne $pool.SVIVmSnapshotMOID)
  {
  $obj = New-Object PSObject -Property @{
            "PoolName" = $pool.DisplayName
            "DisplayName" = $Desktop."pae-DisplayName"
            "PoolSnapshot" = $pool.SVIVmSnapshot
            "PoolSVIVmSnapshotMOID" = $pool.SVIVmSnapshotMOID
            "DesktopSVIVmSnapshot" = $Desktop."pae-SVIVmSnapshot"
            "DesktopSVIVmSnapshotMOID" = $Desktop."pae-SVIVmSnapshotMOID"
            "DesktopDN" = $MemberDN
    }
$DesktopExceptions += $obj
  }
 }

}

If($DesktopExceptions.Count -eq 0)
 {
 $SMTPBody = "All desktops are currently using the correct snapshots."
 }
Else
 {
 $SMTPBody = $DesktopExceptions | Select-Object DisplayName,PoolName,PoolSnapshot,DesktopSVIVmSnapshot | ConvertTo-HTML
 }

Send-Email -SMTPBody $SMTPBody -SMTPTo $EmailRcpt

If($Remediate -eq $true)
{
 ForEach($Exception in $DesktopExceptions)
 {
  Set-QADObject -Identity $Exception.DesktopDN -Service $ConnectionServer -IncludedProperties "pae-vmstate" -ObjectAttributes @{"pae-vmstate"="DELETING"}
 }

}

       

Updated Script – Start-Recompose.ps1

I will be giving my first VMUG presentation on Thursday, September 26th, at the Wisconsin VMUG meeting in Appleton, WI.  The topic of my presentation will be three scripts that we use in our VMware View environment to automate routine and time-consuming tasks.

One of the scripts that I will be including in my presentation is the Start-Recompose script that I posted a few weeks ago.  I’ve made some updates to address a few things that I’ve always wanted to improve.  I’ll be posting about the other two scripts this week.

These improvements are:

  • Getting pool information directly from the View LDAP datastore instead of using the Get-Pool cmdlet
  • Checking for space on the Replica volume before scheduling the Recompose operation
  • Adding email and event logging alerts
  • The ability to recompose just one pool if multiple pools share the same base image.

The updated script will still need to be run from the View Connection Server as it requires the View PowerCLI cmdlets.  The vSphere PowerCLI cmdlets and the Quest AD cmdlets will also need to be available.  A future update will probably remove the need for the Quest cmdlets, but I didn’t feel like reinventing the wheel at the time.
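The replica free-space check from the improvements list can be sketched in PowerCLI along these lines. The datastore name and the required-space threshold are hypothetical, and an existing Connect-VIServer session is assumed:

```powershell
# Sketch: verify the replica datastore has enough free space before
# scheduling a recompose. Datastore name and threshold are examples only;
# the threshold should roughly cover one full clone of the Parent VM.
$ReplicaDatastore = Get-Datastore -Name "Replica-Volume-01"
$RequiredSpaceGB = 50

if ($ReplicaDatastore.FreeSpaceGB -lt $RequiredSpaceGB) {
    # Log a warning instead of scheduling a recompose that would fail overnight
    Write-EventLog -LogName Application -Source "VMware View" -EntryType Warning `
        -EventID 9001 -Message "Not enough free space on $($ReplicaDatastore.Name) to recompose."
}
else {
    # ...schedule the recompose as in the original Start-Recompose script...
}
```

Failing fast here is the point of the improvement: a recompose that runs out of replica space overnight leaves desktops in a halted state, while a skipped run only costs one patch cycle.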

The script can be downloaded from github here.

Three Years to #VCDX

“Two roads diverged in a wood, and I— 
I took the one less traveled by, 
And that has made all the difference.”
Robert Frost, The Road Not Taken

“Life is a journey, not a destination.”
Ralph Waldo Emerson

It’s good to have goals.  It’s good to have goals that seem like they might just be a little out of your reach as they force you to challenge yourself and grow.

On Thursday, I completed a set of personal goals that I had set for myself at the beginning of the year – achieving my VMware Certified Professional certifications on the Data Center and Desktop products.  I passed my first VCP exam in August at VMworld, and I passed the VCP Desktop exam Thursday morning.

This was actually the second time I took the Desktop exam.  The first time was on the last day of VMworld.  I hadn’t actually planned on taking the test, but I had some time to kill before my flight.  I barely missed a passing score, which I thought was pretty good considering that I had not prepared for the exam in any way and was rushing through it at the end.  After a month of reviewing Jason Langer’s VCP5-DT study guide, I was ready to sit for the test again.

So now that I’ve achieved these two certifications, I’ve been trying to decide what’s next. It didn’t take long to set a new goal –  to become a VMware Certified Design Expert, or VCDX, within three years.

For those who are not familiar with VMware’s certification tracks, there are four levels.  Although the analogy might not be 100% accurate, I’m going to relate these levels to the different types of college degrees that one can get.  Those four levels are:

  • VMware Certified Associate (VCA) –  a new entry-level certification track.  Does not require any classroom training.  Think of it as an associate’s degree.
  • VMware Certified Professional (VCP) –  The primary VMware certification.  Requires a 5-day instructor-led classroom training and a proctored exam.  This certification is required to attempt any higher-level certifications.  Think of it as a Bachelor’s degree.
  • VMware Certified Advanced Professional (VCAP) –  The level above the VCP, and a VCP is required to try for this certification.  Does not require any additional classroom training, but it does require in-depth knowledge of the products you are being certified on.  Under the current programs, there are two VCAPs that you can get for each product track – Design and Administration.  Design focuses on the planning and design steps of implementing the product, and Administration focuses on actually running it once it is implemented.  This is equivalent to a Master’s degree.
  • VMware Certified Design Expert (VCDX) –  This is the PhD of VMware certifications.  In order to even attempt this certification, you need to hold both the Design and the Administration VCAPs for the track you’re attempting.  Anyone aspiring to this level needs to submit a design for a VMware environment, and if that submission is accepted, they will need to defend it in front of a panel of VCDX holders.  Some people have spent over 500 hours on their designs or gone in front of the panel multiple times.  Like I said…it’s the PhD of VMware certifications.  (For the record, the two certifications that come closest are the soon-to-be-defunct Microsoft Certified Architect, a very expensive certification that required learning from the programmers of the Microsoft systems followed by a panel defense, and the Cisco Certified Architect, which requires a CCDE and a panel defense.)  There are only around 125 VCDXs currently.

My goal for achieving this, as I said above, is three years.  This seemed like a reasonable goal because:

  1. I have to take two advanced certifications before I can even attempt to submit a design for the VCDX.  Depending on what products are released in the next couple of years, I will also have to recertify to keep current.
  2. I want to get a lot more real-life experience, especially in the design area.
  3. The design work for the submission will take a significant chunk of my time.
  4. Baby Massey #2 is slated to arrive in early April. 

I have put together a plan that will get me into position to meet all of the prerequisites of the VCDX within a year, and I’m starting to build up my home lab so I can really dive into this.

Now, I may never reach this goal.  This is a very difficult road to go down.  But there is no harm in not making it to this destination as this road is also filled with the rewards of knowledge and growth.

VMware View Pool Recompose PowerCLI Script

Edit – I’ve updated this script recently.  The updated version includes some additional features such as checking to make sure there is enough space on the replica volume to for a successful clone.  You can read more about it here: http://www.seanmassey.net/2013/09/updated-script-start-recomposeps1.html

One of the many hats that I wear at $work is the administration of our virtual desktop environment that is built on VMware View.  Although the specific details of our virtual desktop environment may be covered in another post, I will provide a few details here for background.  Our View environment has about 240 users and 200 desktops, although we only have about 150 people logged in at a given time.  It is almost 100% non-persistent; the seven desktops that are set up as persistent are only persistent due to an application licensing issue, and they refresh on logout like our non-persistent desktops.
Two of the primary tasks involved with that are managing snapshots and scheduling pool recompose operations as part of our patching cycle.  I wish I could say that it was a set monthly cycle, but a certain required plugin…*cough* Java *cough*…and one application that we use for web conferencing seem to break and require an update every time there is a slight update to Adobe Air.  There is also the occasional priority request from a department that falls outside of the normal update cycle, such as an application that needs to be added on short notice.
Our 200 desktops are grouped into sixteen desktop pools, and there are seven Parent VMs that are used as the base images for these sixteen pools.  That seems like a lot given the total number of desktops that we have, but there are business reasons for all of these including restrictions on remote access, department applications that don’t play nicely with ThinApp, and restrictions on the number of people from certain departments that can be logged in at one time. 
Suffice it to say that with sixteen pools to schedule recompose actions for as part of the monthly patch cycle, it can get rather tedious and time-consuming to do it through VMware View Administrator.  That is where PowerShell comes in.  View ships with a set of PowerCLI cmdlets, and these can be run from any connection broker in your environment.  You can execute the script remotely, but the script file will need to be placed on your View Connection Broker.
I currently schedule this script to run using the Community Edition of the JAMS Job Scheduler, but I will be looking at using vCenter Orchestrator in the future to tie in automation of taking and removing snapshots.
The original inspiration for, and bits of, this script were written by Greg Carriger.  You can view his work on his blog.  My version does not take or remove any snapshots, and it will work with multiple pools that are based on the same Parent VM.  The full script is available to download here.
Prerequisites:
PowerCLI 5.1 or greater installed on the Connection Broker
View PowerCLI snapin
PowerShell 2.0 or greater
View requires the full snapshot path in order to update the pool and do a recompose, so one of the first things that needs to be done is build the snapshot path.  This can be a problem if you’re not very good at cleaning up old snapshots (like I am…although I have a script for that now too).  That issue can be solved with the code below.
Function Build-SnapshotPath
{
	Param($ParentVM)

	## Create the snapshot path
	$Snapshots = Get-Snapshot -VM $ParentVM
	$SnapshotPath = ""

	ForEach($Snapshot in $Snapshots)
	{
		$SnapshotName = $Snapshot.Name
		$SnapshotPath = $SnapshotPath + "/" + $SnapshotName
	}

	Return $SnapshotPath
}
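As a quick sanity check, the function can be exercised on its own. Note that it assumes a single, linear snapshot chain on the Parent VM, since it simply concatenates every snapshot Get-Snapshot returns; the VM name below is hypothetical:

```powershell
# Example usage of Build-SnapshotPath (VM name is hypothetical).
# For a parent VM with snapshots "Base" and "Patched-2013-09", this
# would produce a path of the form "/Base/Patched-2013-09".
$SnapshotPath = Build-SnapshotPath -ParentVM "Win7-Parent"
Write-Host "Using snapshot path: $SnapshotPath"
```

If the Parent VM ever has branching snapshot trees, the concatenation approach would build an invalid path, which is one more reason to keep old snapshots cleaned up.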

Once you have your snapshot path constructed, you need to identify the pools that are based on that Parent VM.
$Pools = Get-Pool | Where {$_.ParentVMPath -like "*$ParentVM*"}
A simple ForEach loop can be used to iterate through and update your list of pools once you know which pools you need to update.  This section of code will update the default snapshot used for desktops in the pool, schedule the recompose operation, and write out to the event log that the operation was scheduled.
Stop on Error is set to false because this script is intended to be run overnight, and View can, and will, stop a recompose operation on the slightest error.  This can leave desktops stuck in a halted state and inaccessible when staff come in to work the following morning.
ForEach ($Pool in $Pools)
{
	$PoolName = $Pool.Pool_ID
	$ParentVMPath = $Pool.ParentVMPath

	# Update the base image for the pool
	Update-AutomaticLinkedClonePool -pool_id $PoolName -parentVMPath $ParentVMPath -parentSnapshotPath $SnapshotPath

	## Recompose
	## Stop on Error is set to false. This will allow the pool to continue recompose operations after hours if a single VM encounters an error rather than leaving the recompose tasks in a halted state.
	Get-DesktopVM -pool_id $PoolName | Send-LinkedCloneRecompose -schedule $Time -parentVMPath $ParentVMPath -parentSnapshotPath $SnapshotPath -forceLogoff:$true -stopOnError:$false
	Write-EventLog -LogName Application -Source "VMware View" -EntryType Information -EventID 9000 -Message "Pool $PoolName will start to recompose at $Time using $SnapshotName."
}

Lessons From My First Major Conference

Last week, I attended my first major IT conference in San Francisco.  I learned a few lessons from the mistakes that I made when planning my trip and scheduling my sessions that I need to keep in mind for next year.

  1. Arrive early –  When booking my flights to San Francisco, I ended up on the last flight out of Appleton on Sunday with a scheduled arrival time of 11:00 PM San Francisco time.  That flight was canceled, and I was fortunate enough to get a flight directly out of O’Hare on Monday morning so that I could arrive before my first breakout session began.  I was landing just as the general session, and the latest product announcements, were beginning.  Aside from missing out on the general session on Monday morning, I also missed out on meeting with vendors in the Solutions Exchange and some networking events like vBeers on Sunday evening. 
  2. Don’t Pack Too Much –  I packed for my trip like it was a normal business trip.  That was a mistake since I flew with carry-on sized luggage.  I didn’t realize how many vendors would be giving out t-shirts, and I had to literally cram everything into my suitcase just to get it back home.  I would have been able to get away with packing less and using some of those shirts during the conference.
  3. Don’t Schedule Yourself into a Corner –  There is a lot to do and see at the conference, and there are sessions covering almost anything you want to learn.  Don’t schedule yourself into a corner by booking yourself solid.  You need time to get lunch, work the vendor floor, take part in community-generated content like vBrownBag sessions, or even follow up on emails from $work.  The schedule builder tools are nice for laying out your day, but don’t be afraid to switch things up.  Also, you need to consider travel time –  walking from Moscone West to the Marriott is a lot longer than it looks on the map.
  4. Know What You Want from Vendors – The Solutions Exchange is HUGE. It took up most of the exhibition space in Moscone South.  It was extremely overwhelming the first time I walked through it, and I didn’t know where to begin.  Having an idea of what specific needs you want to address or what would interest your co-workers/colleagues/friends/family will help you narrow down which booths to stop into.  Obviously, there isn’t enough time to stop in to all of them, so you have to be a little discerning and hit up the vendors that meet your needs or interests first before exploring. 
  5. Corollary to Last Point –  Don’t stop by a vendor (or let them scan your badge) just because they’re drawing for a cool prize.  The last thing you need is another sales call for some product that you know nothing about.  (Some vendors…cough…Veeam…cough…do offer some very interesting contests with great prizes that involve attending events or technical sessions to learn more about their product.  It’s creative and it actually teaches you about the product.)
  6. Make Time to Spend Time on the Vendor Floor – This kind of goes without saying.  Because there is so much going on at the conference, you need to make sure you schedule time to talk to the vendors that are on your list.  Work that time into your schedule, and make sure you give yourself enough time to talk as a good conversation can turn into a 20 minute demonstration.
  7. Group Discussions are Great for Networking –  Group discussions are a great opportunity to sit down with VMware engineers and other users of a particular product/service and ask questions or see how others are addressing issues in their environment.  They’re more personal than the general breakout sessions.  If I get the chance in the future, I plan on attending more of these types of sessions at future conferences.
  8. Take Time to Enjoy the Local Culture –  I’ll be honest when I say that I didn’t mind the food at the conference.  It was much better than I expected for a kitchen that had to serve over 22,000 people.  But there are a lot of good places around the Moscone Center that offer good food for a reasonable price.  It’s also worth making some time in the evening to explore the city and check out the sights like Fisherman’s Wharf.

 

Two Potential PowerCLI Flings and One Cool New PowerCLI Command

While I was at VMworld this week, I attended two great sessions on PowerCLI put on by Alan Renouf (Twitter).  Alan is one of the top two PowerCLI experts and a VMware Technical Marketing specialist focused on PowerCLI and automation.  The two sessions that I attended were:

VSVC4944 – PowerCLI Best Practices – A Deep Dive –  A presentation with Luc Dekens (Twitter).  Luc is right up there with Alan in the top two of PowerCLI experts.

VSVC5931 – PowerCLI What’s New? Administrating with the CLI Was Never Easier

Before I talk about the two potential Flings, I want to mention a new PowerCLI command that is coming with vSphere 5.5 –  Open-VMConsole.  This command does exactly what it says on the tin –  it opens the VM console in a web browser.  This feature allows administrators direct access to a VM console without having to open either the Web Client or the C# client.  Alan demonstrated one interesting application of this cmdlet during his talk –  he built a simple PowerShell form using PrimalForms that could be distributed to administrators to allow them to open the console without having to give them access to either client.  My environment is fairly small with few administrators, so I don’t see too much of an application for this where I’m at.  But there are huge potential uses for this in limiting access to the console of specific VMs without also giving application owners/administrators access to the vSphere client.

That’s not the only new addition to PowerCLI in 5.5.  There are also new cmdlets for working with Inventory Tags and expanded cmdlets for working with Virtual Distributed Switches.
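As a taste of the new tagging cmdlets, a minimal sketch might look like this. The category, tag, and VM names are hypothetical, and the exact cmdlet surface should be checked against the released PowerCLI 5.5 documentation:

```powershell
# Sketch: using the new PowerCLI 5.5 tagging cmdlets to label and
# retrieve inventory objects. Names are examples; assumes an existing
# Connect-VIServer session.
$Category = New-TagCategory -Name "Environment" -Cardinality Single
$Tag = New-Tag -Name "VDI-Parent" -Category $Category

# Tag a Parent VM, then retrieve every VM carrying that tag
New-TagAssignment -Tag $Tag -Entity (Get-VM -Name "Win7-Parent")
Get-VM -Tag $Tag
```

Unlike folders, a VM can carry tags from several categories at once, which makes tags a better fit for cross-cutting labels like patch cycle or owning department.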

Two exciting new potential VMware Flings/features were demonstrated during these sessions that take PowerShell and PowerCLI to the next level –  WebCommander, a web browser-based method of launching PowerShell scripts, and a PowerCLI window that can be launched from within the vSphere Web Client, unofficially named “PowerWeb” by someone in the audience of the session I attended.  Both of these options will allow administrators who run Linux or OSX on their desktop to utilize PowerShell for their day-to-day administration.

“PowerWeb” or the PowerCLI vCenter Web Client Option –  This fling, which hopefully will make it into a future release of vCenter as a supported feature, adds links for a PowerCLI console and script window to the vCenter Web Client.  Administrators can execute PowerShell and PowerCLI scripts from directly within their web browser.  The current version only appears to work with the Windows version of vCenter, but it should be possible in the future to redirect the PowerCLI interface to a Windows Server when running the vCenter Appliance.

WebCommander –  WebCommander is a web portal for facilitating automation by presenting PowerShell and PowerCLI scripts to end-users and Administrators.  The scripts are run on a local Windows server and presented to the web via PHP.  This fling will facilitate self-service options by allowing Administrators to publish out PowerShell scripts so that they can be easily executed.

I’m most excited for seeing the WebCommander fling as I have an immediate use for something like this in my $work environment as we shuffle around help desk operations.