Updated Script – Start-Recompose.ps1

I will be giving my first VMUG presentation on Thursday, September 26th, at the Wisconsin VMUG meeting in Appleton, WI.  The topic of my presentation will be three scripts that we use in our VMware View environment to automate routine and time-consuming tasks.

One of the scripts that I will be including in my presentation is the Start-Recompose script that I posted a few weeks ago.  I’ve made some updates to address a few things that I’ve always wanted to improve.  I’ll be posting about the other two scripts this week.

These improvements are:

  • Getting pool information directly from the View LDAP datastore instead of using the Get-Pool cmdlet
  • Checking for space on the Replica volume before scheduling the Recompose operation
  • Adding email and event logging alerts
  • The ability to recompose just one pool if multiple pools share the same base image.

The updated script will still need to be run from the View Connection Server as it requires the View PowerCLI cmdlets.  The vSphere PowerCLI cmdlets and the Quest AD cmdlets will also need to be available.  A future update will probably remove the need for the Quest cmdlets, but I didn’t feel like reinventing the wheel at the time.
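The replica volume check in the second bullet boils down to comparing the replica datastore’s free space against the space a new replica will need.  Here is a minimal sketch of that logic using standard vSphere PowerCLI cmdlets – the datastore and VM names are illustrative only, and the real script derives these values from the pool settings:

```powershell
# Hypothetical names for illustration only
$ReplicaDatastore = Get-Datastore -Name "Replica-DS01"
$ParentVM = Get-VM -Name "Win7-Gold"

# Approximate the space a new replica needs using the parent VM's provisioned size
$RequiredGB = $ParentVM.ProvisionedSpaceGB

If ($ReplicaDatastore.FreeSpaceGB -lt $RequiredGB)
{
	# Alert and bail out rather than scheduling a recompose that would fill the volume
	Write-EventLog -LogName Application -Source "VMware View" -EntryType Error -EventID 9001 -Message "Not enough free space on $($ReplicaDatastore.Name) to recompose."
	Return
}
```

Using the parent VM’s provisioned size is a conservative estimate; the replica is thin-provisioned, so the actual space consumed will usually be less.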

The script can be downloaded from GitHub here.

Three Years to #VCDX

“Two roads diverged in a wood, and I— 
I took the one less traveled by, 
And that has made all the difference.”
Robert Frost, The Road Not Taken

“Life is a journey, not a destination.”
Ralph Waldo Emerson

It’s good to have goals.  It’s good to have goals that seem like they might just be a little out of your reach as they force you to challenge yourself and grow.

On Thursday, I completed a set of personal goals that I had set for myself at the beginning of the year – achieve my VMware Certified Professional certifications on the Data Center and Desktop products.  I passed my first VCP exam in August at VMworld.  I passed the VCP Desktop exam Thursday morning.

This was actually the second time I took the Desktop exam.  The first time I took it was on the last day of VMworld.  I hadn’t actually planned on taking the test, but I had some time to kill before my flight.  I barely missed a passing score on that exam, which I thought was pretty good considering that I had not prepared for it in any way and was rushing through it at the end.  After a month of reviewing Jason Langer’s VCP5-DT study guide, I was ready to sit for this test again.

So now that I’ve achieved these two certifications, I’ve been trying to decide what’s next. It didn’t take long to set a new goal –  to become a VMware Certified Design Expert, or VCDX, within three years.

For those who are not familiar with VMware’s certification tracks, there are four levels.  Although the analogy might not be 100% accurate, I’m going to relate these levels to the different types of college degrees that one can get.  Those four levels are:

  • VMware Certified Associate (VCA) –  a new entry-level certification track.  Does not require any classroom training.  Think of it as an associate’s degree.
  • VMware Certified Professional (VCP) –  The primary VMware certification.  Requires a 5-day instructor-led classroom training and a proctored exam.  This certification is required to attempt any higher-level certifications.  Think of it as a Bachelor’s degree.
  • VMware Certified Advanced Professional (VCAP) –  The level above the VCP, and a VCP is required to try for this certification.  Does not require any additional classroom training, but it does require in-depth knowledge of the products you are being certified on.  Under the current programs, there are two VCAPs that you can get for each product track – Design and Administration. Design focuses on the planning and design steps of implementing the product, and Administration focuses on actually running it once it is implemented.  This is equivalent to a Master’s Degree. 
  • VMware Certified Design Expert (VCDX) –  This is the PhD of VMware certifications.  In order to even attempt this certification, you need to hold both the Design and the Administration VCAPs for the VCDX you’re attempting.  Anyone aspiring to this level needs to submit a design for a VMware environment.  If that design submission is accepted, they will need to defend it in front of a panel of VCDX holders.  Some people have spent over 500 hours on their designs or gone in front of the panel multiple times.  Like I said…it’s the PhD of VMware certifications.  (For the record, the two certifications that come closest to this are the soon-to-be-defunct Microsoft Certified Architect, a very expensive certification that required learning from the programmers of the Microsoft system followed by a panel defense, and the Cisco Certified Architect, which requires a CCDE and a panel defense.)  There are only around 125 VCDXs currently.

My goal for achieving this, as I said above, is three years.  This seemed like a reasonable goal because:

  1. I have to take two advanced certifications before I can even attempt to submit a design for the VCDX.  Depending on what products are released in the next couple of years, I will have to recertify to keep current.
  2. I want to get a lot more real-life experience, especially in the design area.
  3. The design work for the submission will take a significant chunk of my time.
  4. Baby Massey #2 is slated to arrive in early April. 

I have put together a plan that will get me into position to meet all of the prerequisites of the VCDX within a year, and I’m starting to build up my home lab so I can really dive into this.

Now, I may never reach this goal.  This is a very difficult road to go down.  But there is no harm in not making it to this destination as this road is also filled with the rewards of knowledge and growth.

VMware View Pool Recompose PowerCLI Script

Edit – I’ve updated this script recently.  The updated version includes some additional features such as checking to make sure there is enough space on the replica volume for a successful clone.  You can read more about it here: http://www.seanmassey.net/2013/09/updated-script-start-recomposeps1.html

One of the many hats that I wear at $work is the administration of our virtual desktop environment, which is built on VMware View.  Although the specific details of our virtual desktop environment may be covered in another post, I will provide a few details here for background.  Our View environment has about 240 users and 200 desktops, although we only have about 150 people logged in at a given time.  It is almost 100% non-persistent; the seven desktops that are set up as persistent are only that way due to an application licensing issue, and they refresh on logout like our non-persistent desktops.
Two of the primary tasks involved with that are managing snapshots and scheduling pool recompose operations as part of our patching cycle.  I wish I could say that it was a set monthly cycle, but a certain required plugin…*cough* Java *cough*…and one application that we use for web conferencing seem to break and require an update every time there is a slight update to Adobe Air.  There is also the occasional priority request from a department that falls outside of the normal update cycle, such as an application that needs to be added on short notice.
Our 200 desktops are grouped into sixteen desktop pools, and there are seven Parent VMs that are used as the base images for these sixteen pools.  That seems like a lot given the total number of desktops that we have, but there are business reasons for all of these including restrictions on remote access, department applications that don’t play nicely with ThinApp, and restrictions on the number of people from certain departments that can be logged in at one time. 
Suffice it to say that with sixteen pools to schedule recompose actions for as part of the monthly patch cycle, it can get rather tedious and time-consuming to do it through the VMware View Administrator.  That is where PowerShell comes in.  View ships with a set of PowerCLI cmdlets, and these can be run from any connection broker in your environment.  You can execute the script remotely, but the script file will need to be placed on your View Connection Broker.
I currently schedule this script to run using the Community Edition of the JAMS Job Scheduler, but I will be looking at using vCenter Orchestrator in the future to tie in automation of taking and removing snapshots.
The inspiration for, and bits of, this script come from Greg Carriger.  You can view his work on his blog.  My version does not take or remove any snapshots, and it will work with multiple pools that are based on the same Parent VM.  The full script is available to download here.
Prerequisites:

  • PowerCLI 5.1 or greater installed on the Connection Broker
  • View PowerCLI snapin
  • PowerShell 2.0 or greater
View requires the full snapshot path in order to update the pool and do a recompose, so one of the first things that needs to be done is build the snapshot path.  This can be a problem if you’re not very good at cleaning up old snapshots (like I am…although I have a script for that now too).  That issue can be solved with the code below.
Function Build-SnapshotPath
{
	Param($ParentVM)

	## Create Snapshot Path
	$Snapshots = Get-Snapshot -VM $ParentVM
	$SnapshotPath = ""

	ForEach($Snapshot in $Snapshots)
	{
		$SnapshotName = $Snapshot.name
		$SnapshotPath = $SnapshotPath + "/" + $SnapshotName
	}

	Return $SnapshotPath
}
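With the function defined, building the path is a one-liner.  A quick usage sketch, assuming a parent VM named “Win7-Gold” (the name is illustrative):

```powershell
# Builds a path like "/Base/Monthly Patches" from the VM's snapshot chain
$SnapshotPath = Build-SnapshotPath -ParentVM "Win7-Gold"
```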

Once you have your snapshot path constructed, you need to identify the pools that are based on the ParentVM.

$Pools = Get-Pool | Where {$_.ParentVMPath -like "*$ParentVM*"}
A simple foreach loop can be used to iterate through and update your list of pools once you know which pools you need to update.  This section of code will update the default snapshot used for desktops in the pool, schedule the recompose operation, and write out to the event log that the operation was scheduled. 
Stop on Error is set to false as this script is intended to be run overnight, and View can, and will, stop a recompose operation over the slightest error.  This can leave desktops stuck in a halted state and inaccessible when staff come in to work the following morning.
ForEach ($Pool in $Pools)
{
	$PoolName = $Pool.Pool_ID
	$ParentVMPath = $Pool.ParentVMPath

	# Update Base Image for Pool
	Update-AutomaticLinkedClonePool -pool_id $PoolName -parentVMPath $ParentVMPath -parentSnapshotPath $SnapshotPath

	## Recompose
	## Stop on Error set to false. This will allow the pool to continue recompose operations
	## after hours if a single VM encounters an error rather than leaving the recompose
	## tasks in a halted state.
	Get-DesktopVM -pool_id $PoolName | Send-LinkedCloneRecompose -schedule $Time -parentVMPath $ParentVMPath -parentSnapshotPath $SnapshotPath -forceLogoff:$true -stopOnError:$false

	Write-EventLog -LogName Application -Source "VMware View" -EntryType Information -EventID 9000 -Message "Pool $PoolName will start to recompose at $Time using $SnapshotName."
}

Lessons From My First Major Conference

Last week, I attended my first major IT conference in San Francisco.  I learned a few lessons from the mistakes that I made when planning my trip and scheduling my sessions that I need to keep in mind for next year.

  1. Arrive early –  When booking my flights to San Francisco, I ended up on the last flight out of Appleton on Sunday with a scheduled arrival time of 11:00 PM San Francisco time.  That flight was canceled, and I was fortunate enough to get a flight directly out of O’Hare on Monday morning so that I could arrive before my first breakout session began.  I was landing just as the general session, and the latest product announcements, were beginning.  Aside from missing out on the general session on Monday morning, I also missed out on meeting with vendors in the Solutions Exchange and some networking events like vBeers on Sunday evening. 
  2. Don’t Pack Too Much –  I packed for my trip like it was a normal business trip.  That was a mistake since I flew with carry-on sized luggage.  I didn’t realize how many vendors would be giving out t-shirts, and I had to literally cram everything into my suitcase just to get it back home.  I would have been able to get away with packing less and using some of those shirts during the conference.
  3. Don’t Schedule Yourself into a Corner –  There is a lot to do and see at the conference, and there are sessions covering almost anything you want to learn.  Don’t schedule yourself into a corner by booking yourself solid.  You need time to get lunch, work the vendor floor, take part in community-generated content like vBrownbag sessions, or even follow up on emails from $work.  The schedule builder tools are nice for laying out your day, but don’t be afraid to switch things up.  Also, you need to consider travel time –  walking from Moscone West to the Marriott takes a lot longer than it looks on the map.
  4. Know What You Want from Vendors – The Solutions Exchange is HUGE. It took up most of the exhibition space in Moscone South.  It was extremely overwhelming the first time I walked through it, and I didn’t know where to begin.  Having an idea of what specific needs you want to address or what would interest your co-workers/colleagues/friends/family will help you narrow down which booths to stop into.  Obviously, there isn’t enough time to stop in to all of them, so you have to be a little discerning and hit up the vendors that meet your needs or interests first before exploring. 
  5. Corollary to Last Point –  Don’t stop by a vendor (or let them scan your badge) just because they’re drawing for a cool prize.  The last thing you need is another sales call for some product that you know nothing about.  (Some vendors…cough…Veeam…cough…do offer some very interesting contests with great prizes that involve attending events or technical sessions to learn more about their product.  It’s creative and it actually teaches you about the product.)
  6. Make Time to Spend Time on the Vendor Floor – This kind of goes without saying.  Because there is so much going on at the conference, you need to make sure you schedule time to talk to the vendors that are on your list.  Work that time into your schedule, and make sure you give yourself enough time to talk as a good conversation can turn into a 20 minute demonstration.
  7. Group Discussions are Great for Networking –  Group discussions are a great opportunity to sit down with VMware engineers and other users of a particular product/service and ask questions or see how others are addressing issues in their environment.  They’re more personal than the general breakout sessions.  If I get the chance in the future, I plan on attending more of these types of sessions at future conferences.
  8. Take Time to Enjoy the Local Culture –  I’ll be honest when I say that I didn’t mind the food at the conference.  It was much better than I expected for a kitchen that had to serve over 22,000 people.  But there are a lot of good places around the Moscone Center that offer good food for a reasonable price.  It’s also worth making some time in the evening to explore the city and check out the sights like Fisherman’s Wharf.


Two Potential PowerCLI Flings and One Cool New PowerCLI Command

While I was at VMworld this week, I attended two great sessions on PowerCLI put on by Alan Renouf (Twitter).  Alan is one of the top two PowerCLI experts and a VMware Technical Marketing specialist on PowerCLI and automation.  The two sessions that I attended were:

VSVC4944 – PowerCLI Best Practices – A Deep Dive –  A presentation with Luc Dekens (Twitter).  Luc is right up there with Alan in the top two of PowerCLI experts.

VSVC5931 – PowerCLI What’s New? Administrating with the CLI Was Never Easier

Before I talk about the two potential Flings, I want to mention a new PowerCLI command that is coming with vSphere 5.5 –  Open-VMConsole.  This command does exactly what it says on the tin –  it opens up the VM console in a web browser.  This feature allows administrators direct access to a VM console without having to open either the Web Client or the C# client.  Alan demonstrated one interesting application of this cmdlet during his talk –  he built a simple PowerShell form using PrimalForms that could be distributed to administrators to allow them to open the console without having to give them access to either client.  My environment is fairly small with few administrators, so I don’t see too much of an application for this where I’m at.  But there are huge potential uses for this in limiting access to the console of specific VMs without also giving application owners/administrators access to the vSphere client.
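Based on the demo, usage should be as simple as piping a VM object to the cmdlet.  This is a sketch using the cmdlet name as previewed in the session – since vSphere 5.5 hadn’t shipped at the time of the demo, the final name and parameters could differ, and the VM name here is illustrative:

```powershell
# Open the browser-based console for a single VM
Get-VM -Name "AppServer01" | Open-VMConsole
```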

That’s not the only new addition to PowerCLI in 5.5.  There are also new cmdlets for working with Inventory Tags and expanded cmdlets for working with Virtual Distributed Switches.

Two exciting new potential VMware Flings/features were demonstrated during these sessions that take PowerShell and PowerCLI to the next level –  WebCommander, a web browser-based method of launching PowerShell scripts, and a PowerCLI window that can be launched from within the vSphere Web Client, which was unofficially named “PowerWeb” by someone in the audience of the session I attended.  Both of these options will allow administrators who run Linux or OS X on their desktops to utilize PowerShell for their day-to-day administration.

“PowerWeb” or the PowerCLI vCenter Web Client Option –  This fling, which hopefully will make it into a future release of vCenter as a supported feature, adds links for a PowerCLI console and script window to the vCenter Web Client.  Administrators can execute PowerShell and PowerCLI scripts from directly within their web browser.  The current version only appears to work with the Windows version of vCenter, but it should be possible in the future to redirect the PowerCLI interface to a Windows Server when running the vCenter Appliance.

WebCommander –  WebCommander is a web portal for facilitating automation by presenting PowerShell and PowerCLI scripts to end-users and Administrators.  The scripts are run on a local Windows server and presented to the web via PHP.  This fling will facilitate self-service options by allowing Administrators to publish out PowerShell scripts so that they can be easily executed.

I’m most excited for seeing the WebCommander fling as I have an immediate use for something like this in my $work environment as we shuffle around help desk operations.

What Does the Software-Defined Data Center Suite Mean for Managed Services Providers?

If one thing has been made clear by the general sessions at this year’s VMworld, it’s that the cloud is now here to stay, and VMware and other vendors are providing tools to manage the cloud, wherever it might reside, and the machines that run on it.

The second general session of this year’s VMworld focused on two tools in the vCloud Suite:  vCloud Automation Center, which handles infrastructure and application provisioning to turn IT into a service, and vCloud Operations Management, which handles monitoring and remediation of problems.  These tools, as well as some other tools in the vCloud Suite, tie in closely with both vSphere and other cloud providers like Amazon Web Services and Microsoft Azure to provide automated provisioning and management of public, private, and hybrid clouds.

As the presenters were demonstrating these products and showing how they worked together to deploy and maintain applications, I started to wonder what this meant for managed services providers whose product is managing IT infrastructures.  These companies tend to focus on small-to-medium sized entities that don’t want to take on the additional expenses of staff, IT monitoring, or 24-hour operations.  Can this software replace these providers?

If managed services providers can’t find ways to bring additional value to their customers, they will be quickly replaced.  If software has gotten to the point where it can not only detect an issue but also attempt to remediate it based on policies that administrators set, or immediately perform a root cause analysis to pinpoint the issue so administrators can act, then there are significant cost savings to be captured on the customer’s side.  Even if taking advantage of the advanced remediation provided by these software packages requires a little work to get right, the ongoing cost savings make this sort of investment very attractive.

At $work, we currently use a managed service provider.  They provide monitoring and patching for the most critical servers in our environment, which comes out to about one third of our environment.  The rest are managed using a variety of tools such as the monitoring in vCenter and scripts.  Like many environments, the monitoring coverage is not ideal.

But when I look at the cost of expanding managed services to cover the rest of my environment, or even continuing it, and compare it to using a software solution, there’s no contest.  I can get a greater level of coverage, some level of automated remediation and intelligent baselining, and a short payback period.

Now, I realize that this won’t be implemented overnight.  These systems can be just as complex as the infrastructures they are monitoring, and they take time to learn the network and develop baselines.  But the payoff, if done right, is software that goes beyond monitoring systems to managing them for you.

Infrastructures are going to get more complex now that software-defined storage and networking are a reality and vSphere is getting application-aware features.  If managed services providers want to remain relevant, they need to bring more value to their customers, update their tools and their offerings to better support the cloud, and work more closely with their customers to understand their environments and their needs.

If they don’t, then their customers will be throwing good money after bad.

Utilizing Offsite Backups to Seed Backup Copy Jobs in @Veeam #V7

In the last couple of posts about Veeam, I mentioned that $Work has been doing backups directly to our offsite storage.  Due to limits on bandwidth, any errors, changes, or server additions can have a drastic impact on our ability to complete backups in a timely manner.  And if you’ve ever tried to do a full backup of a 1.5TB file server over a 10Mbit connection because the VMID changed, you’ll know exactly what kind of pain I’ve felt in the past.

While a backup copy job will eliminate some of this pain, it still needs to be seeded at the remote site. 

I was a little disappointed to learn that an existing backup chain cannot be the target of a backup copy job.  The backup copy job needs to be pointed to a clean full backup that doesn’t include any forward or reverse incremental backups.  I’m not sure what the reason for this is, but it was confirmed by Veeam in this thread on their support forums.

But there is a workaround to this, and the process is fairly simple.

  1. Create a backup copy job.  Use the backup job that currently saves to the remote site or the virtual machine as the source and use a backup repository at the remote site as the destination.
  2. Let the Backup Copy job run and create a new full backup of the file.
  3. Once the Backup Copy job has successfully completed (see notes below), create a new backup job for those servers and store that backup data at your local/primary site.  You cannot change the target of your existing backup job –  Veeam will require you to copy your existing backup chain to that location.  It’s easier, and faster, to just create a new backup job.
  4. Edit your Backup Copy job to remove the job that backs up directly to the remote site and add the job that backs up to your primary site.
  5. The next time your VMs back up inside the copy window, it will sync the changes from the latest restore point to your remote site.

There are a couple of caveats with this.  You can’t set up a new backup chain at your primary site until you’ve created your backup copy job and the new full backup file.  If more recent restore points are available at your primary site than at your remote storage site, the copy job will skip the remote restore points in favor of the ones at your primary site.  This may mean copying a large amount of data over your WAN.

Second, you need to check to make sure that all of your data has copied over.  A copy job may end successfully if the interval expires and it copied some data.  If it hasn’t finished copying all of your data, it will restart and pick up where it left off.  This might give you a false sense of data security by making you think that your offsite backups have fully seeded when you still have large amounts of data to copy over your WAN.  In this case, it would be helpful to have a warning on any job notifications to inform administrators that the seeding hasn’t completed and that there isn’t a restore point in the remote repository yet.


@Veeam #V7’s Killer Feature

Of all the features that have been added to the latest version of Veeam, there is one that really stands out as the killer feature.  This feature is available in all of the licensed versions of Veeam, and there are no restrictions on the base functionality.  This feature wasn’t widely heralded from what I can tell.

That feature, in my opinion, is the Backup Copy Job.  As I mentioned in my last post, I wanted to dedicate a little more time to this feature.

But I have a confession to make.  I want to make it clear that I don’t know what features are in Commvault, NetVault, Symantec, Avamar, or other backup solutions.  Similar features probably do exist.  I do know that other software vendors have had GFS rotation for a long time, but I don’t know enough to say how it ties in with their virtualization backup or their offsite capabilities.  I also just want to focus on Veeam’s implementation.

So why am I making a big deal out of this if I think other vendors may have this capability?  Because this is what a lot of customers have been asking for for a long time.

In previous versions of Veeam, you couldn’t do any sort of backup rotation.  Well, that’s not entirely true.  It would be more accurate to say that there wasn’t any built-in functionality for doing GFS rotation, and their support forums have a number of hacks that add this capability by using PowerShell or by running multiple backup jobs on varying schedules.

By setting up GFS rotation and building it into a new method for utilizing offsite storage, Veeam has built a powerful tool for backing up virtual environments and ensuring that your data is safely protected offsite without having to break the bank on expensive backup storage.

First Thoughts on @Veeam #V7

Veeam released the latest version of their backup software a week ago on August 15th.  I’ve been looking forward to this release as they’ve included some features that many customers have wanted for some time such as:

  • Grandfather-Father-Son backup rotation as part of a Backup Copy Job to secondary storage
  • Export Backups to Tape
  • vSphere Web Client Plugin
  • Built-In WAN Acceleration

The full list of enhancements and features can be found here.

$Work uses Veeam as the primary backup solution, so I set up a test environment to try out some of these new features before upgrading.  $Work is only licensed for the Standard Edition, and while the evaluation license is for the Enterprise Plus feature set, I will only be testing what I can use in my production environment.  So unfortunately, I won’t be trying out the WAN Acceleration feature or U-AIR.

First Thoughts

Installation of V7 and setting up jobs was a breeze.  There were a few small changes to the process compared to previous versions, like having to set up credentials to access vCenter and Windows servers in a credential vault, but those changes were relatively minor and saved time later.  In previous versions, I would have to go into my password vault each time I wanted to create a backup job that included Windows servers.  This takes care of that.

Not much has changed with setting up new backup jobs.  They have added a screen for setting up a secondary storage site and backup rotation, which makes it easy to add backup jobs to a backup copy job if you already have one set up.  One of the best changes, in my opinion, is that the backup job statistics screen is now accessible on the main screen just by selecting a backup job.  It is no longer buried in a context menu.

Previous versions of Veeam backed up servers sequentially if there was more than one server per backup job.  That’s changed in this edition.  Veeam will now back up multiple servers per job in parallel.  This will cut down backup times significantly.  This option isn’t enabled if you are upgrading from a previous version, but it can easily be enabled in the options menu.

I really like the Backup Copy job option.  There is a lot to this feature, and I want to dedicate more time to it in a separate post.

The timing of this release is very good.  We are a Veeam customer at $work, and we’ve just started to reevaluate our disaster recovery plan and capabilities.  Some of these features, especially the exporting backups to tape and GFS rotation, are capabilities that we wanted to get.  We currently back up directly to an offsite repository, so the backup copy job feature may be one of the best additions to this product.