The Approaching Backup (Hyper)Convergence #VFD5

When we talk about convergence in IT, it usually means bringing things together to make them easier to manage and use.  Network convergence, in the data center, means bringing together your storage and IP stacks, while hyperconvergence brings compute and storage together in a platform that can easily scale as new capacity is needed.

One area where we haven’t seen a lot of convergence is the backup industry.  One new startup, fresh out of stealth mode, aims to change that by bringing together backup storage, compute, and virtualization backup software in a scalable and easy to use package.

I had the opportunity to hear from Rubrik, a new player in the backup space, at Virtualization Field Day 5.   My coworker, and fellow VFD5 delegate, Eric Shanks, has also written his thoughts on Rubrik.


Note: All travel and incidental expenses for attending Virtualization Field Day 5 were paid for by Gestalt IT.  This was the only compensation provided, and it did not influence the content of this post.


One of the challenges of architecting backup solutions for IT environments is that you need to bring together a number of disparate pieces, often from different vendors, and try to make them function as one.  Even if multiple components are from the same vendor, they’re often not integrated in a way to make them easy to deploy.

Rubrik’s goal is to be a “Time Machine for private cloud” and to make backup so simple that you can have the appliance racked and starting backups within 15 minutes.  Their product, which hit general availability in May, combines backup software, storage, and hardware in a package that is easy to deploy, use, and scale.

They front this with an HTML5 interface and advanced search capabilities for virtual machines and files within the virtual machine file system.  Thanks to a local metadata cache, this search works across both locally stored data and data that has been aged out to the cloud.

Because they control the hardware and software for the entire platform, Rubrik is able to engineer everything for the best performance.  They utilize flash in each node to store backup metadata as well as ingest the inbound data streams to deduplicate and compress data.

Rubrik uses SLAs to determine how often virtual machines are protected and how long that data is saved.  Over time, that data can be aged out to Amazon S3.  They do not currently support replication to another Rubrik appliance in another location, but that is on the roadmap.

Although there are a lot of cool features in Rubrik, it is a version 1.0 product.  It is missing some things that more mature products have, such as application-level item recovery and role-based access control, and only vSphere is supported in this release.  However, the vendor has committed to adding many more features, and support for additional hypervisors, in future releases.

You can watch the introduction and technical deep dive for the Rubrik presentation on YouTube.  The links are below.

If you want to see a hands-on review of Rubrik, you can read Brian Suhr’s unboxing post here.

Rubrik has brought an innovative and exciting product to market, and I look forward to seeing more from them in the future.

First Thoughts on @Veeam #V7

Veeam released the latest version of their backup software a week ago on August 15th.  I’ve been looking forward to this release as they’ve included some features that many customers have wanted for some time such as:

  • Grandfather-Father-Son backup rotation as part of a Backup Copy Job to secondary storage
  • Export Backups to Tape
  • vSphere Web Client Plugin
  • Built-In WAN Acceleration

The full list of enhancements and features can be found here.

$Work uses Veeam as the primary backup solution, so I set up a test environment to try out some of these new features before upgrading.  $Work is only licensed for the Standard Edition, and while the evaluation license is for the Enterprise Plus feature set, I will only be testing what I can use in my production environment.  So unfortunately, I won’t be trying out the WAN Acceleration feature or U-AIR.

First Thoughts

Installation of V7 and setting up jobs was a breeze.  There were a few small changes to the process compared to previous versions, like having to set up credentials to access vCenter and Windows servers in a credential vault, but those changes were relatively minor and saved time later.  In previous versions, I would have to go into my password vault each time I wanted to create a backup job that included Windows servers.  The credential vault takes care of that.

Not much has changed with setting up new backup jobs.  They have added a screen for setting up a secondary storage site and backup rotation, which makes it easy to add backup jobs to a backup copy job if you already have one set up.  One of the best changes to the various job screens, in my opinion, is that the backup job statistics screen is now accessible from the main screen just by selecting a backup job.  It is no longer buried in a context menu.

Previous versions of Veeam backed up servers sequentially if there was more than one server per backup job.  That’s changed in this edition: Veeam will now back up multiple servers per job in parallel, which will cut down backup times significantly.  This option isn’t enabled if you are upgrading from a previous version, but it can easily be turned on in the options menu.

I really like the Backup Copy job option.  There is a lot to this feature, and I want to dedicate more time to it in a separate post.

The timing of this release is very good.  We are a Veeam customer at $work, and we’ve just started to reevaluate our disaster recovery plan and capabilities.  Some of these features, especially the exporting backups to tape and GFS rotation, are capabilities that we wanted to get.  We currently back up directly to an offsite repository, so the backup copy job feature may be one of the best additions to this product.

Exchange Restores and PowerShell Scripting Games

In my last post, I posted a script that I use to back up my Exchange 2010 test environment using PowerShell and Windows Server Backup.  But what if I need to do a restore?

Well, the good people over at ExchangeServerPro.com have a good step-by-step walkthrough of how to restore an individual mailbox that covers restoring from Windows Server Backup, rolling the mailbox forward, and recovering data.

If you’re interested in how a restore would work, check out the article.

PowerShell Scripting Games

Microsoft’s annual Scripting Games started on Monday.  Unlike previous years, this year’s games are limited to the PowerShell scripting language.  A beginner and an advanced scripting challenge are posted each day, and you have seven days to submit a solution to each problem.

You can find the challenges and scripting tips on the Hey! Scripting Guy blog.  The official rules also include a link to the registration page.

If you’re looking to learn about PowerShell or just challenge yourself with a scripting problem, you might want to check this out.

Scripting Exchange 2010 Backups on Windows Server 2008 R2 using PowerShell and Windows Server Backup

I’ve struggled with backing up my Exchange 2010 SP1 environment in my home lab since I upgraded over a month ago.  Before the upgrade, I was using a script that did Volume Shadow Copy Service (VSS) backups.

After upgrading, I wanted to cut my teeth with Windows Server Backup (WSB).  Windows Server Backup is the replacement for the NTBackup program that was included with Windows until Vista, and it uses VSS to take snapshot backups of entire volumes or file systems.

Unlike NTBackup, WSB will not run backup jobs to tape.  You will need to dedicate an entire volume or use a network folder to store your backups.  If you use the GUI to back up to a network folder, you can only retain one backup set, and a new backup will overwrite the old.

This was an issue for me.  Even though I have Exchange configured to retain deleted items for 14 days and deleted mailboxes for 30 days, I like to keep multiple backups.  It allows me to play with multiple recovery scenarios that I might face in the real world.

And that is where PowerShell comes in.  Server 2008 R2 allows you to create a temporary backup policy and pass that policy to Windows Server Backup.  This also lets you change the folder where the backup is saved each time, and you can easily add or remove volumes, LUNs, and databases without having to reconfigure your backup job.
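One prerequisite worth calling out: on Server 2008 R2 the WB* cmdlets are delivered as a PowerShell snap-in rather than a module, so the Windows Server Backup command-line tools feature needs to be installed and the snap-in loaded before any of the commands below will work.  A minimal sketch:

##Load the Windows Server Backup snap-in if it isn't already available
if (-not (Get-PSSnapin -Name Windows.ServerBackup -ErrorAction SilentlyContinue))
{
    Add-PSSnapin Windows.ServerBackup
}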

I started by working from the script by Michael Smith that I linked to above.  To make it work with WSB, I first had to modify it to work with Exchange 2010.  One of the major differences between Exchange 2007 and Exchange 2010 is that storage groups have been removed in the latter.  Logging and other storage group functions have been rolled into the database, making each database self-contained.

The original script used the Get-StorageGroup cmdlet to get the location of each storage group’s log files.  Since that cmdlet is no longer present in Exchange 2010, I had to move that logic into the function that retrieves the location of the database files.
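For context, the loop below walks a collection of mailbox databases in $colMB and fills in the $pathPattern and $volumes hash tables that are used later when building the backup job.  If you’re following along, they can be set up with something like this (my approximation, not necessarily the exact lines from the script):

##Collect all mailbox databases hosted on this Exchange 2010 server
$colMB = Get-MailboxDatabase -Server $env:COMPUTERNAME

##Hash tables and counter used by the loop below
$pathPattern = @{}
$volumes     = @{}
$i           = 1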

After adding some error handling by using Try/Catch, the section that locates mailbox databases looks like:

Try
{
    foreach ($mdb in $colMB)
    {
        if ($mdb.Recovery)
        {
            write-host ("Skipping RECOVERY MDB " + $mdb.Name)
            continue
        }
        write-host ($mdb.Name + "`t " + $mdb.Guid)
        write-host ("`t" + $mdb.EdbFilePath)
        write-host " "

        ##Track the database file path and the volume it lives on
        $pathPattern.($mdb.EdbFilePath) = $i

        $vol = $mdb.EdbFilePath.ToString().SubString(0, 2)
        $volumes.set_item($vol,$i)

        ##This section gets the log file information for the backup
        $prefix  = $mdb.LogFilePrefix
        $logpath = $mdb.LogFolderPath.ToString()

        ## E00*.log
        $pathPattern.(join-path $logpath ($prefix + "*.log")) = $i

        $vol = $logpath.SubString(0, 2)
        $volumes.set_item($vol,$i)

        $i += 1
    }
}
Catch
{
    Write-Host "There are no Mailbox Databases on this server."
}

I also removed all of the functions related to building and calling the DiskShadow and RoboCopy commands.  Since we will be using WSB, there is no need to manually trigger a VSS backup.

Once we know where our mailbox and public folder databases and their log files are located, we can start to build our temporary backup job.  The first thing we need to do is create a new backup policy called $BPol by using the New-WBPolicy cmdlet.

##Create New Backup Policy for Windows Server Backup
$BPol = New-WBPolicy

Once we have created our backup policy, we add the drives that we want to back up.  We can tell Windows Server Backup which drives to include by using the drive letters and folder paths that we retrieved from Exchange with the code above.  We use the Get-WBVolume cmdlet to get the disk or volume information and the Add-WBVolume cmdlet to add it to the backup job.

##Define volumes to be backed up based on Exchange filepath information
##Retrieved in function GetStores

ForEach($bvol in $volumes.keys)
{
    $WBVol = Get-WBVolume -VolumePath $bvol
    Add-WBVolume -Policy $BPol -Volume $WBVol
}

Add-WBVolume doesn’t overwrite previously added volumes, so I can easily add multiple drives to my backup job.

Now that my backup locations have been added, I need to tell WSB that this will be a VSS full backup instead of a VSS copy backup.  I want to run a full backup because it will commit the information in the log files to the database and truncate old logs.  The command to set the backup job to a full backup is:

Set-WBVssBackupOptions -Policy $BPol -VssFullBackup

Finally, I need to set my backup target.  This script is designed to back up to a network share.  Since I want to retain multiple backups, it will also create a new folder to store the backup at runtime.  I created a function called AddWBTarget to handle this part of the job.

Function AddWBTarget
{
    ##Create a new folder for the backup in $backupLocation using a date-based name
    $folder = get-date -uFormat "%Y-%m-%d-%H-%M"
    md "$backupLocation\$folder"
    $netFolder = "$backupLocation\$folder"

    $netTarget = New-WBBackupTarget -NetworkPath "$netFolder"
    Add-WBBackupTarget -Policy $BPol -Target $netTarget
}

The backup location needs to be a UNC path to a network folder, and you set it when you run the script with the -backuplocation parameter.  The function creates a new folder and then adds that location to the backup job using the Add-WBBackupTarget cmdlet.
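In case you’re wondering how that parameter is wired up, a minimal sketch of the declaration at the top of a script like this (the mandatory flag is my addition) would be:

##Accept the UNC path of the backup share, e.g. \\server\share
Param(
    [Parameter(Mandatory=$true)]
    [string]$backupLocation
)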

The documentation for Add-WBBackupTarget states that you need to provide user credentials to back up to a network location.  That does not appear to be the case; WSB appears to use the credentials of the user running the script to access the backup location.
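If you do run into a case where explicit credentials are required, New-WBBackupTarget accepts a -Credential parameter for network paths, so a variation on the target code (the account name below is just a placeholder) might look like:

##Prompt for an account with write access to the backup share (placeholder name)
$cred      = Get-Credential "DOMAIN\backupsvc"
$netTarget = New-WBBackupTarget -NetworkPath "$netFolder" -Credential $cred
Add-WBBackupTarget -Policy $BPol -Target $netTarget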

WSB now has all of the information that it needs to perform a backup, so I pass the temporary backup job to it using the Start-WBBackup cmdlet with the -Policy parameter.
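In other words, the final step boils down to a single line:

##Kick off the one-time backup using the temporary policy
Start-WBBackup -Policy $BPol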

You can run the script manually by running EX2k10WBS.ps1 from your Exchange 2010 server.  You will need to declare your backup location by using the -backuplocation parameter.  Since this script will be performing a backup, you will need to run PowerShell with elevated permissions.
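For example, from an elevated PowerShell prompt on the Exchange server (the share below is just a placeholder):

##Run the backup against a UNC path of your choosing
.\EX2k10WBS.ps1 -backuplocation \\fileserver\ExchangeBackups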

You can also set this script to run as a scheduled task.
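For instance, a daily task created with schtasks.exe could look something like the following; the paths, share, and start time are placeholders, and keep in mind that when the script runs outside of the Exchange Management Shell it will also need to load the Exchange 2010 snap-in (Microsoft.Exchange.Management.PowerShell.E2010) before Get-MailboxDatabase is available.

##Create a daily task that runs the backup script as SYSTEM at 10 PM (placeholder paths)
schtasks /Create /TN "Exchange 2010 WSB Backup" /SC DAILY /ST 22:00 /RU SYSTEM /RL HIGHEST `
  /TR "powershell.exe -NoProfile -ExecutionPolicy Bypass -File C:\Scripts\EX2k10WBS.ps1 -backuplocation \\fileserver\ExchangeBackups"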

You can download the entire script here.