Horizon EUC Access Point Configuration Script

Horizon 6.2 included a new feature when it was launched in early September – the EUC Access Gateway.  This product is a hardened Linux appliance that has all of the features of the Security Server without the drawbacks of having to deploy Windows Servers into your DMZ.  It will also eventually support Horizon Workspace/VMware Identity Manager.

This new Horizon component embodies the “cattle” philosophy of virtual machine management.  If it stops working properly, or a new version comes out, it’s meant to be disposed of and redeployed.  To facilitate this, the appliance is configured and managed using a REST API.

Unfortunately, working with this REST API isn’t exactly user friendly, especially if you’re only deploying one or two of these appliances.  This API is also the only way to manage the appliance, and it does not have a VAMI interface or SSH access.
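To give a sense of what the script abstracts away, here is a minimal sketch of a raw call against the appliance’s admin REST interface.  The port and endpoint path are assumptions based on my testing and may differ between appliance versions:

```powershell
# Hedged sketch of a raw REST call to the appliance. The port (9443) and the
# /rest/v1/config/settings endpoint are assumptions that may vary by version.
# The appliance ships with a self-signed certificate, so validation is relaxed.
[System.Net.ServicePointManager]::ServerCertificateValidationCallback = { $true }

# Basic authentication with the admin account set during deployment
$auth = [Convert]::ToBase64String([Text.Encoding]::UTF8.GetBytes("admin:P@ssw0rd"))
Invoke-RestMethod -Uri "https://10.1.1.2:9443/rest/v1/config/settings" `
    -Method Get -Headers @{ Authorization = "Basic $auth" }
```

The script wraps calls like this one, along with the JSON payloads that the configuration endpoints expect, behind ordinary PowerShell parameters.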

I’ve put together a PowerShell script that simplifies and automates the configuration of the EUC Access Gateway Appliances.  You can download the script off of my Github site.

The script has the following functions:

  • Get the appliance’s current Horizon View configuration
  • Set the appliance’s Horizon View configuration
  • Download the log bundle for troubleshooting

There are also placeholder parameters for configuring vIDM (which will be supported in future releases) and uploading SSL certificates.

The syntax for this script’s main features looks like:

Set-EUCGateway -appliancename 10.1.1.2 -adminpassword P@ssw0rd -GetViewConfig

Set-EUCGateway -appliancename 10.1.1.2 -adminpassword P@ssw0rd -SetViewConfig -ViewEnablePCoIP -ViewPCoIPExternalIP 10.1.1.3 -ViewDisableBlast

Set-EUCGateway -appliancename 10.1.1.2 -adminpassword P@ssw0rd -GetLogBundle -LogBundleFolder c:\temp

If you have any issues deploying a config, use the script to download a log bundle and open the admin.log file.  This file will tell you what configuration element was rejected.

I want to point out one troubleshooting issue that my testers and I encountered while developing this script.  The REST API does not work until an admin password is set on the appliance.  We discovered that there were times when the password would not be set despite one being provided during deployment.  If this happens, the script will fail when you try to get a config, set a config, or download the log bundle.

When this happens, you either need to delete the appliance and redeploy it or log into the appliance through the vSphere console and manually set the admin password.

Finally, I’d like to thank Andrew Morgan and Jarian Gibson for helping test this script and providing feedback that greatly improved the final product.

Deploying SQL Server Using WinRM #VDM30in30

A couple of weeks ago, I shared the scripts I use to prepare a brand new VM to run SQL Server.  At the time, I noted that I could only get up to the point where the server was ready for SQL and that I was unable to overcome some issues with installing SQL Server remotely.

I have finally found a way to get around those issues, and I’ve put together a script for remotely deploying SQL Server using PowerShell and WinRM.  This script is designed to be used as part of a workflow in vCenter Orchestrator that will allow admins and, eventually, developers to provision their own SQL Server instances.

There are two scripts that are required for this process, and they borrow techniques from the other provisioning scripts that I’ve written.  I would highly recommend reading my previous articles on provisioning a VM and preparing a VM for SQL Server before trying out these scripts.

The first script is the Install-SQLServer script that should be included with the SQL Server files that are copied to the new server.  This is the script that will run on the local machine and install SQL.

The other script, Invoke-SQLInstall, runs on your jump box or scripting server.  In my environment, this script is executed on my scripting server by vCenter Orchestrator and invokes the SQL Server installation on my new SQL Server using WinRM.

Kerberos and SQL Server Installations

While I was building this script, I ran into a lot of issues when trying to get SQL Server to install remotely using WinRM.  The install would fail, and the setup log would point to an error that the account or the computer wasn’t trusted for delegation.

The Kerberos “Second Hop” issue was causing the install to fail, and most of the workarounds for getting around this issue, such as using Start-Process to launch a new PowerShell session or using a local batch file to install under other credentials, would not work inside of a WinRM session.

There is one other option that I had considered, but I didn’t pursue it at the time because I thought it was a security risk.

CredSSP

Microsoft introduced a new security delegation method years back to work around some of the limitations of Kerberos.  This method, called the Credential Security Service Provider, or CredSSP for short, was designed specifically to address the Kerberos second hop issue.

The issue with CredSSP is that it can be configured to delegate credentials to any computer on the domain through group policy.  We don’t want or need that, and credentials should only be delegated to the computer that we’re working on for the short time that it will need them.

It is actually fairly easy to configure CredSSP on the SQL Server at runtime and to turn it off when we’re done, and the script will take care of both tasks when installing SQL Server.
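A rough sketch of that runtime CredSSP dance, with hypothetical server names, looks like this:

```powershell
# Sketch only; server names and paths are placeholders.
# On the jump box: allow delegating credentials only to the new SQL Server
Enable-WSManCredSSP -Role Client -DelegateComputer "SQL01.lab.local" -Force

# On the target server: accept delegated credentials
Invoke-Command -ComputerName SQL01 -ScriptBlock { Enable-WSManCredSSP -Role Server -Force }

# Run the install in a CredSSP-authenticated session so the second hop works
$cred = Get-Credential
Invoke-Command -ComputerName SQL01 -Authentication Credssp -Credential $cred `
    -ScriptBlock { & E:\SQLInstall\Install-SQLServer.ps1 }

# Tear the delegation back down once the install finishes
Invoke-Command -ComputerName SQL01 -ScriptBlock { Disable-WSManCredSSP -Role Server }
Disable-WSManCredSSP -Role Client
```

Scoping the delegation to a single computer, and removing it immediately afterward, keeps the security exposure far smaller than a domain-wide group policy.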

Custom Roles

SQL Server has a number of roles and features that can be selected during installation.  Many of these roles have very specific functions and aren’t suited for general purpose database servers, and they shouldn’t be installed if you aren’t going to use them.

What makes this more complicated is that some features are instance specific, such as the database engine and Reporting Services, while others are not instance specific and only need to be installed once.

Since each instance and/or each SQL Server may need different features installed, the script was designed with roles in mind.  Each role is an element in a PowerShell Switch statement that contains the SQL command-line installation string.  It may also contain other commands that might be needed, such as the Windows Firewall cmdlets to allow incoming connections to the SQL instance.

This design choice allows the script to be flexible and adapt to the changing needs of the business and the environment.
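As a hedged illustration of that layout (the setup feature strings are abbreviated, and the paths and instance names are placeholders), a role element might look like:

```powershell
# Sketch of the role-based Switch layout; adapt the setup flags to your version
Switch ($Role) {
    "DatabaseEngine" {
        # Instance-specific: run once per instance
        $installString = "E:\SQLInstall\setup.exe /Q /ACTION=Install /FEATURES=SQLENGINE /INSTANCENAME=$InstanceName /IACCEPTSQLSERVERLICENSETERMS"
        # Open the instance port so clients can connect
        New-NetFirewallRule -DisplayName "SQL Server $InstanceName" -Direction Inbound `
            -Protocol TCP -LocalPort 1433 -Action Allow
    }
    "ManagementTools" {
        # Not instance-specific: only needs to run once per server
        $installString = "E:\SQLInstall\setup.exe /Q /ACTION=Install /FEATURES=SSMS /IACCEPTSQLSERVERLICENSETERMS"
    }
}
Invoke-Expression $installString
```

Adding a new role becomes a matter of adding another case to the Switch statement.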

Get the Scripts

The scripts are available on my Github page with the rest of my provisioning scripts.

Using PowerCLI to Prepare a VM for SQL Server

SQL Server is one of those applications where performance can be greatly impacted by the initial server configuration.  One of the big contributing factors to this is storage configuration.  If this isn’t optimized at the VM or the storage array level, performance will suffer, and an entire book has been dedicated to the subject.

SQL Server may seem ubiquitous in many environments because many applications require a SQL database for storing data.  And in many cases, a new application means deploying a new database server.

Because SQL can require a virtual machine setup that follows a different baseline, each new SQL Server will either need to be deployed from a custom SQL template or hand-crafted from whatever base template it was deployed from.  If you want to keep your template count to a minimum but still avoid hand-crafting your SQL Servers, you need to turn to PowerCLI and PowerShell 4.0.

Windows Server 2012 R2 and PowerShell 4.0 introduced a number of new cmdlets that assist in preparing a brand new VM to run SQL Server.  These cmdlets handle storage and disk operations, and they will be instrumental in provisioning the additional storage resources that the server needs.

The code for this script is up on Github.

Standard SQL Server Configuration

When you virtualize SQL Server, there are a few best practices that should be followed to ensure good performance.  Therefore, we want to ensure that the script to prepare the server for SQL implements these best practices.  Most of them relate to storage configuration and disk layout.

One of the other goals of this process is to ensure consistency.  All SQL Servers should be configured similarly, and drive letters, use of mount points, and installation paths should be the same on all SQL Servers to ease administrative overhead. 

I have a couple of preferences when deploying SQL in my environment.  Each instance will have two dedicated volumes – one for SQL data files and one for SQL logs.  I prefer to use mount points to store the data and log files for my databases and TempDB.  This allows me to keep drive letter assignments consistent across all database servers, and if I need to add an instance to a server, I don’t need to find free drive letters for the additional disks. 

I also like to include the VMDK file name in the volume label.  Drive numbers can change on VMs as drives are added and removed, so adding the VMDK file name to the volume label adds an additional value to check if you need to expand a disk or remove one from production. 
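A hedged sketch of deriving that label with PowerCLI (the VM name, disk size, and label prefix are placeholders):

```powershell
# Sketch: build a volume label that embeds the VMDK file name
Connect-VIServer -Server vcenter.lab.local

# Find the hard disk by capacity; Filename looks like "[Datastore1] SQL01/SQL01_1.vmdk"
$disk = Get-HardDisk -VM "SQL01" | Where-Object { $_.CapacityGB -eq 50 }

$vmdkPath = ($disk.Filename -split '\] ')[-1]                              # "SQL01/SQL01_1.vmdk"
$vmdkName = [System.IO.Path]::GetFileNameWithoutExtension($vmdkPath)       # "SQL01_1"
$label = "SQLDATA_$vmdkName"
```

The resulting label can then be passed to Format-Volume’s -NewFileSystemLabel parameter when the disk is formatted inside the guest.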

Screenshot of Disk Labels


Finally, I like to install SQL Server Management Studio prior to installing the database engine.  This gives me one less feature to worry about when configuring my instance deployments.

There are a couple of things that this job will do when preparing a server to run SQL:

  1. Set CPU and Memory reservations based on the currently assigned resources to guarantee performance
  2. Change the Storage Policy to automatically set all newly attached disks to Online so they can be configured.
  3. Create the following disk layout:
    1. E: – SQL Install location
    2. R: – SQL Data Volume
    3. S: – SQL Backup Volume
    4. T: – SQL Log Volume
  4. Copy SQL Installer files to E:\SQLInstall
  5. Create the following volumes as mount points, and attach them to PVSCSI storage controllers. 
    1. TEMPDB Database File Volume under R:
    2. TEMPDB Log File Volume under T:
  6. Add any SQL admin groups or database owners to the local administrator group
  7. Install SQL Server Management Studio

The script doesn’t add or configure any disks that will be used for the actual SQL Server instances that will be installed on the server.  I have another script that handles that.
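The first two steps above can be sketched as follows.  The VM name is a placeholder, and the per-vCPU MHz figure is an assumption you would adjust to your hosts’ clock speed:

```powershell
# Sketch of setting reservations and the disk online policy
Connect-VIServer -Server vcenter.lab.local
$vm = Get-VM -Name "SQL01"

# 1. Reserve the currently assigned CPU and memory to guarantee performance
$cpuMhz = $vm.NumCpu * 2000   # assumes ~2 GHz per vCPU; adjust to the host's clock speed
Get-VMResourceConfiguration -VM $vm |
    Set-VMResourceConfiguration -CpuReservationMhz $cpuMhz -MemReservationGB $vm.MemoryGB

# 2. Inside the guest: bring all newly attached disks online automatically
Invoke-Command -ComputerName SQL01 -ScriptBlock {
    Set-StorageSetting -NewDiskPolicy OnlineAll
}
```

With the online policy set, disks added later in the job can be initialized without any manual intervention in Disk Management.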

Working with Disks

In the past, the only ways to manage disks using PowerShell were to call old command-line utilities like diskpart or to use WMI.  That changed with Windows Server 2012 R2, when new disk management commands were included with PowerShell 4.0.

Note: These commands only work against Windows Server 2012 R2 and newer.

These commands will take care of all of the disk provisioning tasks once the VMDK has been added to the server, including initializing the disk, creating the partition, and formatting it with the correct block size.  The PowerShell commands also allow us to define whether the disk will be a mount point or be accessed through a drive letter.

Note: When attaching a disk as a mount point, there are some additional options that need to be selected to ensure that it does not get a drive letter assigned after a reboot.  Please see the code snippet below.

One of the neat things about these new cmdlets is that they use the new CIMSession connection type for PowerShell remoting.  Cmdlets that use CIMSessions are run on the local computer and connect to a WMI instance on the remote machine.  Unlike PSSessions, network connections are only utilized when a command is being executed.
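A minimal illustration of that connection model (the server name is hypothetical):

```powershell
# Sketch: the disk cmdlets run locally and query WMI/CIM on the remote guest
$session = New-CimSession -ComputerName "SQL01" -Credential (Get-Credential)

# List uninitialized disks on the remote server without a persistent remoting session
Get-Disk -CimSession $session |
    Where-Object PartitionStyle -eq 'RAW' |
    Select-Object Number, Size, PartitionStyle

Remove-CimSession $session
```

The same $session object can be passed to Initialize-Disk, New-Partition, and Format-Volume, which is exactly how the function below uses it.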

An example of these cmdlets in action is the function to mount a VMDK as a mount point. 

Function Create-MountPoint
{
    # Initialize the raw disk, create a GPT partition, format it NTFS with a
    # 64KB allocation unit, and attach it as an NTFS mount point (no drive letter)
    Param($Servername,$VolumeName,$VolumeSize,$Path,$CimSession)

    # Get-Disk reports size in bytes, so convert the GB count for the comparison
    $VolumeSizeBytes = $VolumeSize * 1GB

    $partition = Get-Disk -CimSession $CimSession |
        Where-Object {($_.PartitionStyle -eq "RAW") -and ($_.Size -eq $VolumeSizeBytes)} |
        Initialize-Disk -PartitionStyle GPT -PassThru |
        New-Partition -UseMaximumSize -AssignDriveLetter:$False

    $disknumber = $partition.DiskNumber
    $partitionnumber = $partition.PartitionNumber

    Get-Partition -DiskNumber $disknumber -PartitionNumber $partitionnumber -CimSession $CimSession |
        Format-Volume -AllocationUnitSize 64KB -FileSystem NTFS -NewFileSystemLabel $VolumeName -Confirm:$false

    # Windows tends to re-add a drive letter on reboot, so suppress it in several places
    Add-PartitionAccessPath -CimSession $CimSession -DiskNumber $disknumber -PartitionNumber $partitionnumber -AssignDriveLetter:$False
    Add-PartitionAccessPath -CimSession $CimSession -DiskNumber $disknumber -PartitionNumber $partitionnumber -AccessPath $Path
    Set-Partition -CimSession $CimSession -DiskNumber $disknumber -PartitionNumber $partitionnumber -NoDefaultDriveLetter:$true
}

This function handles configuring a new disk for SQL, including formatting it with a 64KB block size, and attaches it as an NTFS mount point.

If you read through the code, you’ll notice that the disk is configured to not assign a drive letter in multiple places.  While writing and testing this function, all mount points would gain a drive letter when the system was rebooted.  In order to prevent this from happening, the script needed to tell Windows not to assign a drive letter multiple times.

What About Disks for SQL Instances?

One thing that this particular script does not do is create the data and log volumes for a SQL instance.  While it wouldn’t be too hard to add that code in and prompt for an instance name, I decided to place that logic in another script.  This allows me to manage and use one script for adding instance disks instead of having that logic in two places.  This also helps keep both scripts smaller and more manageable.

Installing SQL Server

The last step in this process is to install SQL Server.  Unfortunately, that step still needs to be done by hand at this point.  The reason is that the SQL installation requires Kerberos in order to work properly, and it throws an error if I try to install using WinRM.

Simplifying VM Provisioning with PowerCLI and SQL

Virtualization has made server deployments easier, and putting a new server into production can be as easy as right-clicking on a template and selecting Deploy VM and applying a customization spec.

Deploying a VM from a template is just one step in the process.  Manual intervention, or worse – multiple templates, may be required if the new VM needs more than the default number of processors or additional RAM.  And deployment tasks don’t stop with VM hardware.  There may be other steps in the process such as putting the server’s Active Directory account into the correct OU, placing the VM in the correct folder, or granting administrative rights to the server or application owner.

All of these steps can be done manually.  But it requires a user to work in multiple GUIs and even log into the remote server to assign local admin rights.

There is an easier way to handle all of this.  PowerShell, with the PowerCLI and Active Directory plugins, can handle the provisioning process, and .Net calls can be used to add a user or group to the new server’s Administrator group while pulling the configuration data from a SQL database.

The Script

I have a script available on Github that you can download and try out in your environment.  The script, Provision-VM.ps1, requires PowerCLI, the Active Directory PowerShell cmdlets, and a SQL database for profile information, which is explained below.  You will also need two service accounts – an Active Directory user with Administrator permissions in vCenter and an Active Directory user with Domain Administrator permissions.

This script was designed to be used with the vCenter Orchestrator PowerShell module and WinRM.  vCO will provide a graphical front end for entering the script parameters and executing the script.

This script might look somewhat familiar.  I used a version of it in my Week 1 Virtual Design Master submission.

What Provision-VM.ps1 Does

So what exactly does Provision-VM.ps1 do?  Well, it does almost exactly what it says on the tin.  It provisions a brand new VM from a template.  But it does a little more than just deploy a VM from a template.

The exact steps that are taken are:

  1. Query the SQL database for the customization settings that are needed for the profile.
  2. Prestage the computer account in the Active Directory OU
  3. Create a non-persistent Customization Spec
  4. Set the IP network settings for the customization spec
  5. Deploy a new VM to the correct resource pool/cluster/host and datastore/datastore cluster using the specified template based on the details retrieved in step 1.
    Note: The Resource Pool parameter is used in the script instead of the host parameter because the Resource Pool parameter encompasses hosts, clusters, and resource pools.  This provides more flexibility than the host parameter.
  6. Add additional CPUs and RAM if specified using the –CPUCount and –RAMCount parameters
  7. Power on VM and customize
  8. Add server owner user account or group to the local administrators group if one is specified using the –Owner parameter.

By using this deployment process along with some other scripts for configuring a server for a specific role after it has been deployed, I’ve been able to reduce the number of templates that need to be managed to 1 per Windows version.
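Steps 3 through 7 can be condensed into a PowerCLI sketch.  The variable values shown here are placeholders for the fields the script reads from the SQL profile:

```powershell
# Sketch of the core deployment steps; profile values come from the database query
$spec = New-OSCustomizationSpec -Name "Temp-$VMName" -Type NonPersistent `
    -OSCustomizationSpec (Get-OSCustomizationSpec "BaseSpec")

# Set static IP details on the temporary spec's NIC mapping
Get-OSCustomizationNicMapping -OSCustomizationSpec $spec |
    Set-OSCustomizationNicMapping -IpMode UseStaticIP -IpAddress $IP `
        -SubnetMask $Mask -DefaultGateway $GW -Dns $DNS

# Deploy from the template into the resource pool and datastore from the profile
$vm = New-VM -Name $VMName -Template (Get-Template $TemplateName) `
    -ResourcePool (Get-ResourcePool $PoolName) -Datastore (Get-Datastore $DSName) `
    -OSCustomizationSpec $spec

# Adjust hardware if requested, then power on to kick off customization
Set-VM -VM $vm -NumCpu $CPUCount -MemoryGB $RAMCount -Confirm:$false
Start-VM -VM $vm
```

Because the spec is non-persistent, it disappears after the deployment, so concurrent runs don’t step on each other’s NIC mappings.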

WinRM and Working Around Kerberos Issues

vCenter Orchestrator is a great tool for automation and orchestration, and VMware has developed a PowerShell plugin to extend vCO management to Windows hosts and VMs.  This plugin even uses WinRM, which is Microsoft’s preferred remote management technology for PowerShell.

WinRM setup for the vCO appliance, which I use in my environments, requires Kerberos to be used when making the remote connection.  I use a single Windows jumpbox to execute all of my PowerShell scripts from one location, so I run into Kerberos forwarding issues when using vCO and PowerShell to administer other systems.

There is a way to work around this, but I won’t spend a lot of time on it since it deserves its own post.  However, you can learn more about how the password information is stored and converted into a PowerShell credential from this article on PowerShell.org.

I also put together a little script that creates a password hash file using some of the code in the article above.
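The core of that approach, sketched with placeholder paths and account names, looks like this:

```powershell
# One-time step: prompt for the password and store its DPAPI-protected form.
# The hash file can only be decrypted by the same user on the same machine.
Read-Host "Service account password" -AsSecureString |
    ConvertFrom-SecureString |
    Out-File C:\Scripts\svc-password.txt

# At runtime: rebuild a credential object from the hash file
$secure = Get-Content C:\Scripts\svc-password.txt | ConvertTo-SecureString
$cred = New-Object System.Management.Automation.PSCredential ("DOMAIN\svc-vco", $secure)
```

The resulting $cred object can then be passed to cmdlets that accept a -Credential parameter without ever storing the password in plain text.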

SQL-Based Profiles

One of the drawbacks of trying to script server deployments is that the script needs to be simple to use without becoming too hard to maintain.  I could make all of the required inputs – cluster or resource pool, datastore, template, etc. – into parameters that the person who runs the script has to enter.  But if you plan on using a script as part of a self-service provisioning model, keeping the number of parameters to a minimum is essential.  This helps limit the options that are available to users when deploying VMs and prevents them from having to worry about backend details like cluster and datastore names.

The tradeoff, in my experience, is that you need to put more into the script to compensate for having fewer parameters.   To do this, you’ll need to create “profiles” of all the customization settings you want to apply to the deployed server and code it directly into the script.

Let’s say you have one vSphere cluster.  The cluster has three VLANs that servers can connect to, two datastore clusters where the server can be stored, and three templates that can be deployed.  To keep the script easy to run, and to prevent admins or app owners from having to memorize all the details, you’d need to create 18 different profile combinations to cover the various settings.

This can make the script larger as you’ll need to include all combinations of settings that will be deployed.  It also makes it more likely that any additions or changes could introduce a script breaking bug like a missing curly bracket or quotation mark.

There is another way to reduce the size and complexity of the script while keeping parameters to a minimum – use a SQL database to store the customization settings.  These customization settings would be queried at run-time based on the profile that the end user selects.

The database for this script is a simple single table database.  There is a SQL script on Github to set up a table similar to the one I use in my lab.  If you choose to add or remove fields, you will need to edit the Provision-VM.ps1 file starting around line 106.

Database Schema Screenshot

There are two ways that the information can be retrieved from the database.  The first method is to install SQL Server Management Studio for SQL Server 2012 or newer on the server where the script will be executed.  The other is to use .Net to connect to SQL and execute the query.  I prefer the latter option because it requires one less component to install.

The code for querying SQL from PowerShell, courtesy of Iris Classon’s blog that is linked above, is:

$dataSource = $SQLServer
$user = "SQL Server User Account"
$pwd = "Password"
$database = "OSCustomizationDB"
$databasetable = "OSCustomizationSettings"
$connectionString = "Server=$dataSource;uid=$user;pwd=$pwd;Database=$database;Integrated Security=False;"
 
$query = "Select * FROM $databasetable WHERE Profile_ID = '$Profile'"
 
$connection = New-Object System.Data.SqlClient.SqlConnection
$connection.ConnectionString = $connectionString
$connection.Open()
$command = $connection.CreateCommand()
$command.CommandText  = $query
 
$result = $command.ExecuteReader()

$ProfileDetails = New-Object "System.Data.DataTable"
$ProfileDetails.Load($result)

You may notice that SQL Authentication is used for querying the database.  This script was designed to run from vCO, and if I use the PowerShell plugin, I run into Kerberos issues when using Windows Integrated authentication.  The account used for accessing this database only needs to have data reader rights.

Once the settings have been retrieved from the database, they can be used to determine which template will be deployed, select the resource pool and datastore or datastore cluster that it will be deployed to, temporarily modify an existing customization spec’s NIC mapping settings at runtime, and even determine which OU the server’s AD account will be placed in.

The benefit of this setup is that I can easily add new profiles or change existing profiles without having to directly edit my deployment script.  This gets changes into production faster.

More to Come…

This is just scratching the surface of the deployment tasks that can be automated with PowerShell.  PowerShell 4.0 and Windows Server 2012 R2 add a lot of new cmdlets that can automate things like disk setup.

Orchestrating Exchange with #vCO

Microsoft Exchange is a system that is ideally suited for automation.  It’s in almost every environment.  It has its own add-on to PowerShell that makes it easy to write scripts to handle tasks.  And most of the tasks that administrators perform after setup are rote tasks that are easily automated, such as setting up mailboxes and adding IP addresses to a receive connector.

Why vCenter Orchestrator?

Exchange already contains a robust automation platform with the PowerShell-based Exchange Management Shell.  This platform makes it easy to automate tasks through scripting.  But no matter how well these scripts are written, executing command line tasks can be error-prone if the end users of the scripts aren’t comfortable with a command line.  You may also want to limit input or provide a user-friendly interface to kicking off the script.

So what does that have to do with vCenter Orchestrator?  Orchestrator is an extensible workflow automation tool released by VMware and included with the vCenter Server license.   It supports Windows Remote Management and PowerShell through a plugin.

Start By Building a Jump Box/Scripting Server

Before we jump into configuring Orchestrator to talk to Exchange, we’ll need a Windows Server that we can configure to execute the scripts that Orchestrator will call.  This server should run Windows Server 2008 R2 at a minimum, and you should avoid Server 2012 R2 because the Exchange 2010 PowerShell cmdlets are not compatible with PowerShell 4.0. 

You will need to install the Exchange management tools on this server, and I would recommend a PowerShell IDE such as PowerGUI or Idera PowerShell Pro to aid in troubleshooting and testing.

Orchestrator and Exchange

As I mentioned above, Orchestrator can be used with PowerShell through a plugin.  This plugin uses WinRM to connect to a Windows Server instance to execute PowerShell commands and scripts.   In order to use this plugin, Orchestrator needs to be configured to support Kerberos authentication.

When I was testing out this combination, I was not able to get the Exchange Management Shell to load properly when using WinRM.  I think the issue has to do with Kerberos authentication and WinRM.

When you use WinRM, you’re remoting into another system using PowerShell.  In some ways, it is like Microsoft’s version of SSH – you’re logging into the system and working from a command line. 

The Exchange cmdlets add another hop in that process.  When you’re using the Exchange cmdlets, you’re executing those commands on one of your Exchange servers through a web service.  Unfortunately, Kerberos does not work well with multiple hops, so another way to access the remote server is needed.

Another Option is Needed

So if WinRM and the Orchestrator PowerShell plugin don’t work, how can you manage Exchange with Orchestrator?  The answer is using the same remote access technology that is used for network hardware and Unix – SSH.

Since Exchange is Active Directory integrated, we’ll need an SSH server that runs on Windows, is compatible with PowerShell, and most importantly, supports Active Directory authentication.   There are a couple of options that fit here  such as the paid version of Bitvise, FreeSSHd, and nSoftware’s PowerShell Server.

There is one other catch, though.  Orchestrator has a built-in SSH plugin to support automating tasks over SSH.  However, this plugin does not support cached credentials, and it runs under whatever credentials the workflow is launched under.  One of the reasons that I initially looked at Orchestrator for managing Exchange was to be able to delegate certain tasks to the help desk without having to grant them additional rights on any systems. 

This leaves one option – PowerShell Server.  PowerShell Server has an Orchestrator Plugin that can use a shared credential that is stored in the workflow.  It is limited in some key ways, though, mainly that the plugin doesn’t process output from PowerShell.  Getting information out will require sending emails from PowerShell.
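Since the plugin discards pipeline output, results have to leave the script some other way.  A minimal sketch of the email approach, with placeholder server names and addresses:

```powershell
# Sketch: email the results out of the script, since the plugin drops output
Send-MailMessage -SmtpServer "mail.lab.local" -From "scripts@lab.local" `
    -To "helpdesk@lab.local" -Subject "Mailbox created for $Username" `
    -Body ($result | Out-String)   # flatten the result object into readable text
```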

You will need to install PowerShell Server onto your scripting box and configure it for interactive sessions.

PowerShell Server Settings

Configuring the Exchange Management Shell for PowerShell Server

PowerShell Server supports the Exchange Management shell, but in a limited capacity.  The method that their support page recommends breaks a few cmdlets, and I ran into issues with the commands for configuring resource mailboxes and working with ActiveSync devices. 

One other method for launching the Exchange Management Shell from within your PowerShell SSH session is by using the following commands:

. 'C:\Program Files\Microsoft\Exchange Server\V14\bin\RemoteExchange.ps1'
Connect-ExchangeServer -auto

If you try that, though, you will receive an error that the screen size could not be changed.  This is due to the commands that run when the Exchange Management Shell loads – it resizes the PowerShell console window and prints a lot of text on the screen.
 
The screen size change is controlled by a function in the RemoteExchange.ps1 script.  This file is located in the Exchange Install Directory\v14\Bin.  You need to open this file and comment out line 34.  This line calls a function that widens the window when the Exchange Management shell is loaded.  Once you’ve commented out this line, you need to save the modified file with a new file name in the same folder as the original.
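If you want to script that edit instead of making it by hand, a small sketch like the following works.  The line number comes from the text above; verify it against your Exchange version before relying on it:

```powershell
# Sketch: comment out the window-resize call and save a modified copy
$src = 'C:\Program Files\Microsoft\Exchange Server\V14\bin\RemoteExchange.ps1'
$lines = Get-Content $src
$lines[33] = '# ' + $lines[33]   # line 34 is index 33 in the array
$lines | Set-Content 'C:\Program Files\Microsoft\Exchange Server\V14\bin\RemoteExchange-Modified.ps1'
```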
 
Edit RemoteExchangePS1
 
In order to use this in a PowerShell script with Orchestrator, you will need to add it to each script or into the PowerShell profile for the account that will be executing the script.  The example that I use in my workflows looks like this:
 

. 'C:\Program Files\Microsoft\Exchange Server\V14\bin\RemoteExchange-Modified.ps1'
Connect-ExchangeServer -auto

Note: It may be possible to use the method outlined by Derek Schauland in this TechRepublic article in place of modifying the EMS script.  However, I have not tested this technique with Orchestrator.

Putting It All Together

Earlier this month, I talked about this topic on vBrownbag, and I demonstrated two examples of this code in action.  You can watch it here.

One of the examples that I demonstrated during that vBrownbag talk was an employee termination workflow.  I had a request for that workflow and the scripts that the workflow called, so I posted them out on my github site.  The Terminate-DeactivateEmail.ps1 script that is found in the github repository is a working example. 

Where I Go Spelunking into the Horizon View LDAP Database–Part 2

In Part 1 of this series, I shared some of the resources that are currently available in the greater VMware View community that work directly with the View LDAP database.  Overall, there are some great things being done with these scripts, but they barely scratch the surface of what is in the LDAP database.

Connecting to the View LDAP Database

Connecting to the View LDAP database has been covered a few times, and VMware has a knowledge base article that covers the steps to use ADSI Edit on Windows Server. 

Any scripting language with an LDAP provider can also access the database.  Although they’re not View specific, there are a number of resources for using scripting languages, such as PowerShell or Python, with an LDAP database.
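As a hedged example, here is how a PowerShell session can bind to the View LDAP database with ADSI.  The connection server name is a placeholder, and the object class and attribute names are assumptions from poking around my lab:

```powershell
# Sketch: bind to the View LDAP database (the View DN is dc=vdi,dc=vmware,dc=int)
$root = [ADSI]"LDAP://connectionserver.lab.local:389/DC=vdi,DC=vmware,DC=int"

$searcher = New-Object System.DirectoryServices.DirectorySearcher($root)
$searcher.Filter = "(objectClass=pae-VM)"   # assumed class for desktop objects under OU=Servers

# Print the display name of each matching object
$searcher.FindAll() | ForEach-Object { $_.Properties["pae-displayname"] }
```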

Top-Level LDAP Organizational Units

LDAP OUs

Like Active Directory or any other LDAP database, there are a number of top-level OUs where all the objects are stored.  Unlike many LDAP databases, though, the naming of these OUs doesn’t make it easy to navigate and find the objects that you’re looking for.

The OUs that are in the View LDAP Database are:

  • Applications – Pool, Application, and ThinApp settings
  • Data Disks – Persistent Desktop Data Disks
  • Hosts – ?? Possibly Terminal Server or Manual Pool members
  • Groups – View Folders and Security Groups/Roles
  • ForeignSecurityPrincipals – Active Directory SIDs used with View
  • Packages – ?? Possibly ThinApp repositories or packages
  • People – ??
  • Policies – Various system properties stored in child container attributes
  • Properties – VDM properties; a child OU contains event strings
  • Roles – Built-in security?
  • Servers – Desktops
  • Server Groups – Desktop Pools

You may notice that a few of the OUs have question marks under their purpose.  I wasn’t able to figure out what those OUs were used for based on how I had set up my home lab.  I normally don’t work with Terminal Server or Manual pools or ThinApp, and I suspect that the OUs that aren’t defined relate to those areas.

This series is going to continue at a slower pace over the next couple of months as I shift the focus to writing scripts against the LDAP database.

Exchange Restores and PowerShell Scripting Games

In my last post, I posted a script that I use to back up my Exchange 2010 test environment using PowerShell and Windows Server Backup.  But what if I need to do a restore?

Well, the good people over at ExchangeServerPro.com have a good step-by-step walkthrough of how to restore an individual mailbox that covers restoring from WSB, rolling the mailbox forward, and recovering data.

If you’re interested in how a restore would work, check out the article.

PowerShell Scripting Games

Microsoft’s annual Scripting Games started on Monday.  Unlike previous years, the competition is limited to the PowerShell scripting language this year.  A beginner and an advanced scripting challenge are posted each day, and you have seven days to submit a solution to each problem.

You can find the challenges and scripting tips on the Hey! Scripting Guy blog.  The official rules also include a link to the registration page.

If you’re looking to learn about PowerShell or just challenge yourself with a scripting problem, you might want to check this out.

Scripting Exchange 2010 Backups on Windows Server 2008R2 using PowerShell and Windows Backup Service

I’ve struggled with backing up my Exchange 2010 SP1 environment in my home lab since I upgraded over a month ago.  Before the upgrade, I was using a script that did Volume Shadow Copy Service (VSS) backups.

After upgrading, I wanted to cut my teeth with Windows Server Backup (WBS).  Windows Server Backup is the replacement for the NTBackup program that was included with versions of Windows prior to Vista, and it uses VSS to take snapshot backups of entire volumes or file systems.

Unlike NTBackup, WBS will not run backup jobs to tape.  You will need to dedicate an entire volume or use a network folder to store your backups.  If you use the GUI, you can only retain one backup set, and a new backup will overwrite the old.

This was an issue for me.  Even though I have Exchange configured to retain deleted items for 14 days and deleted mailboxes for 30 days, I like to keep multiple backups.  It allows me to play with multiple recovery scenarios that I might face in the real world.

And that is where PowerShell comes in.  Server 2008 R2 allows you to create a temporary backup policy and pass that policy to the Windows Backup Service.  This also lets you change the folder where the backup is saved each time, and you can easily add or remove volumes, LUNs, and databases without having to reconfigure your backup job each time.

I started by working from the script by Michael Smith that I linked to above.  To make this script work with WBS, I first had to modify it to work with Exchange 2010.  One of the major differences between Exchange 2007 and Exchange 2010 is that storage groups have been removed in the latter.  Logging and other storage group functions have been rolled into the database, making databases self-contained.

The original script used the Get-StorageGroup PowerShell command to get the location of each storage group’s log files.  Since this command is no longer present, I had to add sections of this function to the function that retrieved the location of the database files.

After adding some error handling by using Try/Catch, the section that locates mailbox databases looks like:

Try
{
    foreach ($mdb in $colMB)
    {
        if ($mdb.Recovery)
        {
            Write-Host ("Skipping RECOVERY MDB " + $mdb.Name)
            continue
        }
        Write-Host ($mdb.Name + "`t " + $mdb.Guid)
        Write-Host ("`t" + $mdb.EdbFilePath)
        Write-Host " "

        $pathPattern.($mdb.EdbFilePath) = $i

        ## The first two characters of the path give the volume (e.g. "E:")
        $vol = $mdb.EdbFilePath.ToString().SubString(0, 2)
        $volumes.Set_Item($vol, $i)

        ## This section gets the log file information for the backup
        $prefix  = $mdb.LogFilePrefix
        $logpath = $mdb.LogFolderPath.ToString()

        ## E00*.log
        $pathPattern.(Join-Path $logpath ($prefix + "*.log")) = $i

        $vol = $logpath.SubString(0, 2)
        $volumes.Set_Item($vol, $i)

        $i += 1
    }
}
Catch
{
    Write-Host "There are no Mailbox Databases on this server."
}

I also removed all of the functions related to building and calling the Disk Shadow and RoboCopy commands.  Since we will be using WBS, there is no need to manually trigger a VSS backup.

Once we know where our mailbox and public folder databases and their log files are located, we can start to build our temporary backup job.  The first thing we need to do is create a new backup policy called $BPol by using the New-WBPolicy cmdlet.

##Create New Backup Policy for Windows Server Backup
$BPol = New-WBPolicy

Once we have created our backup policy, we add the drives that we want to back up.  We can tell Windows Server Backup which drives we want to back up by using the drives and folder paths that we retrieved from Exchange using the code above.  We use the Get-WBVolume cmdlet to get the disk or volume information and the Add-WBVolume cmdlet to add it to the backup job.

##Define volumes to be backed up based on Exchange filepath information
##Retrieved in function GetStores

ForEach ($bvol in $volumes.Keys)
{
    $WBVol = Get-WBVolume -VolumePath $bvol
    Add-WBVolume -Policy $BPol -Volume $WBVol
}

The Add-WBVolume doesn’t overwrite previous values, so I can easily add multiple drives to my backup job.

Now that my backup locations have been added, I need to tell WBS that this will be a VSS Full Backup instead of a VSS Copy Backup.  I want to run a full backup because this will commit information in the log files to the database and truncate old logs.  The command to set the backup job to a full backup is:

Set-WBVssBackupOptions -Policy $BPol -VssFullBackup

Finally, I need to set my backup target.  This script is designed to back up to a network share.  Since I want to retain multiple backups, it will also create a new folder to store the backup at runtime.  I created a function called AddWBTarget to handle this part of the job.

Function AddWBTarget
{
    ## Create a new folder for the backup in $backupLocation using a date format
    $folder = Get-Date -uFormat "%Y-%m-%d-%H-%M"
    md "$backupLocation\$folder"
    $netFolder = "$backupLocation\$folder"

    $netTarget = New-WBBackupTarget -NetworkPath "$netFolder"
    Add-WBBackupTarget -Policy $BPol -Target $netTarget
}

The backup location needs to be a UNC path to a network folder, and you set it when you run the script with the -BackupLocation parameter.  The function creates a new folder and then adds this location to the backup job using the Add-WBBackupTarget cmdlet.

The documentation for Add-WBBackupTarget states that you need to provide user credentials to back up to a network location.  This does not appear to be the case; WBS appears to use the credentials of the user running the script to access the backup location.
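If your environment does require explicit credentials for the share, New-WBBackupTarget accepts a -Credential parameter.  Here is a hedged sketch; the share path is only an example.

## Optional: pass explicit credentials for the network share.
## The share path below is only an example.
$cred = Get-Credential
$netTarget = New-WBBackupTarget -NetworkPath "\\server\backups" -Credential $cred
Add-WBBackupTarget -Policy $BPol -Target $netTarget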

WBS now has all of the information that it needs to perform a backup, so I will pass the temporary backup policy to WBS using the Start-WBBackup cmdlet with the -Policy parameter.
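That final call looks like this:

## Start the backup using the temporary policy built above
Start-WBBackup -Policy $BPol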

You can run the script manually by running EX2k10WBS.ps1 from your Exchange 2010 server.  You will need to declare your backup location by using the -BackupLocation parameter.  Since this script performs a backup, you will need to run PowerShell with elevated permissions.

You can also set this script to run as a scheduled task.
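One way to do that on Server 2008 R2 is with schtasks.exe, which can be run from a PowerShell prompt.  The task name, script path, share, and start time below are all examples; adjust them for your environment, and create the task from an elevated prompt.

## One possible scheduled task (example path, share, and time; adjust to taste)
schtasks /Create /TN "Exchange 2010 WBS Backup" /SC DAILY /ST 01:00 /RL HIGHEST /TR "powershell.exe -NoProfile -File C:\Scripts\EX2k10WBS.ps1 -BackupLocation \\nas\ExchBackups"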

You can download the entire script here.