How I Automated Minecraft Server Builds

If you have kids that are old enough to game on any sort of device with a screen, you’ve probably been asked about virtual Lego kits. And I don’t mean the various branded video games like LEGO Worlds or the LEGO Star Wars games. No, I’m talking about something far more addictive – Minecraft.

My kids are Minecraft fanatics. They could play for hours on end while creative how-tos and “Let’s Play” YouTube videos loop non-stop in the background. And they claim they want to play Minecraft together, although that’s more theory than actual practice in the end. They also like to experiment and try to build the different things they see on YouTube. They wanted multiple worlds to use as playgrounds for their different ideas.

And they even got me to play a few times.

So during the summer of 2020, I started looking into how I could build Minecraft server appliances. I had built a few Minecraft servers by hand before that, but they were difficult to maintain and keep up-to-date with Minecraft Server dot releases and general operating system maintenance.

I thought a virtual appliance would be the best way to do this, and this is my opinionated way of building a Minecraft server.

TL;DR: Here is the link to my GitHub repo with the Packer files and scripts that I use.

A little bit of history

The initial version of the virtual appliance was built on Photon. Photon is a stripped down version of Linux created by my employer for virtual appliances and running container workloads. William Lam has some great content on how to create a Photon-based virtual appliance using Packer.

This setup worked pretty well until Minecraft released version 1.17, also known as the Caves and Cliffs version, in the summer of 2021.

There are a couple of versions of Minecraft. The two main ones are Bedrock, which is geared towards Windows, mobile devices, and video game consoles, and Java, which runs on Windows, Linux, and macOS.

My kids play the Java edition, and up until this point, Minecraft Java edition servers ran on the Java 8 JDK. Minecraft 1.17, however, required the Java 16 JDK. And that led to a second problem. The only JDK in the Photon repositories at the time was for Java 8.

Now this doesn't seem like a problem, or at least it isn't on a small scale. There are a few open-source OpenJDK implementations that I could adopt. I ended up going with Adoptium's Temurin OpenJDK. But after building a server or two, I didn't really feel like maintaining a manual install process. I wanted the ease of use that comes with installing and updating from a package repository, and that wasn't available for Photon.

So I needed a different Linux distribution. CentOS would have been my first choice, but I didn’t want something that was basically a rolling release candidate. My colleague Timo Sugliani spoke very highly of Debian, and he released a set of Packer templates for building lightweight Debian virtual appliances on GitHub. I modified these templates to use the Packer vSphere-ISO plugin and started porting over my appliance build process.

Customizing the Minecraft Experience

Do you want a flat world or something without mob spawns? Or do you want to try out a custom world seed? You can set all of that during the appliance deployment. I wanted the appliance to be self-configuring, so I spent some time extending William Lam's OVF properties XML file to include all of the Minecraft server attributes that you can configure in the server.properties file. This allows you to deploy the appliance and configure the Minecraft environment without having to SSH into it and manually edit the file.

One day, I may trust my kids enough to give them limited access to vCenter to deploy their own servers. This would make it easier for them.

Unfortunately, that day is not today. But this still makes my life easier.

Installing and Configuring Minecraft

The OVF file does not contain the Minecraft server binaries. The server actually gets installed during the appliance's first boot. There are a few reasons for this. First, the Minecraft EULA does not allow you to distribute the binaries. At least that was my understanding of it.

Second, and more importantly, you may not always want the latest and greatest server version, especially if you're planning to develop or use mods. Mods are often developed against specific Minecraft versions and come with lengthy interoperability charts.

The appliance is not built to utilize mods out of the box, but there is nothing stopping someone from installing Forge, Fabric, or other modified binaries. I just don’t feel like taking on that level of effort, and my kids have so far resisted learning important life skills like the Bash CLI.

And finally, there isn’t much difference between downloading and installing the server binary on first boot and downloading and installing an updated binary. Minecraft Java edition is distributed as a JAR file, so I only really need to download it and place it in the correct folder.

I have a pair of PowerShell scripts that make these processes pretty easy. Both scripts have the same core function – query an online version manifest that is used by the Minecraft client and download the specified version to the local machine. The update script also has some extra logic in it to check if the service is running and gracefully stop it before downloading the updated server.jar file.
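
To make the mechanics concrete, here is a minimal sketch of that version-manifest logic. It is not the exact script from the repo; it assumes Mojang's public launcher manifest at launchermeta.mojang.com, and the parameter names and destination path are illustrative.

# Minimal sketch of the manifest lookup, not the exact script from the repo.
Param(
    $Version = "",                          # e.g. "1.18.1"; empty means latest release
    $Destination = "/opt/Minecraft/bin"     # folder used by the appliance
)

# The launcher's public version manifest lists every version with a URL to a
# per-version JSON document that contains the server.jar download link.
$manifest = Invoke-RestMethod -Uri "https://launchermeta.mojang.com/mc/game/version_manifest.json"

if (-not $Version) { $Version = $manifest.latest.release }
$entry = $manifest.versions | Where-Object { $_.id -eq $Version }
if (-not $entry) { throw "Version $Version was not found in the manifest." }

# The per-version document exposes the server download under downloads.server.url.
$details = Invoke-RestMethod -Uri $entry.url
Invoke-WebRequest -Uri $details.downloads.server.url -OutFile (Join-Path $Destination "server.jar")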

You can find these scripts in the files directory on GitHub.

Running Minecraft as a systemd Service

Finally, I didn’t want to have to deal with manually starting or restarting the Minecraft service. So I Googled around, and I found a bunch of systemd sample files. I did a lot of testing with these samples (and I apologize, I did not keep track of the links I used when creating my service file) to cobble together one of my own.

My service file has an external dependency. The MCRCON tool is required to shut down the service. While I was testing this, I ran into a number of issues where I could stop Minecraft, but it wouldn’t kill the Java process that spawned with it. It also didn’t guarantee that the world was properly saved or that users were alerted to the shutdown.

By using MCRCON, we can alert users to the shutdown, save the world, and gracefully exit all of the processes through a server shutdown command.

I also have the Minecraft service set to restart on failure. My kids have a tendency to crash the server by blowing up large stacks of TNT in a cave or other crazy things they see on YouTube, and that tends to crash the binary. This saves me a little headache by restarting the process.

Prerequisites

Before we begin, you'll need a few prerequisites. These are:

  • The latest copy of HashiCorp’s Packer tool installed on your build machine
  • The latest copy of the Debian 11 NetInstall ISO
  • OVFTool

There are a few files that you should edit to match your environment before you attempt the build process. These are:

  • Debian.auto.pkrvars.hcl – variables for the build process
  • debian-minecraft.pkr.hcl – the iso_paths line includes part of a hard-coded path that may not reflect your environment, and you may want to change the CPUs or RAM allocated to the VM
  • preseed.cfg (located in the HTTP folder) – localization information and the root password

This build process uses the Packer vsphere-iso build process, so it talks to vCenter. It does not use the older vmware-iso build process.

The Appliance Build Process

As I mentioned above, I use Packer to orchestrate this build process. There is a Linux shell script in the public GitHub repo called build.sh that will kick off this build process.

The first step, obviously, is to install Debian. This step is fully automated and controlled by the preseed.cfg file that is referenced in the packer file.

Once Debian is installed, we copy over a default Bash configuration and our init-script that will run when the appliance boots for the first time to configure the hostname and networking stack.

After these files are copied over, the Packer build begins to configure the appliance. The steps that it takes are:

  • Run apt-get update && apt-get upgrade to upgrade any outdated installed packages
  • Install our system utilities, including UFW
  • Configure UFW to allow SSH and enable it
  • Install VMware Tools
  • Set up the package repositories for PowerShell and the Temurin OpenJDK and install both
  • Configure the rc.local file that runs on first boot
  • Disable IPv6 because Java will default to communicating over IPv6 if it is enabled

After this, we do our basic Minecraft setup. This step does the following:

  • Creates our Minecraft service user and group
  • Sets up our basic folder structure in /opt/Minecraft
  • Downloads MCRCON into the /opt/Minecraft/tools/mcrcon directory
  • Copies over the service file and scripts that will run on first boot

The last three steps of the build are to run a cleanup script, export the appliance to OVF, and create the OVA file with the configurable OVF properties. The cleanup script cleans out the local apt cache and log files and zeroes out the free space to reduce the size of the disks on export.

The configurable OVF properties include all of the networking settings, the root password and SSH key, and, as mentioned above, the configurable options in the Minecraft server.properties file. OVFTool and William Lam’s script are required to create the OVA file and inject the OVF properties, and the process is outlined in this blog post.

The XML file with the OVF Properties is located in the postprocess-ova-properties folder in my GitHub repo.

The outcome of this process is a ready-to-deploy OVA file that can be uploaded to a content library.

First Boot

So what happens after you deploy the appliance and boot it for the first time?

First, the debian-init.py script will run to configure the basic system identity. This includes the IP address and network settings, root password, and SSH public key for passwordless login.

Second, we will regenerate the host SSH keys so each appliance will have a unique key. If we don’t do this step, every appliance we deploy will have the same SSH host keys as the original template. This is handled by the debian-regeneratesshkeys.sh script that is based on various scripts that I found on other sites.

Our third step is to install and configure the Minecraft server using the debian-minecraftinstall.sh script. This has a number of sub-steps. These are:

  • Retrieve our Minecraft-specific OVF Properties
  • Call our PowerShell script to download the correct Minecraft server version to /opt/Minecraft/bin
  • Initialize the Minecraft server to create all of the required folders and files
  • Edit eula.txt to accept the EULA. The server will not run and let users connect without this step
  • Edit the server.properties file and replace any default values with the OVFProperties values
  • Edit the systemd file and configure the firewall to use the Minecraft and RCON ports
  • Reset permissions and ownership on the /opt/Minecraft folders
  • Enable and start Minecraft
  • Configure our Cron job to automatically install system and Minecraft service updates

The end result is a ready-to-play Minecraft VM.

All of the Packer files and scripts are available in my GitHub repository. Feel free to check it out and adapt it to your needs.

Minimal Touch VDI Image Building With MDT, PowerCLI, and Chocolatey

Recently, Mark Brookfield posted a three-part series on the process he uses for building Windows 10 images in HobbitCloud (Part 1, Part 2, Part 3). Mark has put together a great series of posts that explain the tools and the processes that he is using in his lab, and it has inspired me to revisit this topic and talk about the process and tooling I currently use in my lab and the requirements and decisions that influenced this design.

Why Automate Image Building?

Hand-building images is a time-intensive process.  It is also potentially error-prone as it is easy to forget applications and specific configuration items, requiring additional work or even building new images depending on the steps that were missed.  Incremental changes that are made to templates may not make it into the image building documentation, requiring additional work to update the image after it has been deployed.

Automation helps solve these challenges and provide consistent results.  Once the process is nailed down, you can expect consistent results on every build.  If you need to make incremental changes to the image, you can add them into your build sequence so they aren’t forgotten when building the next image.

Tools in My Build Process

When I started researching my image build process back in 2017, I was looking to find a way to save time and provide consistent results on each build.  I wanted a tool that would allow me to build images with little interaction with the process on my part.  But it also needed to fit into my lab.  The main tools I looked at were Packer with the JetBrains vSphere plugin and Microsoft Deployment Toolkit (MDT).

While Packer is an incredible tool, I ended up selecting MDT as the main tool in my process.  My reason for selecting MDT has to do with NVIDIA GRID.  The vSphere Plugin for Packer does not currently support provisioning machines with vGPU, so using this tool would have required manual post-deployment work.

One nice feature of MDT is that it can utilize a SQL Server database for storing details about registered machines such as the computer name, the OU where the computer object should be placed, and the task sequence to run when booting into MDT.  This allows a new machine to be provisioned in a zero-touch fashion, and the database can be populated from PowerShell.

Unlike Packer, which can create and configure the virtual machine in vCenter, MDT only handles the operating system deployment.  So I needed some way to create and configure the VM in vCenter with a vGPU profile.  The best method of doing this is using PowerCLI.  While there are no native cmdlets for managing vGPUs or other shared PCI objects in PowerCLI, there are ways to utilize vSphere extension data to add a vGPU profile to a VM.
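
For reference, here is a minimal sketch of that extension-data approach. It assumes a powered-off VM and an existing PowerCLI connection, and the VM name and GRID profile string ("grid_p4-2q") are purely illustrative; the exact profile names depend on the GPU and GRID release.

# Minimal sketch: attach a vGPU profile through the vSphere API via ExtensionData.
# The VM name and profile string below are illustrative, not from the original post.
$vm = Get-VM -Name "Win10-GPU-01"

$spec = New-Object VMware.Vim.VirtualMachineConfigSpec
$deviceChange = New-Object VMware.Vim.VirtualDeviceConfigSpec
$deviceChange.Operation = "add"

# A vGPU appears to the guest as a shared PCI device backed by a GRID (vmiop) profile.
$device = New-Object VMware.Vim.VirtualPCIPassthrough
$backing = New-Object VMware.Vim.VirtualPCIPassthroughVmiopBackingInfo
$backing.Vgpu = "grid_p4-2q"
$device.Backing = $backing
$deviceChange.Device = $device

$spec.DeviceChange = @($deviceChange)
$vm.ExtensionData.ReconfigVM($spec)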

While MDT can install applications as part of a task sequence, I wanted something a little more flexible.  Typically, when a new version of an application is added, the way I had structured my task sequences required them to be updated to utilize the newer version.  The reason for this is that I wasn’t using Application Groups for certain applications that were going into the image, mainly the agents that were being installed, as I wanted to control the install order and manage reboots. (Yes…I may have been using this wrong…)

I wanted to reduce my operational overhead when applications were updated so I went looking for alternatives.  I ended up settling on using Chocolatey to install most of the applications in my images, with applications being hosted in a private repository running on the free edition of ProGet.
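
As a rough illustration of what that looks like from a PowerShell task sequence step, the commands below register a private feed and install packages from it. The feed URL and package names are hypothetical and only show the pattern; the choco syntax itself is standard.

# Hypothetical internal ProGet feed and package names, shown only to illustrate the pattern.
choco source add -n=proget -s="https://proget.lab.local/nuget/chocolatey/"
choco install notepadplusplus 7zip -y --source=proget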

My Build Process Workflow

My build workflow consists of the following steps, with one branch:

  1. Create a new VM in vCenter
  2. Configure VM options such as memory reservations and video RAM
  3. GPU Flag Only – Add a virtual GPU with the correct profile to the VM.
  4. Identify the task sequence that will be used.  There are different task sequences for GPU and non-GPU machines, and the script builds the task sequence name from parameters passed in at runtime.
  5. Create a new computer entry in the MDT database.  This includes the computer name, MAC address, task sequence name, role, and a few other variables.  This step is performed in PowerShell using the MDTDB PowerShell module (see the sketch after this list).
  6. Power on the VM. This is done using PowerCLI. The VM will PXE boot to a Windows PE environment configured to point to my MDT server.
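
Here is the sketch referenced in step 5: a minimal example of registering a machine in the MDT database. It assumes the MDTDB module's Connect-MDTDatabase and New-MDTComputer functions, and the module path, server, database, MAC address, and setting values are all illustrative.

# Minimal sketch of registering a machine in the MDT database (values are illustrative).
Import-Module C:\Scripts\MDTDB.psm1

# Connect to the deployment share's SQL database.
Connect-MDTDatabase -sqlServer "MDT01" -database "MDTDB"

# Create the computer record that MDT will match on MAC address at PXE boot.
New-MDTComputer -macAddress "00:50:56:AA:BB:CC" -description "Win10-GPU-01" -settings @{
    OSDComputerName = "Win10-GPU-01"
    TaskSequenceID  = "W10-GPU"
}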

Build Process

After the VM is powered on and boots to Windows PE, the rest of the process is hands off. All of the MDT prompts, such as the prompt for a computer name or the task sequence, are disabled, and the install process relies on the database for things like computer name and task sequence.

From this point forward, it takes about forty-five minutes to an hour to complete the task sequence. MDT installs Windows 10 and any drivers like the VMXNET3 driver, installs Windows Updates from an internal WSUS server, installs any agents or applications, such as VMware Tools, the Horizon Agent, and the UEM DEM agent, silently runs the OSOT tool, and stamps the registry with the image build date.

Future Direction and Process Enhancements

While this process works well today, it is a bit cumbersome. Each new Windows 10 release requires a new task sequence for version control. It is also difficult to work tools like the OSDeploy PowerShell scripts by David Segura (used for slipstreaming updates into a Windows 10 WIM) into the process. While there are ways to automate MDT, I'd rather invest time in automating builds using Packer.

There are a couple of post-deployment steps that I would like to integrate into my build process as well. I would like to utilize Pester to validate the image build after it completes, and then if it passes, execute a shutdown and VM snapshot (or conversion to template) so it is ready to be consumed by Horizon. My plan is to utilize a tool like Jenkins to orchestrate the build pipeline and do something similar to the process that Mark Brookfield has laid out.

The ideal process that I am working towards will have multiple workflows to manage various aspects of the process. Some of these are:

1. A process for automatically creating updated Windows 10 ISOs with the latest Windows Updates using the OSDeploy PowerShell module.

2. A process for creating Chocolatey package updates and submitting them to my ProGet repository for applications managed by Chocolatey.

3. A process to build new images when Windows 10 or key applications (such as VMware Tools, the Horizon Agent, or NVIDIA Drivers) are updated. This process will ideally use Packer as the build tool to simplify management. The main dependency for this step is adding NVIDIA GRID support for the JetBrains Packer vSphere Plug-in.

So this is what I’m doing for image builds in my lab, and the direction I’m planning to go.

Horizon EUC Access Point Configuration Script

Horizon 6.2 included a new feature when it was launched in early September – the EUC Access Gateway.  This product is a hardened Linux appliance that has all of the features of the Security Server without the drawbacks of having to deploy Windows Servers into your DMZ.  It will also eventually support Horizon Workspace/VMware Identity Manager.

This new Horizon component embodies the “cattle philosophy” of virtual machine management.  If it stops working properly, or a new version comes out, it is meant to be disposed of and redeployed.  To facilitate this, the appliance is configured and managed using a REST API.

Unfortunately, working with this REST API isn’t exactly user friendly, especially if you’re only deploying one or two of these appliances.  This API is also the only way to manage the appliance, and it does not have a VAMI interface or SSH access.

I’ve put together a PowerShell script that simplifies and automates the configuration of the EUC Access Gateway appliances.  You can download the script from my GitHub site.

The script has the following functions:

  • Get the appliance’s current Horizon View configuration
  • Set the appliance’s Horizon View configuration
  • Download the log bundle for troubleshooting

There are also placeholder parameters for configuring vIDM (which will be supported in future releases) and uploading SSL certificates.

The syntax for this script’s main features looks like this:

Set-EUCGateway -appliancename 10.1.1.2 -adminpassword P@ssw0rd -GetViewConfig

Set-EUCGateway -appliancename 10.1.1.2 -adminpassword P@ssw0rd -SetViewConfig -ViewEnablePCoIP -ViewPCoIPExternalIP 10.1.1.3 -ViewDisableBlast

Set-EUCGateway -appliancename 10.1.1.2 -adminpassword P@ssw0rd -GetLogBundle -LogBundleFolder c:\temp

If you have any issues deploying a config, use the script to download a log bundle and open the admin.log file.  This file will tell you what configuration element was rejected.

I want to point out one troubleshooting note that my testers and I both experienced when developing this script.  The REST API does not work until an admin password is set on the appliance.  One thing we discovered is that there were times when the password would not be set despite one being provided during the deployment.  If this happens, the script will fail when you try to get a config, set a config, or download the log bundle.

When this happens, you either need to delete the appliance and redeploy it or log into the appliance through the vSphere console and manually set the admin password.

Finally, I’d like to thank Andrew Morgan and Jarian Gibson for helping test this script and providing feedback that greatly improved the final product.

Deploying SQL Server Using WinRM #VDM30in30

A couple of weeks ago, I shared the scripts I use to prepare a brand new VM to run SQL Server.  At the time, I noted that I could only get up to the point where the server was ready for SQL and that I was unable to overcome some issues with installing SQL Server remotely.

I have finally found a way to get around those issues, and I’ve put together a script for remotely deploying SQL Server using PowerShell and WinRM.  This script is designed to be used as part of a workflow in vCenter Orchestrator that will allow admins and, eventually, developers to provision their own SQL Server instances.

There are two scripts that are required for this process, and they borrow techniques from the other provisioning scripts that I’ve written.  I would highly recommend reading my previous articles on provisioning a VM and preparing a VM for SQL Server before trying out these scripts.

The first script is the Install-SQLServer script that should be included with the SQL Server files that are copied to the new server.  This is the script that will run on the local machine and install SQL.

The other script is the script that will run on your jump box or scripting server called Invoke-SQLInstall.  In my environment, this script is executed on my scripting server by vCenter Orchestrator and invokes the SQL Server installation using WinRM on my new SQL Server.

Kerberos and SQL Server Installations

While I was building this script, I ran into a lot of issues when trying to get SQL Server to install remotely using WinRM.  The install would fail, and the setup log would point to an error that the account or the computer wasn’t trusted for delegation.

The Kerberos “Second Hop” issue was causing the install to fail, and most of the workarounds for getting around this issue, such as using Start-Process to launch a new PowerShell session or using a local batch file to install under other credentials, would not work inside of a WinRM session.

There is one other option that I had considered, but I didn’t pursue it at the time because I thought it was a security risk.

CredSSP

Microsoft introduced a new security delegation method years back to work around some of the limitations of Kerberos.  This new security delegation method, called the Credential Security Support Provider, or CredSSP for short, was designed specifically to address the Kerberos second hop issue.

The issue with CredSSP is that it can be configured to delegate credentials to any computer on the domain through group policy.  We don’t want or need that, and credentials should only be delegated to the computer that we’re working on for the short time that it will need them.

It is actually fairly easy to configure CredSSP on the SQL Server at runtime and to turn it off when we’re done, and the script will take care of both tasks when installing SQL Server.
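
Here is a minimal sketch of that enable/disable pattern, assuming the install runs from a jump box. The server name, script path, and credential variable are placeholders, not the exact values from my scripts.

# Minimal sketch: scope CredSSP to a single target for the duration of the install.
$target = "SQL01.lab.local"   # placeholder FQDN for the new SQL Server

# On the jump box, allow delegating fresh credentials to this one computer only.
Enable-WSManCredSSP -Role Client -DelegateComputer $target -Force

# On the target, accept delegated credentials for the install.
Invoke-Command -ComputerName $target -ScriptBlock { Enable-WSManCredSSP -Role Server -Force }

# Run the installation in a CredSSP-authenticated session ($cred is a placeholder credential).
$session = New-PSSession -ComputerName $target -Credential $cred -Authentication Credssp
Invoke-Command -Session $session -FilePath C:\Scripts\Install-SQLServer.ps1

# Turn CredSSP back off on both ends once setup completes.
Invoke-Command -ComputerName $target -ScriptBlock { Disable-WSManCredSSP -Role Server }
Disable-WSManCredSSP -Role Client
Remove-PSSession $session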

Custom Roles

SQL Server has a number of roles and features that can be selected during installation.  Many of these roles have very specific functions and aren’t suited for general purpose database servers, and they shouldn’t be installed if you aren’t going to use them.

What makes this more complicated is that some features are instance specific, such as the database engine and Reporting Services, while others are not instance specific and only need to be installed once.

Since each instance and/or each SQL Server may need different features installed, the script was designed with roles in mind.  Each role is an element in a PowerShell switch statement that contains the SQL command-line installation string.  It may also contain other commands that might be needed, such as the Windows Firewall cmdlets to allow incoming connections to the SQL instance.

This design choice allows the script to be flexible and adapt to the changing needs of the business and the environment.
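
A stripped-down version of that switch might look like the sketch below. The role names, feature lists, paths, and variables are illustrative only; the setup.exe switches shown are standard SQL Server command-line options, but a real unattended install will need additional arguments.

# Illustrative sketch of a role-based switch; names and paths are placeholders.
switch ($Role)
{
    "DatabaseEngine"
    {
        # Instance-specific install of the database engine.
        E:\SQLInstall\setup.exe /QS /ACTION=Install /FEATURES=SQLENGINE `
            "/INSTANCENAME=$InstanceName" /IACCEPTSQLSERVERLICENSETERMS `
            "/SQLSYSADMINACCOUNTS=$SysAdminGroup"

        # Allow incoming connections to the default SQL port for this instance.
        New-NetFirewallRule -DisplayName "SQL Server $InstanceName" -Direction Inbound `
            -Protocol TCP -LocalPort 1433 -Action Allow
    }
    "ManagementTools"
    {
        # Shared feature that only needs to be installed once per server.
        E:\SQLInstall\setup.exe /QS /ACTION=Install /FEATURES=SSMS /IACCEPTSQLSERVERLICENSETERMS
    }
}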

Get the Scripts

The scripts are available on my Github page with the rest of my provisioning scripts.

Using PowerCLI to Prepare a VM for SQL Server

SQL Server is one of those applications where performance can be greatly impacted by the initial server configuration.  One of the big contributing factors to this is storage configuration.  If this isn’t optimized at the VM or the storage array level, performance will suffer, and an entire book has been dedicated to the subject.

SQL Server may seem ubiquitous in many environments because many applications require a SQL database for storing data.  And in many cases, a new application means deploying a new database server.

Because SQL can require a virtual machine setup that follows a different baseline, each new SQL Server will either need to be deployed from a custom template for SQL or hand-crafted from whatever base template it was deployed from.  If you want to keep your template count to a minimum but still avoid having to hand craft your SQL Servers, we need to turn to PowerCLI and PowerShell 4.0.

Windows Server 2012 R2 and PowerShell 4.0 introduced a number of new Cmdlets that will assist in preparing a brand new VM to run SQL Server.  These cmdlets handle storage and disk operations, and these new cmdlets will be instrumental in provisioning the additional storage resources that the server needs.

The code for this script is up on Github.

Standard SQL Server Configuration

When you virtualize SQL Server, there are a few best practices that should be followed to ensure good performance.  Therefore, we want to ensure that the script to prepare the server for SQL implements these best practices.  Most of these best practices relate to storage configuration and disk layout.

One of the other goals of this process is to ensure consistency.  All SQL Servers should be configured similarly, and drive letters, use of mount points, and installation paths should be the same on all SQL Servers to ease administrative overhead. 

I have a couple of preferences when deploying SQL in my environment.  Each instance will have two dedicated volumes – one for SQL data files and one for SQL logs.  I prefer to use mount points to store the data and log files for my databases and TempDB.  This allows me to keep drive letter assignments consistent across all database servers, and if I need to add an instance to a server, I don’t need to find free drive letters for the additional disks. 

I also like to include the VMDK file name in the volume label.  Drive numbers can change on VMs as drives are added and removed, so adding the VMDK file name to the volume label adds an additional value to check if you need to expand a disk or remove one from production. 

[Screenshot: disk labels that include the VMDK file name in the volume label]

Finally, I like to install SQL Server Management Studio prior to installing the database engine.  This gives me one less feature to worry about when configuring my instance deployments.

There are a number of things that this job will do when preparing a server to run SQL:

  1. Set CPU and memory reservations based on the currently assigned resources to guarantee performance (see the sketch after this list)
  2. Change the Storage Policy to automatically set all newly attached disks to Online so they can be configured.
  3. Create the following disk layout:
    1. E: – SQL Install location
    2. R: – SQL Data Volume
    3. S: – SQL Backup Volume
    4. T: – SQL Log Volume
  4. Copy SQL Installer files to E:\SQLInstall
  5. Create the following volumes as mount points, and attach them to PVSCSI storage controllers. 
    1. TEMPDB Database File Volume under R:
    2. TEMPDB Log File Volume under T:
  6. Add any SQL admin groups or database owners to the local administrator group
  7. Install SQL Server Management Studio
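
Here is the sketch referenced in step 1. It assumes an existing vCenter connection; the VM name and the per-core clock speed are placeholders you would adjust to your own hosts.

# Minimal sketch of setting the reservations (step 1 above); names and clock speed are placeholders.
$vm = Get-VM -Name "SQL01"

# Reserve all of the currently assigned CPU and memory so SQL never has to contend for resources.
$cpuMhz = $vm.NumCpu * 2300   # assumes 2.3 GHz cores; match this to your hosts

Get-VMResourceConfiguration -VM $vm |
    Set-VMResourceConfiguration -CpuReservationMhz $cpuMhz -MemReservationMB $vm.MemoryMB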

The script doesn’t add or configure any disks that will be used for the actual SQL Server instances that will be installed on the server.  I have another script that handles that.

Working with Disks

In the past, the only ways to manage disks from PowerShell were to call old command-line utilities like diskpart or to use WMI.  That changed with Windows Server 2012 R2, which included new disk management cmdlets with PowerShell 4.0.

Note: These commands only work against Windows Server 2012 R2 and newer.

These commands will take care of all of the disk provisioning tasks once the VMDK has been added to the server, including initializing the disk, creating the partition, and formatting it with the correct block size.  The PowerShell commands also allow us to define whether the disk will be a mount point or be accessed through a drive letter.

Note: When attaching a disk as a mount point, there are some additional options that need to be selected to ensure that it does not get a drive letter assigned after a reboot.  Please see the code snippet below.

One of the neat things about these new cmdlets is that they use the new CIMSession connection type for PowerShell remoting.  Cmdlets that use CIMSessions are run on the local computer and connect to a WMI instance on the remote machine.  Unlike PSSessions, network connections are only utilized when a command is being executed.

An example of these cmdlets in action is the function to mount a VMDK as a mount point. 

Function Create-MountPoint
{
    # Initialize the new disk, create a partition, format it as an NTFS volume with a
    # 64KB allocation unit size, and attach it as an NTFS mount point (no drive letter).
    Param($Servername,$VolumeName,$VolumeSize,$Path,$CimSession)

    # Get-Disk reports size in bytes, so convert the requested size (in GB) to bytes
    # before matching the new, uninitialized (raw) disk.
    $VolumeSizeBytes = [uint64]$VolumeSize * 1GB

    $partition = Get-Disk -CimSession $CimSession |
        Where-Object {($_.PartitionStyle -eq "RAW") -and ($_.Size -eq $VolumeSizeBytes)} |
        Initialize-Disk -PartitionStyle GPT -PassThru |
        New-Partition -UseMaximumSize -AssignDriveLetter:$False

    $disknumber = $partition.DiskNumber
    $partitionnumber = $partition.PartitionNumber

    # Format the volume and label it, then mount it at the supplied path.
    Get-Partition -DiskNumber $disknumber -PartitionNumber $partitionnumber -CimSession $CimSession |
        Format-Volume -AllocationUnitSize 64KB -FileSystem NTFS -NewFileSystemLabel $VolumeName -Confirm:$false

    # Windows tries hard to hand mount points a drive letter after a reboot,
    # so the "no drive letter" intent is declared in several places.
    Add-PartitionAccessPath -CimSession $CimSession -DiskNumber $disknumber -PartitionNumber $partitionnumber -AssignDriveLetter:$False
    Add-PartitionAccessPath -CimSession $CimSession -DiskNumber $disknumber -PartitionNumber $partitionnumber -AccessPath $Path
    Set-Partition -CimSession $CimSession -DiskNumber $disknumber -PartitionNumber $partitionnumber -NoDefaultDriveLetter:$true
}

This function handles configuring a new disk for SQL, including formatting it with a 64KB block size, and attaches it as an NTFS mount point.

If you read through the code, you’ll notice that the disk is configured to not assign a drive letter in multiple places.  While writing and testing this function, all mount points would gain a drive letter when the system was rebooted.  In order to prevent this from happening, the script needed to tell Windows not to assign a drive letter multiple times.

What About Disks for SQL Instances

One thing that this particular script does not do is create the data and log volumes for a SQL instance.  While it wouldn’t be too hard to add that code in and prompt for an instance name, I decided to place that logic in another script.  This allows me to manage and use one script for adding instance disks instead of having that logic in two places.  This also helps keep both scripts smaller and more manageable.

Installing SQL Server

The last step in this process is to install SQL Server.  Unfortunately, that step still needs to be done by hand at this point.  The reason is that the SQL installation requires Kerberos in order to work properly, and it throws an error if I try to install using WinRM.

Simplifying VM Provisioning with PowerCLI and SQL

Virtualization has made server deployments easier, and putting a new server into production can be as easy as right-clicking on a template and selecting Deploy VM and applying a customization spec.

Deploying a VM from a template is just one step in the process.  Manual intervention, or worse – multiple templates, may be required if the new VM needs more than the default number of processors or additional RAM.  And deployment tasks don’t stop with VM hardware.  There may be other steps in the process such as putting the server’s Active Directory account into the correct OU, placing the VM in the correct folder, or granting administrative rights to the server or application owner.

All of these steps can be done manually.  But it requires a user to work in multiple GUIs and even log into the remote server to assign local admin rights.

There is an easier way to handle all of this.  PowerShell, with the PowerCLI and Active Directory plugins, can handle the provisioning process, and .Net calls can be used to add a user or group to the new server’s Administrator group while pulling the configuration data from a SQL database.

The Script

I have a script available on GitHub that you can download and try out in your environment.  The script, Provision-VM.ps1, requires PowerCLI, the Active Directory PowerShell cmdlets, and a SQL database for profile information, which is explained below.  You will also need two service accounts – an Active Directory user with Administrator permissions in vCenter and an Active Directory user with Domain Administrator permissions.

This script was designed to be used with the vCenter Orchestrator PowerShell module and WinRM.  vCO will provide a graphical front end for entering the script parameters and executing the script.

This script might look somewhat familiar.  I used a version of it in my Week 1 Virtual Design Master submission.

What Provision-VM.ps1 Does

So what exactly does Provision-VM.ps1 do?  Well, it does almost exactly what it says on the tin.  It provisions a brand new VM from a template.  But it does a little more than just deploy a VM from a template.

The exact steps that are taken are:

  1. Query the SQL database for the customization settings that are needed for the profile.
  2. Prestage the computer account in the Active Directory OU
  3. Create a non-persistent Customization Spec (see the sketch after this list)
  4. Set the IP network settings for the customization spec
  5. Deploy a new VM to the correct resource pool/cluster/host and datastore/datastore cluster using the specified template based on the details retrieved in step 1.
    Note: The Resource Pool  parameter is used in the script instead of the host parameter because the Resource Pool  parameter encompasses hosts, clusters, and resource pools.  This provides more flexibility than the host parameter.
  6. Add additional CPUs and RAM if specified using the -CPUCount and -RAMCount parameters
  7. Power on the VM and customize it
  8. Add the server owner user account or group to the local administrators group if one is specified using the -Owner parameter.
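
The sketch below covers steps 3 through 5 using the standard PowerCLI cmdlets. The base spec name, the variables, and the profile column names ($row.Template and friends) are assumptions made for illustration, not the script's actual field names.

# Minimal sketch of steps 3-5; the spec, variable, and column names are illustrative.
$row = $ProfileDetails.Rows[0]   # the single profile row returned from the SQL query

# Step 3: clone an existing spec into a temporary, non-persistent copy.
$spec = New-OSCustomizationSpec -OSCustomizationSpec (Get-OSCustomizationSpec -Name "Win2012R2-Base") `
    -Name "Temp-$VMName" -Type NonPersistent

# Step 4: apply the requested static IP settings to the temporary spec.
Get-OSCustomizationNicMapping -OSCustomizationSpec $spec |
    Set-OSCustomizationNicMapping -IpMode UseStaticIP -IpAddress $IPAddress `
        -SubnetMask $SubnetMask -DefaultGateway $Gateway -Dns $DNSServers

# Step 5: deploy from the template and placement details stored in the profile.
New-VM -Name $VMName -Template $row.Template -ResourcePool $row.ResourcePool `
    -Datastore $row.Datastore -OSCustomizationSpec $spec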

By using this deployment process along with some other scripts for configuring a server for a specific role after it has been deployed, I’ve been able to reduce the number of templates that need to be managed to 1 per Windows version.

WinRM and Working Around Kerberos Issues

vCenter Orchestrator is a great tool for automation and orchestration, and VMware has developed a PowerShell plugin to extend vCO management to Windows hosts and VMs.  This plugin even uses WinRM, which is Microsoft's preferred remote management technology for PowerShell.

WinRM setup for the vCO appliance, which I use in my environments, requires Kerberos to be used when making the remote connection.  I use a single Windows jumpbox to execute all of my PowerShell scripts from one location, so I run into Kerberos forwarding issues when using vCO and PowerShell to administer other systems.

There is a way to work around this, but I won't spend a lot of time on it since it deserves its own post.  However, you can learn more about how the password information is stored and converted into a PowerShell credential from this article on PowerShell.org.

I also put together a little script that creates a password hash file using some of the code in the article above.
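
The pattern itself is only a few lines. This is a sketch of the general approach rather than my exact script; the file path and account name are placeholders, and because DPAPI ties the hash file to the user and machine that created it, it has to be generated by the same service account that later reads it.

# One-time step, run as the service account: store the password hash on disk.
Read-Host "Enter the service account password" -AsSecureString |
    ConvertFrom-SecureString | Out-File C:\Scripts\svc-provision.txt

# Inside the provisioning script: rebuild a credential object from the hash file.
$securePwd = Get-Content C:\Scripts\svc-provision.txt | ConvertTo-SecureString
$cred = New-Object System.Management.Automation.PSCredential ("LAB\svc-provision", $securePwd)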

SQL-Based Profiles

One of the drawbacks of trying to script server deployments is that the script needs to be simple to use without being too hard to maintain.  I can make all required inputs – cluster or resource pool, datastore, template, etc. – into parameters that the person who runs the script has to enter.  But if you plan on using a script as part of a self-service provisioning model, keeping the number of parameters to a minimum is essential.  This helps limit the options that are available to users when deploying VMs and prevents them from having to worry about backend details like cluster and datastore names.

The tradeoff, in my experience, is that you need to put more into the script to compensate for having fewer parameters.   To do this, you’ll need to create “profiles” of all the customization settings you want to apply to the deployed server and code it directly into the script.

Let's say you have one vSphere cluster.  The cluster has three VLANs that servers can connect to, two datastore clusters where the server can be stored, and three templates that can be deployed.  To keep the script easy to run, and prevent admins or app owners from having to memorize all the details, you'd need to create 18 different profile combinations to cover the various settings.

This can make the script larger as you’ll need to include all combinations of settings that will be deployed.  It also makes it more likely that any additions or changes could introduce a script breaking bug like a missing curly bracket or quotation mark.

There is another way to reduce the size and complexity of the script while keeping parameters to a minimum – use a SQL database to store the customization settings.  These customization settings would be queried at run-time based on the profile that the end user selects.

The database for this script is a simple single table database.  There is a SQL script on Github to set up a table similar to the one I use in my lab.  If you choose to add or remove fields, you will need to edit the Provision-VM.ps1 file starting around line 106.

[Screenshot: database schema]

There are two ways that the information can be retrieved from the database.  The first method is to install SQL Server Management Studio for SQL Server 2012 or newer on the server where the script will be executed.  The other is to use .Net to connect to SQL and execute the query.  I prefer the latter option because it requires one less component to install.

The code for querying SQL from PowerShell, courtesy of Iris Classon’s blog that is linked above, is:

$dataSource = $SQLServer
$user = "SQL Server User Account"
$pwd = "Password"
$database = "OSCustomizationDB"
$databasetable = "OSCustomizationSettings"
$connectionString = "Server=$dataSource;uid=$user;pwd=$pwd;Database=$database;Integrated Security=False;"
 
$query = "Select * FROM $databasetable WHERE Profile_ID = '$Profile'"
 
$connection = New-Object System.Data.SqlClient.SqlConnection
$connection.ConnectionString = $connectionString
$connection.Open()
$command = $connection.CreateCommand()
$command.CommandText  = $query
 
$result = $command.ExecuteReader()

$ProfileDetails = New-Object "System.Data.DataTable"
$ProfileDetails.Load($result)
You may notice that SQL Authentication is used for querying the database.  This script was designed to run from vCO, and if I use the PowerShell plugin, I run into Kerberos issues when using Windows Integrated authentication.  The account used for accessing this database only needs to have data reader rights.

Once the settings have been retrieved from the database, they can be used to determine which template will be deployed, the resource pool and datastore or datastore cluster that it will be deployed to, temporarily modify an existing customization spec's NIC mapping settings at runtime, and even determine which OU the server's AD account will be placed in.

The benefit of this setup is that I can easily add new profiles or change existing profiles without having to directly edit my deployment script.  This gets changes into production faster.

More to Come…

This is just scratching the surface of deployment tasks that can be automated with PowerShell.  PowerShell 4.0 and Windows Server 2012 R2 add a lot of new cmdlets that can automate things like disk setup.

Orchestrating Exchange with #vCO

Microsoft Exchange is a system that is ideally suited for automation.  It's in almost every environment.  It has its own add-on to PowerShell that makes it easy to write scripts to handle tasks.  And most of the tasks that administrators perform after setup are rote tasks that are easily automated, such as setting up mailboxes and adding IP addresses to a receive connector.

Why vCenter Orchestrator?

Exchange already contains a robust automation platform with the PowerShell-based Exchange Management Shell.  This platform makes it easy to automate tasks through scripting.  But no matter how well these scripts are written, executing command line tasks can be error-prone if the end users of the scripts aren’t comfortable with a command line.  You may also want to limit input or provide a user-friendly interface to kicking off the script.

So what does that have to do with vCenter Orchestrator?  Orchestrator is an extensible workflow automation tool released by VMware and included with the vCenter Server license.   It supports Windows Remote Management and PowerShell through a plugin.

Start By Building a Jump Box/Scripting Server

Before we jump into configuring Orchestrator to talk to Exchange, we’ll need a Windows Server that we can configure to execute the scripts that Orchestrator will call.  This server should run Windows Server 2008 R2 at a minimum, and you should avoid Server 2012 R2 because the Exchange 2010 PowerShell cmdlets are not compatible with PowerShell 4.0. 

You will need to install the Exchange management tools on this server, and I would recommend a PowerShell IDE such as PowerGUI or Idera PowerShell Pro to aid in troubleshooting and testing.

Orchestrator and Exchange

As I mentioned above, Orchestrator can be used with PowerShell through a plugin.  This plugin uses WinRM to connect to a Windows Server instance to execute PowerShell commands and scripts.   In order to use this plugin, Orchestrator needs to be configured to support Kerberos authentication.

When I was testing out this combination, I was not able to get the Exchange Management Shell to load properly when using WinRM.  I think the issue has to do with Kerberos authentication and WinRM.

When you use WinRM, you’re remoting into another system using PowerShell.  In some ways, it is like Microsoft’s version of SSH – you’re logging into the system and working from a command line. 

The Exchange cmdlets add another hop in that process.  When you're using the Exchange cmdlets, you're executing those commands on one of your Exchange servers using a web service.  Unfortunately, Kerberos does not work well with multiple hops, so another way to access the remote server is needed.

Another Option is Needed

So if WinRM and the Orchestrator PowerShell plugin don’t work, how can you manage Exchange with Orchestrator?  The answer is using the same remote access technology that is used for network hardware and Unix – SSH.

Since Exchange is Active Directory integrated, we'll need an SSH server that runs on Windows, is compatible with PowerShell, and most importantly, supports Active Directory authentication.  There are a few options that fit here, such as the paid version of Bitvise, FreeSSHd, and nSoftware's PowerShell Server.

There is one other catch, though.  Orchestrator has a built-in SSH plugin to support automating tasks over SSH.  However, this plugin does not support cached credentials, and it runs under whatever credentials the workflow is launched under.  One of the reasons that I initially looked at Orchestrator for managing Exchange was to be able to delegate certain tasks to the help desk without having to grant them additional rights on any systems. 

This leaves one option – PowerShell Server.  PowerShell Server has an Orchestrator Plugin that can use a shared credential that is stored in the workflow.  It is limited in some key ways, though, mainly that the plugin doesn’t process output from PowerShell.  Getting information out will require sending emails from PowerShell.

You will need to install PowerShell Server onto your scripting box and configure it for interactive sessions.

[Screenshot: PowerShell Server settings]

Configuring the Exchange Management Shell for PowerShell Server

PowerShell Server supports the Exchange Management shell, but in a limited capacity.  The method that their support page recommends breaks a few cmdlets, and I ran into issues with the commands for configuring resource mailboxes and working with ActiveSync devices. 

One other method for launching the Exchange Management Shell from within your PowerShell SSH session is by using the following commands:

. 'C:\Program Files\Microsoft\Exchange Server\V14\bin\RemoteExchange.ps1'
Connect-ExchangeServer -auto
If you try that, though, you will receive an error that the screen size could not be changed.  This is due to the commands that run when the Exchange Management Shell loads – it resizes the PowerShell console window and prints a lot of text on the screen. 
 
The screen size change is controlled by a function in the RemoteExchange.ps1 script.  This file is located in the Exchange Install Directory\v14\Bin.  You need to open this file and comment out line 34.  This line calls a function that widens the window when the Exchange Management shell is loaded.  Once you’ve commented out this line, you need to save the modified file with a new file name in the same folder as the original.
 
[Screenshot: editing RemoteExchange.ps1]
 
In order to use this in a PowerShell script with Orchestrator, you will need to add it to each script or into the PowerShell profile for the account that will be executing the script.  The example that I use in my workflows looks like this:
 

. 'C:\Program Files\Microsoft\Exchange Server\V14\bin\RemoteExchange-Modified.ps1'
Connect-ExchangeServer -auto

Note: It may be possible to use the method outlined by Derek Schauland in this TechRepublic article in place of modifying the EMS script.  However, I have not tested this technique with Orchestrator.

Putting It All Together

Earlier this month, I talked about this topic on vBrownbag, and I demonstrated two examples of this code in action.  You can watch it here.

One of the examples that I demonstrated during that vBrownbag talk was an employee termination workflow.  I had a request for that workflow and the scripts that the workflow called, so I posted them on my GitHub site.  The Terminate-DeactivateEmail.ps1 script that is found in the GitHub repository is a working example.

Where I Go Spelunking into the Horizon View LDAP Database–Part 2

In Part 1 of this series, I shared some of the resources that are currently available in the greater VMware View community that work directly with the View LDAP database.  Overall, there are some great things being done with these scripts, but they barely scratch the surface of what is in the LDAP database.

Connecting to the View LDAP Database

Connecting to the View LDAP database has been covered a few times, and VMware has a knowledge base article that covers the steps to use ADSI Edit on Windows Server.

Any scripting language with an LDAP provider can also access the database.  Although they’re not View specific, there are a number of resources for using scripting languages, such as PowerShell or Python, with an LDAP database.
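
For example, a few lines of PowerShell with the ADSI provider can pull the desktop pool names straight out of the database. This is a sketch rather than production code; it assumes it runs locally on a Connection Server and that the pae-ServerPool object class matches the Server Groups OU described below.

# Minimal sketch: list desktop pool names from the View LDAP database with ADSI.
# Assumes it runs locally on a Connection Server (the naming context is dc=vdi,dc=vmware,dc=int).
$root = [ADSI]"LDAP://localhost:389/OU=Server Groups,DC=vdi,DC=vmware,DC=int"

$searcher = New-Object System.DirectoryServices.DirectorySearcher($root)
$searcher.Filter = "(objectClass=pae-ServerPool)"

$searcher.FindAll() | ForEach-Object { $_.Properties["cn"] }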

Top-Level LDAP Organizational Units

[Screenshot: top-level OUs in the View LDAP database]

Like Active Directory or any other LDAP database, there are a number of top-level OUs where all the objects are stored.  Unlike many LDAP databases, though, the naming of these OUs doesn’t make it easy to navigate and find the objects that you’re looking for.

The OUs that are in the View LDAP Database are:

  • Applications – Pool, Application, and ThinApp settings
  • Data Disks – Persistent Desktop Data Disks
  • Hosts – ?? Possibly Terminal Server or Manual Pool members
  • Groups – View Folders and Security Groups/Roles
  • ForeignSecurityPrincipals – Active Directory SIDs used with View
  • Packages – ?? Possibly ThinApp repositories or packages
  • People – ??
  • Polices – Various system properties stored in child container attributes
  • Properties – VDM properties; a child OU contains event strings
  • Roles – Built-in security?
  • Servers – Desktops
  • Server Groups – Desktop Pools

You may notice that a few of the OUs have question marks under their purpose.  I wasn’t able to figure out what those OUs were used for based on how I had set up my home lab.  I normally don’t work with Terminal Server or Manual pools or ThinApp, and I suspect that the OUs that aren’t defined relate to those areas.

This series is going to continue at a slower pace over the next couple of months as I shift the focus to writing scripts against the LDAP database.

Exchange Restores and PowerShell Scripting Games

In my last post, I posted a script that I use to back up my Exchange 2010 test environment using PowerShell and Windows Server Backup.  But what if I need to do a restore?

Well, the good people over at ExchangeServerPro.com have a good step-by-step walkthrough of how to restore an individual mailbox that covers restoring from WSB, rolling the mailbox forward, and recovering data.

If you’re interested in how a restore would work, check out the article.

PowerShell Scripting Games

Microsoft's annual Scripting Games started on Monday.  Unlike previous years, scripting is limited to the PowerShell scripting language this year.  A beginner and an advanced scripting challenge are posted each day, and you have seven days to submit a solution to the problem.

You can find the challenges and scripting tips on the Hey! Scripting Guy blog.  The official rules also include a link to the registration page.

If you’re looking to learn about PowerShell or just challenge yourself with a scripting problem, you might want to check this out.

Scripting Exchange 2010 Backups on Windows Server 2008R2 using PowerShell and Windows Backup Service

I've struggled with backing up my Exchange 2010 SP1 environment in my home lab since I upgraded over a month ago.  Before I had upgraded, I was using a script that did Volume Shadow Copy Service (VSS) backups.

After upgrading, I wanted to cut my teeth with Windows Server Backup (WBS).  Windows Server Backup is the replacement for the NTBackup program that was included with Windows until Vista, and it uses VSS to take snapshot backups of entire volumes or file systems.

Unlike NTBackup, WBS will not run backup jobs to tape.  You will need to dedicate an entire volume or use a network folder to store your backups.  If you use the GUI, you can only retain one backup set, and a new backup will overwrite the old.

This was an issue for me.  Even though I have Exchange configured to retain deleted items for 14 days and deleted mailboxes for 30 days, I like to keep multiple backups.  It allows me to play with multiple recovery scenarios that I might face in the real world.

And that is where PowerShell comes in.  Server 2008R2 allows users to create a temporary backup policy and pass that policy to the Windows Backup Service.  This will also allow you to change the folder where the backup is saved each time, and you can easily add or remove volumes, LUNs, and databases without having to reconfigure your backup job each time.

I started by working from the script from Michael Smith that I linked to above.  To make this script work with WBS, I first had to modify it to work with Exchange 2010.  One of the major differences between Exchange 2007 and Exchange 2010 is that storage groups have been removed in the latter.  Logging and other storage group functions have been rolled into the database, making them self-contained.

The original script used the Get-StorageGroup PowerShell command to get the location of each storage group’s log files.  Since this command is no longer present, I had to add sections of this function to the function that retrieved the location of the database files.

After adding some error handling by using Try/Catch, the section that locates mailbox databases looks like:

Try
{
    foreach ($mdb in $colMB)
    {
        if ($mdb.Recovery)
        {
            Write-Host ("Skipping RECOVERY MDB " + $mdb.Name)
            continue
        }
        Write-Host ($mdb.Name + "`t " + $mdb.Guid)
        Write-Host ("`t" + $mdb.EdbFilePath)
        Write-Host " "

        $pathPattern.($mdb.EdbFilePath) = $i

        $vol = $mdb.EdbFilePath.ToString().SubString(0, 2)
        $volumes.set_item($vol,$i)

        #This section gets the log file information for the backup
        $prefix  = $mdb.LogFilePrefix
        $logpath = $mdb.LogFolderPath.ToString()

        ## E00*.log
        $pathPattern.(Join-Path $logpath ($prefix + "*.log")) = $i

        $vol = $logpath.SubString(0, 2)
        $volumes.set_item($vol,$i)

        $i += 1
    }
}
Catch
{
    Write-Host "There are no Mailbox Databases on this server."
}

I also removed all of the functions related to building and calling the Disk Shadow and RoboCopy commands.  Since we will be using WBS, there is no need to manually trigger a VSS backup.

Once we know where our mailbox and public folder databases and their log files are located, we can start to build our temporary backup job.  The first thing we need to do is create a new backup job called $bpol by using the New-WBPolicy cmdlet.

##Create New Backup Policy for Windows Server Backup
$BPol = New-WBPolicy

Once we have created our backup policy, we add the drives that we want to backup.  We can tell Windows Server Backup which drives we want to back up by using the drives and folder paths that we retrieved from Exchange using the code above.  We use the Get-WBVolume cmdlet to get the disk or volume information and the Add-WBVolume command to add it to the backup job.

##Define volumes to be backed up based on Exchange filepath information
##Retrieved in function GetStores

ForEach($bvol in $volumes.keys)
{
$WBVol = Get-WBVolume -VolumePath $bvol
Add-WBVolume -Policy $BPol -Volume $WBVol
}

The Add-WBVolume doesn’t overwrite previous values, so I can easily add multiple drives to my backup job.

Now that my backup locations have been added, I need to tell WBS that this will be a VSS Full Backup instead of a VSS Copy Backup.  I want to run a full backup because this will commit information in the log files to the database and truncate old logs.  The command to set the backup job to a full backup is:

Set-WBVssBackupOptions -Policy $BPol -VssFullBackup

Finally, I need to set my backup target.  This script is designed to back up to a network share.  Since I want to retain multiple backups, it will also create a new folder to store the backup at runtime.  I created a function called AddWBTarget to handle this part of the job.

Function AddWBTarget
{
    ##Create a new folder for the backup in $backupLocation using a date-based name
    $folder = Get-Date -uFormat "%Y-%m-%d-%H-%M"
    md "$backupLocation\$folder"
    $netFolder = "$backupLocation\$folder"

    $netTarget = New-WBBackupTarget -NetworkPath "$netFolder"
    Add-WBBackupTarget -Policy $BPol -Target $netTarget
}

The backup location needs to be a UNC path to a network folder, and you set this when you run the script with the -backuplocation parameter.  The function will also create a new folder and then add this location to the backup job using the Add-WBBackupTarget cmdlet.

The documentation for the Add-WBBackupTarget states that you need to provide user credentials to backup to a network location.  This does not appear to be the case, and WBS appears to use the credentials of the user running the script to access the backup location.

WBS now has all of the information that it needs to perform a backup, so I will pass the temporary backup job to WBS using Start-WBBackup with the -Policy parameter.
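
With the policy built, the kickoff itself is a single line; Start-WBBackup runs the temporary policy and reports progress in the console as the backup runs.

# Run the one-time backup job defined by the temporary policy.
Start-WBBackup -Policy $BPol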

You can run the script manually by running EX2k10WBS.ps1 from your Exchange 2010 server.  You will need to declare your backup location by using the -backuplocation parameter.  Since this script will be performing a backup, you will need to run PowerShell with elevated permissions.

You can also set this script to run as a scheduled task.

You can download the entire script here.