My VMworld Schedule

I’ll be attending my first VMworld at the end of the month.  My schedule focuses mainly on three areas:  Automation (PowerCLI and Puppet), VMware View, and networking.

Monday

VSVC4944  —  PowerCLI Best Practices – A Deep Dive 11:00 AM

NET1000-GD  —  vSphere Distributed Switch with Vyenkatesh Deshpande 12:30 PM

VAPP5613  —  Successfully Virtualize Microsoft Exchange Server 2:00 PM

VCM7369-S  —  Uncovering the Hidden Truth in Log Data With vCenter Log Insight 3:30 PM

Tuesday

SEC5755  —  VMware NSX with Next-Generation Security by Palo Alto Networks 11:00 AM

EUC4764  —  What’s New and Next for VMware Horizon View 1:00 PM

EUC5434  —  Enterprise Architecture Design for VMware Horizon View 5.2 2:30 PM

BCO5362  —  Veeam Backup & Replication v7 Deep Dive 4:00 PM

VCM5271  —  VMware and Puppet: How to Plan, Deploy & Manage Modern Applications 5:30 PM

Wednesday

VSVC5931  —  PowerCLI What’s New? Administrating with the CLI Was Never Easier 8:00 AM

EUC5249  —  PCoIP: Sizing For Success 10:00 AM

VSVC5511  —  Deploying vSphere with OpenStack: What It Means to Your Cloud Environment 11:00 AM

VAPP5932  —  Virtualizing Highly Available SQL Servers 12:30 PM

EUC4629  —  ThinApp 101 and what’s new in ThinApp next version 2:30 PM

EUC1006-GD  —  View with Andre Leibovici 4:00 PM


I don’t have any Thursday sessions on my schedule.  I have two good reasons for this.  The first is that the few sessions that I wanted to attend were already full.  The other reason is that I plan on sitting for the VCP on Thursday morning.  I plan to spend whatever time I have left after that talking to vendors or visiting a friend in San Diego.

Notes from the Field

Remember to Update your CRLs (if you have an Offline Root CA)

I had an interesting issue crop up two weeks ago in my VMware View environment – it basically stopped accepting all the certificates from my internal CA as valid.  The logs showed that they were failing on a revocation check, and I had to disable revocation checking on both of my connection brokers after opening a case with VMware.  View 5.1 requires valid certificates on the connection brokers and vCenter, and if those certificates expire, are revoked, or cannot be checked against a revocation list, the system will refuse to accept them.

A similar issue reared its ugly head on my Exchange Server today when I had to replace an expiring certificate.  I received a similar error in my Exchange 2010 Management Console, and a little digging led me to some tips to better troubleshoot this issue.  It turns out that the problem was an expired certificate revocation list (CRL) from my Offline Root CA, which has been…well…offline for a while. Once I generated a new CRL and copied it to the distribution point, all of the issues I was having cleared up.
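For anyone in a similar spot, renewing and republishing the CRL from an offline root is only a couple of commands.  This is a sketch from my lab; the CRL file name and the distribution point path below are examples, not defaults:

```powershell
# On the offline Root CA: generate a fresh CRL
certutil -crl

# Copy the new CRL (written to C:\Windows\System32\CertSrv\CertEnroll)
# to the HTTP distribution point named in your certificates' CDP extension
copy C:\Windows\System32\CertSrv\CertEnroll\RootCA.crl \\webserver\pki\

# Optionally publish it to Active Directory as well
certutil -dspublish -f RootCA.crl
```

Set a calendar reminder based on your CRL validity period so the list gets refreshed before it expires again.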

The tips in this post helped greatly when troubleshooting this issue: http://social.technet.microsoft.com/Forums/en-US/winserversecurity/thread/348a9b8d-8583-488c-9a96-42b892c4ae77/

Nervepoint Access Manager

Account lockouts and password resets are two things that IT support personnel frequently deal with.  In my experience, these two tasks make up a large chunk of help desk tickets.

Self-service account management tools do exist, but many of these tools are expensive, and the cost can put them out of reach for small businesses and non-profits.

That is where Nervepoint Access Manager (abbreviated NAM) comes in.  NAM is a Linux-based virtual appliance that provides web-based self-service password reset and account unlock utilities.


Download and Setup
NAM can be downloaded from the Nervepoint website.  The download file is a TAR that contains the VMware vmx and vmdk files, so you will need a program like 7-zip to extract it.  Once downloaded, you will need to upload these files to a datastore in your VMware environment and add the virtual machine to your inventory.

Once the VM is powered on, it will grab a DHCP address.  My test network is small, so I was able to easily find it and log into the administrative web interface to configure my network adapter.  This may be an issue in larger environments or in data centers without DHCP, but there is a community forum post that describes how to configure the network adapter from the console.

Configuring access to Active Directory is fairly easy too.  Opening your web browser and browsing to the Nervepoint appliance will bring up a first-time setup screen.  It will use DNS to detect any Active Directory domains in your environment and connect to them.  You will also need to set up a service account that has permissions to change passwords on any OUs that contain users.

In order to successfully connect to an Active Directory domain, the appliance needs LDAP over SSL (LDAPS) configured.  For larger environments, this won’t be a problem, as they will likely have an Active Directory-integrated PKI already set up.  Environments without PKI will need at least one Enterprise CA and a Windows Server Enterprise license, or a 3rd-party certificate.
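If you’re not sure whether LDAPS is already available, a quick sanity check from any domain-joined machine is to see whether a domain controller answers on port 636.  The host name below is an example; note that a listening port only proves something is there, not that the certificate is valid:

```powershell
# Attempt a TCP connection to the LDAPS port on a domain controller
# (replace dc01.example.com with one of your DCs)
$tcp = New-Object System.Net.Sockets.TcpClient
$tcp.Connect("dc01.example.com", 636)
$tcp.Connected   # True if something is listening on 636
$tcp.Close()
```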

Once configured, it is fairly easy for end-users to use.  They will need to log in to configure their answers to the questions that will be used to verify their identity.  Password changes and account unlocks are simple affairs – a user only needs to answer three of the five questions correctly to perform a password reset.


Nervepoint Pros
Despite being a beta, there are several things I like about the Nervepoint appliance.  It is a fairly small VM that uses less than a gigabyte of RAM.  It is suitable for production use in smaller environments, and it is very easy to use.


Nervepoint Cons
Even though I like this appliance a lot and would consider deploying it in my production network, there are a couple of areas for improvement.

For starters, there is very little documentation.  There are no install or administrator guides, and the forums don’t have a lot of information yet.  There is a FAQ section of the website, but it doesn’t have a lot of information in it either.  There is no read-me or license information included with the appliance either.

The VM also doesn’t have VMware Tools installed.  I believe this is something the developers should have done before shipping the appliance.  It’s not a huge deal, but it would help with managing the VM.

I don’t have the ability to customize the security questions that my employees are asked or set the number of questions they must answer correctly.  The ability for administrators to customize these settings may be important in some environments.

And finally, the distribution method for this appliance leaves something to be desired.  The VM is downloaded from the Nervepoint website, and it took multiple attempts to correctly import the virtual machine into my test environment.  A better option might be to package the appliance as an OVF template and list it on the VMware marketplace.


Conclusion
Despite the cons, the Nervepoint Access Manager is a fairly decent little Self-Service Account Management appliance, and I would strongly consider deploying it in my production network in the future.

Edit:  It was brought to my attention by the developers of this product that the license and the default questions can be changed during the initial setup.  I did not have these two items in my notes, and I apologize for the error.

Another Great Article on Career Development…

In my last post, I mentioned that I follow a few blogs through Google Reader.  One of those blogs is the Ask the Directory Services Team blog on the Microsoft TechNet site.

Normally, this blog is filled with answers to some very technical questions on Active Directory and related technologies.  One of their more recent entries, however, focused on the author’s career development philosophy.

This would be a great read for any college student entering the workforce, and although some of the tips may seem like common sense, plenty of rookie (and sometimes experienced) IT pros have trouble with some of these basics.

For instance, I know that I have trouble filling that conversational dead space while working with others.  This is especially obvious when I work with other introverts.

I’ll also add that admitting your mistakes gets easier as you get older and gain confidence.

The blog ends with a truism that applies to any career: “I used many synonyms in this post, but not once did I say “job.” Jobs end at quitting time. A career is something that wakes you up at midnight with a solution. I can’t guarantee success with these approaches, but they’ve kept me happy with my IT career for 15 years. I hope they help with yours.”

Skills Development and IT

A good friend of mine recently wrote a blog post on how to get hired.  Most of the post discusses the interview process and his reflections on performing mock interviews with college students studying IT.

The main point of the article can be summed up with this sentence near the end of the post:  “Want to know what outdoes a Masters degree and a 4.0 GPA?  Pure, raw, f*cking talent.”

Talent is rarely raw and pure.  It is something that is honed through patience and practice.  It’s not just about the successes and failures on projects and teams, but about applying the lessons you learned from them.

What everyone does have, though, is potential.  Potential is like a lump of clay or a block of marble.  To the average person, it appears to be nothing extraordinary.  But in the hands of a skilled artist, we get amazing works like Michelangelo’s David or the Venus de Milo.

When I think of talent in terms of a career, I think of job skills.  IT is a career path where one can easily keep their skills sharp and develop new ones.

There are a couple of ways that one can easily develop their technical skills, and anyone coming out of college with the intent of having a career in information technology should be doing some of these early in their career.


Reading
One of the easiest ways to expose yourself to new technology or improve your knowledge of existing technology is to read about it.  There are hundreds of books and blogs about every area of information technology.  There is a ton of information that can be gleaned from spending twenty minutes a day reviewing your RSS feeds.

I have 132 blogs in my Google Reader account.  Most are technical blogs.  I know that sounds like a lot, but once you filter through the obvious marketing posts from sites like TechRepublic, there is a lot of good content, and you can usually get through everything by spending 20 minutes per day.

A large chunk of those are also Microsoft blogs.  It seems like every product team at Microsoft has at least one official blog, and unlike many other vendor blogs that I’ve followed, the content usually seems to be geared towards technical readers.  Two good examples are the Exchange Team blog and the Ask the Directory Services Team blog.

Aside from vendor blogs, there is a lot of good community-generated content.  I could list off a large number of blogs by other systems administrators that I consider good sources on a variety of topics, but then this would turn into a blog about other good blogs.

Another good source of content is sites like Server Fault and Stack Overflow, a pair of question-and-answer websites geared toward IT pros and programmers, where users can ask the community about problems, projects, and how to handle different situations.


Certifications
Technology certifications are usually offered by vendors to certify that someone has a level of familiarity with their products.  For an experienced IT professional, it proves that you know something, and it can help round out your knowledge of a particular product.  For someone just entering the workforce, it can give you some hands-on knowledge.

Entry-level certification often involves one of two paths:  self-study or classroom.  I tend to focus more on the self-study, and this is usually the cheaper option for someone who doesn’t have a company paying for a week-long bootcamp.


Hands-on Experience
When I say hands-on experience, I don’t mean the kind of experience you get when someone is paying for your time, or when you are volunteering for an organization.  I mean the kind you get when you experiment on your own time.  In my opinion, this is probably the best way to learn about technology. 

The nice thing about IT is that you can easily get experience on enterprise-grade hardware and software.  Dell offers decent entry-level tower servers for $500-$600.  A Microsoft TechNet Plus subscription runs about $150 for the lowest-level package, and VMware and other vendors offer free versions of their enterprise software that let you learn the basics of the system, even if those versions are feature-limited.

And you can’t forget about Linux and the other open-source *nixes and software packages.

It basically comes down to this – if you want to be a database administrator, you should be learning about databases by building them.  If you want to be a Linux administrator, you should be setting up Linux servers.  It pays off when you can go into an interview and explain a complicated problem that you solved on your personal network and what it taught you.

Exchange Restores and PowerShell Scripting Games

In my last post, I shared a script that I use to back up my Exchange 2010 test environment using PowerShell and Windows Server Backup.  But what if I need to do a restore?

Well, the good people over at ExchangeServerPro.com have a good step-by-step walkthrough of how to restore an individual mailbox that covers restoring from Windows Server Backup, rolling the mailbox forward, and recovering data.

If you’re interested in how a restore would work, check out the article.

PowerShell Scripting Games

Microsoft’s annual Scripting Games started on Monday.  Unlike previous years, this year’s games are limited to the PowerShell scripting language.  A beginner and an advanced scripting challenge are posted each day, and you have seven days to submit a solution to each problem.

You can find the challenges and scripting tips on the Hey, Scripting Guy! blog.  The official rules also include a link to the registration page.

If you’re looking to learn about PowerShell or just challenge yourself with a scripting problem, you might want to check this out.

Scripting Exchange 2010 Backups on Windows Server 2008 R2 Using PowerShell and Windows Server Backup

I’ve struggled with backing up my Exchange 2010 SP1 environment in my home lab since I upgraded over a month ago.  Before the upgrade, I was using a script that did Volume Shadow Copy Service (VSS) backups.

After upgrading, I wanted to cut my teeth with Windows Server Backup (WBS).  Windows Server Backup is the replacement for the NTBackup program that was included with Windows until Vista, and it uses VSS to take snapshot backups of entire volumes or file systems.

Unlike NTBackup, WBS will not run backup jobs to tape.  You will need to dedicate an entire volume or use a network folder to store your backups.  If you use the GUI, you can only retain one backup set, and a new backup will overwrite the old.

This was an issue for me.  Even though I have Exchange configured to retain deleted items for 14 days and deleted mailboxes for 30 days, I like to keep multiple backups.  It allows me to play with multiple recovery scenarios that I might face in the real world.

And that is where PowerShell comes in.  Server 2008 R2 allows users to create a temporary backup policy and pass that policy to the Windows Backup Service.  This also allows you to change the folder where the backup is saved each time, and you can easily add or remove volumes, LUNs, and databases without having to reconfigure your backup job each time.

I started by working from the script by Michael Smith that I linked to above.  To make it work with WBS, I first had to modify it for Exchange 2010.  One of the major differences between Exchange 2007 and Exchange 2010 is that storage groups were removed in the latter.  Logging and other storage group functions have been rolled into the database, making each database self-contained.

The original script used the Get-StorageGroup PowerShell cmdlet to get the location of each storage group’s log files.  Since this cmdlet is no longer present, I had to fold that logic into the function that retrieves the location of the database files.

After adding some error handling by using Try/Catch, the section that locates mailbox databases looks like:

Try
{
    foreach ($mdb in $colMB)
    {
        if ($mdb.Recovery)
        {
            Write-Host ("Skipping RECOVERY MDB " + $mdb.Name)
            continue
        }
        Write-Host ($mdb.Name + "`t " + $mdb.Guid)
        Write-Host ("`t" + $mdb.EdbFilePath)
        Write-Host " "

        $pathPattern.($mdb.EdbFilePath) = $i

        $vol = $mdb.EdbFilePath.ToString().SubString(0, 2)
        $volumes.Set_Item($vol, $i)

        ## This section gets the log file information for the backup
        $prefix  = $mdb.LogFilePrefix
        $logpath = $mdb.LogFolderPath.ToString()

        ## E00*.log
        $pathPattern.(Join-Path $logpath ($prefix + "*.log")) = $i

        $vol = $logpath.SubString(0, 2)
        $volumes.Set_Item($vol, $i)

        $i += 1
    }
}
Catch
{
    Write-Host "There are no Mailbox Databases on this server."
}

I also removed all of the functions related to building and calling the DiskShadow and RoboCopy commands.  Since we will be using WBS, there is no need to manually trigger a VSS backup.

Once we know where our mailbox and public folder databases and their log files are located, we can start to build our temporary backup job.  The first thing we need to do is create a new backup policy called $BPol by using the New-WBPolicy cmdlet.

##Create New Backup Policy for Windows Server Backup
$BPol = New-WBPolicy

Once we have created our backup policy, we add the drives that we want to back up.  We can tell Windows Server Backup which drives to include by using the drives and folder paths that we retrieved from Exchange with the code above.  We use the Get-WBVolume cmdlet to get the volume information and the Add-WBVolume cmdlet to add it to the backup job.

##Define volumes to be backed up based on Exchange filepath information
##Retrieved in function GetStores

ForEach ($bvol in $volumes.Keys)
{
    $WBVol = Get-WBVolume -VolumePath $bvol
    Add-WBVolume -Policy $BPol -Volume $WBVol
}

Add-WBVolume doesn’t overwrite previous values, so I can easily add multiple drives to my backup job.

Now that my backup locations have been added, I need to tell WBS that this will be a VSS Full Backup instead of a VSS Copy Backup.  I want to run a full backup because this will commit information in the log files to the database and truncate old logs.  The command to set the backup job to a full backup is:

Set-WBVssBackupOptions -Policy $BPol -VssFullBackup

Finally, I need to set my backup target.  This script is designed to back up to a network share.  Since I want to retain multiple backups, it will also create a new folder to store the backup at runtime.  I created a function called AddWBTarget to handle this part of the job.

Function AddWBTarget
{
    ## Create a new folder for the backup in $backupLocation using a date format
    $folder = Get-Date -UFormat "%Y-%m-%d-%H-%M"
    md "$backupLocation\$folder"
    $netFolder = "$backupLocation\$folder"

    $netTarget = New-WBBackupTarget -NetworkPath "$netFolder"
    Add-WBBackupTarget -Policy $BPol -Target $netTarget
}

The backup location needs to be a UNC path to a network folder, and you set it when you run the script with the -BackupLocation parameter.  The function creates a new folder and then adds this location to the backup job using the Add-WBBackupTarget cmdlet.

The documentation for Add-WBBackupTarget states that you need to provide user credentials to back up to a network location.  This does not appear to be the case; WBS appears to use the credentials of the user running the script to access the backup location.
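If your environment does require explicit credentials for the share, New-WBBackupTarget can accept them.  Something like this should work, though the account name and share path below are examples:

```powershell
# Prompt for the account that has write access to the backup share
$cred = Get-Credential "DOMAIN\backupsvc"

# Build the target with explicit credentials instead of the caller's token
$netTarget = New-WBBackupTarget -NetworkPath "\\nas\exbackup" -Credential $cred
Add-WBBackupTarget -Policy $BPol -Target $netTarget
```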

WBS now has all of the information that it needs to perform a backup, so I pass the temporary backup policy to WBS using Start-WBBackup with the -Policy parameter.
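That hand-off is a one-liner once the policy above has been assembled:

```powershell
## Hand the temporary policy to Windows Server Backup and run it
Start-WBBackup -Policy $BPol
```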

You can run the script manually by running EX2k10WBS.ps1 from your Exchange 2010 server.  You will need to declare your backup location by using the -BackupLocation parameter.  Since this script performs a backup, you will need to run PowerShell with elevated permissions.

You can also set this script to run as a scheduled task.
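As a sketch, a nightly run via a scheduled task might look like this.  The task name, service account, and paths are examples; adjust them for your environment and run the command from an elevated prompt (schtasks will prompt for the account’s password):

```powershell
# Create a daily scheduled task that runs the backup script at 1:00 AM
schtasks /create /tn "Exchange WSB Backup" /sc daily /st 01:00 /ru "DOMAIN\backupsvc" /rp * /tr "powershell.exe -NoProfile -File C:\Scripts\EX2k10WBS.ps1 -BackupLocation \\nas\exbackup"
```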

You can download the entire script here.

Troubleshooting Low Space Issues on the HTC Desire

One of the recent complaints about the HTC Desire on the Cellcom Facebook page was that the phone was constantly displaying a low space warning.  A number of users have experienced this warning, and for some, it has prevented them from downloading applications from the Android Market.

So how can you troubleshoot this issue and correct it?  The first thought for many people is to push for an official upgrade to Android 2.2.  Android 2.2 allows users to install and/or move supported apps to a memory card.  While this could solve the issue, Cellcom has not released Android 2.2 for the Desire yet, and there is no guarantee that this will actually resolve the issue for all users.

I don’t intend to solve this issue for all readers or users of the Desire with Android 2.1 either.  But I can provide a few troubleshooting steps to help narrow down the issue.

The first step in troubleshooting this issue is to see how much space you have available on the phone.  To do this:

  1. Go into the Settings Menu
  2. Select SD & Phone Storage
  3. Check the amount available under Internal Phone Storage

Once you see how much storage remains, you can look at the list of Applications installed on your phone and the amount of space they are consuming.  You can access this by:

  1. Going into the Settings Menu
  2. Select Applications
  3. Select Manage Applications
  4. Press the menu button and select “Sort By Size.”
    (Note:  This option may not be available while the list loads).

The list will sort from the largest to smallest applications.  Some of the larger applications that you might have that come with the phone include the Contacts Storage, Maps, Internet, and QuickOffice.  Other larger applications that you can install from the Market include CardioTrainer, Evernote, and Google Voice.

If you have a few applications that you downloaded but no longer need or wish to remove, you can uninstall them from this menu by selecting the application and clicking the Uninstall button on the Application Details screen.

However, you might not have many applications installed, removing applications might not alleviate the low-storage-space messages, or your Contacts Storage might be extremely large.  If any of these are true, then you might need to check your synchronization settings.

You check your Synchronization settings by:

  1. Going into the Settings Menu
  2. Select Accounts and Sync

You will need to look under the Manage Accounts section of the Accounts and Sync screen.  This section displays all accounts that you have set up to sync with another data source, such as your Google Account, Weather, Twitter, and Exchange ActiveSync.  Review all your accounts to see what data you have syncing with your phone.  If there is anything you don’t want to sync, you can disable (or in some cases remove) the account.

There is one application in particular that can be troublesome:  Facebook for HTC Sense.  You can find additional details on what this program does here.  Why is Facebook for HTC Sense a problem?  Well, if it is configured, it will push your entire Facebook Friends list, including a photo for each contact, into your contacts list (People).  With enough friends, this can eat up a large amount of internal storage on the phone.

If Facebook for HTC Sense is configured, you will need to remove the account.  Disabling it from syncing Facebook Friends with your contacts will not remove the contacts it has downloaded; it will just hide them from view.  Removing the account will delete any information it has downloaded, and this could free up internal storage.  I reclaimed about 20 MB of storage by doing this on my Desire.

If this still doesn’t resolve your issue, or you didn’t have Facebook for HTC Sense configured, you might need to perform a factory reset or take your phone in for additional troubleshooting.

If you have a Desire and are experiencing this issue, did these steps help you resolve it?  Is there something else you did to solve your low space issues?

Looking for CCNA Study Resources

One of my long-time personal goals has been to complete my CCNA.  I took the first half of the exam in May of 2009.  I’m about halfway through the book for the second half of the exam, and that puts me knee-deep in IP routing and routing protocols.

Getting hands-on experience with the switching section (which includes VLANs and trunking) isn’t too difficult, as I have a Cisco 2950 sitting in my basement as part of my home network.  It’s the routing part that I worry about, as I would like a little more hands-on experience; the exam includes questions where you troubleshoot simulated networks using actual device commands.

I found a tool that emulates router IOS images, provided that I can acquire them.  However, most of the labs I have seen seem to cover CCNP or CCIE topics like MPLS and BGP.

Does anyone know of any resources that I can use for studying?

Today’s Blog Roundup – (Another) Free Trip to VMworld

Matt over at Standalone Sysadmin is reporting that Gestalt IT is sponsoring a contest to win a free trip to VMworld in San Francisco.

Unlike most normal contests, you need to describe how you’re going to use the trip to benefit the community.

This is very similar to a contest that Jason from boche.net ran about two months ago.

I’m half-tempted to enter the contest.  Live-blogging from VMworld would be a great experience.