What’s In The Studio – Pivoting Community Involvement to Video

As we all start off 2021, I wanted to talk a little about video.

As we all know, 2020 put the kibosh on large, in-person events. This included all of the vendor conferences, internal conferences, and community events like the VMware User Group UserCons and other user groups. Most of these events transitioned to online events with presenters delivering recorded sessions. It also meant more webinars, Zoom meetings, and video conferences.

And it doesn’t look like this will be changing for at least the first half of 2021.

I’ve seen a number of blog and Twitter posts recently about home studios (for example, this great post by Johan van Amersfoort or this Twitter thread from John Nicholson), and I thought I would share my setup.


I was not entirely unprepared to transition to video last year. I had been a photographer since high school, and I made the jump to digital photography in college when Canon released the Digital Rebel. I mainly focused on sports that are played in venues a step or two above dimly lit caves. After college, I mostly put the camera down (except for a couple of vacations and trying my hand at a wedding or two, which was not my thing). At the beginning of 2020, figuring I might as well get back into photography since I was traveling ( 😂 ), I picked up a used Canon 6D that was opportunistically priced. It could also record video in 1080p.

Slideshow: Some of my photos from years past.

Video was new ground for me, and it resulted in a lot of experimentation and purchasing to get things right. This was also happening at the beginning of the lockdowns, when my whole family was at home all day and almost everything I needed was delayed or backordered. Some of this was driven by equipment limitations, which I will cover below, and some of it was driven by other factors.

And as I went through this, I spent a lot of time learning what worked and what didn’t work for me. For example, I found that sitting in front of my laptop to record in Zoom didn’t work. When recording for a VMUG or VMworld, I wanted to stand and have room to move around because that was what felt natural to me.

Before I go into my setup, I want to echo one point that Johan made in his post. The audio and video gear is there to support the message and enable remote delivery. If you are new to presenting, spend some time learning the craft of storytelling and presentation design. Johan recommended two books by Nancy Duarte – resonate and Slide:ology. I highly recommend these books as well. If you’re new to presenting in general, I also recommend finding a mentor and learning how to use PowerPoint, as its graphics capabilities are powerful but intimidating. There are a number of good YouTube videos, for example, on how to do different things in PowerPoint.

Requirements and Constraints

I have primarily used my video gear in two different ways. The first was for video conferencing. Whether it was Zoom, Teams, or “Other,” video became a major part of meetings as a replacement for in-person meetings and workshops. The second use case was the one I probably focused on more – producing recorded content for user groups and conferences. My goal here was to replicate some of the feel of presenting live while taking advantage of the capabilities that video offers.

Most of the recorded video content was for VMUG UserCons. These sessions were 40 minutes long, and they wanted to have presenters on camera along with the slides.

There is a third use case, which didn’t really apply in 2020: live events such as webinars and video podcast recordings. My studio kit can be used for these as well.

I had a few things to consider when planning out my setup. The first was space. My office was not set up for keeping the gear assembled permanently, and the furniture arrangement was dictated by where the one outlet was located. (I have since installed additional outlets in my office and rearranged.) I also wanted a space where I could record while standing. Both of these factors meant that I would be using common areas to record, so my gear had to be portable and easy to assemble.

Most of my recording was originally done in my kids’ playroom in my basement.

The second consideration was trying to keep this budget friendly. The key word here is trying. I may have failed there.

I already had a lot of Canon gear from my photography days, so I wanted to reuse as much of it as possible. My Canon EOS 6D could record 1080p HD video. Although I did upgrade my camera bodies by trading in old gear, I stayed in the Canon ecosystem because I didn’t want to invest in all new lenses.

I had a copy of Camtasia for screen recording, but combining the Camtasia capture with video recorded in camera would require an additional workflow to assemble the final video. That meant some sort of video editing software. I would also need audio and lighting gear. All of this had to fit the requirements and constraints laid out above and be both cost effective and portable.

Studio Gear

My studio setup in my office.

Note: I will be linking to Amazon, Adorama, and other sites in this section. These are NOT affiliate links. I have not monetized my site, and I make no money off of any purchases you choose to make.

Cameras and Lenses

Canon EOS R6 with Canon EF 50mm F/1.4 USM Lens and DC Adapter – Primary Camera

Canon EOS Rebel SL3 with Canon EF 40mm F/2.8 STM Lens and DC Adapter – Secondary/Backup Camera

Note: Both cameras use DC Adapters when set up in the studio because these cameras will eat through their batteries when doing video. Yes, I’ve lost a few hours while waiting for all of my battery packs to recharge.


Audio

Synco Audio WMic-T1 Wireless Lavalier Microphone System (x2) – Primary audio

Comica CVM-V30 Pro Shotgun Microphone – Secondary audio

Blue Yeti USB Mic (Note: This is at my desk, but I only use it for recording voiceovers or while on Zoom/Teams/etc. calls. If I ever restart my podcast, I will use this for that as well.)


Lighting

Neewer 288 Large LED Video Light Panel (x2)

Viltrox VL-162T Video Light (x2)

Amazon Basics Light Stands (x2)

Other Hardware and Software

Blackmagic Design ATEM Mini Pro ISO – See Below

DaVinci Resolve (Note: DaVinci Resolve is a free, full-featured video editing suite. There is also a paid version, DaVinci Resolve Studio, that has a one-time cost of $299. Yes, it’s a perpetual license.)


A note on why I’m using the ATEM Mini Pro ISO

When I started, I was using Camtasia to record my screen while I recorded my presentation using my camera. Creating the final output required a lot of post-processing work to line up the audio and video across multiple sources.
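That manual alignment boils down to finding the time offset between two recordings of the same audio (say, the camera’s audio track and the screen capture’s audio track). As a hedged illustration of the idea rather than my actual workflow, here is a minimal pure-Python sketch that brute-forces the best-fit lag between two short sample arrays:

```python
def best_lag(reference, other, max_lag):
    """Find the lag (in samples) that best aligns `other` with `reference`.

    A positive result means `other` is delayed relative to `reference`.
    Brute-force cross-correlation: fine for short clips, far too slow for
    full-length audio (real editors use FFT-based correlation for this).
    """
    def score(lag):
        # Sum of products over the region where the two tracks overlap.
        total = 0.0
        for i, r in enumerate(reference):
            j = i + lag
            if 0 <= j < len(other):
                total += r * other[j]
        return total

    return max(range(-max_lag, max_lag + 1), key=score)


# Toy example: `cam` is `screen` delayed by three samples.
screen = [0.0, 1.0, 0.0, -1.0, 0.5, 0.0, 0.0, 0.0]
cam = [0.0, 0.0, 0.0, 0.0, 1.0, 0.0, -1.0, 0.5]
offset = best_lag(screen, cam, max_lag=5)  # -> 3
```

Once you know the offset, you shift one clip by that many samples (divided by the sample rate) on the editing timeline. The ATEM approach described next makes this step unnecessary.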

The ATEM Mini Pro ISO allows me to bring all of my audio and video sources into a single device and record each input. I can connect both cameras, my microphones, and any computers that I’m displaying content on (such as slides or demos) and record each of these inputs to a single disk. That means I don’t have to worry about managing data on multiple memory cards, and it simplifies my post-production workflow because I don’t have to synchronize everything manually.

There is a second benefit that I haven’t covered. It also allows me to get around a video recording limit built into modern cameras.

Most DSLRs and mirrorless cameras have a video recording time limit when recording to internal cards. Video segments are limited to approximately 29 minutes and 59 seconds. This limit isn’t due to file size or hardware limitations (although some cameras have shorter time limits due to heat dissipation issues). It’s an artificial limit that stems from import-duty rules the European Union placed on video cameras: still cameras that could record 30 minutes or more of continuous video were taxed at the higher video camera rate.

VMUG UserCon sessions are 40 minutes, and I was burned by the 30-minute time limit on a couple of occasions.

That recording time limit only applies when recording to the internal card, though. It does not apply to external devices like the ATEM Mini. To use this with a DSLR or mirrorless camera, you need one that supports sending a clean video feed over HDMI (Clean HDMI Out). Canon has a good video that explains it here. (Note: There are also USB webcam drivers for many modern DSLR and mirrorless cameras that allow you to do the same type of thing with tools like OBS.)

The Virtual Horizon Lab – February 2020

It’s been a while since I’ve done a home lab update.  In fact, the last one was over four years ago.  William Lam’s home lab project and appearing on a future episode of “Hello from My Home Lab” with Lindy Collier have convinced me that it’s time to do an update.

My lab has both changed and grown since that last update.  Some of this was driven by vSphere changes – vSphere 6.7 required new hardware to replace my old R710s.  Changing requirements, new technology, and replacing broken equipment have also driven lab changes at various points.

My objectives have changed a bit too.  At the time of my last update, there were four key technologies and capabilities that I wanted in my lab.  These have changed as my career and my interests have changed, and my lab has evolved with it as well.  Today, my lab primarily focuses on end-user computing, learning Linux and AI, and running Minecraft servers for my kids.

vSphere Overview

The vSphere environment is probably the logical place to start.  My vSphere environment now consists of two vCenter Servers – one for my compute workloads and one for my EUC workloads.  The compute vCenter has two clusters – a 4-node cluster for general compute workloads and a 1-node cluster for backup.  The EUC vCenter has a single 2-node cluster for running desktop workloads.

Both environments run vSphere 6.7U3 and utilize the vCenter Server virtual appliance.  The EUC cluster utilizes VSAN and Horizon.  I don’t currently have NSX-T or vRealize Operations deployed, but those are on the roadmap to be redeployed.

Compute Overview

My lab has grown a bit in this area since the last update, and this is where the most changes have happened.

Most of my 11th generation Dell servers have been replaced, and I only have a single R710 left.  They were initially replaced by Cisco C220 M3 rackmounts, but I’ve switched back to Dell.  I preferred the Dell servers due to cost, availability, and HTML5-based remote management in the iDRACs.  Here are the specs for each of my clusters:

Compute Cluster – 4 Dell PowerEdge R620s with the following specs:

The R620s each have a 10GbE network card, but these cards are for future use.

Backup Cluster – 1 Dell PowerEdge R710 with the following specs:

This server is configured with local storage for my backup appliance.  This storage is provided by 1TB SATA SSDs.

VDI Cluster – 2 Dell PowerEdge R720s with the following specs:

  • 2x Intel Xeon E5-2630 Processors
  • 96 GB RAM
  • NVIDIA Tesla P4 Card

Like the R620s, the R720s each have 10GbE networking available.

I also have an R730; however, it is not currently being used in the lab.

Network Overview

When I last wrote about my lab, I was using a pair of Linksys SRW2048 switches.  I’ve since replaced these with a pair of 48-port Cisco Catalyst 3560G switches.  One of the switches has PoE, and the other is a standard switch.  In addition to switching, routing has been enabled on these switches, and they act as the core router in the network.  HSRP is configured for redundancy.  These uplink to my firewall. Traffic in the lab is segregated into multiple VLANs, including a DMZ environment.
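With routing on the switches and HSRP for redundancy, each routed VLAN needs a virtual gateway IP shared by both switches plus one physical IP per switch. Here is a quick sketch of that addressing pattern using Python’s `ipaddress` module; the VLAN IDs and address ranges are invented for illustration and are not my actual plan:

```python
import ipaddress

# Hypothetical addressing plan: carve one /24 per VLAN out of a /16.
# The VLAN names mirror the traffic types described above.
lab = ipaddress.ip_network("172.16.0.0/16")
subnets = lab.subnets(new_prefix=24)

vlans = {}
for vlan_id, name in [(10, "Management"), (20, "Storage"),
                      (30, "vMotion"), (40, "Servers"), (50, "DMZ")]:
    net = next(subnets)
    vlans[vlan_id] = {
        "name": name,
        "network": net,
        # First three usable addresses: the HSRP virtual IP, then one
        # physical address per switch.
        "gateway": net.network_address + 1,
        "switch_a": net.network_address + 2,
        "switch_b": net.network_address + 3,
    }
```

The HSRP virtual IP (`gateway`) is what every VM and device uses as its default gateway; if one switch fails, the surviving switch takes over that address.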

I use Ubiquiti UniFi AC-Lite APs for my home Wi-Fi.  The newer ones support standard PoE, which is provided by one of the Cisco switches.  The UniFi management console is installed on a Linux VM running in the lab.

For network services, I have a pair of Pi-hole appliances running as virtual machines in the lab.  I also have Avi Networks deployed for load balancing.

Storage Overview

There are two main options for primary storage in the lab.  Most primary storage is provided by Synology.  I’ve updated my Synology DS1515+ to a DS1818+.  The Synology appliance has four 4TB WD Red drives for capacity and four SSDs.  Two of the SSDs are used for a high-performance datastore, and the other two are used as a read-write cache for my primary datastore.  The array presents NFS-backed datastores to the VMware environment, and it also presents CIFS file shares.

VSAN is the other form of primary storage in the lab.  The VSAN environment is an all-flash deployment in the VDI cluster, and it is used for serving up storage for VDI workloads.

The Cloud

With the proliferation of cloud providers and cloud-based services, it’s inevitable that cloud services work their way into home lab setups. My lab is no exception.

I use a number of cloud services in operating my lab across a few SaaS and cloud providers. These include:

  • Workspace ONE UEM and Workspace ONE Access
  • Office 365 and Azure – integrated with Workspace ONE through Azure AD
  • Amazon Web Services – management integrated into Workspace ONE Access, S3 as an offsite repository for backups
  • Atlassian Cloud – Jira and Confluence Free Tier integrated into Workspace ONE with Atlassian Access

Plans Going Forward

Home lab environments are dynamic, and they need to change to meet the technology and education needs of their users. My lab is no different, and I’m planning on growing my lab and its capabilities over the next year.

Some of the things I plan to focus on are:

  • Adding 10 GbE capability to the lab. I’m looking at some Mikrotik 24-port 10GbE SFP+ switches.
  • Upgrading my firewall
  • Implementing NSX-T
  • Deploying VMware Tunnel to securely publish out services like Code-Server
  • Putting my R730 back into production
  • Expanding my knowledge around DevOps and building pipelines to find ways to bring this to EUC
  • Working with Horizon Cloud Services and Horizon 7

Home Lab Update

Back in October of 2014, I wrote a post about the (then) current state of my home lab.  My lab has grown a lot since then, and I’ve started building a strategy around my lab to cover technologies that I wanted to learn and the capabilities I would need to accomplish those learning goals.

I’ve also had some rather spectacular failures in the last year.  Some of these failures have been actual lab failures that have impacted the rest of the home network.  Others have been buying failures – equipment that appeared to meet my needs and was extremely cheap but ended up having extra costs that made it unsuitable in the long run.

Home Lab 1.0

I’ve never really had a strategy when it comes to my home lab.  Purchasing new hardware happened when I either outgrew something and needed more capacity or had to replace broken equipment.  If it could be repurposed, an older device would be “promoted” from running an actual workload to providing storage or some other dedicated service.

But this became unsustainable when I switched over to a consulting role.  There were too many things I needed, or wanted, to learn and try out that would require additional capacity.  My lab also had a mishmash of equipment, and I wanted to standardize on specific models.  This has two benefits – I can easily ensure that I have a standard set of capabilities across all components of the lab, and it simplifies both upgrades and management.

The other challenge I wanted to address as I developed a strategy was separating the “home network” from the lab.  While there would still be some overlap, such as wireless and Internet access, it was possible to take down my entire network when I had issues in my home lab.  This actually happened on one occasion last August when the vDS in my lab corrupted itself and brought everything down.

The key technologies that I wanted to focus on with my lab are:

  1. End-User Computing:  I already use my lab for the VMware Horizon Suite.  I want to expand my VDI knowledge to include Citrix. I also want to spend time on persona management and application layering technologies like Liquidware Labs, Norskale, and Unidesk.
  2. Automation: I want to extend my skillset to include automation.  Although I have vRO deployed in my lab, I have never touched things like vRealize Automation and Puppet.  I also want to spend more time on PowerShell DSC and integrating it into vRO/vRA.  Another area I want to dive back into is automating Horizon environments – I haven’t really touched this subject since 2013.
  3. Containers: I want to learn more about Docker and the technologies surrounding it including Kubernetes, Swarm, and other technology in this stack.  This is the future of IT.
  4. Nutanix: Nutanix has a community edition that provides their hyperconverged storage technology along with the Acropolis Hypervisor.  I want to have a single-node Nutanix CE cluster up and running so I can dive deeper into their APIs and experiment with their upcoming Citrix integration.  At some point, I will probably expand that cluster to three nodes and use it for a home “private cloud” that my kids can deploy Minecraft servers into.

There are also a couple of key capabilities that I want in my lab.  These are:

  1. Remote Power Management:  This is the most important factor when it comes to my compute nodes.  I don’t want to have them running 24×7, but at the same time, I don’t want to have to call up my wife and have her turn things on when I’m traveling.  Servers that I buy need some sort of integrated remote management capability that does not require an external IP KVM or Wake-on-LAN, preferably one with an API.
  2. Redundancy: I’m trying to avoid single points of failure whenever possible.  Since much of my equipment is off-lease or used, I want to make sure that a single failure doesn’t take everything down.  I don’t have redundancy on all components – my storage, for instance, is a single Synology device due to budget constraints.  Network and compute, however, are redundant.  Future lab roadmaps will address storage redundancy through hyperconverged offerings like ScaleIO and Nutanix CE.
  3. Flexibility: My lab needs to be able to shift between a number of different technologies.  I need to be able to jump from EUC to Cloud to containers without having to tear things down and rebuild them.  While my lab is virtualized, I will need to have the capacity to build and maintain these environments in a powered-off state.
  4. Segregation: A failure in the lab should not impact key home network services such as wireless and Internet access.
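On the remote power management capability: as an illustration of what an integrated management API enables, here is a hedged sketch using the DMTF Redfish REST API found on newer iDRACs (iDRAC7 with recent firmware and later; the iDRAC6 modules in older 11th-generation servers offer IPMI instead). The system ID below is Dell’s usual default and worth verifying on your own hardware:

```python
import json
from urllib import request

# Dell's usual Redfish system ID; an assumption, not guaranteed on
# every model.
SYSTEM_ID = "System.Embedded.1"

def power_request(idrac_host, reset_type="On"):
    """Build the URL and JSON body for a Redfish ComputerSystem.Reset."""
    allowed = {"On", "ForceOff", "GracefulShutdown", "GracefulRestart"}
    if reset_type not in allowed:
        raise ValueError(f"unsupported ResetType: {reset_type}")
    url = (f"https://{idrac_host}/redfish/v1/Systems/{SYSTEM_ID}"
           "/Actions/ComputerSystem.Reset")
    body = json.dumps({"ResetType": reset_type}).encode()
    return url, body

def power_on(idrac_host, auth_header):
    # Actually sending the request: POST with an Authorization header.
    # (Not run here; a self-signed iDRAC cert also needs a TLS context.)
    url, body = power_request(idrac_host)
    req = request.Request(url, data=body, method="POST",
                          headers={"Content-Type": "application/json",
                                   "Authorization": auth_header})
    return request.urlopen(req)
```

The point is less the specific call and more that a scriptable interface lets me power hosts on from anywhere before a lab session and shut them down afterward, which keeps the electric bill in check.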

What’s in Home Lab 1.0

The components of my lab are:


Aside from one exception, I’ve standardized my compute tier on Dell 11th-generation servers.  I went with these particular servers because there are a number of off-lease boxes on eBay, and you can usually find good deals on servers that come with large amounts of RAM.  RAM prices are also fairly low, and other components like iDRACs are readily available.

I have also standardized on the following components in each server:

  • iDRAC Enterprise for Remote Management
  • Broadcom 5709 Dual-Port Gigabit Ethernet
  • vSphere 6 Update 1 with the Host Client and Synology NFS Plugin installed

I have three vSphere clusters in my lab.  These clusters are:

  • Management Cluster
  • Workload Cluster
  • vGPU Cluster

The Management cluster consists of two PowerEdge R310s.  These servers have a single Xeon X3430 processor and 24GB of RAM.  This cluster is not built yet because I’ve had some trouble locating compatible RAM – the fairly common 2Rx4 DIMMs do not work with this server.  I think I’ve found some 2Rx8 or 4Rx8 DIMMs that should work.  The management cluster uses standard switches, and each host has a standard switch for Storage and a standard switch for all other traffic.

The Workload cluster consists of two PowerEdge R710s.  These servers have a pair of Xeon E5520 processors and 96GB of RAM.   My original plan was to upgrade each host to 72GB of RAM, but I had a bunch of 8GB DIMMs from my failed R310 upgrades, and I didn’t want to pay return shipping or restocking fees.  The Workload cluster is configured with a virtual distributed switch for storage, a vDS for VM traffic, and a standard switch for management and vMotion traffic.

The vGPU cluster is the only cluster that doesn’t follow the hardware standards.  The server is a Dell PowerEdge R730 with 32GB of RAM.  The server is configured with the Dell GPU enablement kit and currently has an NVIDIA GRID K1 card installed.

My Nutanix CE box is a PowerEdge R610 with 32GB of RAM.


The storage tier of my lab consists of a single Synology DiskStation 1515+.  It has four 2 TB WD Red drives in a RAID 10 and a single SSD acting as a read cache.  A single 2TB datastore is presented to my ESXi hosts using NFS.  The Synology also has a couple of CIFS shares for things like user profiles and network file shares.
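As a sanity check on that layout: RAID 10 mirrors pairs of drives, so half the raw capacity is usable before filesystem overhead. A small sketch of the arithmetic (decimal terabytes, overhead ignored):

```python
def usable_tb(level, drives, size_tb):
    """Rough usable capacity (before filesystem overhead) for common RAID levels."""
    if level == "raid10":
        if drives % 2:
            raise ValueError("RAID 10 needs an even number of drives")
        return drives // 2 * size_tb   # half the drives hold mirror copies
    if level == "raid5":
        return (drives - 1) * size_tb  # one drive's worth of parity
    if level == "raid6":
        return (drives - 2) * size_tb  # two drives' worth of parity
    raise ValueError(f"unknown level: {level}")

# The Synology above: four 2 TB WD Reds in RAID 10.
capacity = usable_tb("raid10", 4, 2)   # -> 4 TB raw usable
```

Four 2 TB drives in RAID 10 yield roughly 4 TB usable, which is consistent with carving out a single 2 TB NFS datastore and still having room for the CIFS shares.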


The network tier consists of a Juniper SRX100 firewall and a pair of Linksys SRW2048 switches.  The switches are not stacked but have similar configurations for redundancy.  Each server and the Synology are connected into both fabrics.

I have multiple VLANs on my network to segregate different types of traffic.  Storage, vMotion, and management traffic are all on their own VLANs.  Other VLANs are dedicated to different types of VM traffic.

That’s the overall high-level view of the current state of my home lab.  One component I haven’t spent much time on so far is my Horizon design.  I will cover that in depth in an upcoming post.

Home Lab Expansions #VDM30in30

Over the last two weeks, I’ve made some significant changes to my home lab.  The changes were brought about by an electric bill that had been increasing significantly over the last few months.

I picked up two new servers, a PowerEdge R710 and a PowerEdge R610, on eBay that will replace the 3-node, 2U Dell DCS6005 that I had been using for my lab.  Both servers come with dual quad-core Xeon processors and 24 GB of RAM.  The R610 will be for server workloads, and the R710 will be for testing out VDI-related software.

Although I ended up with fewer cores and less RAM for running virtual machines, the two new servers have a few features that make them attractive for home lab use.  They include onboard power monitoring to track electricity usage, which I can easily view in the iDRAC.  The baseboard management on the DCS6005 nodes never worked right, so the new servers had iDRAC6 Enterprise modules added for improved remote management.  The new servers are also far quieter than the DCS6005; I can barely hear them once they are running.  Finally, they have more expansion slots, which will allow me to start testing GPUs with Horizon.

Home Lab Updates

Back in December, I wrote about my home lab setup.  In the last few months, I’ve made a lot of changes to my lab as I’ve added new hardware and capabilities and shuffled some other equipment around to new roles.  It’s also changed locations as my wife and I moved into a new house last month.

The “newest” hardware in my lab is all used gear that I picked up off of eBay.  There were some great deals on custom Dell-built cloud servers from a large cloud provider that more than tripled the overall capacity of my lab.  The servers run older AMD Opteron 2419 EE processors, so they trade a lot of performance for power efficiency, and they come with 48GB of RAM each.  They have their own set of quirks that need to be worked around, but that said, they run vSphere 5.5 well.

The Dell servers can be rather finicky with the network switches that they work with, and the Linksys that I got late last year was swapped out for a Juniper EX4200 that I was able to get on loan from my employer.

The PowerEdge T110 II that I purchased last year has been moved from being a compute node to being my primary storage node, and the storage OS has been migrated from OmniOS to Nexenta 4.0.  The PowerEdge T310 that was previously hosting storage has been retired, and I am selling it to a co-worker who is looking to start up his own lab.

I’ve also expanded the Fibre Channel footprint of my lab, and all of my servers are now connected to storage via Fibre Channel.  I recently picked up a Silkworm 3250 on eBay for $30.  I was also able to pick up a few 4Gb QLogic 2450 cards for about $8 apiece.

There is still some work that needs to be done on the lab.  I need to run some new electrical circuits into my makeshift server room as well as add some venting to take care of the excess heat.  I also plan on running Ethernet throughout the new house, and that will all terminate at my lab as well.

Looking for New Home Lab Storage

I’ve been a fan of Nexenta for a long time.  I’m not sure if it was Sun’s ZFS file system, the easy-to-use web interface, or how Nexenta was able to keep up with my changing needs as my lab grew and acquired more advanced gear.  Or it was support for VAAI.  Whatever the reason, or combination of reasons, Nexenta was a core component in my lab.

That changed a few months ago when I started a series of upgrades that culminated in my storage moving to a new server.  During those upgrades, I came across a few issues that forced me to change to OmniOS and NAPP-IT as a short-term solution while waiting to see if a new version of Nexenta was released.

Nexenta is no longer viable as a storage platform in my lab because:

  • Version doesn’t play nicely with the Broadcom NICs in the Dell PowerEdge T310 that I use for storage due to a line being commented out in the driver.  Even when I fix this, it’s not quite right.
  • Version 3.1.5 didn’t work period when I had USB devices plugged in – which makes it hard to use when you have USB hard drives and a USB keyboard.
  • Version 4 is vaporware.

The OmniOS/Napp-IT combination works, but it doesn’t meet one of my core requirements – VAAI support.

It doesn’t seem like a new version of Nexenta Community Edition will be coming anytime soon.  A beta was supposed to be released early in January, but that hasn’t materialized, and it’s time to move onto a new platform.

My requirements are fairly simple:

  1. Spouse Approval Factor – My wife is 7 months pregnant and wants to buy a house.  Any solution must be either open-source or extremely cheap.  The less I spend, the better.
  2. Support for Fibre Channel – I’ve started putting 4Gb Fibre Channel in as my storage network.  The solution must support using Fibre Channel, as I would prefer to keep it for my storage network.
  3. VMware APIs for Array Integration – My home lab is almost entirely virtualized, so any solution must support VAAI.

ZFS isn’t a requirement for a new system, and I’m not worried about performance right now.  A web interface is preferred but not required.

If you have any recommendations, please leave them in the comments.

Three Tips for Starting Your Home Lab

Home labs have been the topic du jour lately, and I covered my lab in my last post.  Virtualization makes it much easier to test new products and run an IT environment at home.  As Chris Wahl said, “Having a lab is a superb way to get your hands dirty with hardware and troubleshooting that just can’t be experience in a “cloud” environment.”

But where, and how, do you get started?  Here are three tips that will help you get started without breaking the bank.

Tip 1: Start Small

A good home lab takes time and money to build up.  You won’t be able to go out on day one and buy a few servers, shared storage, and decent networking gear to run a miniature enterprise environment in your basement.  If you’re just starting out or branching into a new area, you might not need systems that can do a lot of heavy lifting.  An older desktop or server might not be on the hardware compatibility list or even offer great performance, but it could be the starter environment that you use to get your feet wet on a platform.

Your lab doesn’t need to run on separate hardware either.  VMware Workstation (Windows/Linux)/Fusion (Mac) and VirtualBox are two virtualization products that allow you to run virtual machines on your desktop or laptop.  GNS3 can run Cisco IOS without having to buy actual Cisco hardware.  Performance won’t be the greatest, and it is very easy to bog down your machine if you aren’t careful, but it can be one of the fastest ways to start getting hands-on without a significant investment.

Tip 2: Look for Deals

The enterprise-grade equipment that you’d find in an office or data center is expensive, and it is priced outside of what most people would be willing to pay for hardware if it was purchased new.  But as you start working on more sophisticated things, you will want to get better equipment. 

Build Your Own Server

There are three good ways to go about doing this.  The first is to build your own servers.  Chris Wahl has a nice list of whitebox servers that members of the community have built.  The nice thing about this is that you can control exactly what components are in the system, and many of the designs listed have a sub-$1000 bill of materials before sales at Amazon or NewEgg.

Buy a New Server

If assembling a server isn’t something that you have the time or inclination for, then you can buy lower-end retail hardware.  ESXi runs on a surprising number of platforms, and the HCL includes inexpensive options like HP Microservers, Dell PowerEdge T110 II, and even the Mac Mini.  Even a low-end server or Mac Mini maxed out with RAM can easily cross the $1000 barrier, but you get the peace-of-mind of having a manufacturer warranty for at least part of the machine’s life.

Pre-Owned Equipment

Off-Lease.  Refurbished.  Pre-Owned.  Whatever you call it, it’s buying used equipment.  And like a day-old loaf of bread, it’s a little stale but still usable and much cheaper.

There is still a lot of life left in the three to five year old equipment that you can pick up.  Many of these servers will show up on the VMware HCL and run vSphere 5.1 or 5.5 without any problem.  Depending on where you get them from, you may get a warranty.

A few months ago, Scott Lowe took this route when building up his lab for OpenStack.  He picked up two off-lease Dell C6100 servers that provided him with 8 blades, 16 processors, and 192 GB of RAM.

Another possible source for purchasing used equipment is your employer.  Many employers, especially larger ones, are constantly refreshing the equipment in their datacenters.  The replaced equipment needs to be retained or disposed of, and your company may allow you to purchase some of it if policy permits.

eBay, Craigslist, and local computer recyclers may also be good sources of equipment, and you can often get very good deals on items that they collected from a business.

Caveat emptor applies whenever you buy used equipment.  Although most local businesses and eBayers have reputations to protect, you may not have any recourse if the server you bought turns out to be a rather large and expensive paperweight. 

All of the Above

As you build up your lab, you’ll probably end up with an odd mixture of equipment.  My lab has my PowerEdge T310 that I purchased new over four years ago and a T110 II from Dell Outlet utilizing used QLogic Fibre Channel HBAs that I picked up from a friend who runs a computer recycling business.

Tip 3: Utilize Free/Open Source/Not-for-Resale

The untimely death of Microsoft’s TechNet program hurt hobbyists and IT professionals by taking away a source of legitimate software that could be used almost perpetually in a home lab.  That’s been replaced with 120-day trials.  I don’t know about you, but I don’t want to be rebuilding a domain controller/DHCP/DNS infrastructure three times per year.  I pick on Microsoft here because many of the workloads I want to run in my home lab are Microsoft-based, and I find it to be a bigger pain to rebuild an Active Directory infrastructure than a virtual infrastructure.

VMware hasn’t had a TechNet equivalent for many years.  There have been murmurings in the community that it might be coming back, but that doesn’t seem likely at this point.  VMware’s trials only last 60 days on most products, although some, such as Workstation and Fusion, only have 30-day trials.  Although VMware has the free ESXi Hypervisor, the 5.5 version is crippled in that the vSphere Client cannot manage machines with the latest VM hardware compatibility levels.

If there are parts of your lab that you don’t want to rebuild on a regular basis, you will need to look to free and/or open source products beyond the Linux, MySQL, and LibreOffice that people normally associate with those categories.  Some vendors also offer Not-For-Resale licenses, although some of those offers may only be available if you possess a Microsoft or VMware Certification.

The list below does not include everything out there in the community that you can try out, but here are a few products that offer free or not-for-resale versions:

Bonus Tip: Be Creative

If you’ve ever read one of these types of lists on LinkedIn or the Huffington Post, you knew this was coming. 

If you look out in the community, you’ll see some very creative solutions to the problems that home labs can pose.  I’ve posted two of the best ideas below:

Frank Denneman built a rack for his servers using two Lack tables from Ikea.

Greg Rouche’s VSAN/Infiniband environment is built sans server cases on a wood bookshelf.

My Home Lab

The topic of home labs is a popular one lately, with several people talking about it on their blogs or Twitter.  They are one of the best learning tools that you can have, especially if you are a hands-on learner or just want to try out the latest technology.  I’ve had some form of a home lab since I graduated from college in 2005, and it has ranged in size from an old desktop running a Windows Server domain controller to the multi-device environment that I run today.

It’s taken me a few years to build up to where I am today, and most of my equipment was cheap enough to keep my wife happy that I’m not spending too much money.

The most recent upgrades to my lab were adding 4Gb Fibre Channel and switching from Nexenta to OmniOS.  I have also been slowly swapping out hard drives in my storage box to bring everything up to 1TB drives.  The last one should be arriving by the end of the week.

I use both Fibre Channel and iSCSI in my lab.  Fibre Channel is used to connect the storage to the compute node, and iSCSI is used to connect to the backup server.


Compute Node

  • Dell PowerEdge T110 II
  • Xeon E3-1240v2
  • 32GB RAM
  • ESXi 5.5
  • 2x 50GB OCZ Vertex 2 SSD
  • 1 Bootable 2GB USB Flash drive
  • 3x gigabit NICs (2 Single Port Intel NICs, 1 Broadcom Onboard NIC)
  • 1 QLogic 2460 4Gb Fibre Channel HBA


Storage Node

  • Dell PowerEdge T310
  • Xeon X3430
  • 8GB RAM
  • OmniOS w/ NAPP-IT Management Interface
  • ZFS (2 mirrored pairs w/ SSD Cache)
  • 3x 7200 RPM 1TB Hard Drives (1x WD Blue, 1x WD Red, 1 Seagate Constellation)
  • 1x 7200 RPM 500GB Hard Drive (soon to be upgraded to a 1 TB WD Red)
  • 1 60GB SSD
  • 2x 60GB USB Hard Drives for OmniOS
  • 4 gigabit NICs (2 Onboard Broadcom NICs, 1 Dual-Port Intel NIC)
  • 1 QLogic 2460 4Gb Fibre Channel HBA in Target Mode


Backup Server

  • HP WX4400 Workstation
  • Intel Core 2 Duo 4300
  • 4GB RAM
  • Windows Server 2008R2
  • Veeam 7
  • 80GB OS Drive
  • 2x WD Blue 500GB Hard Drives in software RAID1
  • 3 gigabit NICs (1 Onboard Broadcom NIC, 1 Dual-Port Broadcom NIC)


  • Firewall/Router – Juniper SRX100
  • Switch – Linksys 48-port gigabit switch