VMware NSX-T 3.1 Global Manager Deployment

I decided to write up some lessons learned when deploying the latest NSX-T instance in my home lab. I’m using NSX-T 3.1 and trying out the options to use the Global Manager option for deployment. Here’s a summary of what I worked on:

  1. NSX-T Global Deployment to include 2 Global NSX Managers clustered with a VIP including a Standby NSX Global Manager. Current NSX-T Federation supported features can be found here.
  2. Attach Compute Managers to Global and NSX Manager appliances.
  3. NSX Manager Deployment to include 2 NSX Managers clustered to manage my physical lab vSphere cluster and an additional 2 NSX Managers clustered to manage my nested vSphere cluster.
  4. Test cluster failover from node to node.
  5. All NSX Global and NSX Manager appliances configured to use Workspace One Identity for individual appliance authentication as well as the VIP used for each cluster.
  6. Configure all NSX Global and Manager appliances for backup.

Deploying NSX-T Global Managers

This process was fairly simple, downloading the NSX-T 3.1 unified appliance, adding the unified appliance to my vSphere content library, and walking through the NSX-T Unified OVF deployment to get the first NSX-T Global Manager appliance up and running.

For my home lab use I picked the Small size (4 vCPU x 16GB RAM x 300GB storage) for everything, only because I didn’t want to consume too many resources for lab testing. If this was a production deployment of any size and scale, I would choose the Large option (12 vCPU x 48GB RAM x 300GB storage) to ensure I had enough resources available for production usage.

On the customize template section, I chose to use the same password for the root, admin, and audit accounts, and set my hostname, IP, mask, gateway, DNS, domain search list, and NTP server settings. It’s important to choose the proper role from the drop-down box, in this case NSX Global Manager. I also enabled the SSH and Root Login options for lab purposes, but I would probably not enable those in production.

**Lab Note – before powering on the NSX Global appliance, I reconfigured the virtual machine to not reserve the CPU and RAM. I’m running a lab and need the resources, but note that for production deployments the virtual appliance does reserve CPU (in MHz) and RAM based on the appliance size picked during the deployment process.

Once the NSX Global Manager appliance completed the OVF deployment, I powered on the virtual machine and, once services were up and running, connected to the appliance at the https://x.x.x.x address I configured during the wizard and accepted the EULA.

With a working NSX Global Manager appliance, I went under the System Tab and gave it a Global Manager Name by selecting the Location Manager menu item on the left and activating the cluster. It’s important to note the Global Manager name used (in my case Hudsonhouse), as you’ll use it later when adding a standby node.

Note the prompt already to add a standby node.

** A little tip for lab environments, if you don’t want your appliance passwords to expire, you can SSH into your NSX appliances and run the following commands to prevent passwords for the root, admin and audit accounts from expiring. Do this for every NSX appliance you don’t want to have the passwords expire on. In production deployments, I would recommend changing the password using the same password rotation policy you use on other infrastructure services.
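For reference, the commands are along these lines, run from the NSX CLI as admin on each appliance (verify the syntax against the NSX-T CLI guide for your version):

clear user admin password-expiration
clear user root password-expiration
clear user audit password-expiration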

Next up, add a compute manager, by going to the Fabric Menu on the left and selecting Compute Managers and clicking the + ADD COMPUTE MANAGER option to enter my vCenter information.

When complete, I ended up with 2 vCenter servers for each environment that I planned to use NSX-T in a federated model.

With compute managers in place, the process of adding nodes to the existing Global Manager cluster is simple and now streamlined. Going back to the Global Manager Appliances menu on the left, there’s an option to add an NSX Appliance. In the screenshot below, I’ve already added my second appliance using the Add NSX Appliance wizard. This is much easier than deploying from template and based on the cluster type, automatically deploys a similar node into the cluster.

Next step is to add a virtual IP which, as shown in the screenshot above, is quite simple and is configured against the primary (first) node deployed in the cluster. Don’t fret as the VIP is added: your connection to NSX Manager will drop while both nodes are reconfigured with the VIP address as part of the clustering services, and it will come back with an IP address assigned to the cluster as well as directly to one of the nodes.

At this point, you can rinse and repeat depending on how many active nodes are needed in the cluster using the size and scaling recommendations.

Next up, deploying a standby node for the cluster.

**I wish the Add NSX Appliance wizard worked for the standby node, but unfortunately, in my testing, it did not, and you are required to deploy from OVF again in order to stand up a unique instance that you’ll link to the cluster later.

For now, follow the same steps (recommended to deploy a standby node in a different location and add to the cluster) to deploy another appliance from OVF template using the same process above.

Once completed, we’ll add the standby Global Manager appliance using the Location Manager option. Go to the System Tab on top and then go to the Location Manager option on the left side of the Global Manager UI.

You’ll notice that there’s a Caution message at the top of the screen recommending to add a standby Global Manager for high availability. My active cluster is shown with both nodes and the corresponding VIP along with an option to Add Standby.

We’ll get that information later from the new Global Manager standby appliance we deployed from OVF previously. Log into the standby Global Manager appliance now that it’s deployed successfully, powered on and responding, and accept the EULA. Go to the System tab -> Global Manager Appliances menu on the left -> click on the “View Details” option for the appliance.

Viewing the details allows us to get the Cert Thumbprint needed to add as a standby appliance in the main Global Manager cluster.

Click on the copy icon next to the Cert Thumbprint that we’ll use to add to the Global Manager cluster as a standby.

Add compute managers just like in the main cluster configuration so the standby appliance has access to the same compute manager resources. Once that’s complete, we’ll go back to the active Global Manager cluster and use the details gathered from the standby appliance to add it as a standby node.

If logged out, log back into the active NSX Global Manager cluster appliance, go to the System Tab -> Location Manager left menu and click on the “Add Standby” option. Here’s where you’ll populate the details from the standby node, including the name, FQDN/IP address, admin username and password, and then paste in the cert thumbprint copied from the standby node.

Click on the Check Compatibility option to verify connectivity and then click save. Once completed, you’ll have an NSX Global Manager Cluster with a Standby Node available.

Deploy On-Prem NSX Managers

***Note – no licenses need to be installed at this point, as the Global Managers do not require any licensing, but we will need to determine the version of NSX Manager we want to use and gather the licenses required to make them work.

Back to my content library to deploy another NSX OVF file, with the exception being that the role for this appliance will be the NSX Manager role instead of the Global Manager role.

Same process as before for my homelab, choosing the Small option, ensuring the NSX Manager role was selected, and before powering on, reconfiguring the VM to not reserve CPU or RAM.

Once powered on and services are responding, I log into the appliance using the https://x.x.x.x address using the admin username and password created during the OVF deployment and accept the EULA.

Now that this is an NSX Manager, you’ll also be prompted to join the CEIP. If you are using VMware Skyline, then I would recommend joining.

You’ll also be prompted at first login to choose the interface type you want to use, either Policy or Manager interface. See the differences between the two interfaces here in order to adjust based on your situation.

I’m setting everything up under a federated deployment so I’m using the Policy mode option.

I want to add a second NSX Manager to my setup to cluster the services together, so I first need to add a compute manager in order to take advantage of the nifty deployment wizard to add another NSX Manager appliance to the cluster. I’ve explained this previously, but a quick reminder: go to the System tab -> Fabric menu on the left, expand to view the Compute Managers option -> and then +Add Compute Manager. One new option now available is Create Service Account on the vCenter compute manager, which I’m selecting.

Add the appropriate vCenter appliances as compute managers where you plan to deploy additional NSX Manager appliances. (**Note – you typically want one Compute Manager vCenter tied to one NSX Manager cluster. Because I added a secondary vCenter, in my case the second vCenter, to the first cluster, I was not able to add that same compute manager to my second NSX Manager cluster, as it showed as in use by another NSX Manager cluster.)

Now back to the System Tab -> Appliances left menu option to add an additional NSX Manager appliance to the cluster.

Once the second NSX Manager appliance has been deployed from the wizard, I like to keep my VM names consistent (**note – the Add NSX Appliance wizard doesn’t allow customizing the appliance VM name and will use the hostname to name the VM). I also reconfigure the virtual appliance again to not reserve CPU and RAM, and set up a VIP for the cluster. You’ll notice that once the cluster has been successfully deployed, there is now a prompt to install a license key.

**Note – Production Deployments of NSX Manager are recommended to have 3 nodes.

Once completed, you should have two nodes minimum in the NSX Manager cluster.

Time to license the cluster, either by clicking on the hyperlink to the licenses page in the informational banner above or by going to the System Tab -> Settings left menu -> Licenses section. Deploy the license you plan to use for NSX; in my case I’m using NSX Enterprise Plus licensing. By default you’ll have the NSX for vShield Endpoint license already installed in case you plan on using a hypervisor-based antivirus solution. Click on the +ADD LICENSE option and add your appropriate license key.

At this point, I’m going to rinse and repeat for a second NSX Manager cluster to run my nested vCenter lab environment. I’ll be doing this a little differently using the NSX CLI, as I don’t have enough resources on my nested vSphere cluster to run both NSX Manager servers. I used the CLI instructions found here to set up both nodes, license the cluster, add a Compute Manager, and assign a VIP to the new cluster. (Fast Forward)

So let’s pause for a second and review what’s been completed thus far.

  1. Setup an NSX Global Manager 2-node cluster with a VIP and Standby node. (No Licenses Required)
  2. Setup an NSX Manager 2-node cluster (3 recommended for production) with a VIP, licensed using NSX Enterprise Plus key and connected my home lab physical vSphere Hosts and vCenter as a Compute Manager.
  3. Setup an NSX Manager 2-node cluster (3 recommended for production) with a VIP, licensed using NSX Enterprise Plus key and connected my home lab nested vSphere Hosts and vCenter as a Compute Manager.

Next Step is to add both NSX Manager Clusters to my Global Manager Cluster using the Location Manager.

Logging into my NSX Global Manager VIP address, I selected the System Tab at the top and then the Location Manager option on the left Configuration menu, and added both NSX Manager clusters using the Add New Location option. First we’ll need the NSX Manager certificate thumbprint from each NSX Manager cluster, which can be gathered via CLI: SSH into the NSX Manager server and run the command ‘get certificate cluster thumbprint’. Copy and save the output to paste into the field Location Manager uses to verify connectivity. Now, in Location Manager on the Global Manager, click on the Add New Location option. I’ll do this for each of my NSX Manager clusters.

Once completed, I now have two NSX Manager clusters connected to the Global Manager cluster, and I can start working on prepping hosts at both locations, setting up transport zones, and playing around with NSX Intelligence (possible blog forthcoming).

Testing Cluster Failover Capabilities

At this point I wanted to test cluster failover for the Global Manager and both NSX Manager clusters. The scenario I’m simulating is a patch or upgrade, where a node would be in maintenance and cluster services, including the VIP, would fail over to the second node in the cluster. Since this is a fairly new install, I don’t have any upgrade packages available yet (upgrade details will follow once available); however, it’s important to mention how the upgrade process works. You’ll start with each local NSX Manager and then upgrade each Global Manager. The upgrade works by using the ‘orchestration node’, which is the node not directly assigned the cluster VIP; it can also be found by running ‘get service install-upgrade’ on any of the nodes via CLI. This tells you which node will orchestrate the upgrade process for the ESXi hosts, Edge cluster(s), Controller cluster and management plane.
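For example, from an SSH session to any of the manager nodes as admin:

get service install-upgrade

In my experience the output includes which node the install-upgrade service is enabled on, which is the node that will act as the orchestrator.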

For immediate testing purposes I decided to validate this from an availability standpoint and simply restarted nodes in each cluster, observing the VIP being reassigned to the second active node and back again, node by node. The test was designed to demonstrate the separation between the management/control plane and the data plane. Keep in mind that the data plane is not affected when the control plane is upgraded or has a loss in connectivity due to a cluster node rebooting. This is similar to how upgrading a vCenter server only impacts your ability to manage the environment while VM/host operations continue to function. As expected, each cluster temporarily lost connectivity through the web interface during failover of the VIP, but only for a couple of minutes.

Configuring NSX Federation with SSO using Workspace One Access

This is a follow-up from a previous blog on this topic where I configured a single NSX Manager integrated with Workspace One Identity to provide SSO capabilities. That can be found here.

I’m taking a little different approach, as I’m going to set up access using the VIPs created for each NSX Manager cluster and the Global Manager cluster, starting with the Global Manager cluster. In the previous article, load balancing was not used, but I want to use the VIP for each cluster this time around. First I need to configure a few things on Workspace One Access, similar to what I did in the previous article: I need to create 3 new applications corresponding to my 3 VIPs.

App1 – NSX (NSX Global Manager VIP access)

App2 – NSXLab (NSX Manager VIP access for my physical lab)

App3 – NestedNSXLab (NSX Manager VIP access for my nested lab)

I’m going to follow the same procedure for the three apps I’m creating using the previous blog details (saving some time here) and should have 3 new Remote Access App Service Client tokens created that I can use with each of the 3 NSX Clusters.

Following the same procedure as in the previous blog, I need the certificate thumbprint of the Workspace One Access server, which I’ll need when I register each NSX cluster using its unique service token. SSH into the WS1A server and run the following command ‘openssl s_client -connect <FQDN of vIDM host>:443 < /dev/null 2> /dev/null | openssl x509 -sha256 -fingerprint -noout -in /dev/stdin’ substituting the FQDN details of the actual WS1A server. Copy and paste this value as we’ll use it in the setup of each NSX Manager cluster.

Back at the NSX Global Manager, logging into the NSX Global Manager VIP, I selected the System Tab and, on the left menu under Settings, selected Users and Groups, then changed to the Identity Manager option. Here’s where I’ll configure the SSO authentication options. I’ll need the WS1A service token details to populate this correctly, along with the WS1A certificate thumbprint details. I’m not using an external load balancer, just the cluster VIP (e.g. the FQDN of the Global Manager VIP is nsx.hudsonlab.com).

With that configured successfully on the NSX side, a new app is now registered on WS1A that I can assign to myself or anyone else, based on their role, for access to the NSX Global Manager environment.

Rinse and Repeat for the other two NSX Manager Clusters

As an alternative, I can log directly into my Workspace One Access web interface and now access all three environments with SSO directly.

***Important – Once SSO has been configured, don’t forget to go in and add your AD users/groups to the NSX Manager environments based on the role and scope you want them to have access to; in my case I’m using my default vmadmins Active Directory group to grant permission. If you forget to do this before logging off, you’ll have a difficult time logging back in once SSO has been enabled.

Now when authenticating to each of my cluster VIP names, I’m redirected to WS1A to authenticate and granted access to log in to each NSX Manager cluster.

The failsafe with this login method is that if WS1A is ever down, you will be prompted to log in to the NSX Manager with local credentials, or you can force a local login using the following format: https://mynsxmanager.com/login.jsp?local=true.

Backing up NSX Managers

We’re on the home stretch; the last thing I wanted to accomplish was setting up backups for each appliance to ensure I have something to recover from if any of the NSX Manager appliances goes bad. On the Global Manager, under the System Tab, left menu option for Lifecycle Management, there’s a Backup and Restore option. I’m going to use SFTP to my NAS to accomplish this task.

To do this I’ll first need the certificate thumbprint on my NAS. I’m using Filezilla to connect and it has a nifty option to view the thumbprint when you connect and click on the key icon on the bottom.

I’ll use the SHA256 thumbprint to connect with the backup option on NSX Manager. Once connected, I’ll set up a schedule to back up on a recurring basis using the edit button under the schedule option.

Just for fun, I’ll kick off a backup to make sure it works successfully.

**Note – By logging into the NSX Global Manager, you can configure backups for all the Location Manager clusters from one view.

Rinse and repeat for your NSX Manager clusters and you’ll have a recurring backup set in place for your NSX Federation.

Summary

So let’s wrap up what was accomplished.

  1. NSX-T 3.1 Federated deployment
    • 2 Global NSX Managers clustered with a standby node
    • 2 NSX Managers clustered for the physical lab gear
    • 2 NSX Managers clustered for the nested lab
    • All Clusters configured with a VIP
    • All clusters configured for backup
  2. All NSX Manager VIP DNS Names configured for SSO using Workspace One Access
  3. Simple failover testing (actual upgrade testing will follow)

Hopefully you found this valuable. Now on to my next adventure as I begin to set up both lab environments with software-defined network capabilities; you may have an NSX Intelligence blog coming too!!

vSAN Witness on ESXiOnPi

Another continuation of my main blog post, working through my 3 use cases, and this one happens to be something a lot of people have been asking me about: a 2-node vSAN edge site with a remote witness, but what if the remote witness was in the same room and running on a Raspberry Pi? Here’s where I’ll test that option.

Let’s set this up a bit: I’ll be using William Lam’s virtual ESXi content library again to deploy two virtual ESXi servers and configure a 2-node vSAN cluster. I’ll use the VMware Fling again, running on a Raspberry Pi 4, to act as my vSAN witness.

Here’s my cluster setup, a cluster called vSAN ROBO with two ESXi hosts running nested and as you can see, they have the error “No datastores have been configured”. Also in the view below is the single raspberry pi4 running the ESXiOnArm fling that I’ll be using as my remote witness. If you need instructions on how I setup the RP4 with ESXi and configured for vSAN then review the main blog post above and the other blog on how I setup a 2-node vSAN cluster on RP4s running ESXi here.

I’ll use the Cluster Quickstart option to configure the 2-node vSAN ROBO setup after placing both nested hosts into maintenance mode. In step 1, cluster basics, I’m only going to turn on vSAN services.

After the hosts have been placed into maintenance mode and vSAN services added, step 2 of the Add Hosts workflow has already run the pre-checks for me, so at this point I’m going to walk through step 3 and configure the cluster.

I already have a distributed virtual switch in use and will not be using the network configuration wizard.

2 node option selected, de-dupe and compression enabled and choosing not to be notified at this time on host update preferences.

I’m choosing my 4GB drives as the caching tier and my 100GB drives as the capacity tier.

Now this time around I’m selecting the vSphere host running the VMware ESXiOnArm fling on my raspberry pi4 as the witness for the 2-node vSAN cluster.

Then selecting the caching drive and capacity drive on the PI host.

Once completed, I have a summary page on how the cluster will be configured. All the hosts are already configured with NTP settings.

Under the cluster tasks view, you can see the progress as it converts it to a stretched cluster using the remote witness and creates the vSAN datastore.

Once completed, I now have a 2-node vSAN cluster with 200GB of storage available as a datastore to use.

Now time to test a VM deployment or two, but I’ll need to turn on DRS and vSphere HA to really take advantage of the benefits of a resilient vSphere vSAN cluster that could be used for a remote edge use case. Hopefully you found this valuable.

Create a vSAN 2 Node cluster on ESXi On Pi

Continuing on from my main blog post around using Raspberry Pi 4b devices with the vSphere Fling, my first use case involves creating a 2-node vSAN cluster on these devices using a virtual remote witness. Here’s how I set this up in my homelab.

Assuming you already have 2 Raspberry Pi 4b devices installed and configured with the vSphere fling using the instructions linked in the main blog post above, I added the vSAN service to the VMkernel port that also carries the management network, to make it simpler to set up both devices.
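For reference, if you’d rather tag the VMkernel port for vSAN traffic from the ESXi shell instead of the UI, the equivalent command should be along these lines (assuming vmk0 is the port in question; verify against the esxcli vsan documentation for your build):

esxcli vsan network ip add -i vmk0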

I’ve also deployed a virtual vSphere host as my vSAN witness server, as shown above, using William Lam’s content library that I have configured on my vCenter server. Same setup as above: adding the vSAN service to the VMkernel port that also carries my management network.

Next up, adding some flash drives to the Raspberry PIs for cache and capacity drives. I have two Kingston DTSE9 8 GB flash drives and two EMTEC 8GB Flash Drives I’ll be using, one of each plugged into both of the RP4s.

I decided to format all the flash drives first on my Mac in order to make sure they were clean. The process was the same for all 4 flash drives: using Disk Utility, erasing each drive and formatting it as FAT, which I’ll eventually erase again within the config option in vCenter. (Examples given, 1 for each type of flash drive)

Once all 4 drives had been formatted, I plugged one of each into the RP4s. In order to see them as storage devices, I needed to trick ESXi into seeing them as USB drives by killing the USB arbitrator service. This can be done by SSHing into each RP4 and running the following command.
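The usual way to do this from the ESXi shell is to stop the USB arbitrator service:

/etc/init.d/usbarbitrator stop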

I then added the advanced configuration to each RP4 vSphere host in order to allow the devices to be seen by vSphere as USB storage devices and to prevent the USB arbitrator service from running upon restart of the host, by adding these options with a value of 1.
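For reference, the two advanced options commonly used for this (both set to 1) are VSAN.AllowUsbDisks, which lets vSAN claim USB-attached disks, and USB.arbitratorAutoStartDisabled, which keeps the arbitrator from starting again at boot. I’m going from memory on the option names, so double-check the ESXi-Arm Fling documentation; from the ESXi shell they can be set roughly like this:

esxcli system settings advanced set -o /VSAN/AllowUsbDisks -i 1
esxcli system settings advanced set -o /USB/arbitratorAutoStartDisabled -i 1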

Next up, viewing the USB storage devices, erasing the partitions on each flash drive and marking them as Flash Hard Drives for use with vSAN. After plugging in the flash drives as seen below, in vCenter, I’m selecting each RP4 ESX host and looking at the storage devices in order to tag them as flash disks and erase the partitions so they’ll be ready to use as disks for vSAN.

keychain USB running ESXi, silver kingston plugged into USB3 for cache drive, colored flash drives as capacity drives

Selecting each ESX host and going to the Configure Tab, under Storage, under Storage Devices, the drives should appear now as HDD disks and FAT formatted, so I’ll mark them as flash drives and erase the partitions.

I’ll erase partitions first and then mark as flash and do this for each drive on both RP4 hosts.

Fast forward: both RP4 hosts now have two USB drives each with erased partitions, both marked as flash and ready for a 2-node vSAN cluster. Turning on vSAN at the cluster level, I’ll choose the 2-node option and use a virtual vSphere host as my witness as mentioned previously.

Once completed, I now have a 2-node vSAN cluster running on 2 RP4s running ESXi with a virtual remote witness host to complete the process.

(**NOTE – don’t forget to install a license for vSAN if you want to use it for an extended period of time)

ESXi on Raspberry Pi

With the release of the new VMware Fling, you can now run ESXi on a Raspberry Pi 4b. I used 2 of the Canakit Raspberry Pi 4B 8GB kits for my testing.

2 Raspberry Pi 4b STR32 8GB kits

Two kits allow me to play around with a couple different options, and I decided to work through 3 use cases for my testing.

  1. Create a 2-node vSAN cluster using the RP4s and a remote witness (blog post here)
  2. Use a RP4 as a remote witness for a 2 or more node vSAN cluster (blog here)
  3. Frankenstein project running a VeloCloud edge with an LTE uplink (blog coming)

First things first, I needed to load vSphere on the SD card inserted into RP4 units using the provided instructions on the VMware Fling Link above.

I’m using Samsung EVO Plus 32GB SD cards: one in the RP4 for the UEFI firmware, one to load the ESXi installer, and one to install ESXi onto. Following the instructions on how to configure the UEFI firmware using the Fling documentation was pretty simple. I used somewhat the same instructions to format one of the SD cards on my Mac and copy the contents of the ESXi 7 ISO file, mounted on my Mac, onto the SD card shown in the blue Lexar SD card reader. In the shot above, I have the installer files in the Lexar and just below that (included in the kit I purchased) is the USB SD card reader with a blank SD card installed to install ESXi onto.

The tricky part I ran into was toggling between my primary monitor HDMI connection and the USB C connection on my laptop so I could see what was happening on the RP4 as each time it rebooted it defaulted back to my Mac.

The key here is to be patient as the RP4 is booting up; it takes a bit before the ESXi installer kicks in, and then quickly hit the Control+O option to add the advanced installer option to change the ESXi partition size.
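If it helps, the advanced option in play here is, to my recollection, ESXi 7’s autoPartitionOSDataSize boot option appended at the installer prompt, for example:

autoPartitionOSDataSize=8192

Double-check the Fling instructions for the exact option name and the value they recommend.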

After getting the ESXi installer going, the process was pretty simple at this point: selecting the 32GB blank SD card to install onto and following the normal installer prompts. Once completed, the installer prompts to remove the install media and reboots the RP4.

As seen above the installer media has been removed and I only have the SD card with the UEFI firmware and boot loader and in the back, the USB card reader with the install of ESXi7. It boots up and when plugged into a network cable, grabbed a DHCP address off my network, allowing me to log in and verify everything looks like a typical vSphere stand-alone host.

I took a couple more screenshots to show the default configuration but decided to try a couple things outside the default. First adding another USB NIC and then viewing the partition data to confirm my advanced config worked during the install. I also added an NFS shared datastore so I could play around with registering and unregistering virtual machines that I have in my home lab.

All in all, pretty simple to setup once you get the UEFI firmware done using the Fling instructions.

I also decided to add an HDMI switch to view all the RP devices using one monitor. RP4 device running ESXi. Next up, work through the use cases mentioned above.

VMworld 2020 (it’s a virtual thing)

VMworld is right around the corner, but this year will be a little different with everyone still sheltering in place: VMworld 2020 will be a virtual conference. Running September 29th through October 1st, 2020, attendees will be able to sign up for unlimited sessions based on the current catalog (don’t forget the schedule builder went live on September 1st). There will also be additional activities before and after the conference this year, ranging from Customer Technical Exchange sessions to the traditional TAM customer day sessions with the more personalized content TAM customers have become accustomed to. I put together a mindmap around all things related to VMworld 2020 here that I hope will help guide you around all the events and activities surrounding VMworld this year.

As VMworld events and activities continue to progress and during VMworld itself, I’ll continue to update this map and also try to put my perspective on the announcements during the conference and keep this page updated going forward. Don’t miss out on this free virtual conference and see how VMware is helping customers engage globally to help make things possible together.

Configuring ESXi Hosts for Active Directory Authentication using the vSphere Authentication Proxy Service on vSphere 7

Ever wondered how to configure direct login access to your vSphere hosts using your Windows Active Directory credentials?

I’ve been having these conversations with varying VMware customers over time and the subject always comes up when it comes to implementing some additional security controls over your VMware infrastructure.

As you are aware, the direct login access to an individual vSphere ESXi host has been available for quite some time but most customers that I have worked with use the traditional root login to access individual hosts. This may be needed due to troubleshooting host disconnect issues or troubleshooting an individual ESXi host when vCenter is not available or not being used. Other options may include using Auto-Deploy to provision hosts and have them automatically joined to the domain based on hosts pre-defined in the vCenter access control list.

To add a more secure manner of authentication to your vSphere ESXi hosts, and to better track direct login attempts made with accounts other than root, you can configure stand-alone hosts, or even hosts connected to vCenter, to use Windows Active Directory authentication. This will also allow you to set a root password rotation policy in place and restrict root logins to only those critical situations where other authentication methods are not available.

There are 5 configuration steps necessary to make this happen.

  1. Enable and configure the vCenter Authentication Proxy service on vCenter.
  2. Copy the vCenter certificate to a shared directory your ESXi hosts will have access to.
  3. Configure the preferred Windows Active Directory group that will be used for authentication to ESXi hosts.
  4. Configure the advanced settings on ESXi hosts for the Active Directory Group
  5. Join the ESXi hosts to the domain using the vCenter Authentication Proxy

Step 1

There are actually two parts to this first step.

The first part is setting up the configuration on the vCenter Server using the Configure Authentication Proxy settings. If you are logged into vCenter, select the vCenter and go to the Configure Tab and under settings, select the Authentication Proxy option. You’ll enable the service and configure your domain settings for the Domain to join, the Domain User authorized to join computers to the domain and the Domain User Password.

The second part involves logging into the vCenter VAMI and enabling SSH if not already enabled. (https://myvcenter.mydomain.com:5480). Login to the VAMI UI and select Access on the left pane, edit your Access settings and enable SSH login.

Once SSH is enabled, you will open a putty session to your vCenter and run the following command logged in as root.

/usr/lib/vmware-vmcam/bin/camconfig ssl-cliAuth -e

This command enables the authentication proxy service to accept incoming ssl connections to join the domain provided they are setup with the correct SSL connection, which we’ll get to next.

Step 2

While still logged into the vCenter SSH session as root, you will need to copy the certificate used by the vCenter Authentication Proxy service to a shared directory that your ESXi hosts have access to (in my case, as the example below shows, I am copying the rui.crt file from the vCenter to a shared datastore directory on my Synology storage NAS).

scp /var/lib/vmware/vmcam/ssl/rui.crt root@192.168.0.201:/vmfs/volumes/synologynas/authproxycert/rui.crt

Step 3

Now you’ll need to configure your Active Directory Group to be used for authenticating users that are members of that group for direct login access to the ESXi host. The default group on ESXi 7 is called ‘ESX Admins’, so you have two choices, either create an AD group called ESX Admins or reconfigure the ESXi host advanced settings to use a preconfigured group of your choice. In my case, I created an AD group called ‘VMAdmins’ that I wanted to use.

Step 4

Now you’ll need to set the advanced setting on the ESXi servers to use that group. The configuration setting is called ‘Config.HostAgent.plugins.hostsvc.esxAdminsGroup’. You can do this manually on each host that you want to use Active Directory authentication on, or you can use a PowerShell script as the example below details.

Manually

From vCenter: Select your ESXi host and go to the Configure Tab, under System select Advanced System Settings and click on the Edit button on the top right of the screen.

Search for ESXAdmin and the advanced option should pop up pre-populated with the default group (I’m changing mine to VMAdmins)

Once you’ve made the change, click on OK and you’re done.

Scripted

Get-VMHost | Get-AdvancedSetting -Name Config.HostAgent.plugins.hostsvc.esxAdminsGroup | Set-AdvancedSetting -Value "GROUPNAME" -Confirm:$false

In my case I would sub GROUPNAME with vmadmins.

Step 5

Now the last step involves importing the certificate we copied to the shared datastore location in step 2 above to each ESXi host we want to join to the domain, and then selecting the option to join the domain using the vCenter Authentication Proxy. (Note: although I would recommend against it, you can bypass the certificate import process by setting the advanced setting “UserVars.ActiveDirectoryVerifyCAMCertificate” to 0 on each host.)

Select your ESXi host from the vCenter console, go to the Configure Tab and under System, select Authentication Services.

You’ll want to first choose the Import Certificate option and provide the certificate path and the server IP address of your vCenter server running the Authentication Proxy service we just configured above. You’ll use the format of the datastore name in brackets followed by the path to the folder and cert name, as in my example below, along with the IP address of your vCenter.
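To make that concrete using the names from my scp example in Step 2, the certificate path field would look something like this (substitute your own datastore and folder names):

[synologynas] authproxycert/rui.crt

with the IP address of the vCenter server running the Authentication Proxy in the server field.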

Next you’ll choose the Join Domain option and populate the Domain name you’ll be joining the ESXi host to and then select the radio button for Using proxy server and populate the IP address of your vCenter server. (in my example below, I’m joining the ESXi host to my lab domain and using my lab vCenter IP address) Click on OK and the host should join the domain securely.

Just to verify, here’s a look from the Active Directory Users and Computers view.

That’s it, 5 easy steps to get your hosts joined to Active Directory. Now I can log in with an account that is a member of the AD group I chose to use (in my case the vmadmins group), and those direct logins are recorded per user in the ESXi syslog for tracking from a security point of view.

This works for SSH access to your hosts as well so you don’t have to log in as root all the time.

***It’s important to note that when using host profiles, the advanced settings that were configured during the previous steps are captured, so when using Auto Deploy, assigning the host profile to the corresponding ACL host will have this information pre-populated and can be setup using the Image Builder process.

References: VMware Documentation Using vSphere Authentication Proxy

Configuring NSX-T Manager with VMware Identity Manager (i.e. Workspace One Access)

For all the NSX experts out there, I thought you might find this helpful, especially with all the requests to leverage the VMware Identity Manager platform (recently rebranded Workspace One Access) as the source for authentication into various VMware products.

As I’ve been working through the upgrade of my home lab, I recently used vRealize Suite Lifecycle Manager to deploy various VMware products to help manage my “private cloud” homelab environment. vRSLCM is a slick tool and I’ll probably write up something on that at a later date. One of the neat features is the ability to deploy VMware Identity Manager as part of the automated deployment method, and it integrates all the VMware solutions together with Identity Manager for authentication.

Now that I have a VMware Identity Manager platform to leverage, one of the things I noticed is that when deploying VMware NSX-T, the NSX manager has an option to leverage VMware Identity Manager as an authentication source.

**Versions of products in scope here include NSX-T Version 3.0.0.0.0.15946738

VMware Identity Manager 3.3.2.0 Build 15951611 (i.e Workspace One Access)

Assuming you’ve already installed both products listed above, here’s how you would go about configuring NSX Manager to use VMware Identity Manager for authentication.

  1. The first step is logging into the Identity Manager appliance admin interface using the local System Domain to get access to the appliance configuration settings for catalog items. I’m logging in as the local admin since I don’t have any identity sources set up yet other than what vRealize Suite Lifecycle Manager had me configure to set up my Active Directory environment.

2. Once logged in, you’ll need to set up a new Remote Access App in order to allow NSX Manager to pair with the IDM for authentication. Going to the Catalog Tab and choosing Settings, you’ll create a new remote access client app.

3. You will need to create something that obviously identifies this as the app paired with NSX Manager, in my case I just called it nsxmanager.

4. You’ll need to change the Access Type to [Service Client Token], provide the Client ID name [nsxmanager], generate a shared secret key, and then choose the TTL settings for the token, in my case I used 6 hours for Access, 1 month for refresh, and idle for 4 days.

Once you are done you’ll have the new service client token details available for reference later.

5. Now we need to get the certificate thumbprint from a vIDM host (see official VMware documentation here). In my case I logged directly into the vIDM host using SSH, ran the following command, and then copied the thumbprint string for use in NSX Manager later.
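For reference, the command looks like this (the same approach as the documentation linked above); substitute the FQDN of your vIDM host:

openssl s_client -connect <FQDN of vIDM host>:443 < /dev/null 2> /dev/null | openssl x509 -sha256 -fingerprint -noout -in /dev/stdin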

6. The other item we’ll need before switching over to NSX Manager is the OAuth shared secret we generated in step 4 above. You can get this by clicking on the nsxmanager item we created, which will list all the details on what you created previously, including the shared secret key.

7. Now we’ll log into NSX Manager to configure the identity manager linking.

8. Once logged in, go to the System tab and under Settings, select Users and Roles

9. Click on the VMware Identity Manager Option and you’ll have the option to enable VMware Identity Manager. In my case, I’m not using a load balancer so I’m connecting directly to the NSX Manager.

10. Here’s where you’ll use the Oauth Client ID and OAuth Client Secret (step 6) and SSL Thumbprint (step 5) from above and the other configuration options are pretty simple using your FQDN for your vIDM appliance and NSX Manager. Once done, you’ll have NSX Manager configured to use vIDM as an identity source using your Active Directory configured already on the vIDM appliance.

11. Now all you have to do is configure your AD Groups (assuming you already have AD Groups defined) and assign roles in NSX Manager. In my case I have an AD Group I’m using called vmadmins for access to all my VMware solutions but you could configure whatever you want leveraging your AD groups already defined and assign to the different NSX Manager roles for access.

****Bonus Content******

I figured since I have an Identity Manager platform to work with now, I might as well configure NSX Manager as one of the options to connect to in the user portal, seeing that vRealize Suite Lifecycle Manager already provided this for the other VMware solutions I deployed. So I decided to add a weblink to NSX Manager and also to my vCenter (although I still have to use the normal SSO login credentials on vSphere 7) to simplify my access to the VMware solutions I’m using in my home lab.

I created a little custom branding for my particular NSX-T Web App, but the process is pretty simple, going back to vIDM in step 2 above and choosing Web Apps, I get the option to configure apps that will show up in my user portal. Here’s how I configured NSX-T Manager.

Now that I have this configured, if I log back into vIDM using my active directory as the login source instead of the system domain, voila, I’ve got all my VMware apps in one place and now NSX Manager authentication is passed through to NSX Manager and I’m able to login to any of the apps with SSO (with the exception of vCenter 😦 but I’m hoping that’s coming soon)

That’s it folks. Hopefully you found this valuable and can work on setting this up yourself.

Home-Lab Upgrade to vSphere 7

I’ve had a lot of questions on how my upgrade went to vSphere 7 in my home-lab. I thought I’d share how I was able to get vSphere 7 up and running on a lab using hardware that isn’t on the VMware Hardware Compatibility List.

So what you see here are 5 Dell R710 servers, 4 of which I use for my vSphere home-lab and the other is a Plex Media Server for my 4K video. The processors in these servers are all Intel Xeon 56xx, which are not on the HCL for vSphere 7 and are only listed as compatible up through vSphere 6.0U3. Here’s a little visual and summary of the home lab setup.

So I have two Synology servers: one used primarily as a file server and a second Plex server, while the 1817 is my SSD and SAS device connected over 10GB iSCSI for my 3 ESX hosts. The 512 replicates to a second volume on the 1817 so my data is redundant. Vol 1 on the 1817 is all flash drives and is presented as an iSCSI mount on the 10GB network for the 3 hosts to connect to as a shared datastore. The 10GB switch is not externally connected and is only used for storage connectivity. The Synology 512 and R710 service console networks, vMotion, etc. are connected to the 1GB switch, which is connected for external access through my router and out to the internet.

Now, on to the upgrade to vSphere 7. If you tried to just install vSphere 7 natively on the Dell R710 servers, the installer would tell you that the CPUs were incompatible. Here’s how I got around this issue: using USB drives, I installed vSphere 7 using my MacBook and VMware Fusion.

  1. Create a virtual machine with no hard drives, configured with enough CPU and RAM for vSphere 7 to install successfully.

2. Mount the vSphere 7 ESXi ISO file to the virtual CDRom drive to prepare for installation.

3. Plug in the USB drive to install ESX and verify it shows up as a device I can connect to after power on.

4. Make sure I’m using a current version of VM Hardware that will work for the install.

5. Power on the VM and connect the USB device to the Virtual machine.

6. The ESX 7 installer will run through the wizard and prompt you to choose a device to install the hypervisor on.

7. When the installer finished, I powered down the VM and pulled out the USB drive from my laptop.

8. Using the USB device, I plugged this into the USB port on the back of my server and after verifying the device in my BIOS settings, powered on the server and walked through the setup of the IP, Mask, Gateway, DNS….etc..that you would normally do to install a new vSphere host.

Connect to my host web console, deploy a new vCenter 7 Server appliance, rinse and repeat, and I now have a home lab upgraded to vSphere 7, albeit running on unsupported hardware.

To make things even more fun, I used the content library and subscribed to William Lam’s Virtual Ghetto blog nested vSphere content library to start working on rebuilding the nested vSphere playground.

I didn’t think it would be possible to extend the life of my home lab but thanks to David Davis for the Boot from USB idea, I can use the home-lab a little longer. Give it a try and see if this works for you as well.

Configuring the VMware ESXi Dump Collector on vCenter 7

For those not familiar with this topic, it relates to a recent presentation I made at a VMware User Group virtual meeting around managing vSphere and some best practices. It’s also applicable to a recent home-lab upgrade, and I wanted to provide a current method for configuring this setup given the lack of official support documentation around the configuration of this service.

When I used to be a VMware customer, I always wanted to ensure I had a good handle on driving root cause on any issues experienced in our VMware environment. Purple Screen of Death issues on vSphere hosts were always a challenge due to the knee-jerk reaction to reboot a host when a PSOD event occurred, just to get things operating normally as fast as possible.

When a vSphere host has a PSOD event, there’s a core dump file of the memory contents that is created in the scratch location of a host. That file, if not stored on persistent storage (think of booting ESXi with Autodeploy or running ESXi headless) gets removed upon reboot. A way to capture the dump file is to use the network dump service allowing the ESX host to send the core dump file over the network to a file storage location.

There are two parts to making this process work.

  1. configure a crash dump server either on a file server or leverage the embedded service in the vCenter Server appliance itself.
  2. configure ESXi hosts with a network core dump location

I chose the second option (the embedded vCenter service) for my home lab setup, although based on the size of your environment you may choose a dedicated dump server instead, to ensure you have adequate storage available for the size of the dump files and to keep your vCenter appliance directory cleaner. The embedded service was a bit of a challenge, as the vCenter Server appliance does not provide a way to configure the startup type for the service and the default is set to manual startup mode. If you have to restart your vCenter, by default you have to log into the VAMI (VMware Appliance Management Interface) at https://myvcenter:5480 and start the service.

After doing some digging, and thinking back to my cli days, I wanted to look at the existing services running on my vCenter appliance to see what configuration options were available. Here’s how I looked at this a little deeper.

  1. Turn on SSH access on my vCenter Appliance (VAMI config)

2. SSH to my vCenter appliance and launch the command line shell

3. use the vmon-cli tool to list features available

4. Find the coredump service (netdump) and check its status

5. Change the status and start the service
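For reference, here’s roughly what those steps look like from the vCenter appliance shell. I’m going from memory on the exact flags, so check vmon-cli --help and use the service name exactly as it appears in your list output (netdump in my case):

vmon-cli --list
vmon-cli --status netdump
vmon-cli --update netdump --starttype AUTOMATIC
vmon-cli --start netdump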

Keep in mind that I used the VAMI to start the service but I could have run the command to start the service, or just restarted my vCenter appliance.

Now that the vCenter is configured, I needed to configure my ESXi hosts to enable the network dump location and start the service on all my lab hosts. You can use a PowerCLI script or use the CLI option I used below by enabling SSH on your ESXi hosts and running the following commands (I used my service console VMkernel port to bind the settings to my vCenter appliance IP address).

1. esxcli system coredump network set --interface-name vmk0 --server-ipv4 myvcenteripaddress --server-port 6500

2. esxcli system coredump network set --enable true
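To verify the host can reach the configured dump collector, you can also run:

esxcli system coredump network check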

That’s it. Now my ESX hosts can send the memory core dump file to my vCenter appliance located in the /storage/netdump directory.

Why is this useful? When you call VMware Support for help and ask how your host PSOD’d, they may ask for the memory dump file, and without persistent storage available to maintain this file after a reboot, you may be out of luck. VMware Support will use that file for analysis to determine if there was anything running in active memory that could have contributed to the PSOD and hopefully provide root cause, but more importantly, a remediation plan to prevent this from happening again.

My Light-board, Whiteboard, GreenScreen Home Office Setup

I’ve had a few requests now that the home office has been moved into a bigger room to share how I setup the home office to enable remote work capabilities and leverage conference calls with a whiteboard and green screen as well as video presentations using a Light-board. For those of you that are now working from home more due to pandemic reasons, this is a good way to stay connected with your customers and business partners while making it fun at the same time. It works well with family and friends too of course.

Whiteboard and GreenScreen setup

Not much to share here other than purchasing a whiteboard of your choice and mounting it on the wall, but the whiteboard is magnetic so it made it easy to add a blue screen or green screen backdrop behind my desk.

4’x6’ magnetic whiteboard with Neewer collapsible GreenScreen/BlueScreen backdrop

I also have a green cloth backdrop I use more permanently, as the collapsible one is good for road trips and conferences where vlogging may come into play. Keep in mind the whiteboard makes it simple as I just use some magnets to hold the cloth on the whiteboard. The placement of my Logitech camera makes it easy to use the whiteboard with the extended boom mic for customer presentations, and it doubles as the GreenScreen backdrop for Zoom meetings.

I use either a Logitech conference cam or my iPad/iPhone to record the video and if you use Zoom as your conferencing application, there is an option to add a GreenScreen backdrop and choose your picture or video of choice with the latest version of Zoom. I put together a library of zoom backgrounds and videos that I use for remote presentations and customer calls here.

Using Zoom on my iPad with a GreenScreen background picture

Using a video as my background during a Zoom meeting with proper lighting and GreenScreen backdrop

Light-board Setup

Now on to the fun part, the Light-board. I’ll include a list of materials that I used to build out the home office Light-board setup.

Testing out the Light-board behind the desk, but quickly realized that I needed more black space in the background.

The 3’x4’ Light-board is made with a clear piece of Lexan I ordered on Amazon (see parts list below) connected to some closet organizer pieces 4’ length and I attached the glass to the strips using some picture hangers designed to hold glass being mounted to a wall. The legs are 10”x15” shelf brackets held to the bottom of each shelf bracket using some black mechanical bolts and secured to the table to keep from slipping using some Gorilla Pads (rubberized pads with double sided tape).

Hole drilled through glass and mounting bracket secured to vertical legs
L brackets secured to vertical strips on each side with Gorilla Pads mounted underneath

The LED light strip (see parts list) is secured to the underside of the glass using a piece of shower door stripping. I secured the extra light strip by reversing the LED light strip on the existing spool and clipping it to the side of the leg using a simple binder clip.

Reversing the LED strip allows me to re-use the light by cutting on the dotted H line if they ever break and then remounting in the rubber shower strip on the bottom of the glass.

Lighting

I decided to use 3 lights, which are soft photo lights: 2 on either side and, if needed, a boom light for overhead. Come to find out I didn’t need the boom based on the lighting in my office.

Side lighting for GreenScreen and Light-board
Side lighting
Boom light if needed (I use more for the GreenScreen)

This is the trickiest part because if you don’t get the 2-3 point lighting down just right, you’ll end up with reflections off the glass and shadows behind you when presenting or when using a GreenScreen, you’ll still have shadow effects around your profile like an aura.

General information on lighting:

1.Wikipedia

2.Complete Guide to Three-Point Lighting | Video School Online Lighting Tutorial

Some good videos are:

1.How To Set Up 3-Point Lighting for Film, Video and Photography

2.Using Geometry for Better Interview Lighting

Backdrop

This was a bit of a challenge, as I started off with some PVC pipe and made a large golf-ball-toss-style frame to hold the black curtain. I started off trying this behind my desk but quickly realized after a few tests that I needed a wider backdrop. I switched to hanging the curtain horizontally on the wall, using some small brad nails near the ceiling and some office binder clips to hold the top of the curtain and hook over the nails. After some curtain steaming, I finally had a nice clean backdrop that was wide enough to cover the wall behind me and get the whole Light-board in the picture.

Brad nails and binder clips holding the curtain

Here’s an example of the first Light-board video I made using this new setup

The last thing to keep in mind is that when you’re ready to put the video together, you’ll want to use some type of video editing software to flip the video so the text reads the right way instead of backwards. I have a Mac and do most of my editing using iMovie, as it’s simple to use and allows you to add additional effects like fade in and out options or soundtracks.

Testing sound, lighting, placement on iPhone

Now all you need is a content script to follow, some neon erasable markers, something to record with and you’re ready to go.

Now let’s get going on some Light-boarding videos!!

Parts List

Falken Design falkenacrylic_Clear_236_36x48 Acrylic Sheet, Plastic/Plexiglas/Lucite, 36″ x 48-1/4″, Clear
Learn more: https://www.amazon.com/dp/B0755G4B3X/ref=cm_sw_em_r_mt_dp_U_AdeCEbQBHJDD7

Glass Door Seal Strip (With High Viscosity Adhesive), A Roll 120 Inch Frameless Shower Door Sweep to Stop Shower Leaks by Bicopo
Learn more: https://www.amazon.com/dp/B07TH2C9BJ/ref=cm_sw_em_r_mt_dp_U_-feCEbQ4KBC03

LuckIn 20-Pack 3/4 x 1 Inch Stainless Steel Standoff Screws, Mounting Glass Hardware Sign, Stand Off Nail for Hanging Acrylic Picture Frame, Black by LuckIn Home
Learn more: https://www.amazon.com/dp/B07HRGD5BM/ref=cm_sw_em_r_mt_dp_U_4ieCEb2H306TC

John Sterling™ Dual-Trak™ Standard at Menards® http://www.menards.com/main/storeitem.html?iid=1444436950985

14-1/2″ Heavy-Duty Shelf Bracket at Menards® http://www.menards.com/main/storeitem.html?iid=1501223575289

Slipstick® GorillaPads™ 2″ Gripper Floor Pads – 8 Pack at Menards® http://www.menards.com/main/storeitem.html?iid=1537424883597

Lighting

Ustellar Dimmable LED Light Strip Kit, 300 Units SMD 2835 LEDs, 16.4ft/5m 12V LED Ribbon, 6000K Daylight White Under Cabinet Lighting Strips, Non-waterproof LED Tape, UL Listed Power Supply by DragonSmart
Learn more: https://www.amazon.com/dp/B075RYSHQQ/ref=cm_sw_em_r_mt_dp_U_jkeCEbRX59D9Q

Andoer 2400W Lighting Kit, Photography Studio Continuous Softbox Lighting System Including 12X45W 5500K LED E27 Bulbs for for Photo, Video, Portrait and Product Shooting by Ellisa-us
Learn more: https://www.amazon.com/dp/B07QDH51VQ/ref=cm_sw_em_r_mt_dp_U_XkeCEb11C5EBP

Backdrops

Neewer Chromakey Green Chromakey Blue Collapsible Backdrop Collapsible Reversible Background 5’x7′ Chroma-Key Blue/Green by Midgar’s Best
Learn more: https://www.amazon.com/dp/B00E89Q5OY/ref=cm_sw_em_r_mt_dp_U_hmeCEbG5NK4R9

Neewer 6×9 feet/1.8×2.8 meters Photo Studio 100 Percent Pure Muslin Collapsible Backdrop Background for Photography, Video and Television (Background Only) – Black by Photo Guard
Learn more: https://www.amazon.com/dp/B00SR28ZG4/ref=cm_sw_em_r_mt_dp_U_mieCEbMVNFF8N

5X7 Feet Non-Woven Backdrop Green Screen Photo Backdrop for Professional Photography Video Studio Background Newborn Baby Children Portrait Clothes Photo Props EY044 (5x7ft, Green)
by Evergreen for you
Learn more: https://www.amazon.com/dp/B0834MDTPC/ref=cm_sw_em_r_mt_dp_U_oFXOEbRTY9TGR

Recording Gear

PoP voice 16 Feet Single Head Lavalier Lapel Microphone Omnidirectional Condenser Mic for Apple iPhone Android & Windows Smartphones, Youtube, Interview, Studio, Video Recording, Noise Cancelling Mic by PoP voice
Learn more: https://www.amazon.com/dp/B07FQNBKDK/ref=cm_sw_em_r_mt_dp_U_6geCEbSXEKFR8

iPad or iPhone with photo tripod

Ulanzi Aluminum Pad Tripod Mount with Cold Shoe Compatible for iPad, Metal Tablet Tripod Adapter Holder with Quick Release Plate 1/4” Screw Mount Universal for iPad Mini/iPad 4/iPad Pro/Surface Pro
by ULANZI Official US Store
Learn more: https://www.amazon.com/dp/B076LVMX7X/ref=cm_sw_em_r_mt_dp_U_EheCEb552YWQV

Blue Snowball iCE USB Mic for Recording and Streaming on PC and Mac, Cardioid Condenser Capsule, Adjustable Stand, Plug and Play – Black
by Amazon.com
Learn more: https://www.amazon.com/dp/B014PYGTUQ/ref=cm_sw_em_r_mt_dp_U_mHXOEb7T8YJ0V

InnoGear Adjustable Mic Stand for Blue Snowball and Blue Snowball iCE Suspension Boom Scissor Arm Stand with Microphone Windscreen and Dual Layered Mic Pop Filter, Max Load 1.5 KG
by JS Deal
Learn more: https://www.amazon.com/dp/B07QH554TJ/ref=cm_sw_em_r_mt_dp_U_MCXOEbMHG3P61

Markers

EXPO 1752226 Neon Dry Erase Markers, Bullet Tip, Assorted Colors, 5-Count by Amazon.com
Learn more: https://www.amazon.com/dp/B0033AGVVG/ref=cm_sw_em_r_mt_dp_U_HjeCEbVX1WBHF

Existing Add Ons
Folding Table
Nuts and Bolts

Video and Lightboard Series

I’m trying something new to get better at presenting whiteboard sessions with customers and the community. I’ll be doing a combination of video, whiteboard, and Light-board sessions around VMware topics on YouTube.

Session 1 – Interview by Keith Townsend at The CTO Advisor

This is Keith’s interview with me in preparation for the VMware User Group Virtual Event, providing a little history on how I went from a non-traditional IT career into my role working at VMware.

VMUG Virtual Event interview by Keith Townsend

Session 2 – Nate’s VMware Story

A little background: to prepare for VMware’s WWKO in Vegas this year (2020), we were asked to put together a video for a Pitch2Win contest. The video was supposed to cover a bunch of information around our Any Cloud, Any App, Any Device strategy. Although I submitted a separate video internally to meet the requirement, I wanted to take some additional time and make it a bit more creative and not focused on a customer conversation. Here’s my interpretation of our VMware Strategy story for your critique and enjoyment.

Nate’s VMware Story

Session 3 – VMware Hybrid Cloud Evolution and logical design

As part of the VMware Any Cloud, Any App, Any Device story, I thought I’d dive into the Any Cloud portion and dig into the components and logical design requirements to move towards a hybrid cloud platform.

VMware Hybrid Cloud Evolution

Session 4 – VMware’s Future Ready Workforce Reference Architecture

With COVID-19 and more corporate employees working from home, I thought it would be good to discuss how VMware’s Future Ready Workforce Reference Architecture provides an elastic, secure, and resilient model that gives remote workers access to all the apps they use to do their daily jobs, from any device or location and using any work style. This aligns nicely with the Any Cloud, Any App, Any Device strategy VMware has been using for a while now. The lightboard setup is still a work in progress so I can get the whole board in view with my camera, but we’re getting there.

VMware’s Future Ready Workforce Reference Architecture Lightboard and model

My Experience rebuilding a VMware Corporate MacBook in the field using Workspace One Intelligent Hub

I don’t know how many people have run through this scenario, but back when I used to do Altiris consulting, the process of rebuilding any laptop was painful and time-consuming. It usually involved requesting a loaner laptop with nothing customized, waiting for approval, shipping your broken laptop back to a corporate office, and waiting for someone in desktop support to contact you with the good or bad news on data loss and personalization details. It could take a couple of weeks before you had a fully functioning computer back.

I recently had a hiccup with my work laptop (a MacBook) where it went to sleep on low battery. When I powered the laptop back on after hooking it up to the power supply and logging in, I started getting a never-ending prompt that my application library was hosed. My profile looked different, none of my documents were accessible, and it was essentially unusable.

I submitted a help desk ticket with our internal IT team, and after three hours of troubleshooting we determined that the disk encryption had somehow hosed the laptop, making it an expensive desk ornament. I asked how hard it would be to just reload the operating system and reconfigure it for work. I was cautioned about data loss, but I assured my helper that all my data was in SharePoint or OneDrive. My helper informed me that reloading the operating system would take the most time, since I would be reinstalling Mac OS X over the air.

Here are the details, based on when I started and ended the process, and my experience using VMware’s own internal tools to restore my corporate laptop to a working state. Call this my personal VMware on VMware experience as a field solutions engineer.

Step 1 – Reload the Mac OS X operating system

I won’t go into detail on reloading the operating system, as I’m sure there’s plenty of examples already on how to re-install the Mac operating system.

Total Time:  Approximately 30 minutes to download and walk through the re-install.


Step 2 – Install Google Chrome

After re-installing the operating system, I downloaded Google Chrome, installed it, and set up sync to restore all my bookmarks and plugins (approx. 5 mins).

Pay attention to the time on my laptop; this is when I started the application installations, at 12:12 PM.


Step 3 – Install the Workspace One Intelligent Hub app

After getting the operating system set up with a profile name identical to my VMware login name, I went to the public site www.getwsone.com, downloaded the Workspace One Intelligent Hub application for Mac, and walked through the installation. After entering my corporate cloud instance, I was prompted to choose between an employee-owned or corporate-owned device. As part of the installation, I was given a clear understanding of my privacy rights, had security profiles installed, and was prompted to set up an encryption password to secure the hard drive data. Once completed, the Workspace One Intelligent Hub displayed all the favorite apps I use on a regular basis and provided an opportunity to download the apps native to the Mac OS right from the Workspace One Intelligent Hub portal. I’ll get to that next.


Step 4 – Install Office 365

Pretty cut and dried here: opening the Office 365 app in Workspace One single sign-on authenticated me to Office 365, where I was able to download the software installer package and walk through the installation. Upon completing the install, the Office 365 apps use single sign-on to authenticate me with my VMware email address without my having to remember passwords.


Step 5 – Install additional corporate software I’m entitled to use as a VMware Employee

So, with the exception of Microsoft Office, all the remaining software I needed was available as native applications to download and install from the Mac section in Workspace One Intelligent Hub (Adobe Creative Cloud, OneDrive, Skype for Business, Microsoft Teams, Horizon Client, Slack, HP Printer Drivers, OneNote, McAfee AV, Global Protect VPN Client, Zoom).


Now the icing on the cake had to do with my recent request for access to Microsoft Visio. What’s that you say, Microsoft Visio doesn’t run on a Mac?! Technically you’re correct, but with the Horizon Client installed, Visio becomes available as a published app. As long as I’m connected to the Horizon Client, Visio, Internet Explorer, and even PuTTY are available to use like native Windows apps running on my Mac, and I also have access to two different VDI sessions (Windows 7 and Windows 10) for when I need full native Windows operating system functionality.


In conclusion, it took me a little over 1 hour and 15 minutes to install all of my apps, synchronize my OneDrive and SharePoint folders, authenticate through VPN, and reboot for the McAfee installation, finishing my tasks around 1:32 PM that afternoon. All in, it was about a 2-hour timeframe to rebuild my laptop and get back up and running, productive at my job, using VMware Workspace One Intelligent Hub. Oh, by the way, I did the same process on a personal Mac mini in my home office the prior weekend; the difference being that one device is a personal computer and the other is corporate-owned. I have access to the same corporate applications on my laptop, desktop, iPad, and iPhone using this same platform.
