Home Lab upgrade to vSphere 7.0.2

This is probably most relevant to home lab users, since it relates to unsupported hardware: in my case, I’m running a VMware home lab on Dell R710 servers with various Intel Xeon 56xx CPUs, which vSphere 7 no longer supports. If you want to see how I initially built my lab on this unsupported CPU hardware, you can check out my former blog article here.

I could not get through the normal vSphere Lifecycle Manager process and ended up using a CD-based install to upgrade my VMware hosts from ESXi 7.0.0 to 7.0.2. My two biggest challenges were overcoming the unsupported CPUs in my Dell servers and making sure the 10GbE NIC drivers worked post-upgrade.

Shout out to William Lam for documenting the advanced installer option for unsupported CPUs, which let the vSphere installer continue and allowed my upgrade to complete.

In my case, I followed the normal vSphere upgrade workflow by upgrading my vCenter appliances first and then moving on to my vSphere hosts. My primary vCenter Server upgraded without any issues, but I’m still working through problems with my second vCenter, which manages the nested ESXi portion of my lab, due to errors around the online upgrade repo (not critical, but I’ll get it sorted out eventually).

Moving on to my vSphere hosts, I downloaded the custom Dell OEM installer, burned a CD (ouch), and went through the process with each server. After placing each host into maintenance mode and rebooting it, I selected the boot option menu at the BIOS prompt screen and chose the CD-ROM as the boot device. Once the vSphere installer screen started, I hit ‘Shift + O’ to enter the advanced boot options, appended the option ‘allowLegacyCPU=true’, and hit Enter. From there I went through the normal installation process, accepting all the options, including the warning message about installing on an unsupported CPU combination. The upgrade process went quite well with one exception: no driver installed as part of the upgrade that would support the 10GbE NIC in the server (which I use to connect to my Synology NAS via the software iSCSI adapter).
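For reference, this is roughly what the installer boot line looks like after pressing ‘Shift + O’; the exact prefix shown will depend on your installer media, and the key part is simply appending the option at the end:

```
runweasel cdromBoot allowLegacyCPU=true
```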

I did some troubleshooting by logging into my ESXi hosts via SSH and running a query for installed VIBs (esxcli software vib list). I discovered that the ixgben driver on my upgraded server was a Dell OEM VIB named ‘INT_bootbank_ixgben_1.8.9.0-1OEM.700.1.0.15525992’, while my working ESXi hosts running vSphere 6.7U2 were using ‘VMW_bootbank_ixgben_1.7.1.26-1vmw.700.1.0.15843807’.
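Here’s roughly what that check looked like; the output columns below are abbreviated and illustrative, but the VIB versions are the ones I found on my hosts:

```
esxcli software vib list | grep ixgben
# Upgraded host (Dell OEM driver that did not work for my card):
#   ixgben   1.8.9.0-1OEM.700.1.0.15525992    INT   VMwareCertified
# Working host (driver I knew worked with my card):
#   ixgben   1.7.1.26-1vmw.700.1.0.15843807   VMW   VMwareCertified
```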

As luck would have it, I had run into this same issue when I upgraded my lab from 6.5 to 6.7, and I had kept the older 10GbE NIC driver on a shared datastore. So my next step was to uninstall the existing VIB and install the previous VIB that I knew worked on my particular server.
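A minimal sketch of that swap from the ESXi shell, assuming the known-good VIB sits on a shared datastore (the datastore path and file name here are illustrative; use the exact VIB name reported by ‘esxcli software vib list’ and the actual location of your driver file):

```
# Remove the OEM driver currently installed, referenced by VIB name
esxcli software vib remove -n ixgben

# Install the known-good driver from the shared datastore (absolute path required)
esxcli software vib install -v /vmfs/volumes/shared-datastore/ixgben-known-good.vib

# Reboot the host so the driver change takes effect
reboot
```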

Upon a successful reboot of the server after removing the OEM driver and installing the correct one, I had a working ESXi 7.0.2 server that kept all of its existing configuration and was able to see my ‘synologyiscsi’ datastore.

Rinse and repeat with the other lab servers, and we’re all upgraded successfully. Hopefully this was useful for those who may be in the same situation with similar hardware, but more importantly, it provides some guidance on how to troubleshoot a potentially failed upgrade on a host.

On a side note, one of the upgrades did not go as expected: after the initial upgrade, I lost all connectivity and could not manage the host at all.

**Useful tip:** if you need to revert a host to its previous configuration, press ‘Shift+R’ at the boot screen when the host starts up to boot the server into recovery mode, which reverts it to the alternate boot bank that was functioning prior to the upgrade.

VMware NSX-T 3.1 Global Manager Deployment

I decided to write up some lessons learned from deploying the latest NSX-T instance in my home lab. I’m using NSX-T 3.1 and trying out the Global Manager deployment option. Here’s a summary of what I worked on, with a quick verification sketch after the list:

  1. NSX-T Global Manager deployment, including two Global NSX Managers clustered with a VIP, plus a standby Global NSX Manager. The currently supported NSX-T Federation features can be found here.
  2. Attach compute managers to the Global Manager and NSX Manager appliances.
  3. NSX Manager deployment, including two NSX Managers clustered to manage my physical lab vSphere cluster and an additional two NSX Managers clustered to manage my nested vSphere cluster.
  4. Test cluster failover from node to node.
  5. Configure all Global Manager and NSX Manager appliances to use Workspace ONE Access for authentication, both on the individual appliances and on the VIP used by each cluster.
  6. Configure all Global Manager and NSX Manager appliances for backup.
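As a quick sanity check after standing up each manager cluster, the NSX CLI and REST API can confirm cluster health and the VIP; a minimal sketch, assuming SSH access to a manager node (the hostname below is illustrative):

```
# From the NSX CLI on any manager node in the cluster:
get cluster status

# Verify the cluster virtual IP via the REST API (prompts for the admin password):
curl -k -u admin 'https://nsx-mgr-01.lab.local/api/v1/cluster/api-virtual-ip'
```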


vSAN Witness on ESXiOnPi

Another continuation of my main blog post, working through my three use cases; this one happens to be something a lot of people have been asking me about: a 2-node vSAN edge site with a remote witness. But what if the remote witness was in the same room and running on a Raspberry Pi? Here’s where I’ll test that option.

Let’s set this up a bit. I’ll again be using two virtual ESXi hosts deployed from William Lam’s virtual ESXi content library to set up and configure a 2-node vSAN cluster, and I’ll use the VMware Fling running on a Raspberry Pi 4 to act as my vSAN witness.
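For the final step of pointing the 2-node cluster at the Pi-based witness, RVC on the vCenter Server is one way to wire it up; a minimal sketch, assuming the witness host has already been added to the vCenter inventory (the inventory paths, datacenter name, and host names are all illustrative):

```
# From RVC on the vCenter managing the 2-node cluster:
# vsan.stretchedcluster.config_witness <cluster> <witness_host> <preferred_fault_domain>
vsan.stretchedcluster.config_witness /localhost/HomeLab/computers/vsan-2node \
  /localhost/HomeLab/computers/witness-pi/hosts/witness-pi Preferred
```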

Create a vSAN 2 Node cluster on ESXi On Pi

Following up on my main blog post around using Raspberry Pi 4B devices with the vSphere Fling, my first use case involves creating a 2-node vSAN cluster on these devices using a virtual remote witness. Here’s how I set this up in my home lab.

Assuming you already have two Raspberry Pi 4B devices installed and configured with the vSphere Fling using the instructions linked in the main blog post above, I added vSAN traffic to the vmkernel port that also carries the management network, which makes it simpler to set up both devices.
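Tagging the existing management vmkernel port for vSAN traffic can be done from the ESXi shell on each Pi; a short sketch, assuming the management interface is vmk0:

```
# Tag the management vmkernel interface for vSAN traffic
esxcli vsan network ip add -i vmk0

# Confirm the vSAN network configuration
esxcli vsan network list
```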
