Home Lab upgrade to vSphere 7.0.2

This is probably most relevant to home lab users because it deals with unsupported hardware: in my case, I’m running a VMware home lab on Dell R710 servers with various Intel Xeon 56xx CPUs, which are no longer supported. If you want to see how I initially built my lab on this unsupported CPU hardware, you can check out my earlier blog article here.

I could not work through the normal vSphere Lifecycle Manager process and ended up using a CD-based install to upgrade my hosts from ESXi 7.0.0 to 7.0.2. My two biggest challenges were getting past the unsupported CPUs in my Dell servers and making sure the 10GbE NIC drivers still worked after the upgrade.

Shout out to William Lam for documenting the advanced installer option that lets the vSphere installer continue on unsupported CPUs, which allowed my upgrade to complete.

In my case, I worked through the normal vSphere upgrade workflow, upgrading my vCenter appliances first and then moving on to my vSphere hosts. My primary vCenter Server upgraded without any issues, but I’m still working through errors around the online upgrade repo on my second vCenter, which manages the nested ESXi portion of my lab (not critical, but I’ll get it sorted out eventually).

Moving on to my vSphere hosts, I downloaded the custom Dell OEM installer, burned a CD (ouch), and went through the process on each server. After placing a host into maintenance mode and rebooting it, I selected the boot option menu at the BIOS prompt screen and chose the CD-ROM as the boot device. Once the vSphere installer screen started, I pressed Shift+O to enter the advanced boot options, appended ‘allowLegacyCPU=true’, and hit Enter. From there it was the normal installer workflow, accepting all the options, including the warning about installing on an unsupported CPU combination. The upgrade itself went quite well with one exception: it did not install a driver that supports the 10GbE NIC in the server, which I use to connect to my Synology NAS via the software iSCSI adapter.
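Before getting to the NIC driver issue, here is roughly what the advanced boot option looks like, for reference. The options already present at the Shift+O prompt vary by installer build and boot media, so treat this as a sketch; the only part that matters is appending the CPU override.

```
# Press Shift+O at the ESXi installer boot screen to edit the boot options.
# Keep whatever options are already on the line (runweasel is typical) and
# append the override so the installer continues past the unsupported CPU check:
runweasel allowLegacyCPU=true
```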

I did some troubleshooting by logging into my ESXi hosts via SSH and querying the installed VIBs (esxcli software vib list). I discovered that the upgraded server was running a Dell OEM ixgben driver named ‘INT_bootbank_ixgben_1.8.9.0-1OEM.700.1.0.15525992’, while my working hosts running vSphere 6.7U2 were using ‘VMW_bootbank_ixgben_1.7.1.26-1vmw.700.1.0.15843807’.
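If you want to run the same check, here is a rough example from an SSH session; the output and VIB versions will obviously differ per host.

```
# List installed VIBs and filter for the Intel 10GbE driver
esxcli software vib list | grep -i ixgben

# Show details (version, vendor, install date) for the driver package
esxcli software vib get -n ixgben
```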

As luck would have it, I had run into this same issue when I upgraded my lab from 6.5 to 6.7 and had kept the older 10GbE NIC driver on a shared datastore. So my next step was to uninstall the existing VIB and install the previous VIB that I knew would work on my particular server.
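The remove-and-replace steps looked roughly like the following. The datastore path and driver file name below are placeholders for wherever you keep the older driver, so adjust them for your environment.

```
# Host should already be in maintenance mode; if not:
esxcli system maintenanceMode set --enable true

# Remove the ixgben driver that came with the upgrade
esxcli software vib remove -n ixgben

# Install the known-good driver from a shared datastore (path is a placeholder)
esxcli software vib install -v /vmfs/volumes/shared-datastore/drivers/ixgben-1.7.1.26.vib

# Reboot for the driver change to take effect
reboot
```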

After removing the new driver, installing the correct one, and successfully rebooting the server, I had a working ESXi 7.0.2 host that kept all of its existing configuration and could once again see my synologyiscsi datastore.
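A couple of quick checks I find useful after the reboot, to confirm the NIC came up with the expected driver and the VMFS datastore is mounted again (generic esxcli commands, nothing specific to this upgrade):

```
# Confirm the 10GbE NIC is up and shows the expected driver
esxcli network nic list

# Confirm the iSCSI-backed VMFS datastore is mounted
esxcli storage vmfs extent list
```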

One thing to note: in order for the host to reboot successfully, I edited the boot.cfg files, appending ‘allowLegacyCPU=true’ to the kernelopt line in both /bootbank/boot.cfg and /altbootbank/boot.cfg so the setting sticks after a reboot. Copying the two files off with SCP, editing them, and copying them back works fine.
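As an illustration, the kernelopt line in each boot.cfg ends up looking something like this; the existing options on your hosts may differ, so append the setting rather than replacing what is already there.

```
# In both /bootbank/boot.cfg and /altbootbank/boot.cfg, append to the existing kernelopt line:
kernelopt=<existing options> allowLegacyCPU=true
```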

Rinse and repeat with the other lab servers and everything was upgraded successfully. Hopefully this is useful for anyone in the same situation with similar hardware and, more importantly, provides some guidance on how to troubleshoot a failed upgrade on a host.

If you want to run nested vSphere, William Lam has a public shared content library with all the virtual ESXi appliance versions available to add to your vCenter content library configuration. When running the nested appliances on unsupported hardware, keep in mind that you’ll need to follow the same process on them in order to boot successfully: on first boot, press Shift+O and append the ‘allowLegacyCPU=true’ option, and after a successful boot, follow the process above to edit the boot.cfg files.

On a side note, one of the upgrades did not go as expected: after the initial upgrade I lost all connectivity and could not manage the host at all.

**Useful tip:** if you need to revert a host to its previous configuration, press Shift+R while the host is booting into ESXi to enter recovery mode, which reverts the server to the alternate boot bank that was working prior to the upgrade.

VMware NSX-T 3.1 Global Manager Deployment

I decided to write up some lessons learned from deploying the latest NSX-T release in my home lab. I’m using NSX-T 3.1 and trying out the Global Manager deployment option. Here’s a summary of what I worked on:

  1. NSX-T Global Manager deployment: two Global NSX Managers clustered behind a VIP, plus a standby NSX Global Manager. The currently supported NSX-T Federation features can be found here.
  2. Attach compute managers to the Global Manager and NSX Manager appliances.
  3. NSX Manager deployment: two NSX Managers clustered to manage my physical lab vSphere cluster, plus another two NSX Managers clustered to manage my nested vSphere cluster.
  4. Test cluster failover from node to node.
  5. Configure all NSX Global Manager and NSX Manager appliances to use Workspace ONE Identity for authentication, on the individual appliances as well as on the VIP used for each cluster.
  6. Configure backups for all NSX Global Manager and NSX Manager appliances.

Continue reading “VMware NSX-T 3.1 Global Manager Deployment”

vSAN Witness on ESXiOnPi

Another continuation of my main blog post working through my three use cases, and this one happens to be something a lot of people have been asking me about: a 2-node vSAN edge site with a remote witness, but what if the remote witness was in the same room, running on a Raspberry Pi? Here’s where I test that option.

Let’s set this up a bit: I’ll again use William Lam’s virtual ESXi content library to deploy two virtual ESXi servers and configure a 2-node vSAN cluster, and I’ll use the VMware Fling running on a Raspberry Pi 4 to act as my vSAN witness. Continue reading “vSAN Witness on ESXiOnPi”

Create a vSAN 2 Node cluster on ESXi On Pi

Continuing on from my main blog post about using Raspberry Pi 4B devices with the vSphere Fling, my first use case involves creating a 2-node vSAN cluster on these devices using a virtual remote witness. Here’s how I set this up in my home lab.

Assuming you already have two Raspberry Pi 4B devices installed and configured with the vSphere Fling using the instructions linked in the main blog post above, I added vSAN to the vmkernel port that also carries the management network, to keep the setup simpler on both devices (see the sketch below). Continue reading “Create a vSAN 2 Node cluster on ESXi On Pi”
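For reference, tagging the management vmkernel interface for vSAN traffic from the ESXi shell looks roughly like this; vmk0 is an assumption based on my setup, where management lives on vmk0.

```
# Tag the management vmkernel port (vmk0 here) for vSAN traffic
esxcli vsan network ip add -i vmk0

# Verify which vmkernel interfaces now carry vSAN traffic
esxcli vsan network list
```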
