
Posts

vSphere 6.0 w/ NSX-v 6.2 upgrade to vSphere 6.5 w/NSX-v 6.3 - EAM Agency issue

Problems with EAM (VMware ESX Agent Manager) and NSX

After upgrading ESXi 6.0 there can be problems with the NSX-v installation VIBs.

Symptoms

In the Installation section of Networking & Security, the installation status of the host(s) will show as Not Ready. When you click the gear icon next to the host and attempt to run the "Resolve" operation, it fails repeatedly.

In the administration page of the vCSA, under vCenter Server Extensions, double-click vSphere ESX Agent Manager and select the agency for the appropriate cluster. You may see the issue "Agent VIB module is not installed".

To confirm, SSH to the vCenter Server and check the EAM log:

[ msherian@MARS-E550 ~ ]$ ssh root@vc-01
VMware vCenter Server Appliance 6.5.0.11000
Type: vCenter Server with an external Platform Services Controller
root@vc-01's password:
root@vc-01 [ ~ ]# less /storage/log/vmware/eam/eam.log
2017-11-15T18:52:39.932Z | INFO | host-1886-2 | VibJob.java | 288
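Once you are on the appliance, the relevant entries can be pulled out of eam.log with a quick grep. The snippet below is a minimal, self-contained sketch: it writes the sample line from the excerpt above into a temp file so the filter can be demonstrated anywhere; on the vCSA itself you would point the grep at /storage/log/vmware/eam/eam.log directly. The service-control commands in the trailing comment are the usual vCSA 6.5 way to bounce the EAM service after remediating the VIB issue.

```shell
#!/bin/sh
# Sketch: filter VIB-related entries out of a copy of eam.log.
# The sample line is the one from the log excerpt above; on the
# appliance, grep /storage/log/vmware/eam/eam.log instead.
cat > /tmp/eam-sample.log <<'EOF'
2017-11-15T18:52:39.932Z | INFO | host-1886-2 | VibJob.java | 288
EOF
grep -E 'VibJob|VIB module' /tmp/eam-sample.log

# After resolving the VIB issue, EAM can be restarted on the vCSA with:
#   service-control --stop vmware-eam && service-control --start vmware-eam
```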
Recent posts

HyperCube — Building the custom BIOS

One of the minor annoyances of using unsupported hardware is unexpected behaviour. The Intel NUC will not report some hardware states that ESXi expects, such as the make, model, serial, and asset numbers. To work around this we need to build a custom BIOS to populate the missing values. Thanks to Virten.net for the know-how to do this. There is one caveat: the NUC no longer appears in the "download and customize" selection.

Download the BIOS file from Intel
Open the file
Add in the missing details (I've created a BIOS file for each node with proper details for each, but that isn't strictly required)
Save the BIOS file(s)

To easily create a bootable USB, use Rufus:

Rufus version: 2.5.799
Windows version: Windows 10 64-bit (Build 10240)
Syslinux versions: 4.07/2013-07-25, 6.03/2014-10-06
Grub versions: 0.4.6a, 2.02~beta2
Locale ID: 0x0409
0 devices found
Checking for Rufus updates... Checking
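Rufus is Windows-only; if you'd rather stage the stick from a Linux box, the edited BIOS file just needs to land on a FAT32-formatted USB stick. A hedged sketch of that last step is below, where USB_MOUNT and custom.bio are placeholders for your real mount point and the .bio file you downloaded from Intel and edited:

```shell
#!/bin/sh
# Sketch: copy the customised BIOS file onto an already-mounted FAT32 USB stick.
# USB_MOUNT and custom.bio are placeholders; substitute your actual mount point
# and the real .bio file name.
USB_MOUNT=${USB_MOUNT:-/tmp/usb-demo}
mkdir -p "$USB_MOUNT"            # stands in for the mounted stick in this demo
printf 'demo' > /tmp/custom.bio  # stands in for the real .bio file
cp /tmp/custom.bio "$USB_MOUNT/"
ls "$USB_MOUNT"
```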

HyperCube — The easy bit, hardware assembly

Below I'll use the first node as an example of the ridiculously simple hardware install.

The unboxing:
Removing the case:
Installation of the 500 GB M.2 drive:
Remove the mounting screw from the chassis:
Place the drive into the slot:
Fix the screw in place:
1 TB SSD:
Slot it into the drive bay:
The correct way! ;-)
Repeat two more times, cable up, and go.

HyperCube — The fastest little lab in the west of Europe.

In the coming days and weeks, I'll be documenting the construction of a very powerful nested cluster of ESXi hosts. These are based on the Intel NUC platform, and I will be linking to much of the source material used in constructing this lab.

The hardware used is as follows:

3× Intel NUC 5i7RYH
3× Samsung 1 TB 850 EVO Series SATA SSD
3× Samsung 850 EVO M.2 500 GB SSD
6× 16 GB SO-DIMM Intelligent Memory

The three ESXi nodes are running 6.0.0 (Build 2809209), customised with the E1000E driver, and have the ESXi UI Fling installed. The first node of the cluster has been running for several weeks, hosting a nested ESX environment with an all-flash Virtual SAN.

So, if you are interested, stay tuned here, and I will be documenting the steps I've already taken, giving much credit to those other bloggers out there that have done a lot of the heavy lifting. I'll be extending this deployment with all of the following:

AD Services design for vSphere Environmen