Great set of instructions for migrating to a new Virtual Center when using the Nexus 1000v DVS:
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1020893
From recent experience, steps 7 and 9 should be reversed (configure the new SVS connection parameters before importing the extension key into vCenter). Also note that there doesn't seem to be a step #8... not sure why!
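For reference, re-pointing the SVS connection at the new vCenter from the VSM looks roughly like the following. This is a minimal sketch; the connection name, IP address, and datacenter name below are placeholders, so substitute your own values:

! Disconnect from the old vCenter, update the connection, then reconnect
! (connection name, IP address, and datacenter name are placeholders)
svs connection vcenter
  no connect
  remote ip address 10.10.10.20
  protocol vmware-vim
  vmware dvs datacenter-name DC1
  connect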
4.12.2012
4.04.2012
VM-FEX Deployment on UCS Now Easier!
If you are interested in deploying VM-FEX on UCS, the ESX/ESXi host .vib files are now accessible directly from the UCSM home page.
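Once downloaded, the .vib can be installed on an ESXi 5.x host along these lines (a rough sketch; the datastore path and file name below are placeholders, and ESX 4.x hosts would use esxupdate instead):

# Install the VM-FEX VEM module on an ESXi 5.x host
# (datastore path and .vib file name are placeholders)
esxcli software vib install -v /vmfs/volumes/datastore1/cisco-vem-fex.vib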
Waiting for BIOS POST completion from CIMC on server
When a blade will not complete discovery due to BIOS POST completion issues, the following fault will surface in the FSM tab of the server:
Waiting for BIOS POST completion from CIMC on server 4/6(FSM-STAGE:sam:dme:ComputeBladeDiscover:BiosPostCompletion)
The following steps can be taken to try to mitigate this error, or to further determine whether a hardware failure is causing the problem:
1. Physically remove and reinsert the blade into the same slot
2. Physically remove and reinsert the blade into a different slot
3. Physically reseat CMOS battery in the blade
4. Navigate to the server in UCSM -> Inventory tab, and select "Recover Corrupt BIOS Firmware"
5. Perform a BIOS jumper recovery
6. Attempt to boot the blade with a single DIMM, and single CPU to rule out any DIMM/CPU/socket failures
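While working through these steps, you can also watch the discovery FSM from the UCSM CLI rather than the GUI. A quick sketch, assuming chassis 4 / slot 6 as in the fault above:

UCS-A# scope server 4/6
UCS-A /chassis/server # show fsm status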
VEM Control and Packet Uplink Interfaces
Need to find out which uplink your VEM is sending control or packet information out of? When using VPC-HM, you can use the following command to determine this information: "vemcmd show port-old" (run from the ESX CLI), or "module vem X execute vemcmd show port-old" (run from the VSM). The updated "vemcmd show port" output only shows VM interfaces and omits the control/packet interfaces.
LTL 10 is your Control interface
LTL 12 is your Packet interface
LTL 49 and above will be your Virtual Machine Interfaces
If you look under the SG_ID column you'll see the sub group ID of each VMNIC. From the output below, Control LTL 10 has a Pinned_SGID of 1. LTL 18 is vmnic1 which is the member interface for SGID 1. Likewise, Packet LTL 12 has a Pinned_SGID of 0. LTL 17 is vmnic0, which is the member interface for SGID 0.
~ # vemcmd show port-old
LTL   IfIndex   Vlan  Bndl  SG_ID  Pinned_SGID  Type  Admin  State  CBL  Mode    Name
  6   0          1 T     0     32           32  VIRT  UP     UP       1  Trunk   vns
  8   0         3969     0     32           32  VIRT  UP     UP       1  Access
  9   0         3969     0     32           32  VIRT  UP     UP       1  Access
 10   0         3001     0     32            1  VIRT  UP     UP       1  Access
 11   0         3968     0     32           32  VIRT  UP     UP       1  Access
 12   0         3002     0     32            0  VIRT  UP     UP       1  Access
 13   0            1     0     32           32  VIRT  UP     UP       0  Access
 14   0         3971     0     32           32  VIRT  UP     UP       1  Access
 15   0         3971     0     32           32  VIRT  UP     UP       1  Access
 16   0          1 T     0     32           32  VIRT  UP     UP       1  Trunk   arp
 17   2500c000   1 T   305      0           32  PHYS  UP     UP       1  Trunk   vmnic0
 18   2500c040   1 T   305      1           32  PHYS  UP     UP       1  Trunk   vmnic1
 49   1c000180  3002     0     32            0  VIRT  UP     UP       1  Access  n1000v-1.eth2
 50   1c000170    19     0     32            1  VIRT  UP     UP       1  Access  n1000v-1.eth1
 51   1c000120  3001     0     32            0  VIRT  UP     UP       1  Access  n1000v-1.eth0
 52   1c000060    19     0     32            1  VIRT  UP     UP       1  Access  Windows.eth0
 53   1c000030    19     0     32            0  VIRT  UP     UP       1  Access  vmk3
305   16000000   1 T     0     32           32  CHAN  UP     UP       1  Trunk
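The same output can also be pulled for a specific host from the VSM instead of the ESX CLI; for example, assuming the VEM of interest is module 3 (substitute your own module number):

n1000v# module vem 3 execute vemcmd show port-old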
4.03.2012
Nexus 1000v Recommended Mode is now Layer 3
With the advent of 4.2(1)SV1(5.1), Cisco now recommends configuring your SVS domain in Layer 3 mode for VSM-to-VEM communication (note that the installation still defaults to Layer 2). This recommendation supports best practices for VXLAN deployments:
http://www.cisco.com/en/US/docs/switches/datacenter/nexus1000/sw/4_2_1_s_v_1_5_1/install_upgrade/vsm_vem/guide/n1000v_installupgrade_overview.html#wp1083068
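For reference, moving an existing domain to Layer 3 control looks roughly like this from the VSM. A minimal sketch; the domain ID below is a placeholder, and mgmt0 could be control0 depending on your design:

! Move the SVS domain to Layer 3 control over the mgmt0 interface
! (domain id is a placeholder; control0 can be used instead of mgmt0)
svs-domain
  domain id 100
  no control vlan
  no packet vlan
  svs mode L3 interface mgmt0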