Data center compute, storage and virtualization blog to share tips and tricks with the world.
11.27.2012
MS NLB Configuration on Catalyst Switches
Reference link for MS NLB configuration on Catalyst switches:
http://www.cisco.com/en/US/products/hw/switches/ps708/products_configuration_example09186a0080a07203.shtml#mm
11.19.2012
Nexus 5K 130 Day Restart Field Notice
I have had several questions arise around Nexus 5000 field notice #63560 - Nexus 5000 System Restart after 130 days of uptime. This issue only affects 5K switches that are running one of the following software versions AND have the LAN_BASE_SERVICES_PKG license installed:
- 5.1(3)N1(1)
- 5.1(3)N1(1a)
- 5.1(3)N2(1)
- 5.1(3)N2(1a)
If you purchase an L3 daughter card for the 5K switch, the LAN_BASE_SERVICES_PKG ships with it. The following L3 features are included with the card/license:
- Static routing
- RIPv2
- OSPFv2
- EIGRP stub
- HSRP
- VRRP
- IGMP v2/v3
- PIMv2 (sparse mode)
- routed ACL
- uRPF
Source: http://www.cisco.com/en/US/ts/fn/635/fn63560.html
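A quick way to check whether a given 5K is exposed is to compare the running release and installed licenses against the lists above. These are standard NX-OS show commands; output details vary by platform and release:
switch# show version
switch# show license usage
The first confirms whether you are on one of the affected 5.1(3) releases; the second shows whether LAN_BASE_SERVICES_PKG is installed and in use.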
11.17.2012
UCS Pre-Login Banner
Have you ever wanted to configure a pre-login banner for your UCS system? I haven't, but for those of you who would like this added feature in your environment, here's how:
-Navigate within UCSM to the Admin tab -> User Management -> User Services -> Banners tab.
-Click on the Create Pre-Login Banner Action, and enter a message in the pop-up box. Only text is currently supported with this feature.
-The banner will appear for both GUI and CLI system-wide logins.
A pre-login banner can also be configured from the UCS command line using the following guide:
http://www.cisco.com/en/US/docs/unified_computing/ucs/sw/cli/config/guide/2.0/b_UCSM_CLI_Configuration_Guide_2_0_chapter_011.html#concept_5415580F234F4CE1AE4A39395E236E1A
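For reference, the CLI flow in that guide looks roughly like the sketch below. The scope and command names are recalled from the 2.0 guide and the banner text is just an example, so verify against the linked document before use:
UCS-A# scope security
UCS-A /security # scope banner
UCS-A /security/banner # create pre-login-banner
UCS-A /security/banner/pre-login-banner* # set message
Enter lines one at a time. Enter ENDOFBUF to finish. Press ^C to abort.
Enter prelogin banner:
>Authorized users only.
>ENDOFBUF
UCS-A /security/banner/pre-login-banner* # commit-buffer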
UCS Local Disk Configuration Policy Location
There is a typo in the Cisco UCS Configuration Guide regarding the location of Local Disk Configuration Policies. The document describes the Local Disk Configuration Policy as being within the Servers tab -> Service Profile -> Policies tab, when in fact it is located under the service profile in the Storage tab. You can find it in the top left corner in the Actions box.
Source: http://www.cisco.com/en/US/docs/unified_computing/ucs/sw/gui/config/guide/2.0/b_UCSM_GUI_Configuration_Guide_2_0_chapter_011100.html
8.01.2012
Bare Metal UCS Blade vNIC Configuration
When configuring a bare metal installation of an OS on a UCS blade, you need to take into consideration the fact that UCS vEthernet ports are always trunk interfaces. To essentially make a vNIC into an "access" port, you need to configure one VLAN per vNIC, with the Native VLAN checkbox checked, so that packets sent across that trunk are untagged.
Alternatively, you can leave multiple VLANs on the vNIC trunk interface, and configure subinterfaces/trunking at the OS level.
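The single-VLAN/native-VLAN approach can also be done from the UCSM CLI; a rough sketch follows. The service profile name ESX-Host-01, vNIC name eth0, and VLAN name VLAN100 are placeholders, and exact command names may vary by UCSM release:
UCS-A# scope org /
UCS-A /org # scope service-profile ESX-Host-01
UCS-A /org/service-profile # scope vnic eth0
UCS-A /org/service-profile/vnic # create eth-if VLAN100
UCS-A /org/service-profile/vnic/eth-if* # set default-net yes
UCS-A /org/service-profile/vnic/eth-if* # commit-buffer
Setting default-net marks that VLAN as native, so frames for it leave the vNIC untagged.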
4.12.2012
Migrate Nexus 1000v to New vCenter
Great set of instructions for migrating to a new Virtual Center when using the Nexus 1000v DVS:
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1020893
From recent experience, steps 7 and 9 should be reversed (configure the new SVS connection parameters before importing the extension key to vCenter). Also note, there doesn't seem to be a step #8... Not sure why!
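For reference, the SVS connection parameters mentioned above are set on the VSM roughly as shown below. The connection name, vCenter IP, and datacenter name are placeholders for your environment:
n1000v# configure terminal
n1000v(config)# svs connection vcenter
n1000v(config-svs-conn)# no connect
n1000v(config-svs-conn)# protocol vmware-vim
n1000v(config-svs-conn)# remote ip address 10.1.1.50
n1000v(config-svs-conn)# vmware dvs datacenter-name NewDC
n1000v(config-svs-conn)# connect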
4.04.2012
VM-FEX Deployment on UCS Now Easier!
If you are interested in deploying VM-FEX on the UCS, ESX/ESXi host .vib files are now accessible from the UCSM home page.
Waiting for BIOS POST completion from CIMC on server
When a blade will not complete discovery due to BIOS POST completion issues, the following fault will surface in the FSM tab of the server:
Waiting for BIOS POST completion from CIMC on server 4/6(FSM-STAGE:sam:dme:ComputeBladeDiscover:BiosPostCompletion)
The following steps can be taken to try and mitigate this error, or further determine if a hardware failure is causing this problem:
1. Physically remove and reinsert the blade into the same slot
2. Physically remove and reinsert the blade into a different slot
3. Physically reseat CMOS battery in the blade
4. Navigate to the server in UCSM -> Inventory tab, and select "Recover Corrupt BIOS Firmware"
5. Perform a BIOS jumper recovery
6. Attempt to boot the blade with a single DIMM, and single CPU to rule out any DIMM/CPU/socket failures
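Before pulling hardware, the discovery FSM state can also be watched from the UCSM CLI; a minimal sketch is below (chassis/slot 4/6 matches the fault above, and the exact show syntax may vary by release):
UCS-A# scope server 4/6
UCS-A /chassis/server # show fsm status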
VEM Control and Packet Uplink Interfaces
Need to find out which uplink your VEM is sending control or packet information out of? When using VPC-HM, you can use the following command to determine this information: "vemcmd show port-old" (run from the ESX CLI), or "module vem X execute vemcmd show port-old" (run from the VSM). The updated "vemcmd show port" output only shows VM interfaces and omits the control/packet interfaces.
LTL 10 is your Control interface
LTL 12 is your Packet interface
LTL 49 and above will be your Virtual Machine Interfaces
If you look under the SG_ID column you'll see the sub group ID of each VMNIC. From the output below, Control LTL 10 has a Pinned_SGID of 1. LTL 18 is vmnic1 which is the member interface for SGID 1. Likewise, Packet LTL 12 has a Pinned_SGID of 0. LTL 17 is vmnic0, which is the member interface for SGID 0.
~ # vemcmd show port-old
LTL   IfIndex    Vlan  Bndl  SG_ID  Pinned_SGID  Type  Admin  State  CBL  Mode    Name
  6   0           1 T     0     32           32  VIRT  UP     UP       1  Trunk   vns
  8   0          3969     0     32           32  VIRT  UP     UP       1  Access
  9   0          3969     0     32           32  VIRT  UP     UP       1  Access
 10   0          3001     0     32            1  VIRT  UP     UP       1  Access
 11   0          3968     0     32           32  VIRT  UP     UP       1  Access
 12   0          3002     0     32            0  VIRT  UP     UP       1  Access
 13   0             1     0     32           32  VIRT  UP     UP       0  Access
 14   0          3971     0     32           32  VIRT  UP     UP       1  Access
 15   0          3971     0     32           32  VIRT  UP     UP       1  Access
 16   0             1 T   0     32           32  VIRT  UP     UP       1  Trunk   arp
 17   2500c000      1 T 305      0           32  PHYS  UP     UP       1  Trunk   vmnic0
 18   2500c040      1 T 305      1           32  PHYS  UP     UP       1  Trunk   vmnic1
 49   1c000180   3002     0     32            0  VIRT  UP     UP       1  Access  n1000v-1.eth2
 50   1c000170     19     0     32            1  VIRT  UP     UP       1  Access  n1000v-1.eth1
 51   1c000120   3001     0     32            0  VIRT  UP     UP       1  Access  n1000v-1.eth0
 52   1c000060     19     0     32            1  VIRT  UP     UP       1  Access  Windows.eth0
 53   1c000030     19     0     32            0  VIRT  UP     UP       1  Access  vmk3
305   16000000      1 T   0     32           32  CHAN  UP     UP       1  Trunk
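As noted above, the same output can also be pulled from the VSM on a per-module basis (module 3 here is just an example number):
n1000v# module vem 3 execute vemcmd show port-old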
4.03.2012
Nexus 1000v Recommended Mode is now Layer 3
With the advent of 4.2(1)SV1(5.1), Cisco now recommends that you configure your SVS domain mode as Layer 3 for VSM-to-VEM communication. Note: The installation still defaults to Layer 2. This recommendation supports best practices for VXLAN deployments:
http://www.cisco.com/en/US/docs/switches/datacenter/nexus1000/sw/4_2_1_s_v_1_5_1/install_upgrade/vsm_vem/guide/n1000v_installupgrade_overview.html#wp1083068
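Converting an existing domain to Layer 3 mode looks roughly like the following on the VSM. This sketch assumes mgmt0 carries the VSM-to-VEM control traffic; control0 is the other common choice:
n1000v# configure terminal
n1000v(config)# svs-domain
n1000v(config-svs-domain)# no control vlan
n1000v(config-svs-domain)# no packet vlan
n1000v(config-svs-domain)# svs mode L3 interface mgmt0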
3.21.2012
Disconnect Active KVM Connections
To disconnect active KVM sessions that are open to a blade, you can use the "Reset KVM Server" option. This is available by navigating to Equipment -> Chassis # -> Server #, then selecting "Recover Server". The "Reset KVM Server" option will be available in the list.
2.14.2012
esxcfg-vswitch Cheat Sheet; Especially for DVS Commands
It is not uncommon for a NIC or vmk migration to cause a loss of management connectivity to the ESX host. Some or all of the following commands can be used to restore uplinks to the standard or distributed switch (a worked recovery example follows the list):
- These commands allow you to add or remove network cards (known as uplinks) to or from a Standard vSwitch:
# esxcfg-vswitch -U vmnic vSwitch # unlink an uplink
# esxcfg-vswitch -L vmnic vSwitch # add an uplink
Note: Depending on the scenario, you may instead need to unlink from or relink to a distributed switch.
- These commands allow you to add or remove network cards (known as uplinks) to or from a vNetwork Distributed Switch (vDS):
# esxcfg-vswitch -Q vmnic -V dvPort_ID_of_vmnic dvSwitch # unlink a DVS uplink
# esxcfg-vswitch -P vmnic -V unused_dvPort_ID dvSwitch # add a DVS uplink
- To create an ESX Service Console management interface (vswif) and uplink it to the vDS, run the command:
Note: This command does not apply to ESXi.
# esxcfg-vswif -a -i IP_address -n Netmask -V dvSwitch -P DVPort_ID vswif0
For example:
# esxcfg-vswif -a -i 192.168.76.1 -n 255.255.255.0 -V dvSwitch -P 8 vswif0
- To reuse the IP address of the existing management VMkernel port, perform one of these options:
- Delete an existing VMkernel port from a vDS with the command:
# esxcfg-vmknic -d -s DVswitchname -v virtual_port_ID
- Disable the management VMkernel port with the command:
# esxcfg-vmknic -D -s DVswitchname -v virtual_port_ID vmnic#
- To create a VMkernel port and attach it to the DVPort ID on a vDS, run the command:
# esxcfg-vmknic -a -i IP_address -n netmask -s DVswitchname -v virtual_port_ID
- To create a VMkernel port and attach it to the DVPort ID on a vSS, run the command:
# esxcfg-vmknic -a -i IP_address -n netmask portgroup
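Putting a few of these together, a typical recovery after a host drops off the network during a vmnic migration might look like the sketch below. The names vmnic0, dvPort 100, dvSwitch, vSwitch0, the "Management Network" portgroup, and the IP address are all placeholders for your environment:
# esxcfg-vswitch -Q vmnic0 -V 100 dvSwitch # pull vmnic0 off the vDS
# esxcfg-vswitch -L vmnic0 vSwitch0 # link vmnic0 back to the standard switch
# esxcfg-vswitch -A "Management Network" vSwitch0 # create the portgroup if it does not already exist
# esxcfg-vmknic -a -i 192.168.76.10 -n 255.255.255.0 "Management Network" # recreate the management vmk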
1.19.2012
UCS 2.0 Upgrade Order
It seems the most important part of the 2.0 upgrade guides, the required order of steps, is now buried deep in the guide.
The following order is used when upgrading from 1.4 to 2.0:
1. Update adapters, CIMC, IOMs
2. Activate adapters
3. Activate CIMC
4. Activate UCSM
5. Activate IOMs - set startup only
6. Activate subordinate FI
7. Manually failover FIs if there is a concern for loss of control plane traffic (Optional in lab environments)
8. Activate primary FI
9. Host firmware package
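Before and after stepping through the order above, the running versions can be sanity-checked from the UCSM CLI. This is a rough sketch from memory, so confirm the commands against the upgrade guide:
UCS-A# show version
UCS-A# scope firmware
UCS-A /firmware # show package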
1.10.2012
ARP Table on ESX 4.1/5.0
VMware has the CLI pretty locked down on ESX version 4.1 and later (as compared to the commands that were available in 4.0). The following can be used to view the ARP table in later versions:
esxcli network neighbor list
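If that form is not recognized on your build, the command may sit under the ip namespace instead (this varies by esxcli version):
esxcli network ip neighbor list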