Finally got Jumbo Frames to work. Tutorials on the Internet are usually incomplete.
Based on http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1007654
Say you have two NICs in your ESXi host: NIC 1 is set up with vSwitch0 for the VMs and the Management Network, and NIC 2 is set up with vSwitch1 for the iSCSI storage network. The settings below must be done from the command line; there is no GUI in the VI Client for them.
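For reference, this is roughly how vSwitch1 itself would be created from the command line, in case it does not exist yet (a sketch; vmnic1 is my assumed name for NIC 2, check esxcfg-nics -l for the real one):
$ esxcfg-vswitch -a vSwitch1
$ esxcfg-vswitch -L vmnic1 vSwitch1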
Now log in to your ESXi host over SSH.
List current MTU of vSwitches:
$ esxcfg-vswitch -l
Set the MTU to 9000 (Jumbo Frames) for vSwitch1:
$ esxcfg-vswitch -m 9000 vSwitch1
Verify the change.
$ esxcfg-vswitch -l
$ esxcfg-nics -l
Now we need to create a Jumbo Frames-enabled VMkernel interface. If there are already VMkernel ports configured on this vSwitch, they must be removed first; there is no way to edit an existing VMkernel port to support Jumbo Frames. It has to be created with the Jumbo Frame parameter from the start.
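If you do have to remove an existing VMkernel port first, it would look roughly like this (a sketch; OldStorageNetwork is my assumed name for the existing port group, check esxcfg-vmknic -l for the real one):
esxcfg-vmknic -d OldStorageNetwork
esxcfg-vswitch -D OldStorageNetwork vSwitch1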
First, create the port group StorageNetwork on the Jumbo Frame-enabled vSwitch1:
esxcfg-vswitch -A StorageNetwork vSwitch1
Then create a VMkernel connection with Jumbo Frame support:
esxcfg-vmknic -a -i 10.56.51.78 -n 255.255.255.0 -m 9000 StorageNetwork
Verify:
esxcfg-vmknic -l
To test it, we can ping a Jumbo Frame-enabled NAS with a large packet from this ESXi host. Note that the IP and ICMP headers add 28 bytes of overhead, so the payload size is 9000 - 28 = 8972 bytes, although in most cases a size of 9000 also works.
ping -s 8972 10.56.51.1
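On ESXi you can also run the same test with vmkping and the don't-fragment flag, so the ping fails instead of being silently fragmented:
vmkping -d -s 8972 10.56.51.1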
Or ping this ESXi host from another system on the same subnet, assuming that system is also Jumbo Frame enabled and the switch has Jumbo Frames enabled:
From Linux:
ping -c 4 -s 8972 -M do 10.56.51.78
From Windows:
ping 10.56.51.78 -f -l 8972
That is all.
Thursday, August 12, 2010