Configuring ESXi Storage Networks

In our first ESXi host networking post, we covered some basic terminology and concepts that are fundamental to understanding ESXi host configuration. Let's now move on to configuring the various core networks required on our ESXi host, starting with the ESXi storage networks that carry iSCSI storage traffic.


ESXi Storage Networks

We can use the legacy vSphere Client, which many still prefer, or we can add our host to the vCenter Server and configure it from there. In this post, we will show how to set up ESXi storage networks in the vSphere Client, which still allows connectivity to hosts running ESXi 6.0 U2. Let's first take a look at creating ESXi storage networks. Click the Add Networking link in the upper right-hand corner.

Configuring ESXi Storage Networks

Since we will be configuring iSCSI networking, we choose the VMkernel connection type.

VMkernel Connection type

Next, we create the vSphere standard switch that will handle the network traffic for our iSCSI network. Note that you select, via checkboxes, the vmnics you want to use with the new vSwitch.

vSphere standard switch
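For reference, the same vSwitch can also be created from the ESXi command line with esxcli. The switch name vSwitch1 and the vmnic names below are placeholders; substitute the adapters you selected in the wizard:

```shell
# Create a new standard vSwitch for iSCSI traffic
esxcli network vswitch standard add --vswitch-name=vSwitch1

# Attach the physical NICs chosen for storage traffic
# (vmnic2 and vmnic3 are example names; list yours with: esxcli network nic list)
esxcli network vswitch standard uplink add --uplink-name=vmnic2 --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --uplink-name=vmnic3 --vswitch-name=vSwitch1
```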

The next step is to name the first port group for our new vSwitch. For iSCSI, we will create multiple port groups. We do this for redundancy and performance. We set the Network Label for the port group as well as any VLAN ID that we want to assign. This is optional and may not be needed or desired in your environment.

Setting VLANs at the port group level is known as Virtual Switch Tagging (VST). In many VMware environments with a dedicated storage switch, the ports are set to untagged mode, meaning the physical switch performs all of the tagging operations and the virtual switch is unaware of them. This is known as External Switch Tagging (EST).

Each environment is different, and this determines how the VLAN setting is configured for each port group in your environment.

Network labels for VMkernel connection identification
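The port group creation and optional VLAN tagging can likewise be done with esxcli; the port group name iSCSI1 and the VLAN ID 20 below are examples only:

```shell
# Add the first iSCSI port group to the new vSwitch
esxcli network vswitch standard portgroup add --portgroup-name=iSCSI1 --vswitch-name=vSwitch1

# Optionally tag the port group with a VLAN ID (VST mode);
# a VLAN ID of 0 leaves traffic untagged, as in EST mode
esxcli network vswitch standard portgroup set --portgroup-name=iSCSI1 --vlan-id=20
```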

Next, we set the IP address settings for the VMkernel port.

IP address for VMkernel port
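The equivalent esxcli commands for creating the VMkernel interface and assigning its address are shown below; the interface name vmk1 and the IP settings are placeholders for your storage subnet:

```shell
# Create a VMkernel interface on the iSCSI port group
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI1

# Assign a static IPv4 address on the storage subnet
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=10.10.10.11 --netmask=255.255.255.0 --type=static
```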

We are now ready to complete the configuration of the first iSCSI port group.

Complete the configuration of iSCSI port group

Now we have a new vSwitch with our first iSCSI VMkernel port configured.

New vSwitch with iSCSI VMkernel port

Now, go to the Properties of vSwitch1 and let’s add another port group so we can adjust our vmnic assignments.

Add another port group to vSwitch1

Select Add and then VMkernel as the connection type.

VMkernel as connection type

We can name this one iSCSI2 or something else intuitive.

Network label - iSCSI2

We need to assign a different IP address in the same subnet as our other storage VMkernel port. When finished, we will have two VMkernel ports assigned to our new vSwitch carrying our iSCSI traffic.

2 VMkernel ports assigned to new vSwitch
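The second port group and VMkernel port follow the same pattern from the command line; iSCSI2, vmk2, and the IP address below are example values in the same storage subnet:

```shell
# Add the second iSCSI port group and its VMkernel interface
esxcli network vswitch standard portgroup add --portgroup-name=iSCSI2 --vswitch-name=vSwitch1
esxcli network ip interface add --interface-name=vmk2 --portgroup-name=iSCSI2

# Assign a different static IP in the same subnet as the first storage VMkernel port
esxcli network ip interface ipv4 set --interface-name=vmk2 --ipv4=10.10.10.12 --netmask=255.255.255.0 --type=static
```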

Jumbo Frames and NIC teaming assignments

We need to enable jumbo frames on both the new vSwitch and each new port group. iSCSI performance benefits greatly from the use of jumbo frames. Jumbo frames are Ethernet frames with a payload larger than the standard maximum transmission unit (MTU) of 1500 bytes. Typically for iSCSI and similar use cases, we use an MTU value of 9000.

vSphere standard switch advanced properties

Setting the MTU value in the vSwitch properties.

iSCSI properties

Setting the jumbo frame value on each port group. You need to do this for every port group added to your iSCSI vSwitch.
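The jumbo frame settings can also be applied with esxcli, assuming the example names vSwitch1, vmk1, and vmk2 from earlier in this walkthrough:

```shell
# Raise the MTU on the vSwitch itself
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000

# Raise the MTU on each iSCSI VMkernel interface
esxcli network ip interface set --interface-name=vmk1 --mtu=9000
esxcli network ip interface set --interface-name=vmk2 --mtu=9000
```

Note that the vSwitch MTU is the ceiling for all port groups on it, so it must be set to 9000 as well, not just the VMkernel interfaces.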

Also important is setting our NIC Teaming properties. For each port group, we check Override switch failover order, set Failback to No, and move the second vmnic to the Unused state. Why? This prevents the VMkernel port from floating across uplinks in the case of a failure.

The reason is that if the physical NIC loses connectivity, it should be treated as a storage path failure, not a network failure. We want the Pluggable Storage Architecture (PSA) in the VMkernel to handle the event and fail over to an alternate path so the host stays connected to the storage. Storage resiliency is then based on multiple paths to the storage, rather than on multiple uplinks behind a single storage path.

NIC Teaming properties
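The failover-order override can be scripted with esxcli as well; any uplink not listed as active or standby is placed in the Unused state. The port group and vmnic names below are the example values used throughout this post:

```shell
# iSCSI1 uses vmnic2 as its only active uplink (vmnic3 becomes Unused); failback disabled
esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI1 --active-uplinks=vmnic2 --failback=false

# iSCSI2 mirrors the assignment with vmnic3 active and vmnic2 Unused
esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI2 --active-uplinks=vmnic3 --failback=false
```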


After we have made our NIC teaming changes and ensured that jumbo frames are set up properly on both the vSwitch and the port groups we are using for iSCSI, we are ready to attach to our iSCSI storage.
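Before attaching to the storage, it is worth validating jumbo-frame connectivity and, if you use the software iSCSI adapter, binding the VMkernel ports to it. The target IP and the adapter name vmhba33 below are placeholders for your environment:

```shell
# Verify jumbo-frame connectivity to the storage target:
# -d disallows fragmentation, and 8972 = 9000 minus 28 bytes of IP/ICMP headers
vmkping -I vmk1 -d -s 8972 10.10.10.50
vmkping -I vmk2 -d -s 8972 10.10.10.50

# Bind both VMkernel ports to the software iSCSI adapter
# (find the adapter name with: esxcli iscsi adapter list)
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2
```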
