NSX-v vs NSX-T: Comprehensive Comparison
By: NAKIVO Team
Virtualization has revolutionized the way datacenters are built. Most modern datacenters use hardware virtualization, deploying physical servers as hypervisors to run virtual machines. This approach improves the scalability, flexibility, and cost efficiency of the datacenter. VMware is one of the top players in the virtualization market, and its products are highly respected in the IT industry; VMware ESXi and VMware vCenter are widely known components of the VMware vSphere virtualization solution.
The network is a crucial component of every datacenter, including virtualized datacenters. If you require large networks and complex network configurations for your virtualized datacenter, consider using software-defined networking (SDN). Software-defined networking is an architecture that aims to make networks agile and flexible; its goal is to improve network control by enabling enterprises and service providers to respond quickly to changing business requirements. VMware provides the NSX solution for building software-defined networks. Today’s blog post covers VMware NSX and explores the difference between VMware NSX-v and VMware NSX-T.
What Is VMware NSX and How Can It Be Used?
VMware NSX is a network virtualization solution that allows you to build software-defined networks in virtualized datacenters. Just as VMs are abstracted from physical server hardware, virtual networks — including switches, ports, routers, and firewalls — are constructed in the virtual space. Virtual networks are provisioned and managed independently of the underlying hardware. Virtual machines are connected to virtual ports of virtual switches, connections between virtual networks are made with virtual routers, and access rules are configured on virtual firewalls. Network load balancing is also available. VMware NSX is the successor to VMware vCloud Networking & Security (vCNS) and Nicira NVP, which VMware acquired in 2012.
With the traditional approach to configuring access between multiple networks in a virtual environment, a physical router or an edge gateway running on a VM is usually deployed, though this approach is not especially fast or convenient. VMware has implemented the micro-segmentation concept in NSX by using a distributed firewall built into the core of the hypervisor. Security policies and network interaction parameters for IP addresses, MAC addresses, VMs, applications, and other objects are all set in this distributed firewall. Rules can also be configured with objects such as Active Directory users and groups if NSX is deployed in a company that uses an Active Directory Domain Controller (ADDC). Each object can be considered a micro-segment with its own security perimeter in the appropriate network, which has its own DMZ (demilitarized zone).
The Distributed Firewall allows you to segment virtual data center entities like virtual machines. Segmentation can be based on VM names and attributes, user identity, vCenter objects like data centers, and hosts, or can be based on traditional networking attributes like IP addresses, port groups, and so on.
The Edge Firewall component helps you meet key perimeter security requirements, such as building DMZs based on IP/VLAN constructs, tenant-to-tenant isolation in multi-tenant virtual data centers, Network Address Translation (NAT), partner (extranet) VPNs, and user-based SSL VPNs.
If a VM is migrated from one host to another — even from one subnet to another — the access rules and security policies follow the VM to its new location. If a database server is running on a migrated VM, the rules set for this VM in the firewall continue to work after the migration to another host or network completes, letting the database server access the application server running on a VM that was not migrated. This is an example of the improved flexibility and automation you get with VMware NSX. NSX can be especially useful for cloud providers and large virtual infrastructures. VMware offers two types of the NSX software-defined networking platform: NSX-v and NSX-T.
NSX for vSphere (NSX-v) is tightly integrated with VMware vSphere and requires deployment of VMware vCenter. VMware NSX-v is specific to vSphere hypervisor environments and was developed before NSX-T.
NSX-T (NSX-Transformers) was designed for different virtualization platforms and multi-hypervisor environments and can also be used in cases where NSX-v is not applicable. While NSX-v supports SDN only for VMware vSphere, NSX-T also supports the network virtualization stack for KVM, Docker, Kubernetes, and OpenStack, as well as AWS native workloads. VMware NSX-T can be deployed without a vCenter Server and is adapted for heterogeneous compute systems.
The main scenarios for using NSX are listed in the table below, grouped into three categories.

| Security | Automation | Application continuity |
|---|---|---|
| Micro-segmentation | Automating IT | Disaster recovery |
| Secure end user | Developer cloud | Multi data center pooling |
| DMZ anywhere | Multi-tenant infrastructure | Cross cloud |
The main components of VMware NSX are NSX Manager, NSX controllers, and NSX Edge gateways.
NSX Manager is the centralized component of NSX used for network management. NSX Manager can be deployed as a VM (from an OVA template) on one of the ESXi servers managed by vCenter. With NSX-v, NSX Manager can work with only one vCenter Server, whereas NSX Manager for NSX-T can be deployed as an ESXi VM or a KVM VM and can work with multiple vCenter Servers at once.
NSX Manager for vSphere is based on the Photon OS (similar to the vCenter Server Appliance).
NSX-T Manager runs on the Ubuntu operating system.
NSX controllers. The NSX Controller is a distributed state management system that controls virtual networks and overlay transport tunnels; it can be deployed as a VM on ESXi or KVM hypervisors. The NSX Controller manages all logical switches within the network and handles information about VMs, hosts, switches, and VXLANs. Deploying three controller nodes ensures data redundancy in case one NSX Controller node fails.
NSX Edge is a gateway service that provides access to physical and virtual networks for VMs. NSX Edge can be installed as a distributed virtual router or as a services gateway. The following services can be provided: Dynamic routing, firewalls, Network Address Translation (NAT), Dynamic Host Configuration Protocol (DHCP), Virtual Private Network (VPN), Load Balancing, and High Availability.
The deployment concept is quite similar for both NSX-v and NSX-T. You should perform the following steps to deploy NSX:
- Deploy NSX Manager as a VM on an ESXi host using a virtual appliance. Be sure to register NSX Manager with vCenter (for NSX-v). If you are using NSX-T, NSX Manager can also be deployed as a virtual appliance on a KVM host, and NSX-T allows you to create a cluster of NSX Managers.
- Deploy three NSX controllers and create an NSX controller cluster.
- Install VIBs (kernel modules) on ESXi hosts to enable the distributed firewall, distributed routing, and VXLAN if you are using NSX-v. If you are using NSX-T, kernel modules must also be installed on KVM hypervisors.
- Install NSX Edge as a VM on ESXi (for NSX-v and NSX-T). If you are using NSX-T and cannot install Edge as a virtual machine on ESXi, Edge can be deployed on a physical server. Installing Edge as a VM on KVM hypervisors is not supported at this time (as of NSX-T 2.3). If you need to deploy Edge on a physical server, check the hardware compatibility list (important for CPUs and NICs) before doing so.
NSX Common Capabilities
A number of capabilities are available in both NSX types.
The common capabilities for NSX-v and NSX-T are:
- Software based network virtualization
- Software based overlay
- Distributed routing
- Distributed firewalling
- API-driven Automation
- Detailed monitoring and statistics
Be aware that APIs are different for NSX-v and NSX-T.
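To illustrate that difference, the sketch below builds (but does not send) a logical-switch creation payload for each API: NSX-v expects XML bodies, while NSX-T expects JSON. The field names follow the respective REST APIs, but all values (switch name, tenant ID, transport zone ID) are placeholders for illustration:

```python
import json
import xml.etree.ElementTree as ET

def nsxv_logical_switch_payload(name: str) -> str:
    """NSX-v style: an XML <virtualWireCreateSpec> body for creating a
    logical switch (the tenant ID here is a placeholder value)."""
    spec = ET.Element("virtualWireCreateSpec")
    ET.SubElement(spec, "name").text = name
    ET.SubElement(spec, "tenantId").text = "tenant-1"  # placeholder
    return ET.tostring(spec, encoding="unicode")

def nsxt_logical_switch_payload(name: str, transport_zone_id: str) -> str:
    """NSX-T style: a JSON body for creating a logical switch
    (the transport zone ID is a placeholder value)."""
    return json.dumps({
        "display_name": name,
        "transport_zone_id": transport_zone_id,
        "admin_state": "UP",
        "replication_mode": "MTEP",  # hierarchical two-tier replication
    })
```

Automation written against one API therefore cannot be reused as-is against the other; payload formats and endpoint paths both differ.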
Licensing is the same for both NSX types, which gives you more flexibility and universality. For example, you can order a license for NSX for vSphere, and if you later change your infrastructure and need to deploy NSX-T, you can use the license obtained for NSX-v. NSX is NSX — there is no distinction from the licensing side, and the licensing editions are also the same.
Overlay encapsulation for virtual networks is used to abstract virtual networks by carrying layer 2 information over layer 3. A logical layer 2 network is created over existing layer 3 networks (IP networks) on an existing physical infrastructure. As a result, two VMs can communicate with each other over the network, even if the path between VMs must be routed. A physical network can be called the underlay network.
VXLAN vs GENEVE
NSX-v uses the VXLAN encapsulation protocol, while NSX-T uses GENEVE, a more modern protocol.
VXLAN. VXLAN uses MAC-over-IP encapsulation, and its network isolation principle differs from the VLAN technique. A traditional VLAN deployment has a limited number of networks — 4094, according to the 802.1Q standard — and network isolation is performed at layer 2 of the physical network by adding 4 bytes to Ethernet frame headers. The maximum number of virtual networks for VXLAN is 2^24, with a VXLAN network identifier marking each virtual network. Layer-2 frames of the overlay network are encapsulated within UDP datagrams transmitted over the physical network, using UDP port 4789.
The VXLAN header consists of the following parts.
- 8 bits are used for flags. The I flag must be set to 1 for the VXLAN Network Identifier (VNI) to be valid. The other 7 bits (R) are reserved and must be set to zero on transmission; R fields set to zero are ignored on receipt.
- VXLAN Network Identifier (VNI), also known as the VXLAN Segment ID, is a 24-bit value that identifies the individual overlay network over which VMs communicate with each other.
- Reserved fields (24-bit and 8-bit) must be set to zero and ignored on receipt.
The size of the VXLAN header is fixed at 8 bytes. Using jumbo frames with an MTU of 1600 bytes or more is recommended for VXLAN.
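The fixed 8-byte layout described above can be sketched in a few lines of Python. This is an illustrative encoder/decoder, not NSX code:

```python
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned UDP port for VXLAN

def build_vxlan_header(vni: int) -> bytes:
    """Build the fixed 8-byte VXLAN header:
    byte 0     -- flags: the I bit (0x08) marks the VNI as valid,
                  the remaining 7 R bits are reserved and sent as zero
    bytes 1-3  -- reserved, zero
    bytes 4-6  -- 24-bit VXLAN Network Identifier (VNI)
    byte 7     -- reserved, zero
    """
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    flags = 0x08  # I flag set, R bits zero
    return struct.pack("!II", flags << 24, vni << 8)

def parse_vni(header: bytes) -> int:
    """Extract the VNI from an 8-byte VXLAN header."""
    flags_word, vni_word = struct.unpack("!II", header)
    if not ((flags_word >> 24) & 0x08):
        raise ValueError("I flag not set: VNI is not valid")
    return vni_word >> 8
```

The 24-bit VNI field is exactly what gives VXLAN its 2^24 (over 16 million) virtual networks, compared with the 4094 VLANs of 802.1Q.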
GENEVE. The GENEVE header looks a lot like the VXLAN header and has the following structure:
- A compact tunnel header is encapsulated in UDP over IP.
- A small fixed tunnel header is used to provide control information, as well as a base level of functionality and interoperability.
- Variable-length options make it possible to implement future innovations.
The size of the GENEVE header is variable.
NSX-T uses GENEVE (GEneric NEtwork Virtualization Encapsulation), a tunneling protocol that preserves the traditional offload capabilities available on NICs (network interface controllers) for the best performance. Additional metadata can be added to overlay headers, which improves context differentiation when processing information such as end-to-end telemetry, data tracking, encryption, and security on the data transfer layer. The additional information in the metadata is expressed as TLVs (Type, Length, Value). GENEVE was developed by VMware, Intel, Red Hat, and Microsoft, and it builds on the best concepts of the VXLAN, STT, and NVGRE encapsulation protocols.
The MTU value for jumbo frames must be at least 1700 bytes when using GENEVE encapsulation because of the additional variable-length metadata field in GENEVE headers (recall that an MTU of 1600 or higher is used for VXLAN).
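A minimal parser for the fixed part of a GENEVE header shows why its total size is variable: a 6-bit option-length field states how many 4-byte option blocks follow the fixed 8 bytes. This is an illustrative sketch; the sample protocol value 0x6558 (Transparent Ethernet Bridging) and VNI are example inputs:

```python
import struct

def parse_geneve_base(header: bytes) -> dict:
    """Parse the fixed 8-byte portion of a GENEVE header.

    Word 0: Ver (2 bits) | Opt Len (6 bits, in 4-byte units) |
            O, C flags and reserved bits | Protocol Type (16 bits)
    Word 1: 24-bit Virtual Network Identifier (VNI) | 8 reserved bits
    """
    first, second = struct.unpack("!II", header[:8])
    version = first >> 30
    opt_len_bytes = ((first >> 24) & 0x3F) * 4  # variable options follow
    protocol = first & 0xFFFF
    vni = second >> 8
    return {
        "version": version,
        "options_len": opt_len_bytes,
        "protocol": protocol,
        "vni": vni,
        # total header size grows with the options, unlike VXLAN's fixed 8:
        "header_len": 8 + opt_len_bytes,
    }
```

This variable-length options area is where the TLV metadata mentioned above is carried, and it is the reason the recommended jumbo-frame MTU is higher for GENEVE than for VXLAN.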
NSX-v and NSX-T are not compatible due to the overlay encapsulation difference explained in this section.
Now that you know how virtual layer-2 Ethernet frames are encapsulated over IP networks, it’s time to explore the implementation of virtual layer-2 networks in NSX-v and NSX-T.
Transport nodes and virtual switches
Transport nodes and virtual switches represent NSX data transferring components.
A Transport Node (TN) is an NSX-compatible device that participates in traffic transmission and the NSX networking overlay. A node must contain a hostswitch to be able to serve as a transport node.
NSX-v requires the use of a vSphere Distributed Switch (VDS), as usual in vSphere. Standard virtual switches cannot be used with NSX-v.
NSX-T requires you to deploy an NSX-T distributed virtual switch (N-VDS). Open vSwitch (OVS) is used on KVM hosts, and VMware virtual switches are used on ESXi hosts for this purpose.
N-VDS (a virtual distributed switch, previously known as a hostswitch) is a software NSX component on the transport node that performs traffic transmission. The N-VDS is the primary component of the transport node’s data plane: it forwards traffic and owns at least one physical network interface controller (NIC). The N-VDS instances of different transport nodes are independent, but they can be grouped by assigning them the same name for centralized management.
On ESXi hypervisors, the N-VDS is implemented with the VMware vSphere Distributed Switch through the NSX-vSwitch module loaded into the hypervisor kernel. On KVM hypervisors, the hostswitch is implemented by the Open vSwitch (OVS) module.
Transport zones are available in both NSX-v and NSX-T. A transport zone defines the limits of logical network distribution, and each transport zone is linked to its N-VDS. Transport zones in NSX-T are not linked to clusters.
VMware NSX-T has two types of transport zones: Overlay (GENEVE-based) and VLAN. In VMware NSX-v, a transport zone defines the distribution limits of VXLAN only.
Logical switch replication modes
When two virtual machines residing on different hosts communicate directly, unicast traffic is exchanged in encapsulated form between the two endpoint IP addresses assigned to the hypervisors, with no need for flooding. Sometimes, however, layer-2 traffic originated by a VM must be flooded just as layer-2 traffic is in traditional physical networks — for example, when the sender doesn’t know the MAC address of the destination network interface. This means the same traffic must be sent to all VMs connected to the same logical switch; if those VMs reside on different hosts, the traffic must be replicated to those hosts. Broadcast, unknown unicast, and multicast traffic is known as BUM traffic.
Let’s see the difference between replication modes for NSX-v and NSX-T.
NSX-v supports Unicast mode, Multicast mode and Hybrid mode.
NSX-T supports Unicast mode with two options: Hierarchical Two-Tier replication (optimized, the same as for NSX-v) and Head replication (not optimized).
ARP suppression reduces the amount of ARP broadcast traffic sent over the network and is available for the Unicast and Hybrid traffic replication modes. Thus, ARP suppression is available in both NSX-v and NSX-T.
When VM1 sends an ARP request to learn the MAC address of VM2, the request is intercepted by the logical switch. If the switch already has an ARP entry for the target network interface of VM2, the switch itself sends the ARP response to VM1. Otherwise, the switch forwards the ARP request to an NSX Controller. If the controller holds the VM’s IP-to-MAC binding, it replies with that binding, and the logical switch then sends the ARP response to VM1. If there is no ARP entry on the NSX Controller either, the ARP request is re-broadcast on the logical switch.
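The lookup order described above can be modeled with a small sketch. This is a toy model for illustration only — the real logic lives in the hypervisor kernel and the NSX Controller:

```python
class LogicalSwitch:
    """Toy model of ARP suppression on an NSX logical switch."""

    def __init__(self, controller_table: dict):
        self.local_arp = {}                 # IP -> MAC entries cached on the switch
        self.controller = controller_table  # IP -> MAC bindings held by the controller
        self.broadcasts = 0                 # how many ARP requests were actually flooded

    def resolve(self, ip: str):
        # 1. Answer from the switch's own ARP cache if possible.
        if ip in self.local_arp:
            return self.local_arp[ip]
        # 2. Otherwise ask the NSX Controller for the IP-to-MAC binding.
        mac = self.controller.get(ip)
        if mac is not None:
            self.local_arp[ip] = mac        # cache it for later requests
            return mac
        # 3. Only as a last resort re-broadcast the ARP request.
        self.broadcasts += 1
        return None
```

The broadcast counter makes the benefit visible: only addresses unknown to both the switch and the controller generate flooded BUM traffic.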
NSX layer 2 bridging
Layer 2 bridging is useful for migrating workloads from overlay networks to VLANs, or for splitting subnets across physical and virtual workloads.
NSX-v: This feature works at the kernel level of the hypervisor on which a control VM is running.
NSX-T: A separate NSX-bridge node is created for this purpose. NSX bridge nodes can be assembled into clusters to improve fault tolerance of the entire solution.
In the NSX-v control VM, redundancy is implemented using a High Availability (HA) scheme: one VM copy is active while the second is on standby. If the active VM fails, it can take some time to switch over and bring the standby VM up as the active one. NSX-T does not have this disadvantage, since a fault-tolerant cluster is used instead of the active/standby scheme for HA.
The Routing Model
In cases where you are using VMware NSX, the following terms are used:
East-west traffic refers to data transferred over the network within the datacenter. The name comes from network diagrams, on which horizontal lines typically indicate local area network (LAN) traffic.
North-south traffic refers to client-server traffic or traffic that moves between a datacenter and a location outside the datacenter (external networks). Vertical lines on the diagrams usually describe this type of network traffic.
Distributed logical router (DLR) is a virtual router which can use static routes and dynamic routing protocols such as OSPF, IS-IS or BGP.
Tenant refers to a customer or an organization that gets access to an isolated secure environment provided by a managed service provider (MSP). A large organization can use multi-tenant architecture by regarding each department as a single tenant. VMware NSX can be particularly useful for providing Infrastructure as a Service (IaaS).
Routing in NSX-v
NSX for vSphere uses a DLR (distributed logical router) together with centralized routing. A routing kernel module on each hypervisor performs routing between the logical interfaces (LIFs) of the distributed router.
Consider, for example, a typical routing scheme for NSX-v with a set of three segments: VMs running databases, VMs running application servers, and VMs running web servers. The VMs of these segments are connected to a distributed logical router (DLR), which is in turn connected to external networks via edge gateways (NSX Edge).
If you are working with multiple tenants, you can use a multi-tier NSX Edge construction, or each tenant can have its own dedicated DLR and controller VM, the latter of which resides on the edge cluster. The NSX Edge gateway connects isolated, stub networks to shared (uplink) networks by providing common gateway services such as DHCP, VPN, NAT, dynamic routing, and Load Balancing. Common deployments of NSX Edge include in the DMZ, VPN Extranets, and multi-tenant Cloud environments where the NSX Edge creates virtual boundaries for each tenant.
If you need to transmit traffic from a VM located in segment A of the first tenant to segment A of the second tenant, the traffic must pass through the NSX Edge gateway. In this case, there is no distributed routing, as traffic must pass through a single point: the designated NSX Edge gateway.
Consider a deployment in which the components are divided into three clusters — a Management cluster, an Edge cluster, and a Compute cluster — each using two ESXi hosts. If two VMs run on the same ESXi host but belong to different network segments, traffic passes through the NSX Edge gateway located on another ESXi host in the Edge cluster. After routing, this traffic must be transmitted back to the ESXi host on which the source and destination VMs are running.
The traffic path is not optimal in this case. The advantages of distributed routing cannot be utilized in this multi-tenant model with Edge gateways, resulting in greater latency for your network traffic.
Routing in NSX-T
NSX-T uses a two-tier distributed routing model to resolve the issues explained above. Both Tier-0 and Tier-1 routers are created on transport nodes; Tier-1 is optional and is intended to improve scalability.
Traffic is transmitted over the most optimal path, as routing is performed directly on the ESXi or KVM hypervisor on which the VMs are running. The only case in which a fixed routing point must be used is connecting to external networks; separate Edge nodes are deployed for this purpose.
Additional services such as BGP, NAT, and the Edge firewall can be enabled on Edge nodes, which can in turn be combined into a cluster to improve availability. What’s more, NSX-T also provides faster failure detection. Simply put, the best way to distribute routing is to route inside the virtualized infrastructure.
IP addressing for virtual networks
When you configure NSX-v, you need to compose an IP addressing plan for the NSX segments. Transit logical switches that link DLRs and Edge gateways must also be added. If you use a high number of Edge gateways, you should plan the IP addressing scheme for the segments linked by these gateways.
NSX-T, however, does not require these operations. All network segments between Tier0 and Tier1 obtain IP addresses automatically. No dynamic routing protocols are used—instead, static routes are used and a system connects the components automatically, making configuration easier; you don’t need to spend lots of time planning IP addressing for service (transit) network components.
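The automatic transit addressing can be illustrated with a short sketch. The pool address and the /31 point-to-point link size below are assumptions made for the example, not the exact values NSX-T picks:

```python
import ipaddress

# Assumed reserved range for Tier-0 / Tier-1 transit links (illustrative).
TRANSIT_POOL = ipaddress.ip_network("100.64.0.0/16")

def allocate_transit_subnets(tier1_count: int):
    """Hand out one point-to-point /31 subnet per Tier-1 router uplink,
    the way a system could auto-address transit links without any
    operator-maintained IP plan."""
    subnets = TRANSIT_POOL.subnets(new_prefix=31)
    return [next(subnets) for _ in range(tier1_count)]
```

Because each link gets a unique subnet from a dedicated pool, the operator never has to reconcile transit addressing with the workload segments, which is the convenience the paragraph above describes.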
Integration for Traffic Inspection
NSX-v offers integration with third-party services such as agentless antiviruses, advanced firewalling (next-generation firewalls), IDS (Intrusion Detection Systems), IPS (Intrusion Prevention Systems), and other types of traffic inspection services. Integration with these types of traffic inspection is performed at the hypervisor kernel layer using a protected VMCI (Virtual Machine Communication Interface) bus.
NSX-T does not provide these capabilities at this time.
Kernel-level distributed firewalls can be configured for NSX-v and NSX-T, working on a VM virtual adapter level. Switch security options are available for both NSX types, but the “Rate-limit Broadcast & Multicast traffic” option is available only for NSX-T.
NSX-T allows you to apply rules in a more granular fashion, so transport-node resources are used more rationally. For example, you can apply rules to objects such as a logical switch, a logical port, or an NSGroup. This reduces the rule set configured on each logical switch, logical port, or NSGroup instance, saving scale space and rule lookup cycles. It also suits multi-tenant deployments, where tenant-specific rules are applied only to the workloads of the appropriate tenant.
The process of creating and applying rules is quite similar for NSX-v and NSX-T. The difference is that policies created for NSX-T are sent to all controllers, where rules are converted to IP addresses, while in NSX-v, policies are transferred directly to the vShield Firewall Daemon (VSFWD).
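To make the granular rule application concrete, here is a hedged sketch of an NSX-T-style distributed firewall rule body. The field names follow the NSX-T REST API, but the rule name and object IDs are placeholders invented for this example:

```python
import json

def nsxt_dfw_rule(name: str, switch_id: str) -> str:
    """Build a JSON body for a distributed firewall rule that is applied
    only to one logical switch instead of to every transport node."""
    rule = {
        "display_name": name,
        "action": "ALLOW",
        "ip_protocol": "IPV4",
        # "Applied To" scopes the rule to a specific object; this is what
        # makes NSX-T rule application more granular than pushing every
        # rule everywhere:
        "applied_tos": [
            {"target_type": "LogicalSwitch", "target_id": switch_id}
        ],
    }
    return json.dumps(rule)
```

Scoping a rule this way means hosts that carry no ports of that logical switch never spend lookup cycles on it.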
NSX-v vs NSX-T – Comparison Table
Now that you are familiar with the most interesting capabilities of VMware NSX, let’s summarize the main features of NSX-v and NSX-T explored in this blog post and compare them in the table below.
| Feature | NSX-v | NSX-T |
|---|---|---|
| Tight integration with vSphere | Yes | No |
| Working without vCenter | No | Yes |
| Support for multiple vCenter instances by NSX Manager | No | Yes |
| Supported virtualization platforms | VMware vSphere | VMware vSphere, KVM, Docker, Kubernetes, OpenStack, AWS native workloads |
| NSX Edge deployment | ESXi VM | ESXi VM or physical server |
| Overlay encapsulation protocol | VXLAN | GENEVE |
| Virtual switches used | vSphere Distributed Switch (VDS) | N-VDS: Open vSwitch (OVS) or VDS |
| Logical switch replication modes | Unicast, Multicast, Hybrid | Unicast (Two-Tier or Head) |
| Two-tier distributed routing | No | Yes |
| IP addressing scheme for network segments | Manual | Automatic (between Tier-0 and Tier-1) |
| Integration for traffic inspection | Yes | No |
| Kernel-level distributed firewall | Yes | Yes |
NSX-v is the optimal solution if you use a vSphere-only environment, while NSX-T can be used to build virtual networks not only for vSphere but also for KVM, Docker, Kubernetes, and OpenStack. There is no single answer as to which type of NSX is better; whether you should use NSX-v or NSX-T depends on your needs and the features provided by each NSX type.
The NSX licensing policy is user-friendly: you only need to buy one NSX license, regardless of the NSX type you are going to use. Later you can install NSX-T in an NSX-v environment, or vice versa, depending on your needs, and continue to use your single NSX license.
You can build your own software-defined datacenter with VMware using the NSX solution. VMware provides clustering features to ensure operational continuity, high availability, and fault tolerance, yet VM backup is still not a redundant measure.
Regularly back up your production VMs related to different projects, as well as VMs running as components of VMware vSphere and VMware NSX (such as vCenter, NSX Manager, NSX Controller, and NSX Edge), in order to protect your data. NAKIVO Backup & Replication can help you create VMware backups reliably and efficiently, even if you are using clusters.