Reference Architecture: Virtual Networking

Purpose: Provides platform engineers with virtual switch configuration, distributed port group, and overlay network specifications.

Overview

Virtual networking bridges the physical network (see Physical Network) to the Kubernetes VMs. This layer handles VLAN trunking, traffic shaping, and overlay networks. Configuration differs between vSphere (vDS/NSX) and OpenStack (Neutron/OVS/OVN).

vSphere Virtual Networking

vSphere Distributed Switch (vDS)

Use a vSphere Distributed Switch for consistent network policy across all ESXi hosts. Standard vSwitches are acceptable for single-host labs but not for production.

| Setting | Value |
| --- | --- |
| vDS Version | 7.0 or 8.0 (match vCenter version) |
| MTU | 9000 (for vMotion/storage port groups) |
| Discovery Protocol | LLDP (both listen and advertise) |
| Network I/O Control (NIOC) | Enabled |
| Health Check | Enabled (VLAN and MTU checks) |
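As a sketch, these switch settings can be applied with govc (the vSphere CLI); the switch name and datacenter path are placeholders, and flag names should be verified against your govc release:

```shell
# Create a distributed switch with jumbo MTU, LLDP discovery, and two uplinks.
# "dvs-prod" and "/dc1" are hypothetical names.
govc dvs.create -dc /dc1 -mtu 9000 -discovery-protocol lldp -num-uplinks 2 dvs-prod
```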

Port Groups

| Port Group | VLAN | Teaming Policy | Active Uplinks | MTU | Traffic Type |
| --- | --- | --- | --- | --- | --- |
| Management | 10 | Active/Standby | uplink1 (active), uplink2 (standby) | 1500 | ESXi mgmt, vCenter |
| vMotion | 20 | Active/Standby | uplink2 (active), uplink1 (standby) | 9000 | Live migration |
| Storage | 30 | Route based on IP hash | uplink1, uplink2 | 9000 | vSAN, NFS, iSCSI |
| VM Network | 40 | Route based on originating port | uplink1, uplink2 | 1500 | Kubernetes node IPs |

Stagger active uplinks across port groups to distribute traffic across both physical NICs. Management uses uplink1 active; vMotion uses uplink2 active.
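As a sketch, the port groups themselves can be created with govc; VLAN IDs follow the table above, while the per-group teaming and failover order is typically set afterwards in the vSphere Client or PowerCLI, since govc's port-group options are limited:

```shell
# Create VLAN-tagged distributed port groups ("dvs-prod" is a placeholder name).
govc dvs.portgroup.add -dvs dvs-prod -type earlyBinding -vlan 10 Management
govc dvs.portgroup.add -dvs dvs-prod -type earlyBinding -vlan 20 vMotion
govc dvs.portgroup.add -dvs dvs-prod -type earlyBinding -vlan 30 Storage
govc dvs.portgroup.add -dvs dvs-prod -type earlyBinding -vlan 40 "VM Network"
```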

Network I/O Control (NIOC) Shares

NIOC prevents a single traffic type from saturating the physical NICs.

| Traffic Type | Shares | Reservation |
| --- | --- | --- |
| Management | High | 1 Gbps |
| vMotion | High | 5 Gbps |
| vSAN | High | 10 Gbps |
| VM Traffic | Normal | None |
| NFS | Normal | None |

Security Policies (per Port Group)

| Policy | Management | vMotion | Storage | VM Network |
| --- | --- | --- | --- | --- |
| Promiscuous Mode | Reject | Reject | Reject | Reject |
| MAC Address Changes | Reject | Reject | Reject | Accept |
| Forged Transmits | Reject | Reject | Reject | Accept |

Set MAC Address Changes and Forged Transmits to "Accept" on the VM Network port group. Kubernetes CNI plugins (Calico, Cilium) require this for pod networking when using non-overlay modes.

VMware NSX (Optional)

NSX provides micro-segmentation, distributed firewalling, and overlay networking at the hypervisor level. Use NSX when:

  • Multi-tenant isolation is required between Kubernetes clusters on the same hosts.
  • Distributed firewall rules must be enforced below the VM OS level.
  • The organization already operates NSX for other workloads.

NSX is not required for standard openCenter deployments. Calico or Cilium inside Kubernetes handles pod-level network policy.

| NSX Component | Purpose |
| --- | --- |
| NSX Manager | Central management (3-node cluster for HA) |
| Transport Zones | Overlay (GENEVE) and VLAN zones |
| Segments | Replace port groups for VM connectivity |
| Distributed Firewall | Micro-segmentation rules |
| Tier-0/Tier-1 Gateways | North-south routing |

OpenStack Virtual Networking (Neutron)

Network Backend

| Backend | Use Case | Performance |
| --- | --- | --- |
| Open vSwitch (OVS) | Standard deployments | Good; mature and well-tested |
| OVN (Open Virtual Network) | New deployments, scale | Better; distributed control plane |
| SR-IOV | High-performance workloads | Best; hardware offload, bypasses hypervisor |

OVN is the recommended backend for new OpenStack deployments. It eliminates the central Neutron L3/DHCP agents and distributes routing to each compute node.
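A minimal ML2 fragment for an OVN deployment might look like the following; the OVN database endpoints are placeholders, and note that OVN tenant networks use GENEVE rather than VXLAN encapsulation:

```ini
# /etc/neutron/plugins/ml2/ml2_conf.ini (sketch; values are illustrative)
[ml2]
mechanism_drivers = ovn
type_drivers = flat,vlan,geneve
tenant_network_types = geneve

[ml2_type_geneve]
vni_ranges = 1:65536
max_header_size = 38

[ovn]
ovn_nb_connection = tcp:192.0.2.10:6641
ovn_sb_connection = tcp:192.0.2.10:6642
```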

Network Topology

| Network | Type | CIDR (example) | Purpose |
| --- | --- | --- | --- |
| provider-mgmt | Flat / VLAN 10 | 10.0.10.0/24 | OpenStack API, node management |
| provider-external | Flat / VLAN 50 | 203.0.113.0/24 | Floating IPs, external access |
| k8s-internal | VXLAN (tenant) | 10.100.0.0/16 | Kubernetes node network |
| k8s-pod | VXLAN (tenant) | 10.200.0.0/16 | Pod overlay (Calico manages) |
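These networks can be created with the OpenStack CLI; the physical network label ("datacentre") and the subnet names are placeholders for your environment:

```shell
# Provider networks map to VLANs on the physical fabric.
openstack network create --provider-network-type vlan \
  --provider-physical-network datacentre --provider-segment 10 provider-mgmt
openstack subnet create --network provider-mgmt --subnet-range 10.0.10.0/24 provider-mgmt-subnet

# Tenant networks use the backend's default encapsulation (VXLAN or GENEVE).
openstack network create k8s-internal
openstack subnet create --network k8s-internal --subnet-range 10.100.0.0/16 k8s-internal-subnet
```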

Neutron Configuration

| Setting | Value | Reason |
| --- | --- | --- |
| MTU (tenant networks) | 1450 (VXLAN) or 1442 (GENEVE) | 1500 minus encapsulation overhead (50 bytes for VXLAN, 58 for GENEVE) |
| MTU (provider networks) | 1500 or 9000 | Match physical network |
| Security Groups | Enabled | Required for node-level firewall rules |
| Port Security | Enabled (default) | Disable per-port for Calico BGP if needed |
| Allowed Address Pairs | Configure for pod CIDRs | Required for Calico non-overlay mode |
| DNS | Neutron DNS integration | Resolves instance hostnames |
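The allowed-address-pairs and port-security settings above translate to per-port Neutron configuration; for example (the port ID is a placeholder):

```shell
# Permit the pod CIDR as a source on a node's Neutron port (Calico non-overlay mode).
openstack port set --allowed-address ip-address=10.200.0.0/16 <node-port-id>

# If Calico BGP peering requires it, disable port security on that port instead.
openstack port set --no-security-group --disable-port-security <node-port-id>
```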

Floating IPs

Allocate floating IPs for:

| Purpose | Count | Notes |
| --- | --- | --- |
| Kubernetes API (control plane VIP) | 1 | Accessed by kubectl, CI/CD |
| Ingress controller / MetalLB | 2–5 | External service exposure |
| Bastion host | 1 | SSH jump host |
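As a sketch, a floating IP is allocated from the external provider network and bound to a Neutron port; the port ID and the address are hypothetical:

```shell
# Allocate a floating IP from the external network and attach it to the API VIP port.
openstack floating ip create provider-external
openstack floating ip set --port <api-vip-port-id> 203.0.113.10
```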

Kubernetes Overlay Network (Calico)

Calico runs inside the Kubernetes cluster and manages pod-to-pod networking. The virtual network layer must allow Calico traffic to pass.

Calico Modes

| Mode | Encapsulation | Performance | Requirements |
| --- | --- | --- | --- |
| VXLAN (default) | VXLAN | Good | No special network config |
| IP-in-IP | IPIP | Good | IP protocol 4 allowed |
| BGP (no overlay) | None | Best | BGP peering with ToR switches or Neutron |

VXLAN is the default and works on all platforms without network changes. BGP mode provides better performance but requires coordination with the physical or virtual network team.
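The encapsulation mode is set on Calico's IPPool resource; a sketch using the CRD API, where the CIDR matches the k8s-pod network above (field names are from the Calico v3 IPPool schema, so verify against your Calico version):

```shell
# Configure the default pool for VXLAN encapsulation.
kubectl apply -f - <<'EOF'
apiVersion: crd.projectcalico.org/v1
kind: IPPool
metadata:
  name: default-ipv4-ippool
spec:
  cidr: 10.200.0.0/16
  vxlanMode: Always
  ipipMode: Never
  natOutgoing: true
EOF
```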

Firewall Rules for Calico

If firewalls exist between Kubernetes nodes, allow:

| Protocol | Port | Purpose |
| --- | --- | --- |
| UDP | 4789 | VXLAN encapsulation |
| TCP | 179 | BGP (if using BGP mode) |
| IP protocol 4 | — | IPIP (if using IP-in-IP mode) |
| TCP | 5473 | Calico Typha |
| TCP | 9091 | Calico Felix metrics |
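On OpenStack, these rules map to Neutron security group rules between nodes; a sketch assuming a hypothetical k8s-nodes security group applied to all node ports:

```shell
# Allow Calico traffic between members of the k8s-nodes group.
openstack security group rule create --protocol udp --dst-port 4789 --remote-group k8s-nodes k8s-nodes  # VXLAN
openstack security group rule create --protocol tcp --dst-port 179  --remote-group k8s-nodes k8s-nodes  # BGP
openstack security group rule create --protocol 4                   --remote-group k8s-nodes k8s-nodes  # IPIP
openstack security group rule create --protocol tcp --dst-port 5473 --remote-group k8s-nodes k8s-nodes  # Typha
openstack security group rule create --protocol tcp --dst-port 9091 --remote-group k8s-nodes k8s-nodes  # Felix metrics
```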

Considerations

  • MTU consistency: Verify MTU end-to-end from pod → VM NIC → virtual switch → physical switch → destination. A single MTU mismatch causes silent packet drops or fragmentation. Use ping -M do -s 8972 <destination> to test the jumbo-frame path (8972-byte payload + 28 bytes of IP/ICMP headers = 9000).
  • VLAN pruning: Only trunk the VLANs each port group needs. Do not trunk all VLANs to all hosts — this increases the broadcast domain and attack surface.
  • Monitoring: Enable NetFlow/sFlow on the vDS or OVS to capture traffic statistics. Feed into the Grafana stack for network visibility.
  • IPv6: openCenter clusters use IPv4. If the underlying virtual network is dual-stack, ensure IPv4 connectivity is not affected by IPv6 router advertisements.
  • NSX vs. Calico: NSX operates at the hypervisor level (below the VM). Calico operates inside the Kubernetes cluster (above the VM). They are complementary, not competing. Use NSX for VM-level isolation and Calico for pod-level policy.
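The payload sizes for such path-MTU tests follow from simple arithmetic: payload = MTU minus 28 bytes (20-byte IP header plus 8-byte ICMP header). A small sketch that prints the don't-fragment ping invocation for the MTUs used in this document (the destination is a placeholder):

```shell
# Print don't-fragment ping commands for common MTU values.
for mtu in 1500 1450 9000; do
  payload=$((mtu - 28))   # 20-byte IP header + 8-byte ICMP header
  echo "MTU $mtu -> ping -M do -s $payload <dest>"
done
```

If the 8972-byte probe fails while smaller payloads succeed, some hop in the path is not passing jumbo frames.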