kube-vip
Purpose: For platform engineers; shows how to configure kube-vip for control plane HA in ARP and BGP modes.
What kube-vip Does
kube-vip provides a virtual IP (VIP) address for the Kubernetes control plane. In a multi-master cluster, the API server runs on each control plane node, but clients need a single stable endpoint. kube-vip elects a leader among control plane nodes and assigns the VIP to that leader. If the leader fails, another node takes over the VIP, maintaining API server availability.
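In practice, every client points at the VIP rather than any individual node. A minimal kubeconfig excerpt, using the example VIP from this page (the cluster name is a placeholder):
# kubeconfig excerpt: clients target the VIP, not a specific control plane node
clusters:
- cluster:
    server: https://192.168.12.10:6443
  name: opencenter   # cluster name is a placeholder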
How It's Deployed
kube-vip runs as a static pod on each control plane node. It is configured during cluster provisioning by Kubespray, not via FluxCD. The openCenter-cli generates the necessary Kubespray variables:
# Set by openCenter-cli in Kubespray inventory
kube_vip_enabled: true
kube_vip_controlplane_enabled: true
kube_vip_address: 192.168.12.10 # The floating VIP address
loadbalancer_apiserver:
  address: 192.168.12.10
  port: 6443
The VIP address must be a free IP on the same subnet as the control plane nodes, not assigned to any host.
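A quick pre-flight check can confirm the address is actually unclaimed before provisioning; this is a sketch, and the interface name is an assumption for your environment:
# Before provisioning, the VIP should not answer from any host
ping -c 3 192.168.12.10             # expect 100% packet loss
arping -c 3 -I eth0 192.168.12.10   # expect no reply; eth0 is an assumed interface name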
Key Configuration
ARP Mode (Default)
In ARP mode, kube-vip uses gratuitous ARP to announce the VIP on the local network. This is the default and works in most L2 environments (same broadcast domain):
kube_vip_arp_enabled: true
ARP mode requires all control plane nodes to be on the same Layer 2 network segment. The leader node responds to ARP requests for the VIP.
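To observe the gratuitous ARP announcements, you can sniff ARP traffic from another host on the same segment; a sketch, again assuming eth0 as the interface name:
# Watch gratuitous ARP announcements for the VIP (run on any host in the broadcast domain)
sudo tcpdump -n -i eth0 arp and host 192.168.12.10
# After a failover, the VIP's entry should resolve to the new leader's MAC
ip neigh show 192.168.12.10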
BGP Mode
For environments where control plane nodes span multiple L2 segments, use BGP mode:
kube_vip_arp_enabled: false
kube_vip_bgp_enabled: true
kube_vip_bgp_routerid: "" # Auto-detected from node IP
kube_vip_bgp_as: 64513
kube_vip_bgp_peeraddress: 192.168.1.1
kube_vip_bgp_peeras: 64512
BGP mode advertises the VIP as a /32 route to the upstream router. This requires coordination with your network team to accept the BGP peering.
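Once peering is up, the session and the advertised /32 can be verified on the router side. As a sketch, assuming the upstream router runs FRR (commands differ on other platforms):
# On the upstream router (FRR assumed), confirm the peering and the advertised /32
vtysh -c 'show bgp summary'
vtysh -c 'show ip route 192.168.12.10/32'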
Static Pod Manifest
kube-vip runs as a static pod. The manifest is placed at /etc/kubernetes/manifests/kube-vip.yaml on each control plane node by Kubespray. The pod uses hostNetwork: true to manage the VIP on the node's network interface.
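The exact manifest varies by Kubespray and kube-vip version; an abbreviated sketch of the relevant fields looks roughly like this (the image tag is an assumption):
# /etc/kubernetes/manifests/kube-vip.yaml (abbreviated; generated by Kubespray)
apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  hostNetwork: true   # required to manage the VIP on the node's interface
  containers:
  - name: kube-vip
    image: ghcr.io/kube-vip/kube-vip:v0.8.0   # tag is an assumption
    args: ["manager"]
    securityContext:
      capabilities:
        add: ["NET_ADMIN", "NET_RAW"]   # needed for ARP and interface management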
Verification
# Check kube-vip pods on control plane nodes
kubectl get pods -n kube-system -l app.kubernetes.io/name=kube-vip
# Verify the VIP is reachable
ping 192.168.12.10
# Confirm API server responds on the VIP
curl -k https://192.168.12.10:6443/healthz
# Check which node currently holds the VIP
kubectl get leases -n kube-system | grep kube-vip
# Verify from kubeconfig
grep server ~/.kube/config # Should point to the VIP address
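To see which node holds the VIP directly, check the control plane nodes' interfaces; a sketch, assuming SSH access to the nodes:
# On each control plane node: the current leader has the VIP bound to an interface
ip addr show | grep 192.168.12.10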
Failover Testing
To test HA failover:
- Identify the current leader node (the node holding the VIP).
- Drain or power off that node.
- Verify the VIP migrates to another control plane node within a few seconds.
- Confirm kubectl commands still work against the VIP.
Failover time depends on the leader election interval (default: ~5 seconds for ARP mode).
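A simple way to measure the failover window is to poll the API server's health endpoint through the VIP while you take the leader down; this loop is a sketch using the example VIP:
# Poll the API server through the VIP once per second during the failover test
while true; do
  status=$(curl -sk --max-time 2 https://192.168.12.10:6443/healthz || echo unreachable)
  echo "$(date +%T) ${status}"
  sleep 1
done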
Common Customizations
- VIP interface: If the node has multiple NICs, specify the interface with kube_vip_interface (see the sketch after this list).
- Leader election timeout: Adjust election timing for faster or more conservative failover.
- Load balancing: kube-vip can also provide LoadBalancer services (similar to MetalLB), but openCenter uses MetalLB for that purpose to keep concerns separated.
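As a sketch of the first customization, pinning the VIP to a specific NIC in the Kubespray inventory (the interface name is an assumption):
# Set in the Kubespray inventory; ens192 is an assumed interface name
kube_vip_interface: ens192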