# MetalLB

Purpose: For platform engineers. Shows how to configure MetalLB IP address pools, L2 vs. BGP mode, and Gateway API integration.
## What MetalLB Does
MetalLB provides a `LoadBalancer` Service implementation for bare-metal and VM-based Kubernetes clusters. Cloud providers ship built-in load balancers, but on-prem clusters need MetalLB to assign external IPs to Services of type `LoadBalancer`. MetalLB allocates IPs from configured address pools and announces them via ARP (Layer 2 mode) or BGP.
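Once MetalLB is running, no per-Service configuration is required beyond the type; a minimal sketch (names are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-web            # illustrative name
  namespace: default
spec:
  type: LoadBalancer        # MetalLB assigns the external IP
  selector:
    app: demo-web
  ports:
    - port: 80
      targetPort: 8080
```

The Service's `EXTERNAL-IP` column moves from `<pending>` to an address from the pool once MetalLB allocates one.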
## How It's Deployed
MetalLB is deployed via FluxCD from openCenter-gitops-base:
```
openCenter-gitops-base/applications/base/services/metallb/
├── namespace.yaml
├── source.yaml
├── helmrelease.yaml
└── helm-values/
    └── hardened-values.yaml
```
Customer overlay:
```
applications/overlays/<cluster>/services/metallb/
├── kustomization.yaml
└── override-values.yaml
```
## Key Configuration
### IP Address Pools
Define the IP ranges MetalLB can allocate. These must be routable IPs on your network that are not already assigned to any host (and should be excluded from any DHCP range):
```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.12.100-192.168.12.120
```
Multiple pools can be defined for different purposes (e.g., separate pools for platform services vs. customer applications).
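For instance, a second pool reserved for platform services can disable auto-assignment, so only Services that explicitly request it receive an IP from it (pool name and range are illustrative):

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: platform-pool       # illustrative name
  namespace: metallb-system
spec:
  addresses:
    - 192.168.12.121-192.168.12.130
  autoAssign: false         # never hand out these IPs unless a Service asks for this pool
```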
### L2 Advertisement (Default)
In L2 mode, MetalLB responds to ARP requests for allocated IPs. This works on any Layer 2 network without router configuration:
```yaml
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default
  namespace: metallb-system
spec:
  ipAddressPools:
    - default-pool
```
L2 mode has a limitation: all traffic for a given IP goes to a single node (the one that answered the ARP request), which then forwards traffic to the actual pods. This provides failover, but no true load balancing at the network level.
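A related knob on the Service side: with the default `externalTrafficPolicy: Cluster`, the announcing node SNATs traffic before forwarding it to pods on other nodes, so the client source IP is lost. A sketch of preserving it, at the cost of only serving from pods co-located with the announcing node (Service name is illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-web            # illustrative name
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local   # preserve client source IPs; no cross-node SNAT hop
  selector:
    app: demo-web
  ports:
    - port: 80
```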
### BGP Mode
For environments with BGP-capable routers, BGP mode distributes traffic across multiple nodes:
```yaml
apiVersion: metallb.io/v1beta1
kind: BGPPeer
metadata:
  name: tor-switch
  namespace: metallb-system
spec:
  myASN: 64513
  peerASN: 64512
  peerAddress: 192.168.1.1
---
apiVersion: metallb.io/v1beta1
kind: BGPAdvertisement
metadata:
  name: default
  namespace: metallb-system
spec:
  ipAddressPools:
    - default-pool
```
BGP mode requires coordination with your network team: the router must be configured to accept the peering sessions from each node and the routes MetalLB advertises.
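The router-side configuration depends on your platform; a hedged sketch for an FRR-based top-of-rack switch, using the ASNs from the example above and an assumed node subnet of `192.168.10.0/24` (peer-group name and subnet are illustrative — adapt to your environment):

```
router bgp 64512
 neighbor METALLB peer-group
 neighbor METALLB remote-as 64513
 bgp listen range 192.168.10.0/24 peer-group METALLB
 address-family ipv4 unicast
  neighbor METALLB activate
 exit-address-family
```

The `listen range` avoids configuring one neighbor statement per node, since every MetalLB speaker opens its own BGP session.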
## Gateway API Integration
Gateway resources (e.g., from Envoy Gateway) result in Services of type `LoadBalancer`, and MetalLB assigns IPs from the configured pool automatically. No special configuration is needed — any Service of type `LoadBalancer` gets an IP from MetalLB.
To reserve a specific IP for the Gateway:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: envoy-gateway
  annotations:
    metallb.universe.tf/loadBalancerIPs: "192.168.12.100"
spec:
  type: LoadBalancer
```
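With Envoy Gateway specifically, the Service is generated by the controller, so the annotation is typically set through an `EnvoyProxy` resource rather than edited on the Service directly. A sketch, assuming Envoy Gateway's `gateway.envoyproxy.io/v1alpha1` API (verify the field paths against your installed version; the resource name is illustrative):

```yaml
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
  name: custom-proxy-config       # illustrative name
  namespace: envoy-gateway-system
spec:
  provider:
    type: Kubernetes
    kubernetes:
      envoyService:
        annotations:
          metallb.universe.tf/loadBalancerIPs: "192.168.12.100"
```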
## Verification
```bash
# Check MetalLB pods
kubectl get pods -n metallb-system

# List IP address pools
kubectl get ipaddresspools -n metallb-system

# Check allocated IPs
kubectl get svc -A --field-selector spec.type=LoadBalancer

# Verify L2/BGP advertisements
kubectl get l2advertisements,bgpadvertisements -n metallb-system

# Test connectivity to a LoadBalancer IP
curl http://192.168.12.100
```
## Common Customizations
- Multiple pools: Create separate `IPAddressPool` resources for different teams or service tiers.
- Pool selection: Use the `metallb.universe.tf/address-pool` annotation on a Service to request a specific pool.
- Shared IPs: Multiple Services can share a single IP when they use different ports, enabled via the `metallb.universe.tf/allow-shared-ip` annotation.
- Node selectors: Restrict which nodes participate in L2/BGP announcements using `nodeSelectors` in the advertisement resource.
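For example, limiting L2 announcements to dedicated edge nodes (the node label is an assumption — use whatever label your edge nodes carry):

```yaml
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: edge-only               # illustrative name
  namespace: metallb-system
spec:
  ipAddressPools:
    - default-pool
  nodeSelectors:
    - matchLabels:
        node-role.kubernetes.io/edge: ""   # assumed label on edge nodes
```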