VMware to openCenter Migration
Purpose: For operators; a step-by-step guide to migrating workloads from VMware-based infrastructure to openCenter-managed clusters.
Scope
This guide covers migrating workloads running on VMware-based Kubernetes clusters (Tanzu, Rancher on vSphere, or manually provisioned) to openCenter-managed clusters. The target cluster can run on any provider openCenter supports — VMware vSphere, OpenStack, or cloud.
Prerequisites
- Inventory of workloads on the source VMware cluster
- kubectl access to both source and target clusters
- Velero installed on the source cluster (or ability to install it)
- An openCenter-managed target cluster provisioned and healthy
Step 1: Audit the Source Cluster
Document what is running on the VMware cluster:
# Workload inventory
kubectl get deployments,statefulsets,daemonsets -A -o wide
# Persistent storage
kubectl get pvc -A -o custom-columns=\
NAMESPACE:.metadata.namespace,\
NAME:.metadata.name,\
SIZE:.spec.resources.requests.storage,\
STORAGECLASS:.spec.storageClassName
# Ingress and routing
kubectl get ingress,httproute -A
# Custom Resource Definitions (may need manual migration)
kubectl get crd -o name
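Tanzu-installed CRDs will not exist on the target, so it is worth exporting their definitions now (they are needed again in the troubleshooting section). A minimal sketch, assuming the arbitrary local directory name crd-export:

```shell
# Export every CRD definition on the source to a local directory so it can be
# re-applied on the target before restoring workloads that depend on it.
mkdir -p crd-export
for crd in $(kubectl get crd -o name); do
  # Strip the "customresourcedefinition.apiextensions.k8s.io/" prefix for the filename
  kubectl get "$crd" -o yaml > "crd-export/${crd#customresourcedefinition.apiextensions.k8s.io/}.yaml"
done
```

Review the exported files and apply only the CRDs your workloads actually use; re-applying Tanzu supervisor CRDs on the target is unnecessary.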
Identify VMware-specific dependencies that need replacement on the target:
| VMware Component | openCenter Equivalent |
|---|---|
| vSphere CSI | vSphere CSI (if target is vSphere) or Longhorn/OpenStack CSI |
| NSX-T networking | Calico CNI + MetalLB |
| Tanzu packages | openCenter-gitops-base services |
| Harbor (Tanzu-managed) | Harbor (openCenter-managed via FluxCD) |
| vSphere with Tanzu supervisor | Not needed — openCenter manages clusters directly |
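To spot which of these dependencies are actually in play, check the source cluster's StorageClass provisioners and installed CSI drivers; the provisioner csi.vsphere.vmware.com indicates volumes that need a different driver if the target is not vSphere:

```shell
# StorageClasses and their provisioners on the source cluster
kubectl get storageclass -o custom-columns=NAME:.metadata.name,PROVISIONER:.provisioner

# CSI drivers installed on the source
kubectl get csidrivers
```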
Step 2: Provision the Target Cluster
opencenter cluster init <target-cluster> --org <org-id> --type <provider>
opencenter cluster edit <target-cluster>
opencenter cluster setup <target-cluster>
# Provision and bootstrap
cd infrastructure/clusters/<target-cluster>/
terraform apply
cd inventory/
ansible-playbook -i inventory.yaml -b --become-user=root cluster.yml
opencenter cluster bootstrap <target-cluster>
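Before migrating anything, confirm the target is healthy. A quick sanity check, assuming your kubeconfig context is named after the target cluster:

```shell
# All nodes should be Ready and all system pods Running
kubectl --context <target-cluster> get nodes -o wide
kubectl --context <target-cluster> get pods -n kube-system
```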
Step 3: Install Velero on the Source Cluster
If Velero is not already installed on the VMware cluster, install it with a backup storage location that the target cluster will also be able to reach:
velero install \
--provider aws \
--plugins velero/velero-plugin-for-aws:v1.10.0 \
--bucket opencenter-migration \
--secret-file ./credentials-velero \
--backup-location-config region=us-east-1,s3ForcePathStyle=true,s3Url=https://s3.example.com \
--use-node-agent # Enables file-level backup for cross-provider volume migration
The --use-node-agent flag is critical for cross-provider migrations. It copies volume data at the file level rather than relying on provider-specific snapshots.
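After installation, confirm the backup location is reachable and the node agent is running before creating any backups. A sketch, assuming the default velero namespace (the node-agent pod label may vary by Velero version):

```shell
# The backup storage location must report "Available"
velero backup-location get

# File-system backup requires the node-agent DaemonSet pods to be Running
kubectl -n velero get pods -l name=node-agent
```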
Step 4: Back Up Workloads from VMware
# Back up application namespaces (exclude VMware-specific system namespaces)
velero backup create vmware-migration \
--include-namespaces=production,staging \
--default-volumes-to-fs-backup=true \
--wait
# Verify the backup completed
velero backup describe vmware-migration --details
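For scripted checks, the backup is also visible as a custom resource in the velero namespace; its phase should read Completed (PartiallyFailed or Failed means some items or volumes were skipped):

```shell
# Machine-readable backup status
kubectl -n velero get backup vmware-migration -o jsonpath='{.status.phase}'
```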
Step 5: Restore on the Target Cluster
# Ensure the target cluster's Velero can see the same backup storage
velero backup get # Should list vmware-migration
# Restore workloads
velero restore create --from-backup vmware-migration --wait
# Check restore status
velero restore describe <restore-name> --details
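If namespace names differ between the clusters, Velero can remap them during restore. A hypothetical example, assuming the source namespace production should land in a target namespace named prod:

```shell
# Remap the source namespace "production" to "prod" on the target
velero restore create --from-backup vmware-migration \
  --namespace-mappings production:prod \
  --wait
```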
Step 6: Update Storage Classes and Ingress
After restore, workloads may reference VMware-specific StorageClasses or ingress controllers. Update them:
# Patch PVCs if the StorageClass name differs
kubectl get pvc -A -o jsonpath='{range .items[*]}{.metadata.namespace}/{.metadata.name}: {.spec.storageClassName}{"\n"}{end}'
# Update ingress resources to use Gateway API (openCenter standard)
# Replace Ingress objects with HTTPRoute resources
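A hypothetical translation of a basic Ingress into a Gateway API HTTPRoute; the gateway name default-gateway, hostname, namespace, and backend service details are placeholders to adapt to your environment:

```shell
kubectl apply -f - <<'EOF'
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: app
  namespace: production
spec:
  parentRefs:
    - name: default-gateway   # the Gateway provisioned on the target
  hostnames:
    - app.example.com
  rules:
    - backendRefs:
        - name: app           # same Service the old Ingress pointed at
          port: 8080
EOF
```

Once the HTTPRoute serves traffic correctly, delete the restored Ingress objects so there is a single source of routing truth.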
Step 7: Switch Traffic
# Update DNS to point to the target cluster's load balancer
# Verify application endpoints respond
curl -I https://app.example.com
# Monitor for errors in the first 24-48 hours
kubectl get events -A --sort-by='.lastTimestamp' | head -50
Verification
# All pods running on target
kubectl get pods -A | grep -v Running | grep -v Completed
# Persistent data intact
kubectl exec -it <db-pod> -- pg_isready # Example for PostgreSQL
# FluxCD managing the target cluster
flux get kustomizations -A
# Velero backups running on target
velero schedule get
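If no schedules exist yet on the target, create one so the migrated workloads are protected going forward. A hypothetical example with a daily 02:00 cadence:

```shell
# Daily backup of the migrated namespaces on the target cluster
velero schedule create daily-backup \
  --schedule="0 2 * * *" \
  --include-namespaces=production,staging \
  --default-volumes-to-fs-backup=true
```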
Troubleshooting
- Volume restore fails — Ensure --use-node-agent was used during backup. Provider-specific snapshots (vSphere VMDK) cannot be restored on a different provider.
- StorageClass not found — Create a matching StorageClass on the target, or use Velero's ConfigMap for storage class mapping.
- CRDs from Tanzu missing — Export CRDs from the source and apply them to the target before restoring workloads that depend on them.
- Network policies blocking traffic — Kyverno policies on the target may be stricter than the source. Check kubectl get policyreport -A for violations.
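The storage class mapping mentioned above uses Velero's change-storage-class restore item action, which reads a labeled ConfigMap in the velero namespace; keys are source StorageClass names and values are target names. A sketch with hypothetical class names:

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: change-storage-class-config
  namespace: velero
  labels:
    velero.io/plugin-config: ""
    velero.io/change-storage-class: RestoreItemAction
data:
  # <source StorageClass>: <target StorageClass> — names are placeholders
  vsphere-default: longhorn
EOF
```

The mapping is applied automatically on the next restore; no restart of Velero is required.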