Infrastructure Provisioning

Purpose: Shows platform engineers how to use opencenter cluster setup to generate and apply OpenTofu/Terraform configuration for VM provisioning.

Prerequisites

  • opencenter CLI installed
  • Cluster configuration created (opencenter cluster init and opencenter cluster edit completed)
  • Infrastructure provider credentials configured (OpenStack, VMware, or AWS)
  • Terraform or OpenTofu CLI installed (v1.5+)

What Setup Generates

opencenter cluster setup reads the cluster configuration and generates a complete customer GitOps repository, including Terraform/OpenTofu files for infrastructure provisioning:

infrastructure/clusters/<cluster-name>/
├── main.tf # VM definitions, networking, load balancers
├── provider.tf # Provider configuration (OpenStack/vSphere/AWS)
├── variables.tf # Input variables
├── outputs.tf # Cluster outputs (IPs, kubeconfig path)
├── inventory/ # Kubespray inventory (generated from Terraform output)
│ ├── inventory.yaml
│ └── group_vars/
└── kubeconfig.yaml # Created after cluster deployment

Steps

1. Generate the repository structure

opencenter cluster setup <cluster-name>

This creates the full directory tree under customers/<org>/.

2. Review the generated Terraform

cd customers/<org>/infrastructure/clusters/<cluster-name>/
cat main.tf

Verify node counts, instance flavors, network configuration, and storage volumes match your requirements.

3. Configure provider credentials

For OpenStack:

source openrc.sh  # OpenStack credentials
# Or set environment variables:
export OS_AUTH_URL=https://keystone.example.com:5000/v3
export OS_PROJECT_NAME=my-project
export OS_USERNAME=deployer
export OS_PASSWORD=<password>

For VMware vSphere, credentials can be set in provider.tf, but injecting them via environment variables is preferred so secrets stay out of the repository.
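As a sketch, the hashicorp/vsphere provider reads the following environment variables; the values here are placeholders:

```shell
export VSPHERE_SERVER=vcenter.example.com
export VSPHERE_USER='deployer@vsphere.local'
export VSPHERE_PASSWORD='<password>'
# For lab environments with self-signed certificates only:
# export VSPHERE_ALLOW_UNVERIFIED_SSL=true
```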

4. Initialize and apply

terraform init
terraform plan -out=tfplan

Review the plan output. Confirm the expected number of VMs, networks, and security groups.
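You can also count planned resource creations mechanically from the plan JSON. The sample file below stands in for real plan output so the jq filter can be tried offline; the resource addresses are illustrative, not from your generated configuration:

```shell
# Sample of the JSON that `terraform show -json tfplan` emits (abridged)
cat > sample-plan.json <<'EOF'
{"resource_changes":[
  {"address":"openstack_compute_instance_v2.node[0]","change":{"actions":["create"]}},
  {"address":"openstack_compute_instance_v2.node[1]","change":{"actions":["create"]}},
  {"address":"openstack_networking_network_v2.net","change":{"actions":["no-op"]}}
]}
EOF
# Count resources the plan would create
jq '[.resource_changes[] | select(.change.actions | index("create"))] | length' sample-plan.json
```

In a real run, pipe the plan directly: terraform show -json tfplan | jq '...' with the same filter.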

terraform apply tfplan

Provisioning time depends on the provider; a 6-node cluster typically takes 5-15 minutes.

5. Verify infrastructure

terraform output

Expected outputs include control plane IPs, worker IPs, bastion IP, and the generated Kubespray inventory path.

Confirm SSH access to each node:

for ip in $(terraform output -json node_ips | jq -r '.[]'); do
  ssh deployer@"${ip}" "hostname && echo OK"
done

6. Generate Kubespray inventory

Terraform outputs feed into the Kubespray inventory. The setup command pre-generates inventory/inventory.yaml with placeholder IPs. After terraform apply, update the inventory with actual IPs:

# If using Terraform provisioner (automatic)
# The inventory is updated by the Terraform apply step

# If manual, update inventory.yaml with Terraform outputs
terraform output -json > tf-outputs.json
# Edit inventory/inventory.yaml with the IPs from tf-outputs.json
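A minimal sketch of pulling node IPs out of tf-outputs.json with jq. The heredoc stands in for real terraform output, and the output names (control_plane_ips, worker_ips) are assumptions; match them to whatever terraform output shows in your repository:

```shell
# Sample standing in for `terraform output -json > tf-outputs.json`
cat > tf-outputs.json <<'EOF'
{
  "control_plane_ips": {"value": ["10.0.0.10", "10.0.0.11", "10.0.0.12"]},
  "worker_ips":        {"value": ["10.0.0.20", "10.0.0.21", "10.0.0.22"]}
}
EOF
# IPs for the Kubespray kube_control_plane group
jq -r '.control_plane_ips.value[]' tf-outputs.json
# IPs for the kube_node group
jq -r '.worker_ips.value[]' tf-outputs.json
```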

Air-Gap Considerations

In Zone C (disconnected), Terraform providers must be available locally. The terraform-providers Zarf component installs a filesystem mirror on the bastion:

# Terraform uses the local mirror (configured by Zarf deploy)
cat /root/.terraformrc

Expected .terraformrc:

plugin_cache_dir   = "/opt/opencenter/terraform-cache"
disable_checkpoint = true

provider_installation {
  filesystem_mirror {
    path    = "/opt/opencenter/terraform-mirror"
    include = ["*/*"]
  }
  direct {
    exclude = ["*/*"]
  }
}
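Before running terraform init in Zone C, it is worth confirming the mirror directory is actually populated. A sketch, assuming the mirror path from the .terraformrc shown here:

```shell
# Check that the filesystem mirror exists and contains at least one entry
MIRROR=/opt/opencenter/terraform-mirror
if [ -d "$MIRROR" ] && find "$MIRROR" -mindepth 1 | grep -q .; then
  STATUS="mirror populated"
else
  STATUS="mirror missing or empty: redeploy the terraform-providers Zarf component"
fi
echo "$STATUS"
```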

Verification

# All VMs running
terraform state list | grep -c "instance"

# SSH connectivity
ssh deployer@<control-plane-ip> "uptime"

# Inventory populated
cat inventory/inventory.yaml

Troubleshooting

| Symptom | Likely cause | Fix |
| --- | --- | --- |
| terraform init fails with provider error | Provider not in mirror (air-gap) | Verify the terraform-providers Zarf component was deployed |
| VM creation times out | Provider quota or API issue | Check the provider dashboard; increase quotas if needed |
| SSH connection refused | Security group missing SSH rule | Review main.tf security group definitions |
| Inventory has placeholder IPs | Terraform output not applied | Re-run inventory generation from Terraform outputs |