# Add VM Disks

**Purpose:** Shows operators how to attach additional storage volumes to cluster nodes, with provider-specific procedures for OpenStack and VMware.
## When to Add Disks
Common reasons to attach additional disks to cluster nodes:
- Longhorn storage pool needs more capacity
- etcd data directory is running low on space (control plane nodes)
- A workload requires dedicated local storage (e.g., database nodes)
- Container runtime storage (`/var/lib/containerd`) is filling up
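Before attaching a disk, it helps to confirm which filesystem is actually under pressure. A generic check (the containerd path is only present on nodes that run containerd):

```shell
# Show usage for every mounted filesystem; look for high Use% values
df -h
# Size of the containerd state directory, if it exists on this node
du -sh /var/lib/containerd 2>/dev/null || true
```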
## OpenStack: Attach a Cinder Volume
### Step 1: Create and Attach the Volume via Terraform
```hcl
# infrastructure/clusters/<cluster>/main.tf
resource "openstack_blockstorage_volume_v3" "data_disk" {
  name        = "worker-01-data"
  size        = 500 # GB
  volume_type = "ssd"
}

resource "openstack_compute_volume_attach_v2" "worker_01_data" {
  instance_id = openstack_compute_instance_v2.workers[0].id
  volume_id   = openstack_blockstorage_volume_v3.data_disk.id
  device      = "/dev/vdb"
}
```

Apply the change:

```bash
cd infrastructure/clusters/<cluster>/
terraform plan
terraform apply
```
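If every worker should get the same data disk, the two resources above can be generalized with `for_each`. This is a sketch, assuming the workers are created with `count` and using a hypothetical `data_disk_workers` variable:

```hcl
# Hypothetical variable listing the worker indices that get a data disk
variable "data_disk_workers" {
  type    = set(string)
  default = ["0", "1", "2"]
}

resource "openstack_blockstorage_volume_v3" "worker_data" {
  for_each    = var.data_disk_workers
  name        = "worker-${each.key}-data"
  size        = 500
  volume_type = "ssd"
}

resource "openstack_compute_volume_attach_v2" "worker_data" {
  for_each    = var.data_disk_workers
  instance_id = openstack_compute_instance_v2.workers[tonumber(each.key)].id
  volume_id   = openstack_blockstorage_volume_v3.worker_data[each.key].id
  device      = "/dev/vdb"
}
```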
### Step 2: Format and Mount on the Node
SSH to the node and prepare the disk:
```bash
# Verify the disk is visible
lsblk
# Create a filesystem (double-check the device name first; mkfs destroys existing data)
sudo mkfs.ext4 /dev/vdb
# Create mount point and mount
sudo mkdir -p /mnt/data
sudo mount /dev/vdb /mnt/data
# Add to fstab for persistence across reboots
echo '/dev/vdb /mnt/data ext4 defaults 0 2' | sudo tee -a /etc/fstab
```
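Device names such as `/dev/vdb` are not guaranteed to stay stable once more disks are attached; mounting by filesystem UUID avoids that. A sketch (the fallback value is a placeholder for machines where the disk is absent, and the `tee` step is shown as a comment rather than executed):

```shell
# Look up the filesystem UUID of the new disk; fall back to a placeholder
UUID=$(blkid -s UUID -o value /dev/vdb 2>/dev/null || echo "PLACEHOLDER-UUID")
# Build an fstab entry that mounts by UUID and tolerates a missing disk (nofail)
FSTAB_LINE="UUID=${UUID} /mnt/data ext4 defaults,nofail 0 2"
echo "${FSTAB_LINE}"
# On the node, persist it with:
#   echo "${FSTAB_LINE}" | sudo tee -a /etc/fstab
```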
### Step 3: Configure Longhorn (if used for the storage pool)
If the disk is for Longhorn, add it as a storage path on the node:
```bash
# Create the Longhorn data directory on the new disk
sudo mkdir -p /mnt/data/longhorn
```
Then update the Longhorn node configuration:
```yaml
apiVersion: longhorn.io/v1beta2
kind: Node
metadata:
  name: worker-01
  namespace: longhorn-system
spec:
  disks:
    default-disk:
      path: /var/lib/longhorn
      allowScheduling: true
    data-disk:
      path: /mnt/data/longhorn
      allowScheduling: true
      storageReserved: 10737418240 # 10 GiB reserved
```
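Instead of editing the whole Node CR by hand, the extra disk can be added with a merge patch. A sketch, assuming the node is named `worker-01` (the `kubectl` invocation is shown in a comment rather than executed):

```shell
# Merge-patch payload that adds the new disk entry to the Longhorn Node CR;
# a merge patch leaves the existing default-disk entry untouched
PATCH='{"spec":{"disks":{"data-disk":{"path":"/mnt/data/longhorn","allowScheduling":true,"storageReserved":10737418240}}}}'
echo "${PATCH}"
# Applied against the cluster with:
#   kubectl -n longhorn-system patch nodes.longhorn.io worker-01 --type merge -p "${PATCH}"
```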
## VMware: Add a Virtual Disk
### Step 1: Add the Disk via Terraform
```hcl
# infrastructure/clusters/<cluster>/main.tf
resource "vsphere_virtual_machine" "worker_01" {
  # ... existing configuration ...

  disk {
    label       = "disk0"
    size        = 100
    unit_number = 0
  }

  # Additional data disk
  disk {
    label            = "disk1"
    size             = 500
    unit_number      = 1
    thin_provisioned = true
  }
}
```

Apply the change:

```bash
terraform plan
terraform apply
```
VMware hot-adds the disk without requiring a VM restart (if the VM hardware version supports it).
### Step 2: Format and Mount
Follow the same format-and-mount steps as for OpenStack. On VMware the new disk typically appears as `/dev/sdb`; confirm the device name with `lsblk` before formatting.
## Verification
```bash
# Confirm the disk is mounted
df -h | grep /mnt/data
# Check Longhorn disk status (if applicable)
kubectl get nodes.longhorn.io -n longhorn-system -o yaml | grep -A10 disks
# Verify no I/O errors
dmesg | grep -i error | grep -i disk
```
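The mount check can be wrapped in a small script that reports clearly when the mount is missing, e.g. for use from a node health cron job. A sketch, assuming the mount point `/mnt/data`:

```shell
# Report whether the data disk mount is present
MOUNT_POINT="/mnt/data"
if df -h | grep -q "${MOUNT_POINT}"; then
  echo "OK: ${MOUNT_POINT} is mounted"
else
  echo "WARN: ${MOUNT_POINT} is not mounted"
fi
```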
## Troubleshooting
- **Disk not visible after attach.** Rescan the SCSI bus: `echo "- - -" | sudo tee /sys/class/scsi_host/host*/scan`. On VMware, the guest OS may need a reboot if hot-add is not supported.
- **Longhorn not using the new disk.** Verify the Node CR has the new disk path and `allowScheduling: true`, and check the Longhorn manager logs.
- **fstab entry causes boot failure.** Use the `nofail` mount option: `/dev/vdb /mnt/data ext4 defaults,nofail 0 2`. This prevents boot hangs if the disk is detached. You can also sanity-check fstab edits with `sudo findmnt --verify` before rebooting.