Reference Architecture: Physical Compute

Purpose: Provides platform engineers with server hardware specifications, CPU/memory configurations, BIOS/UEFI settings, and bare-metal provisioning requirements.

Overview

openCenter deploys on x86_64 server hardware running VMware vSphere or OpenStack (KVM). This document covers physical server specifications for control plane nodes, worker nodes, and infrastructure hosts (vCenter, bastion, storage controllers). All sizing aligns with the values in Capacity & Sizing.

Server Role Definitions

| Role | Count | Description |
|---|---|---|
| Hypervisor Host | 3–6 | Runs ESXi or KVM; hosts all Kubernetes VMs |
| Management Host | 1–2 | Runs vCenter Server, bastion, jump hosts |
| Storage Controller | 0–2 | Dedicated storage nodes (if using external SAN/NAS) |

Minimum Hardware Specifications

Hypervisor Hosts

Each hypervisor host must support the aggregate VM workload for the Kubernetes nodes it hosts. Size for a 1.3× overcommit ratio on CPU and 1.0× (no overcommit) on memory.

| Component | Minimum | Recommended | Notes |
|---|---|---|---|
| CPU | 2× Intel Xeon Silver 4314 (16C/32T) | 2× Intel Xeon Gold 6338 (32C/64T) | AMD EPYC 7003 series also supported |
| Memory | 256 GB DDR4 ECC 3200 MHz | 512 GB DDR4 ECC 3200 MHz | Populate all memory channels evenly |
| Boot Disk | 2× 480 GB SATA SSD (RAID 1) | 2× 960 GB NVMe M.2 (RAID 1) | For ESXi/KVM host OS only |
| Local Datastore | 2× 1.92 TB NVMe SSD | 4× 3.84 TB NVMe SSD | VM disks, etcd, container images |
| NIC | 2× 25 GbE SFP28 | 4× 25 GbE SFP28 | Minimum two for redundancy |
| BMC/IPMI | iLO 5 / iDRAC 9 / IPMI 2.0 | iLO 6 / iDRAC 9 Enterprise | Out-of-band management required |
| PSU | 2× 800 W 80+ Platinum | 2× 1200 W 80+ Titanium | Redundant, hot-swappable |

Management Hosts

| Component | Minimum | Recommended |
|---|---|---|
| CPU | 1× Intel Xeon Silver 4310 (12C/24T) | 1× Intel Xeon Gold 5318Y (24C/48T) |
| Memory | 128 GB DDR4 ECC | 256 GB DDR4 ECC |
| Boot Disk | 2× 480 GB SATA SSD (RAID 1) | 2× 960 GB NVMe (RAID 1) |
| NIC | 2× 10 GbE | 2× 25 GbE |

BIOS/UEFI Settings

Configure these settings before installing the hypervisor OS. Incorrect BIOS settings cause measurable performance degradation for Kubernetes workloads, particularly etcd latency.

| Setting | Value | Reason |
|---|---|---|
| Boot Mode | UEFI | Required for Secure Boot and modern OS support |
| Secure Boot | Enabled | ESXi 7.0+ and most Linux distros support it |
| Virtualization (VT-x / AMD-V) | Enabled | Required for hypervisor operation |
| VT-d / AMD-Vi (IOMMU) | Enabled | Required for PCI passthrough and SR-IOV |
| Hyper-Threading / SMT | Enabled | Increases vCPU capacity; disable only if security policy requires it |
| Power Profile | Maximum Performance | Prevents C-state latency spikes affecting etcd |
| C-States | Disabled (or C1 only) | Deep C-states add microsecond-level wake latency |
| P-States / SpeedStep | Disabled | Locks CPU frequency for consistent performance |
| Turbo Boost | Enabled | Allows burst capacity for short workloads |
| NUMA | Enabled | Required for NUMA-aware VM placement |
| Memory Interleaving | Disabled (use NUMA) | Channel interleaving defeats NUMA locality |
| SR-IOV | Enabled | Required if using SR-IOV virtual functions for networking |
| TPM 2.0 | Enabled | Required for vTPM and Secure Boot attestation |
| Serial Console Redirection | Enabled (COM1, 115200) | Enables remote BIOS access via BMC |
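The baseline above can be enforced mechanically by diffing a host's exported settings against the required values. A minimal sketch, assuming a flat dict of setting names to values — real exports (e.g. via Redfish from iLO/iDRAC) use vendor-specific keys you would map first:

```python
# Required BIOS baseline from the table above. Key names are illustrative,
# not actual vendor attribute names.
REQUIRED = {
    "BootMode": "UEFI",
    "SecureBoot": "Enabled",
    "VTx": "Enabled",
    "VTd": "Enabled",
    "PowerProfile": "MaxPerformance",
    "CStates": "Disabled",
    "Turbo": "Enabled",
}

def check_bios(actual: dict) -> list:
    """Return (setting, expected, found) tuples for every deviation
    from the baseline; an empty list means the host is compliant."""
    deviations = []
    for key, expected in REQUIRED.items():
        found = actual.get(key, "<missing>")
        if found != expected:
            deviations.append((key, expected, found))
    return deviations
```

Running this against every host before hypervisor install catches the drift (e.g. a host left on a balanced power profile) that otherwise surfaces later as etcd latency.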

Firmware and Driver Requirements

Keep firmware current. Mismatched firmware versions across hosts cause intermittent failures that are difficult to diagnose.

  • Update to the latest vendor-certified firmware bundle before deployment (e.g., HPE SPP, Dell DSU, Lenovo UpdateXpress).
  • Use the VMware HCL or Linux kernel compatibility list to verify NIC and storage controller driver versions.
  • Document firmware versions per host in the asset inventory. Track these in a CMDB or spreadsheet at minimum.
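Because mismatched firmware across hosts is the failure mode to avoid, the per-host inventory is worth checking for drift automatically. A sketch, assuming the inventory is kept as a `{host: {component: version}}` mapping (the shape is an assumption, not an openCenter format):

```python
def firmware_drift(inventory: dict) -> dict:
    """Given {host: {component: version}}, return the components that
    were reported with more than one distinct version across hosts,
    mapped to the sorted list of versions seen."""
    versions = {}
    for host, components in inventory.items():
        for comp, ver in components.items():
            versions.setdefault(comp, set()).add(ver)
    return {comp: sorted(vers)
            for comp, vers in versions.items() if len(vers) > 1}
```

An empty result means the fleet is homogeneous; anything else names the component to remediate before deployment.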

CPU and Memory Sizing Rationale

For a medium cluster (50–200 workers), a three-host hypervisor cluster needs to run:

| VM Role | Count | vCPU | Memory | Total vCPU | Total Memory |
|---|---|---|---|---|---|
| Control Plane | 3 | 8 | 16 GB | 24 | 48 GB |
| Worker (General) | 6 | 4 | 16 GB | 24 | 96 GB |
| Worker (Compute) | 3 | 8 | 16 GB | 24 | 48 GB |
| Bastion | 1 | 2 | 4 GB | 2 | 4 GB |
| Total | 13 | — | — | 74 | 196 GB |

At a 1.3× CPU overcommit, 74 vCPU maps to ⌈74 / 1.3⌉ = 57 physical cores cluster-wide, or about 19 cores per host across three hosts; at 1.0× memory overcommit, 196 GB works out to about 66 GB per host. Both figures sit well within the minimum spec above, leaving headroom for hypervisor overhead and N+1 failover.
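The sizing arithmetic can be sketched as a small helper; the VM inventory and the 1.3×/1.0× overcommit ratios are taken from the tables above:

```python
import math

# (role, vm_count, vcpu_per_vm, mem_gb_per_vm) — from the VM table above.
VMS = [
    ("control-plane", 3, 8, 16),
    ("worker-general", 6, 4, 16),
    ("worker-compute", 3, 8, 16),
    ("bastion", 1, 2, 4),
]

def per_host_requirements(hosts: int, cpu_overcommit: float = 1.3):
    """Physical cores and memory (GB) each hypervisor host must provide,
    assuming VMs spread evenly and memory is not overcommitted."""
    total_vcpu = sum(count * vcpu for _, count, vcpu, _ in VMS)
    total_mem = sum(count * mem for _, count, _, mem in VMS)
    cores = math.ceil(total_vcpu / cpu_overcommit / hosts)
    mem = math.ceil(total_mem / hosts)
    return cores, mem
```

Dropping the overcommit to 1.0× (e.g. for latency-sensitive clusters) raises the per-host core requirement accordingly, which is worth checking before committing to the minimum CPU SKU.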

Bare-Metal Provisioning

openCenter does not manage bare-metal provisioning directly. Provision hosts using one of:

  • PXE/iPXE boot with a DHCP/TFTP server serving ESXi or Ubuntu installer images.
  • Vendor tools such as HPE OneView, Dell OpenManage, or Lenovo XClarity for image-based deployment.
  • Foreman/MAAS for automated bare-metal lifecycle in OpenStack environments.
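For the PXE/iPXE path, a minimal dnsmasq proxy-DHCP/TFTP configuration might look like the following sketch; the interface name, subnet, and boot filename are assumptions to adapt per site:

```
# Sketch: dnsmasq serving UEFI PXE clients alongside an existing DHCP
# server (proxy mode). Interface, subnet, and paths are site-specific.
interface=eno1
dhcp-range=10.0.10.0,proxy
# Match 64-bit UEFI clients (client architecture type 7) and hand them iPXE.
dhcp-match=set:efi64,option:client-arch,7
dhcp-boot=tag:efi64,ipxe.efi
enable-tftp
tftp-root=/srv/tftp
```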

After OS installation, configure networking and storage as described in the Physical Network and Physical Storage documents.

Considerations

  • Homogeneous hardware across hypervisor hosts simplifies vMotion/live migration and avoids EVC mode constraints in vSphere.
  • NUMA alignment matters for etcd and latency-sensitive workloads. Pin control plane VMs to a single NUMA node when possible.
  • Memory ECC is non-negotiable for production. Non-ECC memory causes silent data corruption.
  • GPU passthrough for ML/AI workloads requires IOMMU enabled and vendor-specific vGPU drivers (NVIDIA vGPU, AMD MxGPU). This is outside the standard reference architecture.
  • Warranty and support contracts should cover next-business-day or 4-hour parts replacement depending on SLA requirements.
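For the NUMA-pinning consideration above, vSphere exposes per-VM advanced options; a hedged sketch constraining a control plane VM to NUMA node 0 (verify the option names against your vSphere version's documentation before use):

```
# Sketch: VMX advanced settings restricting VM placement to NUMA node 0.
numa.nodeAffinity = "0"
# Prevent the scheduler from re-sizing the NUMA client after first boot.
numa.autosize.once = "FALSE"
```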