# Managed Kafka: Deploying a Cluster
**Purpose:** Shows platform engineers how to provision a Strimzi-backed Kafka cluster with openCenter defaults.
## Outcome
A 3-broker Kafka cluster running in the data-kafka namespace, managed by the Strimzi operator, with TLS enabled, JMX metrics exported, and the entire stack reconciled through FluxCD.
## Prerequisites
- An openCenter-managed Kubernetes cluster with FluxCD bootstrapped
- `openCenter-gitops-base` referenced as a GitRepository source in FluxCD
- cert-manager deployed and issuing certificates
- An approved storage class with qualified IOPS (check your cluster's storage profile)
- `kubectl` access with permissions to create namespaces and CRDs
## Step 1: Deploy the Strimzi Operator
The Strimzi operator is deployed from openCenter-gitops-base via FluxCD. Add the Kustomization to your cluster overlay:
```yaml
# applications/overlays/<cluster>/services/fluxcd/strimzi-operator.yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: strimzi-operator
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: opencenter-gitops-base
  path: applications/base/services/strimzi-operator
  prune: true
  targetNamespace: data-kafka
  postBuild:
    substituteFrom:
      - kind: ConfigMap
        name: cluster-vars
```
Commit and push. FluxCD will reconcile the operator into the data-kafka namespace.
Verify the operator is running:
```shell
kubectl get pods -n data-kafka -l strimzi.io/kind=cluster-operator
# Expected: strimzi-cluster-operator-<hash>   1/1   Running
```
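If the flux CLI is installed, you can also confirm that the Kustomization itself reconciled cleanly (this assumes FluxCD runs in the default `flux-system` namespace):

```shell
# Check reconciliation status of the operator Kustomization
flux get kustomizations strimzi-operator -n flux-system
# The READY column should report True once the apply succeeds
```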
## Step 2: Define the Kafka Cluster CR
Create the Kafka custom resource in your cluster overlay:
```yaml
# applications/overlays/<cluster>/managed-services/kafka/kafka-cluster.yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: production
  namespace: data-kafka
spec:
  kafka:
    version: "3.7.0"
    replicas: 3
    listeners:
      - name: tls
        port: 9093
        type: internal
        tls: true
        authentication:
          type: tls
    config:
      offsets.topic.replication.factor: 3
      transaction.state.log.replication.factor: 3
      transaction.state.log.min.isr: 2
      default.replication.factor: 3
      min.insync.replicas: 2
      log.retention.hours: 168
    storage:
      type: persistent-claim
      size: 100Gi
      class: <your-approved-storage-class>
    metricsConfig:
      type: jmxPrometheusExporter
      valueFrom:
        configMapKeyRef:
          name: kafka-metrics
          key: kafka-metrics-config.yml
    template:
      pod:
        affinity:
          podAntiAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              - labelSelector:
                  matchLabels:
                    strimzi.io/name: production-kafka
                topologyKey: kubernetes.io/hostname
  zookeeper:
    replicas: 3
    storage:
      type: persistent-claim
      size: 20Gi
      class: <your-approved-storage-class>
  entityOperator:
    topicOperator: {}
    userOperator: {}
```
Replace `<your-approved-storage-class>` with the storage class qualified for your environment (e.g., `longhorn`, `cinder-ssd`, `vsphere-csi`).
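To see which storage classes are available on your cluster before choosing (a quick check, not a substitute for consulting the storage profile):

```shell
# List storage classes and their provisioners; pick one qualified for Kafka's IOPS needs
kubectl get storageclass
```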
## Step 3: Add the JMX Metrics ConfigMap
```yaml
# applications/overlays/<cluster>/managed-services/kafka/kafka-metrics-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kafka-metrics
  namespace: data-kafka
data:
  kafka-metrics-config.yml: |
    lowercaseOutputName: true
    rules:
      - pattern: "kafka.server<type=(.+), name=(.+), clientId=(.+), topic=(.+), partition=(.*)><>Value"
        name: kafka_server_$1_$2
        type: GAUGE
        labels:
          clientId: "$3"
          topic: "$4"
          partition: "$5"
      - pattern: "kafka.server<type=(.+), name=(.+)><>Value"
        name: kafka_server_$1_$2
        type: GAUGE
      - pattern: "kafka.server<type=(.+), name=(.+)><>Count"
        name: kafka_server_$1_$2_total
        type: COUNTER
```
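As a rough illustration of what the exporter does with these rules (a standalone Python sketch, not the exporter's actual code), the third rule rewrites a broker MBean reading into a Prometheus counter name, lowercased because of `lowercaseOutputName: true`:

```python
import re

# Standalone sketch (not the exporter's real code): how the third rule above
# maps a JMX MBean reading onto a Prometheus counter name.
RULE_PATTERN = r"kafka\.server<type=(.+), name=(.+)><>Count"
RULE_NAME = r"kafka_server_\1_\2_total"

def apply_rule(mbean):
    """Return the Prometheus metric name if the rule matches, else None."""
    m = re.fullmatch(RULE_PATTERN, mbean)
    if m is None:
        return None
    # lowercaseOutputName: true in the ConfigMap lowercases the final name
    return m.expand(RULE_NAME).lower()

print(apply_rule("kafka.server<type=BrokerTopicMetrics, name=MessagesInPerSec><>Count"))
# kafka_server_brokertopicmetrics_messagesinpersec_total
```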
## Step 4: Create the FluxCD Kustomization
Wire the Kafka resources into FluxCD:
```yaml
# applications/overlays/<cluster>/managed-services/kafka/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - kafka-cluster.yaml
  - kafka-metrics-config.yaml
```
Add a FluxCD Kustomization to deploy it:
```yaml
# applications/overlays/<cluster>/services/fluxcd/managed-kafka.yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: managed-kafka
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: cluster-gitops
  path: applications/overlays/<cluster>/managed-services/kafka
  prune: true
  dependsOn:
    - name: strimzi-operator
```
Commit and push all files.
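Rather than waiting for the next reconciliation interval, you can trigger an immediate sync with the flux CLI (assuming your cluster repo's GitRepository is named `cluster-gitops`, as in the Kustomization above):

```shell
# Fetch the latest commit and apply the Kafka Kustomization in one step
flux reconcile kustomization managed-kafka -n flux-system --with-source
```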
## Step 5: Verify the Deployment
Watch the Kafka cluster come up:
```shell
kubectl get kafka -n data-kafka
# Expected: production   True   (after several minutes)

kubectl get pods -n data-kafka -l strimzi.io/cluster=production
# Expected: 3 kafka pods, 3 zookeeper pods, entity-operator pod
```
Check that all brokers are ready:
```shell
kubectl get pods -n data-kafka -l strimzi.io/name=production-kafka
# All 3 should show 1/1 Running
```
Verify TLS listener is active:
```shell
kubectl get kafka production -n data-kafka -o jsonpath='{.status.listeners[*].bootstrapServers}'
# Expected: production-kafka-tls-bootstrap.data-kafka.svc:9093
```
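Clients connecting over the TLS listener need the cluster CA certificate. Strimzi stores it in a secret named `<cluster-name>-cluster-ca-cert`, so for this cluster it can be extracted with:

```shell
# Extract the cluster CA certificate for client truststores
kubectl get secret production-cluster-ca-cert -n data-kafka \
  -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt
```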
## Check Your Work
- Strimzi operator pod is `Running` in `data-kafka`
- 3 Kafka broker pods are `Running` with anti-affinity spread
- 3 ZooKeeper pods are `Running`
- Entity operator pod is `Running`
- TLS listener reports a bootstrap address in `.status.listeners`
- Persistent volumes are bound for all broker and ZooKeeper pods
- FluxCD Kustomization shows `Ready: True`
## Next Steps
- Configure security — set up client authentication and ACLs
- Create topics — declare topics via KafkaTopic CRDs
- Set up monitoring — connect Prometheus and Grafana dashboards
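For example, a declarative topic managed by the topic operator might look like the following (illustrative values: the `orders` name, partition count, and retention are assumptions to adapt):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: orders
  namespace: data-kafka
  labels:
    strimzi.io/cluster: production   # binds the topic to this Kafka cluster
spec:
  partitions: 12
  replicas: 3
  config:
    retention.ms: "604800000"   # 7 days
```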