Managed Kafka: Topics & Configuration
Purpose: Shows platform engineers how to create, configure, and manage Kafka topics via KafkaTopic CRDs.
Task Summary
Kafka topics in the openCenter blueprint are managed declaratively through Strimzi's KafkaTopic CRD. This guide covers creating topics, tuning configuration, and handling common lifecycle operations. All changes go through GitOps — no imperative topic creation in production.
Prerequisites
- A running Kafka cluster deployed per Deploying Kafka
- The Strimzi Entity Operator running (it manages topic reconciliation)
- `kubectl` access to the `data-kafka` namespace
Creating a Topic
Define a KafkaTopic resource in your GitOps repository:
```yaml
# applications/overlays/<cluster>/managed-services/kafka/topics/orders.yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: orders
  namespace: data-kafka
  labels:
    strimzi.io/cluster: production
spec:
  partitions: 12
  replicas: 3
  config:
    retention.ms: "604800000" # 7 days
    min.insync.replicas: "2"
    cleanup.policy: "delete"
    segment.bytes: "1073741824" # 1 GiB segments
```
The `strimzi.io/cluster: production` label tells the Entity Operator which Kafka cluster owns this topic.
Commit and push. FluxCD reconciles the resource, and the Entity Operator creates the topic on the brokers.
Verify:
```shell
kubectl get kafkatopic orders -n data-kafka
# NAME     CLUSTER      PARTITIONS   REPLICATION FACTOR   READY
# orders   production   12           3                    True
```
Topic Configuration Reference
| Config Key | Default | Description |
|---|---|---|
| `retention.ms` | 604800000 (7d) | How long messages are retained before deletion |
| `retention.bytes` | -1 (unlimited) | Max bytes retained per partition before deletion |
| `min.insync.replicas` | 2 | Minimum in-sync replica (ISR) count for a write to succeed (with `acks=all`) |
| `cleanup.policy` | delete | `delete` removes old segments; `compact` keeps the latest record per key |
| `segment.bytes` | 1073741824 | Size of a single log segment file |
| `max.message.bytes` | 1048588 | Maximum message size (including headers) |
| `compression.type` | producer | Broker-side compression: `producer`, `gzip`, `snappy`, `lz4`, `zstd` |
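The millisecond values in this table are easy to mistype. As a sanity check, a small sketch in plain Python (the helper name is illustrative, not part of any Kafka or Strimzi API) converts human-readable durations into the strings these configs expect:

```python
# Convert human-readable durations into the millisecond strings
# Kafka topic configs expect. duration_ms is a hypothetical helper,
# not part of any Kafka or Strimzi API.

def duration_ms(days: int = 0, hours: int = 0, minutes: int = 0) -> str:
    """Return the total duration as a millisecond string."""
    total_ms = ((days * 24 + hours) * 60 + minutes) * 60 * 1000
    return str(total_ms)

# Values used in this guide:
print(duration_ms(days=7))   # -> 604800000 (default retention.ms)
print(duration_ms(days=30))  # -> 2592000000 (event-stream retention)
print(duration_ms(days=1))   # -> 86400000 (tombstone retention)
```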
Common Topic Patterns
Event Stream (Time-Based Retention)
```yaml
spec:
  partitions: 12
  replicas: 3
  config:
    retention.ms: "2592000000" # 30 days
    cleanup.policy: "delete"
    min.insync.replicas: "2"
```
Compacted Topic (Latest-Value Lookup)
```yaml
spec:
  partitions: 6
  replicas: 3
  config:
    cleanup.policy: "compact"
    min.cleanable.dirty.ratio: "0.5"
    delete.retention.ms: "86400000" # 1 day tombstone retention
    min.insync.replicas: "2"
```
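Compaction semantics can be modeled in a few lines: the broker eventually keeps only the newest record per key, and a record with a null value (a tombstone) marks the key for removal. A simplified sketch of the idea, not the broker's actual algorithm:

```python
# Simplified model of log compaction: keep the latest value per key;
# a None value is a tombstone that deletes the key. This ignores
# segment boundaries, dirty ratios, and delete.retention.ms timing.

def compact(log):
    latest = {}
    for key, value in log:         # log is an ordered list of (key, value)
        if value is None:
            latest.pop(key, None)  # tombstone: remove the key entirely
        else:
            latest[key] = value    # newer value replaces older ones
    return latest

log = [("user-1", "a"), ("user-2", "b"), ("user-1", "c"), ("user-2", None)]
print(compact(log))  # {'user-1': 'c'}
```

This is why compacted topics suit latest-value lookup: consumers rebuilding state only ever need the surviving record per key.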
High-Throughput Ingestion
```yaml
spec:
  partitions: 24
  replicas: 3
  config:
    retention.ms: "86400000" # 1 day
    segment.bytes: "536870912" # 512 MiB segments
    compression.type: "lz4"
    min.insync.replicas: "2"
```
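Partition counts like 24 usually come from a throughput budget: divide the target throughput by what one partition sustains, add headroom, and round up. A back-of-the-envelope sketch (the per-partition figure is an assumption you should measure on your own cluster, not a Kafka constant):

```python
import math

def partitions_for(target_mb_s: float, per_partition_mb_s: float,
                   headroom: float = 1.5) -> int:
    """Partitions needed to absorb target throughput with safety headroom."""
    return math.ceil(target_mb_s * headroom / per_partition_mb_s)

# e.g. a 160 MB/s target and ~10 MB/s sustained per partition (measured)
print(partitions_for(160, 10))  # -> 24
```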
Modifying a Topic
Change the spec.config fields in the KafkaTopic YAML and commit. The Entity Operator applies the configuration change without downtime.
Partition count can be increased but not decreased:
```yaml
spec:
  partitions: 24 # was 12; increase is allowed
```
Decreasing partitions requires deleting and recreating the topic, which causes data loss.
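Note that increasing partitions, while allowed, reshuffles keyed messages: the default partitioner maps a key's hash modulo the partition count, so per-key ordering is only preserved going forward. A sketch of the effect, using `zlib.crc32` as a stand-in for Kafka's actual murmur2 hash:

```python
# Illustrates why a partition increase moves keyed messages:
# partition = hash(key) % num_partitions. zlib.crc32 stands in
# for Kafka's murmur2 hash; the principle is the same.
import zlib

def partition_for(key: bytes, num_partitions: int) -> int:
    return zlib.crc32(key) % num_partitions

key = b"order-1234"
print(partition_for(key, 12))  # partition before the change
print(partition_for(key, 24))  # often a different partition after
```

Consumers that rely on per-key ordering should drain in-flight messages before a partition increase, or tolerate a one-time reordering window.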
Deleting a Topic
Remove the KafkaTopic resource from the GitOps repository and commit. FluxCD prunes the resource (if `prune: true` is set on the Kustomization), and the Entity Operator deletes the topic from the brokers.
Confirm deletion:
```shell
kubectl get kafkatopic orders -n data-kafka
# Error from server (NotFound): kafkatopics.kafka.strimzi.io "orders" not found
```
Topic deletion is destructive and irreversible. All messages in the topic are lost.
Listing Topics
```shell
kubectl get kafkatopics -n data-kafka -l strimzi.io/cluster=production
```

For detailed status including partition assignments:

```shell
kubectl get kafkatopic orders -n data-kafka -o yaml
```
Verification
After creating or modifying a topic:
- Check the KafkaTopic status:

  ```shell
  kubectl get kafkatopic <name> -n data-kafka -o jsonpath='{.status.conditions[*].type}'
  # Expected: Ready
  ```

- Verify from inside the cluster:

  ```shell
  kubectl exec -n data-kafka production-kafka-0 -- \
    bin/kafka-topics.sh --bootstrap-server localhost:9092 --describe --topic orders
  ```
Troubleshooting
Topic stuck in NotReady — check the Entity Operator logs:

```shell
kubectl logs -n data-kafka -l strimzi.io/name=production-entity-operator -c topic-operator
```
Common causes: missing strimzi.io/cluster label, replication factor exceeds broker count, or the topic name conflicts with an internal topic.
Partition increase not applied — The Entity Operator reconciles on an interval. Wait up to 2 minutes, then check the operator logs for errors.
Cannot decrease partitions — This is a Kafka limitation. To reduce partitions, delete the topic and recreate it with the desired count. Plan for data loss.