# Managed Kafka: Security & TLS

Purpose: Shows platform engineers how to configure mTLS, SCRAM-SHA-512 authentication, and ACLs for Managed Kafka.
## Task Summary
This guide covers the three security layers for openCenter Managed Kafka: TLS encryption in transit, client authentication (mTLS or SCRAM-SHA-512), and authorization via ACLs. All configuration is declarative through Strimzi CRDs and managed via GitOps.
## Prerequisites

- A running Kafka cluster deployed per Deploying Kafka
- cert-manager deployed and issuing certificates in the cluster
- `kubectl` access to the `data-kafka` namespace
## TLS Encryption
TLS is mandatory for all Kafka listeners in the openCenter blueprint. The Kafka CR's listener configuration enables TLS, and Strimzi works with cert-manager to issue broker certificates automatically.
The `tls` listener defined in the Kafka CR handles certificate provisioning:

```yaml
listeners:
  - name: tls
    port: 9093
    type: internal
    tls: true
    authentication:
      type: tls
```
Strimzi creates a ClusterCA and ClientsCA for the Kafka cluster. Broker-to-broker communication and client-to-broker communication are both encrypted. Certificates are stored as Kubernetes Secrets in the data-kafka namespace:
```bash
kubectl get secrets -n data-kafka -l strimzi.io/cluster=production
# production-cluster-ca-cert — Cluster CA certificate
# production-clients-ca-cert — Clients CA certificate
# production-kafka-brokers — Broker certificates
```
## Client Authentication

### Option A: mTLS (Certificate-Based)
mTLS is the default authentication method. Each client gets a certificate signed by the Kafka cluster's Clients CA.
Create a KafkaUser with TLS authentication:
```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: my-app-producer
  namespace: data-kafka
  labels:
    strimzi.io/cluster: production
spec:
  authentication:
    type: tls
  authorization:
    type: simple
    acls:
      - resource:
          type: topic
          name: orders
          patternType: literal
        operations: [Write, Describe]
      - resource:
          type: topic
          name: orders
          patternType: literal
        operations: [Read, Describe]
        host: "*"
```
Strimzi generates a Secret containing the client certificate and key:
```bash
kubectl get secret my-app-producer -n data-kafka
# Contains: ca.crt, user.crt, user.key, user.p12, user.password
```
Application pods mount this Secret and configure their Kafka client with the certificate.
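As a sketch, a pod spec can mount the Secret as a volume so the client can read the certificate material; only the Secret name `my-app-producer` comes from the example above, while the pod name, namespace, image, and mount path are illustrative:

```yaml
# Illustrative pod spec: mounts the my-app-producer Secret so the
# Kafka client can read user.crt / user.key / user.p12 at runtime.
apiVersion: v1
kind: Pod
metadata:
  name: my-app                               # hypothetical name
  namespace: my-app-ns                       # hypothetical namespace
spec:
  containers:
    - name: app
      image: registry.example.com/my-app:1.0 # hypothetical image
      volumeMounts:
        - name: kafka-credentials
          mountPath: /etc/kafka/certs        # hypothetical mount path
          readOnly: true
  volumes:
    - name: kafka-credentials
      secret:
        secretName: my-app-producer          # the Secret created by Strimzi
```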
### Option B: SCRAM-SHA-512 (Password-Based)
For clients that cannot use certificates, SCRAM-SHA-512 provides password-based authentication. The listener must be configured for SCRAM:
```yaml
listeners:
  - name: scram
    port: 9094
    type: internal
    tls: true
    authentication:
      type: scram-sha-512
```
Create a SCRAM user:
```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: my-app-consumer
  namespace: data-kafka
  labels:
    strimzi.io/cluster: production
spec:
  authentication:
    type: scram-sha-512
  authorization:
    type: simple
    acls:
      - resource:
          type: topic
          name: orders
          patternType: literal
        operations: [Read, Describe]
      - resource:
          type: group
          name: my-app-group
          patternType: literal
        operations: [Read]
```
Strimzi generates a Secret with the SCRAM password:
```bash
kubectl get secret my-app-consumer -n data-kafka -o jsonpath='{.data.password}' | base64 -d
```
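With the password in hand, the client's SASL settings for the SCRAM listener look roughly like this. The truststore path and the placeholder passwords are assumptions; the mechanism and JAAS module are standard Kafka client configuration:

```properties
# Sketch of client settings for the SCRAM listener (port 9094).
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-512
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
  username="my-app-consumer" \
  password="<password from the my-app-consumer Secret>";
# Trust the cluster CA so the listener's TLS certificate verifies
# (path and password are illustrative; the CA material comes from
# the production-cluster-ca-cert Secret).
ssl.truststore.location=/etc/kafka/certs/ca.p12
ssl.truststore.type=PKCS12
ssl.truststore.password=<ca.password from the Secret>
```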
## ACL Authorization
ACLs control which users can perform which operations on which resources. The authorization block in the KafkaUser CR defines the rules.
### ACL Resource Types

| Resource | Description | Common Operations |
|---|---|---|
| topic | Kafka topic | Read, Write, Describe, Create, Delete, Alter |
| group | Consumer group | Read, Describe, Delete |
| cluster | Cluster-level operations | Create (for topic auto-creation), Alter, Describe |
| transactionalId | Transactional producer ID | Write, Describe |
### ACL Pattern Types

| Pattern | Behavior |
|---|---|
| literal | Exact match on resource name |
| prefix | Matches any resource starting with the given name |
### Example: Read-Only Consumer

```yaml
authorization:
  type: simple
  acls:
    - resource:
        type: topic
        name: events-
        patternType: prefix
      operations: [Read, Describe]
    - resource:
        type: group
        name: analytics-consumer
        patternType: literal
      operations: [Read]
```
This grants read access to all topics starting with `events-` and allows the consumer to join the `analytics-consumer` group.
## Network Policies
The Kafka blueprint includes a default-deny NetworkPolicy for the data-kafka namespace. Allowed traffic:
- Broker-to-broker communication (ports 9091, 9093)
- Client-to-broker on declared listener ports
- Strimzi operator to broker management port
- Prometheus scraping on JMX exporter port (9404)
Application namespaces that need Kafka access must have a NetworkPolicy allowing egress to data-kafka on the listener port.
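As a sketch, such an egress policy might look like the following; the application namespace, policy name, and pod selector are illustrative, and the listener port matches the `tls` listener from this guide:

```yaml
# Illustrative egress policy for an application namespace that
# talks to Kafka via the tls listener on port 9093.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-kafka-egress      # hypothetical name
  namespace: my-app-ns          # hypothetical namespace
spec:
  podSelector: {}               # applies to all pods in the namespace
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: data-kafka
      ports:
        - protocol: TCP
          port: 9093
```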
## Verification
Confirm TLS is active on the listener:
```bash
kubectl get kafka production -n data-kafka \
  -o jsonpath='{.status.listeners[?(@.name=="tls")].certificates}'
```
Verify a KafkaUser's Secret was created:
```bash
kubectl get secret my-app-producer -n data-kafka -o jsonpath='{.data.user\.crt}' | base64 -d | openssl x509 -noout -subject
```
Test connectivity from a pod with the client certificate:
```bash
kubectl run kafka-test --rm -it --image=quay.io/strimzi/kafka:latest-kafka-3.7.0 \
  -n data-kafka -- bin/kafka-topics.sh \
  --bootstrap-server production-kafka-bootstrap:9093 \
  --command-config /tmp/client.properties \
  --list
```
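The `--command-config` file referenced above is not created by the test pod. As a sketch, an mTLS `client.properties` assembled from the KafkaUser and cluster CA Secrets could look like this; the `/tmp` paths and placeholder passwords are illustrative:

```properties
# Illustrative mTLS client settings for the tls listener (port 9093).
security.protocol=SSL
# Keystore from the my-app-producer Secret (user.p12 / user.password)
ssl.keystore.location=/tmp/user.p12
ssl.keystore.type=PKCS12
ssl.keystore.password=<user.password from the Secret>
# Truststore from the production-cluster-ca-cert Secret (ca.p12 / ca.password)
ssl.truststore.location=/tmp/ca.p12
ssl.truststore.type=PKCS12
ssl.truststore.password=<ca.password from the Secret>
```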
## Troubleshooting
- **Client cannot connect** — Verify the client's Secret exists and contains `user.crt` and `user.key`. Check that the client is using the correct bootstrap address and port. Confirm the NetworkPolicy allows egress from the client namespace.
- **ACL denied errors** — Check that the KafkaUser's ACL rules match the topic name and operation. Use `kubectl describe kafkauser <name> -n data-kafka` to see the reconciled ACL state.
- **Certificate expired** — Strimzi auto-renews certificates before expiry. If renewal failed, check the Strimzi operator logs: `kubectl logs -n data-kafka -l strimzi.io/kind=cluster-operator`.