Managed Kafka (Streaming Blueprint)

Purpose: Describes, for platform engineers, operators, and app developers, the Streaming Services blueprint — openCenter Managed Kafka built on the Strimzi operator ecosystem with managed operations, security by default, and air-gap support.

Overview

A secure, managed Apache Kafka service for private, hybrid, and air-gapped environments. Built on the Strimzi operator ecosystem with repeatable deployment, managed operations, and a clear expansion path for future streaming capabilities. Standard Apache Kafka with a managed operating model — not a proprietary alternative.

What You Get

  • Standard Apache Kafka as a managed service — real Kafka, not a proprietary alternative.
  • Managed provisioning and operations — openCenter handles deployment, lifecycle, and operational workflows.
  • High availability deployment patterns — production-ready HA deployment options for supported infrastructure.
  • Dedicated deployment options — stronger isolation for regulated or higher-risk environments.
  • Encryption in transit — secure-by-default service posture with TLS for all communication.
  • Authentication and authorization — controlled access for Kafka users and client applications (SASL/SCRAM, mTLS, OAuth2/OIDC).
  • Observability and lag visibility — Prometheus, Grafana, and Kafka Exporter for operational visibility.
  • Air-gapped deployment support — works in disconnected and tightly controlled environments.
  • Compliance-oriented operating model — supports customers that need controlled operations and evidence-oriented practices.
  • Change windows and maintenance workflows — supports managed upgrades and controlled maintenance.

What Is Not Included

To keep the offering clear and supportable, the base service does not currently include:

  • Serverless Kafka
  • Native autoscaling
  • Built-in schema registry
  • Unlimited connector support
  • Managed runtime for Kafka Streams applications

Schema registry, governance, and curated connectors are planned as future add-ons after the base service stabilizes. See the Data Services roadmap.

Four Pillars

GitOps-Managed Lifecycle

Kafka clusters, topics, users, and ACLs are declared in Git and reconciled by FluxCD. No manual broker configuration. No SSH. Upgrades are a version bump in a YAML file with automatic rolling restarts and health checks.
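As a sketch of this workflow, the declaration below shows a minimal Strimzi `Kafka` custom resource as it might live in Git. The cluster name `events`, namespace `streaming`, and sizing values are illustrative, and the exact schema varies with the Strimzi version and whether KRaft node pools are in use — treat this as a shape, not a definitive manifest.

```yaml
# Illustrative Strimzi Kafka CR tracked in Git and reconciled by FluxCD.
# An upgrade is a one-line change to spec.kafka.version; the Cluster
# Operator then performs the rolling restart with health checks.
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: events            # illustrative cluster name
  namespace: streaming    # illustrative namespace
spec:
  kafka:
    version: 3.7.0        # bump this line to trigger a managed rolling upgrade
    replicas: 3
    listeners:
      - name: tls
        port: 9093
        type: internal
        tls: true         # TLS-only listener, matching the secure-by-default posture
    storage:
      type: persistent-claim
      size: 100Gi
  zookeeper:              # omitted in KRaft-mode deployments using KafkaNodePool
    replicas: 3
    storage:
      type: persistent-claim
      size: 10Gi
  entityOperator:
    topicOperator: {}     # reconciles KafkaTopic CRs
    userOperator: {}      # reconciles KafkaUser CRs
```

Because the CR is the source of truth, a rollback is a Git revert rather than a manual broker operation.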

Observability From Boot

Prometheus exporters, Grafana dashboards, and alerting rules ship with the blueprint. Consumer lag, under-replicated partitions, ISR shrink, and disk usage are monitored before you produce your first message.
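The fragment below sketches how those metrics are typically wired up in Strimzi: the JMX Prometheus exporter for broker-level metrics and Kafka Exporter for consumer lag. The ConfigMap name `kafka-metrics` and its key are assumptions for illustration; the rule file itself ships with the blueprint's monitoring assets.

```yaml
# Sketch: metrics wiring inside a Strimzi Kafka CR (fragment, not a full CR).
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: events                       # illustrative cluster name
spec:
  kafka:
    # ... listeners, storage, version as in the cluster definition ...
    metricsConfig:
      type: jmxPrometheusExporter    # exposes broker JMX metrics for Prometheus
      valueFrom:
        configMapKeyRef:
          name: kafka-metrics        # assumed ConfigMap holding exporter rules
          key: kafka-metrics-config.yml
  kafkaExporter:                     # deploys Kafka Exporter for consumer-group lag
    topicRegex: ".*"
    groupRegex: ".*"
```

With this in place, alerts on under-replicated partitions and lag thresholds can fire before any application traffic exists.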

Security by Default

Inter-broker TLS, SASL authentication, topic-level ACLs, and NetworkPolicies are configured out of the box. Secrets are SOPS-encrypted in Git and decrypted at reconciliation time. No plaintext credentials anywhere.
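A minimal sketch of that access model is a `KafkaUser` CR combining SCRAM credentials with topic-level ACLs. The user name `orders-producer` and topic `orders` are hypothetical; the User Operator generates the credential Secret, so no plaintext password ever enters Git.

```yaml
# Illustrative KafkaUser CR: SCRAM authentication plus least-privilege ACLs.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: orders-producer          # hypothetical application identity
  namespace: streaming
  labels:
    strimzi.io/cluster: events   # binds the user to the target cluster
spec:
  authentication:
    type: scram-sha-512          # credentials generated into a Kubernetes Secret
  authorization:
    type: simple                 # Kafka ACL-based authorization
    acls:
      - resource:
          type: topic
          name: orders
          patternType: literal
        operations:
          - Write
          - Describe
```

Applications read the generated Secret at deploy time, keeping credential rotation an operator concern rather than an application one.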

Operational Resilience

Topic configuration backup, partition reassignment tooling, and tested recovery runbooks. Capacity planning based on actual throughput metrics, not guesswork. Velero integration for disaster recovery of operator state.

Strimzi Ecosystem Components

The blueprint is built on the Strimzi ecosystem:

| Component | Function |
| --- | --- |
| Strimzi Kafka Operator | Core control plane with Cluster, Topic, User, and Entity operators |
| Kafka Access Operator | Service Binding-style Secrets for simplified app connectivity |
| Kafka Bridge | HTTP 1.1 REST API for producing, consuming, and topic management |
| MQTT Bridge | One-way MQTT 3.1.1 to Kafka ingestion with topic mapping rules |
| Drain Cleaner | Admission webhook for safe node draining during maintenance |
| Kafka OAuth | OAuth2/OIDC authentication with Keycloak authorization integration |
| Quotas Plugin | Aggregate broker quotas and storage-aware throttling |
| Config Provider | Kubernetes Secret/ConfigMap integration for externalized config |

All components are operator-managed, GitOps-ready, and include supply chain security artifacts (cosign signatures, SBOMs).
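To illustrate how one of these components is deployed, the sketch below is a `KafkaBridge` CR exposing the HTTP REST API inside the cluster. The name `events-bridge` and the bootstrap address are assumptions following Strimzi's usual `<cluster>-kafka-bootstrap` service naming.

```yaml
# Illustrative KafkaBridge CR: HTTP access to Kafka for non-Kafka-native clients.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaBridge
metadata:
  name: events-bridge                          # hypothetical bridge name
  namespace: streaming
spec:
  replicas: 1
  bootstrapServers: events-kafka-bootstrap:9092  # assumed in-cluster bootstrap service
  http:
    port: 8080                                 # REST endpoint for produce/consume
```

Like the other components, the bridge is reconciled by the operator, so scaling or reconfiguring it is a CR change rather than a manual deployment.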

Platform Capabilities Matrix

| Capability | Status | Implementation |
| --- | --- | --- |
| Runs Apache Kafka Brokers | Full | Strimzi deploys genuine Apache Kafka via CRDs. Full protocol compatibility. |
| Fully Managed Provisioning | Full | Cluster Operator manages lifecycle via declarative CRs. Rolling upgrades with health checks. |
| High Availability / Multi-Zone | Full | Rack awareness, PVC retention, KRaft controller scaling, Drain Cleaner. |
| Encryption in Transit / At Rest | Full | TLS for all connections. At-rest via PVC encryption (storage class). |
| AuthN / AuthZ | Full | SASL/SCRAM, mTLS, OAuth2/OIDC, Kafka ACLs via KafkaUser CR. |
| Kafka Connect | Full | KafkaConnect CR, KafkaConnector CR, OCI plugin artifacts. |
| Replication / DR | Full | KafkaMirrorMaker2 CR. Active-active and active-passive patterns. |
| Observability | Full | Prometheus exporters, Kafka Exporter, Grafana dashboards. |
| IaC / APIs | Full | Kubernetes CRDs as IaC. Helm chart. HTTP Bridge REST API. |
| Air-gapped Support | Full | All images mirrorable. Cosign-signed with SBOM. |
| Compliance / Audit | Full | Cosign signatures, SPDX-JSON SBOMs, Keycloak audit trail. |
| Operational Scope Clarity | Full | Platform team manages operators; app teams manage topics/users/connectors via CRs. |
| Procurement Clarity | Full | Clear ownership: openCenter platform, Strimzi operator, customer application code. |
| Cluster Models | ⚠️ Partial | Shared and dedicated. Multi-tenant isolation via namespaces, not separate clusters. |
| Autoscaling | ⚠️ Partial | Manual scaling via node pools. No native HPA for brokers. |
| SLA Eligibility | ⚠️ Partial | Depends on infrastructure provider SLA and deployment topology. |
| Incident Response | ⚠️ Partial | Runbooks provided. Automated remediation limited to FluxCD reconciliation. |
| Schema Registry | Not included | External deployment required. Planned as future add-on. |
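The replication/DR capability above can be sketched as a `KafkaMirrorMaker2` CR. Cluster aliases, bootstrap addresses, and the catch-all topic pattern here are illustrative; a production active-passive setup would scope the mirrored topics and tune connector configuration.

```yaml
# Illustrative KafkaMirrorMaker2 CR: mirroring topics from a source cluster
# to a DR target cluster (active-passive pattern).
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaMirrorMaker2
metadata:
  name: dr-mirror                  # hypothetical name
  namespace: streaming
spec:
  version: 3.7.0
  replicas: 1
  connectCluster: "target"         # the underlying Connect workers run here
  clusters:
    - alias: "source"
      bootstrapServers: source-kafka-bootstrap:9092   # assumed service names
    - alias: "target"
      bootstrapServers: target-kafka-bootstrap:9092
  mirrors:
    - sourceCluster: "source"
      targetCluster: "target"
      sourceConnector:
        config:
          replication.factor: 3    # replication factor for mirrored topics
      topicsPattern: ".*"          # illustrative; scope this in production
      groupsPattern: ".*"          # also mirrors consumer-group offsets
```

Because MirrorMaker 2 also replicates consumer-group offsets, failover consumers can resume near where the source cluster left off.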

Where It Fits Best

| Audience | Why |
| --- | --- |
| Enterprise Platform Teams | Standardizing event streaming for internal app teams without taking on full broker operations. |
| Regulated Organizations | Need Kafka in private cloud or disconnected environments with controlled operations. |
| Hybrid Cloud Customers | Cannot rely on a SaaS-managed Kafka control plane. Need private or hybrid deployment. |
| Modernization Programs | Adopting event-driven integration patterns with a credible operating model. |

Technical Deep Dives

For deployment procedures, topic management, security configuration, and monitoring setup, see the Data Services section: