Confluent Platform Operator

Confluent Platform

Confluent Platform is a streaming platform that enables you to organize and manage data from many different sources with a single reliable, high-performance system.

Quick Start


A Konvoy cluster with at least 7 worker nodes is required to install the Confluent operator and all platform services.

We start by downloading and unpacking the Confluent Helm bundle (substitute the bundle URL for your Confluent Platform version).

curl <helm bundle URL> | tar -xz
cd helm
We first need to edit a few settings in the providers/aws.yaml Helm values file.

  • For region and zones, configure the same values that you use in the Konvoy cluster.yaml.

    region: us-west-2
    ## If kubernetes is deployed in multi zone mode then specify availability-zones as appropriate
    ## If kubernetes is deployed in single availability zone then specify appropriate values
    zones:
      - us-west-2c
  • For kafka, enable metricsReporter.

    kafka:
      name: kafka
      replicas: 3
      metricsReporter:
        enabled: true
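Taken together, the edits above leave the relevant parts of providers/aws.yaml looking roughly like this (a sketch; the exact nesting follows the values file shipped in the bundle, so verify it against your copy):

```yaml
region: us-west-2
zones:
  - us-west-2c

kafka:
  name: kafka
  replicas: 3
  metricsReporter:
    enabled: true
```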

Enable Load Balancing For External Access

There are a few more settings to edit in the providers/aws.yaml Helm values file if you want to access the Confluent Platform services from outside the Konvoy cluster.

Here we show how to do that configuration for the kafka cluster, but it works analogously for the other platform services.

For kafka, enable loadBalancer and set the domain to one you own.

  kafka:
    name: kafka
    replicas: 3
    loadBalancer:
      enabled: true
      domain: ""

The assumption is that the brokers of the kafka cluster are reachable at external endpoints under the domain you configured. For this to be true, you will have to create CNAME DNS records with your DNS provider (for example, AWS Route 53) once the cluster is up and running. More on this in a later step.

Install the Operator

  1. Install the operator.

    helm install -f ./providers/aws.yaml --name operator --namespace operator --set operator.enabled=true ./confluent-operator
  2. Update the default service account with the image pull secret.

    kubectl -n operator patch serviceaccount default -p '{"imagePullSecrets": [{"name": "confluent-docker-registry" }]}'
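After the patch, the default service account in the operator namespace should carry the pull secret. The relevant fragment of the service account looks roughly like this (a sketch for orientation; check with kubectl get serviceaccount default -n operator -o yaml):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
  namespace: operator
imagePullSecrets:
  - name: confluent-docker-registry
```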

Install the Platform

In this section we show how to install an instance of the Confluent Platform, which is made up of several services.

  • zookeeper
  • kafka
  • controlcenter
  • schema registry
  • connect
  • replicator
  • KSQL

For a first look, you can install just zookeeper, kafka, and controlcenter.

  1. Install the zookeeper service.

    helm install -f ./providers/aws.yaml --name zookeeper --namespace operator --set zookeeper.enabled=true ./confluent-operator
  2. Install the kafka service.

    helm install -f ./providers/aws.yaml --name kafka --namespace operator --set kafka.enabled=true ./confluent-operator
  3. Install the controlcenter service.

    helm install -f ./providers/aws.yaml --name controlcenter --namespace operator --set controlcenter.enabled=true ./confluent-operator
  4. Install the schemaregistry service.

    helm install -f ./providers/aws.yaml --name schemaregistry --namespace operator --set schemaregistry.enabled=true ./confluent-operator
  5. Install the connect service.

    helm install -f ./providers/aws.yaml --name connect --namespace operator --set connect.enabled=true ./confluent-operator
  6. Install the replicator service.

    helm install -f ./providers/aws.yaml --name replicator --namespace operator --set replicator.enabled=true ./confluent-operator
  7. Install the KSQL service.

    helm install -f ./providers/aws.yaml --name ksql --namespace operator --set ksql.enabled=true ./confluent-operator
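Since the seven installs differ only in the release name and the values flag, they can also be generated with a small loop. This sketch only prints the helm commands so you can review them first (pipe the output to sh to execute; it assumes the bundle layout used above, with ./providers/aws.yaml and ./confluent-operator relative to the helm directory):

```shell
# Print one helm install command per Confluent Platform service,
# in dependency order (zookeeper first).
for svc in zookeeper kafka controlcenter schemaregistry connect replicator ksql; do
  echo helm install -f ./providers/aws.yaml --name "$svc" --namespace operator \
       --set "$svc.enabled=true" ./confluent-operator
done
```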

Access the Platform

Access to Control Center

We can use port forwarding to access the controlcenter service, the console of the platform.

kubectl port-forward service/controlcenter 9021:9021 -n operator

After you run the command, open your browser to http://localhost:9021. Log in with the username and password that you find in the controlcenter section of the providers/aws.yaml file. The default is admin/Developer1.

Internal Access To Kafka

Next, validate that you can interact with the kafka cluster itself.

Exec into one of the kafka pods.

kubectl -n operator exec -it kafka-0 bash

Create a properties file with the following content. Use the username and password that you find under sasl.plain in the providers/aws.yaml file.

cat << EOF > kafka.properties
sasl.mechanism=PLAIN
security.protocol=SASL_PLAINTEXT
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username=test password=test123;
EOF

As a first check, query the cluster status.

kafka-broker-api-versions --command-config kafka.properties --bootstrap-server kafka:9071

Next we create a topic named ravi.

kafka-topics --create --zookeeper zookeeper:2181/kafka-operator --replication-factor 3 --partitions 1 --topic ravi

Let’s produce some events for the topic.

seq 10000 | kafka-console-producer --topic ravi --broker-list kafka:9071 --producer.config kafka.properties

And then consume them.

kafka-console-consumer --from-beginning --topic ravi --bootstrap-server kafka:9071 --consumer.config kafka.properties

External Access to Kafka

This step assumes that you enabled external load balancer access for kafka as described earlier.

Once the kafka cluster is up and running, you should see the following services with their external IPs. Note that the IPs will differ in your case.

kubectl get services -n operator

NAME                 TYPE           CLUSTER-IP     EXTERNAL-IP        PORT(S)          AGE
kafka-0-lb           LoadBalancer   <cluster-ip>   <external-ip-0>    9092:31232/TCP   21m
kafka-1-lb           LoadBalancer   <cluster-ip>   <external-ip-1>    9092:31076/TCP   21m
kafka-2-lb           LoadBalancer   <cluster-ip>   <external-ip-2>    9092:32187/TCP   21m
kafka-bootstrap-lb   LoadBalancer   <cluster-ip>   <external-ip-b>    9092:31129/TCP   21m

Use the respective external IPs to create CNAME DNS records with your DNS provider (for example, AWS Route 53): one record per broker load balancer and one for the bootstrap load balancer, all under the domain you configured earlier.
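The record names depend on the domain you configured. With the operator's default naming, each broker is addressed as b<n>.<domain> and the bootstrap endpoint as <kafka name>.<domain>; verify these prefixes against your deployment's advertised listeners. This sketch prints the mapping (example.com is a placeholder for your domain):

```shell
# Print the CNAME records to create, one per load balancer Service.
# example.com is a placeholder; replace it with the domain configured
# under kafka.loadBalancer.domain in providers/aws.yaml.
DOMAIN=example.com
for i in 0 1 2; do
  echo "b${i}.${DOMAIN} -> external IP of kafka-${i}-lb"
done
echo "kafka.${DOMAIN} -> external IP of kafka-bootstrap-lb"
```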

Next use control center to create a new topic named ravi.

In the following, use kafkacat to produce to and consume from the new topic. On macOS, kafkacat can be installed with brew. Run the two commands in sequence from one terminal: first use kafkacat -P ... to produce a few messages, and then use kafkacat -C ... to consume them.

kafkacat -P -t ravi -b <bootstrap endpoint> -X security.protocol=SASL_PLAINTEXT -X sasl.mechanisms=PLAIN -X sasl.username=test -X sasl.password=test123
kafkacat -C -t ravi -b <bootstrap endpoint> -X security.protocol=SASL_PLAINTEXT -X sasl.mechanisms=PLAIN -X sasl.username=test -X sasl.password=test123

Delete The Platform

helm delete --purge ksql
helm delete --purge replicator
helm delete --purge connect
helm delete --purge schemaregistry
helm delete --purge controlcenter
helm delete --purge kafka
helm delete --purge zookeeper
helm delete --purge operator


