op-kits

Operator Kits Helm Chart

Version: 0.2.2 Type: application

Prerequisites

This chart deploys PostgreSQL, Kafka, and Valkey clusters through their Kubernetes operators, so the CloudNativePG, Strimzi, and Valkey operators must be installed in the cluster before the chart itself (see Operator Installation below).

Installing the Chart

You can install the chart from either the OCI registry or the GitHub Helm repository.

Option 1: Install from OCI Registry

To install the chart with the release name my-release:

$ helm install my-release oci://europe-west3-docker.pkg.dev/rasa-releases/helm-charts/op-kits --version 0.2.2

Option 2: Install from GitHub Helm Repository

First, add the Rasa Helm repository:

$ helm repo add rasa https://helm.rasa.com/charts
$ helm repo update

Then install the chart:

$ helm install my-release rasa/op-kits --version 0.2.2

Uninstalling the Chart

To uninstall/delete the my-release deployment:

$ helm delete my-release

The command removes all the Kubernetes components associated with the chart and deletes the release.

Note: This only removes the application resources (the PostgreSQL, Kafka, and Valkey clusters). The operators remain installed and can be reused for other deployments.
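
To confirm that only the application resources were removed, you can check the custom resources and the operator pods. The fully qualified CRD names below are assumptions based on the upstream operators; adjust them if your versions differ.

# Application custom resources should be gone from the release namespace
$ kubectl get clusters.postgresql.cnpg.io,kafkas.kafka.strimzi.io,valkeys.hyperspike.io -n <namespace>

# Operator pods should still be running
$ kubectl get pods -n cnpg-system
$ kubectl get pods -n strimzi-system
$ kubectl get pods -n valkey-system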

Pull the Chart

You can pull the chart from either source:

From OCI Registry:

$ helm pull oci://europe-west3-docker.pkg.dev/rasa-releases/helm-charts/op-kits --version 0.2.2

From GitHub Helm Repository:

$ helm pull rasa/op-kits --version 0.2.2

Operator Installation

Before installing this chart, you must install the required operators in your cluster. We recommend installing operators in their own dedicated namespaces for better separation and management.

1. CloudNativePG Operator

Install the CloudNativePG operator in its dedicated namespace:

# Add the CloudNativePG Helm repository
$ helm repo add cnpg https://cloudnative-pg.github.io/charts
$ helm repo update

# Install CloudNativePG operator in cnpg-system namespace
$ helm install cnpg-operator cnpg/cloudnative-pg \
    --namespace cnpg-system \
    --create-namespace

2. Strimzi Kafka Operator

Install the Strimzi Kafka operator in its dedicated namespace with cluster-wide permissions:

# Add the Strimzi Helm repository
$ helm repo add strimzi https://strimzi.io/charts/
$ helm repo update

# Install Strimzi operator in strimzi-system namespace
$ helm install strimzi-operator oci://quay.io/strimzi-helm/strimzi-kafka-operator \
    --namespace strimzi-system \
    --create-namespace \
    --set watchAnyNamespace=true

Important: The watchAnyNamespace=true setting allows the Strimzi operator to manage Kafka resources across all namespaces in the cluster. This is required for the operator to manage resources created by this chart in different namespaces.
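
If you prefer not to let the operator watch every namespace, the upstream Strimzi chart also exposes a watchNamespaces value that limits it to an explicit list. This is a sketch of that alternative (verify the value name against the Strimzi chart version you install, and make sure the list includes every namespace where you deploy this chart):

# Alternative: watch only specific namespaces instead of the whole cluster
$ helm install strimzi-operator oci://quay.io/strimzi-helm/strimzi-kafka-operator \
    --namespace strimzi-system \
    --create-namespace \
    --set watchNamespaces="{my-app-namespace,staging}"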

3. Valkey Operator

Install the Valkey operator using the official Helm chart:

# Add the Hyperspike Helm repository
$ helm repo add hyperspike https://charts.hyperspike.io
$ helm repo update

# Install Valkey operator in valkey-system namespace
$ helm install valkey-operator hyperspike/valkey-operator \
    --namespace valkey-system \
    --create-namespace

4. Verify Operator Installation

After installing the operators, verify they are running:

# Check CloudNativePG operator
$ kubectl get pods -n cnpg-system

# Check Strimzi operator
$ kubectl get pods -n strimzi-system

# Check Valkey operator
$ kubectl get pods -n valkey-system
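
As an additional sanity check, you can confirm that the operators registered their CRDs (the API group names are assumptions based on the upstream projects):

# List the CRDs installed by the three operators
$ kubectl get crd | grep -E 'cnpg.io|strimzi.io|hyperspike.io'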

5. Install Application Resources

Once operators are installed and running, you can deploy your application resources using either installation method:

# Option 1: Install from OCI Registry
$ helm install my-release oci://europe-west3-docker.pkg.dev/rasa-releases/helm-charts/op-kits \
    --version 0.2.2 \
    --namespace my-app-namespace \
    --create-namespace

# Option 2: Install from GitHub Helm Repository (after adding the repo)
$ helm install my-release rasa/op-kits \
    --version 0.2.2 \
    --namespace my-app-namespace \
    --create-namespace
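
Once installed, the operators pick up the custom resources created by the release. A hedged way to check them, using the fully qualified CRD names from the upstream operators:

# Custom resources created by the release
$ kubectl get clusters.postgresql.cnpg.io,kafkas.kafka.strimzi.io,kafkanodepools.kafka.strimzi.io,kafkausers.kafka.strimzi.io,valkeys.hyperspike.io \
    -n my-app-namespace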

Uninstalling Operators

If you need to completely remove the operators from your cluster:

# Remove CloudNativePG operator
$ helm uninstall cnpg-operator -n cnpg-system

# Remove Strimzi operator 
$ helm uninstall strimzi-operator -n strimzi-system

# Remove Valkey operator
$ helm uninstall valkey-operator -n valkey-system

# Optionally remove the namespaces (only if empty)
$ kubectl delete namespace cnpg-system strimzi-system valkey-system

Warning: Removing operators will affect all PostgreSQL, Kafka, and Valkey clusters managed by them across the entire cluster.

General Configuration

Resource Naming

All resource names are generated based on the Helm release name. The chart uses a consistent naming pattern to ensure uniqueness and clarity.

Base Naming Logic

The chart uses a “fullname” template that combines the release name with the chart name:
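
The helper itself is not reproduced in this README; the sketch below assumes the conventional Helm fullname logic and matches the examples further down (the chart's actual template may differ in detail):

{{/* Sketch only: assumed conventional fullname helper */}}
{{- define "op-kits.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}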

All names are truncated to 63 characters to comply with Kubernetes DNS naming requirements.

Resource-Specific Names

Based on the fullname, resources are named as follows:

PostgreSQL (CloudNativePG)

Cluster: <fullname>-pg

Kafka (Strimzi)

Cluster: <fullname>-kafka
Controller node pool: <fullname>-kafka-controllers
Broker node pool: <fullname>-kafka-brokers
User: <release-name>-user

Valkey

Cluster: <fullname>-valkey

Examples

For a release named my-app:

# Default names (assuming chart name is "op-kits")
PostgreSQL Cluster: my-app-op-kits-pg
Kafka Cluster: my-app-op-kits-kafka
Kafka Controller Node Pool: my-app-op-kits-kafka-controllers
Kafka Broker Node Pool: my-app-op-kits-kafka-brokers
Kafka User: my-app-user
Valkey Cluster: my-app-op-kits-valkey

For a release named my-op-kits (contains chart name):

# Names (release name contains chart name)
PostgreSQL Cluster: my-op-kits-pg
Kafka Cluster: my-op-kits-kafka
Kafka Controller Node Pool: my-op-kits-kafka-controllers
Kafka Broker Node Pool: my-op-kits-kafka-brokers
Kafka User: my-op-kits-user
Valkey Cluster: my-op-kits-valkey

Overriding Names

You can override resource names using the following values:

# Override all resource names
fullnameOverride: "custom-name"

# Override individual resource names
cloudnativepg:
  cluster:
    nameOverride: "custom-pg-cluster"

strimzi:
  kafka:
    nameOverride: "custom-kafka-cluster"
  users:
    app:
      name: "custom-kafka-user"

valkey:
  cluster:
    nameOverride: "custom-valkey-cluster"

Important Configuration Notes

Storage Configuration

PostgreSQL, Kafka, and Valkey require persistent storage. Update the storage class names in your values:

cloudnativepg:
  cluster:
    storage:
      storageClass: "your-storage-class"  # Update this

strimzi:
  nodePools:
    controllers:
      storage:
        class: "your-storage-class"  # Update this
    brokers:
      storage:
        volumes:
          - class: "your-storage-class"  # Update this

valkey:
  cluster:
    storage:
      spec:
        storageClassName: "your-storage-class"  # Update this
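
To see which storage classes exist in your cluster before setting these values:

# The default class is marked with "(default)"
$ kubectl get storageclass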

Resource Configuration

Kafka Resources

CPU and memory resources for Kafka components are configured per node pool (controllers and brokers) and per Entity Operator component (Topic Operator and User Operator).

Example:

strimzi:
  nodePools:
    controllers:
      resources:
        limits:
          cpu: "500m"
          memory: "1Gi"
        requests:
          cpu: "100m"
          memory: "256Mi"
    brokers:
      resources:
        limits:
          cpu: "1"
          memory: "2Gi"
        requests:
          cpu: "200m"
          memory: "512Mi"
  kafka:
    entityOperator:
      topicOperator:
        resources:
          limits:
            cpu: "500m"
            memory: "512Mi"
          requests:
            cpu: "100m"
            memory: "128Mi"
      userOperator:
        resources:
          limits:
            cpu: "500m"
            memory: "512Mi"
          requests:
            cpu: "100m"
            memory: "128Mi"

Note: The .spec.kafka.resources field is deprecated in Strimzi v1 API. Always use KafkaNodePool resources for broker and controller resource configuration.

Scaling Considerations

When scaling Kafka brokers, remember to adjust replication factors:

strimzi:
  nodePools:
    brokers:
      replicas: 3  # Scale brokers
  kafka:
    config:
      default.replication.factor: 3  # Match your broker count
      min.insync.replicas: 2
  topics:
    events:
      replicas: 3  # Match your broker count
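
Scaling is applied through a normal Helm upgrade; a sketch, assuming the snippet above is saved as scaling-values.yaml and the release name and namespace match your setup:

# Apply the new replica counts and replication factors
$ helm upgrade my-release rasa/op-kits --version 0.2.2 -f scaling-values.yaml

# Watch the broker pods roll out (Strimzi labels pods with strimzi.io/cluster)
$ kubectl get pods -l strimzi.io/cluster=my-release-op-kits-kafka -w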

Namespace Considerations

Because the operators run in their own dedicated namespaces but watch the whole cluster, you can deploy multiple instances of this chart into different namespaces:

# Deploy for development
$ helm install dev-app . --namespace development

# Deploy for staging 
$ helm install staging-app . --namespace staging

# Deploy for production
$ helm install prod-app . --namespace production

Each deployment will create its own PostgreSQL, Kafka, and Valkey clusters, all managed by the same operators.
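
To see which clusters each release has created, the custom resources can be listed across all namespaces (the CRD names are assumptions based on the upstream operators):

$ kubectl get kafkas.kafka.strimzi.io --all-namespaces
$ kubectl get clusters.postgresql.cnpg.io --all-namespaces
$ kubectl get valkeys.hyperspike.io --all-namespaces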

Kafka Users and Passwords

strimzi:
  users:
    app:
      enabled: true
      authentication:
        type: scram-sha-512
        password:
          secretName: app-secrets
          secretKey: KAFKA_SASL_PASSWORD

To read the password:

# Operator-generated password (Secret name equals KafkaUser name)
$ kubectl get secret -n <namespace> <kafka-user-name> -o jsonpath='{.data.password}' | base64 -d

# User-provided secret
$ kubectl get secret -n <namespace> <secretName> -o jsonpath='{.data.<secretKey>}' | base64 -d
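
With the password in hand, in-cluster clients can connect through the bootstrap service that Strimzi creates for the Kafka cluster (named <kafka-cluster-name>-kafka-bootstrap). Below is a sketch of a client.properties file for the plain SCRAM listener on port 9092; the user, password, and service names are placeholders to adjust:

# client.properties (sketch; adjust user, password, cluster, and namespace)
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-512
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
  username="<kafka-user-name>" password="<password>";
bootstrap.servers=<kafka-cluster-name>-kafka-bootstrap.<namespace>.svc:9092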

Values

Key Type Description Default
affinity object Affinity rules for all pods Example: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/os operator: In values: - linux - key: node-role.kubernetes.io/worker operator: In values: - “true” podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - weight: 100 podAffinityTerm: labelSelector: matchExpressions: - key: app.kubernetes.io/name operator: In values: - op-kits topologyKey: kubernetes.io/hostname {}
cloudnativepg.cluster.annotations object Additional annotations to apply to the PostgreSQL Cluster resource Example: annotations: backup.postgresql.cnpg.io/enabled: “true” monitoring.postgresql.cnpg.io/scrape: “true” {}
cloudnativepg.cluster.bootstrap.initdb.database string Database name to create during initialization "app"
cloudnativepg.cluster.bootstrap.initdb.owner string Database owner/user to create during initialization "appuser"
cloudnativepg.cluster.enableSuperuserAccess bool Enable superuser access (creates -superuser secret) true
cloudnativepg.cluster.image object PostgreSQL container image to use {"repository":"ghcr.io/cloudnative-pg/postgresql","tag":"16"}
cloudnativepg.cluster.instances int Number of PostgreSQL instances in the cluster 1
cloudnativepg.cluster.monitoring.enablePodMonitor bool Enable Prometheus PodMonitor for metrics collection false
cloudnativepg.cluster.nameOverride string Override cluster name. If empty, uses "<fullname>-pg" ""
cloudnativepg.cluster.postgresql object Additional PostgreSQL configuration parameters Example: postgresql: parameters: max_connections: “200” shared_buffers: “256MB” {"parameters":{}}
cloudnativepg.cluster.resources object Resource limits and requests for PostgreSQL containers Example: resources: limits: cpu: “1” memory: “1Gi” requests: cpu: “100m” memory: “256Mi” {}
cloudnativepg.cluster.storage.size string Storage size for PostgreSQL data "15Gi"
cloudnativepg.cluster.storage.storageClass string Storage class name. Change to your storage class "gp2"
cloudnativepg.enabled bool Enable CloudNativePG cluster deployment true
commonAnnotations object Additional annotations to apply to all resources Example: commonAnnotations: monitoring.coreos.com/scrape: “true” prometheus.io/port: “8080” {}
commonLabels object Additional labels to apply to all resources Example: commonLabels: environment: production team: platform {}
fullnameOverride string Override the full qualified app name ""
global.namespace string Global namespace override. If empty, uses release namespace ""
nameOverride string Override name of app ""
nodeSelector object Node selector for all pods Example: nodeSelector: kubernetes.io/os: linux node-role.kubernetes.io/worker: “true” {}
strimzi.enabled bool Enable Strimzi Kafka cluster deployment true
strimzi.kafka.annotations object Annotations to apply to the Kafka Cluster resource Includes both operator configuration and custom annotations Example: annotations: strimzi.io/kraft: “enabled” # Enable KRaft mode (no ZooKeeper) strimzi.io/node-pools: “enabled” # Use KafkaNodePool resources strimzi.io/restart: “true” # Custom restart annotation kafka.strimzi.io/logging: “debug” # Custom logging annotation {"strimzi.io/kraft":"enabled","strimzi.io/node-pools":"enabled"}
strimzi.kafka.authorization.type string Kafka authorization type "simple"
strimzi.kafka.config object Kafka configuration parameters for brokers With 1 broker, keep replication factors at 1 (increase when scaling out) {"auto.create.topics.enable":true,"default.replication.factor":1,"min.insync.replicas":1,"offsets.topic.replication.factor":1,"transaction.state.log.min.isr":1,"transaction.state.log.replication.factor":1}
strimzi.kafka.entityOperator.disableTopicFinalizer bool Disable the finalizer on KafkaTopic resources so topics can be deleted without waiting for the Topic Operator true
strimzi.kafka.entityOperator.topicOperator object Resource limits and requests for the Topic Operator Example: resources: limits: cpu: "500m" memory: "512Mi" requests: cpu: "100m" memory: "128Mi" {}
strimzi.kafka.entityOperator.userOperator object Resource limits and requests for the User Operator Example: resources: limits: cpu: "500m" memory: "512Mi" requests: cpu: "100m" memory: "128Mi" {}
strimzi.kafka.listeners list Kafka listeners define how clients connect to the cluster [{"authentication":{"type":"scram-sha-512"},"name":"plain","port":9092,"tls":false,"type":"internal"},{"authentication":{"type":"scram-sha-512"},"name":"tls","port":9093,"tls":true,"type":"internal"}]
strimzi.kafka.nameOverride string Override Kafka cluster name. If empty, uses "<fullname>-kafka" ""
strimzi.nodePools.brokers.enabled bool Enable broker node pool deployment true
strimzi.nodePools.brokers.replicas int Number of broker replicas 1
strimzi.nodePools.brokers.resources object Resource limits and requests for broker nodes Example: resources: limits: cpu: “1” memory: “2Gi” requests: cpu: “200m” memory: “512Mi” {}
strimzi.nodePools.brokers.roles list Node pool roles for brokers ["broker"]
strimzi.nodePools.brokers.storage.type string Storage type for broker nodes (JBOD allows multiple volumes) "jbod"
strimzi.nodePools.brokers.storage.volumes list Storage volumes configuration for brokers [{"class":"gp2","deleteClaim":true,"id":0,"size":"10Gi","type":"persistent-claim"}]
strimzi.nodePools.controllers.enabled bool Enable controller node pool deployment true
strimzi.nodePools.controllers.replicas int Number of controller replicas 1
strimzi.nodePools.controllers.resources object Resource limits and requests for controller nodes Example: resources: limits: cpu: “500m” memory: “1Gi” requests: cpu: “100m” memory: “256Mi” {}
strimzi.nodePools.controllers.roles list Node pool roles for controllers ["controller"]
strimzi.nodePools.controllers.storage.class string Storage class for controller nodes "gp2"
strimzi.nodePools.controllers.storage.deleteClaim bool Whether to delete PVC when node pool is deleted true
strimzi.nodePools.controllers.storage.size string Storage size for controller nodes "10Gi"
strimzi.nodePools.controllers.storage.type string Storage type for controller nodes "persistent-claim"
strimzi.nodePools.controllers.storage.volumeAttributesClass string Volume attributes class for the PVC (requires Kubernetes 1.34+) ""
strimzi.topics object Map of KafkaTopic resources to create {}
strimzi.users.root.authentication.type string Authentication type for Kafka user "scram-sha-512"
strimzi.users.root.enabled bool Enable main application user creation true
strimzi.users.root.name string Kafka user name. If empty, defaults to "<release-name>-user" ""
tolerations list Tolerations for all pods Example: tolerations: - key: “key1” operator: “Equal” value: “value1” effect: “NoSchedule” - key: “key2” operator: “Exists” effect: “NoExecute” []
valkey.cluster.annotations object Additional annotations to apply to the Valkey Cluster resource Example: annotations: valkey.hyperspike.io/monitoring: “enabled” valkey.hyperspike.io/backup: “enabled” {}
valkey.cluster.certIssuer string Certificate issuer name "selfsigned"
valkey.cluster.certIssuerType string Certificate issuer type "ClusterIssuer"
valkey.cluster.externalAccess.enabled bool Enable external access to Valkey cluster false
valkey.cluster.externalAccess.type string Type of external access (LoadBalancer, NodePort, etc.) "LoadBalancer"
valkey.cluster.image string Container image for Valkey Example: image: "ghcr.io/hyperspike/valkey:8.0.2" ""
valkey.cluster.nameOverride string Override cluster name. If empty, uses "<fullname>-valkey" ""
valkey.cluster.nodes int Number of primary nodes; set >1 only if you intend to run Valkey Cluster 1
valkey.cluster.replicas int Replicas per primary (0 = standalone, >0 = primary + replicas) 0
valkey.cluster.resources object Resource limits and requests for Valkey containers Example: resources: limits: cpu: “1” memory: “1Gi” requests: cpu: “100m” memory: “256Mi” {}
valkey.cluster.servicePassword.key string Secret key for the Valkey password "VALKEY_PASSWORD"
valkey.cluster.servicePassword.name string Secret name containing the Valkey password "app-secrets"
valkey.cluster.servicePassword.optional bool Whether the service password is optional false
valkey.cluster.storage.spec.accessModes list Access modes for Valkey storage ["ReadWriteOnce"]
valkey.cluster.storage.spec.resources object Storage size for Valkey data {"requests":{"storage":"10Gi"}}
valkey.cluster.storage.spec.storageClassName string Storage class name. Change to your storage class "gp2"
valkey.cluster.tls bool Enable TLS (requires cert-manager) false
valkey.cluster.volumePermissions bool Enable volume permissions initialization true
valkey.enabled bool Enable Valkey cluster deployment true