StreamNative Kafka Service runs on the Ursa Engine, a lakehouse-native stream storage engine that exposes the native Kafka API on top of lakehouse storage. You can use standard Kafka clients, tools, and ecosystems to produce and consume data without modifying application code. This guide walks you through choosing a cluster profile, selecting a deployment option, creating a Kafka cluster, and configuring it for production workloads.
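Because the cluster speaks the native Kafka protocol, the stock Kafka console tools work unchanged. The sketch below assumes a hypothetical bootstrap address and a `client.properties` file holding your authentication settings; substitute the values shown for your cluster in the StreamNative Console.

```shell
# Hypothetical endpoint -- replace with your cluster's bootstrap address.
BOOTSTRAP="your-cluster.streamnative.cloud:9093"

# Produce a message with the standard Kafka console producer.
echo "hello" | kafka-console-producer.sh \
  --bootstrap-server "$BOOTSTRAP" \
  --producer.config client.properties \
  --topic demo-topic

# Consume it back with the standard console consumer.
kafka-console-consumer.sh \
  --bootstrap-server "$BOOTSTRAP" \
  --consumer.config client.properties \
  --topic demo-topic --from-beginning
```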

Cluster profiles

StreamNative provides two cluster profiles for Kafka clusters. Choose a profile based on your workload’s latency requirements and cost sensitivity.
The Cost-Optimized profile uses the Ursa Engine with object storage (Amazon S3, Google Cloud Storage, or Azure Blob Storage) as the primary data persistence layer. This profile is ideal for workloads where throughput and cost efficiency matter more than ultra-low latency.
Best for:
  • Event streaming and data pipelines
  • Log aggregation and analytics
  • Change data capture (CDC)
  • Long-term data retention
Performance characteristics:
  • Sub-second end-to-end latency (typically above 200 ms)
  • Up to 95% lower storage cost compared to disk-based clusters
  • Unlimited, elastic storage capacity
The Cost-Optimized profile uses Oxia for metadata management and leverages cloud-native object storage, making it well-suited for workloads with large data volumes and longer retention periods. The Latency-Optimized profile, by contrast, keeps data on local disks, trading higher storage cost for lower end-to-end latency.
For a detailed comparison of profile features by deployment type, see Cluster Profiles.

Deployment options

StreamNative offers Kafka clusters in Dedicated and BYOC deployment options.

Dedicated

Fully managed clusters on StreamNative infrastructure with dedicated resources. Supports multi-AZ high availability.

BYOC

Deploy clusters in your own cloud account (AWS, GCP, or Azure) while StreamNative manages operations. Provides private networking and data sovereignty.
Kafka Clusters are not yet available in the Serverless deployment option; support is coming soon.
For a full feature comparison across deployment options, see Cluster Types and Regions.

Create a Kafka cluster

Follow these steps to create a Kafka cluster using the StreamNative Console.

Prerequisites

  • A StreamNative Cloud account. If you do not have one, sign up.
  • An organization in StreamNative Cloud. For details, see Organizations.

Steps

  1. Log in and create an organization (if you have not already). Log in to the StreamNative Console and create or select your organization.
  2. Create an instance. Navigate to Instances and click New. Select your deployment type (Dedicated or BYOC). Enter a name for your instance, select your preferred cloud provider and region, and then proceed.
  3. Choose a resource type. On the Resource Type page, select Kafka Cluster. The page displays a comparison between Pulsar Cluster and Kafka Cluster with their supported features.
  4. Configure the cluster. Enter a cluster name, select your cloud environment, and choose a cluster profile (Latency Optimized or Cost Optimized). Select your preferred availability zone configuration (Multi AZ is recommended for production workloads).
  5. Configure lakehouse table (optional). On the Lakehouse Table page, optionally enable lakehouse table support for your cluster.
  6. Set the cluster size. Configure the cluster size using Throughput Units. Each Throughput Unit provides a defined capacity for ingress (data in), egress (data out), and data entries per second. Adjust the slider to match your expected workload.
  7. Finish. Review and confirm your configuration to create the cluster.
Wait for the cluster to finish provisioning. The cluster is ready when all components show a healthy status.
Each StreamNative instance can support multiple clusters. However, Pulsar Clusters and Kafka Clusters cannot currently co-exist in the same instance.
For step-by-step instructions for each deployment type, see the Dedicated and BYOC cluster creation guides.

Topic management

You can create and manage Kafka topics through the StreamNative Console, the Kafka CLI, or any Kafka AdminClient-compatible tool. You can use standard Kafka APIs to configure topics, partitions, and retention policies. When configuring topics, consider the following settings:
  • Partitions: Set the number of partitions based on your target parallelism and throughput. You can increase partitions after creation, but you cannot decrease them.
  • Retention: Configure time-based or size-based retention policies to control how long messages are stored. On the Cost-Optimized profile, object storage provides cost-efficient long-term retention.
  • Replication: StreamNative manages replication based on your cluster profile and availability zone configuration.
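The partition and retention settings above can be applied with the standard `kafka-topics.sh` tool. The bootstrap address, topic name, and `client.properties` file below are placeholders for illustration.

```shell
# Hypothetical endpoint -- replace with your cluster's bootstrap address
# and a client.properties file containing your authentication settings.
BOOTSTRAP="your-cluster.streamnative.cloud:9093"

# Create a topic with 6 partitions and 7-day time-based retention.
kafka-topics.sh --bootstrap-server "$BOOTSTRAP" \
  --command-config client.properties \
  --create --topic events --partitions 6 \
  --config retention.ms=604800000

# Increase the partition count later if needed
# (partitions can be increased, never decreased).
kafka-topics.sh --bootstrap-server "$BOOTSTRAP" \
  --command-config client.properties \
  --alter --topic events --partitions 12
```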

Consumer group management

StreamNative supports standard Kafka consumer groups. You can monitor and manage consumer groups through the StreamNative Console or Kafka CLI tools. Key operations include:
  • Viewing active consumer groups and their members
  • Monitoring consumer lag per partition
  • Resetting consumer group offsets
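Each of these operations maps to the standard `kafka-consumer-groups.sh` tool. The endpoint, group, and topic names below are hypothetical placeholders.

```shell
# Hypothetical endpoint -- replace with your cluster's bootstrap address.
BOOTSTRAP="your-cluster.streamnative.cloud:9093"

# List active consumer groups.
kafka-consumer-groups.sh --bootstrap-server "$BOOTSTRAP" \
  --command-config client.properties --list

# Show members and per-partition lag for one group.
kafka-consumer-groups.sh --bootstrap-server "$BOOTSTRAP" \
  --command-config client.properties \
  --describe --group my-group

# Reset the group's offsets to the earliest available position
# (the group must have no active members).
kafka-consumer-groups.sh --bootstrap-server "$BOOTSTRAP" \
  --command-config client.properties \
  --group my-group --topic events \
  --reset-offsets --to-earliest --execute
```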
For details on connecting Kafka consumers, see Build Kafka Client Applications.

Scaling

StreamNative Kafka clusters use Throughput Units for scaling. Each Throughput Unit provides a defined amount of ingress, egress, and data entry throughput. Adjust the number of Throughput Units to match your workload requirements.
For Cost-Optimized clusters, storage scales automatically with object storage. For Latency-Optimized clusters, disk capacity scales with Throughput Units.