The Ursa Engine is the cloud-native stream storage engine at the heart of the Lakestream architecture. It powers both StreamNative Kafka Service and Pulsar Service, delivering 100% Kafka API compatibility with lakehouse-native storage. StreamNative is standardizing all new instances and clusters on the Ursa Engine, which is designed for long-term scalability and cost efficiency. Users can choose between two cluster profiles: the Latency Optimized profile for sub-10 ms performance, and the Cost Optimized profile for workloads that prioritize cost efficiency over low-latency guarantees. While the Classic Engine represents StreamNative's first-generation Pulsar-based runtime, Ursa is the strategic engine moving forward.
All new clusters are powered by the Ursa Engine. StreamNative is converging on a single platform with two cluster profiles — Latency Optimized and Cost Optimized — to serve all workloads. To learn more about the two-profile model, see the blog post: One Platform, Two Profiles: Streaming for Latency or Cost.
StreamNative Cloud offers two data streaming engines to run your StreamNative clusters: Classic Engine and Ursa Engine.

Classic Engine

Classic Engine refers to the original ZooKeeper- and BookKeeper-based Apache Pulsar engine that StreamNative Cloud uses. It uses ZooKeeper for metadata storage and cluster membership, and BookKeeper for low-latency data persistence. Besides the Pulsar protocol, the Classic Engine also supports the Kafka protocol via KSN. The Classic Engine supports only low-latency BookKeeper-based storage, which suits latency-sensitive workloads. Lakehouse storage can be enabled for your Classic Engine clusters to support advanced data processing and analytics workloads.

Ursa Engine

Ursa Engine is the next-generation cloud-native stream storage engine that StreamNative Cloud offers. It is the storage engine powering the Lakestream architecture, recognized with the VLDB 2025 Best Industry Paper award. The Ursa Engine provides the storage layer for both Kafka Clusters and Pulsar Clusters on StreamNative Cloud. It uses Oxia for metadata storage and supports multiple storage backends — local disks for low-latency workloads and object storage (S3, GCS, Azure Blob Storage) for cost-optimized workloads. Key pillars of the Ursa Engine include:
  • Native Kafka support: Kafka Clusters run native Apache Kafka on the Ursa Engine.
  • Lakehouse storage: long-term durability on open, standards-based table formats.
  • Oxia: a scalable and durable metadata store.
  • Support for both Latency-Optimized and Cost-Optimized cluster profiles.
The Ursa Engine supports two cluster profiles:
  • Latency-Optimized profile: Uses disk-based storage for sub-10 ms end-to-end latency. For Kafka Clusters, this profile uses KRaft for controller management and ISR for data replication. For Pulsar Clusters, this profile uses Apache BookKeeper for low-latency data persistence and Oxia for metadata storage.
  • Cost-Optimized profile: Uses object storage (S3, GCS, Azure Blob Storage) with leaderless, diskless brokers that write directly to object storage. This profile delivers sub-second latency with up to 95% lower infrastructure costs.

What is included in the Ursa Engine?

The Ursa Engine is the storage engine powering both Kafka Service and Pulsar Service on StreamNative Cloud. Key capabilities include:
  • Kafka protocol: Full native Kafka API support for producing, consuming, and managing topics.
  • Pulsar protocol: Native Pulsar API support (available on Pulsar Clusters).
  • Lakehouse storage: Data written directly to open table formats (Iceberg, Delta Lake) on object storage.
  • Cluster profiles: Choose between Latency-Optimized (disk-based) and Cost-Optimized (object storage) based on your workload requirements.
Kafka Clusters support the Kafka protocol natively. Pulsar protocol support is available on Pulsar Clusters. For Kafka workloads, create a Kafka Cluster. For Pulsar workloads, create a Pulsar Cluster.
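Because the Kafka protocol is supported natively, any standard Kafka client can connect without code changes. A minimal `client.properties` sketch follows; the endpoint, port, SASL mechanism, and credentials are placeholders for illustration only, so substitute the connection details shown for your actual cluster:

```properties
# Hypothetical connection settings for a standard Kafka client.
# Endpoint and credentials below are placeholders, not real values.
bootstrap.servers=<your-cluster-endpoint>:9093
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="<api-key>" \
  password="<api-secret>";
```

With this file in place, stock Kafka tooling (e.g. `kafka-console-producer.sh --producer.config client.properties`) works unmodified, which is the practical meaning of 100% Kafka API compatibility.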

Ursa Stream Storage

At the heart of the Ursa Engine is the concept of Ursa Stream Storage: a headless, multi-modal data storage layer built on lakehouse formats. At its core is a WAL (Write-Ahead Log) implementation based on S3. This design writes records directly to object storage services like S3, bypassing BookKeeper and eliminating the need for replication between brokers. As a result, Ursa Engine-powered clusters replace expensive inter-AZ replication with cost-efficient, direct-to-object-storage writes. This trade-off introduces a slight increase in latency (typically 200 ms to 500 ms) but results in significantly lower network costs, on average 10x cheaper.

(Figure: Ursa Cost-Optimized Storage)

In the S3-based WAL implementation, brokers create batches of produce requests and write them directly to object storage before acknowledging the client. These brokers are stateless and leaderless, meaning any broker can handle produce or fetch requests for any partition. For improved batch and fetch performance, however, specific partitions may still be routed to designated brokers. This architecture eliminates inter-AZ replication traffic between brokers while maintaining, and even improving, the durability and availability that customers expect from StreamNative. As with any engineering trade-off, these savings come at a cost: produce requests must now wait for acknowledgments from object storage, introducing some additional latency. However, this trade-off can result in up to 95% cost savings, making it a compelling choice for cost-sensitive workloads.
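The batching-and-acknowledge flow described above can be sketched in a few lines. This is an illustrative model only, not Ursa's actual implementation: the object store is simulated with an in-memory dict, and the class names and flush threshold are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class ObjectStore:
    """Stand-in for S3/GCS/Azure Blob: one PUT per WAL batch."""
    objects: dict = field(default_factory=dict)

    def put(self, key: str, data: bytes) -> None:
        self.objects[key] = data  # treated as durable once this returns

@dataclass
class WalBatcher:
    """Sketch of a leaderless broker's write path: buffer produce
    requests, persist them as one object, then acknowledge."""
    store: ObjectStore
    max_batch_bytes: int = 4 * 1024 * 1024  # flush threshold (assumed)
    _buffer: list = field(default_factory=list)
    _pending_acks: list = field(default_factory=list)
    _size: int = 0
    _seq: int = 0

    def produce(self, record: bytes, ack) -> None:
        # Buffer the record; the ack callback fires only after the
        # batch containing it has been written to object storage.
        self._buffer.append(record)
        self._pending_acks.append(ack)
        self._size += len(record)
        if self._size >= self.max_batch_bytes:
            self.flush()

    def flush(self) -> None:
        if not self._buffer:
            return
        key = f"wal/batch-{self._seq:08d}"
        # One PUT replaces broker-to-broker replication entirely.
        self.store.put(key, b"".join(self._buffer))
        self._seq += 1
        for ack in self._pending_acks:
            ack()  # safe to acknowledge: data is in object storage
        self._buffer.clear()
        self._pending_acks.clear()
        self._size = 0

# Usage: acks are deferred until the batch is persisted.
store = ObjectStore()
acked = []
batcher = WalBatcher(store, max_batch_bytes=10)
batcher.produce(b"hello", lambda: acked.append("r1"))   # buffered, no ack yet
batcher.produce(b"world!", lambda: acked.append("r2"))  # threshold hit, flushed
```

The added latency in the real system comes from exactly this deferral: a produce request is not acknowledged until its batch's object-storage write completes.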

Compare cluster profiles

All clusters on StreamNative Cloud run on the Ursa Engine. When you create a cluster, choose the profile that matches your workload requirements:
| Feature | Latency Optimized | Cost Optimized |
| --- | --- | --- |
| Kafka Protocol | Yes | Yes |
| Pulsar Protocol | Yes (Pulsar Clusters) | Yes (Pulsar Clusters) |
| Storage backend | Local disks | Object storage (S3, GCS, Azure Blob) |
| End-to-end latency | Sub-10 ms | Sub-second |
| Inter-AZ replication | Required | Eliminated (writes go directly to object storage) |
| Lakehouse Storage | Add-on | Built-in |
| Best for | Real-time, interactive workloads | Event streaming, log aggregation, CDC |
For implementation details:
  • Latency Optimized — Kafka Clusters use KRaft for controller management and ISR for data replication.
  • Latency Optimized — Pulsar Clusters use Apache BookKeeper for low-latency persistence and Oxia for metadata storage.
  • Cost Optimized uses leaderless, diskless brokers that write directly to object storage, delivering up to 95% lower infrastructure costs.
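The shape of the cost trade-off can be seen with back-of-the-envelope arithmetic. All prices, the replication factor, and the batch size below are illustrative placeholders, not actual cloud or StreamNative pricing; the point is that replication cost scales with bytes crossing AZs, while direct-to-object-storage cost scales with the (much smaller) number of batched PUT requests.

```python
GB = 1024 ** 3
SECONDS_PER_MONTH = 86_400 * 30

def replication_cost(throughput_mb_s: float, replicas: int = 3,
                     inter_az_price_per_gb: float = 0.01) -> float:
    """Monthly inter-AZ transfer cost for leader-based replication.
    Assumes each byte crosses AZ boundaries (replicas - 1) times."""
    bytes_month = throughput_mb_s * 1024 ** 2 * SECONDS_PER_MONTH
    cross_az_gb = bytes_month * (replicas - 1) / GB
    return cross_az_gb * inter_az_price_per_gb

def object_storage_cost(throughput_mb_s: float, batch_mb: float = 4.0,
                        price_per_put: float = 0.000005) -> float:
    """Monthly PUT-request cost when brokers batch writes into
    batch_mb-sized objects; larger batches mean fewer PUTs."""
    puts_per_s = throughput_mb_s / batch_mb
    return puts_per_s * SECONDS_PER_MONTH * price_per_put

# A hypothetical 100 MB/s workload under these assumed prices.
latency_profile = replication_cost(100)
cost_profile = object_storage_cost(100)
savings = 1 - cost_profile / latency_profile
```

Under these assumptions the savings land in the ~90%+ range, consistent with the "up to 95%" figure above; the exact number depends entirely on throughput, batch size, and actual pricing.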

Choose the right profile for your workload

When you create a cluster, choose the cluster profile that matches your workload:
  • Latency-Optimized: For real-time, interactive workloads that require sub-10 ms latency.
  • Cost-Optimized: For event streaming, log aggregation, and CDC workloads that benefit from lower infrastructure costs.