Cluster Types & Regions in StreamNative Cloud
StreamNative offers several cluster types in StreamNative Cloud. The type of cluster you choose affects its features, capabilities, and cost. Use this guide to identify the cluster type that best meets your requirements. Use Serverless clusters for experimentation and early development. For production, choose from Dedicated (formerly Hosted), BYOC, or BYOC Pro clusters.
The table below offers a high-level comparison of features across StreamNative Cloud cluster types.
| Category | Feature | Serverless | Dedicated | BYOC | BYOC Pro |
|---|---|---|---|---|---|
| Cluster Service | Single AZ Clusters | N/A | Yes | Yes | Yes |
| | Multi-AZ Clusters | Yes | Yes | Yes | Yes |
| | Uptime SLA | 99.95% | 99.95% Single AZ / 99.99% Multi AZ | 99.95% Single AZ / 99.99% Multi AZ | 99.95% Single AZ / 99.99% Multi AZ |
| | Unlimited Pulsar Clusters | Yes | Yes | Yes | Yes |
| | Autoscaling | On by default | Configurable | Configurable | Configurable |
| | Classic Engine | Yes | Yes | Yes | Yes |
| | Ursa Engine | Coming Soon | Coming Soon | Yes (Public Preview) | Yes (Public Preview) |
| | Geo-Replication | Yes | Yes | Yes | Yes |
| Infrastructure, Provisioning & Network Connectivity | Supported Cloud Providers | AWS, GCP, Azure | AWS, GCP, Azure | AWS, GCP, Azure | AWS, GCP, Azure |
| | Choose any cloud region | No | No | Yes | Yes |
| | Dedicated VPC | No | Yes | Yes | Yes |
| | Private Link | No | No | Yes | Yes |
| | VPC/VNet Peering | No | No | No | Yes |
| | Transit Gateway | No | No | No | Yes |
| Observability | Metrics API | Yes | Yes | Yes | Yes |
| | Remote writes to external observability systems | No | No | No | Yes |
| Maintenance & Operations | Custom Maintenance Window | Production or Enterprise Support Plan tier only | Production or Enterprise Support Plan tier only | Production or Enterprise Support Plan tier only | Production or Enterprise Support Plan tier only |
| Multi-Protocol Support | Pulsar | Yes | Yes | Yes | Yes |
| | Kafka | Yes | Yes | Yes | Yes |
| | MQTT | Yes | Yes | Yes | Yes |
| | WebSocket | Yes | Yes | Yes | Yes |
| | REST | Yes | Yes | Yes | Yes |
| Connectivity and Processing | Pulsar IO (Built-in & Custom) | Yes | Yes | Yes | Yes |
| | Kafka Connect (Built-in & Custom) | Yes | Yes | Yes | Yes |
| | Pulsar Functions | Yes | Yes | Yes | Yes |
| | Managed Flink | No | No | Yes (Private Preview) | Yes (Private Preview) |
| Data Storage | Tiered Storage | Transparent | Transparent | Your Bucket | Your Bucket |
| Security | Multi-tenancy | Yes | Yes | Yes | Yes |
| | Authentication | Yes | Yes | Yes | Yes |
| | Authorization | Yes | Yes | Yes | Yes |
| | Audit Logs | Yes | Yes | Yes | Yes |
| | Data-at-rest Encryption | Yes | Yes | Yes | Yes |
| | TLS Encryption | Yes | Yes | Yes | Yes |
| | End-to-end Encryption | Yes | Yes | Yes | Yes |
| | Bring Your Own Key | No | No | No | Yes |
Important
The capabilities provided in this topic are for planning purposes, and are not a guarantee of performance, which varies depending on each unique configuration.
Serverless Clusters
Public Preview
Serverless Clusters are currently in Public Preview. During this phase, we're actively gathering feedback and refining the service. Features and availability may be subject to change. We encourage you to try it out and share your experiences with us.
Serverless clusters are the newest addition to StreamNative Cloud, offering a fully managed, auto-scaling solution with minimal operational overhead. Key features include:
- Instant provisioning with zero base cost.
- Automatic scaling based on your workload, with billing only for resources used.
- Simplified management with StreamNative handling all infrastructure concerns.
- Ideal for development, testing, and production workloads with variable traffic patterns.
StreamNative uses Elastic Throughput Units (ETUs) to provision and bill for Serverless clusters.
ETU limits per Serverless cluster
Serverless clusters are elastic, shrinking and expanding automatically based on load. You don't need to size your cluster. When you need more capacity, your Serverless cluster expands up to the fixed maximum. If you're not using any capacity, you're not paying for it.
Serverless cluster capacity
Note
During the Public Preview period, the Serverless cluster capacity limit is not enforced, which allows for greater flexibility in testing and usage. Be aware that this limit may be enforced in the future as the service moves toward general availability.
Dimension | Minimum | Maximum |
---|---|---|
ETUs | 0 | 20 |
If consumption in a given hour is zero across all billable dimensions, you pay nothing. For more information, see Elastic Throughput Unit (ETU).
ETU capacity guidance
The dimensions in the following table describe the capacity of a single ETU. For more information about ETU, see Elastic Throughput Unit (ETU) and ETU vs CU/SU.
Dimension | ETU Capacity |
---|---|
Ingress (Data In) | 5 megabytes per second (MBps) |
Egress (Data Out) | 15 megabytes per second (MBps) |
Data Entries | 500 entries per second |
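As a rough illustration of how to read this table (the exact scaling and billing behavior is defined in the linked ETU documentation): a workload that produces 12 MBps, consumes 30 MBps, and writes 1,200 entries per second needs ceil(12 / 5) = 3 ETUs of ingress capacity, ceil(30 / 15) = 2 ETUs of egress capacity, and ceil(1,200 / 500) = 3 ETUs of entry capacity, so the most demanding dimension implies roughly 3 ETUs.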
Serverless limits per cluster
Dimension | Capability | Additional details |
---|---|---|
Ingress (Data In) | Max 100 MBps | Number of bytes that can be produced to the cluster in one second. To reduce usage on this dimension, you can compress your messages. lz4 is recommended for compression. |
Egress (Data Out) | Max 300 MBps | Number of bytes that can be consumed from the cluster in one second. To reduce usage on this dimension, you can compress your messages and ensure each consumer is only consuming from the topics it requires. lz4 is recommended for compression. |
Data Entries | Max 10,000 per second | Number of data entries produced to and consumed from the cluster in one second. Each data entry represents a batch of messages. Both Pulsar and Kafka clients do batching at the client side. To reduce usage on this dimension, you can adjust producer batching configurations and shut down otherwise inactive clients. |
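To reduce usage on the ingress and data-entry dimensions above, enable compression and batching on your producers. The following is a minimal sketch using the Apache Pulsar Java client; the service URL, token, and topic name are placeholders rather than values from this document.

```java
import java.util.concurrent.TimeUnit;

import org.apache.pulsar.client.api.AuthenticationFactory;
import org.apache.pulsar.client.api.CompressionType;
import org.apache.pulsar.client.api.Producer;
import org.apache.pulsar.client.api.PulsarClient;

public class CompressedBatchingProducer {
    public static void main(String[] args) throws Exception {
        // Placeholder endpoint and token; copy the real values from your cluster's
        // details page in the StreamNative Cloud console.
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar+ssl://<cluster-host>:6651")
                .authentication(AuthenticationFactory.token("<your-token>"))
                .build();

        Producer<byte[]> producer = client.newProducer()
                .topic("persistent://public/default/<your-topic>")
                .compressionType(CompressionType.LZ4)            // fewer bytes counted as ingress/egress
                .enableBatching(true)                            // multiple messages per data entry
                .batchingMaxPublishDelay(10, TimeUnit.MILLISECONDS)
                .batchingMaxMessages(1000)
                .create();

        for (int i = 0; i < 100; i++) {
            producer.sendAsync(("message-" + i).getBytes());
        }
        producer.flush();   // push any partially filled batch

        producer.close();
        client.close();
    }
}
```

Kafka clients expose equivalent controls (for example, the `compression.type`, `linger.ms`, and `batch.size` producer settings), so the same approach applies when you connect over the Kafka protocol.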
Serverless limits per partition
The partition capabilities that follow are based on benchmarking and intended as practical guidelines for planning purposes. Performance per partition will vary based on your specific configuration, and these benchmarks do not guarantee performance.
Dimension | Capability |
---|---|
Ingress per partition | 5 MBps |
Egress per partition | 15 MBps |
Storage per partition | Unlimited |
Serverless Cloud Providers & Regions
Serverless clusters are currently available in limited regions. Please check the StreamNative Cloud console for the most up-to-date information on available regions.
Serverless Features and Usage Limits
While Serverless clusters are in Public Preview, usage limits are not strictly enforced, but you should still plan around them. The following guidelines describe the cluster limits:
Dimension | Capability | Additional details |
---|---|---|
Ingress | Max 100 MBps | Number of bytes that can be produced to the cluster in one second. To reduce usage on this dimension, you can compress your messages. lz4 is recommended for compression. |
Egress | Max 300 MBps | Number of bytes that can be consumed from the cluster in one second. To reduce usage on this dimension, you can compress your messages and ensure each consumer is only consuming from the topics it requires. lz4 is recommended for compression. |
Storage | Unlimited | Number of bytes retained on the cluster, pre-replication. You can configure retention policy settings at namespace or topic level so you can control exactly how much and how long to retain data in a way that makes sense for your applications and helps control your costs. To reduce usage on this dimension, you can compress your messages and reduce your retention settings. lz4 is recommended for compression. |
Data Entries | Max 10,000 per second | Number of data entries produced to and consumed from the cluster in one second. Each data entry represents a batch of messages. To reduce usage on this dimension, you can adjust producer batching configurations and shut down otherwise inactive clients. |
Message size | Max 5 MB | None |
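For the storage dimension, retention is configured per namespace or topic. The sketch below uses the Pulsar Java admin client to set a namespace-level retention policy; the admin URL, token, and namespace are placeholders, and the 7-day / 10 GB values are only an example.

```java
import org.apache.pulsar.client.admin.PulsarAdmin;
import org.apache.pulsar.client.api.AuthenticationFactory;
import org.apache.pulsar.common.policies.data.RetentionPolicies;

public class SetNamespaceRetention {
    public static void main(String[] args) throws Exception {
        // Placeholder endpoint and token; copy the real values from your cluster's
        // details page in the StreamNative Cloud console.
        PulsarAdmin admin = PulsarAdmin.builder()
                .serviceHttpUrl("https://<cluster-web-service-host>")
                .authentication(AuthenticationFactory.token("<your-token>"))
                .build();

        // Set a 7-day / 10 GB retention policy for acknowledged data in this namespace.
        admin.namespaces().setRetention(
                "public/default",
                new RetentionPolicies(7 * 24 * 60 /* minutes */, 10 * 1024 /* MB */));

        admin.close();
    }
}
```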
In addition to the usage limits, Serverless clusters have the following feature limitations:
- Tiered Storage is transparent, meaning you don't need to configure it. However, Serverless clusters don't support bringing your own bucket. If you need to use your own object storage bucket, consider a BYOC or BYOC Pro cluster.
- Serverless clusters use the Rapid Release Channel by default. You can't choose a different release channel.
- Auto-scaling is enabled by default. You don't need to configure it.
- Remote writes to external observability systems are not supported.
- Custom maintenance windows are not available.
- Private networking options are not available.
Dedicated Clusters
Dedicated (formerly Hosted) clusters are fully managed on StreamNative's cloud infrastructure. They support the following capabilities:
- Deployment in supported regions on AWS, GCP, and Azure, with a 99.95% uptime SLA for Single-Zone and 99.99% for Multi-Zone.
- Optional multi-zone high availability, spreading a cluster across three availability zones for enhanced resilience.
- Simplified scaling in terms of Compute Units (CUs) and Storage Units (SUs).
- Programmable or automatic scaling options.
Dedicated Cloud Providers & Regions
The following cloud providers and regions are supported for Dedicated (formerly Hosted) clusters:
AWS
Identifier | Location |
---|---|
ap-southeast-2 | Asia Pacific (Sydney) |
eu-central-1 | Europe (Frankfurt) |
eu-west-1 | Europe (Ireland) |
us-east-2 | US East (Ohio) |
GCP
Identifier | Location |
---|---|
europe-west1 | St. Ghislain, Belgium, Europe |
us-central1 | Council Bluffs, Iowa, North America |
Azure
Identifier | Location |
---|---|
eastus | US East |
Dedicated Features and Usage Limits
The following table outlines the features and usage limits for the Dedicated (formerly Hosted) Clusters:
| Type | Feature | Capability |
|---|---|---|
| Service | Uptime SLA (Single AZ) | 99.95% |
| | Uptime SLA (Multi AZ) | 99.99% |
| Scale | Throughput limit per topic | Max 100 MBps |
| | Storage limit per topic | Max 1000 TB |
| | Tenant limit | Max 128 |
| | Namespace limit | Max 1024 |
| | Topic limit | Max 10240 |
| Cloud providers | GCP | Yes |
| | AWS | Yes |
| | Azure | Yes |
In addition to the above, Dedicated (formerly Hosted) clusters have the following limitations:
- Tiered Storage is transparent, meaning you don't need to configure it. However, Dedicated clusters don't support bringing your own bucket. If you need to use your own bucket, consider a BYOC or BYOC Pro cluster.
- Remote writes to external observability systems are not supported.
- Custom maintenance windows are not available.
- Private networking options are not available.
BYOC Clusters
BYOC clusters are designed for production-ready deployments in your Cloud account, tailored to meet your data security, compliance, and sovereignty requirements. They offer the following capabilities:
- Dedicated deployments in your chosen region within your cloud account (AWS, GCP, or Azure), with a 99.95% uptime SLA for Single-Zone and 99.99% for Multi-Zone.
- Private networking options, including AWS PrivateLink, Azure Private Link, and GCP Private Service Connect.
- Optional multi-zone high availability, spreading a cluster across three availability zones for increased resilience.
- Simplified scaling in terms of CUs and SUs.
- Programmable or automatic scaling options.
- A choice between the Classic Engine and the Ursa Engine.
StreamNative uses CU/SU to bill for Classic Engine clusters and Elastic Throughput Units (ETUs) for Ursa Engine clusters.
BYOC Cloud Providers & Regions
A BYOC cluster can be deployed in any selected region in your cloud account across AWS, GCP, and Azure.
BYOC Features and Usage Limits
- Performance is bounded primarily by the underlying resources of your cloud account.
- You can access most Pulsar and Ursa features in your BYOC clusters.
- You can use your own S3-compatible storage bucket for the Ursa Engine, or Lakehouse tiered storage for the Classic Engine.
- You can use your own keys for data-at-rest encryption.
- Only Private Link is supported for private networking. You can manually configure other private networking options, but they are out of scope for StreamNative support.
- Geo-replication is supported over public networking only. If you need geo-replication over private networking, consider BYOC Pro.
- Remote writes to external observability systems are not supported.
- Custom maintenance windows are not available.
ETU capacity guidance
The dimensions in the following table describe the capacity of a single ETU in an Ursa Engine BYOC cluster. For more information about ETUs, see Elastic Throughput Unit (ETU) and ETU vs CU/SU.
Dimension | ETU Capacity |
---|---|
Ingress (Data In) | 25 megabytes per second (MBps) |
Egress (Data Out) | 75 megabytes per second (MBps) |
Data Entries | 2500 entries per second |
BYOC Pro Clusters
BYOC Pro Clusters are designed for critical production workloads and offer enhanced security and networking features, including:
- Advanced private networking options like VPC/VNet Peering and Transit Gateway.
- Remote writes to external observability systems.
- Lakehouse tiered storage solutions.
- Self-managed keys (Bring-Your-Own-Key) for AWS, Azure, or GCP.
Similar to BYOC clusters, BYOC Pro uses CU/SU to bill for Classic Engine clusters and uses Elastic Throughput Units (ETUs) for Ursa Engine clusters.
BYOC Pro Cloud Providers & Regions
BYOC Pro Clusters can be deployed in any selected region in your cloud account across AWS, GCP, and Azure.
BYOC Pro Features and Usage Limits
BYOC Pro is the most secure and flexible option, giving you full control over your data and network configurations.
- Performance is bounded primarily by the underlying resources of your cloud account.
- You can access most Pulsar and Ursa features in your BYOC Pro clusters.
- You can use your own S3-compatible storage bucket for the Ursa Engine, or Lakehouse tiered storage for the Classic Engine.
- You can use your own keys for data-at-rest encryption.
- Private Link, VPC/VNet Peering, and Transit Gateway are supported for private networking.
- Geo-replication via private networking is supported.
ETU capacity guidance
The dimensions in the following table describe the capacity of a single ETU in an Ursa Engine BYOC Pro cluster. For more information about ETUs, see Elastic Throughput Unit (ETU) and ETU vs CU/SU.
Dimension | ETU Capacity |
---|---|
Ingress (Data In) | 25 megabytes per second (MBps) |
Egress (Data Out) | 75 megabytes per second (MBps) |
Data Entries | 2500 entries per second |