Frequently Asked Questions
This page provides answers to frequently asked questions about billing on StreamNative Cloud. If you don’t find the answer you’re looking for, please contact our support team.

What is a Throughput Unit (TU)?
A Throughput Unit (TU) is a capacity planning abstraction used for Dedicated Kafka and BYOC clusters. Each TU represents a standardized amount of throughput capacity: 25 MBps ingress, 75 MBps egress, and 2,500 entries per second. TUs allow you to configure cluster capacity without managing infrastructure details. How TUs are charged depends on the cluster type: see RTU for Dedicated Kafka and ETU for BYOC clusters.

What is the difference between RTU and ETU?
RTU (Reserved Throughput Unit) and ETU (Elastic Throughput Unit) are two charging models for Throughput Units:
- RTU applies to Dedicated Kafka clusters. You are charged for the number of TUs you reserve when configuring the cluster, regardless of actual usage. This is a fixed hourly charge.
- ETU applies to Serverless and BYOC/BYOC Pro clusters. You are charged based on actual throughput usage. For BYOC clusters, you configure TUs as the reserved capacity, but billing is based on actual consumption.
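The practical difference between the two models can be sketched in a few lines. The rates below are placeholders for illustration only, not StreamNative’s actual prices:

```python
# Hypothetical hourly rates -- placeholders, not actual StreamNative pricing.
RTU_RATE_PER_TU_HOUR = 1.0   # fixed charge per reserved TU per hour
ETU_RATE_PER_TU_HOUR = 1.5   # charge per TU-hour actually consumed

def rtu_cost(reserved_tus: int, hours: int) -> float:
    """Dedicated Kafka: pay for reserved capacity regardless of usage."""
    return reserved_tus * RTU_RATE_PER_TU_HOUR * hours

def etu_cost(consumed_tu_hours: float) -> float:
    """Serverless/BYOC: pay only for throughput actually consumed."""
    return consumed_tu_hours * ETU_RATE_PER_TU_HOUR

# A cluster reserving 4 TUs for a day vs. one that consumed 50 TU-hours:
print(rtu_cost(4, 24))  # 96.0
print(etu_cost(50))     # 75.0
```

The key point: under RTU an idle cluster still incurs the full reserved charge, while under ETU the charge tracks consumption.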
Which billing model applies to my cluster?
The billing model depends on your cluster type:

| Cluster Type | Capacity Planning | Charging Unit |
|---|---|---|
| Serverless | Auto-scaled | ETU (Elastic Throughput Unit) |
| Dedicated Kafka (Public Preview) | TU (Throughput Unit) | RTU (Reserved Throughput Unit) |
| Dedicated Pulsar | CU + SU | CU (Compute Unit) + SU (Storage Unit) |
| BYOC / BYOC Pro (Kafka) | TU (Throughput Unit) | ETU (Elastic Throughput Unit) |
| BYOC / BYOC Pro (Pulsar) | CU + SU | ETU (Elastic Throughput Unit) |
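The table above can be expressed as a simple lookup, with illustrative key names of our own choosing:

```python
# Maps cluster type -> (capacity planning unit, charging unit),
# mirroring the table above. Key names are illustrative.
BILLING_MODELS = {
    "serverless": ("auto-scaled", "ETU"),
    "dedicated-kafka": ("TU", "RTU"),
    "dedicated-pulsar": ("CU + SU", "CU + SU"),
    "byoc-kafka": ("TU", "ETU"),
    "byoc-pulsar": ("CU + SU", "ETU"),
}

def charging_unit(cluster_type: str) -> str:
    """Return the unit a given cluster type is charged in."""
    return BILLING_MODELS[cluster_type][1]

print(charging_unit("dedicated-kafka"))  # RTU
print(charging_unit("byoc-pulsar"))      # ETU
```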
How do I scale my Dedicated Kafka cluster?
You can adjust the number of Throughput Units (TUs) for your Dedicated Kafka cluster using the TU slider in the StreamNative Cloud Console. The self-service range is 1 to 20 TUs (integer values only). For configurations beyond 20 TUs, contact StreamNative sales.

What is the difference between pre-replication and post-replication write throughput and how does it impact pricing?
Pre-replication write throughput is the amount of data written by clients before it is replicated across multiple bookies. Post-replication write throughput is the total amount of data written after replication has occurred. For example, if you write 1 GB of data with a replication factor of 3, the pre-replication write throughput is 1 GB, while the post-replication write throughput is 3 GB.

StreamNative Cloud bills based on the pre-replication write throughput. This means you are charged for the actual data you write, not for the additional copies created for replication purposes, so you don’t incur extra costs for maintaining data redundancy and fault tolerance in your Pulsar cluster.

How often are billing metrics updated?
Billing metrics are typically updated hourly. However, there may be a slight delay in the reporting of usage data, so the most recent hour’s data might not be immediately visible in your billing dashboard.

Are there any additional costs for using the StreamNative Cloud Console?
No, there are no additional costs for using the StreamNative Cloud Console. The console is provided as a free tool to manage and monitor your Pulsar clusters, instances, and other resources.

What happens if I exceed my free credits?
If you’re approaching or have exceeded your free credit limit, StreamNative will notify you via email. In most cases, your services will continue to run, but you are required to add a payment method to ensure uninterrupted service. If you have questions about your free credits, please contact the StreamNative team at https://streamnative.io/contact.

What is dimensional consumption?
Dimensional consumption refers to the usage of resources across different dimensions in StreamNative Cloud, particularly for Serverless clusters. These dimensions include:
- Ingress (Data In): The volume of data being written to your cluster, measured in bytes per second.
- Egress (Data Out): The volume of data being read from your cluster, measured in bytes per second.
- Data Entries: The number of entries (batches of messages) processed by the cluster per second, including both produce and consume operations.
Dimensional consumption applies to ETU-based clusters (Serverless, BYOC). For Dedicated Kafka clusters using RTU-based billing, you are charged at a fixed rate for the reserved capacity regardless of actual dimensional consumption.
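Using the per-TU capacities stated earlier (25 MBps ingress, 75 MBps egress, 2,500 entries per second), you can estimate a workload’s TU requirement. This sketch assumes the most constrained dimension determines the requirement, which is our simplification rather than a documented StreamNative formula:

```python
import math

# Per-TU capacities as stated in this FAQ.
TU_INGRESS_MBPS = 25
TU_EGRESS_MBPS = 75
TU_ENTRIES_PER_SEC = 2500

def tus_required(ingress_mbps: float, egress_mbps: float,
                 entries_per_sec: float) -> int:
    """Estimate TUs needed, assuming the tightest dimension dominates."""
    demand = max(
        ingress_mbps / TU_INGRESS_MBPS,
        egress_mbps / TU_EGRESS_MBPS,
        entries_per_sec / TU_ENTRIES_PER_SEC,
    )
    return max(1, math.ceil(demand))  # self-service range is 1-20 TUs

# 40 MBps in, 90 MBps out, 3,000 entries/s -> ingress is the bottleneck:
print(tus_required(40, 90, 3000))  # 2
```

An estimate above 20 TUs falls outside the self-service slider range and would require contacting StreamNative sales.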