What changes vs. what stays the same
Understanding what changes during migration helps you plan with confidence. The short answer: very little.
What changes
- Bootstrap servers endpoint — You point your clients to the StreamNative Kafka Service endpoint instead of your current Kafka broker addresses.
- Authentication method — StreamNative uses OAuth 2.0 or API keys for authentication. You update your client configuration to use one of these methods. See Authentication overview for details.
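To make the two changes concrete, here is an illustrative before/after client configuration using confluent-kafka (librdkafka) property names. The endpoint and credential values are placeholders, not real StreamNative values:

```python
# Existing configuration pointing at your current brokers.
old_config = {
    "bootstrap.servers": "broker1.internal:9092,broker2.internal:9092",
    "group.id": "orders-service",          # unchanged after migration
}

# Migrated configuration: only the endpoint and auth properties change.
new_config = {
    "bootstrap.servers": "your-cluster.streamnative.example:9093",  # placeholder endpoint
    "security.protocol": "SASL_SSL",
    "sasl.mechanisms": "OAUTHBEARER",      # or PLAIN with an API key
    "group.id": "orders-service",          # unchanged after migration
}

# Everything except the endpoint and the auth block carries over verbatim.
changed = {k for k in new_config if new_config.get(k) != old_config.get(k)}
print(sorted(changed))
```

The same producer, consumer, and admin code runs against either dict; only the properties in `changed` differ.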
What stays the same
- Client code — Your producers, consumers, and admin clients require no code changes. Only configuration properties change.
- Kafka Connect connectors — Your existing connector configurations work as-is after pointing to the new endpoint.
- Kafka Streams applications — Your stream processing topologies run unchanged.
- Topic names — Your topic naming scheme carries over directly.
- Consumer group IDs — Your consumer groups and their offsets can be preserved during migration.
Migration steps
Follow these steps to migrate your Kafka workloads to StreamNative Kafka Service.
Step 1: Assess your current Kafka deployment
Before migrating, document your existing deployment:
- Topics and partitions — List all topics, their partition counts, and replication factors.
- Throughput — Measure your peak produce and consume rates (MB/s and messages/s).
- Retention policies — Record retention times and sizes for each topic or namespace.
- Consumer groups — Identify all active consumer groups and their current offsets.
- Connectors — Inventory your Kafka Connect source and sink connectors.
- Security configuration — Note your current authentication and authorization setup (SASL, TLS, ACLs).
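One way to keep this assessment actionable is to record it in a structured form. The sketch below assumes you have already exported topic descriptions (for example, from `kafka-topics.sh --describe` or the Admin API) into a list of dicts; the helper name and input structure are illustrative, not part of any StreamNative tooling:

```python
def summarize_inventory(topics):
    """Return headline numbers for capacity planning (Step 1 output)."""
    return {
        "topic_count": len(topics),
        "total_partitions": sum(t["partitions"] for t in topics),
        "max_replication_factor": max(t["replication_factor"] for t in topics),
        "max_retention_ms": max(t["retention_ms"] for t in topics),
    }

# Example inventory assembled from your existing cluster's metadata.
inventory = [
    {"name": "orders", "partitions": 12, "replication_factor": 3, "retention_ms": 604_800_000},
    {"name": "payments", "partitions": 6, "replication_factor": 3, "retention_ms": 259_200_000},
]
summary = summarize_inventory(inventory)
print(summary)
```

The resulting summary feeds directly into the cluster-sizing decision in Step 2.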
Step 2: Create a StreamNative Kafka cluster
Create a Kafka cluster in StreamNative Cloud. Choose your cloud provider, region, and cluster size based on the throughput requirements you identified in Step 1. See Get Started with Kafka Service for step-by-step instructions on creating a cluster.
Step 3: Configure authentication
Set up authentication for your clients. StreamNative supports two authentication methods:
- OAuth 2.0 — Recommended for production workloads. Provides token-based authentication with automatic rotation.
- API keys — Suitable for development and testing, or when OAuth 2.0 is not practical.
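As a sketch of the OAuth 2.0 option, the configuration below uses librdkafka's OIDC support (confluent-kafka property names). All values are placeholders; take the real token endpoint and credentials from your StreamNative Cloud console:

```python
# Placeholder OAuth 2.0 client configuration. With the "oidc" method,
# librdkafka fetches and refreshes tokens from the endpoint for you.
oauth_config = {
    "bootstrap.servers": "your-cluster.streamnative.example:9093",
    "security.protocol": "SASL_SSL",
    "sasl.mechanisms": "OAUTHBEARER",
    "sasl.oauthbearer.method": "oidc",
    "sasl.oauthbearer.client.id": "YOUR_CLIENT_ID",
    "sasl.oauthbearer.client.secret": "YOUR_CLIENT_SECRET",
    "sasl.oauthbearer.token.endpoint.url": "https://auth.example.com/oauth/token",
}

# from confluent_kafka import Producer
# producer = Producer(oauth_config)   # token acquisition happens automatically
```

API-key authentication replaces the OAuth properties with `sasl.mechanisms: PLAIN` plus a username/password pair; see the Authentication overview for the exact values.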
Step 4: Set up Universal Linking for zero-downtime migration
Universal Linking mirrors data between your source Kafka deployment and StreamNative, enabling zero-downtime migration. It replicates topics, consumer group offsets, and schemas from your existing cluster to StreamNative Kafka Service through object storage. Key capabilities of Universal Linking:
- Offset preservation — Maintains consumer group offsets so consumers can resume from where they left off.
- Schema migration — Replicates schemas from your source schema registry.
- No cross-zone traffic — Transfers data through object storage, avoiding expensive cross-zone networking costs.
Step 5: Validate with test consumers
Before migrating production traffic, validate the migration:
- Connect a test consumer to the StreamNative cluster and verify it can read mirrored data.
- Connect a test producer to StreamNative and confirm messages are written and readable.
- Verify that your Kafka Connect connectors work with the new endpoint.
- Confirm Kafka Streams applications process data correctly.
- Compare message counts and latencies between the source and StreamNative clusters.
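The message-count comparison in the last check can be sketched as a small helper. The counts themselves would come from your own tooling (for example, end offsets per topic); the function and tolerance parameter are illustrative:

```python
def counts_match(source_counts, target_counts, tolerance=0):
    """True if every topic's mirrored count is within `tolerance` of the source."""
    return all(
        abs(source_counts[topic] - target_counts.get(topic, 0)) <= tolerance
        for topic in source_counts
    )

# Example per-topic counts sampled from both clusters during validation.
source = {"orders": 1_000_000, "payments": 250_000}
target = {"orders": 1_000_000, "payments": 249_998}

print(counts_match(source, target, tolerance=5))   # small in-flight delta is acceptable
print(counts_match(source, target))                # strict equality fails while mirroring catches up
```

Allowing a small tolerance accounts for messages still in flight through the mirror; the delta should trend to zero once producers stop writing to the source.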
Step 6: Cut over production traffic
Once validation is complete, migrate production traffic:
- Update producer configurations to point to the StreamNative bootstrap servers.
- Wait for consumers to process any remaining messages from the source cluster.
- Update consumer configurations to point to the StreamNative bootstrap servers.
- Monitor consumer lag and throughput to confirm the cutover is successful.
- Decommission the source cluster after a stabilization period.
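A simple way to decide when the cutover has stabilized is to watch recent consumer-lag readings for every group. The sketch below is hypothetical: the lag samples would come from your monitoring stack, and the threshold and sample window are assumptions to adjust for your workload:

```python
def cutover_stable(lag_samples, max_lag=100):
    """lag_samples: {group_id: [recent lag readings, oldest first]}.

    Healthy when every group's last three readings are under max_lag.
    """
    return all(
        samples and all(lag <= max_lag for lag in samples[-3:])
        for samples in lag_samples.values()
    )

# Example readings collected after pointing consumers at StreamNative.
lags = {
    "orders-service": [5400, 800, 40, 12, 0],   # backlog drained after cutover
    "audit-consumer": [90, 30, 10, 5, 2],
}
print(cutover_stable(lags))
```

Only once this kind of check holds throughout the stabilization period should the source cluster be decommissioned.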
Keep your source cluster running for a stabilization period (typically 24-72 hours) after cutover. This gives you a rollback path if unexpected issues arise.
Coming from a specific platform
From Amazon MSK
If you are migrating from Amazon MSK, consider the following advantages of StreamNative Kafka Service:
- Cost savings — StreamNative’s Ursa Engine uses tiered storage with object storage (S3), eliminating the need for expensive EBS volumes and reducing storage costs significantly.
- No AZ replication costs — Traditional MSK clusters replicate data across availability zones, incurring cross-AZ data transfer charges. StreamNative’s architecture avoids these costs.
- Lakehouse-native — Built-in support for Apache Iceberg and lakehouse formats allows you to query streaming data directly with analytics engines, without building separate ETL pipelines.
- Simplified operations — No need to manage broker instances, patch Kafka versions, or tune JVM settings. StreamNative handles infrastructure management.
- Elastic scaling — Scale throughput up or down without the manual broker rebalancing required in MSK.
From Confluent Cloud or Platform
If you are migrating from Confluent Cloud or Confluent Platform, consider the following advantages of StreamNative Kafka Service:
- Open formats — StreamNative stores data in open formats (Apache Iceberg) rather than proprietary storage layers, giving you full control over your data.
- No vendor lock-in — Standard Kafka API compatibility means your applications are portable. No proprietary client libraries or APIs required.
- Cost efficiency — StreamNative’s Ursa Engine provides significant cost savings through efficient tiered storage and compute-storage separation.
- Multi-protocol support — Access your data through both Kafka and Pulsar protocols, giving you flexibility in how you build applications.
- Transparent pricing — Predictable pricing without hidden costs for features like Schema Registry, connectors, or cluster linking.
From self-managed Apache Kafka
If you are migrating from a self-managed Apache Kafka deployment, consider the following advantages of StreamNative Kafka Service:
- Zero operational overhead — Eliminate the need to manage ZooKeeper or KRaft controllers, broker instances, and operating system patches.
- Auto-scaling — StreamNative automatically scales compute and storage based on your workload, removing the need for manual capacity planning.
- Managed infrastructure — Automated upgrades, security patches, and monitoring are handled for you, freeing your team to focus on application development.
- Built-in observability — Pre-configured metrics, dashboards, and alerting replace the need to build and maintain your own monitoring stack.
- Enterprise security — OAuth 2.0, RBAC, and encryption are built in and ready to use, without manual configuration of SASL, ACLs, and TLS certificates.
Existing resources
Migrating to StreamNative
Detailed guide on Kafka-specific differences and data retention policies to consider when migrating.
Universal Linking
Set up data replication between your existing Kafka cluster and StreamNative for zero-downtime migration.
Kafka compatibility
Full reference of supported Kafka APIs, protocol versions, and feature compatibility.
Get started with Kafka Service
Create your first Kafka cluster and produce a message in under 5 minutes.