Replace `<cluster-name>` with your StreamNative Cloud cluster name:
- `networkaddress.cache.ttl`: Set it to 30 seconds.
- `networkaddress.cache.negative.ttl`: Set it to 0 seconds.
- `consumer.client.dns.lookup`: Set it to `use_all_dns_ips`.
- `producer.client.dns.lookup`: Set it to `use_all_dns_ips`.
4.0.0.7 or later. Classic Engine clusters use cross-AZ replication for data durability and availability and cannot take advantage of this optimization.

Set the `client.id` to match the availability zone ID to enable zone-aware routing by adding `zone_id=<zone-id>` to the client ID. The client ID must follow this format: `zone_id=<zone-id>;key1=value1;key2=value2`

For example, if your availability zone is `us-west-1a` and the zone ID is `usw-az1`, set your client ID to `zone_id=usw-az1;other=value`. This ensures your client connects to brokers in the same zone.
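The zone-aware client ID above can be composed programmatically. A minimal sketch, where the helper is illustrative and the values mirror the example in the text:

```java
import java.util.Properties;

public class ZoneAwareClientId {
    // Compose a client.id carrying the zone ID plus optional key/value pairs.
    public static String clientId(String zoneId, String extras) {
        return "zone_id=" + zoneId + (extras.isEmpty() ? "" : ";" + extras);
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        // Zone ID usw-az1 (for us-west-1a), plus an extra key/value pair
        props.put("client.id", clientId("usw-az1", "other=value"));
        System.out.println(props.getProperty("client.id"));
    }
}
```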
The `zone_id` in the client ID must exactly match the availability zone ID for zone-aware routing to work correctly.

- `sasl.mechanism=PLAIN` - Specifies SASL/PLAIN as the authentication mechanism
- `security.protocol=SASL_SSL` - Enables SASL authentication over SSL/TLS
- `sasl.username` - Can be set to any value, as it is not used
- `sasl.password=token:<API KEY>` - Must be set to `token:` followed by your generated API key

Replace `<API KEY>` with the API key you generated.
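The `sasl.username`/`sasl.password` keys above are librdkafka-style properties; in the Java client the same credentials are passed through `sasl.jaas.config`. A minimal sketch, keeping the `<API KEY>` placeholder from the text (substitute your real key before use):

```java
import java.util.Properties;

public class SaslConfig {
    public static Properties build(String apiKey) {
        Properties props = new Properties();
        props.put("security.protocol", "SASL_SSL"); // SASL over TLS
        props.put("sasl.mechanism", "PLAIN");       // SASL/PLAIN mechanism
        // Java clients pass credentials via JAAS; the username is unused,
        // and the password must be "token:" followed by the API key.
        props.put("sasl.jaas.config",
            "org.apache.kafka.common.security.plain.PlainLoginModule required "
            + "username=\"unused\" password=\"token:" + apiKey + "\";");
        return props;
    }

    public static void main(String[] args) {
        Properties props = build("<API KEY>");
        System.out.println(props.getProperty("sasl.mechanism"));
    }
}
```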
| Configuration property | Java default | librdkafka default | Notes |
|---|---|---|---|
| `client.id` | empty string | `rdkafka` | Set `client.id` to something meaningful in your application, especially if you are running multiple clients or want to trace logs or activity back to specific client instances. This setting also matters for zone-aware routing: it helps StreamNative Cloud route traffic to the correct availability zone, eliminating cross-AZ networking traffic. See Eliminate Cross-AZ Networking Traffic for more information. |
| `connections.max.idle.ms` | 540000 ms (9 mins) | See librdkafka `socket.timeout.ms` | Change this when an intermediate load balancer disconnects idle connections: AWS after 350 seconds, Azure after 4 minutes, Google Cloud after 10 minutes. |
| `socket.connection.setup.timeout.max.ms` | 30000 ms (30 secs) | not available | librdkafka has no exponential backoff for this timeout, so increase `socket.connection.setup.timeout.ms` instead to avoid connection failures. |
| `socket.connection.setup.timeout.ms` | 10000 ms (10 secs) | 30000 ms (30 secs) | librdkafka has no exponential backoff for this timeout, so you can increase this value to avoid connection failures. |
| `metadata.max.age.ms` | 300000 ms (5 mins) | 900000 ms (15 mins) | librdkafka also has the `topic.metadata.refresh.interval.ms` setting, which defaults to 300000 ms (5 mins). |
| `reconnect.backoff.max.ms` | 1000 ms (1 second) | 10000 ms (10 seconds) | |
| `reconnect.backoff.ms` | 50 ms | 100 ms | |
| `max.in.flight.requests.per.connection` | 5 | 1000000 | librdkafka produces to a single partition per batch, so setting this to 5 limits producing to 5 partitions per broker. |
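A sketch of how some of the table's recommendations could be applied to a Java client. The concrete override values are illustrative assumptions, not values mandated by the table (for example, 300000 ms keeps idle connections under AWS's 350-second load-balancer cutoff):

```java
import java.util.Properties;

public class TunedClientConfig {
    public static Properties build() {
        Properties props = new Properties();
        props.put("client.id", "orders-service-1");      // meaningful name aids log tracing
        props.put("connections.max.idle.ms", "300000");  // stay below AWS's 350 s idle cutoff
        props.put("metadata.max.age.ms", "300000");      // refresh metadata every 5 minutes
        props.put("reconnect.backoff.max.ms", "10000");  // cap reconnect backoff at 10 s
        return props;
    }

    public static void main(String[] args) {
        System.out.println(build().getProperty("connections.max.idle.ms"));
    }
}
```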