Kafka Connection Configurations
For Kafka connection issues, I used the config properties described below. You can find more information about them here: https://kafka.apache.org/documentation/#producerconfigs
“retry.backoff.ms” did not help much, but I added it to my application.properties with its default value.
I then tried the case where both ZooKeeper and the bootstrap server are down; the “reconnect.backoff.ms” and “reconnect.backoff.max.ms” properties helped. Previously there were too many connection attempts; now there are fewer, and the interval between attempts grows over time, up to the configured maximum.
You can see the output below:
2020-01-18 22:35:46,910 INFO [main] [] misc - Thread pool was configured to max=250
2020-01-18 22:35:52,942 WARN [kafka-producer-network-thread | producer-1] [] NetworkClient - [Producer clientId=producer-1] Connection to node -1 (/127.0.0.1:9092) could not be established. Broker may not be available.
2020-01-18 22:36:12,958 WARN [kafka-producer-network-thread | producer-1] [] NetworkClient - [Producer clientId=producer-1] Connection to node -1 (/127.0.0.1:9092) could not be established. Broker may not be available.
2020-01-18 22:36:42,990 WARN [kafka-producer-network-thread | producer-1] [] NetworkClient - [Producer clientId=producer-1] Connection to node -1 (/127.0.0.1:9092) could not be established. Broker may not be available.
2020-01-18 22:36:43,432 INFO [mailboxd.csv] [] cache - setting message cache size to 2000
2020-01-18 22:37:13,016 WARN [kafka-producer-network-thread | producer-1] [] NetworkClient - [Producer clientId=producer-1] Connection to node -1 (/127.0.0.1:9092) could not be established. Broker may not be available.
2020-01-18 22:37:48,057 WARN [kafka-producer-network-thread | producer-1] [] NetworkClient - [Producer clientId=producer-1] Connection to node -1 (/127.0.0.1:9092) could not be established. Broker may not be available.
2020-01-18 22:38:23,093 WARN [kafka-producer-network-thread | producer-1] [] NetworkClient - [Producer clientId=producer-1] Connection to node -1 (/127.0.0.1:9092) could not be established. Broker may not be available.
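For reference, here is a minimal sketch of how these backoff properties can be set on a plain Java producer. The broker address and the concrete values are illustrative assumptions, not the exact configuration from my application.properties:

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

public class BackoffConfigSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Hypothetical broker address; replace with your own bootstrap servers.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        // retry.backoff.ms: left at its default of 100 ms.
        props.put(ProducerConfig.RETRY_BACKOFF_MS_CONFIG, 100);
        // reconnect.backoff.ms: base wait before reconnecting to a broker (illustrative value).
        props.put(ProducerConfig.RECONNECT_BACKOFF_MS_CONFIG, 1000);
        // reconnect.backoff.max.ms: cap for the exponentially growing reconnect interval (illustrative value).
        props.put(ProducerConfig.RECONNECT_BACKOFF_MAX_MS_CONFIG, 30000);

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        // ... send records here ...
        producer.close();
    }
}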
Later on, I tested the case where ZooKeeper is up and the bootstrap server is down. Here, “max.block.ms” helped: after the configured waiting period, the log below is printed. We could consider lowering its value.
2020-01-18 22:46:28,620 ERROR [qtp726379593-130:https://127.0.0.1/service/soap/SearchConvRequest] [name=admin@ubuntu.nils.local;mid=2;ip=192.168.9.169;port=49242;ua=ZimbraWebClient - GC79 (Linux)/8.8.15_GA_3890;soapId=7768cb18;] mailbox - [MAILBOX_LISTENER] [KAFKA_PRODUCER] [PRODUCE_ERROR= org.apache.kafka.common.errors.TimeoutException: Topic client-event-topic not present in metadata after 5000 ms.] :
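Building on the earlier sketch (same props object, plus the extra imports shown), the snippet below illustrates how max.block.ms can be lowered and how the resulting timeout surfaces to the caller. The 5000 ms value and the topic name mirror the log above; the rest is an illustrative assumption, not the actual code of the mailbox listener:

import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.errors.TimeoutException;

// max.block.ms: fail send() after 5 seconds if metadata or buffer space is unavailable.
props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, 5000);

KafkaProducer<String, String> producer = new KafkaProducer<>(props);
try {
    producer.send(new ProducerRecord<>("client-event-topic", "key", "value"),
            (metadata, exception) -> {
                if (exception != null) {
                    // Asynchronous failures (e.g. delivery timeouts) arrive in the callback.
                    System.err.println("Produce failed: " + exception.getMessage());
                }
            });
} catch (TimeoutException e) {
    // Thrown when send() blocks longer than max.block.ms, e.g.
    // "Topic client-event-topic not present in metadata after 5000 ms."
    System.err.println("send() blocked too long: " + e.getMessage());
}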
From the link at the beginning of the post:
max.block.ms: The configuration controls how long KafkaProducer.send() and KafkaProducer.partitionsFor() will block. These methods can be blocked either because the buffer is full or metadata unavailable. Blocking in the user-supplied serializers or partitioner will not be counted against this timeout.
Type: long
Default: 60000
Valid Values: [0,…]
Importance: medium
retry.backoff.ms: The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios.
Type: long
Default: 100
Valid Values: [0,…]
Importance: low
reconnect.backoff.ms: The base amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all connection attempts by the client to a broker.
Type: long
Default: 50
Valid Values: [0,…]
Importance: low
reconnect.backoff.max.ms: The maximum amount of time in milliseconds to wait when reconnecting to a broker that has repeatedly failed to connect. If provided, the backoff per host will increase exponentially for each consecutive connection failure, up to this maximum. After calculating the backoff increase, 20% random jitter is added to avoid connection storms.
Type: long
Default: 1000
Valid Values: [0,…]
Importance: low
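To make the reconnect backoff concrete, the small loop below approximates the delay sequence the documentation describes: exponential growth per consecutive failure, starting at reconnect.backoff.ms, capped at reconnect.backoff.max.ms, with roughly 20% jitter. It is only an illustration of the documented behaviour, not Kafka's internal implementation:

import java.util.concurrent.ThreadLocalRandom;

public class ReconnectBackoffIllustration {
    public static void main(String[] args) {
        long base = 50;   // reconnect.backoff.ms (default)
        long max = 1000;  // reconnect.backoff.max.ms (default)

        for (int failures = 0; failures < 8; failures++) {
            // Exponential growth per consecutive failure, capped at the maximum.
            double backoff = Math.min(base * Math.pow(2, failures), max);
            // Roughly 20% random jitter, as described in the documentation.
            double jitter = 1 + (ThreadLocalRandom.current().nextDouble() - 0.5) * 0.4;
            System.out.printf("failure %d -> wait ~%.0f ms%n", failures + 1, backoff * jitter);
        }
    }
}

With the defaults (50 ms base, 1000 ms cap), this gives waits of roughly 50, 100, 200, 400, 800 ms and then stays around one second.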
Happy Coding!