Solved

Why does Kafka Producer not handle timeout exceptions?

  • 26 October 2021
  • 1 reply
  • 8033 views

  • Senior Technical Evangelist and Developer Advocate at Snowflake

When writing to a Kafka Producer destination, timeout exceptions (such as the one below) are not always handled, and the pipeline does not honor the On Record Error » Send to Error setting on the Kafka Producer destination. How can this be resolved?

Caused by: java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
    at org.apache.kafka.clients.producer.KafkaProducer$FutureFailure.<init>(KafkaProducer.java:1186)
    at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:880)
    at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:803)
    at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:690)
    at com.streamsets.pipeline.kafka.impl.BaseKafkaProducer09.enqueueMessage(BaseKafkaProducer09.java:64)
    at com.streamsets.pipeline.stage.destination.kafka.KafkaTarget.writeOneMessagePerRecord(KafkaTarget.java:242)
    ... 30 more
Caused by: org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.


Best answer by Dash 26 October 2021, 21:09


1 reply


According to Confluent, the timeout exception can be resolved by setting or updating the following properties in the Kafka configuration of the Kafka Producer destination:

{
"key": "connections.max.idle.ms",
"value": "60000"
},
{
"key": "metadata.max.idle.ms",
"value": "60000"
},
{
"key": "metadata.max.age.ms",
"value": "60000"
}
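For reference, these are standard Apache Kafka producer configuration properties, so outside of StreamSets the same tuning can be applied directly in a plain producer.properties file. A minimal sketch with the values from the answer above (note that metadata.max.idle.ms requires Kafka 2.5 or later):

```properties
# Close connections that have been idle for 60 s, so the producer
# does not keep trying to reuse a stale broker connection
connections.max.idle.ms=60000

# Expire cached metadata for topics that have not been produced to
# within 60 s (available from Kafka 2.5 onward)
metadata.max.idle.ms=60000

# Force a metadata refresh at least every 60 s, even if no partition
# leadership changes have been observed
metadata.max.age.ms=60000
```

Keeping these intervals short means the producer refreshes its view of the cluster more often, which reduces the chance of hitting the "Failed to update metadata" timeout after broker restarts or network changes.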

Cheers,

Dash
