
When writing to a Kafka Producer destination, timeout exceptions (like the one shown below) are not always handled, and the pipeline does not honor the On Record Error » Send to Error setting on the Kafka Producer destination. How can this be resolved?

Caused by: java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
	at org.apache.kafka.clients.producer.KafkaProducer$FutureFailure.<init>(KafkaProducer.java:1186)
	at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:880)
	at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:803)
	at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:690)
	at com.streamsets.pipeline.kafka.impl.BaseKafkaProducer09.enqueueMessage(BaseKafkaProducer09.java:64)
	at com.streamsets.pipeline.stage.destination.kafka.KafkaTarget.writeOneMessagePerRecord(KafkaTarget.java:242)
	... 30 more
Caused by: org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
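For context on why the TimeoutException arrives wrapped in an ExecutionException: the Kafka producer's send() is asynchronous, and a metadata failure typically surfaces only when the returned future is resolved. Below is a minimal sketch with the plain Kafka client (not the StreamSets code itself); the broker address and topic name are placeholders, and pointing at a broker that is down should reproduce the timeout.

import java.util.Properties;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.apache.kafka.common.serialization.StringSerializer;

public class SendTimeoutDemo {
    public static void main(String[] args) throws InterruptedException {
        Properties props = new Properties();
        // Placeholder broker address, used here only for illustration.
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        props.put("max.block.ms", "60000"); // how long send() waits for metadata

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // "test-topic" is a placeholder topic name.
            Future<RecordMetadata> future =
                    producer.send(new ProducerRecord<>("test-topic", "value"));
            try {
                future.get(); // resolving the future rethrows the send failure
            } catch (ExecutionException e) {
                // e.getCause() is the org.apache.kafka.common.errors.TimeoutException above
                System.err.println("Send failed: " + e.getCause());
            }
        }
    }
}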


According to Confluent, the timeout exception can be resolved by setting or updating the following configuration properties on the Kafka Producer destination.

{
  "key": "connections.max.idle.ms",
  "value": "60000"
},
{
  "key": "metadata.max.idle.ms",
  "value": "60000"
},
{
  "key": "metadata.max.age.ms",
  "value": "60000"
}
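For reference, these are standard Kafka producer properties, so the same fix can be sketched against a plain Java KafkaProducer. The broker address and topic below are placeholders, all three values are in milliseconds, and metadata.max.idle.ms requires Kafka clients 2.5 or later.

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class ProducerMetadataConfig {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholder broker address; substitute your own cluster.
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        // The three properties from the answer above.
        props.put("connections.max.idle.ms", "60000"); // close connections idle longer than 60 s
        props.put("metadata.max.idle.ms", "60000");    // drop cached metadata for topics unused for 60 s (Kafka 2.5+)
        props.put("metadata.max.age.ms", "60000");     // force a metadata refresh at least every 60 s

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // "test-topic" is a placeholder topic name.
            producer.send(new ProducerRecord<>("test-topic", "key", "value"));
        }
    }
}

Keeping metadata.max.age.ms at or below the broker-side idle window means the producer refreshes its view of the cluster before stale metadata can trigger the failure above.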

Cheers,

Dash
