HADOOPFS_14 - Cannot write record: java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try.

  • 13 June 2022

Question:

A StreamSets instance is giving the following error:

com.streamsets.pipeline.api.StageException: HADOOPFS_13 - Error while writing to HDFS: com.streamsets.pipeline.api.StageException: HADOOPFS_14 - Cannot write record: java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[10.227.165.17:1004,DS-e8b8d6a1-c366-4deb-a92a-6f796381d10d,DISK], DatanodeInfoWithStorage[10.227.165.15:1004,DS-28d037df-c357-4caa-8ca8-41435c2732e1,DISK]], original=[DatanodeInfoWithStorage[10.227.165.17:1004,DS-e8b8d6a1-c366-4deb-a92a-6f796381d10d,DISK], DatanodeInfoWithStorage[10.227.165.15:1004,DS-28d037df-c357-4caa-8ca8-41435c2732e1,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.

Answer:

This issue may be related to the Cloudera/HDFS configuration. The article below explains how the HDFS recovery process works:

https://blog.cloudera.com/blog/2015/03/understanding-hdfs-recovery-processes-part-2/

Check with Cloudera support. In this case, the issue was eventually resolved after a cluster restart.
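
If the cluster is healthy but the error keeps recurring (typically on clusters with only two or three datanodes, where no replacement datanode exists), the client-side replacement behavior can be relaxed through the property named in the error message. Below is a minimal sketch of hdfs-site.xml overrides; the property names are standard HDFS client settings, but the values shown are an assumption for small clusters and should be verified against your Hadoop/Cloudera version before applying:

<!-- Sketch: relax datanode replacement on write-pipeline failure.
     Assumption: a small cluster where no spare datanode is available. -->
<property>
  <!-- NEVER: continue writing to the remaining datanodes instead of
       failing when a replacement datanode cannot be found. -->
  <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
  <value>NEVER</value>
</property>
<property>
  <!-- Alternative: keep the DEFAULT policy but do not fail the write
       if the client cannot find a replacement (best-effort mode). -->
  <name>dfs.client.block.write.replace-datanode-on-failure.best-effort</name>
  <value>true</value>
</property>

Note that NEVER trades durability for availability: blocks written while a datanode is down may remain under-replicated until HDFS re-replicates them.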

