Solved

Azure Bug?

  • 29 October 2021

Hi, I am on StreamSets Data Collector 3.20.0 and received a new error on a production pipeline:


 Pipeline Status: RUN_ERROR: HADOOPFS_13 - Error while writing to HDFS: org.apache.hadoop.fs.FileAlreadyExistsException: Operation failed: "This endpoint does not support BlobStorageEvents or SoftDelete. Please disable these account features if you would like to use this endpoint.", 409, HEAD, https://gcdatalake2.dfs.core.windows.net/data-warehouse//?upn=false&action=getAccessControl&timeout=90


This pipeline was working, and now every one of my pipelines that requires access to any Azure Blob store is failing with the same error.
The stack trace is:


com.streamsets.pipeline.api.StageException: HADOOPFS_13 - Error while writing to HDFS: org.apache.hadoop.fs.FileAlreadyExistsException: Operation failed: "This endpoint does not support BlobStorageEvents or SoftDelete. Please disable these account features if you would like to use this endpoint.", 409, HEAD, https://gcdatalake2.dfs.core.windows.net/data-warehouse//?upn=false&action=getAccessControl&timeout=90
    at com.streamsets.pipeline.stage.destination.hdfs.HdfsTarget.throwStageException(HdfsTarget.java:83)
    at com.streamsets.pipeline.stage.destination.hdfs.HdfsTarget.write(HdfsTarget.java:137)
    at com.streamsets.pipeline.api.base.configurablestage.DTarget.write(DTarget.java:34)
    at com.streamsets.datacollector.runner.StageRuntime.lambda$execute$2(StageRuntime.java:303)
    at com.streamsets.datacollector.runner.StageRuntime.execute(StageRuntime.java:244)
    at com.streamsets.datacollector.runner.StageRuntime.execute(StageRuntime.java:311)
    at com.streamsets.datacollector.runner.StagePipe.process(StagePipe.java:221)
    at com.streamsets.datacollector.execution.runner.common.ProductionPipelineRunner.processPipe(ProductionPipelineRunner.java:864)
    at com.streamsets.datacollector.execution.runner.common.ProductionPipelineRunner.lambda$executeRunner$3(ProductionPipelineRunner.java:908)
    at com.streamsets.datacollector.runner.PipeRunner.acceptConsumer(PipeRunner.java:207)
    at com.streamsets.datacollector.runner.PipeRunner.forEachInternal(PipeRunner.java:152)
    at com.streamsets.datacollector.runner.PipeRunner.executeBatch(PipeRunner.java:132)
    at com.streamsets.datacollector.execution.runner.common.ProductionPipelineRunner.executeRunner(ProductionPipelineRunner.java:907)
    at com.streamsets.datacollector.execution.runner.common.ProductionPipelineRunner.runSourceLessBatch(ProductionPipelineRunner.java:885)
    at com.streamsets.datacollector.execution.runner.common.ProductionPipelineRunner.processBatch(ProductionPipelineRunner.java:515)
    at com.streamsets.datacollector.runner.StageRuntime$3.run(StageRuntime.java:383)
    at java.security.AccessController.doPrivileged(Native Method)
    at com.streamsets.datacollector.runner.StageRuntime.processBatch(StageRuntime.java:379)
    at com.streamsets.datacollector.runner.StageContext.processBatch(StageContext.java:293)
    at com.streamsets.pipeline.stage.origin.s3.AmazonS3Runnable.produce(AmazonS3Runnable.java:314)
    at com.streamsets.pipeline.stage.origin.s3.AmazonS3Runnable.run(AmazonS3Runnable.java:111)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at com.streamsets.pipeline.lib.executor.SafeScheduledExecutorService$SafeCallable.lambda$call$0(SafeScheduledExecutorService.java:226)
    at com.streamsets.datacollector.security.GroupsInScope.execute(GroupsInScope.java:34)
    at com.streamsets.pipeline.lib.executor.SafeScheduledExecutorService$SafeCallable.call(SafeScheduledExecutorService.java:222)
    at com.streamsets.pipeline.lib.executor.SafeScheduledExecutorService$SafeRunnable.run(SafeScheduledExecutorService.java:188)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at com.streamsets.pipeline.lib.executor.SafeScheduledExecutorService$SafeCallable.lambda$call$0(SafeScheduledExecutorService.java:226)
    at com.streamsets.datacollector.security.GroupsInScope.execute(GroupsInScope.java:34)
    at com.streamsets.pipeline.lib.executor.SafeScheduledExecutorService$SafeCallable.call(SafeScheduledExecutorService.java:222)
    at com.streamsets.pipeline.lib.executor.SafeScheduledExecutorService$SafeRunnable.run(SafeScheduledExecutorService.java:188)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.hadoop.fs.FileAlreadyExistsException: Operation failed: "This endpoint does not support BlobStorageEvents or SoftDelete. Please disable these account features if you would like to use this endpoint.", 409, HEAD, https://gcdatalake2.dfs.core.windows.net/data-warehouse//?upn=false&action=getAccessControl&timeout=90
    at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.checkException(AzureBlobFileSystem.java:1077)
    at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.getFileStatus(AzureBlobFileSystem.java:439)
    at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1729)
    at com.streamsets.pipeline.stage.destination.hdfs.writer.RecordWriterManager.getWriter(RecordWriterManager.java:299)
    at com.streamsets.pipeline.stage.destination.hdfs.writer.ActiveRecordWriters.get(ActiveRecordWriters.java:125)
    at com.streamsets.pipeline.stage.destination.hdfs.HdfsTarget.write(HdfsTarget.java:196)
    at com.streamsets.pipeline.stage.destination.hdfs.HdfsTarget.access$100(HdfsTarget.java:47)
    at com.streamsets.pipeline.stage.destination.hdfs.HdfsTarget$1.run(HdfsTarget.java:112)
    at com.streamsets.pipeline.stage.destination.hdfs.HdfsTarget$1.run(HdfsTarget.java:100)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1845)
    at com.streamsets.pipeline.stage.destination.hdfs.HdfsTarget.write(HdfsTarget.java:100)
    ... 40 more
Caused by: Operation failed: "This endpoint does not support BlobStorageEvents or SoftDelete. Please disable these account features if you would like to use this endpoint.", 409, HEAD, https://gcdatalake2.dfs.core.windows.net/data-warehouse//?upn=false&action=getAccessControl&timeout=90
    at org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.execute(AbfsRestOperation.java:146)
    at org.apache.hadoop.fs.azurebfs.services.AbfsClient.getAclStatus(AbfsClient.java:572)
    at org.apache.hadoop.fs.azurebfs.services.AbfsClient.getAclStatus(AbfsClient.java:554)
    at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.getIsNamespaceEnabled(AzureBlobFileSystemStore.java:241)
    at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.getFileStatus(AzureBlobFileSystemStore.java:594)
    at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.getFileStatus(AzureBlobFileSystem.java:437)
    ... 51 more


I am hoping that Azure didn't change something that caused this.
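
To rule out anything specific to Data Collector, I'm considering a minimal standalone check against the same container using plain hadoop-azure. This is only a sketch: the class `AzureBlobFileSystem` and the settings `fs.azure.account.key.*` and `fs.azure.account.hns.enabled` are part of the Hadoop ABFS driver, but the key-based auth and the AZURE_STORAGE_KEY environment variable are just assumptions for the example.

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class AbfsProbeCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // Assumption: key-based auth, with the storage key supplied via an env var.
        conf.set("fs.azure.account.key.gcdatalake2.dfs.core.windows.net",
                 System.getenv("AZURE_STORAGE_KEY"));

        // The failing HEAD ...?action=getAccessControl call in the stack trace is the
        // ABFS driver probing whether the account has a hierarchical namespace
        // (getIsNamespaceEnabled -> getAclStatus). Declaring the namespace type
        // explicitly should skip that probe; if this check fails without the setting
        // but passes with it, the 409 is coming from the probe against the account
        // rather than from anything in the pipeline itself.
        // conf.set("fs.azure.account.hns.enabled", "true");

        FileSystem fs = FileSystem.get(
                new URI("abfss://data-warehouse@gcdatalake2.dfs.core.windows.net/"), conf);

        // Same kind of exists() call that RecordWriterManager makes before writing.
        System.out.println("exists(/) = " + fs.exists(new Path("/")));
    }
}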
