Question

JDBC Producer record error: JDBC_90 - Record doesn't have any columns for table 'schema.tablename'

  • 12 September 2022
  • 4 replies
  • 399 views

Hi, I'm running a pipeline with the Oracle CDC Client origin, the PostgreSQL Metadata processor, and the JDBC Producer, and I run into the above error when adding a column to a table on the source side.

Checking the destination table, I found that the Metadata processor altered the target table perfectly. However, the JDBC Producer does not seem to recognize the new field: it assumes that the record, which consists of two columns (the primary key column and a value for the newly added column), does not carry any relevant data, and issues the error.

I've attached a screenshot showing the pipeline and the data.

Maybe someone has an idea what's wrong? I checked the articles about data drift and cannot find any hint about what I may have configured incorrectly.

 

Any help is greatly appreciated!

Here is the stack trace:

com.streamsets.pipeline.api.base.OnRecordErrorException: JDBC_90 - Record doesn't have any columns for table 'bunkerstreamsets.s_product'
    at com.streamsets.pipeline.lib.jdbc.JdbcGenericRecordWriter.processQueue(JdbcGenericRecordWriter.java:199)
    at com.streamsets.pipeline.lib.jdbc.JdbcGenericRecordWriter.write(JdbcGenericRecordWriter.java:164)
    at com.streamsets.pipeline.lib.jdbc.JdbcGenericRecordWriter.writeBatch(JdbcGenericRecordWriter.java:111)
    at com.streamsets.pipeline.lib.jdbc.JdbcUtil.write(JdbcUtil.java:1239)
    at com.streamsets.pipeline.lib.jdbc.JdbcUtil.write(JdbcUtil.java:1161)
    at com.streamsets.pipeline.stage.destination.jdbc.JdbcTarget.write(JdbcTarget.java:264)
    at com.streamsets.pipeline.stage.destination.jdbc.JdbcTarget.write(JdbcTarget.java:253)
    at com.streamsets.pipeline.api.base.configurablestage.DTarget.write(DTarget.java:34)
    at com.streamsets.datacollector.runner.StageRuntime.lambda$execute$2(StageRuntime.java:291)
    at com.streamsets.datacollector.runner.StageRuntime.execute(StageRuntime.java:232)
    at com.streamsets.datacollector.runner.StageRuntime.execute(StageRuntime.java:299)
    at com.streamsets.datacollector.runner.StagePipe.process(StagePipe.java:209)
    at com.streamsets.datacollector.execution.runner.common.ProductionPipelineRunner.processPipe(ProductionPipelineRunner.java:859)
    at com.streamsets.datacollector.execution.runner.common.ProductionPipelineRunner.lambda$executeRunner$3(ProductionPipelineRunner.java:903)
    at com.streamsets.datacollector.runner.PipeRunner.acceptConsumer(PipeRunner.java:195)
    at com.streamsets.datacollector.runner.PipeRunner.forEachInternal(PipeRunner.java:140)
    at com.streamsets.datacollector.runner.PipeRunner.executeBatch(PipeRunner.java:120)
    at com.streamsets.datacollector.execution.runner.common.ProductionPipelineRunner.executeRunner(ProductionPipelineRunner.java:902)
    at com.streamsets.datacollector.execution.runner.common.ProductionPipelineRunner.runSourceLessBatch(ProductionPipelineRunner.java:880)
    at com.streamsets.datacollector.execution.runner.common.ProductionPipelineRunner.runPollSource(ProductionPipelineRunner.java:602)
    at com.streamsets.datacollector.execution.runner.common.ProductionPipelineRunner.run(ProductionPipelineRunner.java:388)
    at com.streamsets.datacollector.runner.Pipeline.run(Pipeline.java:525)
    at com.streamsets.datacollector.execution.runner.common.ProductionPipeline.run(ProductionPipeline.java:100)
    at com.streamsets.datacollector.execution.runner.common.ProductionPipelineRunnable.run(ProductionPipelineRunnable.java:63)
    at com.streamsets.datacollector.execution.runner.standalone.StandaloneRunner.startInternal(StandaloneRunner.java:758)
    at com.streamsets.datacollector.execution.runner.standalone.StandaloneRunner.start(StandaloneRunner.java:751)
    at com.streamsets.datacollector.execution.runner.common.AsyncRunner.lambda$start$3(AsyncRunner.java:150)
    at com.streamsets.pipeline.lib.executor.SafeScheduledExecutorService$SafeCallable.lambda$call$0(SafeScheduledExecutorService.java:214)
    at com.streamsets.datacollector.security.GroupsInScope.execute(GroupsInScope.java:44)
    at com.streamsets.datacollector.security.GroupsInScope.execute(GroupsInScope.java:25)
    at com.streamsets.pipeline.lib.executor.SafeScheduledExecutorService$SafeCallable.call(SafeScheduledExecutorService.java:210)
    at com.streamsets.pipeline.lib.executor.SafeScheduledExecutorService$SafeCallable.lambda$call$0(SafeScheduledExecutorService.java:214)
    at com.streamsets.datacollector.security.GroupsInScope.execute(GroupsInScope.java:44)
    at com.streamsets.datacollector.security.GroupsInScope.execute(GroupsInScope.java:25)
    at com.streamsets.pipeline.lib.executor.SafeScheduledExecutorService$SafeCallable.call(SafeScheduledExecutorService.java:210)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
    at com.streamsets.datacollector.metrics.MetricSafeScheduledExecutorService$MetricsTask.run(MetricSafeScheduledExecutorService.java:88)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:750)
   

And the screenshot:

[Screenshot: pipeline canvas and record data]


4 replies


@pedromanuel 

Please make sure the column names and their data types are the same in both the source and the destination table.

In case of any mismatch, the records will not be processed further.
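
For example, you can compare the two sides with queries like these (a quick sketch; the schema and table names are placeholders):

-- Oracle source: list columns and data types
SELECT column_name, data_type FROM all_tab_columns WHERE owner = 'YOUR_SCHEMA' AND table_name = 'YOUR_TABLE';

-- PostgreSQL destination: list columns and data types
SELECT column_name, data_type FROM information_schema.columns WHERE table_schema = 'your_schema' AND table_name = 'your_table';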

Hi @Bikram, thank you! The column name and data type match. The column in the destination was actually created by the PostgreSQL Metadata processor based on the field parameters, so I would expect this to work. Any other ideas?

 

Thank you!


@pedromanuel 

 

Thanks.

The error below says that the record does not contain any fields matching the columns of the destination table 'bunkerstreamsets.s_product'.

Kindly cross-verify this; once you resolve the mismatch, the data will start flowing to the destination table.

Error:

(Record doesn't have any columns for table 'bunkerstreamsets.s_product')

 

 

Thanks & Regards,

Bikram_


Hello @pedromanuel,

I have seen this error before, and here is how it can be resolved:

PostgreSQL folds unquoted identifiers to lowercase, but preserves case for double-quoted identifiers.

When you create a table with double-quoted column names, the columns keep their exact case; without quotes, they are always created in lowercase. You can run the query below to find out how the columns were created:
 
SELECT * FROM information_schema.columns WHERE table_schema = 'your schema name' AND table_name = 'your table name';
 
create table test("EVENT_KEY" numeric(38) NOT NULL, "SEQ" int2 NOT NULL, "NAME" varchar (100) NULL, CONSTRAINT test_pkey PRIMARY KEY ("EVENT_KEY", "SEQ"));
 
Now, if you try to insert using the query below, it will fail, because the unquoted column names are folded to lowercase and do not match the quoted uppercase columns:
 
insert into test(event_key, SEQ, NAME) values (3, 1, 'a');
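
For completeness, a sketch of the insert that does work against the quoted table: the column names themselves must be double-quoted with the exact case used at creation.

insert into test("EVENT_KEY", "SEQ", "NAME") values (3, 1, 'a');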
 
If you don't use double quotes, the table will be created with all column names in lowercase.
 
create table test1(EVENT_KEY numeric(38) NOT NULL, SEQ int2 NOT NULL, NAME varchar (100) NULL, CONSTRAINT test1_pkey PRIMARY KEY (EVENT_KEY, SEQ));
 
Now, if you try to insert using the query below, it will succeed, because the unquoted column names in the insert are likewise folded to lowercase:
insert into test1(event_key, SEQ, NAME) values (3, 1, 'a');

So creating your tables with double-quoted identifiers in the appropriate case (matching the case of the field names in your records) should solve the issue; see the sketch below for fixing an existing table.
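
If a table already exists with mismatched case, you don't have to recreate it; renaming the columns works too. A sketch using the first example table above:

ALTER TABLE test RENAME COLUMN "EVENT_KEY" TO event_key;
-- the column is now lowercase and matches unquoted (lowercase-folded) identifiers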
