Solved

JDBC Multitable Consumer to Databricks

  • February 16, 2023
  • 9 replies
  • 111 views

Precision and scale aren’t kept when copying numeric or decimal data from, say, SQL Server to Databricks via the JDBC Multitable Consumer origin.  How can I maintain the original precision and scale in my pipeline?  Will a Field Type Converter do it?  If so, there’s an option for changing the scale, but not the precision.

Best answer by Bikram

@pkandra 

I just read data from a DB table containing decimal values, and the data comes through as expected. I don’t see any changes to the data, and no values are truncated while sending data to the destination.

Below are the sample input and output details for your reference. I believe the destination end is truncating the values and falling back to the default of (10,0).

I found the following detail online:

Both Spark and Hive have a default precision of 10 and a scale of zero for the Decimal type, which means that if you do not specify the scale, there will be no digits after the decimal point.

Kindly check whether Databricks uses (10,0) as the default for decimal; if so, adjust it if possible.

 

 

Thanks & Regards

Bikram_


9 replies

Bikram
Headliner
  • Headliner
  • 486 replies
  • February 17, 2023

@pkandra 

 

I believe a Field Type Converter should fix the issue.


Bikram
Headliner
  • Headliner
  • 486 replies
  • February 17, 2023

@pkandra 

 

Kindly try it as in the example below.

 


  • Author
  • Opening Band
  • 19 replies
  • February 17, 2023

@Bikram  I saw that post, but I may need to change 15 tables and 12 or more fields per table.  Entering the precision and scale for each field and table isn’t a solution.  I’m looking for a solution using something similar to the field renamer where I can use /(.*) for the source field(s).  I find it a pretty big limitation that precision and scale aren’t carried through to destinations.


Bikram
Headliner
  • Headliner
  • 486 replies
  • February 22, 2023

@pkandra 

Kindly confirm whether the precision and scale will be fixed values or may change in the future.

Could I get some example data for the cases where you’re having issues, so I can try it in SDC and help you with it?

We can handle it in a Jython script or in custom functions.
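As an illustration of the scripted approach (a sketch in plain Python, not SDC’s Jython Evaluator API; the helper name and its overflow behavior are assumptions, not anything from this thread), the per-field logic such a script would apply could look like this:

```python
from decimal import Decimal, ROUND_HALF_UP

def fit_decimal(value, precision=19, scale=4):
    """Coerce a value to DECIMAL(precision, scale), raising if it overflows.

    A hypothetical helper: the kind of conversion a Jython script or
    custom function could run over each decimal field in a record.
    """
    # Quantize to the target scale (e.g. scale=4 -> four fractional digits).
    d = Decimal(str(value)).quantize(Decimal(1).scaleb(-scale),
                                     rounding=ROUND_HALF_UP)
    # After quantizing, the total digit count must fit the precision.
    if len(d.as_tuple().digits) > precision:
        raise ValueError("value %s exceeds DECIMAL(%d,%d)" % (d, precision, scale))
    return d

print(fit_decimal("1.23456"))  # rounds to scale 4
print(fit_decimal(100))        # pads to scale 4
```

Inside SDC the same logic would run over the record fields instead of literals, but the rounding and overflow questions are the same ones the pipeline has to answer.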

 

Thanks & Regards

Bikram_


Bikram
Headliner
  • Headliner
  • 486 replies
  • February 24, 2023

@pkandra 

Kindly provide some input data for the cases where you’re facing issues, so I can try to reproduce them in my SDC and prepare a solution.

Please include the input data and the expected result you are looking for.

 

Thanks & Regards

Bikram_


  • Author
  • Opening Band
  • 19 replies
  • February 24, 2023

@Bikram  I don’t have any sample data to share; it’s simply decimal data with precision and scale (19,4) that gets truncated to (10,0) in Databricks.


Bikram
Headliner
  • Headliner
  • 486 replies
  • Answer
  • February 25, 2023

@pkandra 

I just read data from a DB table containing decimal values, and the data comes through as expected. I don’t see any changes to the data, and no values are truncated while sending data to the destination.

Below are the sample input and output details for your reference. I believe the destination end is truncating the values and falling back to the default of (10,0).

I found the following detail online:

Both Spark and Hive have a default precision of 10 and a scale of zero for the Decimal type, which means that if you do not specify the scale, there will be no digits after the decimal point.

Kindly check whether Databricks uses (10,0) as the default for decimal; if so, adjust it if possible.
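To illustrate what that default does to the data (a sketch using plain Python’s `decimal` module, not Spark itself, and the sample value is made up; Spark’s exact rounding mode is an assumption here), landing a (19,4) value in a DECIMAL(10,0) column amounts to quantizing to scale zero:

```python
from decimal import Decimal, ROUND_HALF_UP

# A value as stored in a SQL Server DECIMAL(19,4) column (made-up sample).
source = Decimal("8765.4321")

# What a DECIMAL(10,0) destination amounts to: scale 0, so every
# digit after the decimal point is lost.
landed = source.quantize(Decimal("1"), rounding=ROUND_HALF_UP)
print(landed)
```

This is the truncation the thread describes: the origin reads the full value, but a destination column defaulting to (10,0) has nowhere to put the fractional digits.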

 

 

Thanks & Regards

Bikram_


  • Author
  • Opening Band
  • 19 replies
  • February 27, 2023

Thanks @Bikram.  If I use Avro as the staging format, I don’t have the issue, so for now that’s my fix.
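That fix is consistent with how Avro handles decimals: the decimal logical type records precision and scale explicitly in the schema, so a staging file can carry them end to end. The field below is a hypothetical example of such a schema entry (the field name `amount` is invented for illustration):

```python
import json

# Hypothetical Avro field declaration for a DECIMAL(19,4) column.
# The "decimal" logical type annotates a bytes type with explicit
# precision and scale, so neither is lost in staging.
amount_field = {
    "name": "amount",
    "type": {
        "type": "bytes",
        "logicalType": "decimal",
        "precision": 19,
        "scale": 4,
    },
}
print(json.dumps(amount_field, indent=2))
```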


Bikram
Headliner
  • Headliner
  • 486 replies
  • February 27, 2023

@pkandra 

Nice, glad the issue has been fixed.

