Precision and scale aren’t kept when copying numeric or decimal data from, say, SQL Server to Databricks via the JDBC Multitable Consumer origin. How can I maintain the original precision and scale in my pipeline? Will a Field Type Converter do it? If so, there’s an option for changing the scale, but not the precision.
I believe a Field Type Converter should fix the issue; kindly try that first.
Kindly confirm whether the precision and scale will be fixed values or whether they can change in the future.
Can I get some example data on which you are having issues, so I can try it in SDC and help you with it?
We can also handle it in a Jython script or in custom functions; a rough sketch follows below.
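A minimal Jython Evaluator sketch along those lines, assuming a hypothetical field named 'amount' and a target scale of 4, and assuming decimal.Decimal values are written back as DECIMAL fields (if not, keep a Field Type Converter after the evaluator to cast the field):

```python
# Jython Evaluator sketch: force a decimal field to a fixed scale.
# The field name 'amount' and the scale of 4 are placeholders; adjust to your schema.
from decimal import Decimal, ROUND_HALF_UP

TARGET_SCALE = Decimal('0.0001')  # four digits after the decimal point

for record in records:
    try:
        if 'amount' in record.value and record.value['amount'] is not None:
            # Re-quantize so the value keeps its scale before it reaches the destination
            record.value['amount'] = Decimal(str(record.value['amount'])).quantize(
                TARGET_SCALE, rounding=ROUND_HALF_UP)
        output.write(record)
    except Exception as e:
        error.write(record, str(e))
```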
Thanks & Regards
Bikram_
Kindly provide some input data on which you are facing issues, so I can try to reproduce it in my SDC and prepare a solution for it.
Please include the input data and the expected result you are looking for.
Thanks & Regards
Bikram_
I just read the data from a DB table having decimal values in it, and the data is coming through as expected. I am not seeing any changes to the data, and no values are truncated while sending data to the destinations.
Below are the sample input and output details for your reference. I believe the destination end is truncating the values and taking the default of (10,0).
I got the details below from Google:
“Both Spark and Hive have a default precision of 10 and scale of zero for Decimal type. Which means if you do not specify the scale, there will be no numbers after the decimal point.”
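That default is easy to check from a Databricks notebook; a quick illustration:

```python
# Spark's DecimalType defaults to precision 10 and scale 0 when not specified.
from pyspark.sql.types import DecimalType

print(DecimalType())        # DecimalType(10,0)
print(DecimalType(18, 4))   # DecimalType(18,4) - explicit precision and scale
```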
Kindly check whether Databricks is applying its default of (10,0) to the decimal column; if so, please do the needful if possible, for example by declaring the column with the required precision and scale as sketched below.
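A minimal sketch of that fix on the Databricks side, assuming a Delta table and hypothetical names (sales.orders, amount) with a target type of DECIMAL(18,4); pre-creating the table with an explicit type prevents the fallback to DECIMAL(10,0):

```python
# 'spark' is the SparkSession available in a Databricks notebook or job.
# Table name, column names, and DECIMAL(18,4) are placeholders; match your source schema.
spark.sql("""
    CREATE TABLE IF NOT EXISTS sales.orders (
        order_id BIGINT,
        amount   DECIMAL(18, 4)
    )
    USING DELTA
""")
```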
Thanks & Regards
Bikram_
Thanks
Nice, the issue has been fixed.