We were trying to configure the JDBC Multitable Consumer using the Python SDK 5.0. We were able to set the following configurations:
JDBC_consumer = pipeline_builder.add_stage('JDBC Multitable Consumer')
JDBC_consumer.jdbc_connection_string = 'jdbc:mysql://mysqldb:3306/zomato'
JDBC_consumer.username = '****'
JDBC_consumer.password = '****'
However, we are not able to set the table configuration.
Similarly, with Azure Data Lake Storage we are able to set the following configurations:
Azure_storage.data_format = 'DELIMITED'
Azure_storage.authentication_method = 'Shared Key'
Azure_storage.account_shared_key = '***'
However, we are not able to set Account FQDN and Storage Container / File System.
Please guide us with these configurations.
Best answer by Bikram
The easiest way is to create a pipeline in the UI, set these properties there, and then use the SDK to read the stage and see what values have been set. That gives you each stage property name and its respective value. You can then use the SDK to set the same properties in your new pipeline:
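A minimal sketch of that read-back step. The filtering helper below is our own, not part of the SDK, and the sample property names are illustrative stand-ins for whatever `stage.configuration` (the SDK's dict-like mapping of internal property names to values) reports on your UI-built pipeline:

```python
# Helper (ours, not part of the StreamSets SDK) that narrows a stage's
# configuration listing down to entries whose internal name matches a keyword.

def find_properties(configuration, keyword):
    """Return the config entries whose internal property name contains the keyword."""
    keyword = keyword.lower()
    return {name: value for name, value in configuration.items()
            if keyword in name.lower()}

# Stand-in for stage.configuration -- read the real property names from a
# pipeline built in the UI, as described above:
sample_configuration = {
    'tableJdbcConfigBean.tableConfigs': [{'schema': 'EMP', 'tablePattern': 'Data'}],
    'hikariConfigBean.connectionString': 'jdbc:mysql://mysqldb:3306/zomato',
    'hikariConfigBean.username': '****',
}

print(find_properties(sample_configuration, 'tableconfigs'))
```

With a live connection you would pass the real `stage.configuration` of a stage taken from the UI-built pipeline instead of the sample dict.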
sdc = sch.data_collectors.get(url='http://12345:18630')
builder = sch.get_pipeline_builder(engine_type='data_collector', engine_id=sdc.id)
jdbc_mt1 = builder.add_stage('JDBC Multitable Consumer')
# table_configs takes a list of dicts, one per table configuration
# (equivalently, set the raw property jdbc_mt1.configuration['tableJdbcConfigBean.tableConfigs'])
jdbc_mt1.table_configs = [{'schema': 'EMP', 'tablePattern': 'Data'}]
trash = builder.add_stage('Trash')
jdbc_mt1 >> trash
pipeline = builder.build('Bikrams 2nd pipeline')
sch.publish_pipeline(pipeline, commit_message='fourth commit of Bikrams pipeline')
Can you guide me through the steps needed to read the stages from the SDK, as mentioned in your solution?
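For reference, the read-back can be sketched like this. The helper is our own, not part of the SDK; it only relies on each stage exposing an `instance_name` and a dict-like `configuration`, which SDK stage objects do. The pipeline name is a placeholder:

```python
# Helper (ours, not part of the StreamSets SDK): collect every property of
# every stage into a plain dict keyed by stage instance name.

def dump_stage_configs(stages):
    """Return {stage instance name: {property name: value}} for a pipeline's stages."""
    return {stage.instance_name: dict(stage.configuration) for stage in stages}

# With a live Control Hub connection (the `sch` object in the answer above),
# you would fetch the UI-built pipeline and pass its stages in:
#
#     pipeline = sch.pipelines.get(name='UI built pipeline')
#     for stage_name, props in dump_stage_configs(pipeline.stages).items():
#         print(stage_name, props)

# Self-contained demonstration with a stand-in stage object:
from types import SimpleNamespace

demo_stage = SimpleNamespace(
    instance_name='JDBCMultitableConsumer_01',
    configuration={'tableJdbcConfigBean.tableConfigs': [{'schema': 'EMP'}]},
)
print(dump_stage_configs([demo_stage]))
```

Scanning that output for the property you set in the UI (for example the table configuration, or the ADLS account FQDN) gives you the exact internal name to use when building the pipeline programmatically.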