This pipeline is designed to orchestrate an initial bulk load followed by a change data capture (CDC) workload from an on-premises Oracle database to the Snowflake Data Cloud. The pipeline checks the completion status of the initial bulk load job before proceeding to the next (CDC) step, and sends success/failure notifications. It also takes advantage of the platform's event framework to automatically stop the pipeline when there is no more data to be ingested.
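The control flow described above (gate CDC on the bulk load's completion status and notify either way) can be sketched roughly as follows. This is a minimal illustrative sketch, not StreamSets code: the function names `run_bulk_load`, `start_cdc_pipeline`, and `notify` are hypothetical stand-ins for the orchestration stages in the actual pipeline.

```python
def orchestrate(run_bulk_load, start_cdc_pipeline, notify):
    """Run the bulk load job, then start CDC only if it succeeded.

    All three arguments are caller-supplied callables standing in for
    the platform's orchestration stages (hypothetical, for illustration).
    """
    status = run_bulk_load()  # blocks until the bulk load job completes
    if status == "FINISHED":
        notify("Bulk load succeeded; starting CDC pipeline")
        start_cdc_pipeline()  # runs until the event framework stops it
    else:
        notify(f"Bulk load ended with status {status}; CDC not started")
    return status
```

For example, a failed bulk load (`status != "FINISHED"`) results in a failure notification and the CDC step never starting, which mirrors the conditional step described above.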
@Dash, thank you very much for this example of an orchestration pipeline! It gave me a good idea of how to get started with this topic.
I have one question though: what is the purpose of reading the SCN at the start of the pipeline? Is it just to make sure an SCN is available (say, to ensure that LogMiner is properly set up), or is the SCN actually used later as input for the CDC pipeline (say, to configure the Oracle Client origin)?
Thank you very much!