I would like a single job to loop over a pipeline in parallel, using rows from a control table as the parameters for each run.
I know you can create an orchestration with multiple jobs and run them in parallel, but the issue is that the control table will keep being updated with more data, so more jobs will need to be added to the orchestration manually, which is not ideal.
I have thought about looping using PySpark, but then the runs won't execute in parallel.
Is there a way to loop over a pipeline in parallel without creating multiple jobs?
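Roughly what I have in mind, as a minimal sketch: a single driver job reads the control table and fans each row out to a thread pool, so new rows are picked up automatically on the next run. Here `control_table` and `run_pipeline` are just placeholders for my actual table and for whatever call would really trigger one pipeline run.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

def run_pipeline(params):
    # Hypothetical placeholder: in reality this would be whatever call
    # triggers one run of the pipeline with the given parameters
    # (e.g. a jobs/REST API call on the platform).
    print(f"running pipeline with {params}")

# Read one parameter set per row from the control table, so newly
# added rows are picked up automatically without touching the job.
control_rows = [row.asDict() for row in spark.table("control_table").collect()]

# Fan the runs out over a thread pool so they execute concurrently,
# rather than one after another as a plain for-loop would.
with ThreadPoolExecutor(max_workers=8) as pool:
    futures = [pool.submit(run_pipeline, r) for r in control_rows]
    for f in as_completed(futures):
        f.result()  # re-raise any failure from that run
```

Is something like this the right approach, or is there a built-in way to do it?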