Question:
How do you parameterize the Databricks Spark cluster configuration at runtime?
Cluster Manager Type: Databricks
Answer:
We can leverage the runtime:loadResource function to load the cluster configuration from a runtime resource file.
Step 1: Create a resource file containing the cluster configuration JSON, for example:
# cat test
{
  "num_workers": 6,
  "spark_version": "5.3.x-scala2.11",
  "node_type_id": "i3.xlarge"
}

Step 2: In the pipeline, under the Cluster Configuration section, enter the following expression:
${runtime:loadResource('test', false)}
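
When the pipeline starts, the expression is resolved and the contents of the resource file are used as the cluster configuration, so changing the cluster size or instance type only requires editing the file rather than the pipeline. As an illustration only, the sketch below shows a richer resource file; the autoscale and spark_conf fields are standard Databricks cluster-specification properties and are assumptions beyond the original example, so confirm that your engine version passes them through to the Databricks cluster-creation API:

# cat test
{
  "autoscale": {
    "min_workers": 2,
    "max_workers": 8
  },
  "spark_version": "5.3.x-scala2.11",
  "node_type_id": "i3.xlarge",
  "spark_conf": {
    "spark.sql.shuffle.partitions": "200"
  }
}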

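Two details the steps above assume: the resource file must sit in the engine's runtime resources directory so that runtime:loadResource can find it by name, and the second argument controls whether the file must be restricted (typically readable only by the engine's system user), which is recommended when the configuration contains sensitive values. A minimal sketch follows; the path shown is an assumption, so adjust it to your installation's resources directory:

# Copy the cluster configuration into the engine's resources directory
# (example path is an assumption; use your installation's resources location).
cp test /opt/streamsets-transformer/resources/test

# Only needed if you load it with runtime:loadResource('test', true):
# restrict the file so only the engine's system user can read it.
chmod 600 /opt/streamsets-transformer/resources/test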