
I am using Azure Kubernetes Service (AKS) and have deployed a Docker image containing a tarball deployment script. When an error occurs in the Control Hub pipeline, a new pod is created in AKS and a new engine instance is registered, but the pipeline does not automatically update its engine instance to an accessible one. Is there an automated workaround for this use case? Your comments would be appreciated.

 

Hi @abidali,

The engine attached to a pipeline when you create it is the authoring engine. StreamSets recommends having a dedicated engine for this purpose; that way you will not need to change the authoring Data Collector on your pipelines during development.

 

To execute a pipeline, you can have a set of engines that share the same labels. When AKS creates a new pod, as long as the replacement engine registers with the same label, your jobs will continue to run as normal.
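One way to keep the label stable across pod restarts is a small sketch like the one below: the container entrypoint pins the engine's Control Hub job label in `dpm.properties` before starting the engine, so every replacement pod registers under the same label. This is an illustration, not an official StreamSets script; the `ENGINE_LABEL` variable, the `./sdc-conf` fallback path, and the commented start command are assumptions for the example, while `dpm.remote.control.job.labels` is the Data Collector property that controls which labels the engine reports.

```shell
#!/bin/sh
# Entrypoint sketch: pin the engine's Control Hub label before starting SDC.
# Assumptions: SDC_CONF points at the Data Collector config directory and
# ENGINE_LABEL is supplied by the pod spec (e.g. "aks-prod").
SDC_CONF="${SDC_CONF:-./sdc-conf}"
ENGINE_LABEL="${ENGINE_LABEL:-aks-prod}"

mkdir -p "$SDC_CONF"
PROPS="$SDC_CONF/dpm.properties"

# Replace an existing label setting, or append one if it is missing,
# so repeated pod restarts stay idempotent.
if grep -q '^dpm.remote.control.job.labels=' "$PROPS" 2>/dev/null; then
  sed -i "s/^dpm.remote.control.job.labels=.*/dpm.remote.control.job.labels=$ENGINE_LABEL/" "$PROPS"
else
  echo "dpm.remote.control.job.labels=$ENGINE_LABEL" >> "$PROPS"
fi

# exec "$SDC_DIST/bin/streamsets" dc   # start the engine (path is illustrative)
```

With this in place, your jobs can target the label (e.g. `aks-prod`) instead of a specific engine instance, so a freshly created pod picks up the work without any manual reassignment.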

