
Hi, is there a way to load data (using the JDBC and HTTP Client processors) into memory only once at the start of a pipeline and have it reused by all streams? This is to avoid reloading the data for every stream (in my case, for every record read from the Kafka topic).


I look forward to your answer.

Carlos.


@ccariman 

You can set the data processing mode to batch to process data in StreamSets.

If you need to process the data in streaming mode instead, set the mode to streaming in the HTTP Client processor, or use Kafka for real-time data processing.

Kindly describe the issue in more detail so I can try to help you with it.


@ccariman if the requirement is to enrich data from Kafka using a JDBC source, then you can use the JDBC Lookup processor. More details on the use case will help us provide further guidance.
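To make the suggestion above concrete, here is a minimal conceptual sketch (plain Python, not StreamSets configuration) of the pattern the JDBC Lookup processor implements: load the reference data into an in-memory cache once, then enrich each streaming record from that cache instead of re-querying the database per record. The table columns (`customer_id`, `segment`) and record shapes are illustrative assumptions, not anything from the thread.

```python
# Conceptual sketch of cache-once / enrich-many lookup enrichment.
# Assumed shapes: lookup rows and Kafka records are plain dicts keyed
# by a hypothetical "customer_id" field.

lookup_cache = {}

def load_lookup_once(rows):
    """Simulate the one-time JDBC load at pipeline start:
    build a key -> enrichment-attributes map in memory."""
    for row in rows:
        lookup_cache[row["customer_id"]] = {"segment": row["segment"]}

def enrich(record):
    """Enrich a single streamed record from the in-memory cache,
    falling back to a default when the key is unknown."""
    extra = lookup_cache.get(record["customer_id"], {"segment": "unknown"})
    return {**record, **extra}

# One-time load when the pipeline starts...
load_lookup_once([
    {"customer_id": 1, "segment": "gold"},
    {"customer_id": 2, "segment": "silver"},
])

# ...then every record from the Kafka topic reuses the cache.
print(enrich({"customer_id": 1, "event": "click"}))
```

In StreamSets itself you would not write this by hand: the JDBC Lookup processor offers a local caching option that keeps lookup results in memory, which addresses the original concern about hitting the source for every record.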

