
I'm working with StreamSets and have three pipelines that each consume from a different Kafka topic. Each pipeline writes to its own dedicated table (e.g., table1, table2, table3), but all three also write to a shared table. I'm using upsert logic (update or insert) to avoid duplicates. My concern is concurrent writes: if all three pipelines try to write to the same row in the shared table at the same time, will StreamSets or the database handle it safely? I'm also wondering how to properly configure retry attempts, error handling, and backpressure in StreamSets so that no data is lost or skipped if database locks or contention occur. What are best practices for configuring the JDBC destination and pipeline error handling for this kind of multi-pipeline write scenario?
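To make the question concrete, here is roughly the kind of upsert-with-retry I have in mind. This is only a sketch assuming PostgreSQL syntax and hypothetical names (shared_table, event_key, payload), not my actual pipeline code: a single atomic INSERT ... ON CONFLICT on a unique key, plus a bounded retry on transient lock/contention errors. The key point I want confirmed is whether the database serializing writers on the unique key (rather than anything StreamSets does) is what makes this safe, as opposed to a separate check-then-insert, which would race.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class SharedTableUpsert {

    // PostgreSQL-style atomic upsert: the database serializes concurrent
    // writers on the unique key, so three pipelines hitting the same row
    // can't create duplicates -- one insert wins, the others become updates.
    // (shared_table, event_key, payload are hypothetical names.)
    private static final String UPSERT_SQL =
        "INSERT INTO shared_table (event_key, payload, updated_at) " +
        "VALUES (?, ?, NOW()) " +
        "ON CONFLICT (event_key) DO UPDATE " +
        "SET payload = EXCLUDED.payload, updated_at = NOW()";

    public static void upsertWithRetry(String jdbcUrl, String key, String payload)
            throws SQLException, InterruptedException {
        int maxAttempts = 5;       // bounded retries so one bad record can't stall the pipeline
        long backoffMillis = 200;  // initial backoff, doubled per attempt

        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try (Connection conn = DriverManager.getConnection(jdbcUrl);
                 PreparedStatement stmt = conn.prepareStatement(UPSERT_SQL)) {
                stmt.setString(1, key);
                stmt.setString(2, payload);
                stmt.executeUpdate();
                return; // success
            } catch (SQLException e) {
                // 40001 = serialization failure, 40P01 = deadlock detected (PostgreSQL).
                // Both are transient under contention and safe to retry.
                String state = e.getSQLState();
                boolean transientConflict = "40001".equals(state) || "40P01".equals(state);
                if (!transientConflict || attempt == maxAttempts) {
                    throw e; // surface the error so the pipeline routes the record instead of dropping it
                }
                Thread.sleep(backoffMillis);
                backoffMillis *= 2;
            }
        }
    }
}
```

In other words: is this retry-with-backoff behavior something I should get from the JDBC destination's built-in settings, or do I need to handle it myself?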
