Here's a Kafka Stream Processor pipeline that reads events from a "raw" topic, performs streaming transforms, and publishes the transformed events to a "refined" topic that multiple downstream clients subscribe to:
![](https://uploads-us-west-2.insided.com/streamsets-en/attachment/6cfee201-a821-4f70-9624-5f78203e3ac9.png)
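The raw→transform→refined flow above can be sketched in plain Python. This is a minimal illustration, not the actual pipeline: the event fields (`ts`, `amount`) and the derived fields are assumptions, and an in-memory list stands in for the Kafka broker (in practice you would use consumer/producer clients on the "raw" and "refined" topics).

```python
import json

# Hypothetical event shape -- the "ts" and "amount" fields are assumptions
# for illustration, not taken from the original pipeline.
def transform(event: dict) -> dict:
    """Example streaming transform: normalize field names and derive a flag."""
    amount = float(event["amount"])
    return {
        "timestamp": event["ts"],
        "amount_usd": round(amount, 2),
        "high_value": amount >= 1000,
    }

# In the real pipeline these messages would be consumed from the "raw" Kafka
# topic; here a list stands in for the broker.
raw_topic = [
    json.dumps({"ts": "2023-01-01T00:00:00Z", "amount": "1250.75"}),
    json.dumps({"ts": "2023-01-01T00:00:01Z", "amount": "42.1"}),
]

# Consume, transform, and "publish" each event to the refined topic.
refined_topic = [json.dumps(transform(json.loads(msg))) for msg in raw_topic]

for msg in refined_topic:
    print(msg)
```

In a production consumer loop, the same `transform` function would sit between a `poll()` on the raw topic and a `produce()` to the refined topic, which keeps the transform logic unit-testable on its own.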
Here is the Stream Processor Pipeline’s placement within a StreamSets Topology that allows visualization and monitoring of the end-to-end data flow, with last-mile pipelines moving the refined events into Delta Lake, Snowflake, Elasticsearch, S3 and ADLS:
![](https://uploads-us-west-2.insided.com/streamsets-en/attachment/6de3930b-7ce3-4c43-88a8-a89caee1c93a.png)
Users can extend the Topology with Transformer for Snowflake to perform push-down ETL on Snowflake using Snowpark, and with Transformer on Databricks Spark to perform ETL on Delta Lake:
![](https://uploads-us-west-2.insided.com/streamsets-en/attachment/fa94c7d8-cb7b-417f-8398-c2a953279975.png)