Hi all, I am trying to start multiple jobs, but the call throws an error. Stopping multiple jobs works as expected. Stop: /jobrunner/rest/v1/jobs/stopJobs ==> working. Start: /jobrunner/rest/v1/jobs/startJobs ==> failing.
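A minimal sketch of what the failing call looks like, for comparison with the working stopJobs call. This assumes the startJobs endpoint accepts the same JSON array of job IDs that stopJobs does, and that auth is passed via the X-SS-* headers shown; the header names, payload shape, and base URL are assumptions to verify against the Control Hub REST API docs.

```python
# Hedged sketch: build a POST to the Control Hub startJobs endpoint.
# ASSUMPTION: payload is a JSON array of job IDs, like stopJobs.
import json
import urllib.request

def build_start_jobs_request(base_url, job_ids, auth_token, component_id):
    """Build (but do not send) a POST request to /jobrunner/rest/v1/jobs/startJobs."""
    url = base_url.rstrip("/") + "/jobrunner/rest/v1/jobs/startJobs"
    body = json.dumps(job_ids).encode("utf-8")   # e.g. ["jobId1", "jobId2"]
    req = urllib.request.Request(url, data=body, method="POST")
    req.add_header("Content-Type", "application/json")
    req.add_header("X-Requested-By", "sdk")              # CSRF header (assumed)
    req.add_header("X-SS-REST-CALL", "true")             # assumed auth headers --
    req.add_header("X-SS-App-Component-Id", component_id)  # check your API docs
    req.add_header("X-SS-App-Auth-Token", auth_token)
    return req

req = build_start_jobs_request("https://cloud.streamsets.com",
                               ["job-id-1", "job-id-2"], "token", "component")
# urllib.request.urlopen(req)  # uncomment to actually send the request
```

Comparing the exact body and headers of a working stopJobs call against the failing startJobs call (for example, from your browser's network tab) is usually the quickest way to spot the difference.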
I have a pipeline that gathers records from an HTTP Client. At one point in the pipeline I have a Stream Selector that filters out records I don't want and sends them to Trash, while the remaining records get written to an S3 destination. At the end is a Pipeline Finisher executor. The issue is when there are no remaining records after the Stream Selector, i.e. the data contained no relevant records. In this case the pipeline job keeps running indefinitely, and I have to stop it manually. How can I stop the pipeline when there are no records left after the Stream Selector?
Hi guys, can anyone help me? I have a pipeline that gets some IDs from BigQuery and sends them to an API, but I keep receiving a "Too Many Requests" error, even on the first record. I've configured the HTTP Client to wait 1000 ms between requests, which should be more than enough to respect the API's limits.
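An illustration (not StreamSets configuration) of how a client is expected to react to HTTP 429: retry with exponential backoff. In the pipeline itself the equivalent fix is usually raising the wait time or lowering parallelism on the HTTP Client stage, but this shows the retry logic the API is asking for; all names here are illustrative.

```python
# Illustration: exponential backoff while an API answers 429 Too Many Requests.
import time

def call_with_backoff(do_request, max_retries=5, base_delay=1.0):
    """Retry do_request() with exponential backoff while it reports 429."""
    for attempt in range(max_retries):
        status, payload = do_request()
        if status != 429:
            return payload
        time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
    raise RuntimeError("still rate-limited after %d retries" % max_retries)

# Fake API that rejects the first two calls, then succeeds.
calls = {"n": 0}
def fake_request():
    calls["n"] += 1
    return (429, None) if calls["n"] <= 2 else (200, "ok")

result = call_with_backoff(fake_request, base_delay=0.01)
```

Note that a 429 on the very first record suggests the limit is shared (other clients or pipeline runners hitting the same key), so a per-request wait alone may not be enough.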
Hi Team, I am using the Strigo lab environment for my training and it suddenly stopped working. I have attached the error screenshot. Please help me resolve the issue.
Hi, I am trying to connect my lab to MySQL. I did the following: in the Deployment menu I configured the JDBC library; in the engine I downloaded the mysql-connector JAR and restarted the engine; in the connection I gave the connection string, username, and password. However, when I test the connection I get the following error: JDBC_00 - Cannot connect to specified database: java.sql.SQLException: No suitable driver found for jdbc:mysql://mysqldb:3306/zomato. Is there anything I am missing here?
Can we rename a worksheet in an Excel file using StreamSets?
Hello all, I am reading a file and parsing it in a Groovy Evaluator. The file name is part of the header information in the Groovy Evaluator. I need help on how to read this information from the header, which is available at each record level. Thanks, Bhaskar Pola
I am trying to fetch all the records from the API using the HTTP Client with the pagination option selected. The processor is not giving any error, but it only returns the records for the first page. Attaching a screenshot for reference. The value list contains 1000 records but the page size of the API is 100. I need to fetch all the records.
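A sketch of the pagination behavior being asked for: keep requesting pages until a page comes back smaller than the page size. The fetch_page function is a stand-in for the real API call; the names are illustrative, not the HTTP Client stage's actual configuration fields.

```python
# Sketch of "by offset" pagination over a 1000-record API with page size 100.
def fetch_all(fetch_page, page_size=100):
    records, offset = [], 0
    while True:
        page = fetch_page(offset=offset, limit=page_size)
        records.extend(page)
        if len(page) < page_size:   # last (possibly partial) page
            break
        offset += page_size
    return records

# Fake API exposing 1000 records in pages of 100.
DATA = list(range(1000))
def fake_fetch(offset, limit):
    return DATA[offset:offset + limit]

all_records = fetch_all(fake_fetch, page_size=100)
```

In the HTTP Client stage the loop above corresponds to the pagination mode plus the result field path; if only the first page comes back, the stage usually cannot find the next-page link or offset in the response, so checking where the stopping condition lives in the response body is a good first step.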
Exit: 1
STDOUT: WARN : Specified Classpath directory '/opt/streamsets-datacollector-5.2.0/libs-common-lib' is empty
Abnormal exit: java.lang.RuntimeException: There cannot be 2 different versions of the same stage:
[Stage='com_streamsets_pipeline_stage_executor_jdbc_JdbcQueryDExecutor' Version='2' Library='streamsets-datacollector-teradata-lib',
Stage='com_streamsets_pipeline_stage_executor_jdbc_JdbcQueryDExecutor' Version='3' Library='streamsets-datacollector-sql-server-bdc-lib',
Stage='com_streamsets_pipeline_stage_processor_jdbctee_JdbcTeeDProcessor' Version='2' Library='streamsets-datacollector-teradata-lib',
Stage='com_streamsets_pipeline_stage_processor_jdbctee_JdbcTeeDProcessor' Version='3' Library='streamsets-datacollector-sql-server-bdc-lib',
Stage='com_streamsets_pipeline_stage_destination_jdbc_JdbcDTarget' Version='6' Library='streamsets-datacollector-teradata-lib',
Stage='com_streamsets_pipeline_stage_destination_jdbc_JdbcDTarget' Version='7' Library='streamsets-datacollect
We have a StreamSets pipeline for generating Postgres incremental records using logical replication slots. It ran successfully for a few days and generated the files. Then, after a few days, StreamSets is still running and in an active state, but the incremental files are no longer being generated. We don't see any errors in the StreamSets or Postgres logs, and the replication slots are filling up in the backend. Does anyone know the root cause of this issue and a solution to fix it? Please suggest. Thank you, Jose Kattakayam
Hello, can I use regex patterns to filter records based on a field name pattern, or use them in the Field Type Converter? For example: if a record has field names containing "cost", e.g. "Average Cost for two", "Cost range", "Parking Costs", etc., I want to send these records to a certain destination. Or, if the field names include "cost", use the Field Type Converter to convert the values to integers. I did not find any specific information on this in the docs. I played around with a few patterns, but even if some passed validation, they failed in preview. -Dhanashri
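A plain-Python illustration of the two operations being asked about: routing a record when any field name matches a pattern, and converting the matching fields to integers. In StreamSets the same idea would be expressed in the stage's expression language or field path syntax; this only shows the regex semantics, and all names here are made up for the example.

```python
# Illustration: regex over field NAMES (not values), case-insensitive.
import re

COST_PATTERN = re.compile(r"cost", re.IGNORECASE)

def has_cost_field(record):
    """True if any field name contains 'cost' (case-insensitive)."""
    return any(COST_PATTERN.search(name) for name in record)

def convert_cost_fields(record):
    """Return a copy with every 'cost'-named field converted to int."""
    return {name: int(value) if COST_PATTERN.search(name) else value
            for name, value in record.items()}

record = {"Average Cost for two": "40", "Cost range": "2", "City": "Pune"}
routed = has_cost_field(record)          # route to the destination if True
converted = convert_cost_fields(record)
```

One common source of "passes validation, fails in preview" is a pattern written against field values when the stage is matching field paths (or vice versa), so it is worth checking which of the two the stage's regex option applies to.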
Hi, I am getting the error SNOWFLAKE_56 - Key fields not specified for table 'ORCL_EMP'. Table Auto Create is enabled, CDC is also enabled, and the table key column is set to the ID column.