Calling an Oracle SQL Stored Procedure as a Pipeline Destination
Hello, We would like to create a pipeline that performs, as a final step, the insertion/update of data in an Oracle database via the execution of an Oracle SQL stored procedure. Therefore, we would like to ask you whether it is possible to call an Oracle SQL stored procedure via either a built-in Destination component or a custom Destination component. If the above approach is feasible, could you please provide a list of steps to perform the operation? Thank you, Bolgas
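For reference, a minimal sketch of the plain JDBC call that a custom destination (or an executor stage) would need to issue; the procedure name UPSERT_ORDERS, its two parameters, and the connection details are all hypothetical:

```java
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;

public class OracleProcCall {
    public static void main(String[] args) throws Exception {
        // Connection details are placeholders; supply your own host, service name, and credentials.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//db-host:1521/ORCLPDB1", "user", "password")) {
            // UPSERT_ORDERS is a hypothetical procedure taking an ID and a status string.
            try (CallableStatement stmt = conn.prepareCall("{call UPSERT_ORDERS(?, ?)}")) {
                stmt.setLong(1, 42L);
                stmt.setString(2, "SHIPPED");
                stmt.execute();
            }
        }
    }
}
```

In Data Collector, the JDBC Query executor is often suggested for issuing a per-record CALL statement like this without writing a custom stage.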
Error on Web Socket Client Destination
Hi, I have a pipeline where the origin and the destination are WS clients with the same configuration. For the destination, I want to send a text message back to the server as an ACK message. The pipeline works well along the way, but the destination gives me an error for every single message. Can you please advise what this error message means? Please note that the WS URL is configured correctly, since it is the same as the origin's, which works fine and gets the data.

Error Stack Trace:
HTTP_50 - Error sending resource. Reason: java.io.IOException: /opt/streamsets-datacollector-5.3.0 is a directory
com.streamsets.pipeline.api.StageException: HTTP_50 - Error sending resource. Reason: java.io.IOException: /opt/streamsets-datacollector-5.3.0 is a directory
    at com.streamsets.pipeline.stage.destination.websocket.WebSocketTarget.throwStageException(WebSocketTarget.java:202)
    at com.streamsets.pipeline.stage.destination.websocket.WebSocketTarget.write(WebSocketTarget.java:180)
    at com.streamsets.pipeline.api.ba
Can a pipeline access Kafka headers?
I would like to add OpenTelemetry tracing support to our pipelines. Trace information is propagated through headers, so I’d like to access the headers to pass them through the pipeline. I don’t see the headers on the record so I’m curious if it is possible to access them.
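For context, a minimal sketch of how Kafka message headers are read with the plain Kafka consumer API; the broker address and topic name are placeholders, and whether a given StreamSets Kafka origin surfaces these headers as record header attributes depends on the stage and version:

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.header.Header;

public class HeaderPeek {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("group.id", "trace-demo");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("events")); // hypothetical topic
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            for (ConsumerRecord<String, String> record : records) {
                // W3C trace context is usually propagated in the "traceparent" header.
                for (Header h : record.headers()) {
                    System.out.println(h.key() + " = " + new String(h.value()));
                }
            }
        }
    }
}
```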
Edit Pipeline Stage Config & Republish
Facing some issues when trying to update an existing Control Hub pipeline and re-publish it as a new version. I am trying to update a particular stage within the pipeline and then re-publish as a new version. When I publish the pipeline, the version number is correct (it is incremented by 1), but the pipeline is empty. How do I carry through all the contents of the previous version (stages, etc.)?
Data doesn't show up in Hive
The data pipeline is JDBC → Hive Metadata → Hadoop FS and Hive Metastore. Data moves from an Oracle database to the Hadoop file system. The schema is also getting created, but there is no data in the tables. I tried changing the idle timeout setting to 10 seconds. Any help in this regard will be greatly appreciated.
Running SQL scripts using Streamsets Transformer
Hi everyone, I am new to StreamSets and working on a project with some specific requirements. We need to use Transformer. We have to run several SQL scripts to create tables and insert data into them from another table using StreamSets Transformer; the tables need to be created on AWS S3. I am using the Spark SQL Query processor, but when I try to run 2-3 SQL scripts within one Spark SQL Query component, it gives an error. Requesting your help on the points below:
1. Can we run multiple scripts within one Spark SQL Query component? If yes, how?
2. What is the best way to execute a set of SQL scripts that need to run sequentially in StreamSets Transformer?
Any help or advice would be much appreciated. Thanks a lot in advance.
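A minimal sketch of the usual workaround at the Spark level, assuming one statement per spark.sql() call; the table and bucket names are hypothetical:

```java
import org.apache.spark.sql.SparkSession;

public class SequentialSql {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("sequential-sql")
                .getOrCreate();

        // Each spark.sql() call executes exactly one statement; run them in order
        // rather than packing several statements into a single query string.
        spark.sql("CREATE TABLE IF NOT EXISTS staging_orders (id BIGINT, status STRING) "
                + "USING parquet LOCATION 's3a://my-bucket/staging_orders'"); // hypothetical bucket
        spark.sql("INSERT INTO staging_orders SELECT id, status FROM raw_orders");

        spark.stop();
    }
}
```

Spark's sql() parses a single statement at a time, which would explain the error when several scripts are packed into one Spark SQL Query component.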
How to redirect to the Jira dashboard from a servlet
I want to redirect from a servlet to Jira's dashboard. What I am trying to do: when the user clicks Create Issue, fills in all the details, and submits, I check the user with OpenID; after the check, I need to redirect the user to the Jira dashboard, or ideally to any workflow screen. I tried the redirect with the following line:
response.sendRedirect("/Dashboard.jspa");
After the servlet finishes, I want to redirect; I have the HttpServletResponse object. When I use the above redirect method, it doesn't redirect. Please help.
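A minimal sketch of a redirect that typically works here, assuming nothing has been written to the response before sendRedirect is called (a committed response is the usual reason sendRedirect silently fails); the servlet class name is hypothetical, and Jira's dashboard normally lives under /secure/Dashboard.jspa rather than /Dashboard.jspa:

```java
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class IssueServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        // ... validate the OpenID user here ...

        // sendRedirect only works while the response is uncommitted,
        // so write no body first and return immediately after calling it.
        resp.sendRedirect(req.getContextPath() + "/secure/Dashboard.jspa");
    }
}
```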
DELTA_LAKE_01 - Could not create SQL DataSource
After configuring the Databricks Delta Lake destination, when I try to validate my pipeline I receive the following error: DELTA_LAKE_01 - Could not create SQL DataSource: com.zaxxer.hikari.pool.HikariPool$PoolInitializationException: Failed to initialize pool: [Simba][SparkJDBCDriver](500593) Communication link failure. Failed to connect to server. Reason: javax.net.ssl.SSLHandshakeException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target.
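The PKIX message means the JVM running Data Collector does not trust the certificate chain presented by the Databricks endpoint. A minimal sketch of the underlying JVM mechanism, with placeholder paths and password; in practice the certificate is usually imported into the JVM truststore with keytool, and with Data Collector these properties are typically passed via SDC_JAVA_OPTS:

```java
public class TrustStoreSetup {
    public static void main(String[] args) {
        // Point the JVM at a truststore containing the Databricks endpoint's CA chain.
        // Paths and password are placeholders; with Data Collector this is normally
        // supplied as -Djavax.net.ssl.trustStore=... in SDC_JAVA_OPTS instead.
        System.setProperty("javax.net.ssl.trustStore", "/path/to/truststore.jks");
        System.setProperty("javax.net.ssl.trustStorePassword", "changeit");
        // Any SSL connection opened after this point validates against that store.
    }
}
```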
Invoke a GET request in CSV format
I want to send a GET request to a web URL and get the result set in CSV format, which is the output format. I need pagination set up to read all the records available at that point in time. My token expires every 30 minutes and needs to be refreshed to get a new token. Any insight on how to implement this? Also, can I call multiple APIs in the same pipeline and merge all the results together?
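A minimal sketch of the token-refresh-plus-paged-GET pattern using java.net.http (Java 11+); the endpoint URLs, the auth flow, and the 25-minute refresh margin are all hypothetical:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;
import java.time.Instant;

public class CsvFetcher {
    private final HttpClient client = HttpClient.newHttpClient();
    private String token;                       // current bearer token
    private Instant tokenExpiry = Instant.EPOCH;

    // Hypothetical endpoints; substitute your API's real URLs and auth flow.
    private static final String TOKEN_URL = "https://api.example.com/oauth/token";
    private static final String DATA_URL = "https://api.example.com/export?format=csv&page=";

    private void refreshTokenIfNeeded() throws Exception {
        if (Instant.now().isBefore(tokenExpiry)) return; // token still valid
        HttpRequest req = HttpRequest.newBuilder(URI.create(TOKEN_URL))
                .POST(HttpRequest.BodyPublishers.ofString("grant_type=client_credentials"))
                .build();
        HttpResponse<String> resp = client.send(req, HttpResponse.BodyHandlers.ofString());
        token = resp.body(); // real APIs return JSON; parse out the token field here
        tokenExpiry = Instant.now().plus(Duration.ofMinutes(25)); // refresh before the 30-minute expiry
    }

    public String fetchPage(int page) throws Exception {
        refreshTokenIfNeeded();
        HttpRequest req = HttpRequest.newBuilder(URI.create(DATA_URL + page))
                .header("Accept", "text/csv")
                .header("Authorization", "Bearer " + token)
                .GET()
                .build();
        return client.send(req, HttpResponse.BodyHandlers.ofString()).body();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(new CsvFetcher().fetchPage(1)); // fetch the first CSV page
    }
}
```

The HTTP Client origin's built-in pagination and OAuth settings are worth checking before hand-rolling this in a pipeline.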
Pipeline Status: RUN_ERROR: S3_SPOOLDIR_26 - S3 runner failed. Reason org.apache.http.NoHttpResponseException: The target server failed to respond
We have a multi-threaded runner listening for new data on an S3 bucket, then processing and filtering the data. Our pipeline seems to work well for a few hours, but then hits the error below. We are ingesting about 5-10 MB of data per second, and that gets filtered down to about 2% of the data through our processing and filtering. Is this due to too much data or to a configuration problem? Pipeline Status: RUN_ERROR: S3_SPOOLDIR_26 - S3 runner failed. Reason org.apache.http.NoHttpResponseException: The target server failed to respond
Hitting the SPOOLDIR_01 error message
I am using the Directory origin to read *.log.gz files from the server, using lexicographically ascending file names as the read order. I set the data format to 'Text' and the compression format to 'Compressed File', with 1024000000 as the max line length. I hit the SPOOLDIR_01 error code, and the error message said: 'Failed to process file '…...log.gz' at position '0': com.streamsets.pipeline.lib.dirspooler.BadSpoolFileException: com.streamsets.pipeline.lib.parser.DataParserException: TEXT_PARSER_00 - Cannot advance file '…...log.gz' to offset '0''. I am not sure why this problem happened; does anyone have an idea?
Load Date column from Oracle to Snowflake Failure
Hi team, we built a pipeline to load an Oracle table into Snowflake using StreamSets. There is one date column, and in Oracle the value is always like "2012-04-25 00:00:00.000". We created this column as DATE in Snowflake too; however, the load failed, and it only succeeded after I changed the data type of the corresponding Snowflake column to TIMESTAMP_NTZ(9). So how can I convert this column to DATE in StreamSets when loading, which processor should I use, and what expression does the conversion need? Also, what is the root cause of the failure for this direct DATE-to-DATE load from Oracle to Snowflake with StreamSets?
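A likely root cause: an Oracle DATE always carries a time-of-day component, so the JDBC driver surfaces it as a timestamp-like value, which a Snowflake DATE column may reject on a direct load. A minimal, illustrative sketch of the truncation involved:

```java
import java.sql.Timestamp;
import java.time.LocalDate;

public class DateTruncation {
    public static void main(String[] args) {
        // Oracle DATE always carries a time-of-day component, so JDBC surfaces
        // "2012-04-25 00:00:00.000" as a timestamp rather than a plain date.
        Timestamp fromOracle = Timestamp.valueOf("2012-04-25 00:00:00.000");

        // Dropping the time component yields a value a Snowflake DATE column accepts.
        LocalDate dateOnly = fromOracle.toLocalDateTime().toLocalDate();
        System.out.println(dateOnly); // 2012-04-25
    }
}
```

In a pipeline, a Field Type Converter or an Expression Evaluator placed before the Snowflake destination is the usual place to perform this conversion.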