Greetings! I’ve created a pipeline that should listen for and incrementally load records from a login table, match information about the logged-in user through a JDBC lookup, and send the enriched record to another table. The problem is that the pipeline won’t pick up any new records after the first one, and it keeps inserting that same record forever. Here are the Query Consumer config, the lookup config, and the destination operation. Thank you!
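For reference, an incremental JDBC Query Consumer normally needs the query to reference ${OFFSET} and order by the configured offset column, otherwise the same rows are returned on every query interval. A minimal sketch, assuming a numeric login_id offset column (the table and column names here are assumptions, not from the post):

```sql
-- Incremental mode: Offset Column = login_id, Initial Offset = 0 (assumed names).
-- ${OFFSET} is replaced with the last stored offset value on each query interval,
-- so only rows newer than the previous batch are returned.
SELECT * FROM login
WHERE login_id > ${OFFSET}
ORDER BY login_id
```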
I’m following the DataOps Platform Fundamentals course, and the Build a Pipeline chapter has you enter “/zomato” as the Files Directory in the configuration of a Directory origin. However, when validating the pipeline I get the error: SPOOLDIR_12 - Directory '/zomato' does not exist: conf.spoolDir. Any solution for this?
Hi, I am creating a new Connection with the SFTP protocol and trying to connect to the SFTP server using a private key. While creating the Connection I entered: Authentication = Private Key, Private Key Provider = Plain Text, Private Key = the text I copied from the .ppk file, and the correct Username; there was no passphrase. When I hit Test Connection I get the error below:

Stage 'com_streamsets_pipeline_lib_remote_RemoteConnectionVerifier_01' initialization error: java.lang.NullPointerException: Cannot invoke "net.schmizz.sshj.userauth.keyprovider.KeyProvider.getPublic()" because "this.kProv" is null (CONTAINER_0701)

However, if I enter the same values in the Credentials tab of the pipeline and run a preview, I can successfully connect to the SFTP server and read the file there. The problem only occurs when I set this up as a Connection. Kindly help me resolve this issue.
I have a stored procedure that works when I call it manually, both within Snowflake and via snowsql. I am trying to automate the call of this procedure via StreamSets, but it continues to fail with this ambiguous error message:

Technical details: An exception has arisen while executing the query 'CALL ETS_METRICS.TABLEAU_METADATA_COLLECTION();': SQL compilation error: Unknown user-defined function ETS_METRICS.TABLEAU_METADATA_COLLECTION

I am aware of this post, and it is not relevant, since it only covers how to call a procedure, not how to troubleshoot this error. That said, I did try making the query a stop event, and the same error is thrown. What boggles my mind the most is that I am specifying the schema even though the defined connection already connects to the correct schema, and the same role that runs all our automation is used in the connection (and also happens to be the owner of the stored procedure). Any ideas you might be able to offer are appreciated. Thank you in advance.
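Two things worth checking (assumptions, not a confirmed fix): Snowflake resolves a two-part name against the session's current database, and it resolves procedure overloads by the exact argument signature. A sketch of both checks, with MY_DB as a placeholder database name:

```sql
-- Fully qualify the call so resolution does not depend on the session's
-- current database (MY_DB is a placeholder, not from the original post).
CALL MY_DB.ETS_METRICS.TABLEAU_METADATA_COLLECTION();

-- Confirm the procedure's exact signature as the connection's role sees it;
-- a signature mismatch also produces "Unknown user-defined function".
SHOW PROCEDURES LIKE 'TABLEAU_METADATA_COLLECTION' IN SCHEMA MY_DB.ETS_METRICS;
```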
Hi All, I have a requirement to run an aggregation query using $match, $group, $sum, $min, $max, and counts on a MongoDB Atlas collection in a StreamSets pipeline, probably in a lookup processor, by passing the input fields. Is this possible? Thanks, Mahender
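For illustration, the aggregation described above would normally be passed to collection.aggregate() in MongoDB. Below is a minimal sketch: the pipeline definition uses the stages named in the question, while run_pipeline is just a tiny in-memory stand-in for the server-side aggregation so the stage semantics are visible (all field names such as user_id and amount are assumptions):

```python
# Hypothetical aggregation pipeline; with pymongo this would be
# collection.aggregate(pipeline). Field names are assumptions.
pipeline = [
    {"$match": {"status": "active"}},
    {"$group": {
        "_id": "$user_id",
        "total": {"$sum": "$amount"},
        "min_amount": {"$min": "$amount"},
        "max_amount": {"$max": "$amount"},
        "count": {"$sum": 1},
    }},
]

def run_pipeline(docs, pipeline):
    """Tiny in-memory stand-in for collection.aggregate(), supporting
    $match (equality only) and $group with $sum / $min / $max."""
    for stage in pipeline:
        if "$match" in stage:
            cond = stage["$match"]
            docs = [d for d in docs if all(d.get(k) == v for k, v in cond.items())]
        elif "$group" in stage:
            spec = stage["$group"]
            groups = {}
            for d in docs:
                key = d[spec["_id"].lstrip("$")]
                acc = groups.setdefault(key, {"_id": key})
                for field, op in spec.items():
                    if field == "_id":
                        continue
                    (name, arg), = op.items()
                    # "$field" references pull the value; literals (e.g. 1) pass through.
                    val = d[arg.lstrip("$")] if isinstance(arg, str) else arg
                    if name == "$sum":
                        acc[field] = acc.get(field, 0) + val
                    elif name == "$min":
                        acc[field] = min(acc.get(field, val), val)
                    elif name == "$max":
                        acc[field] = max(acc.get(field, val), val)
            docs = list(groups.values())
    return docs
```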
How can I deduplicate records and keep the latest one, based on a timestamp field, in StreamSets (IBM)?
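The "keep latest per key" logic itself is small enough to sketch; in a pipeline this could live in a scripting processor. A minimal sketch, where the key and timestamp field names (customer_id, updated_at) are assumptions:

```python
# Keep only the newest record per key, judged by a timestamp field.
# Field names are hypothetical; ISO-8601 strings compare correctly as text.
def dedupe_latest(records, key="customer_id", ts="updated_at"):
    latest = {}
    for rec in records:
        k = rec[key]
        if k not in latest or rec[ts] > latest[k][ts]:
            latest[k] = rec
    return list(latest.values())
```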
Hi, I have an API which takes page_no and page_size as parameters for pagination. The API response is not JSON data and does not contain records in a list/array; it is a binary byte array as a string, because the API returns a zip file. The response headers contain total_pages, page_no, and page_size. In StreamSets, pagination by page number requires providing a result field path for incrementing to the next page number, which is not possible in my case. Refer to the doc: https://learn.ocp.ai/guides/exports-api. Could anyone please suggest a solution for this?
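Since the paging state lives in a response header rather than in the body, one option is to drive the loop yourself, e.g. from a script stage or an external fetcher. A minimal sketch: fetch_page is a hypothetical stand-in for the real HTTP call; the total_pages header name comes from the question, everything else is assumed:

```python
# Stand-in for the real export API call: returns raw zip bytes plus the
# paging headers. The payload contents here are made up for illustration.
def fetch_page(page_no, page_size):
    data = [b"zip-part-0", b"zip-part-1", b"zip-part-2"]
    headers = {"total_pages": "3",
               "page_no": str(page_no),
               "page_size": str(page_size)}
    return data[page_no], headers

def fetch_all(page_size=1):
    page_no, parts = 0, []
    while True:
        body, headers = fetch_page(page_no, page_size)
        parts.append(body)
        # Stop based on the header, not the (binary) body.
        if page_no + 1 >= int(headers["total_pages"]):
            return parts
        page_no += 1
```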
We have an HTTP Client origin that pulls from our ServiceNow site the records that would be returned if we ran a report in ServiceNow directly. At first blush the records pulled all appear to be accounted for, but unfortunately the origin just keeps looping and pulling every record over and over. I reduced the batch size to as low as 100 records without effect (i.e., it still loops endlessly). The URL with parameters is:

https://<intentionally_redacted>.servicenowservices.com/api/now/table/task?sysparm_query=closed_atISNOTEMPTY%5Eclosed_at%3E%3Djavascript%3Ags.dateGenerate('2023-01-01'%2C'00%3A00%3A00')%5Eclosed_by.department.nameSTARTSWITHPBO%20-%20ETS%5Eassignment_group!%3Daeb1d3bc3772310057c29da543990ea2%5Eassignment_group!%3D4660e3fc3772310057c29da543990e0b%5EnumberNOT%20LIKEGAPRV%5EnumberNOT%20LIKERCC%5Esysparm_display_value=true%5Esysparm_limit=100%5Esysparm_offset=0

I have the stage set to pull in Batch mode, and for pagination I have tried all five modes, including “None”. Since
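For comparison, the intended offset paging against the ServiceNow Table API looks like the loop below: advance sysparm_offset by the batch size each request and stop when a page comes back empty, rather than restarting from offset 0. fetch_batch is a hypothetical stand-in for the real HTTP call, and the record contents are made up:

```python
# Stand-in for a GET against /api/now/table/task with sysparm_limit and
# sysparm_offset; here a fixed list of 250 fake task records.
def fetch_batch(offset, limit):
    all_records = [{"number": f"TASK{i:04d}"} for i in range(250)]
    return all_records[offset:offset + limit]

def fetch_all(limit=100):
    offset, records = 0, []
    while True:
        batch = fetch_batch(offset, limit)
        if not batch:          # an empty page means we are past the last record
            return records
        records.extend(batch)
        offset += limit        # advance the offset instead of restarting at 0
```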