Hi, I am creating a new Connection with the SFTP protocol and trying to connect to the SFTP server using a private key. I entered the following while creating the Connection: Authentication: Private Key; Private Key Provider: Plain Text; Private Key: the text copied from the .ppk file; Username: the correct one; there was no passphrase.

When I hit Test Connection, I get the error below:

Stage 'com_streamsets_pipeline_lib_remote_RemoteConnectionVerifier_01' initialization error: java.lang.NullPointerException: Cannot invoke "net.schmizz.sshj.userauth.keyprovider.KeyProvider.getPublic()" because "this.kProv" is null (CONTAINER_0701)

However, if I enter the same values in the Credentials tab of the pipeline and preview it, I can successfully connect to the SFTP server and read the file there. The problem only occurs when I configure the same thing via a Connection. Kindly help me resolve this issue.
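For what it's worth, the sshj library named in that stack trace parses OpenSSH/PEM key text, so a PuTTY .ppk body pasted as a plain-text key may fail to parse and leave the KeyProvider null. A minimal sketch (the file path is hypothetical; it assumes sshj on the classpath) to check whether the pasted key text parses outside StreamSets:

```java
import java.nio.file.Files;
import java.nio.file.Path;

import net.schmizz.sshj.SSHClient;
import net.schmizz.sshj.userauth.keyprovider.KeyProvider;

public class KeyParseCheck {
    public static void main(String[] args) throws Exception {
        // The same key text you pasted into the Connection (hypothetical path).
        String keyText = Files.readString(Path.of("/tmp/mykey.txt"));

        SSHClient ssh = new SSHClient();
        // loadKeys(privateKey, publicKey, passwordFinder) expects PEM/OpenSSH
        // text; if this throws or yields no usable key, the same text is
        // unlikely to work as a plain-text private key in the Connection.
        KeyProvider kp = ssh.loadKeys(keyText, null, null);
        System.out.println("Parsed key algorithm: " + kp.getPublic().getAlgorithm());
    }
}
```

If the text turns out to be in PuTTY's .ppk format, exporting it from PuTTYgen in OpenSSH format first is a common fix.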
I have a stored procedure that works when I call it manually, both within Snowflake and via snowsql. I am trying to automate the call of this procedure via StreamSets, but it keeps failing with this ambiguous error message:

Technical details: An exception has arisen while executing the query 'CALL ETS_METRICS.TABLEAU_METADATA_COLLECTION();': SQL compilation error: Unknown user-defined function ETS_METRICS.TABLEAU_METADATA_COLLECTION

I am aware of this post, but it is not relevant, since it only covers how to make the call, not how to troubleshoot this error. That said, I did try making the query a stop event, and the same error is thrown. What boggles my mind the most is that I am specifying the schema even though the defined connection already connects to the correct schema, and the connection uses the same role that runs all our automation (which also happens to be the owner of the stored procedure). Any ideas you might be able to offer are appreciated. Thank you in advance.
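In Snowflake, "Unknown user-defined function" is often a name-resolution issue: procedures resolve against the session's current database and the exact argument signature. Fully qualifying database.schema.procedure() and matching the argument list can rule both out. A minimal JDBC sketch of that idea, where MY_DB, the account URL, and the credentials are all hypothetical:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import java.util.Properties;

public class CallSnowflakeProc {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("user", "AUTOMATION_USER");   // hypothetical user
        props.put("password", "***");           // placeholder
        props.put("role", "AUTOMATION_ROLE");   // hypothetical role
        props.put("db", "MY_DB");               // hypothetical database
        props.put("schema", "ETS_METRICS");

        try (Connection conn = DriverManager.getConnection(
                "jdbc:snowflake://myaccount.snowflakecomputing.com/", props);
             Statement stmt = conn.createStatement()) {
            // Fully qualify database.schema.procedure() so resolution does not
            // depend on the session's current database, and make sure the
            // argument list matches the procedure's signature exactly.
            stmt.execute("CALL MY_DB.ETS_METRICS.TABLEAU_METADATA_COLLECTION()");
        }
    }
}
```

If the manual snowsql session sets a current database that the StreamSets connection does not, that alone can explain the difference in behavior.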
Hi all, I have a requirement to run an aggregation query using match, group, sum, min, max, and count on a MongoDB Atlas collection in a StreamSets pipeline, probably in a lookup processor, by passing the input fields. Is that possible? Thanks, Mahender
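Whether a StreamSets lookup processor can issue an aggregation is the open question here, but the aggregation itself is straightforward with the MongoDB Java driver, for example from a scripting stage or a standalone service. A sketch of the kind of pipeline described, with the connection string, database, collection, and field names all made up:

```java
import java.util.Arrays;

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Accumulators;
import com.mongodb.client.model.Aggregates;
import com.mongodb.client.model.Filters;
import org.bson.Document;

public class AtlasAggregation {
    public static void main(String[] args) {
        // Hypothetical Atlas connection string.
        try (MongoClient client = MongoClients.create(
                "mongodb+srv://user:pass@cluster0.example.mongodb.net")) {
            MongoCollection<Document> coll =
                client.getDatabase("mydb").getCollection("orders"); // made-up names

            coll.aggregate(Arrays.asList(
                Aggregates.match(Filters.eq("status", "A")),        // $match on an input field
                Aggregates.group("$customerId",                     // $group by key
                    Accumulators.sum("total", "$amount"),           // $sum
                    Accumulators.min("minAmount", "$amount"),       // $min
                    Accumulators.max("maxAmount", "$amount"),       // $max
                    Accumulators.sum("count", 1))                   // count via $sum: 1
            )).forEach(doc -> System.out.println(doc.toJson()));
        }
    }
}
```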
How do I deduplicate records and keep the latest one based on a timestamp field in IBM StreamSets?
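As far as I know, StreamSets' Record Deduplicator passes the first occurrence of a record within its comparison window rather than the latest by a timestamp field, so keeping the latest usually needs custom logic, e.g. in a scripting processor. The core idea, sketched in plain Java with a made-up Event type:

```java
import java.time.Instant;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DedupLatest {
    // Hypothetical record shape: a dedup key, a timestamp, and a payload.
    record Event(String key, Instant ts, String payload) {}

    // Keep only the newest event per key, compared on the timestamp field.
    static Map<String, Event> dedupKeepLatest(List<Event> events) {
        Map<String, Event> latest = new HashMap<>();
        for (Event e : events) {
            // merge() keeps whichever of the existing and incoming event is newer.
            latest.merge(e.key(), e, (oldE, newE) ->
                newE.ts().isAfter(oldE.ts()) ? newE : oldE);
        }
        return latest;
    }
}
```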
Hi, I have an API which takes page_no and page_size as parameters for pagination. The API response is not JSON data and does not contain records in a list/array; it is a binary byte array returned as a string, because the API returns a zip file. The response headers contain total_pages, page_no, and page_size. In StreamSets, pagination by page number requires providing a result field path for incrementing to the next page number, which is not possible in my case. See the docs: https://learn.ocp.ai/guides/exports-api Could anyone please suggest a solution?
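One workaround when the page counter lives in response headers and the body is binary is to drive the paging loop outside the HTTP stage and hand StreamSets the downloaded files (for example, via a Directory origin). A minimal sketch of that loop with java.net.http, where the URL and page size are made up:

```java
import java.io.IOException;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Files;
import java.nio.file.Path;

public class ZipExportPager {
    public static void main(String[] args) throws IOException, InterruptedException {
        HttpClient client = HttpClient.newHttpClient();
        int page = 1;
        int totalPages = 1; // updated from the response header after the first call

        while (page <= totalPages) {
            HttpRequest req = HttpRequest.newBuilder(URI.create(
                    "https://example.com/exports?page_no=" + page + "&page_size=100")) // hypothetical URL
                .GET().build();

            // The body is the raw zip bytes; the paging state is in the headers.
            HttpResponse<byte[]> resp =
                client.send(req, HttpResponse.BodyHandlers.ofByteArray());
            totalPages = Integer.parseInt(
                resp.headers().firstValue("total_pages").orElse("1"));

            Files.write(Path.of("export-page-" + page + ".zip"), resp.body());
            page++;
        }
    }
}
```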