Got a Question?
Can't find what you're looking for? Ask it here!
- 298 Topics
- 651 Replies
Hi StreamSets, I would like to know whether there is an initiative to introduce standard origins for popular SaaS apps like Shopify, Magento, Branch, etc. If there is a space where we can vote for these connectors so they can be prioritised for development, please share that information.
I am still using a legacy version and recently ran into this build issue. I was trying to build datacollector-edge-oss version 3.14 from source with the command gradlew goClean dist publishToMavenLocal --build-cache --stacktrace --info --scan, and the build is failing as per the scan results: gradle scan link. To my understanding, the Bitbucket API is not accessible, and a fork of the same inflect library is available at: volatile tech link. Kindly let me know how I can resolve this.
Hi Team, I am facing an issue reading data through the S3 origin within Transformer; I am able to read data through the S3 origin in Data Collector without any problem. I am trying to read data from an S3 origin and copy it to a different location using an S3 destination, with EMR as the compute engine. The job runs for several minutes on EMR and completes successfully, and there is no error in the logs (both the EMR and StreamSets pipeline logs). I do get the warning below, but I am not sure whether it is causing the issue:

java.nio.file.NoSuchFileException: /data/transformer/runInfo/testRun__9e731964-6f21-4956-99fa-82206f3451f5__149e11c1-f697-11eb-b9dc-fd846d33049d__56e36c1c-f8c6-11eb-9295-0fa62e75e081@149e11c1-f697-11eb-b9dc-fd846d33049d/run1630923519827/driver-topLevelError.log

I have verified the staging directory as well; all the required files appear to be populated there and are eventually read via spark-submit. In the end, the Transformer pipeline finishes with status START_ERROR: Job completed successfully. This is a show stopper.
We are using StreamSets 3.21 OSS in a unique way: we always create pipelines as templates. We have put a custom UI in the customer's hands to pick their preferred source, and based on their choice we call the StreamSets REST APIs to create customer-specific pipelines in real time. That is awesome to me, and I believe it is a unique way of building pipelines that others may find interesting. I am thinking of writing a blog post about it, and I am curious whether StreamSets suggests following any specific templates.
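For anyone curious what this template-driven flow can look like, here is a minimal Python sketch. The JSON key names ("pipelineConfig", "title", "stages", "configuration") mirror a typical Data Collector pipeline export, and the import endpoint path and X-Requested-By header are assumptions based on common SDC REST usage; verify both against your own export file and the RESTful API reference bundled with your 3.21 instance before relying on them.

```python
import copy
import json
import urllib.request


def instantiate_template(template, customer, origin_overrides):
    """Return a customer-specific copy of a template pipeline export.

    The key layout used here (pipelineConfig -> title / stages ->
    configuration) follows a typical SDC export, but compare it with your
    own exported JSON -- it is an assumption in this sketch.
    """
    pipeline = copy.deepcopy(template)
    cfg = pipeline["pipelineConfig"]
    cfg["title"] = f"{customer}-{cfg['title']}"
    # Apply the origin settings the customer picked in the custom UI.
    origin = cfg["stages"][0]  # assumption: stage 0 is the origin
    for entry in origin["configuration"]:
        if entry["name"] in origin_overrides:
            entry["value"] = origin_overrides[entry["name"]]
    return pipeline


def import_pipeline(base_url, auth_header, pipeline):
    """POST the generated pipeline to SDC.

    The endpoint path and query parameter below are assumptions -- check
    the REST API reference in your SDC UI for the exact import route.
    """
    req = urllib.request.Request(
        f"{base_url}/rest/v1/pipeline/import?autoGeneratePipelineId=true",
        data=json.dumps(pipeline).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "X-Requested-By": "sdc",  # CSRF-style header SDC commonly expects
            "Authorization": auth_header,
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

The nice property of keeping instantiate_template pure (no I/O) is that each customer-specific payload can be unit-tested without a running Data Collector.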
I have the JSON file of a StreamSets pipeline, and I need two copies of this pipeline with different names in a single SCH. We renamed the title in the JSON file to the first name and imported it into SCH. The next time, I changed the title and, on importing into the same SCH, the version of the pipeline with the first name got updated rather than a new pipeline being created. Please provide some insight into having two copies of the same pipeline with different names in the same SCH.
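The behaviour described above is consistent with the import being matched on the pipeline's internal ID rather than its title, so changing only the title produces a new version of the same pipeline. In case it helps, here is a minimal Python sketch of giving the copy a fresh identity before import; the key names ("pipelineConfig", "pipelineId", "info") follow a typical export but are assumptions here, so compare them against your own JSON file.

```python
import copy
import uuid


def clone_pipeline_export(export_obj, new_title):
    """Return a copy of an exported pipeline with a new title AND a fresh
    pipeline ID, so the import is treated as a new pipeline instead of a
    new version of the existing one.

    Key names (pipelineConfig, pipelineId, info) follow a typical export
    JSON but are assumptions in this sketch -- verify against your file.
    """
    clone = copy.deepcopy(export_obj)
    cfg = clone["pipelineConfig"]
    cfg["title"] = new_title
    # Assumption: imports are matched on the pipeline ID, not the title,
    # so generate a new one for the copy.
    cfg["pipelineId"] = f"{new_title}-{uuid.uuid4()}"
    # Some exports repeat the ID and title in a nested "info" block;
    # keep it consistent if it is present.
    if "info" in cfg:
        cfg["info"]["pipelineId"] = cfg["pipelineId"]
        cfg["info"]["title"] = new_title
    return clone
```

After writing the cloned object back out with json.dump, importing the original file and the cloned file should yield two independently versioned pipelines with different names.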