Build it With Brenna: Hands-on Workshop with a MySQL Origin and a Snowflake Destination
Hello, I am looking for documentation for the Transpose processor in Transformer pipelines. I searched here: https://docs.streamsets.com/portal/platform-transformer/latest/search.html?searchQuery=transpose but did not find it. Can someone share the link to the relevant documentation? -Dhanashri
Hi, I use the Kinesis Consumer on Data Collector 5.3. The configuration shows no errors, and when I preview the pipeline everything works, but no records are read. This seems to be because only one shard contains the actual data. How can I configure which shard ID to read from? Thanks, Marco
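Not an official fix, but a quick way to confirm which shard actually holds data is to enumerate the shards and sample a few records from each with boto3 (a diagnostic sketch; it assumes AWS credentials are already configured, and the stream name "my-stream" and region are placeholders):

```python
# Diagnostic sketch: sample each shard to see where the data lives
# before tuning the Kinesis Consumer origin.
import boto3

client = boto3.client("kinesis", region_name="us-east-1")  # adjust region

for shard in client.list_shards(StreamName="my-stream")["Shards"]:
    shard_id = shard["ShardId"]
    # Read a few records from the start of each shard.
    it = client.get_shard_iterator(
        StreamName="my-stream",
        ShardId=shard_id,
        ShardIteratorType="TRIM_HORIZON",
    )["ShardIterator"]
    records = client.get_records(ShardIterator=it, Limit=5)["Records"]
    print(shard_id, "->", len(records), "records sampled")
```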
Which REST API endpoint should I use to check whether an engine is offline?
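One approach is to list the registered engines and check their reported status. The sketch below uses Platform API-credential headers; the endpoint path and the response fields are assumptions, so verify them against the REST API reference for your Control Hub / Platform version:

```python
# Hedged sketch: query Control Hub for registered Data Collector engines
# and report any that are not responding. The endpoint path and the
# 'responding'/'httpUrl' fields are assumptions -- confirm them against
# the REST API reference for your version.
import requests

SCH_URL = "https://na01.hub.streamsets.com"   # your Platform URL
HEADERS = {
    "Content-Type": "application/json",
    "X-Requested-By": "sdc",
    "X-SS-REST-CALL": "true",
    "X-SS-App-Component-Id": "<CRED_ID>",     # API credential ID
    "X-SS-App-Auth-Token": "<CRED_TOKEN>",    # API credential token
}

resp = requests.get(f"{SCH_URL}/jobrunner/rest/v1/sdcs", headers=HEADERS)
resp.raise_for_status()
for engine in resp.json():
    if not engine.get("responding", False):
        print("Offline engine:", engine.get("id"), engine.get("httpUrl"))
```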
Hi, I am facing an error in a Data Collector pipeline (Oracle CDC to Snowflake) while processing a table with a CLOB column. Inserting values of 4000 characters or more works fine, but values shorter than that fail with:

SNOWFLAKE_28 - Snowflake MERGE for 'sdc-df26c7ce-fd00-4b5d-a3b1-817b9809e0a3.csv.gz' processed '1' CDC records out of '3'

Steps to reproduce on Oracle:

create table TEST_LOB_VAL (lobid number(10) primary key, lob_value clob);
INSERT INTO TEST_LOB_VAL VALUES (1, to_clob(RPAD('AAAAAAAAAAAAAAAAAA', 4000, 'A')));
INSERT INTO TEST_LOB_VAL VALUES (2, to_clob(RPAD('AAAAAAAAAAAAAAAAAA', 2000, 'A')));
INSERT INTO TEST_LOB_VAL VALUES (3, 'AAAAAAAAAA');
COMMIT;
I am facing an issue while configuring the repartition method attribute of the Repartition processor using Python. What is the right way to set it?
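For reference, stage configuration attributes in the StreamSets SDK for Python are the snake_cased versions of the configuration labels. Something along these lines should work as a minimal sketch; the stage label "Repartition", the attribute name repartition_method, and the enum value are assumptions, so inspect the stage with dir() or the SDK docs for your Transformer version:

```python
# Minimal sketch with the StreamSets SDK for Python. Stage attributes mirror
# configuration labels in snake_case; 'repartition_method' and its value are
# assumed names -- inspect the stage with dir() to confirm.
from streamsets.sdk import ControlHub

sch = ControlHub(credential_id="<CRED_ID>", token="<CRED_TOKEN>")

builder = sch.get_pipeline_builder(engine_type="transformer",
                                   engine_id="<TRANSFORMER_ENGINE_ID>")
repartition = builder.add_stage("Repartition")

# List available configuration attributes to find the exact name:
print([attr for attr in dir(repartition) if "repartition" in attr])

repartition.repartition_method = "NUMBER_OF_PARTITIONS"  # assumed enum value
repartition.number_of_partitions = 8
```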
As part of my requirement, I am using the HTTP Client origin to connect to an endpoint, and I need to perform a lookup against a file to get relevant data for each record. How can I perform a lookup on a file using StreamSets?
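There is no dedicated file-lookup processor, but one common workaround is a scripting processor (for example the Jython Evaluator) that loads the lookup file and enriches each record. A rough sketch of the script body follows; the file path and field names are placeholders, and the records/output variable bindings follow the classic scripting-processor API (recent SDC versions may expose them under an sdc object instead, so check the evaluator's built-in sample script):

```python
# Jython Evaluator sketch: load a CSV lookup file, then enrich each record.
# '/data/lookup.csv' and the field names are placeholders.
import csv

# Load the lookup table (move this to the Init Script to load it only
# once per pipeline run instead of once per batch).
lookup = {}
with open('/data/lookup.csv') as f:
    for row in csv.reader(f):
        lookup[row[0]] = row[1]          # key column -> value column

for record in records:
    key = record.value['id']             # placeholder field name
    record.value['enriched'] = lookup.get(key, None)
    output.write(record)
```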
Hello everyone, I just wanted to ask: don't we have a post-processing capability for "GoogleCloudStorage_01"? We are using "streamsets/datacollector:3.18.1" for some testing, fetching data from GCS, and we want to move each file to another location after processing.
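If the GCS origin in your version offers no post-processing option, one workaround is a small script run after the pipeline finishes that moves the processed objects. A sketch assuming the google-cloud-storage client library, with bucket and prefix names as placeholders:

```python
# Hedged sketch: "move" processed objects in GCS by copy + delete, since GCS
# has no rename across buckets/prefixes. Bucket and prefix names are placeholders.
from google.cloud import storage

client = storage.Client()
src_bucket = client.bucket("my-source-bucket")
dst_bucket = client.bucket("my-archive-bucket")

for blob in client.list_blobs("my-source-bucket", prefix="incoming/"):
    # Copy into the archive location, then remove the original.
    src_bucket.copy_blob(blob, dst_bucket, new_name="processed/" + blob.name)
    blob.delete()
```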
In this article we will walk through the steps involved in deploying engines in Kubernetes using the "Legacy Kubernetes" support in DataOps Platform. Note: Legacy Kubernetes integration is enabled only for some paid accounts. For more information, contact your StreamSets account team. Once you have Legacy Kubernetes enabled for your org, you should see the following additional option.

Prerequisites: jq installed, and a Kubernetes environment of your choice.

Create a namespace that will hold the control agent and the StreamSets engines (SDC, Transformer for Spark, etc.). Example command: kubectl create namespace dataops

Step-1: Clone the git repo https://github.com/onefoursix/control-agent-k8s-deployment

Step-2: Set these variables at the top of the file deploy-control-agent-on-dataops-platform.sh:
ORG_ID=<Your DataOps Platform Org ID>
SCH_URL=<Your DataOps Platform URL> # for example: https://na01.hub.streamsets.com
CRED_ID=<Your API Credential CRED_ID>
CRED_TOKEN=<
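Once the script has run, you can verify that the control agent pod came up in the namespace. A quick check with the official Kubernetes Python client (assuming your kubeconfig already points at the right cluster; "dataops" is the namespace created above):

```python
# Verify the control agent deployment using the Kubernetes Python client.
# Assumes cluster access is already configured via ~/.kube/config.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

for pod in v1.list_namespaced_pod(namespace="dataops").items:
    print(pod.metadata.name, pod.status.phase)
```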
After successfully upgrading Data Collector using Cloudera Manager from a previous version to 5.x, you may notice that the log screen does not get any log entries and no sdc.log file is generated in the $SDC_LOG directory.

Cloudera Manager uses CSD (custom service descriptor) files to determine the configuration, monitoring, and file distribution of a custom service (in this case SDC) managed by Cloudera Manager. You can learn more about this in Cloudera's documentation.

As part of the SDC upgrade steps, you need to install the newest CSD file. This has always been a requirement, but in previous versions not updating it caused little to no problems, so some users may have skipped this step for earlier SDC upgrades. For upgrades of older SDC versions to 5.x, not updating the CSD file will result in no logs being generated: SDC 5.x uses log4j 2.x, while older versions of SDC used log4j 1.x, and the two are not compatible with each other. If the old CSD file is kept in
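A quick way to confirm the mismatch on a node is to check which log4j configuration file the SDC configuration directory contains; SDC 5.x expects a log4j 2.x properties file. The file names and the default path below are assumptions, so adjust them to your install:

```python
# Hedged diagnostic: see whether the SDC config directory contains a
# log4j 1.x or a log4j 2.x properties file. Paths are assumptions --
# point SDC_CONF at wherever Cloudera Manager places the SDC configuration.
import os

sdc_conf = os.environ.get("SDC_CONF", "/etc/sdc")
for name in ("sdc-log4j.properties", "sdc-log4j2.properties"):
    path = os.path.join(sdc_conf, name)
    print(path, "->", "present" if os.path.exists(path) else "missing")
```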
When I convert a string-type date field to a Date type, the output is wrong.
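A common cause is a date format mask that does not match the incoming string, so parsing succeeds but picks up the wrong fields. In Python terms the equivalent mistake looks like this (illustrative only; the Field Type Converter itself uses Java SimpleDateFormat masks such as yyyy-MM-dd):

```python
# Illustration of a format-mask mismatch: parsing succeeds but yields the
# wrong date because day and month are transposed in the mask.
from datetime import datetime

value = "03/04/2023"                           # meant as 3 April 2023 (dd/MM/yyyy)
wrong = datetime.strptime(value, "%m/%d/%Y")   # mask says month first
right = datetime.strptime(value, "%d/%m/%Y")   # mask matches the data
print(wrong.date(), "vs", right.date())        # 2023-03-04 vs 2023-04-03
```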
My origin has dynamic columns (the schema changes over time), and I want to load the data into MySQL without updating the target table manually. How can I handle this dynamically?
We currently have a C# console application for processing large CSV files, and we are exploring using StreamSets to perform the same work. Below is my high-level requirement.

We have 50+ consumers calling our application by placing a CSV file in an on-premises shared-drive folder. File sizes range from 10,000 to 2 million rows, and the application should be able to handle processing up to 5 million rows an hour. Each file drop triggers an individual process run, so multiple files are processed simultaneously. The high-level steps involved in processing a file are:

1. Write the file to a database table.
2. Enrich every record with additional data attributes.
3. Make an API call for each record.
4. Update the DB record with the response received from the API call, and also write the response to a result CSV file for the consumer.
5. After all the records are processed, write the result CSV file to the shared drive for the consumer to pick up.

Thank you.
I don't know how to configure the MongoDB Oplog origin.
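For context on what the origin reads: it tails the local.oplog.rs collection of a replica set member, and its initial-offset settings correspond to the ts (timestamp) field of oplog entries. A pymongo sketch of the same read, with the connection string and namespace as placeholders:

```python
# Sketch of the read the MongoDB Oplog origin performs: tail local.oplog.rs
# with a tailable cursor. Connection details and namespace are placeholders.
import pymongo

client = pymongo.MongoClient("mongodb://localhost:27017")  # replica set member
oplog = client.local["oplog.rs"]

cursor = oplog.find(
    {"ns": "mydb.mycol"},                         # namespace filter
    cursor_type=pymongo.CursorType.TAILABLE_AWAIT,
)
for entry in cursor:
    # 'ts' is the offset to track; 'op' is i/u/d for insert/update/delete.
    print(entry["ts"], entry["op"])
```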
I have a JDBC Multitable Consumer as a source, connected to an Oracle database. I want to migrate the data along with the tables and their constraints, triggers, procedures, sequences, etc. to a Postgres DB. I have been exploring the PostgreSQL Metadata processor, but it only creates the tables and columns with their respective data types. Can anyone please help me configure the PostgreSQL Metadata processor to migrate the constraints, triggers, etc.?
I need to implement search functionality that can take a JSON object as a query parameter. I am using Azure Cognitive Search. Is it possible to pass a JSON object as a query parameter in Azure Cognitive Search? For example: HTTP://search.something?params={"type":"foo","color":"green"}
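Passing a JSON object in a URL is really just passing a string: serialize it and URL-encode it as a parameter value, then parse it back server-side. Whether Azure Cognitive Search itself will interpret such a parameter is a separate question (its REST API defines its own parameters), but the mechanics look like this, with the URL as a placeholder:

```python
# Serialize a JSON object into a single URL-encoded query parameter.
# The endpoint URL is a placeholder; the receiving service must parse the
# 'params' value back into JSON itself.
import json
import requests

query = {"type": "foo", "color": "green"}
resp = requests.get(
    "https://search.example.com/search",
    params={"params": json.dumps(query)},   # requests URL-encodes the value
)
print(resp.url)  # ...search?params=%7B%22type%22%3A+%22foo%22...
```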