@anthonyg Can you comment here? This is what I was originally discussing with you in the thread below:
https://community.streamsets.com/got-a-question-7/standard-out-of-box-origins-for-saas-apps-158
Hi @swayam . We do no processing on the link provided in the header to paginate. We just use the URL in the next link exactly as provided, so whether this link contains page, page_info, or any other argument is irrelevant to pagination. You are probably facing another (perhaps related) issue. If you can provide the source JSON of your pipeline and any other relevant information, we will be in a better position to help you.
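For illustration, a header-link pagination response (the style Shopify uses) typically looks like the following; the store domain and page_info token here are placeholders:

```
Link: <https://yourstore.myshopify.com/admin/api/products.json?limit=15&page_info=abc123>; rel="next"
```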
Hi @Dimas Cabré i Chacón
Many thanks for looking at my message. I have attached my pipeline here.
The issue was probably not with pagination, but with my expectations regarding the HTTP Client origin running in “Polling” mode.
As per the SDC documentation:
Note: After the polling interval passes, the origin continues processing from where it stopped. For example, let’s say that you’ve configured the origin to use the polling mode with an interval of two hours and to use page number pagination. After the origin reads 25 pages of results, the 26th page returns no results and so the origin stops reading. After the two hour interval passes, the origin polls the server again, reading the results starting with page 26.
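If I understand that note correctly, page-number polling amounts to something like this rough sketch (my own illustration in Python, not actual SDC code; fetch_page and process are placeholders):

```python
import time

POLL_INTERVAL_SECS = 2 * 60 * 60  # the two-hour interval from the doc example

def fetch_page(page: int) -> list:
    """Placeholder for the real HTTP request; returns [] when the page is empty."""
    return []

def process(records: list) -> None:
    """Placeholder for the downstream pipeline stages."""
    pass

page = 1
while True:
    records = fetch_page(page)
    if records:
        process(records)
        page += 1                       # advance the page-number offset
    else:
        time.sleep(POLL_INTERVAL_SECS)  # wait, then resume from the SAME page (page 26 in the example)
```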
Based on the above and what I have configured, I was expecting that after 2 polls (each poll fetching 15 records, with 30 records in total in Shopify), the pipeline would fetch nothing further but keep checking every 5 seconds.
But I see that every 5 seconds it fetches the same data again and again.
Could you please comment on this?
Hi @Dimas Cabré i Chacón,
Could you please have a look at my request?
Regards
Swayam
Hi @swayam . There is probably some confusion about polling behavior. Polling is meant to be used when next pages are not available immediately. So, when you paginate by page number, it makes sense to keep trying for a “next” page until it becomes available. When using pagination based on the header link, there is no point in retrying for a missing next page: if there is a link in the header, the next page exists and is available; if not, there is no next page and it is not expected to exist in the future.
When there is no link, our offset remains the last requested page, and thus you keep receiving the same page over and over (because you configured Polling Mode). So, I think this pipeline should be configured not with Polling Mode but with Batch Mode (which will finish the pipeline once there are no more pages). There can be debate about how this origin works, but it is clearly not buggy. Repeating the same request for the last page could make sense, as a next link might appear in the header at some point. But if you want to cover this scenario, you need to decide in advance what to do with duplicate records.
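To make the mechanics concrete, here is a rough sketch (my own illustration, not SDC internals) of header-link pagination under Polling Mode; http_get and the initial URL are placeholders:

```python
import time

POLL_INTERVAL_SECS = 5  # the 5-second interval from your pipeline

def http_get(url: str):
    """Placeholder for the real request; returns (records, next_link), where
    next_link is the URL from the Link header, or None if absent."""
    return [], None

offset_url = "https://api.example.com/items?limit=15"  # hypothetical initial URL

while True:  # Polling Mode never terminates on its own
    records, next_link = http_get(offset_url)
    if next_link is not None:
        offset_url = next_link  # follow the header link verbatim
    # With no next link, offset_url is unchanged, so the next poll
    # re-requests the last page and re-emits the same records.
    time.sleep(POLL_INTERVAL_SECS)
```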
In short, use Batch Mode, and plan for duplicates in your functional scenario if you intend to invoke the same initial URL from time to time, as they cannot easily be avoided in that case.
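If you do hit the same initial URL periodically, one option (my suggestion, not a built-in behaviour of this origin) is to drop records whose IDs you have already processed, for example in a scripting processor; SDC's Record Deduplicator processor may also fit, depending on your window requirements. A minimal sketch:

```python
# ID-based de-duplication; assumes each record carries a stable unique
# "id" field (true for Shopify resources). The seen set is in-memory
# only, so it resets when the pipeline restarts.
seen_ids = set()

def dedupe(records):
    fresh = []
    for rec in records:
        rid = str(rec.get("id"))
        if rid not in seen_ids:
            seen_ids.add(rid)
            fresh.append(rec)
    return fresh
```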
@swayam, did the suggestions from @Dimas Cabré i Chacón help?