I currently have a pipeline set up with the native ServiceNow connector as my source dataset, using a simple query to pull records from the sc_req_item table, and an Azure SQL database as my sink dataset. The pipeline runs successfully and copies around 107k records into the Azure database, but it takes over 10 hours to complete. Is there any way to improve the performance? I don't think a copy of this size should take that long. Has anyone implemented something like this in an alternative way?
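For context, the copy activity looks roughly like the sketch below (the activity and dataset names are illustrative placeholders, not my actual resource names; the query is just the simple pull against sc_req_item described above):

```json
{
    "name": "CopyServiceNowToAzureSql",
    "type": "Copy",
    "inputs": [
        { "referenceName": "ServiceNowReqItemDataset", "type": "DatasetReference" }
    ],
    "outputs": [
        { "referenceName": "AzureSqlReqItemDataset", "type": "DatasetReference" }
    ],
    "typeProperties": {
        "source": {
            "type": "ServiceNowSource",
            "query": "SELECT * FROM sc_req_item"
        },
        "sink": {
            "type": "AzureSqlSink"
        }
    }
}
```

Nothing fancy is going on here, which is why the 10+ hour runtime surprises me.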