Importer Endpoints
We use the importer endpoints to import products from Shopify. There are three endpoints:
ImportFromFile - Used to import products from a file.
ImportFromStore - Used to import products from a store. It calls the Shopify endpoints to get the products.
ImporterUpdate - Used to update products, or create them if they do not exist. It only runs once a day. It calls the Shopify endpoint to get the products, then checks the database to see whether each product already exists and whether there is an update.
Two flags:
These live in the FeatureConfiguration field of the TenantAiProfiles database.
ForceWebScrape - Default false. When true, the importer always scrapes the product even if it already exists in the webscrape database. When false, it only scrapes products that are new; if a product already exists, it uses the data in the database.
ForceSkipScrape - Default false. When true, scraping is skipped even if ForceWebScrape is true.
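A minimal sketch of what the FeatureConfiguration field in TenantAiProfiles could contain (the JSON shape here is an assumption; only the two flag names come from this doc):
{
"ForceWebScrape": false,
"ForceSkipScrape": false
}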
ImportFromFile:
Parameters:
indexName - Elasticsearch index name where the products are saved.
path - File path.
webScrapeIndexName - Elasticsearch index name where the webscrape data is saved. If it is empty, the importer builds it from indexName by adding "webscraped-product-" to the start, e.g. "webscraped-product-bluebungalow-tnt-bcaxygquan9senw-ken". Note that the "webscraped-product-" prefix is always added, so only pass the root part of the index name if you want to reuse an existing index.
forceSkipScrape - If true, it forces the importer to skip webscraping even if the ForceWebScrape flag is true and ForceSkipScrape is false (see the sketch below).
Sample cURL.
curl --location 'https://localhost:44386/ImportFromFile' \
--header 'Content-Type: application/json' \
--data '{
"indexName": "bluebungalow-tnt-bcaxygquan9senw-ken",
"path": "C:\\Users\\Ken\\Documents\\preezie\\test",
"webScrapeIndexName": "",
"forceSkipScrape": false
}'
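In the sample above webScrapeIndexName is empty, so per the parameter description the importer derives it by prefixing the index name; in shell terms:
indexName="bluebungalow-tnt-bcaxygquan9senw-ken"
webScrapeIndexName="webscraped-product-${indexName}"
echo "$webScrapeIndexName"   # webscraped-product-bluebungalow-tnt-bcaxygquan9senw-ken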
ImportFromStore:
Parameters:
TenantId - Tenant ID of the customer. The importer uses it to look up the index name and flags in the tenant's settings.
IndexName - If it is not null or empty, it overrides the index name from the tenant settings and is used to save the products.
WebScrapeIndexName - Elasticsearch index name where the webscrape data is saved. If it is empty, the importer builds it from the index name by adding "webscraped-product-" to the start, e.g. "webscraped-product-bluebungalow-tnt-bcaxygquan9senw-ken". Note that the "webscraped-product-" prefix is always added, so only pass the root part of the index name if you want to reuse an existing index.
forceSkipScrape - If true, it forces the importer to skip webscraping even if the ForceWebScrape flag is true and ForceSkipScrape is false.
Sample cURL.
curl --location 'https://localhost:44386/ImportFromStore' \
--header 'Content-Type: application/json' \
--data '{
"TenantId": "tnt_a6BIEcyTFpyju24",
"IndexName": "bluebungalow-tnt_pj22ngjqxirutau-10312024",
"webScrapeIndexName": "webscrape-bluebungalow-tnt_pj22ngjqxirutau-10312024"
"forceSkipScrape": false
}'
ImporterUpdate:
Parameters:
TenantId - Tenant ID of the customer. The importer uses it to look up the index name and flags in the tenant's settings.
IndexName - If it is not null or empty, it overrides the index name from the tenant settings and is used to save the products.
WebScrapeIndexName - Elasticsearch index name where the webscrape data is saved. If it is empty, the importer builds it from the index name by adding "webscraped-product-" to the start, e.g. "webscraped-product-bluebungalow-tnt-bcaxygquan9senw-ken". Note that the "webscraped-product-" prefix is always added, so only pass the root part of the index name if you want to reuse an existing index.
forceSkipScrape - If true, it forces the importer to skip webscraping even if the ForceWebScrape flag is true and ForceSkipScrape is false.
Sample request body.
{
"TenantId": "tnt_V8rwwPeAtKCA70C",
"IndexName": "bluebungalow-tnt-bcaxygquan9senw-10312024", //always change on whats in db
"webScrapeIndexName": "bluebungalow-tnt-bcaxygquan9senw-10312024",
"forceSkipScrape": false
}
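The sample above is only the JSON body. Assuming ImporterUpdate is exposed on the same host and follows the same route pattern as the other two endpoints (the /ImporterUpdate path is an assumption, not confirmed here), the full cURL call would look like:
curl --location 'https://localhost:44386/ImporterUpdate' \
--header 'Content-Type: application/json' \
--data '{
"TenantId": "tnt_V8rwwPeAtKCA70C",
"IndexName": "bluebungalow-tnt-bcaxygquan9senw-10312024",
"webScrapeIndexName": "bluebungalow-tnt-bcaxygquan9senw-10312024",
"forceSkipScrape": false
}'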