Does Crawlee support per-domain concurrency?
In the example below, the first domain (paklap.pk) can't handle much load, but the second domain (centurycomputerpk.com) can.
Does Crawlee allow setting concurrency limits per domain, or is concurrency managed globally?
In Scrapy, this is possible through the download_slot mechanism. I’m wondering if there’s an equivalent in Crawlee.
from crawlee import ConcurrencySettings, Request
from crawlee.crawlers import ParselCrawler

# Runs inside an async context; a single global concurrency limit applies to every request this crawler makes.
crawler = ParselCrawler(
    concurrency_settings=ConcurrencySettings(desired_concurrency=1),
)
await crawler.run([Request.from_url("https://www.paklap.pk/laptops-prices.html", label="paklap_listing")])
await crawler.run([Request.from_url("https://centurycomputerpk.com/product-category/laptops", label="century_listing")])