A scalable and efficient crawler built on a Docker cluster; crawls a million pages in 2 hours on a single machine
What is the tonywangcn/scaleable-crawler-with-docker-cluster GitHub project? Its description reads: "a scaleable and efficient crawelr with docker cluster , crawl million pages in 2 hours with a single machine". The project is written in Python. Explain what it does, its main use cases, key features, and who would benefit from using it.
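The repository itself is not shown here, so the following is only a hedged sketch of the general pattern such a project describes: a batch of URLs fanned out to many concurrent workers, where each Docker container in the cluster would run one such worker pool over its share of the queue. The `fetch` stub, the `crawl` helper, and all parameter names are illustrative assumptions, not the project's actual API.

```python
# Hedged sketch, NOT the repository's code: illustrates the fan-out pattern a
# docker-cluster crawler typically uses. Each container would run a worker
# pool like this over its slice of the URL queue.
from concurrent.futures import ThreadPoolExecutor


def fetch(url):
    # Placeholder for a real HTTP fetch, e.g. requests.get(url).text.
    # Stubbed out so the sketch runs without network access.
    return f"<html>{url}</html>"


def crawl(urls, workers=8, fetch_fn=fetch):
    """Fan a batch of URLs out to a pool of concurrent workers and
    return a mapping of url -> fetched page body."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(zip(urls, pool.map(fetch_fn, urls)))


pages = crawl([f"http://example.com/{i}" for i in range(100)])
print(len(pages))  # 100
```

Scaling out would then amount to running more containers of the same worker image, each pulling from a shared queue, which is the usual way a single machine reaches throughput figures like a million pages in a couple of hours.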
Report bugs or request features on the scaleable-crawler-with-docker-cluster GitHub issue tracker.