
images-web-crawler

This package is a complete tool for building a large dataset of images (designed especially, but not only, for machine learning enthusiasts). It can crawl the web, download images, rename, resize, and convert the images, and merge folders.
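Since the package's own API is not shown on this page, the sketch below only illustrates the kind of processing it automates: fetching an image from a URL, resizing it, and converting it to a uniform JPEG. It uses the requests and Pillow libraries directly rather than images-web-crawler's classes; the URL, output path, and target size are illustrative assumptions.

# Generic sketch of the download / resize / convert steps the package automates.
# Uses requests and Pillow directly; this is not images-web-crawler's own API.
from io import BytesIO

import requests
from PIL import Image

def download_and_normalize(url, dest_path, size=(128, 128)):
    """Fetch an image from `url`, resize it, and save it as a JPEG at `dest_path`."""
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    image = Image.open(BytesIO(response.content)).convert("RGB")
    image = image.resize(size)
    image.save(dest_path, format="JPEG")

# Hypothetical usage:
# download_and_normalize("https://example.com/cat.png", "dataset/cat_0001.jpg")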

How to download and set up images-web-crawler

Open a terminal and run:
git clone https://github.com/amineHorseman/images-web-crawler.git
git clone creates a local copy of the images-web-crawler repository. You pass git clone a repository URL; it supports several network protocols and corresponding URL formats.

You can also download images-web-crawler as a zip archive: https://github.com/amineHorseman/images-web-crawler/archive/master.zip

Or simply clone images-web-crawler over SSH:
git clone git@github.com:amineHorseman/images-web-crawler.git

If you run into problems with images-web-crawler

You can open an issue on the project's GitHub issue tracker: https://github.com/amineHorseman/images-web-crawler/issues

Repositories similar to images-web-crawler

Here are some alternatives and analogs to images-web-crawler:

scrapy, Sasila, Price-monitor, webmagic, colly, headless-chrome-crawler, Lulu, newcrawler, scrapple, goose-parser, arachnid, gopa, scrapy-zyte-smartproxy, node-crawler, arachni, newspaper, webster, spidy, N2H4, easy-scraping-tutorial, antch, pomp, talospider, podcastcrawler, FileMasta, lux, scrapy-redis, haipproxy, DotnetSpider, TumblThree