
robots-txt

Determine if a page may be crawled from robots.txt, robots meta tags and robot headers
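
As a quick illustration of what the package does, here is a minimal PHP sketch based on the usage shown in the package's README. The spatie.be URLs are the README's own example URLs, and the vendor/autoload.php line assumes the package was installed via Composer; treat exact method names as whatever your installed version documents.

<?php

require 'vendor/autoload.php';

use Spatie\Robots\Robots;

// Robots::create() combines the signals from robots.txt, robots meta
// tags and X-Robots-Tag headers to decide whether a URL may be crawled.
$robots = Robots::create();

$robots->mayIndex('https://www.spatie.be/nl/admin');    // returns a bool

$robots->mayFollowOn('https://www.spatie.be/nl/admin'); // returns a bool

// Checks can also be made on behalf of a specific user agent:
$robots = Robots::create('UserAgent007');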

How to download and set up robots-txt

Open a terminal and run the following command:
git clone https://github.com/spatie/robots-txt.git
git clone creates a local copy of the robots-txt repository. You pass git clone a repository URL; it supports a few different network protocols and corresponding URL formats.

You may also download robots-txt as a zip file: https://github.com/spatie/robots-txt/archive/master.zip

Or clone robots-txt over SSH:
git@github.com:spatie/robots-txt.git
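
Whichever way you download it, the package's dependencies are installed with Composer. A minimal setup sketch, assuming PHP and Composer are already installed:

cd robots-txt
composer install

To use robots-txt inside your own project instead, you may require it directly from Packagist:

composer require spatie/robots-txt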

If you have problems with robots-txt

You may open an issue on the robots-txt issue tracker here: https://github.com/spatie/robots-txt/issues

Repositories similar to robots-txt

Here you may see robots-txt alternatives and analogs:

scrapy, Sasila, Price-monitor, webmagic, colly, headless-chrome-crawler, Lulu, newcrawler, scrapple, goose-parser, arachnid, gopa, scrapy-zyte-smartproxy, node-crawler, arachni, newspaper, webster, spidy, N2H4, easy-scraping-tutorial, antch, pomp, talospider, podcastcrawler, FileMasta, lux, scrapy-redis, haipproxy, DotnetSpider, TumblThree