webcrawler

Web Crawler is a Node.js application that crawls web pages, saves them locally, and extracts hyperlinks from each page body. It provides a simple command-line interface where you enter a starting URL and a maximum number of crawls; the crawler then follows hyperlinks recursively, saving each page it visits to a specified directory.
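The repository's actual code isn't shown here, but a minimal Node.js sketch of this kind of recursive crawl might look like the following (assuming Node 18+ for the global fetch API; the output directory and crawl limit are illustrative, not the project's real defaults):

const fs = require("fs");
const path = require("path");

const outputDir = "./pages"; // illustrative save directory
const maxCrawls = 10;        // illustrative crawl limit
const visited = new Set();

async function crawl(url) {
  if (visited.size >= maxCrawls || visited.has(url)) return;
  visited.add(url);

  // Fetch the page and save its HTML under a filesystem-safe name.
  const res = await fetch(url);
  const html = await res.text();
  fs.writeFileSync(path.join(outputDir, encodeURIComponent(url) + ".html"), html);

  // Extract absolute hyperlinks from the body and follow them recursively.
  const links = [...html.matchAll(/href="(https?:\/\/[^"]+)"/g)].map(m => m[1]);
  for (const link of links) await crawl(link);
}

fs.mkdirSync(outputDir, { recursive: true });
crawl(process.argv[2] || "https://example.com").catch(console.error);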

How to download and set up webcrawler

Open a terminal and run:
git clone https://github.com/ssharmapavitra/webcrawler.git
git clone creates a local copy of the webcrawler repository. You pass git clone a repository URL; it supports several network protocols and corresponding URL formats.
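Once cloned, a typical Node.js setup looks like this (the index.js entry point is an assumption; check the repository's package.json for the actual start script):

cd webcrawler
npm install      # install dependencies listed in package.json
node index.js    # run the crawler (entry-point name is an assumption)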

You can also download webcrawler as a zip file: https://github.com/ssharmapavitra/webcrawler/archive/master.zip
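For example, from the command line (GitHub archives extract into a webcrawler-master directory):

curl -L -o webcrawler.zip https://github.com/ssharmapavitra/webcrawler/archive/master.zip
unzip webcrawler.zip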

Or clone webcrawler over SSH:
git clone git@github.com:ssharmapavitra/webcrawler.git
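SSH cloning requires an SSH key registered with your GitHub account; you can verify your connection with:

ssh -T git@github.com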

If you run into problems with webcrawler

You can open an issue on the webcrawler issue tracker: https://github.com/ssharmapavitra/webcrawler/issues