Krawler
A configurable HTML Crawler written in Kotlin (JVM), powered by Coroutines, Kotlin Serialization (JSON), Ktor Client, Exposed, and SQLite.
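To give a sense of how this stack fits together, here is a minimal, illustrative sketch of a single fetch-and-extract step using Ktor Client and coroutines. It is not Krawler's actual API; the extractLinks helper, the regex, and the seed URL are hypothetical placeholders for real HTML parsing and crawl scheduling.

// Illustrative only: one fetch-and-extract step with Ktor Client and coroutines.
// This is NOT Krawler's actual API; extractLinks, the regex, and the seed URL are hypothetical.
import io.ktor.client.*
import io.ktor.client.engine.cio.*
import io.ktor.client.request.*
import io.ktor.client.statement.*
import kotlinx.coroutines.runBlocking

// Hypothetical helper: pull absolute href values out of raw HTML with a naive regex.
fun extractLinks(html: String): List<String> =
    Regex("href=\"(https?://[^\"]+)\"").findAll(html).map { it.groupValues[1] }.toList()

fun main() = runBlocking {
    val client = HttpClient(CIO)                               // Ktor Client with the CIO engine
    val html = client.get("https://example.com").bodyAsText()  // suspend call: fetch the page body
    extractLinks(html).forEach(::println)                      // print the discovered links
    client.close()
}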
How to download and set up Krawler
Open a terminal and run the following command:
git clone https://github.com/YektaDev/Krawler.git
git clone creates a local copy (clone) of the Krawler repository.
You pass git clone a repository URL; it supports several network protocols and their corresponding URL formats.
Alternatively, you can download Krawler as a ZIP archive: https://github.com/YektaDev/Krawler/archive/master.zip
Or clone Krawler over SSH:
git clone git@github.com:YektaDev/Krawler.git
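After downloading the sources, a Kotlin (JVM) project is normally built with Gradle. Assuming Krawler ships the standard Gradle wrapper scripts (an assumption, not confirmed here), the usual commands would be:
cd Krawler
./gradlew build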
If you run into problems with Krawler
You can open an issue on Krawler's issue tracker: https://github.com/YektaDev/Krawler/issues
Repositories similar to Krawler
Here you can browse alternatives and analogs to Krawler:
scrapy, Sasila, colly, headless-chrome-crawler, Lulu, crawler, newspaper, isp-data-pollution, webster, cdp4j, spidy, stopstalk-deployment, N2H4, memorious, easy-scraping-tutorial, antch, pomp, Harvester, diffbot-php-client, talospider, corpuscrawler, Python-Crawling-Tutorial, learn.scrapinghub.com, crawling-projects, dig-etl-engine, crawlkit, scrapy-selenium, spidyquotes, zcrawl, podcastcrawler