go-crawler
go-crawler is a web crawling framework implemented in Go. It is simple to write for and delivers strong performance. It ships with a wide range of practical middleware, supports multiple parsing and storage methods, and supports distributed deployment.
How to download and set up go-crawler
Open a terminal and run the following command:
git clone https://github.com/lizongying/go-crawler.git
git clone creates a local copy of the go-crawler repository.
You pass git clone a repository URL. It supports several network protocols and corresponding URL formats.
Alternatively, you may download go-crawler as a zip archive: https://github.com/lizongying/go-crawler/archive/master.zip
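GitHub's archive URLs follow a fixed pattern: the repository path plus /archive/ plus the branch name. A minimal sketch that builds the zip URL from its parts (the repo and branch values here simply reproduce the URL above):

```shell
# Build a GitHub archive download URL from repository path and branch
repo="lizongying/go-crawler"
branch="master"
zip_url="https://github.com/${repo}/archive/${branch}.zip"

# Print the resulting URL; download it with e.g. curl -LO "$zip_url"
echo "$zip_url"
```

Substituting another branch or tag name for `branch` yields the archive URL for that revision.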
Or simply clone go-crawler with SSH
[email protected]:lizongying/go-crawler.git
If you have problems with go-crawler
You may open an issue on the go-crawler issue tracker: https://github.com/lizongying/go-crawler/issues
Similar to go-crawler repositories
Here you may see go-crawler alternatives and analogs
scrapy requests-html Sasila webmagic colly headless-chrome-crawler Embed artoo instagram-scraper django-dynamic-scraper scrapy-cluster Lulu newcrawler panther facebook_data_analyzer ImageScraper scrapple parsel nickjs jsoup-annotations jekyll Musoq goose-parser arachnid lambdasoup crawler geeksforgeeks.pdf scrapy-zyte-smartproxy sqrape comic-dl