A web crawler designed to efficiently retrieve unique href, script, and form links from a web application.
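For a rough sense of what this extraction involves, the href pass is conceptually similar to the one-liner below (an approximation only; crawl itself also follows links, handles script and form attributes, and deduplicates across pages):
$ curl -s https://example.com | grep -oE 'href="[^"]*"' | sort -u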
Display the help menu:
$ crawl
$ crawl -h
|Usage:
| crawl [-d <depth>] [-f <href|script|form>] <URL>
|
|Options:
| -h show help menu
| -d depth (number of levels) to scrape
| -f filter links by type [href|script|form]
|
|Example:
| crawl -f script [domain].[TLD]
| crawl -d 1 [domain].[TLD]/directory
| crawl -f href [domain].[TLD]/directory?key=value
|
|Fetches all href, script, and form links if no type filter (-f) is specified.
|Uses HTTPS as the default protocol if none is specified.
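Assuming the flags compose as the usage line suggests, depth and type can be combined:
$ crawl -d 2 -f href [domain].[TLD]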
Crawl a target piped via stdin:
$ echo google.com | crawl
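Since targets are read from stdin, a file of hosts (one per line; targets.txt here is a hypothetical example) should work the same way:
$ cat targets.txt | crawl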
Get all subdomains of owasp.org and crawl the ones that are alive:
$ subfinder -d owasp.org | httpx | crawl
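As the results go to stdout, they can be saved while still being displayed, for example with tee (owasp-links.txt is just an example filename):
$ subfinder -d owasp.org | httpx | crawl | tee owasp-links.txt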
Clone the repository and move the executable into your PATH:
$ git clone https://github.com/synacktraa/crawl.git && cd ./crawl
$ sudo mv ./crawl /usr/local/bin
Remove the cloned directory afterwards:
$ cd .. && rm -rf "./crawl"
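Verify the install by printing the help menu:
$ crawl -h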