crawler4j is an open source web crawler for Java which provides a simple interface for crawling the Web. Using it, you can set up a multi-threaded web crawler in a few minutes. You need to create a crawler class that extends WebCrawler; this class decides which URLs should be crawled and handles each downloaded page. The shouldVisit function decides whether a given URL should be crawled. In the example above, it rejects .css, .js and media files and only allows pages within the ics domain. The visit function is called after the content of a URL has been downloaded successfully. ...
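The filtering behavior described above can be sketched as a standalone method. This is not crawler4j itself: in the library, the check lives in a class extending WebCrawler, while the regex and the ics.uci.edu prefix below are illustrative values modeled on the kind of filter the description mentions.

```java
import java.util.regex.Pattern;

// A standalone sketch of shouldVisit-style filtering: skip static assets
// and media files, and stay within one domain. In crawler4j this logic
// would live in a subclass of WebCrawler; the values here are illustrative.
public class VisitFilter {
    // Matches URLs ending in common asset and media extensions.
    private static final Pattern FILTERS = Pattern.compile(
            ".*(\\.(css|js|gif|jpe?g|png|mp3|mp4|zip|gz))$");

    // True only for URLs that are not filtered assets and that
    // stay within the (assumed) ics.uci.edu domain.
    public static boolean shouldVisit(String url) {
        String href = url.toLowerCase();
        return !FILTERS.matcher(href).matches()
                && href.startsWith("https://www.ics.uci.edu/");
    }

    public static void main(String[] args) {
        System.out.println(shouldVisit("https://www.ics.uci.edu/index.html")); // true
        System.out.println(shouldVisit("https://www.ics.uci.edu/style.css"));  // false
        System.out.println(shouldVisit("https://example.com/page.html"));      // false
    }
}
```

Rejecting assets by extension before fetching keeps a multi-threaded crawl focused on pages worth parsing.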
This tool is for people who want to learn from a web site or web page, especially web developers. It helps you get a web page's source code: enter the page's address and press the Start button, and the tool will fetch the page and, following the references in its markup, download all files the page uses, including CSS and JavaScript files.
The HTML file will be saved as 'index.html'; the other files keep their original names.
Note: only the Windows platform and the HTTP protocol are supported.
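The core of such a tool is finding which files a page references. A minimal sketch, assuming a simple attribute scan is enough: collect the values of src and href attributes from the raw HTML. A real implementation would use a proper HTML parser and resolve relative URLs against the page address.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch: scan raw HTML for src/href attributes and collect the
// referenced file URLs (CSS, JavaScript, images, ...). Regex-based
// scanning is a simplification; an HTML parser is more robust.
public class AssetExtractor {
    private static final Pattern REF = Pattern.compile(
            "(?:src|href)\\s*=\\s*[\"']([^\"']+)[\"']",
            Pattern.CASE_INSENSITIVE);

    public static List<String> extractRefs(String html) {
        List<String> refs = new ArrayList<>();
        Matcher m = REF.matcher(html);
        while (m.find()) {
            refs.add(m.group(1));
        }
        return refs;
    }

    public static void main(String[] args) {
        String html = "<link href=\"style.css\"><script src=\"app.js\"></script>";
        System.out.println(extractRefs(html)); // [style.css, app.js]
    }
}
```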
PHPCrawl is a highly configurable webcrawler/webspider library written in PHP. It supports filters, limiters, cookie handling, robots.txt handling, multiprocessing and much more.
HarvestMan is a fully functional, multithreaded web crawler and offline browser. It is highly customizable, supporting more than 55 options for controlling and customizing offline browsing. It is written entirely in Python.
Allows simultaneous crawling of multiple URLs up to a depth of 3 levels, and then exports all connections and coordinates to Excel 2003 for further use in social network analysis tools. You can also visualize and simulate the resulting network.
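Depth-limited crawling from several seeds, recording the connections as edges, can be sketched as a breadth-first traversal. The links function below is a stand-in for actually fetching and parsing pages; the edge format and the mock graph are illustrative.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.function.Function;

// Sketch: breadth-first crawl from multiple seeds, limited to maxDepth
// levels, collecting "from -> to" edges of the kind a network-analysis
// tool would export. Fetching is abstracted behind the links function.
public class DepthCrawler {
    public static List<String> crawl(List<String> seeds, int maxDepth,
                                     Function<String, List<String>> links) {
        List<String> edges = new ArrayList<>();
        Set<String> visited = new HashSet<>(seeds);
        List<String> frontier = new ArrayList<>(seeds);
        for (int depth = 0; depth < maxDepth; depth++) {
            List<String> next = new ArrayList<>();
            for (String url : frontier) {
                for (String target : links.apply(url)) {
                    edges.add(url + " -> " + target);
                    if (visited.add(target)) { // enqueue each URL only once
                        next.add(target);
                    }
                }
            }
            frontier = next;
        }
        return edges;
    }

    public static void main(String[] args) {
        // A tiny mock link graph standing in for real pages.
        Map<String, List<String>> graph = Map.of(
                "a", List.of("b"), "b", List.of("c"), "c", List.of("d"));
        Function<String, List<String>> links =
                url -> graph.getOrDefault(url, List.of());
        System.out.println(crawl(List.of("a"), 3, links)); // [a -> b, b -> c, c -> d]
    }
}
```

Once collected, each edge row maps directly onto a spreadsheet row for export.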
CMS-Bandits is a set of PHP scripts with an online HTML editor, calendar, search engine, RSS reader, revision log, personal nickpage, comment system, web crawler and even more.
Crawler.NET is a component-based distributed framework for web traversal intended for the .NET platform. It comprises loosely coupled units, each realizing a specific web-crawler task. The main design goals are efficiency and flexibility.