Apache Nutch

Development status: Active
Operating system: Cross-platform
Written in: Java
Type: Web crawler
Developer(s): Apache Software Foundation
Stable release: 1.10 and 2.3 / May 6, 2015

Apache Nutch is a highly extensible and scalable open source web crawler software project.

Features

Nutch is coded entirely in the Java programming language, but data is written in language-independent formats. It has a highly modular architecture, allowing developers to create plug-ins for media-type parsing, data retrieval, querying and clustering.
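
To illustrate the plugin model, the following minimal sketch implements a URL filter, one of Nutch's extension points. It assumes the Nutch 1.x org.apache.nutch.net.URLFilter interface, in which filter() returns the URL to accept it or null to reject it; the class name and the HTTPS-only rule are illustrative choices, and the plugin.xml descriptor needed to register the plugin is omitted.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.nutch.net.URLFilter;

    /**
     * Minimal sketch of a Nutch URLFilter plugin that rejects any URL
     * not served over HTTPS. Assumes the Nutch 1.x extension point,
     * where filter() returns the URL to keep it or null to drop it.
     */
    public class HttpsOnlyUrlFilter implements URLFilter {

      private Configuration conf;

      @Override
      public String filter(String urlString) {
        // Keep HTTPS URLs, drop everything else.
        return urlString.startsWith("https://") ? urlString : null;
      }

      @Override
      public void setConf(Configuration conf) {
        this.conf = conf;
      }

      @Override
      public Configuration getConf() {
        return conf;
      }
    }

In a deployed crawl, such a plugin would typically be packaged with a plugin.xml descriptor and enabled through Nutch's plugin.includes configuration property.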

The fetcher ("robot" or "web crawler") was written from scratch specifically for this project.

History

Nutch originated with Doug Cutting, creator of both Lucene and Hadoop, and Mike Cafarella.

In June 2003, a successful 100-million-page demonstration system was developed. To meet the multi-machine processing needs of the crawl and index tasks, the Nutch project also implemented a MapReduce facility and a distributed file system. These two facilities were later spun out into their own subproject, called Hadoop.

In January 2005, Nutch joined the Apache Incubator, from which it graduated to become a subproject of Lucene in June of that same year. Since April 2010, Nutch has been an independent, top-level project of the Apache Software Foundation.

In February 2014, the Common Crawl project adopted Nutch for its open, large-scale web crawl.

While it was once a goal for the Nutch project to release a global large-scale web search engine, that is no longer the case.

Advantages

Nutch has the following advantages over a simple fetcher:

  • A highly scalable and relatively feature-rich crawler.
  • Politeness: the crawler obeys robots.txt rules.
  • Robust and scalable – Nutch can run on a cluster of up to 100 machines.
  • Quality – crawling can be biased to fetch "important" pages first (see the sketch after this list).
  • Scalability

    IBM Research studied the performance of Nutch/Lucene as part of its Commercial Scale Out (CSO) project. It found that a scale-out system such as Nutch/Lucene could achieve a level of performance on a cluster of blades that was not achievable on any scale-up computer, such as the POWER5.

    The ClueWeb09 dataset (used in, for example, TREC) was gathered using Nutch at an average speed of 755.31 documents per second.

  • Hadoop – a Java framework that supports distributed applications running on large clusters.
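
As a rough illustration of the politeness and priority points above, here is a hypothetical fetch-frontier sketch in Java. It is not Nutch's actual implementation (Nutch persists this state in its CrawlDb and generates fetch lists across the cluster): candidates are ordered by an importance score, and an assumed fixed per-host delay keeps the crawler polite.

    import java.net.URI;
    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.PriorityQueue;

    /**
     * Hypothetical sketch (not Nutch's actual code) of a fetch frontier:
     * score-biased ordering plus per-host politeness delays.
     */
    public class FetchFrontier {

      /** A candidate URL with an importance score; higher scores fetch first. */
      record Candidate(String url, double score) {}

      // Highest-scoring candidate is polled first.
      private final PriorityQueue<Candidate> queue =
          new PriorityQueue<>((a, b) -> Double.compare(b.score(), a.score()));

      // Earliest time (epoch millis) at which each host may be contacted again.
      private final Map<String, Long> nextAllowed = new HashMap<>();

      // Assumed politeness delay between requests to the same host.
      private static final long DELAY_MS = 5_000;

      public void add(String url, double score) {
        queue.add(new Candidate(url, score));
      }

      /** Returns the best fetchable URL, or null if every host must wait. */
      public String next() {
        long now = System.currentTimeMillis();
        List<Candidate> deferred = new ArrayList<>();
        String result = null;
        while (!queue.isEmpty()) {
          Candidate c = queue.poll();                  // best score first
          String host = URI.create(c.url()).getHost(); // assumes well-formed absolute URLs
          if (nextAllowed.getOrDefault(host, 0L) <= now) {
            nextAllowed.put(host, now + DELAY_MS);     // reserve the politeness window
            result = c.url();
            break;
          }
          deferred.add(c);                             // host still throttled; retry later
        }
        queue.addAll(deferred);                        // put throttled candidates back
        return result;
      }
    }

The five-second delay and the in-memory data structures are assumptions made for the sketch; a production crawler such as Nutch distributes and persists this state rather than holding it in memory.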

Search engines built with Nutch

  • Common Crawl – publicly available internet-wide crawls; started using Nutch in 2014.
  • Creative Commons Search – an implementation of Nutch, used from 2004 to 2006.
  • DiscoverEd – an open educational resources search prototype developed by Creative Commons.
  • Krugle – uses Nutch to crawl web pages for code, archives, and other technically interesting content.
  • mozDex (inactive).
  • Wikia Search – launched in 2008, shut down in 2009.