
DBpedia

Repository: github.com/dbpedia/
Written in: Scala, Java, VSP
Developer(s): Leipzig University, University of Mannheim, OpenLink Software
Initial release: 10 January 2007
Stable release: DBpedia 3.11 (a.k.a. DBpedia 2015-04 / DBpedia 2015 A), September 2015
Operating system: Virtuoso Universal Server

DBpedia (from "DB" for "database") is a project aiming to extract structured content from the information created in the Wikipedia project. This structured information is made available on the World Wide Web. DBpedia allows users to semantically query relationships and properties of Wikipedia resources, including links to other related datasets. Tim Berners-Lee described DBpedia as one of the most famous parts of the decentralized Linked Data effort.

Background

The project was started by people at the Free University of Berlin and Leipzig University, in collaboration with OpenLink Software, and the first publicly available dataset was published in 2007. The dataset is made available under free licences (CC BY-SA), allowing others to reuse it; it does not, however, use an open data licence that would waive the sui generis database rights.

Wikipedia articles consist mostly of free text, but also include structured information embedded in the articles, such as "infobox" tables (the pull-out panels that appear in the top right of the default view of many Wikipedia articles, or at the start of the mobile versions), categorisation information, images, geo-coordinates and links to external Web pages. This structured information is extracted and put in a uniform dataset which can be queried.

Dataset

In September 2014, version 2014 was released. Compared to previous versions, one of the main changes was the way abstract texts were extracted: by running a local mirror of Wikipedia and retrieving the rendered abstracts from it, the extracted texts became considerably cleaner. A new data set containing content extracted from Wikimedia Commons was also introduced. The whole DBpedia data set describes 4.58 million entities, of which 4.22 million are classified in a consistent ontology, including 1,445,000 persons, 735,000 places, 123,000 music albums, 87,000 films, 19,000 video games, 241,000 organizations, 251,000 species and 6,000 diseases. The data set features labels and abstracts for these entities in up to 125 different languages, 25.2 million links to images, and 29.8 million links to external web pages. In addition, it contains around 50 million links into other RDF datasets, 80.9 million links to Wikipedia categories, and 41.2 million links to YAGO2 categories. The DBpedia project uses the Resource Description Framework (RDF) to represent the extracted information; the data set consists of 3 billion RDF triples, of which 580 million were extracted from the English edition of Wikipedia and 2.46 billion from other language editions.

From this data set, information spread across multiple pages can be extracted; for example, book authorship can be assembled from pages about the work or about the author.

One of the challenges in extracting information from Wikipedia is that the same concepts can be expressed using different parameters in infobox and other templates, such as |birthplace= and |placeofbirth=. Because of this, queries about where people were born would have to search for both of these properties in order to get more complete results. As a result, the DBpedia Mapping Language has been developed to help in mapping these properties to an ontology while reducing the number of synonyms. Due to the large diversity of infoboxes and properties in use on Wikipedia, the process of developing and improving these mappings has been opened to public contributions.
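To make the difference concrete, the sketch below contrasts a query over the raw infobox properties with one over the mapped ontology property, using Python and the SPARQLWrapper library against DBpedia's public SPARQL endpoint. The endpoint URL, the dbp:/dbo: prefixes and the specific property names are assumptions chosen for illustration, not a definitive interface.

# A minimal sketch (not the project's own tooling): querying birthplaces
# against DBpedia's public SPARQL endpoint with the SPARQLWrapper library.
# Endpoint URL, prefixes and property names are assumptions for illustration.
from SPARQLWrapper import SPARQLWrapper, JSON

endpoint = SPARQLWrapper("https://dbpedia.org/sparql")

# Without the mappings, a query has to union the synonymous raw infobox
# properties (dbp:birthplace, dbp:placeofbirth, ...) to get complete results.
raw_query = """
PREFIX dbp: <http://dbpedia.org/property/>
SELECT ?person ?place WHERE {
  { ?person dbp:birthplace   ?place . }
  UNION
  { ?person dbp:placeofbirth ?place . }
} LIMIT 10
"""

# With the mapping-based extraction, both parameters are mapped to a single
# ontology property, so one triple pattern suffices.
mapped_query = """
PREFIX dbo: <http://dbpedia.org/ontology/>
SELECT ?person ?place WHERE {
  ?person dbo:birthPlace ?place .
} LIMIT 10
"""

for query in (raw_query, mapped_query):
    endpoint.setQuery(query)
    endpoint.setReturnFormat(JSON)
    for row in endpoint.query().convert()["results"]["bindings"]:
        print(row["person"]["value"], row["place"]["value"])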

Examples

DBpedia extracts factual information from Wikipedia pages, allowing users to find answers to questions where the information is spread across many different Wikipedia articles. Data is accessed using an SQL-like query language for RDF called SPARQL. For example, imagine you were interested in the Japanese shōjo manga series Tokyo Mew Mew, and wanted to find the genres of other works written by its illustrator. DBpedia combines information from Wikipedia's entries on Tokyo Mew Mew and Mia Ikumi and on works such as Super Doll Licca-chan and Koi Cupid. Since DBpedia normalises information into a single database, such a query can be asked without needing to know exactly which entry carries each fragment of information, and will return the related genres.
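A sketch of such a query, written in Python with the SPARQLWrapper library against the public SPARQL endpoint, is shown below; the dbo:illustrator and dbo:genre property names and the endpoint URL are assumptions made for illustration, not necessarily the exact query used in the original example.

# Sketch of the Tokyo Mew Mew example: find the series' illustrator, every
# other work by that person, and (optionally) each work's genre.
# Endpoint, prefixes and property names are assumptions for illustration.
from SPARQLWrapper import SPARQLWrapper, JSON

query = """
PREFIX dbo: <http://dbpedia.org/ontology/>
PREFIX dbr: <http://dbpedia.org/resource/>
SELECT ?who ?work ?genre WHERE {
  dbr:Tokyo_Mew_Mew dbo:illustrator ?who .
  ?work dbo:illustrator ?who .
  OPTIONAL { ?work dbo:genre ?genre . }
}
"""

endpoint = SPARQLWrapper("https://dbpedia.org/sparql")
endpoint.setQuery(query)
endpoint.setReturnFormat(JSON)
for row in endpoint.query().convert()["results"]["bindings"]:
    print(row["who"]["value"], row["work"]["value"],
          row.get("genre", {}).get("value", "(no genre listed)"))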

Use cases

DBpedia has a broad scope of entities covering different areas of human knowledge. This makes it a natural hub for connecting datasets, where external datasets could link to its concepts. The DBpedia dataset is interlinked on the RDF level with various other Open Data datasets on the Web. This enables applications to enrich DBpedia data with data from these datasets. As of September 2013, there were more than 45 million interlinks between DBpedia and external datasets including: Freebase, OpenCyc, UMBEL, GeoNames, MusicBrainz, CIA World Fact Book, DBLP, Project Gutenberg, DBtune Jamendo, Eurostat, UniProt, Bio2RDF, and US Census data. The Thomson Reuters initiative OpenCalais, the Linked Open Data project of the New York Times, the Zemanta API and DBpedia Spotlight also include links to DBpedia. The BBC uses DBpedia to help organize its content. Faviki uses DBpedia for semantic tagging.

Amazon provides a DBpedia Public Data Set that can be integrated into Amazon Web Services applications.

DBpedia Spotlight

In June 2010 researchers from the Web Based Systems Group at the Free University of Berlin started a project named DBpedia Spotlight, a tool for annotating mentions of DBpedia resources in text. This provides a solution for linking unstructured information sources to the Linked Open Data cloud through DBpedia. DBpedia Spotlight performs named entity extraction, including entity detection and name resolution (in other words, disambiguation). It can also be used for named entity recognition, amongst other information extraction tasks. DBpedia Spotlight aims to be customizable for many use cases. Instead of focusing on a few entity types, the project strives to support the annotation of all 3.5 million entities and concepts from more than 320 classes in DBpedia.

DBpedia Spotlight is publicly available as a web service for testing purposes and as a Java/Scala API licensed under the Apache License. The DBpedia Spotlight distribution also includes a jQuery plugin that allows developers to annotate pages anywhere on the Web by adding one line to their page, and client libraries are available for Java and PHP. The tool handles various languages through its demo page and web services; internationalization is supported for any language that has a Wikipedia edition.
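As a rough illustration of the web service, the sketch below sends a sentence to the annotation endpoint over HTTP using Python's requests library; the endpoint URL and the parameter and response field names are assumptions based on the public demo service rather than a definitive description of the API.

# A minimal sketch of calling the DBpedia Spotlight annotation web service.
# The endpoint URL and the parameter and field names used here are
# assumptions based on the public demo service.
import requests

SPOTLIGHT_URL = "https://api.dbpedia-spotlight.org/en/annotate"  # assumed endpoint

text = "DBpedia extracts structured content from Wikipedia."
response = requests.get(
    SPOTLIGHT_URL,
    params={"text": text, "confidence": 0.5},
    headers={"Accept": "application/json"},
    timeout=30,
)
response.raise_for_status()

# Each annotation links a surface form in the text to a DBpedia resource URI.
for resource in response.json().get("Resources", []):
    print(resource["@surfaceForm"], "->", resource["@URI"])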
