In computer science, the Earley parser is an algorithm for parsing strings that belong to a given context-free language, though (depending on the variant) it may suffer problems with certain nullable grammars. The algorithm, named after its inventor, Jay Earley, is a chart parser that uses dynamic programming; it is mainly used for parsing in computational linguistics. It was first introduced in his dissertation in 1968 (and later appeared in abbreviated, more legible form in a journal).
Earley parsers are appealing because they can parse all context-free languages, unlike LR parsers and LL parsers, which are more typically used in compilers but which can only handle restricted classes of languages. The Earley parser executes in cubic time O(n³) in the general case, where n is the length of the parsed string, in quadratic time O(n²) for unambiguous grammars, and in linear time for all deterministic context-free grammars.
Earley recogniser
The following algorithm describes the Earley recogniser. The recogniser can be easily modified to create a parse tree as it recognises, and in that way can be turned into a parser.
The algorithm
In the following descriptions, α, β, and γ represent any string of terminals/nonterminals (including the empty string), X and Y represent single nonterminals, and a represents a terminal symbol.
Earley's algorithm is a top-down dynamic programming algorithm. In the following, we use Earley's dot notation: given a production X → αβ, the notation X → α • β represents a condition in which α has already been parsed and β is expected.
Input position 0 is the position prior to input. Input position n is the position after accepting the nth token. (Informally, input positions can be thought of as locations at token boundaries.) For every input position, the parser generates a state set. Each state is a tuple (X → α • β, i), consisting of
- the production currently being matched (X → α β);
- the current position in that production (visually represented by the dot •);
- the position i in the input at which the matching of this production began: the origin position.
(Earley's original algorithm included a look-ahead in the state; later research showed this to have little practical effect on the parsing efficiency, and it has subsequently been dropped from most implementations.)
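A state of this kind might be encoded as follows. This is a minimal sketch; the type name and field names are illustrative, not taken from Earley's presentation:

```python
from typing import NamedTuple, Tuple

class State(NamedTuple):
    head: str               # X, the production's left-hand side
    rhs: Tuple[str, ...]    # the body α β as a flat sequence of symbols
    dot: int                # position of • within rhs
    origin: int             # i, the input position where the match began

# (S → S • + M, 0): the dot sits after the first symbol of the body
item = State("S", ("S", "+", "M"), 1, 0)
```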
The state set at input position k is called S(k). The parser is seeded with S(0) consisting of only the top-level rule. The parser then repeatedly executes three operations: prediction, scanning, and completion.
- Prediction: for every state in S(k) of the form (X → α • Y β, j) (where j is the origin position as above), add (Y → • γ, k) to S(k) for every production in the grammar with Y on the left-hand side (Y → γ).
- Scanning: if a is the next symbol in the input stream, for every state in S(k) of the form (X → α • a β, j), add (X → α a • β, j) to S(k+1).
- Completion: for every state in S(k) of the form (Y → γ •, j), find all states in S(j) of the form (X → α • Y β, i) and add (X → α Y • β, i) to S(k).
Duplicate states are never added to a state set, only new ones. These three operations are repeated until no new states can be added to the set. The set is generally implemented as a queue of states to process, with the operation to be performed depending on what kind of state it is.
The algorithm accepts if (X → γ •, 0) ends up in S(n), where (X → γ) is the top-level rule and n the input length; otherwise it rejects.
Pseudocode
Adapted from Speech and Language Processing by Daniel Jurafsky and James H. Martin.
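As a concrete rendering of the description above, here is a minimal Python sketch of the recogniser. The grammar encoding, the name earley_recognise, and the tuple layout of items are illustrative assumptions; and, like Earley's original formulation, this sketch mishandles certain nullable rules (see the introduction).

```python
def earley_recognise(words, grammar, start="P"):
    """Earley recogniser: a sketch, not a tuned implementation.

    `grammar` maps each nonterminal to a list of right-hand sides
    (tuples of symbols); any symbol not in `grammar` is a terminal.
    Items are tuples (head, rhs, dot, origin).  Returns True if the
    whole input derives from `start`.
    """
    n = len(words)
    S = [[] for _ in range(n + 1)]          # S[k] = state set at position k

    def add(k, item):
        if item not in S[k]:                # duplicates are never re-added
            S[k].append(item)

    for rhs in grammar[start]:              # seed S(0) with the top-level rule
        add(0, (start, tuple(rhs), 0, 0))

    for k in range(n + 1):
        for head, rhs, dot, origin in S[k]:        # S[k] may grow as we iterate
            if dot < len(rhs):
                sym = rhs[dot]
                if sym in grammar:                 # prediction
                    for prod in grammar[sym]:
                        add(k, (sym, tuple(prod), 0, k))
                elif k < n and words[k] == sym:    # scanning
                    add(k + 1, (head, rhs, dot + 1, origin))
            else:                                  # completion
                for h2, r2, d2, o2 in list(S[origin]):
                    if d2 < len(r2) and r2[d2] == head:
                        add(k, (h2, r2, d2 + 1, o2))

    # accept iff a finished top-level item with origin 0 is in S(n)
    return any(h == start and d == len(r) and o == 0
               for h, r, d, o in S[n])
```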
Example
Consider the following simple grammar for arithmetic expressions, with P as the start symbol and number the only terminal besides + and *:

```
P → S
S → S + M | M
M → M * T | T
T → number
```
With the input:
2 + 3 * 4

This is the sequence of state sets:

```
S(0): • 2 + 3 * 4
(state no.)  Production    (Origin)  # Comment
(1)          P → • S       (0)       # start rule
(2)          S → • S + M   (0)       # predict from (1)
(3)          S → • M       (0)       # predict from (1)
(4)          M → • M * T   (0)       # predict from (3)
(5)          M → • T       (0)       # predict from (3)
(6)          T → • number  (0)       # predict from (5)

S(1): 2 • + 3 * 4
(1)          T → number •  (0)       # scan from S(0)(6)
(2)          M → T •       (0)       # complete from (1) and S(0)(5)
(3)          M → M • * T   (0)       # complete from (2) and S(0)(4)
(4)          S → M •       (0)       # complete from (2) and S(0)(3)
(5)          S → S • + M   (0)       # complete from (4) and S(0)(2)
(6)          P → S •       (0)       # complete from (4) and S(0)(1)

S(2): 2 + • 3 * 4
(1)          S → S + • M   (0)       # scan from S(1)(5)
(2)          M → • M * T   (2)       # predict from (1)
(3)          M → • T       (2)       # predict from (1)
(4)          T → • number  (2)       # predict from (3)

S(3): 2 + 3 • * 4
(1)          T → number •  (2)       # scan from S(2)(4)
(2)          M → T •       (2)       # complete from (1) and S(2)(3)
(3)          M → M • * T   (2)       # complete from (2) and S(2)(2)
(4)          S → S + M •   (0)       # complete from (2) and S(2)(1)
(5)          S → S • + M   (0)       # complete from (4) and S(0)(2)
(6)          P → S •       (0)       # complete from (4) and S(0)(1)

S(4): 2 + 3 * • 4
(1)          M → M * • T   (2)       # scan from S(3)(3)
(2)          T → • number  (4)       # predict from (1)

S(5): 2 + 3 * 4 •
(1)          T → number •  (4)       # scan from S(4)(2)
(2)          M → M * T •   (2)       # complete from (1) and S(4)(1)
(3)          M → M • * T   (2)       # complete from (2) and S(2)(2)
(4)          S → S + M •   (0)       # complete from (2) and S(2)(1)
(5)          S → S • + M   (0)       # complete from (4) and S(0)(2)
(6)          P → S •       (0)       # complete from (4) and S(0)(1)
```

The state (P → S •, 0) represents a completed parse. This state also appears in S(1) and S(3), since the prefixes 2 and 2 + 3 are themselves complete sentences of the grammar.
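As a check on the walkthrough, the toy grammar can be fed to the earley_recognise sketch from the Pseudocode section; the digit-to-number mapping below stands in for a lexer and is an assumption of this sketch:

```python
grammar = {
    "P": [("S",)],
    "S": [("S", "+", "M"), ("M",)],
    "M": [("M", "*", "T"), ("T",)],
    "T": [("number",)],
}

# A trivial stand-in lexer: every digit token becomes the terminal "number".
tokens = ["number" if t.isdigit() else t for t in "2 + 3 * 4".split()]

assert earley_recognise(tokens, grammar, start="P")   # the parse succeeds
```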
Constructing the parse forest
Earley's dissertation briefly describes an algorithm for constructing parse trees by adding a set of pointers from each non-terminal in an Earley item back to the items that caused it to be recognized. But Tomita noticed that this does not take into account the relations between symbols, so if we consider the grammar S → SS | b and the string bbb, it only notes that each S can match one or two b's, and thus produces spurious derivations for bb and bbbb as well as the two correct derivations for bbb.
It is relatively straightforward to take the complete items from the chart and search through them from the top down, assembling the ones that fit together to make the parse forest.
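As an illustration of that top-down search, here is a sketch that assumes the chart has been reduced to a set of completed (symbol, origin, end) triples. It enumerates every tree, and it assumes a grammar without cycles or nullable rules; all names are illustrative.

```python
def build_trees(words, grammar, completed, sym, i, j, active=None):
    """All parse trees deriving words[i:j] from `sym` (a sketch).

    `completed` is the set of (symbol, origin, end) triples read off the
    chart's finished items; `grammar` maps nonterminals to right-hand
    sides.  For a cycle-free, non-nullable grammar, a re-entrant call on
    the same (sym, i, j) cannot be part of a valid tree, so it is cut.
    """
    active = active or set()
    if (sym, i, j) in active:
        return []
    active = active | {(sym, i, j)}
    trees = []
    for rhs in grammar.get(sym, ()):
        for children in _expand(words, grammar, completed, rhs, 0, i, j, active):
            trees.append((sym, children))
    return trees

def _expand(words, grammar, completed, rhs, k, pos, j, active):
    """All ways the suffix rhs[k:] can derive words[pos:j]."""
    if k == len(rhs):
        return [[]] if pos == j else []
    results = []
    sym = rhs[k]
    if sym not in grammar:                      # terminal: must match the next token
        if pos < j and words[pos] == sym:
            for rest in _expand(words, grammar, completed, rhs, k + 1, pos + 1, j, active):
                results.append([sym] + rest)
    else:                                       # nonterminal: consult the chart
        for (s, a, b) in completed:
            if s == sym and a == pos and b <= j:
                for sub in build_trees(words, grammar, completed, sym, a, b, active):
                    for rest in _expand(words, grammar, completed, rhs, k + 1, b, j, active):
                        results.append([sub] + rest)
    return results
```

Because each split point is validated against both the grammar and the chart, the bbb example above yields exactly its two derivations here, with no spurious trees for bb or bbbb.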
Another method is to build the parse forest as you go, augmenting each Earley item with a pointer to a shared packed parse forest (SPPF) node labelled with a triple (s, i, j) where s is a symbol or an LR(0) item (production rule with dot), and i and j give the section of the input string derived by this node. A node's contents are either a pair of child pointers giving a single derivation, or a list of "packed" nodes each containing a pair of pointers and representing one derivation. SPPF nodes are unique (there is only one with a given label), but may contain more than one derivation for ambiguous parses. So even if an operation does not add an Earley item (because it already exists), it may still add a derivation to the item's parse forest.
Note also that SPPF nodes are never labelled with a completed LR(0) item: instead they are labelled with the symbol that is produced, so that all derivations are combined under one node regardless of which alternative production they come from.
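The node structure these two paragraphs describe might be sketched as follows; the class, the families list, and the get_node registry are illustrative assumptions rather than a fixed API:

```python
class SPPFNode:
    """A shared packed parse forest node (illustrative sketch).

    The label is a triple (s, i, j): s is a symbol or an LR(0) item, and
    words[i:j] is the stretch of input this node derives.  Here every
    derivation is stored as a "packed" family of children, which subsumes
    the single-derivation case; more than one family means an ambiguity.
    """
    def __init__(self, label):
        self.label = label
        self.families = []                    # each family: a tuple of child nodes

    def add_family(self, children):
        if children not in self.families:     # pack a new derivation exactly once
            self.families.append(children)

nodes = {}                                    # one node per label, ever

def get_node(label):
    """Return the unique node with this label, creating it on first use."""
    if label not in nodes:
        nodes[label] = SPPFNode(label)
    return nodes[label]
```

Even when an Earley item already exists, calling get_node followed by add_family can still record a new derivation, which is exactly the behaviour described above.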