A syntactic predicate specifies the syntactic validity of applying a production in a formal grammar and is analogous to a semantic predicate, which specifies the semantic validity of applying a production. It is a simple and effective means of dramatically improving the recognition strength of an LL parser by providing arbitrary lookahead. In their original implementation, syntactic predicates had the form "( α )?" and could only appear on the left edge of a production. The required syntactic condition α could be any valid context-free grammar fragment.
More formally, a syntactic predicate is a form of production intersection, used in parser specifications or in formal grammars. In this sense, the term predicate has the meaning of a mathematical indicator function. If p_{1} and p_{2} are production rules, the language generated by both p_{1} and p_{2} is their set intersection.
As typically defined or implemented, syntactic predicates implicitly order the productions so that predicated productions specified earlier have higher precedence than predicated productions specified later within the same decision. This conveys an ability to disambiguate ambiguous productions because the programmer can simply specify which production should match.
Parsing expression grammars (PEGs), invented by Bryan Ford, extend these simple predicates by allowing "not-predicates" and permitting a predicate to appear anywhere within a production. Moreover, Ford invented packrat parsing to handle these grammars in linear time by employing memoization, at the cost of heap space.
It is possible to support linear-time parsing of predicates as general as those allowed by PEGs, while reducing the memory cost of memoization by avoiding backtracking wherever some more efficient implementation of lookahead suffices. This approach is implemented by ANTLR version 3, which uses deterministic finite automata (DFAs) for lookahead; choosing between transitions of the DFA may require testing a predicate (an approach called "pred-LL(*)" parsing).
The term syntactic predicate was coined by Parr & Quong and differentiates this form of predicate from semantic predicates (also discussed).
Syntactic predicates have been called multi-step matching, parse constraints, and simply predicates in various literature. (See the References section below.) This article uses the term syntactic predicate throughout for consistency and to distinguish them from semantic predicates.
Bar-Hillel et al. showed that the intersection of two regular languages is also a regular language, which is to say that the regular languages are closed under intersection.
The intersection of a regular language and a context-free language is always context-free (the context-free languages are closed under intersection with regular languages), but it has been known at least since Hartmanis that the intersection of two context-free languages is not necessarily context-free (the context-free languages are not closed under intersection with each other). This can be demonstrated easily using the canonical Type 1 language, L = { a^{n} b^{n} c^{n} : n ≥ 1 }:
Let L_{1} = { a^{m} b^{n} c^{n} : m, n ≥ 1 } (Type 2)
Let L_{2} = { a^{n} b^{n} c^{m} : m, n ≥ 1 } (Type 2)
Let L_{3} = L_{1} ∩ L_{2}
Given the strings abcc, aabbc, and aaabbbccc, it is clear that the only string that belongs to both L_{1} and L_{2} (that is, the only one that produces a non-empty intersection) is aaabbbccc.
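This intersection can be checked directly with a small sketch in Python (the helper names are illustrative): each Type 2 language is recognized by counting letter runs, and membership in L_{3} is simply membership in both.

```python
import re

def in_L1(s):
    # L1 = { a^m b^n c^n : m, n >= 1 }: equal runs of b's and c's.
    m = re.fullmatch(r'(a+)(b+)(c+)', s)
    return bool(m) and len(m.group(2)) == len(m.group(3))

def in_L2(s):
    # L2 = { a^n b^n c^m : m, n >= 1 }: equal runs of a's and b's.
    m = re.fullmatch(r'(a+)(b+)(c+)', s)
    return bool(m) and len(m.group(1)) == len(m.group(2))

def in_L3(s):
    # L3 = L1 ∩ L2: imposing both constraints at once forces a^n b^n c^n.
    return in_L1(s) and in_L2(s)

for s in ('abcc', 'aabbc', 'aaabbbccc'):
    print(s, in_L1(s), in_L2(s), in_L3(s))
```

Of the three sample strings, only aaabbbccc satisfies both predicates, so only it lies in the intersection.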
In most formalisms that use syntactic predicates, the syntax of the predicate is non-commutative, which is to say that the operation of predication is ordered. For instance, using the above example, consider the following pseudo-grammar, where X ::= Y PRED Z is understood to mean: "Y produces X if and only if Y also satisfies predicate Z":
S ::= a X
X ::= Y PRED Z
Y ::= a+ BNCN
Z ::= ANBN c+
BNCN ::= b [BNCN] c
ANBN ::= a [ANBN] b
Given the string aaaabbbccc, in the case where Y must be satisfied first (and assuming a greedy implementation), S will generate aX and X in turn will generate aaabbbccc, thereby generating aaaabbbccc. In the case where Z must be satisfied first, ANBN will fail to generate aaaabbb (the counts of a's and b's are unequal), and thus aaaabbbccc is not generated by the grammar. Moreover, if either Y or Z (or both) specify any action to be taken upon reduction (as would be the case in many parsers), the order in which these productions match determines the order in which those side effects occur. Formalisms that vary over time (such as adaptive grammars) may rely on these side effects.
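The pseudo-grammar above can be sketched as a purely recognitive checker in Python (all names are illustrative; each rule tests a whole substring rather than consuming input token by token):

```python
import re

def bncn(s):
    """BNCN ::= b [BNCN] c  -- matches exactly b^n c^n, n >= 1."""
    m = re.fullmatch(r'(b+)(c+)', s)
    return bool(m) and len(m.group(1)) == len(m.group(2))

def anbn(s):
    """ANBN ::= a [ANBN] b  -- matches exactly a^n b^n, n >= 1."""
    m = re.fullmatch(r'(a+)(b+)', s)
    return bool(m) and len(m.group(1)) == len(m.group(2))

def Y(s):
    """Y ::= a+ BNCN (greedy: take all leading a's; safe since BNCN starts with b)."""
    m = re.match(r'a+', s)
    return bool(m) and bncn(s[m.end():])

def Z(s):
    """Z ::= ANBN c+ (split off the trailing c's)."""
    m = re.search(r'c+$', s)
    return bool(m) and anbn(s[:m.start()])

def X(s):
    """X ::= Y PRED Z  -- Y produces X iff the same span also satisfies Z."""
    return Y(s) and Z(s)

def S(s):
    """S ::= a X"""
    return s.startswith('a') and X(s[1:])
```

S('aaaabbbccc') succeeds: after S consumes one a, the remainder aaabbbccc satisfies both Y and Z, while Z fails on the full string. Because this sketch only recognizes, Y-first and Z-first give the same verdict; the ordering matters once reductions trigger side effects, as noted above.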
ANTLR
Parr & Quong give this example of a syntactic predicate:
stat: (declaration)? declaration
    | expression
    ;
which is intended to satisfy the following informally stated constraints of C++:
- If it looks like a declaration, it is; otherwise
- if it looks like an expression, it is; otherwise
- it is a syntax error.
In the first production of rule stat, the syntactic predicate (declaration)? indicates that declaration is the syntactic context that must be present for the rest of that production to succeed. We can interpret the use of (declaration)? as "I am not sure if declaration will match; let me try it out and, if it does not match, I shall try the next alternative." Thus, when encountering a valid declaration, the rule declaration will be recognized twice: once as a syntactic predicate and once during the actual parse, to execute semantic actions.
Of note in the above example is the fact that any code triggered by the acceptance of the declaration production will only occur if the predicate is satisfied.
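This try-then-reparse behaviour can be sketched with a toy recognizer in Python; the token shape (TYPE IDENT ';') and every name below are invented for the example and are not ANTLR's actual machinery:

```python
# Toy token streams: a "declaration" is TYPE IDENT ';'; anything else
# falls through to the expression alternative.
TYPES = {'int', 'char'}

def match_declaration(tokens, actions):
    """Recognize TYPE IDENT ';'; run semantic actions only when asked."""
    ok = len(tokens) == 3 and tokens[0] in TYPES and tokens[2] == ';'
    if ok and actions:
        print(f'declared {tokens[1]} of type {tokens[0]}')  # the "action"
    return ok

def parse_stat(tokens):
    """stat : (declaration)? declaration | expression ;"""
    # Syntactic predicate: speculative recognition with actions disabled.
    if match_declaration(tokens, actions=False):
        # Predicate satisfied: parse declaration again, now executing actions.
        match_declaration(tokens, actions=True)
        return 'declaration'
    # Otherwise fall through to the next alternative.
    return 'expression'
```

Calling parse_stat(['int', 'x', ';']) matches declaration twice: silently as the predicate, then again with its action (the print) enabled.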
The language L = { a^{n} b^{n} c^{n} : n ≥ 1 } can be represented in various grammars and formalisms as follows:
Parsing Expression Grammars
S ← &(A !b) a+ B !c
A ← a A? b
B ← b B? c
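Under standard PEG semantics, the grammar above can be sketched as a recursive-descent recognizer in Python (a minimal sketch; function names are illustrative). Each rule returns the index just past its match, or None on failure; the lookahead &(A !b) tests the input without consuming it.

```python
def A(s, i):
    """A <- 'a' A? 'b'  -- returns the index just past the match, or None."""
    if i < len(s) and s[i] == 'a':
        j = A(s, i + 1)
        j = i + 1 if j is None else j      # A? : optional match, no re-try
        if j < len(s) and s[j] == 'b':
            return j + 1
    return None

def B(s, i):
    """B <- 'b' B? 'c'"""
    if i < len(s) and s[i] == 'b':
        j = B(s, i + 1)
        j = i + 1 if j is None else j
        if j < len(s) and s[j] == 'c':
            return j + 1
    return None

def S(s):
    """S <- &(A !b) 'a'+ B !c  -- True iff s = a^n b^n c^n, n >= 1."""
    j = A(s, 0)                            # &(A !b): zero-width lookahead...
    if j is None or (j < len(s) and s[j] == 'b'):
        return False                       # ...A must match, not followed by b
    i = 0
    while i < len(s) and s[i] == 'a':      # 'a'+
        i += 1
    if i == 0:
        return False
    k = B(s, i)
    return k == len(s)                     # B must reach end of input, so !c holds
```

The predicate enforces equal counts of a's and b's, and B enforces equal counts of b's and c's, so together they recognize exactly a^n b^n c^n.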
§-Calculus
Using a bound predicate:
S → {A}^{B}
A → X 'c+'
X → 'a' [X] 'b'
B → 'a+' Y
Y → 'b' [Y] 'c'
Using two free predicates:
A → <'a+'>_{a} <'b+'>_{b} Ψ(a b)^{X} <'c+'>_{c} Ψ(b c)^{Y}
X → 'a' [X] 'b'
Y → 'b' [Y] 'c'
Conjunctive Grammars
(Note: the following example actually generates L = { a^{n} b^{n} c^{n} : n ≥ 0 }, but is included here because it is the example given by the inventor of conjunctive grammars.):
S → AB & DC
A → aA | ε
B → bBc | ε
C → cC | ε
D → aDb | ε
Perl 6 rules
rule S { <before <A> <!before b>> a+ <B> <!before c> }
rule A { a <A>? b }
rule B { b <B>? c }
Although by no means an exhaustive list, the following parsers and grammar formalisms employ syntactic predicates:
ANTLR (Parr & Quong)
As originally implemented, syntactic predicates sit on the leftmost edge of a production, such that the production to the right of the predicate is attempted if and only if the syntactic predicate first accepts the next portion of the input stream. Although ordered, the predicates are checked first, with parsing of a clause continuing if and only if the predicate is satisfied, and semantic actions occurring only in non-predicates.
Augmented Pattern Matcher (Balmas)
Balmas refers to syntactic predicates as "multi-step matching" in her paper on APM. As an APM parser parses, it can bind substrings to a variable and later check this variable against other rules, continuing to parse if and only if that substring is acceptable to further rules.
Parsing expression grammars (Ford)
Ford's PEGs have syntactic predicates expressed as the and-predicate and the not-predicate.
§-Calculus (Jackson)
In the §-Calculus, syntactic predicates are originally called simply predicates, but are later divided into bound and free forms, each with different input properties.
Perl 6 rules
Perl 6 introduces a generalized tool for describing a grammar called rules, which are an extension of Perl 5's regular expression syntax. Predicates are introduced via a lookahead mechanism called before, either with "<before ...>" or "<!before ...>" (that is: "not before"). Perl 5 also has such lookahead, but it can only encapsulate Perl 5's more limited regexp features.
ProGrammar (NorKen Technologies)
ProGrammar's GDL (Grammar Definition Language) makes use of syntactic predicates in a form called parse constraints.
Conjunctive and Boolean Grammars (Okhotin)
Conjunctive grammars, first introduced by Okhotin, introduce the explicit notion of conjunction-as-predication. Okhotin's later work on conjunctive and Boolean grammars is the most thorough treatment of this formalism to date.