Bottom-up Parsing

Bottom-up parsing is more general than (deterministic) top-down parsing, yet just as efficient; it is the preferred method in practice and is what most parser-generator tools use.

Revert to the (unambiguous) natural grammar for our example

E --> T + E | T
T --> int * T | int | (E)
Consider the string int * int + int

Bottom-up parsing reduces a string to the start symbol by inverting productions

int * int + int                T --> int
int * T + int                  T --> int * T
T + int                        T --> int
T + T                          E --> T
T + E                          E --> T + E
E
Each line lists the production that is inverted to obtain the next line. Read top-to-bottom, these steps are reductions; read bottom-to-top, they are productions, i.e., a derivation.

What's the point? The reductions, read backwards (bottom-to-top), trace a rightmost derivation: at each step of that derivation, the rightmost non-terminal is replaced.

Important fact #1 about bottom-up parsing
Bottom-up parsing traces a right-most derivation in reverse

Picture

int * int + int
 |     |     |
 |     T     |
  \   /      |
    T        T
     \      /
      \    /
        E

Consequence: Whenever we reduce \alpha\beta\omega to \alpha{}X\omega (using X --> \beta), it must be true that \omega is a string of terminals: since \alpha{}X\omega --> \alpha\beta\omega is a step in a rightmost derivation, X is the rightmost non-terminal, so nothing to its right has been expanded yet.

Shift-reduce parsing : strategy used by bottom-up parsers

Important fact #1 has an interesting consequence.

Idea: split the string into two substrings. The right substring (a string of terminals, by Important fact #1) has not yet been examined by the parser; the left substring contains terminals and non-terminals. The dividing point is marked by |.

Example : int * int + int
Assume an oracle tells us whether to shift or reduce

|int * int + int                      shift
int |* int + int                      shift
int * |int + int                      shift
int * int| + int                      reduce T --> int
int * T| + int                        reduce T --> int * T
T| + int                              shift
T +| int                              shift
T + int|                              reduce T --> int
T + T|                                reduce E --> T
T + E|                                reduce E --> T + E
E|                                    accept
The left string can be implemented as a stack, because we only shift onto, and reduce at, the right end (a suffix) of the left string. The top of the stack is immediately to the left of the |.

shift pushes a terminal on the stack

reduce pops the production's rhs off the stack and pushes the production's lhs (a non-terminal) onto the stack
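
As a concrete illustration, here is a minimal sketch of these stack mechanics in Python. It is not part of the notes: the grammar encoding, the parse helper, and the hard-coded oracle action list are all assumptions made for illustration.

GRAMMAR = [
    ("E", ["T", "+", "E"]),    # production 0
    ("E", ["T"]),              # production 1
    ("T", ["int", "*", "T"]),  # production 2
    ("T", ["int"]),            # production 3
]

def parse(tokens, actions):
    stack, rest = [], list(tokens)
    for act in actions:
        if act == "shift":
            stack.append(rest.pop(0))        # shift: push the next terminal
        else:                                # otherwise act is a production index
            lhs, rhs = GRAMMAR[act]
            assert stack[-len(rhs):] == rhs  # the rhs must sit on top of the stack
            del stack[-len(rhs):]            # reduce: pop the rhs ...
            stack.append(lhs)                # ... and push the lhs
        print(" ".join(stack), "|", " ".join(rest))
    return stack == ["E"] and not rest

tokens = ["int", "*", "int", "+", "int"]
# The oracle, matching the trace above: shift x3, T --> int, T --> int * T,
# shift x2, T --> int, E --> T, E --> T + E
actions = ["shift", "shift", "shift", 3, 2, "shift", "shift", 3, 1, 0]
print(parse(tokens, actions))                # True

Running it prints each intermediate configuration as stack | rest, mirroring the trace above.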

In a given state, more than one action (shift or reduce) may lead to a valid parse.

If in some state it is legal both to shift and to reduce, there is a shift-reduce conflict. We need techniques to remove such conflicts.

If it is legal to reduce using two different productions, there is a reduce-reduce conflict. Reduce-reduce conflicts are always bad and usually indicate a serious problem with the grammar.

In either case, the parser does not know what to do, and we either need to rewrite the grammar, or need to give the parser a hint on what to do in this situation.

Deciding when to shift and when to reduce

Solution 1: backtracking: try both shift and reduce (exponential search space)

Solution 2: predictive LR(k)

Example : int * int + int

|int * int  + int                     shift
int |* int + int                      shift
At this point, we could reduce (T --> int), giving T |* int + int. But this would be a fatal mistake: no production has a rhs beginning with T *, so there is no way the result can ever be reduced to E.

Intuition: want to reduce only if the result can still be reduced to the start symbol (E)

Assume a right-most derivation:

S -->* \alpha X \omega --> \alpha \beta \omega
Then \alpha \beta is a handle of \alpha \beta \omega

Is int a handle of int*int+int?

E --> T+E --> T+T --> T+int --> int*T+int --> int*int+int
The leftmost int is produced by T --> int*T, not by T --> int alone. The last step of this rightmost derivation is int*T+int --> int*int+int, so the handle of int*int+int is the second int (\alpha = int *, \beta = int); the first int (\alpha = \epsilon) is not a handle of int*int+int.

Is int*T a handle of int*T+int? Yes: the derivation step T+int --> int*T+int (via T --> int*T, with \alpha = \epsilon) makes int*T the handle of int*T+int, the form the parser reaches after the first reduction.

Is int*T+E a handle, i.e., can a correct parse of int*int+int ever reach the form int*T+E?

E --> T+E --> T + T --> ...
No. In a rightmost derivation, the rightmost E is rewritten to T at the very beginning; reversing this, T+T is reduced back to T+E only after int*T has already been reduced to T. Hence int*T+E is never a right-sentential form, and int*T+E is not a handle.

Is \epsilon a handle of int*int+int?

E --> T+E --> ...
No. We never produce \epsilon at the leftmost position in this rightmost derivation (indeed, this grammar has no \epsilon-productions), so reducing by an \epsilon-production at the left end would be a mistake.

Handles formalize the intuition: A handle is a reduction that allows further reductions back to the start symbol.

We only want to reduce at handles.

Note: we have said what a handle is, not how to find handles.

Important fact #2: In shift-reduce parsing, handles appear only at the top of the stack, never inside. Proof sketch, by induction on the number of reduce moves:

  1. True initially: the stack is empty.
  2. Immediately after reducing a handle, the rightmost non-terminal sits on top of the stack. Because we trace a rightmost derivation in reverse, the next handle is never to the left of that rightmost non-terminal; so a sequence of shift moves reaches the next handle, which again appears at the top of the stack.

Hence a shift-reduce parser only ever shifts or reduces at the top of the stack. But how do we recognize handles, i.e., decide when to shift and when to reduce?

Recognizing handles

For an arbitrary grammar, no efficient algorithm is known for recognizing handles. Instead, we use heuristics to guess which positions are handles; on certain classes of CFGs, the heuristics are always correct.

Strategy: try and rule out non-handles. Even better: rule out prefixes that can never result in handles.

int|*int+int: With lookahead one, the parser knows that the next token is *. If the string begins with T*, it has no hope of being reduced to E (for any possible string of terminals that follows). In other words, T * is not a viable prefix.

If there is a valid rightmost derivation of the string, S --> ... --> \alpha{}X\omega --> \alpha\beta\omega --> ... --> string, then \alpha\beta and all of its prefixes are viable prefixes. E.g., \epsilon is always a viable prefix.

Example:

S --> E$
E --> T
T --> int
A rightmost derivation is: S --> E$ --> T$ --> int$. Here E$ is a viable prefix (from the step S --> E$, with X = S and \beta = E$), and so are its prefixes \epsilon and E. T is a viable prefix (from E --> T, with X = E and \beta = T). But T$ is not a viable prefix: in the sentential form T$, the handle is just T, and a viable prefix cannot extend past the right end of the handle. Similarly, int$ is not a viable prefix.

Examples. Consider the grammar below.

S --> E$
E --> T+E | T
T --> int*T | int | (E)
Exercise: which strings of terminals and non-terminals are viable prefixes of this grammar?

In other words, \alpha is a viable prefix iff there is an \omega such that \alpha|\omega is an intermediate state of a shift-reduce parser in a valid parse of some string. Here \alpha is the stack and \omega is the rest of the input: the parser may look ahead at the first token(s) of \omega, but it does not know all of \omega; it does know the entire stack.

What does this mean? A viable prefix does not extend past the right end of the handle. It's a viable prefix because it is a prefix of the handle. As long as a parser has viable prefixes on the stack, no parsing error has been detected.

Predictive shift-reduce parsing: based on the lookahead, check whether any of the shift or reduce choices fails to produce a viable prefix; if so, discard that choice. For example, with one token of lookahead, consider the configuration

\alpha\beta|a\omega

where X --> \beta is a candidate reduction: shifting leads to the stack \alpha\beta{}a, while reducing leads to the stack \alpha{}X. If either of these is not a viable prefix, that choice can be discarded.

Conversely, if a string is parseable by the bottom-up parser, then every state of the stack during the parse is a viable prefix.

Consider the string (int * int): every stack that arises during its parse, namely (, (int, (int*, (int*int, (int*T, (T, (E, (E), T, E, is a viable prefix.

Venn diagram: All CFGs \superset Unambiguous CFGs \superset LR(1) CFGs \superset LALR(1) CFGs \superset SLR \superset LR(0)

LR(k) grammars are quite general, but most practical grammars are LALR(k). SLR(k) ["simple LR" grammars] are a further simplification of LALR(k).

LL(1) is a subset of LR(1) but can cut across LR(0), SLR, LALR(1).

Consider the input (int) for the grammar:

E --> T + E | T
T --> int * T | int | (E)

The stack may hold many prefixes of RHS's:

Prefix(1) Prefix(2) Prefix(3) ... Prefix(n-1) Prefix(n)

Let Prefix(i) be a prefix of the rhs of X(i) --> \alpha_i. Then Prefix(i) will eventually reduce to X(i), and the missing part of \alpha_{i-1} starts with X(i); i.e., there is a production X(i-1) --> Prefix(i-1) X(i) \beta for some \beta. Recursively, Prefix(k+1) ... Prefix(n-1) Prefix(n) eventually reduces to the missing part of \alpha_k. For example, while parsing the input (int), the stack may be ( int, where ( is a prefix of the rhs of T --> (E) and int is a (complete) prefix of the rhs of T --> int.

Important fact #3 about bottom-up parsing: For any grammar, the set of viable prefixes is a regular language. Concretely, every viable prefix is a concatenation of prefixes of RHS's that chain together as described above; these "prefixes with a position" are exactly the items introduced below.

For example, the language of viable prefixes for the example grammar:

S --> \epsilon | [S]
is
"["* | "["*S | "["+S"]"
(\epsilon is covered by "["*.) Notice that [[S]] and [] are not viable prefixes.

The problem in recognizing viable prefixes is that the stack has only bits and pieces of the RHS of productions

These bits and pieces are always prefixes of RHS of some production(s).

An item is a production with a "." somewhere on the RHS. E.g., the items for T --> (E) are T --> .(E), T --> (.E), T --> (E.), and T --> (E). (dot at the very end).

The only item for an \epsilon-production X --> \epsilon is X --> . (just the dot). Items are often called "LR(0) items".
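
As a tiny illustration (the (lhs, rhs, dot) triple encoding is an assumption of this sketch, not the notes' notation), the items of a production are easy to enumerate in Python:

# Represent an item as (lhs, rhs, dot): the "." sits just before rhs[dot].
def items(lhs, rhs):
    return [(lhs, rhs, dot) for dot in range(len(rhs) + 1)]

print(items("T", ("(", "E", ")")))
# [('T', ('(', 'E', ')'), 0), ('T', ('(', 'E', ')'), 1),
#  ('T', ('(', 'E', ')'), 2), ('T', ('(', 'E', ')'), 3)]
print(items("X", ()))  # the single item of X --> \epsilon: [('X', (), 0)]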

Recognizing Viable Prefixes

To recognize viable prefixes, we build an NFA over the items of G, as follows (a code sketch of the construction appears after the list):

  1. Add a dummy production S' --> S to G
  2. We will construct an NFA that will behave as follows:
    NFA(stack) = yes if stack is a viable prefix
               = no otherwise
    	
  3. The NFA will read the input (stack) bottom-to-top
  4. The NFA states are the items of G
  5. For each item E --> \alpha.X\beta, where X is any terminal or non-terminal, add the transition (E --> \alpha.X\beta) --X--> (E --> \alpha{}X.\beta)
  6. For each item E --> \alpha.X\beta, where X is a non-terminal, and each production X --> \gamma, add (E --> \alpha.X\beta) --\epsilon--> (X --> .\gamma)
  7. Every state is an accepting state (i.e., if the entire stack is consumed, the stack is a viable prefix)
  8. Start state is (S' --> .S)
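
Here is a small Python sketch of this construction. Rather than materializing the NFA, it simulates it directly, applying the \epsilon-moves of step 6 as a closure; the grammar encoding and function names are illustrative assumptions. It is instantiated for the bracket grammar S --> \epsilon | [S] from the earlier example.

GRAMMAR = {
    "S'": [("S",)],
    "S":  [(), ("[", "S", "]")],   # () encodes the epsilon production
}

def closure(items):
    # Step 6: from E --> \alpha.X\beta, epsilon-move to X --> .\gamma
    work, seen = list(items), set(items)
    while work:
        lhs, rhs, dot = work.pop()
        if dot < len(rhs) and rhs[dot] in GRAMMAR:   # non-terminal after the dot
            for prod in GRAMMAR[rhs[dot]]:
                item = (rhs[dot], prod, 0)
                if item not in seen:
                    seen.add(item)
                    work.append(item)
    return seen

def is_viable_prefix(stack):
    states = closure({("S'", ("S",), 0)})            # step 8: start at S' --> .S
    for sym in stack:
        # Step 5: advance the dot over sym in every item that allows it
        states = closure({(l, r, d + 1) for (l, r, d) in states
                          if d < len(r) and r[d] == sym})
        if not states:       # no surviving NFA state: not a viable prefix
            return False
    return True              # step 7: every state is accepting

for s in ["", "[", "[[", "[[S", "[S]", "[]", "[[S]]"]:
    print(repr(s), is_viable_prefix(list(s)))

The outputs match the earlier claims: [[S and [S] are viable prefixes, while [] and [[S]] are not.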
Every viable prefix (equivalently, the path of items the NFA follows while reading it) can be represented as a "stack of items". For example, after reading the prefix ( int *, the item stack is:
T --> (.E)
E --> .T
T --> int * .T
Reading downwards, the lhs of each item is the non-terminal that appears right after the "." in the item above it; concatenating the parts before the dots ("(", \epsilon, "int *") spells out the viable prefix.

In other words, every viable prefix can be represented as a stack of items, where the (n+1)th item is a production for a non-terminal that follows the "." in the nth item.

The items in a DFA state s describe what the top of the item stack might be after reading a stack \alpha that takes the DFA from the start state to s; such items are said to be valid for \alpha.

An item is often valid for many prefixes, e.g., the item T --> (.E) is valid for the prefixes

  (
  ((
  (((
  ((((
  ...
We can see this by looking at the DFA, which keeps looping back into the same DFA state on each open paren. Need to show the NFA and DFA construction for our example grammar, and the valid items for these string prefixes (a small sketch follows). Will need a laptop and a projector!
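
The DFA itself comes from the subset construction. Below is an illustrative sketch (same assumed item encoding as above) for our expression grammar; it verifies that the state reached after ( loops back to itself on (, which is exactly why T --> (.E) is valid for all of the prefixes listed above.

GRAMMAR = {
    "S'": [("E",)],
    "E":  [("T", "+", "E"), ("T",)],
    "T":  [("int", "*", "T"), ("int",), ("(", "E", ")")],
}

def closure(items):
    work, seen = list(items), set(items)
    while work:
        lhs, rhs, dot = work.pop()
        if dot < len(rhs) and rhs[dot] in GRAMMAR:   # non-terminal after the dot
            for prod in GRAMMAR[rhs[dot]]:
                item = (rhs[dot], prod, 0)
                if item not in seen:
                    seen.add(item)
                    work.append(item)
    return frozenset(seen)

def goto(state, sym):
    # DFA transition: advance the dot over sym in each item, then close.
    return closure({(l, r, d + 1) for (l, r, d) in state
                    if d < len(r) and r[d] == sym})

start = closure({("S'", ("E",), 0)})
s = goto(start, "(")
print(s == goto(s, "("))                # True: the state has a self-loop on "("
print(("T", ("(", "E", ")"), 1) in s)   # True: the item T --> (.E) is valid here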

The language of viable prefixes for the example grammar:

S --> \epsilon | [S] | SS
is (juxtaposition denotes concatenation):
X = "["* | "["*S | "["+S"]"
Y = "["*S (X | Y)*
Z = X | Y

LR(0) parsing

Idea: assume the stack contains \alpha, the next input token is t, and the DFA, run on \alpha, ends in state s. Then:

  reduce by X --> \beta if s contains the item X --> \beta.
  shift if s contains an item X --> \beta.t\omega

There is a shift-reduce conflict if s contains both a complete item X --> \beta. and a shift item Y --> \gamma.t\omega, and a reduce-reduce conflict if s contains two complete items. If, for any string, any of these conflicts is possible, then the grammar is not an LR(0) grammar.

Resolve conflicts by increasing lookahead. For example, in a shift-reduce conflict as described above, if the next token is not t, the shift possibility can be discarded.

LR(1) parsing (we are not going to discuss this in detail): bake the lookahead into items. An item now looks like X-->\beta., t to indicate that if the next terminal is t, then reduce (but do not reduce if the next terminal is not t).

SLR parsing

SLR = "Simple LR": improves on LR(0) shift/reduce heuristics so fewer state have conflicts

Idea: As in LR(0), but use Follow sets to cut down the reduce actions. Assume the stack contains \alpha, the next input token is t, and the DFA, run on \alpha, ends in state s. Then:

  reduce by X --> \beta if s contains the item X --> \beta. and t \in Follow(X)
  shift if s contains an item X --> \beta.t\omega

If there are conflicts under these rules, the grammar is not SLR

SLR Parsing algorithm

  1. Let M be the DFA for viable prefixes of G
  2. Let |x1...xn$ be the initial configuration
  3. Repeat until the configuration is S|$:
     Let \alpha|\omega be the current configuration.
     Run M on the current stack \alpha.
     If M rejects \alpha, report a parsing error (the stack is not a viable prefix).
     If M accepts \alpha with item set I, and the next input token is a, then:
       shift if X --> \beta.a\gamma \in I
       reduce by X --> \beta if X --> \beta. \in I and a \in Follow(X)
       report a parsing error if neither applies

If a conflict can arise in the last step, the grammar is not SLR. To check whether a conflict is possible, check whether for some state and some next token both shifting and reducing (or reducing by two different productions) are options.
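
A direct Python rendering of this algorithm, as a sketch (same assumed item encoding as above; the grammar is the first left-recursive SLR example discussed below, with its Follow sets written out by hand):

GRAMMAR = {"S'": [("S",)], "S": [("S", "a"), ("b",)]}
FOLLOW = {"S'": {"$"}, "S": {"a", "$"}}

def closure(items):
    work, seen = list(items), set(items)
    while work:
        lhs, rhs, dot = work.pop()
        if dot < len(rhs) and rhs[dot] in GRAMMAR:
            for prod in GRAMMAR[rhs[dot]]:
                item = (rhs[dot], prod, 0)
                if item not in seen:
                    seen.add(item)
                    work.append(item)
    return seen

def run_dfa(stack):
    # Step 3: rerun the viable-prefix automaton on the whole stack.
    state = closure({("S'", ("S",), 0)})
    for sym in stack:
        state = closure({(l, r, d + 1) for (l, r, d) in state
                         if d < len(r) and r[d] == sym})
    return state

def slr_parse(tokens):
    stack, rest = [], list(tokens) + ["$"]
    while stack != ["S"] or rest != ["$"]:      # until the configuration is S|$
        items = run_dfa(stack)
        if not items:
            raise SyntaxError("stack is not a viable prefix")
        a = rest[0]
        shifts  = [it for it in items
                   if it[2] < len(it[1]) and it[1][it[2]] == a]
        # The dummy production S' --> S is never reduced; acceptance is the
        # loop condition above.
        reduces = [it for it in items
                   if it[2] == len(it[1]) and it[0] != "S'" and a in FOLLOW[it[0]]]
        if shifts and reduces:
            raise SyntaxError("shift-reduce conflict: grammar not SLR")
        if len(reduces) > 1:
            raise SyntaxError("reduce-reduce conflict: grammar not SLR")
        if shifts:
            stack.append(rest.pop(0))           # shift
        elif reduces:
            lhs, rhs, _ = reduces[0]
            del stack[len(stack) - len(rhs):]   # reduce: pop the rhs ...
            stack.append(lhs)                   # ... and push the lhs
        else:
            raise SyntaxError("no action applies")
    return True

print(slr_parse(["b", "a", "a"]))               # True

Note that, exactly as in step 3, this driver reruns the automaton on the entire stack at every iteration; the improvements below remove that inefficiency.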

SLR Improvements

Improvement 1: rerunning the DFA on the whole stack at each step is wasteful, since most of the stack does not change. Instead, remember the DFA state for each stack prefix, i.e., keep <symbol, DFA state> pairs on the stack.

Improvement 2: precompute the moves. Action table: for each state si and terminal a,

  shift sk               if si --a--> sk in the DFA
  reduce by X --> \beta  if si contains X --> \beta. and a \in Follow(X)
  accept                 if si contains S' --> S. and a = $
  error                  otherwise

Goto table: for each state si and non-terminal A, goto[si, A] = sj if si --A--> sj.

SLR Examples

T --> S'$
S' --> S
S --> Sa
S --> b
SLR parsers do not mind left-recursive grammars

The first DFA state (the \epsilon-closure of the start item) looks like: (state 1)

S' --> .S
S --> .Sa
S --> .b
If we see a b in this state, we get another DFA state: (state 2)
S --> b.
Alternatively, if we see an S in this state, we get another DFA state: (state 3)
S' --> S.
S --> S.a
From this state, if we see a, we get (state 4)
S --> Sa.

The only state with a shift-reduce conflict is state 3: it contains the complete item S' --> S. together with the shift item S --> S.a. Since Follow(S') = {$} and the shift is on a, one token of lookahead resolves the conflict. Hence this is an SLR grammar.
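
Precomputing the action and goto tables described earlier for this four-state DFA gives a table-driven parser. This is a hypothetical sketch: the state numbering and the table encoding are assumptions, not the notes' notation.

# action[state][token] and goto[state][non-terminal], read off the DFA above.
ACTION = {
    1: {"b": ("shift", 2)},
    2: {"a": ("reduce", "S", 1), "$": ("reduce", "S", 1)},  # S --> b
    3: {"a": ("shift", 4), "$": ("accept",)},               # S' --> S. on $
    4: {"a": ("reduce", "S", 2), "$": ("reduce", "S", 2)},  # S --> Sa
}
GOTO = {1: {"S": 3}}

def parse(tokens):
    stack = [1]                        # a stack of DFA states
    rest = list(tokens) + ["$"]
    while True:
        act = ACTION[stack[-1]].get(rest[0])
        if act is None:
            raise SyntaxError("parse error")
        if act[0] == "accept":
            return True
        if act[0] == "shift":
            rest.pop(0)
            stack.append(act[1])
        else:                          # reduce: pop |rhs| states, then goto
            _, lhs, n = act
            del stack[-n:]
            stack.append(GOTO[stack[-1]][lhs])

print(parse(["b", "a", "a"]))          # True

State 3 shifts on a but accepts on $: exactly the Follow-based resolution of the conflict described above.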

If we get rid of the non-terminal S' above, by short-cutting to T --> S$, the grammar becomes LR(0): state 3 turns into {T --> S.$, S --> S.a}, which contains no complete item, so the conflict disappears.

Another example grammar

T --> S'$
S' --> S
S --> SaS
S --> b
Looking at the corresponding DFA: (state 1)
S' --> .S
S --> .SaS
S --> .b
One possibility is that we see b in this state to get: (state 2)
S --> b.
Another possibility is that we see S in this state to get: (state 3)
S' --> S.
S --> S.aS
If we get a in this state, we get (state 4)
S --> Sa.S
S --> .SaS
S --> .b
(Notice that we formed the \epsilon-closure of the first item to add the two items below it.)

From here, if we get S, we get the following state: (state 5)

S --> SaS.
S --> S.aS
From here, if we get a again, we go back to state 4! If we get b, we go to state 2

The only states with conflicts are state 3 (resolved by Follow, since Follow(S') = {$}) and state 5 (which has a shift-reduce conflict between S --> SaS. and S --> S.aS that is not resolved, because a \in Follow(S)).

Thus this is not an SLR grammar

Note that, unlike the previous example, getting rid of the non-terminal S' by short-cutting to T --> S$ does not help here: state 5 ({S --> SaS., S --> S.aS}) arises either way, and since a \in Follow(S) the shift-reduce conflict remains. Indeed, S --> SaS makes the grammar ambiguous, so no LR-style parser can handle it without extra disambiguation (e.g., associativity declarations).