Is there a difference in computational power between a deterministic and a non-deterministic stack automaton?
Yes, there is a difference between the two, and it is a drastic one: non-deterministic stack automata can recognize context-free languages that deterministic ones cannot.
I would like to point out that this contrasts with other models of computation: a non-deterministic Turing machine provides no computational power beyond what a deterministic one offers, just as a non-deterministic finite state machine can be transformed into a deterministic one without loss of computational power.
The set of languages recognized by deterministic stack automata is a proper subset of the set of languages recognized by non-deterministic stack automata.
For example, the language of balanced brackets, in which every bracket must be closed in the order it was opened, is deterministic. Using only round and square brackets, the grammar would be:
S --> '(' S ')'
S --> '[' S ']'
S --> '(' ')'
S --> '[' ']'
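A deterministic recognizer for this kind of language can be sketched with an ordinary stack, because at every step there is exactly one possible move: push on an opening bracket, pop on a matching closing bracket. This Python sketch is illustrative (not part of the original answer) and is slightly more general than the grammar above, since it also accepts the empty word and concatenations such as ()[]:

```python
def balanced(word):
    """Deterministic stack-based check that brackets close in opening order."""
    pairs = {')': '(', ']': '['}
    stack = []
    for symbol in word:
        if symbol in '([':
            stack.append(symbol)        # opening bracket: the only move is to push
        elif symbol in ')]':
            # closing bracket: the only move is to pop the matching opener
            if not stack or stack.pop() != pairs[symbol]:
                return False
        else:
            return False                # symbol outside the alphabet
    return not stack                    # accept only if every opener was closed
```

Note that the machine never has to guess: the current input symbol and the top of the stack fully determine the next move, which is exactly what makes the language deterministic.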
Now, for palindromes, that is not the case. Take a simple palindrome language, composed only of a's and b's:
S --> 'a' S 'a'
S --> 'b' S 'b'
S --> 'a' 'a'
S --> 'b' 'b'
S --> 'a'
S --> 'b'
How does the automaton decide whether a given a should be treated as the beginning of a new pair or as the end of one? It is impossible for a deterministic stack automaton to determine whether an a must start a new pair (and be pushed) or whether it matches a previous a (and should be popped): that would require knowing in advance where the middle of the word lies.
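One way to see why non-determinism solves this is to simulate the machine's guesses by backtracking: at every position the automaton tries to keep pushing, or guesses that it has reached the middle and starts popping. This Python sketch is illustrative, not part of the original answer:

```python
def palindrome(word):
    """Backtracking simulation of a non-deterministic stack automaton
    for palindromes over {a, b}."""
    def run(i, stack, popping):
        if i == len(word):
            return popping and not stack       # accept: middle guessed, stack emptied
        c = word[i]
        if popping:                            # second half: only pops are possible
            return bool(stack) and stack[-1] == c and run(i + 1, stack[:-1], True)
        # first half: try every move the non-deterministic machine could make
        return (run(i + 1, stack + [c], False)            # keep pushing
                or run(i + 1, stack, True)                # guess c is the middle (odd length)
                or (bool(stack) and stack[-1] == c
                    and run(i + 1, stack[:-1], True)))    # guess the middle was just passed (even length)
    return run(0, [], False)
```

The deterministic machine gets only one of these branches; the non-deterministic one accepts if any branch reaches the end with an empty stack, which is precisely the "guess the middle" ability that palindromes require.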