01 May
Traditional Grammar from the Top Down
Grammars provide the knowledge and rules necessary to understand or disambiguate the often ambiguous strings of words that constitute language. People disambiguate by searching available knowledge (Nirenburg, 1987). Because a traditional grammar specifies a finite set of rules or patterns which attempt to capture the regularities of a language (Grosz, 1986), it can be used in both the analysis and synthesis of natural languages (NL). This makes grammars important for all forms of communication.
|Understanding Context Cross-Reference|
|Click on these Links to other posts and glossary/bibliography references|
|Prior Post||Next Post|
|Exceptional Logic||Linguistic Building Blocks|
|grammar knowledge||Nirenburg 1987|
|disambiguate understand||Grosz 1986|
|ambiguity rule pattern||Aiken and Necula|
Natural languages are so dynamic and irregular that exceptions tend to be as common as regularities. The abundance of exceptions makes any NL “infinite.” The wide expanse of infinity is naturally difficult to circumscribe in a finite set of rules or patterns. This explains why existing grammars used in NL processing tend to be either too limited to account for the diversity of structures in the languages they describe or too large and unwieldy to efficiently serve an inference engine running on an ordinary computer. What follows are some perspectives on grammar and language that may help divide the problem into manageable chunks for better digestion.
The illustration above shows a parse tree from a typical syntactic analysis of an English sentence, with explanations of the abbreviations and the underlying rules below.
|S =||sentence||NP =||noun phrase|
|Noun =||noun||Det =||determiner|
|Adj =||adjective||VP =||verb phrase|
|TVerb =||tensed verb||Adv =||adverb|
|PP =||prepositional phrase||Prep =||preposition|
Top-Down Analysis
The sentence is the basis for most grammar formalisms. These formalisms have a variety of symbolic representations, but the most popular is the tree. The parse tree above may be generated with the production rules listed below, and the derivation is described from the top of the tree down. Top-down parsers evaluate language by making inferences, or applying productions, and binding sentence constituents to the terminal vocabulary.
|S –> NP VP||A sentence is made up of a noun phrase and a verb phrase|
|NP –> Det Adj* Noun||A noun phrase may have a determiner, zero or more adjectives, and a noun|
|VP –> TVerb Adv PP||A verb phrase may have a tensed verb, an adverb, and a prepositional phrase|
|PP –> Prep NP||A prepositional phrase has a preposition and a noun phrase|
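To make the top-down process concrete, here is a minimal recursive-descent sketch of these four production rules. The tiny lexicon and the sample sentence are my own hypothetical examples, not part of the original grammar; a real parser would also need backtracking for ambiguous inputs, which this toy grammar happens not to require.

```python
# A minimal recursive-descent (top-down) parser for the four rules above.
# The lexicon and test sentence are hypothetical illustrations.

LEXICON = {
    "Det": {"the", "a"},
    "Adj": {"big", "red"},
    "Noun": {"dog", "park"},
    "TVerb": {"ran"},
    "Adv": {"quickly"},
    "Prep": {"to"},
}

def categories(word):
    """Return the lexical categories a word can fill."""
    return {cat for cat, words in LEXICON.items() if word in words}

class Parser:
    def __init__(self, tokens):
        self.tokens = tokens
        self.pos = 0

    def match(self, cat):
        # Bind the next token to a terminal category, if possible.
        if self.pos < len(self.tokens) and cat in categories(self.tokens[self.pos]):
            tok = self.tokens[self.pos]
            self.pos += 1
            return (cat, tok)
        return None

    def parse_S(self):                      # S -> NP VP
        np, vp = self.parse_NP(), self.parse_VP()
        if np and vp and self.pos == len(self.tokens):
            return ("S", np, vp)
        return None

    def parse_NP(self):                     # NP -> Det Adj* Noun
        det = self.match("Det")
        adjs = []
        while True:
            adj = self.match("Adj")
            if not adj:
                break
            adjs.append(adj)
        noun = self.match("Noun")
        if det and noun:
            return ("NP", det, *adjs, noun)
        return None

    def parse_VP(self):                     # VP -> TVerb Adv PP
        tv, adv, pp = self.match("TVerb"), self.match("Adv"), self.parse_PP()
        if tv and adv and pp:
            return ("VP", tv, adv, pp)
        return None

    def parse_PP(self):                     # PP -> Prep NP
        prep, np = self.match("Prep"), self.parse_NP()
        if prep and np:
            return ("PP", prep, np)
        return None

tree = Parser("the big dog ran quickly to the park".split()).parse_S()
```

Each `parse_*` method mirrors one production: the parser starts at S, predicts constituents top-down, and binds words to terminal categories as it descends.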
These rules are easy to implement in a parser. Both top-down and bottom-up strategies have yielded good results in parsing; neither is magic. Alex Aiken and George Necula provide a good overview of top-down parsing. A bottom-up parser, by contrast, begins with the constituents (word functions) and infers which types of phrases they combine to form.
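The bottom-up direction can be sketched as a simple shift-reduce loop over the same grammar: shift word categories onto a stack and reduce whenever the top of the stack matches a rule's right-hand side. The tagged input is a hypothetical example, and the greedy reduce strategy (no backtracking) is a simplification for illustration.

```python
# A sketch of bottom-up (shift-reduce) parsing with the rules above run in
# reverse: start from word categories and reduce them into phrases.
# The Adj* pattern is flattened into two explicit NP rules for simplicity.

RULES = [
    ("NP", ["Det", "Adj", "Noun"]),
    ("NP", ["Det", "Noun"]),
    ("PP", ["Prep", "NP"]),
    ("VP", ["TVerb", "Adv", "PP"]),
    ("S",  ["NP", "VP"]),
]

def shift_reduce(cats):
    """Greedily reduce whenever a rule's right-hand side matches the
    top of the stack; otherwise shift the next category."""
    stack, queue = [], list(cats)
    while queue or len(stack) > 1:
        reduced = False
        for lhs, rhs in RULES:
            if stack[-len(rhs):] == rhs:
                stack[-len(rhs):] = [lhs]   # replace RHS with LHS
                reduced = True
                break
        if not reduced:
            if not queue:
                return None                 # stuck: cannot reduce or shift
            stack.append(queue.pop(0))
    return stack[0] if stack == ["S"] else None

# Category sequence for "the big dog ran quickly to the park":
result = shift_reduce(
    ["Det", "Adj", "Noun", "TVerb", "Adv", "Prep", "Det", "Noun"])
```

Where the top-down parser predicts phrases before seeing the words, this loop discovers phrases only after their constituents are already on the stack, which is exactly the inversion the paragraph above describes.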
My multi-dimensional approach to language understanding includes syntactic analysis; rules like these help disambiguate complex patterns in sentences and contribute to the effort to understand the intent of the speaker or writer.
|Click below to look in each Understanding Context section|
|4||Perception and Cognition||5||Fuzzy Logic||6||Language and Dialog||7||Cybernetic Models|
|8||Apps and Processes||9||The End of Code||Glossary||Bibliography|