28 Apr Mosaic of Concepts
On our way to knowledge representation (KR), we’ve looked at concepts, logical propositions, words, and taxonomies. I know this can all be a bit confusing, but please bear with me a little longer. Word relations are more than a two-dimensional mosaic of related concepts: they form a deep hierarchy with multiple levels that act like independent dimensions (which explains my dodecahedron model). Hierarchical relations imply that some asymmetric and transitive relations link words. For example, locative inclusion, class inclusion, subordination, and part-whole relations are hierarchical (parent-child), while causal agent-instrument-action-object relations sit on the same level (siblings). Syntactic relations may also, to a limited extent, be hierarchical, but we can safely assume that most syntactic relations lie on a two-dimensional syntactic surface.
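To make “asymmetric and transitive” concrete, here is a minimal Python sketch of a class-inclusion (is-a) hierarchy. The chain robin → bird → animal → organism is my own illustrative data, not part of the model:

```python
# Sketch: hierarchical word relations are asymmetric and transitive.
# The words and is-a pairs below are illustrative only.
ISA = {  # class inclusion: child -> parent
    "robin": "bird",
    "bird": "animal",
    "animal": "organism",
}

def ancestors(word):
    """Follow the transitive is-a chain upward to every superordinate."""
    chain = []
    while word in ISA:
        word = ISA[word]
        chain.append(word)
    return chain

print(ancestors("robin"))  # ['bird', 'animal', 'organism']
```

Because the relation is transitive, a robin is not only a bird but also an animal and an organism; because it is asymmetric, no word ever appears among its own ancestors.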
**Understanding Context Cross-Reference**
*Click on these links to other posts and glossary/bibliography references*

| Prior Post | Next Post |
| --- | --- |
| Continuity of Learning | The Multiple Meanings of Polysemy |
| knowledge representation concepts | Miller 1976 |
| propositions words taxonomies | Pinker 1984 |
| input structure | Jorgensen 1973 |
| process output | Sowa 1984 |
Associations between words correspond to links in conceptual graphs, which visually resemble a mosaic of concepts. Some have suggested that conceptual information is sometimes represented directly, or declaratively, and sometimes procedurally (Jorgensen and Kintsch, 1973). This would be consistent both with a model of memory that has specialized areas and with the replication assumption.
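The declarative/procedural distinction can be sketched in a few lines of Python. The concepts, link labels, and the toy `can_fly` rule are purely illustrative, not drawn from Jorgensen and Kintsch:

```python
# Declarative knowledge: stored directly as labeled links between concepts.
links = {
    ("canary", "is-a"): "bird",
    ("canary", "color"): "yellow",
}

# Procedural knowledge: derived by a process when asked, never stored as a link.
def can_fly(concept):
    # A toy inference rule: anything linked to "bird" is assumed to fly.
    return links.get((concept, "is-a")) == "bird"

print(links[("canary", "color")])  # yellow
print(can_fly("canary"))           # True
```

The color of a canary is simply looked up, while its ability to fly is computed on demand: the same fact base serves both modes of representation.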
Miller and Johnson-Laird point out that mature humans usually recall conceptually related words one after another. “The words are organized in memory into conceptually related categories, and words in the same category are recalled together. When the list contains hierarchically related words, people usually recall superordinate words first, then hyponyms of that category” (1976, p.250).
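The superordinate-first recall order that Miller and Johnson-Laird describe can be simulated by sorting words by their depth in a toy taxonomy. The word list here is my own example, not theirs:

```python
# Sketch: recall hierarchically related words with superordinates first.
parent = {"robin": "bird", "sparrow": "bird", "bird": "animal"}

def depth(word):
    """Distance from the word up to the top of the taxonomy."""
    d = 0
    while word in parent:
        word = parent[word]
        d += 1
    return d

words = ["robin", "animal", "sparrow", "bird"]
recall = sorted(words, key=depth)  # more general (shallower) words first
print(recall)  # ['animal', 'bird', 'robin', 'sparrow']
```

The superordinate "animal" surfaces first, then the category word "bird", then its hyponyms, mirroring the recall pattern quoted above.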
The illustration at right represents a two-dimensional matrix (mosaic) of circles, each representing a concept. Though simple and attractive, this model lacks the depth to express some of the complexities of human language.
Pinker believes cognitive abilities have two parts: “the basic computational processes, types of data structures, and information pathways made available by the physiology of the brain; and the particular tokens, combinations, and sequences of those operations that specify how the basic mechanisms are actually used” (1984, p.4). Both these parts are covered in this discussion of basic assumptions about the process of communication. We shall hereafter attempt to paint a word-picture of cognitive linguistics: a three-dimensional lexical mosaic.
To define a formal representation for processing language, one must select a set of symbols, a model for establishing relationships between the symbols, and a set of constraints to act upon those symbols and relationships. Before deciding among possible formalisms, one must consider their limits of power and ensure they are not too weak to achieve the objective, nor so strong that they overpower the data. Some information formalisms have no embedded process.
- Syntax formalisms are purely descriptive – no process.
- Digital logic provides a simple representation and supports branching processes.
- Propositional logic provides premises, conclusions and truth tables as structure, and an evaluation process.
- Relational databases have tables for structure, and SQL plus primary and foreign keys (which enforce referential integrity) as a process.
- Ontologies explicitly bind data to processes.
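As a small illustration of propositional logic’s structure-plus-process character, here is a Python sketch of truth-table evaluation. The formula checked, modus ponens, is my own choice of example:

```python
from itertools import product

# Structure: a formula built from premises and a conclusion.
def implies(p, q):
    return (not p) or q

# Process: evaluate the formula over every row of the truth table.
# Here we check that modus ponens, ((p -> q) and p) -> q, is a tautology.
tautology = all(
    implies(implies(p, q) and p, q)
    for p, q in product([True, False], repeat=2)
)
print(tautology)  # True
```

The four rows of the truth table are the structure; iterating over them and evaluating the formula is the embedded process that weaker formalisms, such as purely descriptive syntax notations, lack.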
In seeking a balance between a formalism that will do the job and one that will not overpower the data, we chose a stratified, lexicon-based approach that uses a parallel architecture and views conceptual and linguistic knowledge in the brain as a three-dimensional matrix with intelligent links. This language formalism is called Three-Dimensional Grammar (3-DG), and it is based on an ontology of knowledge. Later, the 3-DG model for automated language interpretation will be explained and justified. For the moment, let’s borrow from our knowledge of the brain: its functional subsystems share four properties that form the foundation of the formal representation:
- they have a specific variable input;
- they have specific fixed structure;
- they have specific dynamic processes; and
- they have specific variable outputs.
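These four properties can be sketched as a tiny Python class. Everything here, including the `Subsystem` name and the toy word-recognizer, is illustrative scaffolding and not part of 3-DG itself:

```python
# Sketch of the four-part foundation: variable input, fixed structure,
# dynamic process, variable output. All names are illustrative.
class Subsystem:
    def __init__(self, structure, process):
        self.structure = structure  # specific fixed structure (e.g., a lexicon)
        self.process = process      # specific dynamic process

    def run(self, stimulus):
        # Specific variable input in; specific variable output out.
        return self.process(stimulus, self.structure)

# A toy instance: "recognize" a word against a fixed word list.
lexicon = {"mosaic", "concept", "grammar"}
recognizer = Subsystem(lexicon, lambda word, lex: word in lex)
print(recognizer.run("mosaic"))  # True
print(recognizer.run("xyzzy"))   # False
```

The structure stays fixed across calls while input and output vary with each stimulus, which is the separation the four bullets describe.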
In the example of vision, the components are:
- The input continually changes as visual stimuli, in the form of lines, shapes, and colors organized in space, enter the field of view.
- Our model has at least two structural components:
  - the structure of the input: light reflected from the lines, shapes, and colors; and
  - the structure of the system used to process the input: the eyes, plus the brain hardware in the visual pathways and interpretation centers.
- The process is dynamic in that different inputs trigger activation in different areas of the brain.
- The outputs are recognition of the things we see, and decisions based on that recognition.
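The vision components above can be caricatured in a few lines of Python; the stimulus names and “interpreters” are invented for illustration only:

```python
# Toy mapping of the vision example onto input/structure/process/output.
structure = {"line": "edge detector", "color": "cone cells"}  # fixed system

def process(stimuli):
    # Dynamic process: different inputs activate different interpreters.
    return [structure.get(s, "unknown") for s in stimuli]

inputs = ["line", "color", "motion"]  # variable input from the field of view
outputs = process(inputs)             # variable output: what was recognized
print(outputs)  # ['edge detector', 'cone cells', 'unknown']
```

Note that an unfamiliar stimulus still produces an output ("unknown"): the structure is fixed, but the process responds to whatever input arrives.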
We need formal representations to build a model of automated techniques that imitate communicative processes. The model will need input, structure, process, and output. Watch for upcoming posts as we unfold the model and its components.