08 Sep Gnostic Learning Model
In prior posts in this section, and periodically in other sections of my blog, I have been exploring how humans learn and how we might replicate those processes in computer software or (less likely) hardware. The context of learning, or knowledge acquisition, on which I focus here is language learning. While knowledge acquisition is much broader, language is an important part of our learning because we try to apply words or phrases to each thing we encounter, whether it is something physical like a brick wall or something abstract like a writer’s block. Today, I wish to describe my conclusions about the brain and propose an explicit learning model for automated systems.
Gnostic Representations
A physiological process model for the brain that incorporates explicit (or gnostic) storage of knowledge is justified by the facts concerning feature selectivity in cells in perceptual processing centers such as the visual cortex. Because these cells respond to specific features in the perceptual environment, their aggregate response, when bound together by an intelligent link structure, can be described as an explicit representation of external objects. A gnostic learning model may leverage such an explicit representation.
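As a toy illustration of that idea (entirely hypothetical, and in no way a physiological model), feature-selective units can be bound by a link structure into an explicit, gnostic representation of an external object:

```python
# Toy sketch: feature-selective "cells" bound into explicit object
# representations. All names here are illustrative assumptions.

# Each detector responds to one specific feature of a stimulus,
# loosely analogous to feature selectivity in the visual cortex.
feature_detectors = {
    "red": lambda stim: "red" in stim,
    "rectangular": lambda stim: "rectangular" in stim,
    "rough": lambda stim: "rough" in stim,
}

# The link structure: an explicit (gnostic) object representation is
# the bound set of features whose detectors must all respond.
gnostic_objects = {"brick": {"red", "rectangular", "rough"}}

def recognize(stimulus_features):
    """Return the objects whose entire bound feature set is active."""
    active = {name for name, detects in feature_detectors.items()
              if detects(stimulus_features)}
    return [obj for obj, feats in gnostic_objects.items() if feats <= active]
```

The aggregate response of the detectors, filtered through the binding structure, is what makes the representation explicit rather than distributed.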
Comparable data on the complexity and roles of the neurons involved in higher-order processes such as recognition and reasoning is not yet available. As it is not always safe to generalize, assumptions used as a foundation for modeling these processes require strong psychological justification. Because we lack data showing exactly how cells store knowledge, many of these assumptions may have to be revised when science provides definitive answers. Nonetheless, the link structure between neurons and areas in the brain certainly indicates a continuum of complexity that could support explicit models for learning and remembering.
Understanding Context Cross-Reference

Prior Post: Knowledge in Non-Neural Models

References:
- Stillings, et al. (1987)
- Curtis (1990)
- Aristotle, Categoriae (1952)
Tutorial |
How are complex associations stored in the brain? Some general divisions define declarative or factual knowledge and causal or heuristic knowledge representations in the mind. The two major categories of declarative representations are propositional and imaginal. Propositions are language-like representations, while images are perception-like representations (Stillings, et al., 1987, p. 18). Since we are focusing on the more complex end of cognitive abilities, language-like, or gnostic, representations best serve our purposes. As we consider explicit representations of bits of knowledge in the brain, think ahead to Section 9 where we discuss object-oriented design techniques, semantics and ontologies.
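To make the propositional, language-like style concrete, facts can be encoded as subject-predicate-object triples, the same shape used by ontology languages such as RDF. This is a hypothetical illustration, not a representation from Stillings:

```python
# Hypothetical sketch: propositional (language-like) representations
# as (subject, predicate, object) triples.

propositions = [
    ("brick_wall", "is_a", "physical_object"),
    ("writers_block", "is_a", "abstract_concept"),
    ("brick_wall", "made_of", "brick"),
]

def facts_about(subject, kb):
    """Return every proposition whose subject matches."""
    return [p for p in kb if p[0] == subject]
```

Querying `facts_about("brick_wall", propositions)` returns the two statements about the wall, which is precisely the kind of association traversal an ontology service performs.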
When we talk about representing external objects, the connection with ontologies and object-oriented techniques feels natural, though in actual development practice there is often an impedance mismatch with some information modeling approaches (I’ll address this in a future post). We will find that this is not a coincidental or irrelevant connection: a distributed object model with object-oriented services may be the best way to implement truly neuromorphic systems capable of complex cognitive tasks.
Complex Symbolic Processing
Because of what we know about the complexity of cognitive processes and the structural and functional characteristics of the brain, we can establish some new parameters for a neuromorphic mechanism for processing complex or symbolic information. This is not an attempt to replace artificial neural systems (ANS), which have proven useful for processing image and audio data. The intent, rather, is to show what a system must do to perform or simulate more complex cognitive processes. What a true mechanical brain must be able to do:
A. It must be able to integrate multiple types or sources of input, such as image, sound, text and the like, to generate a useful interpretation of the input environment.
B. It must be able to yield interpretations of input in near real-time.
C. Its output should be unambiguous and easy to interpret by users.
The requirements above list two basic abilities (A and B) necessary for a mechanical brain to demonstrate complex symbolic processing comparable to the human brain, and one (C) that makes it a useful tool. These may be tough requirements, but the payoff can be high. Before any model can be considered a true artificial intelligence machine, it has to perform difficult tasks well enough to persuade human skeptics. Here is a possible learning process flow:
- Encounter a new concept that is a candidate as valid knowledge.
- Formulate its associations with known facts (classify it and find its place in the ontology).
- Format it as a set of valid knowledge objects with associations in context based on the source.
- Set the new knowledge objects’ weights very low.
- When the same facts are encountered again, and the associations are shown valid, incrementally increase the weights of the knowledge objects.
- When new associated facts are learned, create associations with low weights, but leave the prior new knowledge weights unchanged.
- When contradictory facts are encountered, and the associations are shown invalid, decrement the weights of the knowledge objects.
- Keep a semantically indexed log of the facts and sources to support “supervised learning” with manual validation.
- Mark manually validated facts as such, and elevate their weights above a confidence threshold (making them “true facts”).
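The flow above can be sketched as a small weighted knowledge store. Every name here (KnowledgeObject, KnowledgeStore, the specific weight increments, the 0.9 threshold) is an assumption chosen for illustration, not part of the model as stated:

```python
# Minimal sketch of the weighted learning flow described above.
# Increment/decrement sizes and the threshold are illustrative choices.

CONFIDENCE_THRESHOLD = 0.9  # weight above which a fact counts as "true"

class KnowledgeObject:
    def __init__(self, fact, source, weight=0.1):
        self.fact = fact            # the candidate concept
        self.source = source        # provenance, kept for manual review
        self.weight = weight        # new knowledge starts very low
        self.associations = []      # links to related KnowledgeObjects
        self.validated = False      # set by manual validation

class KnowledgeStore:
    def __init__(self):
        self.objects = {}           # indexed semantically by the fact
        self.log = []               # fact/source log for supervised learning

    def encounter(self, fact, source, related=()):
        """Register a fact; reinforce it if seen and still valid."""
        self.log.append((fact, source))
        if fact in self.objects:
            # Re-encountered with valid associations: reinforce incrementally.
            self.objects[fact].weight = min(1.0, self.objects[fact].weight + 0.1)
        else:
            obj = KnowledgeObject(fact, source)
            for rel in related:
                if rel in self.objects:
                    # New associations start weak; prior weights stay unchanged.
                    obj.associations.append(self.objects[rel])
            self.objects[fact] = obj
        return self.objects[fact]

    def contradict(self, fact):
        """Contradictory evidence decrements the weight."""
        if fact in self.objects:
            self.objects[fact].weight = max(0.0, self.objects[fact].weight - 0.2)

    def validate(self, fact):
        """Manual validation elevates the fact above the threshold."""
        obj = self.objects[fact]
        obj.validated = True
        obj.weight = max(obj.weight, CONFIDENCE_THRESHOLD)
```

Two encounters of the same fact lift its weight from 0.1 to 0.2; a contradiction pushes it back down; only manual validation jumps it past the confidence threshold, mirroring the supervised step in the flow.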
This is not the only possible process flow for learning, but it is one that can be relatively easily implemented using weighted schemes with distributed knowledge, such as knowledge stored in an ontology or a Bayesian network.