30 Jan From Aristotle to the Enchanted Loom
“Swiftly the brain becomes an enchanted loom, where millions of flashing shuttles weave a dissolving pattern, always a meaningful pattern, though never an abiding one” (Charles Sherrington). What of the centillion warps and woofs of ideation? Do they never abide? Passing seems to take away all that was ever weft, unless the Gods endow our thoughts with immortality and carry them on beyond the mouldering grave.
As I mentioned in my last post, Aristotle’s theory of neural learning and processing, called associationism, suggests that the brain has some way of linking concepts to one another. This linking, or association, of concepts is what enables us to recognize and interpret the things we perceive, and it is also fundamental to the learning process. In the electrical flow model described in a prior post, the sixth step is the main association step. I will post more on associationism in my discussions of perception and cognition.
Weaving purpose out of the chaos of perception and idea, our brains are enchanted looms, binding threads of experience into coherent thought and resolution. The importance of binding the warps and woofs together through associations between neurons is that sympathetically activated cells receive charges after output from prior excitation and inhibition, at the same time as the next frame in the stream of input enters the network. This temporal lag could be critical to comprehension because it helps maintain the immediate context of the incoming stimuli, providing an environment for each new frame of data. The entropy of information out of context increases dramatically; without the continuity provided by this temporal element of processing, disambiguating inputs with multiple possible interpretations would become nearly impossible.
When we see only part of an object, our minds use experience to fill in the missing parts, enabling comprehension in the face of partial information. Consider the messages with scrambled words that people are perfectly able to read and comprehend. The image at right may baffle a person who has no experience with hot-air balloons, but you were probably able to recognize it immediately. The flow of electrical signals in the brain makes this possible; if you saw the same image inside a cave, though, cognitive dissonance might make it more difficult to imagine the rest of the balloon.
- Reception cells produce impulses
- Relay cells relay impulses
- Filter cells analyze input features
- Memory cells recognize input
- Associated cells respond to input
- The overall pattern of activation elicits a conscious response
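As a rough illustration, the six steps above can be sketched as a feedforward pipeline. Everything here is an assumption made for demonstration, not physiology: the function names, the filter threshold, and the toy "balloon" pattern are all invented.

```python
# Illustrative sketch of the six-step flow listed above.
# All names, thresholds, and patterns are hypothetical.

def reception(stimulus):          # reception cells produce impulses
    return [float(s) for s in stimulus]

def relay(impulses):              # relay cells pass impulses onward
    return impulses

def filter_features(impulses):    # filter cells keep salient features
    return [i for i in impulses if i > 0.5]

def memory(features, patterns):   # memory cells match stored patterns
    return [name for name, p in patterns.items() if set(p) <= set(features)]

def associate(recognized, links): # association cells activate linked concepts
    out = set(recognized)
    for r in recognized:
        out |= links.get(r, set())
    return out

patterns = {"balloon": [0.9, 0.8]}
links = {"balloon": {"basket", "flame"}}

signal = reception([0.9, 0.8, 0.1])
signal = relay(signal)
features = filter_features(signal)
recognized = memory(features, patterns)
concepts = associate(recognized, links)
print(sorted(concepts))  # the overall pattern of activation is the response
```

The point of the sketch is only the staged, one-directional hand-off from layer to layer; the later posts on systaltic flow argue that the real system is not this tidy.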
Recognition and Attention
Recognition helps us focus attention, bringing the cognitive process full circle by activating physical processes that adjust the eyes and ears to attend to specific stimuli. If the hippocampus stores a contextual map of the environment, then recognition will help update that contextual reference, and output from the hippocampus will in turn aid recognition. Recognition is discussed in greater detail in Volume 4. Learning may involve the formation of temporary links through the activation of inactive synapses connecting associated data, either directly or through intermediate relay cells. Initial links may be strengthened over time by repeated exposure to corroborating data, or new physical links may be formed through the sprouting of dendrites or spines.
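The strengthening-by-repetition idea can be sketched as a minimal Hebbian-style update, assuming links are numeric weights that grow with repeated co-activation. The learning rate and the cell names are illustrative assumptions, not measurements.

```python
# Minimal Hebbian-style sketch: links strengthen with co-activation.
# The 0.1 learning rate and the cell names are hypothetical.

weights = {}  # (cell_a, cell_b) -> link strength

def co_activate(a, b, rate=0.1):
    """Strengthen the link each time two cells fire together."""
    key = (a, b)
    weights[key] = weights.get(key, 0.0) + rate

# Repeated exposure to corroborating data strengthens the initial link.
for _ in range(5):
    co_activate("balloon_shape", "balloon_concept")

print(weights[("balloon_shape", "balloon_concept")])
```

A fuller model would also need a mechanism for new physical links (the sprouting of dendrites or spines mentioned above), which this weight table does not capture.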
This theory suggests layers and a directional flow of excitation and inhibition (E/I) through the system. The directionality may be accurate, but there may also be a strong reverse flow throughout the brain, particularly in and between the fourth and fifth layers. Another diastolic element is the interaction of the hippocampal map of the environment. Without the contextual fields of activation provided by the hippocampus, any stimulus would have to be more intense to elicit recognition. With that contextual framework, certain regions of the brain are already primed, facilitating comprehension.
Nima Mesgarani and Edward F. Chang, researchers at the University of California, San Francisco, and Columbia, have published results in the journal Nature corroborating these assumptions vis-à-vis spoken language understanding. The research, published in 2012, found that “the cortical representation of speech does not merely reflect the external acoustic environment, but instead gives rise to the perceptual aspects relevant for the listener’s intended goal.” [Journal Nature] This demonstrates the systaltic process of input (coming from outside) being primed with expectations and intent coming from inside the brain.
Since many artificial neural networks incorporate a strictly systolic flow, and many of them deal only with filtered data, they may represent only the memory and association cells. Some image-processing networks are primarily concerned with the first layer of receptor cells and their role in converting input into representations compatible with some model of feature storage. Most models prohibit flow across neurons in the same layer.
In the circulatory system, we describe the flow of blood out of the heart as systolic and the flow of blood back into the heart as diastolic. The model of electrical impulse flow described in the last few posts can be called systolic because impulses travel in one direction, from beginning to end, forward.
When systolic flow is complemented by either a reverse diastolic flow or the interaction of latent or overlapping charges, the model becomes systaltic (flowing in more than one direction) and more complex by at least an order of magnitude. Notions of randomness in the link structure between neurons tend to justify non-deterministic physiological models of cognition. However, recent data on the determinacy of neurite growth-cone activity (Axon and Dendrite Growth) and the complexity of connections between specialized areas of the brain (Volume 1) indicate that randomness is not necessarily a characteristic of the link structure.
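The difference between a systolic and a systaltic pass can be sketched in a few lines. This is a toy illustration under stated assumptions: the "priming" term standing in for top-down or reverse flow, the feedback coefficient, and the frame values are all invented.

```python
# Hedged sketch: one-way (systolic) flow versus systaltic flow,
# where the previous output primes the next input frame.
# The feedback coefficient 0.5 is an arbitrary assumption.

def forward(inputs, weight=1.0):
    return [weight * x for x in inputs]

def systolic_pass(inputs):
    # one direction only: input -> output
    return forward(inputs)

def systaltic_pass(inputs, prior_output, feedback=0.5):
    # forward flow plus a reverse, context-carrying flow:
    # prior output primes the next frame before processing
    primed = [x + feedback * p for x, p in zip(inputs, prior_output)]
    return forward(primed)

frame1 = [1.0, 0.0]
out1 = systolic_pass(frame1)
frame2 = [0.2, 0.0]                  # weak follow-up stimulus
out2 = systaltic_pass(frame2, out1)  # prior context boosts the weak input
print(out2)
```

In the sketch, the weak second frame crosses a recognition threshold only because the prior frame's output primes it, which is the role the text above assigns to contextual fields of activation.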
Attempts to simplify descriptions of the functions of the brain are understandable, even for the purpose of modeling. But attempts to describe all neurons as simple (often single bit) identical processors, and the flow of activation in the brain as cyclical, are not justifiable from a physiological perspective. Many simplified models lack the descriptive power to accurately reflect the complex processes in the brain that enable us to speak of ourselves as intelligent creatures.
Systolic and Diastolic
The correlations between the flow patterns in the brain and flow patterns in some parallel computer systems reflect the inherent strength of bi-directional or multi-directional flow models. These representations are designs for parallel multi-processor environments. But you don’t need many processors to simulate the distributed nature and behaviors of the brain. Processes and structures can be designed in software to process information in ways analogous to the structure and processes of the human brain. Such software-based processes can operate on computers with a single CPU.
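A minimal sketch of that last claim: many conceptually parallel units can be updated serially on one CPU. The `Unit` class and its decay-plus-input update rule are assumptions made purely for illustration.

```python
# Illustrative sketch: distributed "units" simulated serially on a
# single CPU. The update rule (decay plus scaled input) is hypothetical.

class Unit:
    def __init__(self):
        self.activation = 0.0

    def step(self, incoming):
        # leaky accumulation of incoming signal
        self.activation = 0.9 * self.activation + 0.1 * incoming

# Conceptually parallel, physically serial: one loop, one processor.
units = [Unit() for _ in range(1000)]
for u in units:
    u.step(1.0)

print(units[0].activation)
```

The design point is that parallelism here is a property of the model's structure, not of the hardware; a single-threaded loop preserves the distributed behavior, only slower.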
Points to Consider
Here are the main points of Sections 1, 2 and 3 in Understanding Context:
- The brain has many areas performing special tasks
- The areas are tightly interconnected and mutually supportive
- Since neurons are not simple, their cybernetic roles may not be simple either
- Neurons do not all behave the same or perform the same tasks
- Connections between neurons are neither random nor static
- The flow of action potential between neurons is complex in intensity, duration, impact, and its routing through intracellular pathways
- Cytoskeletal members can transduce electrical potential and adopt states that affect their transduction properties
Some of these points may contradict popularly held assumptions about the way neurons behave individually and collectively. The last two points suggest that filaments inside the cell may act as wire-like pathways for action potential. They are particularly important in that they imply that some commonly accepted neural models oversimplify cognitive phenomena.
In posts in this section of Understanding Context, I have been describing a physiological process model for the brain that incorporates explicit storage of knowledge. Such a model is justified by the facts concerning feature selectivity in cells in perceptual processing centers. Because these cells respond to specific features in the perceptual environment, their aggregate response, when bound together by an intelligent link structure, can be described as an explicit representation of knowledge.
Similar data on the complexity and roles of neurons involved in higher processes such as recognition and reasoning have not yet been discussed here, nor are those processes fully understood by anyone. Certainly the assumptions used as a foundation for modeling these processes will require strong psychological justification. Because we lack data showing exactly how cells can store data, many of these assumptions may have to be revised as the science catches up to these theories. Our cybernetic model doesn’t require that we know everything today; hopefully, what we do know will be enough to build good models and make progress.
In light of our current understanding of the complexity of neural structure and function, a brain simulator (mechanical brain) may need to be more complex than one designed under former assumptions. Before proposing a new approach, however, it is important to review some psychological perspectives on brain function. The next section will focus on showing how our discussion of the human neural network fits into the framework of human cognitive behavior.
In the subject index of the bibliography you will find applicable references under the following topics:
|Perception and Cognition
|Language and Dialog
|Apps and Processes
|The End of Code