31 Jul Modeling Non-Random Synaptic Links
I have discussed the different meanings of “random” in “The Random Hamlet” and “That’s So Random!”: the mathematical definition presumes there is some not-yet-known law governing the phenomenon, while other definitions suggest that randomness means the phenomenon is not governed by any law at all. Remember our reference to Rosenblatt’s early contributions to neural network theory in Roots of Neural Nets? As we consider different modeling options for our language understanding capability, it is important that we evaluate the underlying assumptions of each component of the model to ensure it is well matched to the process it represents. Rosenblatt’s first assumption involves randomness.
(R-1): The arrangement of physical links in the nervous system differs from person to person. “At birth, the construction of the most important networks is largely random” (1958, p. 388).
Under the mathematical definition of randomness, there is a very strong case for this assertion. If, however, the assumption is that there is little or no order to the links, I take exception.
An Organized Neural Model
I dedicated several posts in the first three sections of this blog to demonstrating the inherent order of neurons and links in the brain. Because the facts contradict the disordered-structure theory, Rosenblatt’s first assumption can be replaced by one stating that the brain has functionally specialized areas, neurons, and links. This new assumption bespeaks a highly specific organization whose structure differs depending on the task. Little randomness remains in the model if it assumes an “adult” state in which the structure of the system is fixed, and learning modifies it only by adding new links between individual knowledge fragments.
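This “adult state” assumption can be sketched in code. The class and fragment names below are my own illustration, not anything from the post: the set of knowledge fragments is frozen, and learning can only add associations between fragments that already exist.

```python
# Hypothetical sketch of the "adult state" assumption: the node structure
# is fixed at construction; learning only adds links between fragments.

class FixedStructureNet:
    def __init__(self, fragments):
        # The set of knowledge fragments (nodes) is frozen here.
        self.fragments = frozenset(fragments)
        self.links = set()

    def learn(self, a, b):
        """Learning adds a link between existing fragments; it never adds nodes."""
        if a not in self.fragments or b not in self.fragments:
            raise ValueError("the adult structure is fixed; no new fragments")
        self.links.add((a, b))

net = FixedStructureNet({"fork", "plate", "dinner"})
net.learn("fork", "dinner")  # allowed: a new association between known fragments
```

Trying to link to a fragment outside the fixed structure raises an error, which is the point of the assumption: adaptation happens in the connections, not in the inventory of parts.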
For our new model to perform complex processing, it must replace randomness with a structure organized at multiple levels. We have shown that the brain is organized at the following levels:
|Level|Organization|
|---|---|
|Macroscopic hardware|The areas of the brain specialize by function.|
|Microscopic hardware|The links between neurons, within and between brain areas, correspond to their functions (they are not random).|
|Macroscopic software|The flow of excitation/inhibition (E/I) in the brain is not directional and time-phased but cyclical, with varying durations of intense activation, particularly in context-maintenance functions (see MIPUS).|
|Microscopic software|Cognitive activities of different types involve different forms, frequencies, and durations of impulses.|
If you could open up MIPUS and watch the excitation and inhibition traveling around in his circuits, you would see that some things come and go very quickly (particularly onerous tasks he would rather ignore). More interesting impulses, however, tend to persist in his mechanical mind as he toys with appealing ideas (like bungee jumping). He has proven his ability to serve dinner efficiently, but sometimes, when his mind wanders, he makes small errors. The fact that he can think about two things at once bespeaks a complex model of cognition.
Neural Space and Time
Artificial neural systems (ANS) model Rosenblatt’s second assumption by adjusting the weights of links between processing elements. Over time, these weights become entrenched and static. The variable transience of action potentials in the brain, however, contradicts this popular model: because the action potential at some synapses decays more slowly than at others, the propagation of successive impulses is facilitated. This is related to, but not the same as, the practice of adjusting weights through learning.
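The weight-adjustment idea can be sketched with Rosenblatt’s own perceptron learning rule. The function names and the AND-gate training data below are illustrative assumptions on my part, not from the post; the point is that once training ends, the weights sit still for every subsequent input.

```python
# Minimal sketch of perceptron-style weight adjustment between
# processing elements (illustrative names and data).

def perceptron_train(samples, epochs=20, lr=0.1):
    """Learn weights and a bias for binary inputs with 0/1 targets."""
    n = len(samples[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            # Threshold activation: fire if the weighted sum exceeds zero.
            y = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            # Adjust each weight in proportion to the error on this sample.
            err = target - y
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# After training, the weights are "entrenched": fixed for every new input.
AND_GATE = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = perceptron_train(AND_GATE)
```

Note what is missing here: nothing in the trained network decays or lingers between inputs, which is exactly the contrast with biological synapses drawn above.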
After an ANS has learned a given data set, the weights of links between neurodes remain essentially static, input after input. In the nervous system, however, the strength of the output potential from two identical inputs may differ radically because of residual charge left by prior inputs. This illustrates the influence of temporal as well as spatial factors on neural functionality.
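That temporal effect can be sketched with a simple leaky integrator. The decay constant and stimulus values are arbitrary assumptions of mine, chosen only to show that the same stimulus, delivered twice, produces a stronger response the second time because residue from the first has not fully decayed.

```python
# Sketch (my own illustration) of residual charge: identical inputs
# yield different responses because earlier potential decays gradually.

def leaky_neuron(inputs, decay=0.5):
    """Return the membrane potential after each input, carrying leaky residue."""
    potential = 0.0
    trace = []
    for stimulus in inputs:
        # Residue from earlier inputs decays rather than vanishing outright.
        potential = potential * decay + stimulus
        trace.append(potential)
    return trace

trace = leaky_neuron([0.8, 0.8])  # two identical stimuli
# First response: 0.8. Second: 0.8 * 0.5 residue + 0.8 stimulus, roughly 1.2.
```

A static-weight ANS given the same input twice would respond identically both times; this little temporal state is what that model leaves out.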
Rosenblatt’s second assumption (R-2) states that, over time, neurons can undergo long-lasting changes as a result of applied stimuli and the probability of causing responses in other neurons. This assumption is important for building a model of learning and adaptability in a system that can gradually become better at performing brain-like tasks. I’ll address the time element in upcoming posts.