Patterns in the Mind
28 Jul
As we look for suitable solution designs for representing the knowledge and processes we humans use to communicate, we realize that we have no idea what knowledge in the brain looks like. Further, we only have relatively vague ideas about the processes that occur in the brain as we produce and comprehend words, phrases and sentences. Yet, what we know is useful. I’ll spend just a little more time on connectionism as a model for replicating brain behavior, and its suitability for language processing.
Limitations of Feature Space
The feature-space model of pattern classification and recognition embodied in Artificial Neural Systems (ANS) is simple, reasonably accurate, and useful for computational modeling. The model has limitations, however, that, if ignored, may lead to futile attempts to design a functional computer model. The main limitation is that, while the dimensions of the feature space correspond directly to the number of specialized feature-detector layers, there is no efficient way of correlating multiple results from different feature layers into a single result. A single feature with two or more possible values can be represented on a two-dimensional grid, but no matter how many layers we add to the network, the feature-space model remains limited to such two-dimensional problems. Adding layers is still interesting: monitoring the behavior of more complex networks improves researchers’ ability to learn about and solve this class of problems. Satisfying multiple constraints at once, however, is not what standard feature-space models are designed to do.
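The classic illustration of this limit is linear separability. The sketch below (my own illustrative example, not from any specific ANS implementation) trains a single-layer perceptron on two tiny problems: it learns AND, which a single line through feature space can separate, but cannot learn XOR, where two features must be combined before a decision is possible.

```python
# Illustrative sketch: a single-layer perceptron handles linearly
# separable patterns (AND) but fails on XOR, where no single line
# through the two-dimensional feature space separates the classes.

def train_perceptron(samples, epochs=50, lr=0.1):
    """Train weights and bias on (inputs, target) pairs.
    Returns the number of misclassified samples after training."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), t in samples:
            y = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = t - y
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return sum(t != (1 if w[0] * x1 + w[1] * x2 + b > 0 else 0)
               for (x1, x2), t in samples)

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

print(train_perceptron(AND))  # 0: AND is linearly separable
print(train_perceptron(XOR))  # nonzero: XOR cannot be learned in one layer
```

No amount of extra training fixes the XOR case; the limitation is in the model's geometry, not its parameters.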
Modeling Non-Random Synaptic Links
As mentioned in my posts in Section 3, especially “Roots of Neural Nets”, Rosenblatt’s assumptions (1958) are still often used as a basis for design decisions in ANS. These assumptions can now be reevaluated in the context of the information discussed in this blog, to discover whether changes in our understanding of the physiology and psychology of cognition suggest changes to the assumptions that drive implementation decisions. The next few pages look again at model assumptions before restating Rosenblatt’s assumptions and responding to each.
To test a model of human thought, especially one that claims to accurately reflect physiological characteristics of the brain, we must first consider the assumptions governing the implementation. The necessity of this kind of deep inspection is evidenced by the frequency of errors found even in the most respected technical publications of the past few decades of cybernetic research. An example of a persistent misconception is the belief that the interconnection structure of the brain is random and that each neuron has thousands of connections. Current research indicates that: “The average human brain has about 100 billion neurons (or nerve cells) and many more neuroglia (or glial cells) which serve to support and protect the neurons. Each neuron may be connected to up to 10,000 other neurons, passing signals to each other via as many as 1,000 trillion synaptic connections, equivalent by some estimates to a computer with a 1 trillion bit per second processor. Estimates of the human brain’s memory capacity vary wildly from 1 to 1,000 terabytes (for comparison, the 19 million volumes in the US Library of Congress represents about 10 terabytes of data)” (Mastin, 2010).
The errors in the literature mislead in a few ways. First, the frequently cited estimate of 100 billion neurons in the brain may be accurate, but not every neuron is linked to thousands of others: some are connected to tens of thousands, while others are linked to only dozens or hundreds. Reducing this variance to a single average connection count seems to corroborate the connectionist assumptions. Despite errors of an order of magnitude or more, some neural models use these numbers as a basis for design features or parameters. The assumption that links between neurons are random is also highly suspect.
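The statistical point here can be made concrete. In the sketch below (the distribution and its parameters are hypothetical, chosen only for illustration, not drawn from neuroanatomical data), per-neuron connection counts follow a right-skewed lognormal distribution: the mean lands in the thousands, yet a majority of neurons have only a few hundred links.

```python
# Illustrative sketch (hypothetical parameters): a skewed distribution of
# per-neuron connection counts can have a mean in the thousands while most
# neurons have far fewer links -- so citing the average alone misleads.
import random
import statistics

random.seed(42)  # reproducible draw

# Synthetic per-neuron connection counts from a lognormal distribution.
counts = [int(random.lognormvariate(6.0, 2.0)) for _ in range(100_000)]

mean_degree = statistics.mean(counts)
median_degree = statistics.median(counts)
few_links = sum(c < 500 for c in counts) / len(counts)

print(f"mean connections per neuron:   {mean_degree:.0f}")
print(f"median connections per neuron: {median_degree:.0f}")
print(f"fraction with < 500 links:     {few_links:.0%}")
```

With these (again, purely illustrative) parameters the mean is several times the median, which is the shape of error the paragraph above describes: an average that hides the many sparsely connected cells.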
My posts in Section 2 described the different types of cells that occupy the different brain strata. We learned that many of the smaller cell types, such as Golgi and Basket cells, have orders of magnitude fewer nerve processes (axons and dendrites) than larger cell types, and thus cannot support the number of links found in larger cell types such as the giant pyramidal and Purkinje cells.
What is currently known about neurite growth and about the navigation and targeting abilities of growth cones shows that earlier assumptions about randomness are false. Links between neurons in different regions of the brain are highly specific and non-random. Links between neurons in different layers of the same region are likewise specific, and I’ll address this further in my next post. The specificity of this architecture is considered critical to brain functionality because of the pervasive interplay of structure and function, an interaction that appears essential to sensory perception, action and cognition.
ANS theory has been evolving away from strictly random organizations. Hinton and his colleagues explain why: “The search for general principles that allow parallel networks to learn the structure of their environment has often begun with the assumption that networks are randomly wired. This seems to be just as wrong as the view that all knowledge is innate. If there are connectivity structures that are good for particular tasks the ANS will have to perform, it is much more efficient to build these in at the start” (Hinton, et al., 1984, p. 1). I believe the model has room for advances in our understanding of how distributed computational systems work and can be designed to imitate brain functions, and I think systems with explicit representation of knowledge may benefit from connectionist formulas.
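Hinton's point about building connectivity in at the start can be sketched in a few lines. Below (an illustrative toy, with sizes and the local-window topology chosen by me, not taken from the cited paper), each hidden unit is wired only to a small contiguous window of the input, as one might do for a task with known local structure; compare the link count with random all-to-all wiring.

```python
# Sketch of structured vs. random wiring (toy sizes, hypothetical topology):
# if a task is known to have local structure, wiring each hidden unit to a
# small input window "builds in" that structure and uses far fewer
# connections than an unstructured fully connected net.

def local_mask(n_in, n_hidden, window):
    """Connectivity mask: each hidden unit sees one contiguous input window."""
    mask = [[0] * n_in for _ in range(n_hidden)]
    for h in range(n_hidden):
        # slide the window evenly across the input
        start = h * (n_in - window) // max(n_hidden - 1, 1)
        for i in range(start, start + window):
            mask[h][i] = 1
    return mask

n_in, n_hidden = 100, 20
structured = local_mask(n_in, n_hidden, window=9)
structured_links = sum(map(sum, structured))   # 20 units x 9 inputs = 180
random_links = n_in * n_hidden                 # all-to-all baseline = 2000

print(structured_links, "structured links vs", random_links, "all-to-all")
```

Fewer free connections means fewer parameters to learn, which is the efficiency argument in the quoted passage.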