18 Aug Neuromorphic Computing
To Mimic is Human
When is imitation not flattering or sincere? I try to be sincere in my blogging, and I have tried not to unnecessarily emphasize the computing ability of the human brain, but the whole point of this blog is to imitate it using computers. A neuromorphic (resembling the brain and/or neurons) computing model may imitate the look and feel of the brain, or it may attempt to go all the way and try to think like a brain. The resemblance may be in form, function, or process. Ideally the computing model could resemble the brain in all three. But before we can hope to find resemblances, we need to get as accurate an idea as possible of the brain's form, function, and processes.
**Understanding Context Cross-Reference**
Click on these links to other posts and glossary/bibliography references:

| Prior Post | Next Post |
|---|---|
| Stimuli | Two Rights and a Village: Social Communication |
Neither a motherboard nor a memory chip looks anything like a neuron. Logical processes, however, may be designed to be nearly identical to the physical processes, once we figure out what those are.
The computer designs could mimic the brain’s structure: kinda spherical, kinda mushy – or possibly billions of interconnected nodes divided into specialized areas, and so forth. The designs could be more focused on function: electrical impulses going every which way, some positive (excitation), some negative (inhibition), neurons responding in different ways depending on the nature and location of the spark, and so forth. Or the models could focus on replicating the processes: receiving and interpreting sensory input, deciding what to have for lunch, planning to take over the universe, and so forth.
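The functional view above, electrical impulses arriving as excitation (positive) or inhibition (negative) and a neuron responding when the balance tips, can be sketched as a toy model. Everything here (the class name, threshold, and leak factor) is illustrative, not anything from the post, and real neurons are far richer than this:

```python
# Toy neuron: accumulates excitatory (+) and inhibitory (-) impulses
# and "fires" when its potential crosses a threshold. Illustrative only.

class ToyNeuron:
    def __init__(self, threshold=1.0, leak=0.9):
        self.threshold = threshold  # firing threshold (assumed value)
        self.leak = leak            # potential decays a little each step
        self.potential = 0.0

    def step(self, impulses):
        """impulses: signed weights (+ for excitation, - for inhibition)."""
        self.potential = self.potential * self.leak + sum(impulses)
        if self.potential >= self.threshold:
            self.potential = 0.0    # reset after firing
            return True             # spike
        return False

neuron = ToyNeuron()
spikes = [neuron.step([0.6, -0.1]) for _ in range(5)]
print(spikes)  # -> [False, False, True, False, False]
```

The same steady input produces a spike only every few steps, a small hint of how timing and accumulation, not just wiring, shape a neuron's response.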
A neuromorphic computational paradigm can deal with any or all of the three main issues in modeling the brain. There may be aspects of cognition not addressed in these three circles, but they suffice for now. As the discussion turns to chaos, as it eventually must, consider the regularity of the structures and functions of the human brain. How much of this is non-deterministic? If the macroscopic organization of the human brain is regular and predictable, is it possible that the microscopic structure (individual links between neurons) fits into some grand scheme that is definable and reproducible? Pivotal questions!
Finally, does context affect structure, function or process in a framework for cybernetic modeling? Regarding structure: do we need to make a place for context in our knowledge representation scheme? Is there a context function or heuristic in the model? When we create processes that mimic the brain’s language understanding process, where does context fit in?
There was a Massachusetts company a few years back, called Thinking Machines, that built computers with massive numbers of highly interconnected simple processors. Their idea was to use processors to represent neurons in the brain. If neurons were simple in their processing and the number of connections sufficient, then their machines would have been the most brain-like machines on the planet. I contend that the model is more complex than that, and that each neuron may be much more capable and functional than was assumed in this model. This raises the question: what should a genuinely neuromorphic computing model look like?
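The "many simple processors, massively interconnected" idea can be sketched in a few lines: each unit is a trivial threshold gate, and whatever interesting behavior emerges comes from the wiring rather than from any one unit. The network size, weights, and update rule below are all assumptions for illustration, not a description of Thinking Machines' actual hardware:

```python
import random

# Sketch: N trivial threshold units, fully interconnected by a weight
# matrix. Each unit fires iff its weighted input from the others exceeds
# zero. All parameters here are illustrative.

random.seed(42)
N = 8
# weights[i][j]: influence of unit j on unit i (excitatory or inhibitory)
weights = [[random.uniform(-1, 1) for _ in range(N)] for _ in range(N)]
state = [1, 0, 1, 0, 0, 1, 0, 0]  # initial activity pattern

def step(state):
    """Update every unit in parallel from the current global state."""
    return [1 if sum(w * s for w, s in zip(row, state)) > 0 else 0
            for row in weights]

for _ in range(3):
    state = step(state)
print(state)
```

Even this crude sketch shows why the assumption matters: if each unit is this simple, all of the model's power must live in the connection pattern, which is exactly the premise I am questioning above.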
These are all questions I will address as we go deeper into understanding context.
Click below to look in each Understanding Context section:

| 4 | Perception and Cognition | 5 | Fuzzy Logic | 6 | Language and Dialog | 7 | Cybernetic Models |
|---|---|---|---|---|---|---|---|
| 8 | Apps and Processes | 9 | The End of Code | | Glossary | | Bibliography |