11 May Thinking in Parallel
A Parallel Expert
I once rode the Trans-Siberian Railway from Moscow to Ulaanbaatar, Mongolia. Several times along the journey we passed slower trains, and we were passed by faster ones. When people and freight are confined to a single track, the speed of the slowest defines the speed of transit for all.
Parallel or multiprocessor computers, like the brain, permit simultaneous execution of multiple tasks or programs. Within a fraction of a second I can perceive an obstacle on the path, recognize it for what it is, consider alternate possibilities, calculate and prioritize the possibilities, decide which action to perform, and send the necessary messages to my legs and arms, torso and neck to change my stance and stride in a way to continue my forward momentum without stepping in, slipping on or being bitten by whatever has appeared in my way. This all happens in parallel: it must.
I am trying to develop better smart or expert systems to perform brain tasks, specifically, the most complex brain task we have studied in Understanding Context: communication. To interpret human language, we need a flexible and powerful approach. For flexibility and power that will bring us into the next phase of the information revolution, I believe we need to run processes in parallel like the brain does.
|Understanding Context Cross-Reference|
|Click on these Links to other posts and glossary/bibliography references|
|Prior Post||Next Post|
|Knowledge Value-Chain Instrumentation||Data Convergence at Velocity|
|parallel computing, hyper-converged architecture||Haynes 1982, Rumelhart 1986|
|interpret, neuromorphism||McClelland 1981, Desrochers 1987|
|multi-threading, cognition||Hewitt 1986, Hwang 1985|
|SETI, distributed processing||Stone 1987, Swaine 1988|
How many AI software guys does it take to change a lightbulb? None – It’s a hardware problem. Many AI guys don’t do hardware, but since this research effort was eclectic from the beginning…what the heck!
The team-tackle (parallel) approach to problem solving can yield interesting results. The structural (possibly hardware) and functional (mostly software or programming) issues of parallel computing correlate directly with the form and function of cognition. In this and future posts, I plan to discuss “A Parallel Dimension” of intelligent systems, and describe parallelism in modeling approaches, such as decentralized services, algorithms and heuristics.
There are computers with more than one central processing unit (CPU), and CPUs that can run more than one thread of code simultaneously or in parallel. With the rise of parallel computers and the obvious connections with human cognition, the parallel dimension is an important consideration in our modeling. The hardware of the brain is parallel in that specialized areas perform specialized tasks in parallel under the auspices of billions of neurons spreading fuzzy impulses around in parallel waves. It makes sense to look at the characteristics of parallel hardware as we consider approaches for imitating these capabilities.
Parallel is Logical
We have been talking about conceptual tools that may be mathematical or symbolic, graphical or logical. We can use such tools to capture and display information in expert systems. Any of the conceptual tools can be implemented as heuristics, and many of them can operate independently and in conjunction with other heuristics or processes. Let’s consider the value of parallel computers in the context of neuromorphic models.
While many computers can run code in parallel, there is a woeful absence of programs designed specifically for parallel computers. Notable exceptions include “crowd-sourcing” (see below). Clustering databases and distributing loads to server farms are other effective mechanisms of parallelizing information processing. Because automated parallelization technologies available today do not begin to approach the potential speed-up offered by fast parallel computers, explicit parallelism at the algorithm and system-design levels is essential to maximize the benefits of dividing to conquer.
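To make "dividing to conquer" concrete, here is a minimal sketch of an explicitly parallel algorithm: the input is split into chunks, each chunk is processed in a separate worker process, and the partial results are merged. The function names and the choice of task (a sum of squares) are illustrative, not from the original post.

```python
# Explicit divide-and-conquer parallelism with Python's multiprocessing.
from multiprocessing import Pool

def sum_of_squares(chunk):
    # Each worker handles its own slice of the input independently.
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, workers=4):
    # Divide: split the input into roughly one chunk per worker.
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    # Conquer: process the chunks simultaneously, then merge the results.
    with Pool(workers) as pool:
        return sum(pool.map(sum_of_squares, chunks))

if __name__ == "__main__":
    print(parallel_sum_of_squares(list(range(10))))  # 0^2 + 1^2 + ... + 9^2 = 285
```

The parallelism here is explicit in the algorithm's design, exactly as the paragraph above argues: no automatic parallelizer decides how to split the work; the programmer does.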
Since aspects of human cognition are parallel, explicitly parallel algorithms or heuristics to run on parallel CPUs seem to be a logical choice for modeling human cognitive behavior. The structural parallelism of neurons in the brain has prompted architectural decisions in the design of neural networks that are used in Optical Character Recognition, voice recognition and complex statistical analyses. I see opportunities to use similar models for language understanding and translation.
Mixed up Formalisms
Artificial neural networks have been designed to mimic the structure and functionality of the human brain. They are often referred to as neuromorphic computer designs. Because connections between neurons in the brain transmit variable levels of action potential, neural net functionality centers on the weights of links between elements that strengthen or weaken electrical signals that pass between elements over the links. This resembles the use of multi-valued or fuzzy logic in processing. This functional approach applied to expert systems provides an opportunity for graceful degradation in the face of missing data, errors or failures: a strength of neural networks.
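A toy example may help here: the element below combines weighted input signals into a fuzzy activation in [0, 1], and simply skips missing inputs, so it degrades gracefully to a weaker signal instead of failing outright. All names and the clamping choice are illustrative assumptions, not part of any particular neural network library.

```python
# A toy neural element: weighted links, fuzzy output, graceful degradation.
def activate(inputs, weights):
    """Return a fuzzy activation in [0, 1] from weighted inputs.

    Missing inputs (None) are skipped rather than raising an error,
    mimicking graceful degradation in the face of missing data.
    """
    total = sum(w * x for w, x in zip(weights, inputs) if x is not None)
    # Clamp into [0, 1]; a simple clamp stands in for a sigmoid here.
    return max(0.0, min(1.0, total))

# The middle input is missing, yet the element still produces a signal.
signal = activate([0.9, None, 0.7], [0.5, 0.3, 0.4])  # about 0.73
```

Strengthening or weakening a link is then just a matter of adjusting its weight, which is the multi-valued, fuzzy behavior described above.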
The structure of neural networks is distributed and massively parallel while most expert systems run on uniprocessor or coarse-grained parallel computers. The advantages of the parallelism of neural networks can be applied to expert systems to some extent by distributing the knowledge. Many techniques are available for distributing expert system knowledge. Rule-based, frame-based and network-based approaches are all forms of distributing functional and world knowledge.
Crowd-Sourcing as Parallelization
Grid computing and crowd-sourcing are ways to harness many computing nodes operating in parallel. Programs such as SETI and some malicious BOTS that use infected machines to create a network of distributed damage, are examples of crowd-sourcing to exploit distributed computing resources simultaneously and largely in parallel. This type of parallelism is based on many separate machines with the minimum necessary capabilities executing the same task independently on different inputs (divide the input and conquer it). I’ll describe other possibilities below.
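The SETI-style pattern can be sketched in miniature: independent workers pull "work units" from a shared queue and run the same task on each, with no coordination beyond reporting results. This is an illustrative simulation on one machine; a real crowd-sourced system would distribute the units over a network.

```python
# Work-unit parallelism: same task, different inputs, independent workers.
from multiprocessing import Process, Queue

def worker(work_units, results):
    while True:
        unit = work_units.get()
        if unit is None:           # sentinel value: no more work
            break
        results.put(max(unit))     # the "same task" applied to each unit

def run(units, n_workers=2):
    work_units, results = Queue(), Queue()
    procs = [Process(target=worker, args=(work_units, results))
             for _ in range(n_workers)]
    for p in procs:
        p.start()
    for u in units:                # divide the input...
        work_units.put(u)
    for _ in procs:                # ...then send one sentinel per worker
        work_units.put(None)
    out = sorted(results.get() for _ in units)  # ...and conquer it
    for p in procs:
        p.join()
    return out

if __name__ == "__main__":
    print(run([[1, 5], [9, 2], [4, 4]]))  # [4, 5, 9]
```

Each worker needs only the minimum capability to process one unit, which is exactly what makes this pattern suitable for heterogeneous volunteer machines.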
The dichotomy between monolithic computing (one central processing unit with a dozen registers and an ALU) and massively parallel computing (huge numbers of processors mimicking the brain's billions of neurons) is not necessarily helpful once we understand that the brain is a hyper-converged architecture, with memory and processing occupying the same hardware. In addition to good hardware, good software is a necessity. I will discuss programming languages and systems available on parallel-processor computers in the next couple of posts.
Parallelism and Neuromorphism
The brain has many neurons, each acting as a tiny processor. Each brain area also resembles a computer in its own right, processing its own specialized slice of the data. This resembles parallel computers and multicomputers. A systems designer could use specialized processors for different types of tasks: digital signal processors for audio, arithmetic logic units for computational problems, or neural networks for scene recognition. This capabilities-based approach is cropping up in commodity computing and communications devices today.
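The capabilities-based idea can be sketched as a dispatch table that routes each task type to whichever specialized "processor" can handle it. The processor functions here are plain stand-ins for a DSP, an ALU, or a neural network; all names are illustrative.

```python
# Capabilities-based dispatch: route each task to a specialized processor.
def dsp_filter(samples):
    # Stand-in for a digital signal processor: attenuate an audio signal.
    return [s * 0.5 for s in samples]

def alu_compute(a, b):
    # Stand-in for an arithmetic logic unit: a simple computation.
    return a + b

PROCESSORS = {"audio": dsp_filter, "math": alu_compute}

def dispatch(task_type, *args):
    # Look up the specialized unit for this kind of task and hand it off.
    return PROCESSORS[task_type](*args)
```

Adding a new capability (say, a scene-recognition network) means registering one more entry in the table, without touching the dispatch logic.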
Computational paradigms that can provide both the flexibility of distributed processing and the strength of individual processors with access to large blocks of memory are found in multiprocessor computers and multicomputers available today. Many of these machines have powerful microprocessors linked together to complete complex tasks by splitting them up into subtasks (divide tasks and conquer) and sharing the results with each other. This approach, besides being extremely economical due to the availability of excellent microprocessors, is appealing because programming techniques and actual code from microcomputers are a proper subset of programming techniques and code for parallel machines based on the same CPUs. Section 9 will have posts with more information on these subjects.
Some machines, such as the Connection Machine developed by Thinking Machines Corporation, have been designed with distributed CPU architectures reminiscent of artificial neural systems (ANS). In such machines, thousands of simple processors are tightly linked to enable processing that resembles the neural function of spreading patterns of activation.
Interconnection Bandwidth
Because of the slow speed of intercellular communication in the nervous system, it seems likely that many of the primary memory components used in daily cognitive activities have direct or nearly direct links with sensory input and output channels.
It is known that cells such as Purkinje cells in the cerebellum and many pyramidal cells in the cerebrum are directly or closely connected to both input and output channels. Because a system imitating the brain would be difficult to build with all elements directly connected to I/O, a probabilistically organized model would be an efficient way to compensate for the difficulty of providing such high bandwidth. In a probabilistically organized model, less frequently accessed data would sit further away from the main I/O channels. Besides making the system more efficient, this may provide a good model for imitating human memory access.
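One simple way to realize a probabilistically organized memory is a move-to-front arrangement: items that are accessed more often migrate toward the "front" of storage, nearest the I/O channel, so frequent data enjoys the shortest retrieval path. The class below is an illustrative sketch; the move-to-front policy is my assumption, one of several plausible organizing rules.

```python
# Probabilistically organized memory: frequently used items drift toward I/O.
class FrequencyMemory:
    def __init__(self, items):
        self.items = list(items)   # position 0 = nearest the I/O channel

    def access(self, key):
        # Retrieval cost grows with distance from the I/O channel.
        distance = self.items.index(key)
        # Move the accessed item one step closer for next time.
        if distance > 0:
            self.items[distance - 1], self.items[distance] = (
                self.items[distance], self.items[distance - 1])
        return distance

mem = FrequencyMemory(["a", "b", "c"])
mem.access("c")   # cost 2; "c" moves one step closer
mem.access("c")   # cost 1 on the second access
```

Repeated access keeps pulling hot items forward, so over time the layout approximates the access-frequency distribution, much as the paragraph above suggests.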
In upcoming posts, I will continue this discussion and propose processes that can be run in parallel, and possible hardware models that are inherently like thinking in parallel to handle complex brain tasks.