
Cognitive Multi-Processing

Joe Roushar – July 2017

Divide and Conquer

Swarm computing applications, with large numbers of autonomous agents, are beginning to appear and deliver stunning results. The combination of autonomy, simple tasks and parallelism has great power. Today I’ll address parallel computing and models for breaking down computational problems. I will not address autonomy today, but will save the question of empowering independent agents for a future post.

ANS and Multiprocessors

Artificial Neural Systems (ANS) are probably the closest approximation of the mechanical brain paradigm, so it is useful to know how they work. Many ANS are implemented on standard computers with single central processing units (CPU). Windows PCs, Macs, midrange servers, Sun and many other mini and mainframe computers are popular platforms for ANS research. Because of the distributed nature of neural networks, a distributed or multi-processor computer would presumably be the most efficient hardware platform for implementing them.

Intuitively, it seems that a massively parallel computer with thousands to millions of tightly coupled processors would be the ideal simulation of the brain. There are a couple of problems with this approach. One of the biggest is the ratio of processing speed to communication speed, referred to as communications overhead: the time required to pass information between processors is dramatically greater than the time required to manipulate information within a single processor. Because of this overhead, a powerful single processor with access to large amounts of memory can perform functions, including brain tasks, with great efficiency at dramatically lower cost, as long as the CPU is fast enough to handle the load. Further, though humans think fundamentally in parallel, it is sometimes extremely difficult for humans to tell computers how best to divide and conquer computational problems.
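To make the overhead argument concrete, here is a minimal back-of-the-envelope sketch in Python: the work divides perfectly across processors, but every added processor pays a fixed communication cost, so speedup rises and then collapses. All of the numbers are illustrative assumptions, not measurements.

```python
# Toy model of parallel speedup when communication overhead is included.
# All timings are illustrative assumptions, not measurements.

def parallel_time(work, n_procs, comm_per_proc):
    """Time to finish 'work' units split across n_procs processors,
    where each processor pays a fixed communication cost."""
    compute = work / n_procs          # perfectly divisible work
    comms = comm_per_proc * n_procs   # overhead grows with processor count
    return compute + comms

work = 1_000.0   # arbitrary units of computation
comm = 5.0       # assumed cost of passing information between processors

for n in (1, 4, 16, 64, 256):
    t = parallel_time(work, n, comm)
    speedup = work / t                # relative to one processor, no comms
    print(f"{n:>4} processors: time={t:8.1f}  speedup={speedup:5.2f}x")
```

Running the sketch shows speedup improving up to a point and then falling below 1x as communication dominates, which is exactly why a fast single CPU with lots of memory often wins.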

In this post I’m going to look at process polymorphism, or different characteristics of brain activity, and granularity in different cognitive processing tasks.


When I originally drafted the content for this post, in the early 1990s, companies like n-Cube, Intel Parallel Scientific Computing, Cray and Thinking Machines, Inc. were competing to come up with the fastest and most powerful parallel machines to explore the boundaries of parallel computing. These companies’ products are largely absent from all but the headiest research centers today. By the end of this post, I hope to explain my ideas on what happened to them.

I have stated in other posts that the ideal system will leverage the strengths of both natural and artificial systems. As the natural strengths of parallel computers and parallel algorithms have been shown to be ideal for certain functions and classes of problems, we should examine those functions for the specific characteristics that allow parallel computers to perform so well. Parallel computers tend to perform exceptionally well on any class of problem with multiple independent components that can be computed separately and then reassembled for a final result. There are many problems of this type, including weather prediction, molecular modeling and, possibly, natural language understanding.
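A minimal sketch of that divide-and-conquer pattern, using Python’s standard multiprocessing module; the “grid cell” computation is a placeholder standing in for any independent sub-problem (a cell of a weather model, a molecular fragment, a sentence to parse).

```python
# Minimal divide-and-conquer sketch: independent pieces are computed
# in parallel, then reassembled into one result.
from multiprocessing import Pool

def simulate_cell(cell):
    """Stand-in for an independent sub-problem (e.g. one grid cell
    of a weather model). The formula is purely illustrative."""
    return cell * cell % 97

if __name__ == "__main__":
    grid = list(range(10_000))                     # independent components
    with Pool(processes=4) as pool:
        partials = pool.map(simulate_cell, grid)   # compute separately
    result = sum(partials)                         # reassemble the final result
    print(result)
```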

In real applications, ANS have been shown to be extremely good at image analysis in two dimensions. In fact, you can get ANS-based Optical Character Recognition (OCR) software for free. But if you add more dimensions to the problem, such as adding color to length, width and depth, you can push beyond the capacity of the ANS processing model to handle the complexity.

OCR Software
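As one concrete, hedged illustration: recent versions of the freely available Tesseract engine use a neural (LSTM) recognizer for exactly this kind of two-dimensional image analysis. The pytesseract binding and the file name below are assumptions for illustration, not the only way to do it.

```python
# Sketch: driving a freely available neural OCR engine from Python.
# Assumes the Tesseract engine and the pytesseract binding are installed;
# "scanned_page.png" is a placeholder file name.
from PIL import Image
import pytesseract

text = pytesseract.image_to_string(Image.open("scanned_page.png"))
print(text)
```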

Polymorphism and Heterogeneity

In light of the phenomenon of specialization in the nervous system, it seems reasonable to expect any parallel system imitating the brain to perform different processes in tandem. Single-instruction, multiple-data (SIMD) computers such as the Connection Machine can simulate different processes that occur concurrently, but because of the “lock step” restriction of one instruction being performed by all processors simultaneously, it is difficult to imitate more than one part of the brain at a time. Although spreading patterns of activation constitute the main cybernetic function of the brain, this process must be polymorphic because of the different types of knowledge being processed. Perhaps this polymorphism can best be expressed by the interaction of spatial and temporal aspects of cognition described earlier. Most programs are not polymorphic, and those that are tend to require extraordinary servers to perform well.
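A small sketch of what “lock step” means in practice, with NumPy’s vectorized operations standing in for a SIMD machine: one instruction is applied to every data element at once, while per-element divergence has to be expressed outside that model. The arrays and operations are illustrative only.

```python
# SIMD-style "lock step": one instruction applied to every data element
# at once, NumPy's vectorized operations standing in for the hardware model.
import numpy as np

signals = np.array([0.2, -0.7, 1.3, 0.0, -0.1])

# Lock step: the same operation runs across all elements simultaneously.
activated = np.maximum(signals, 0.0)

# Divergent, per-element behaviour is awkward in a pure SIMD model:
# each element wants a *different* instruction depending on its value.
mixed = [abs(s) if s < 0 else s ** 2 for s in signals]

print(activated, mixed)
```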

No matter how general a microprocessor is, it is only as flexible as the software driving it. True flexibility comes from programming languages more than from computer hardware, because the complex tasks of artificial intelligence require powerful and expressive mechanisms for describing structure and function. Low-level and machine-level programming constructs are usually far too cumbersome and counterintuitive to be useful for designing complex simulators.

Heterogeneity is a term closely associated with polymorphism. We spoke earlier about how the natural progression of automation technology is leading us into heterogeneous models for multi-computing networks. Parallel computers with heterogeneous processors are already available and in use in the research community. One example, manufactured by Intel Scientific Computing, combines CISC microprocessors, CISC microprocessors with massive storage, vector processors and numeric processors.

Coarse Granularity

Coarse-grained parallelism may be associated with the specialized centers of the brain that handle different analytical and synthetic processes. Remember, the various types of neurons and the way they are arranged are optimized for the types of processes they perform. The processes happen predominantly within the center responsible for them. Intermediate and final results are passed to other centers to contribute to other processes.

Each brain center can be compared to a microprocessor CPU (a rough software sketch of this analogy follows the list):

  • a neural net chip for visual processing;
  • a DSP chip for analyzing sound waves;
  • a vector processor for coordinating requests for physical movements;
  • a pipelined processor for converting movement requests and sending the necessary signals to the muscles; and
  • a generic CISC chip for high-level analysis and decision making.
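Here is a rough, hypothetical rendering of that analogy in software: each “center” is a specialist with its own processing style, and intermediate results are handed from one center to the next. Class and method names are invented for illustration.

```python
# Coarse-grained specialization: each "brain center" is a specialist,
# and intermediate results are passed between centers.
# All names and computations are illustrative placeholders.

class VisualCenter:
    def process(self, image):
        return {"edges": len(image)}              # stand-in for visual analysis

class AuditoryCenter:
    def process(self, samples):
        return {"pitch": sum(samples) / len(samples)}

class DecisionCenter:
    def process(self, features):
        return "act" if features.get("edges", 0) > 3 else "wait"

visual, audio, decide = VisualCenter(), AuditoryCenter(), DecisionCenter()
features = visual.process([1, 2, 3, 4, 5])        # intermediate result...
print(decide.process(features))                   # ...passed to another center
print(audio.process([440.0, 442.0]))              # a different specialist at work
```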

Medium Granularity

Medium-grained parallelism may be associated with the coordinating centers of the brain where no recognition, interpretation or volition processes occur. What does occur? Preprocessed data, perhaps in a raw form, is passed through to the other centers of the brain that need to process the data to recognize, interpret or act on the information.

Many commercial medium-grained parallel computers provide a tremendous amount of flexibility for users to configure their processors and memory to suit different types of processes and models. By adopting a user-configurable model, these computers implicitly provide a vehicle for implementing “specialists” for processing different types of data. While one processor or set of processors in a shared or distributed memory computer implements one neural algorithm, say for image processing, another set of processors can be dedicated to symbolic processing. Such a model is supported by both physiological/anatomical data and psychological findings, in that ANS of the currently prevailing model can be implemented on the same machine as other models of neural processing.

The power of medium-grained parallel machines enables them to simulate complex neural processes on one or more processors while at the same time performing essential mundane tasks such as user interaction and preprocessing of complex input. A medium-grained parallel machine thus provides the hardware component for our ideal expert system architecture, completing the model.

Once we have a Knowledge Representation (KR) scheme (see my posts “Distributed KR“, “State of the Art…“, “Ontology“), some good reasoning techniques (“Just in Time“, “Rings of Power“, “Instrumentation“), and a good hardware architecture, we are ready to apply it all to our sample problem domain of machine translation (MT).

With this approach, a distributed processor computer with many processors may be able to execute many different specialists in parallel. While one bank of processors, say 9 through 1c, performs visual processing, another set, say 1d through 22, processes audio input. Yet another section operates on symbolic rather than perceptual problems. This is how connectionist networks can operate concurrently with symbolic ones.
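A minimal sketch of that arrangement using Python’s standard concurrent.futures module: separate worker processes are dedicated to visual, audio and symbolic specialists, which run side by side. The worker bodies are placeholders, not real perception or reasoning code.

```python
# Sketch: dedicating separate worker processes to different specialists,
# so perceptual and symbolic work run side by side.
from concurrent.futures import ProcessPoolExecutor

def visual_specialist(pixels):
    return sum(pixels)                   # stand-in for image processing

def audio_specialist(samples):
    return max(samples)                  # stand-in for audio processing

def symbolic_specialist(facts):
    return [f for f in facts if f.endswith("?")]   # stand-in for reasoning

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=3) as pool:
        vis = pool.submit(visual_specialist, [1, 2, 3])
        aud = pool.submit(audio_specialist, [0.1, 0.9, 0.4])
        sym = pool.submit(symbolic_specialist, ["is it raining?", "a fact"])
        print(vis.result(), aud.result(), sym.result())
```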

Fine Granularity and Microservices

Artificial Neural Systems or Neural Networks implement knowledge processing at a very fine granularity. Fine-grained parallelism may be associated with neurons and their interconnections, which process individual stimuli, or components of a stimulus, and collect results. There are computers that implement this fine granularity in hardware, such as the CM-5 (Thinking Machines). From a software, programming or enterprise computing perspective, one may think of microservices as fine-grained computing models, and in some cases they may be. The definition is a bit murky, and some technologists think of services and microservices as essentially the same. In a service-oriented architecture, instead of mega apps that do everything you could possibly need in one massive program, you use JSON, RESTful APIs or other models for exchanging information and performing functions in a decoupled way. This may seem the opposite of the brain, which appears to be 100 billion tightly interconnected processing elements. But considering how the brain is divided into specialized processing centers, the microservices model of specialization is certainly more neuromorphic than a mega-application.
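For a flavor of that fine-grained, decoupled style, here is a minimal microservice sketch: one narrowly specialized function exposed over a RESTful JSON interface so other services can stay loosely coupled. Flask, the route name and the tokenizing function are illustrative assumptions, not a prescription.

```python
# Minimal microservice sketch: one specialized function behind a
# RESTful JSON interface. Route name and behaviour are illustrative.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/tokenize", methods=["POST"])
def tokenize():
    text = request.get_json().get("text", "")
    return jsonify({"tokens": text.split()})

if __name__ == "__main__":
    app.run(port=5000)
```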

I’ll pursue this further in upcoming posts.
