
06 May Impulse Waves in Layers

(Figure: a layered neural network built with Neuroph, from Sourceforge)

Layered Model

Just as the brain has areas with three to six distinct layers, a typical artificial neural system (ANS) also has several layers. The example at right shows a network with three layers that illustrates a neural network's distributed architecture. The uniform circles connected by lines symbolize the state of an ANS at rest. Most neural networks return to a state of rest between inputs. Thus, every processing element (PE) in the system continually alternates between a resting state and an excited or inhibited state with each input cycle. The example at right, borrowed from SourceForge, the open-source Java community site, shows how each input node represents one attribute in a complex diagnostic regime for cancer.

This neural network example, and others like it, are available at no cost for experimenting with and learning about neural networks. The spread of excitation and inhibition (E/I) in this type of system is like a wavefront that moves only from the input layer to the output layer. This directional flow is illustrated by the lines that travel from layer to layer.
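To make the layer-to-layer flow concrete, here is a minimal sketch in Python/NumPy of a three-layer feedforward pass. The layer sizes, weights, and sigmoid activation are illustrative assumptions, not details taken from the actual Neuroph example.

```python
import numpy as np

def sigmoid(x):
    """Squashing activation: maps net input to an excitation level in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# Illustrative sizes: 9 input attributes (e.g., diagnostic measurements),
# one hidden layer of 5 PEs, and a single output PE.
n_in, n_hidden, n_out = 9, 5, 1

W1 = rng.normal(scale=0.5, size=(n_hidden, n_in))   # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(n_out, n_hidden))  # hidden -> output weights

def forward(x):
    """Propagate one input 'wavefront' from the input layer to the output layer."""
    h = sigmoid(W1 @ x)   # hidden PEs become excited or inhibited
    y = sigmoid(W2 @ h)   # the wave reaches the output layer
    return y

x = rng.random(n_in)      # one input pattern; the network rests between inputs
print(forward(x))
```

Each call to forward() plays out one wavefront: the input excites the hidden PEs, which in turn excite or inhibit the output, with no signal flowing backward.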

Understanding Context Cross-Reference
Click on these Links to other posts and glossary/bibliography references

  

Section 3 #19


Table of Context

Waves

Specialized parallel computer hardware designs can implement wavefront array-type processing. This model, again, has proven extremely useful for application domains with inputs of fixed and regular duration that can be captured within the regular cycle time of the clock that governs computer processing. Signal processing for radio, television, and cellular communication is an example of an application domain where this kind of array and cycle is useful. Successful image-processing applications have also been demonstrated on massively parallel systems.
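As a rough illustration of the idea, the toy simulation below steps a matrix multiplication across a grid of processing elements under a single global clock, so partial results advance diagonally like a wavefront. It is a sketch of systolic/wavefront-style scheduling, not a model of any particular hardware design.

```python
import numpy as np

def systolic_matmul(A, B):
    """Toy clock-driven simulation of an output-stationary systolic array.

    PE (i, j) accumulates C[i, j]. A's rows enter from the left and B's
    columns from the top, each skewed so operands meet on the right tick.
    """
    m, k = A.shape
    k2, n = B.shape
    assert k == k2
    C = np.zeros((m, n))
    # One global clock governs every PE, as in a hardware wavefront array.
    for t in range(m + n + k - 2):
        for i in range(m):
            for j in range(n):
                step = t - i - j          # which operand pair reaches PE(i, j) now
                if 0 <= step < k:
                    C[i, j] += A[i, step] * B[step, j]
    return C

A = np.arange(6.0).reshape(2, 3)
B = np.arange(12.0).reshape(3, 4)
assert np.allclose(systolic_matmul(A, B), A @ B)
print(systolic_matmul(A, B))
```

Note how the active PEs at each tick lie on a diagonal (i + j + step = t): that diagonal is the wavefront, and its regular advance is what makes fixed-duration, clock-synchronized inputs such a good fit for this hardware.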

Hidden Layers

A model proposed by Elman (1988) overcomes many of the difficulties posed by strictly spatial simulation of temporal processing. Elman proposes a network with "short-term memory." He achieves this by adding a layer of "context" units to a standard network; these retain the hidden-layer activations produced by the most recent pass of data and feed them back through the system along with the next input.
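Here is a minimal Python/NumPy sketch of this feedback loop as I read Elman's design: context units hold a copy of the hidden layer's previous activations and are combined with the next input. The sizes, weights, and tanh activation are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

n_in, n_hidden = 4, 6

W_in  = rng.normal(scale=0.5, size=(n_hidden, n_in))      # input -> hidden
W_ctx = rng.normal(scale=0.5, size=(n_hidden, n_hidden))  # context -> hidden

def run_sequence(inputs):
    """Feed a sequence through an Elman-style network with short-term memory."""
    context = np.zeros(n_hidden)          # resting state before the first input
    for x in inputs:
        # The hidden layer sees the current input plus the previous activations.
        hidden = np.tanh(W_in @ x + W_ctx @ context)
        context = hidden.copy()           # short-term memory for the next step
        yield hidden

sequence = [rng.random(n_in) for _ in range(3)]
for h in run_sequence(sequence):
    print(h)
```

The copy step is the whole trick: the network's response to each input depends on what it just saw, which is what lets a spatial architecture handle temporal structure.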

The success of Elman's model shows the importance of memory in a neural system. Unfortunately, it lacks the power to enable an ANS to perform a complete cognitive task such as language interpretation. Adding layers to a connectionist model improves its ability to handle the kinds of sensory (two-dimensional) problems we have discussed, but connectionist models have not been successfully generalized to accommodate more complex tasks.

One reason current systems cannot migrate beyond the two-dimensional domains of image and audio data is that some of the model's underlying assumptions are not expressive enough to capture the complex interactions of real-world data. The model must be revised to reflect the physiological phenomena associated with more complex cognitive activities.

In some of the literature, and in other posts on this blog, the hidden layer is called a correlating layer.

(Figure: a Fukushima-style network)

Fukushima's Model

We briefly describe Fukushima's model in Section 4. Here is an illustration to compare with the standard connectionist model.

Feedback is this model's most important characteristic. It can serve as a model for several brain processes that contribute to cognition, such as recognition and interpretation.
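For comparison with the layered model above, here is a highly simplified sketch of the S-cell/C-cell pairing at the core of Fukushima's Neocognitron: S-cells respond to a local feature template, and C-cells pool nearby S-cell responses to tolerate small shifts. The full model stacks many such pairs, and its later variants add the feedback paths discussed above; neither is shown here, and the template and sizes are illustrative assumptions.

```python
import numpy as np

def s_layer(image, template):
    """S-cells: each responds when its local patch matches a feature template."""
    th, tw = template.shape
    h, w = image.shape
    out = np.zeros((h - th + 1, w - tw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i + th, j:j + tw]
            out[i, j] = max(0.0, np.sum(patch * template))  # rectified match
    return out

def c_layer(s_out, pool=2):
    """C-cells: pool nearby S-cell responses, giving tolerance to small shifts."""
    h, w = s_out.shape
    out = np.zeros((h // pool, w // pool))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = s_out[i*pool:(i+1)*pool, j*pool:(j+1)*pool].max()
    return out

rng = np.random.default_rng(2)
image = rng.random((8, 8))
template = np.array([[ 1.0, -1.0],
                     [ 1.0, -1.0]])   # a vertical-edge feature detector
print(c_layer(s_layer(image, template)))
```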

I have found in my work with neural models that Fukushima's model is one of the best for extending beyond the limits of two-dimensional problems. At some level, it may always be necessary to flatten any multi-dimensional problem into connected sets of two dimensions, but I believe there are ways of building multi-dimensionality into the processing model. I'll discuss this further in the posts and sections to follow.

