

Chaos About Us

Chaos is all about us. I know that for certain each time I look into my kids’ rooms. When I recall my own youth, however, it occurs to me that I had a reason for the way I organized my life. It seemed meaningful to me, and although I recall how difficult it was to explain to Mom, I knew the rhyme and reason at the time. The mind is an amazing instrument in its ability to sort through the natural messiness of life in the modern world, successfully navigate complex situations, and come out with workable solutions. The mind itself is a mess of wires and cells and sparks flying around who knows where. Order arises out of the chaos of the mind, gives us understanding, and facilitates the development of those workable solutions.

I am seeking a model for automating the processes of the mind to make it easier for humans to interact with computers using natural language. The laws of natural language and meaning seem to exhibit chaotic properties as well. Both the abstract and physical principles behind interactions in this universe can be very complex to define and duplicate. As soon as people grasp a new law of physics, a whole lot of chaos dissipates into the past and resolves into a neat set of explanations, often accompanied by mathematical formulae that are relatively sound, provable and defensible (if not always empirically pragmatic).

The principles of symmetry and self-similarity are physical laws whose meanings are becoming more apparent all the time. Fractals are images created by mathematical processes based on self-similarity and other physical laws that demonstrate the same kinds of interactions that produce the shapes of clouds and the eddies of water in a stream. Using computational self-similarity with quasi-random input as a central component, some simple formulae can generate a nearly infinite variety of images.
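
To show how little machinery that takes, here is a rough Python sketch of my own (the maps and coefficients are the standard Barnsley fern, used purely as an illustration): four self-similar affine maps, picked quasi-randomly at each step, trace out an endlessly varied, recognizably organic shape.

```python
import random

# Barnsley fern: four affine maps, x' = a*x + b*y + e, y' = c*x + d*y + f.
# Each map is a shrunken copy of the whole picture (self-similarity).
MAPS = [
    (0.00,  0.00,  0.00, 0.16, 0.00, 0.00),   # stem
    (0.85,  0.04, -0.04, 0.85, 0.00, 1.60),   # successively smaller leaflets
    (0.20, -0.26,  0.23, 0.22, 0.00, 1.60),   # large left leaflet
    (-0.15, 0.28,  0.26, 0.24, 0.00, 0.44),   # large right leaflet
]
WEIGHTS = [0.01, 0.85, 0.07, 0.07]            # quasi-random choice of map

def fern_points(n=50_000, seed=0):
    """Iterate a randomly chosen map n times, collecting the visited points."""
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    points = []
    for _ in range(n):
        a, b, c, d, e, f = rng.choices(MAPS, weights=WEIGHTS, k=1)[0]
        x, y = a * x + b * y + e, c * x + d * y + f
        points.append((x, y))
    return points

if __name__ == "__main__":
    print(fern_points(5))   # the first few points of the orbit
```

Plot the returned points with any charting tool and the fern appears; nudge a single coefficient and the character of the whole image changes.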

Neural networks often seem chaotic. The formulae are consistent from one part to the next, but the sheer size of the network and the introduction of random elements make them seem chaotic. Yet, we often get exactly the results we need from neural networks. Why?

Fractals

In the relatively new but thriving science of chaos, fractal modeling has become a favorite example of many principles. Fractals are the visual representations of mathematical formulae that describe symmetry, self-similarity and other principles that scientists use to help unravel seemingly chaotic phenomena.

Fractals are the natural, visible result of applying the principles of symmetry, or invariance against change. Fractals appear everywhere in nature, and now that we are beginning to understand them, we can even generate them on computers. Manfred Schroeder tells us this about fractals: “The unifying concept underlying fractals, chaos and power laws is self-similarity…and most fundamental laws of physics, such as Newton’s law of gravitation, have an exact mirror symmetry” (Schroeder, 1991).

Some of the principles that fractals model include geometric symmetries such as mirror symmetry, rotation and transposition symmetry, and self-affinity and self-similarity. Applying one or more of these principles to a graphical model typically produces results that are both chaotic and beautiful to the eye.

When we examine some of the more robust AI approaches, such as connectionism and conceptual schemata, the apparent chaos of their processes may be resolved by applying the same formulae that are used to draw fractals. Here is an example of how applying self-similarity can result in beautiful form (a code sketch follows the list):

Begin with a straight line;

modify a segment of the line using a symmetrical formula, a curve;

repeat the curve exactly, and what was dull becomes appealing;

repeat again for more variety;

and occasionally vary the curve slightly for infinite possibilities.
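
Those five steps translate almost directly into code. The sketch below is my own rough, Koch-style rendering of the recipe: each straight segment is replaced by a self-similar bump, the replacement is repeated recursively, and a small jitter parameter supplies the occasional slight variation from the last step.

```python
import math
import random

def koch_segment(p1, p2, depth, jitter=0.0, rng=None):
    """Recursively replace a straight segment with a self-similar bump.

    jitter (in degrees) occasionally perturbs the bump angle, giving the
    slight variation mentioned in the last step above.
    """
    if depth == 0:
        return [p1, p2]
    rng = rng or random.Random(0)
    (x1, y1), (x2, y2) = p1, p2
    dx, dy = (x2 - x1) / 3.0, (y2 - y1) / 3.0
    a = (x1 + dx, y1 + dy)                  # one third of the way along
    b = (x1 + 2 * dx, y1 + 2 * dy)          # two thirds of the way along
    angle = math.radians(60 + rng.uniform(-jitter, jitter))
    # Apex of the bump: the middle third rotated by roughly 60 degrees.
    apex = (a[0] + dx * math.cos(angle) - dy * math.sin(angle),
            a[1] + dx * math.sin(angle) + dy * math.cos(angle))
    points = []
    for q1, q2 in [(p1, a), (a, apex), (apex, b), (b, p2)]:
        points.extend(koch_segment(q1, q2, depth - 1, jitter, rng)[:-1])
    points.append(p2)
    return points

if __name__ == "__main__":
    curve = koch_segment((0.0, 0.0), (1.0, 0.0), depth=4, jitter=5.0)
    print(len(curve), "points")             # 257 points from one straight line
```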

Mixed-up Formalisms

Artificial neural networks have been designed to mimic the structure and functionality of the human brain. They are often referred to as neuromorphic computer designs. Because connections between neurons in the brain transmit variable levels of action potential, neural net functionality centers on the weights of links between elements. This resembles the use of multi-valued or fuzzy logic in any kind of computer processing. When this kind of functional approach is applied to expert systems, it provides an opportunity for graceful degradation, a strength of neural networks.
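
To make graceful degradation concrete, here is a toy sketch, assuming nothing fancier than a single weighted-sum neuron with a sigmoid squashing function: zeroing a growing fraction of the link weights shifts the output a little at a time instead of breaking the computation outright.

```python
import math
import random

def neuron(inputs, weights):
    """One artificial neuron: weighted sum of inputs squashed by a sigmoid."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1.0 / (1.0 + math.exp(-total))

def degrade(weights, fraction, rng):
    """Simulate damage by zeroing a random fraction of the link weights."""
    damaged = list(weights)
    for i in rng.sample(range(len(damaged)), int(fraction * len(damaged))):
        damaged[i] = 0.0
    return damaged

if __name__ == "__main__":
    rng = random.Random(1)
    inputs = [rng.uniform(0.0, 1.0) for _ in range(100)]
    weights = [rng.uniform(-0.1, 0.1) for _ in range(100)]
    intact = neuron(inputs, weights)
    for fraction in (0.1, 0.3, 0.5):
        damaged = neuron(inputs, degrade(weights, fraction, rng))
        print(f"{int(fraction * 100)}% of links cut: "
              f"{damaged:.3f} (intact: {intact:.3f})")
```

That is the contrast with a brittle chain of rules, where losing a single link can block the whole inference.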

The structure of neural networks is distributed and massively parallel, while most expert systems run on uniprocessor or coarse-grained parallel computers. Still, the advantages of the parallelism of neural networks can be applied to expert systems to some extent by distributing the knowledge. Many techniques are available for distributing expert-system knowledge: rule-based, frame-based and network-based approaches are all forms of distributing functional and world knowledge. As we consider the strengths and applications of various processing methods and frameworks, we will see how each of these formalisms can be used as tools in our sentient computer manufacturing shop.
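
As a first, deliberately tiny illustration of two of those formalisms (the frames, slots and rule below are invented for the example), here is how frame-based inheritance and a forward-chaining rule might look side by side:

```python
# Frame-based knowledge: each concept is a frame of slot/value pairs,
# with an "isa" slot linking it to a more general frame.
frames = {
    "canary": {"isa": "bird", "color": "yellow"},
    "bird":   {"isa": "animal", "can_fly": True},
}

def inherit(name, slot):
    """Look up a slot, walking up the isa chain (simple frame inheritance)."""
    frame = frames.get(name)
    while frame is not None:
        if slot in frame:
            return frame[slot]
        frame = frames.get(frame.get("isa"))
    return None

# Rule-based knowledge: condition/conclusion pairs over a set of facts.
rules = [
    (lambda facts: "bird" in facts and "injured" not in facts, "can_fly"),
]

def forward_chain(facts):
    """Fire every rule whose condition holds, adding its conclusion."""
    derived = set(facts)
    for condition, conclusion in rules:
        if condition(derived):
            derived.add(conclusion)
    return derived

if __name__ == "__main__":
    print(inherit("canary", "can_fly"))        # True, inherited from "bird"
    print(sorted(forward_chain({"bird"})))     # ['bird', 'can_fly']
```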

Because neural models are so fundamental to cybernetics and neuromorphic system design, and because they offer tremendous advantages in learning and adaptation, I will prepare some posts to take us on a tour of the fertile landscape of artificial neural systems.

As we explore different formalisms, we look at them from different perspectives:

  1. computational theory
  2. representational level
  3. algorithmic level
  4. learning techniques

Not all formalisms get complete coverage on all four levels, but I will attempt to cover the essential points.
