# 08 Aug The Fourth Dimension

To everything, turn, turn, turn, there is a season… Time is a fundamental and omnipresent element of context. It goes intrinsically with space, so much so that we sometimes hear of a “time-space continuum” in which all things occur. Space and time are relevant to brain processes: electrical potential moves through physical pathways and brain areas, effectively brain geography (space), in sequential waves that travel at a relatively constant speed because most excitatory spikes have roughly the same duration (time). Computing activities also take place in space and time, and as we model the ideal approaches for developing more intelligent systems, we need to understand and accommodate time and its interplay with everything else, physical and abstract.

### Time: The Fourth Dimension

The corollaries of time and space in both the cognitive and computational domains are the four measured dimensions. The first three dimensions are length, width and depth: architectural tools used to define the measure and structure of all physical objects, including inputs and outputs in the computational domain. Time, the fourth dimension, is what the programmer uses to define the progress of program execution, the sequence of instructions, and points of rendezvous for interacting processes.


Linear sequences of instructions in a computer program are like a manufacturing assembly line. An assembly line has a beginning and an end, which are both physical spaces, and each production cycle has a beginning and an end that are recorded as times. The junction of physical space and time is measurable and critical to success in manufacturing. Efficiency is so important in manufacturing that there are complex algorithms used to analyze each point in the line to determine the most efficient way to make products. A “Line Balancing” algorithm analyzes the interactions of products and parts in time and space to assign tasks to workstations in such a way that the sub process at each workstation requires approximately equal duration to complete.
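The bottom-up balancing idea can be sketched as a simple greedy heuristic: pack tasks into workstations so that no station exceeds a target cycle time. This is a toy illustration that ignores the precedence constraints a real line-balancing algorithm must honor, and the task names, durations, and cycle time are invented for the example.

```python
def balance_line(task_durations, cycle_time):
    """First-fit-decreasing sketch of line balancing: assign tasks to
    workstations so that no station's total duration exceeds the
    target cycle time. Real algorithms also honor task precedence;
    this illustration deliberately omits that."""
    stations = []   # each station is a list of task names
    loads = []      # running total duration per station
    for task, duration in sorted(task_durations.items(),
                                 key=lambda kv: kv[1], reverse=True):
        # place the task in the first station that still has room
        for i, load in enumerate(loads):
            if load + duration <= cycle_time:
                stations[i].append(task)
                loads[i] += duration
                break
        else:
            stations.append([task])   # open a new workstation
            loads.append(duration)
    return stations, loads

# hypothetical assembly tasks with durations in minutes
tasks = {"drill": 4, "weld": 6, "paint": 3, "inspect": 2, "pack": 5}
stations, loads = balance_line(tasks, cycle_time=8)
```

With these numbers the heuristic ends up with station loads close to the 8-minute cycle time, which is the "approximately equal duration" goal the text describes.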

A different, top-down flavor of “Line Balancing” looks at the overall process and tries to divide the work equally among workstations, minimizing the number of workers and workstations required for a production line while keeping the flow constant and steady. Today I was speaking with Dr. Maria Gini at the University of Minnesota. Some of her most exciting current work is in coordinating multiple autonomous agents, such as robots, that don’t necessarily share a common data master. An important time factor in distributed agents of this type is that they develop or accumulate meaningful results at different times, and may not be able to communicate over long distances.

Consider the example of searching ocean depths for a missing aircraft. If we assign 100 small autonomous submersible robots to scout a large area of sea floor, with instructions to remain within communicating distance of their two nearest neighbors, they only need to transmit results a short distance, yet the divide-and-conquer approach is likely to yield high efficiency in the search. Real-time communication is an advantage in this case. Even if they lack the range to communicate in real time with other autonomous agents, they may be able to deposit fragments of results along the way that could later be picked up and collated into a larger result set.
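The divide-and-conquer assignment can be sketched by carving the search area into adjacent strips, one per agent, so that each robot's neighbors are physically nearby. The grid size and agent count here are invented for illustration; a real deployment would partition around currents, terrain, and actual communication range.

```python
def partition_grid(width, height, n_agents):
    """Toy divide-and-conquer partition: split a seafloor grid into
    contiguous column strips, one per agent, so agents searching
    adjacent strips stay within short-range communication of their
    nearest neighbors. Dimensions are illustrative assumptions."""
    strips = [[] for _ in range(n_agents)]
    for x in range(width):
        for y in range(height):
            strips[x * n_agents // width].append((x, y))
    return strips

# a hypothetical 100 x 40 cell search area split among 10 agents
strips = partition_grid(width=100, height=40, n_agents=10)
```

Each agent receives an equal, contiguous share of the search area, so the overall search time is roughly the single-agent time divided by the number of agents.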

The modalities of time include frames such as {before}, {during} and {after}, as well as {up to and including} and {from start to finish}. Adding space and time together has important and meaningful applications, as in a bus or train schedule where space is indicated by named stops and times are shown for expected arrivals at those stops. The frames can be arbitrarily complex and ambiguous, often requiring prior knowledge or exformation, such as {at the Starbucks Downtown at 3:30 this Friday} or {long ago in a galaxy far, far away}. “Before,” “during” and “after” are all very ambiguous concepts to implement, not just from a granularity perspective (are we measuring nanoseconds or millennia?) but from an inclusion perspective – that is, defining the precise times or triggering events that signal the exact beginning and end of “during”.
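One minimal way to model these temporal frames in code is as intervals with explicit comparison predicates. Notably, whether {during} includes its endpoints is exactly the kind of inclusion decision the paragraph describes; the inclusive choice below is an illustrative assumption, not the only correct one.

```python
from dataclasses import dataclass

@dataclass
class Interval:
    """A temporal frame with explicit start and end points."""
    start: float
    end: float

def before(a, b):
    return a.end < b.start          # a finishes before b starts

def after(a, b):
    return a.start > b.end          # a starts after b finishes

def during(a, b):
    # Inclusion decision: here the endpoints count as "during".
    return b.start <= a.start and a.end <= b.end

lunch = Interval(12, 13)
workday = Interval(9, 17)
```

With these definitions, `during(lunch, workday)` holds, while two intervals that merely overlap satisfy none of the three predicates, which is one concrete way the ambiguity of "before, during and after" surfaces in an implementation.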

Computer programs and cognition are not limited by space in the same way as objects and geography (length, width and depth), but the amount of storage space on disk and in RAM does affect program performance. Thus balancing in programs is important for efficiency as well, and top-down and bottom-up analyses are often justified in optimizing program and data flow. Many AI approaches in the past have had time/space challenges. Garbage collection is a classic example: it looks through memory (short-term RAM or long-term disk) for temporary data created by the program that occupies space but is no longer needed. If too much temporary space is used, or if the means of determining whether it is still needed are too complex, the software can run out of usable space before it can complete its tasks. Some programming languages and approaches create such garbage-collection bottlenecks as to render the process unusable.
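The cost of deciding what is "no longer needed" can be demonstrated directly: two objects that reference each other are unreachable from the program, yet keep each other alive until a collector spends time scanning for cycles. A minimal sketch using Python's standard `gc` module:

```python
import gc

class Node:
    """A node that can point at another node."""
    def __init__(self):
        self.ref = None

gc.collect()                 # start from a clean slate
a, b = Node(), Node()
a.ref, b.ref = b, a          # mutual references form a cycle
del a, b                     # no outside references remain,
                             # but reference counts never reach zero
collected = gc.collect()     # the cycle detector must find the pair
```

The cycle collector reclaims the two nodes, but only by doing the extra scanning work the text describes; a program that creates such garbage faster than the collector can analyze it will exhaust its usable space.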

Artificial Neural Systems (ANS) occupy a constant amount of space and run in consistent time cycles, thereby alleviating the time/space challenges of other systems that proliferate data in temporary storage.

Prior to the advent of parallel processors, the single path of the linear sequence of instructions and routines constituted the sole temporal mechanism: code ran one step at a time, each step took as long as it took, then the program was done. Branching, looping and nesting of procedures constituted the main spatial element of code, along with the space consumed by the data. As parallelism progresses from coarse to fine granularity, the interaction of temporal and spatial constraints on data and processes increases in importance, and in complexity, by orders of magnitude. Because of this complexity, the strategy we choose for fully automated language understanding and translation (the main goal of my work) will need to manage the burden of temporal constraints: the time dependencies that limit the interaction of concurrent processes.
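A "point of rendezvous" between interacting processes can be illustrated with a barrier: each concurrent worker computes its share, then waits until every worker has arrived before any of them proceeds. The worker task below is a stand-in for illustration.

```python
import threading

results = [None] * 4
barrier = threading.Barrier(4)   # rendezvous point for 4 workers

def worker(i):
    results[i] = i * i           # stand-in for the worker's real task
    barrier.wait()               # temporal constraint: no worker
                                 # continues until all four arrive

threads = [threading.Thread(target=worker, args=(i,))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The barrier is exactly the kind of time dependency the paragraph names: it guarantees correctness of the combined result, at the price of making the fastest workers wait for the slowest.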

Considering the varied nature of sensory input, it is conceivable that the brain also economizes by using residual energy as a feedback mechanism to reduce the burden of temporal constraints. What do you do when you are near the end of your rope? Is there such a thing as cognitive exhaustion? Do you ever go into autopilot and do things that are perfectly reasonable but not what you intended? If so, it could have been due to constraint overload.

When MIPUS, our trustworthy personal robot, begins to wear out, he uses several devices (in addition to his solar cells) to conserve and recycle energy. One thing he does is stop complex reasoning algorithms (thinking), and begin operating on rote skills. It is much easier for him to retrieve an old routine he remembers than to decide what to do next. The last time he started to go into autopilot, he considered what onlookers might think if they saw his sleek metallic self plummeting from the bungee jumping tower. “It’s an airplane on autopilot.” Day-dreaming, he inadvertently wore out his batteries and got stuck in the bathroom.

### Perceptual vs. Abstract Knowledge Processes

For visual and, to some degree, auditory systems, in which the input has a fixed, short duration, the popular ANS model is excellent. For cognitive activities such as communication and reasoning which involve streams of associations in context, the lock-step time-bound connectionist model does not account for the processes known to occur in the brain. This inability to process symbolic data has been a weakness of some neural information-processing models. Static weights fail to accurately model the actual process of spreading patterns of activation because the temporal element of residual action potential, and the spatial element of different lengths of neural pathways, can profoundly change the cumulative impulse weights of nodes throughout the system.

Although a scheme incorporating variably weighted links seems accurate in view of the number of variables that can influence the amount of E/I propagated across synaptic links, the nature and significance of the links change depending on temporal factors. Modern ANS predominantly implement a static temporal flow of one input at a time, from one layer to the next. The restrictions of processing one input at a time, with a strictly unidirectional flow and neurodes that can be activated only once by any input, may not suffice for abstract cognitive functions that involve huge numbers of constraints.
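The effect of residual activation that static weights miss can be shown with a toy leaky integrator: the same total input produces different cumulative activation depending on when it arrives. The decay rate here is an arbitrary illustrative choice, not a measured neural constant.

```python
def leaky_activation(inputs, decay=0.5):
    """Toy leaky integrator: each time step's activation is the new
    input plus a decaying residue of earlier activation, so the
    timing of an input changes its cumulative effect. The decay
    rate is an illustrative assumption."""
    activation, trace = 0.0, []
    for x in inputs:
        activation = activation * decay + x
        trace.append(activation)
    return trace

# the same two unit inputs, arriving close together vs far apart
close = leaky_activation([1, 1, 0, 0])
spread = leaky_activation([1, 0, 0, 1])
```

The closely spaced pair peaks at 1.5 because the second input lands on the residue of the first, while the spread pair peaks at only 1.125; a model with static weights and single-shot activation cannot distinguish the two cases.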

Does anybody really know what time it is? Does anybody really care? Where were you at 12:12 PM on December 12th, 2012? The system that can effectively understand your intent, based on your speech, must know its place in the geography and time that characterize the universe in which all meaning finds its nativity.
