
08 Sep Gnostic Learning Model

Hard Disk in Brain

In prior posts in this section, and periodically in other sections of my blog, I have been exploring how humans learn, and how we might replicate those processes in computer software or (less likely) hardware. The context of the learning, or knowledge acquisition, upon which I choose to focus is language learning. While knowledge acquisition is much broader, this is an […]

25 Aug Determinacy in Neural Connections

Neural Net

For many years, researchers thought that it was wrong to assume that there was a cell or set of cells in the brain that stored the memory of Grandma’s face. Though the comparison with computer memory was appealing, it was thought to be too simplistic and incorrect. Now, more researchers in different academic disciplines are assuming […]

18 Aug Modeling Positive and Negative Activation

Blue Neurons

Humans learn from both positive and negative experiences. The electrical flow between neurons can be positive (excitatory), propagating electrical potential along a neural path to create further excitation and a bubbling-up effect, or negative (inhibitory), reducing or stopping the electrical potential flow along a pathway. Remember that a neural pathway is not like a long line, but like […]
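The excitatory/inhibitory distinction above can be sketched as a single model neuron: a minimal illustration, not code from the post, with invented weights and threshold. Positive weights push the unit toward firing; a negative weight suppresses it.

```python
# Minimal sketch of excitatory (positive-weight) and inhibitory
# (negative-weight) inputs combining at one model neuron.
# Weights and threshold are illustrative, not from the post.

def activate(inputs, weights, threshold=1.0):
    """Fire (return 1) only if net excitation reaches the threshold."""
    net = sum(x * w for x, w in zip(inputs, weights))
    return 1 if net >= threshold else 0

# Two excitatory inputs alone are enough to fire the neuron...
fires = activate([1, 1, 0], [0.6, 0.6, -0.8])
# ...but adding an active inhibitory input holds it below threshold.
inhibited = activate([1, 1, 1], [0.6, 0.6, -0.8])
```

With both excitatory inputs active the net input is 1.2 and the unit fires; the inhibitory input pulls it down to 0.4 and stops the flow along that path.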

02 Aug Artificial Time

Time and Space Perception

Time is omnipresent – you can’t get away from it. It is woven into everything we do and say and understand. It is an inextricable element of context. I was just speaking of how the connections in our brain develop, grow and evolve over time. Representing and handling this “temporal” element is fundamental to any […]

31 Jul Modeling Non-Random Synaptic Links

Random Hairdo

I have discussed the different meanings of “random” in “The Random Hamlet” and “That’s so Random!”, in which the mathematical definition presumes there is some not-yet-known law that governs the phenomenon, whereas other definitions suggest that randomness means the phenomenon is not governed by any law. Remember our reference to Rosenblatt’s early contributions in […]

26 Jul Parallel Distributed Pattern Processing

PDP Networks

We have discussed recognition processes in the brain. Connectionism, a fundamentally implicit approach to neural modeling, was championed by the parallel distributed processing (PDP) group. PDP networks use many interconnected processing elements (PEs) that, according to the PDP Group, configure themselves to match input data with “minimum conflict or discrepancy” (Rumelhart & McClelland, 1986, Vol. 2, […]

24 Jul Pattern Classification in Space

Deep Space over the Water

Pattern Classification

Visual patterns can be recognized and classified based on prior knowledge: I see that this hairy animal has four legs and is about the same size as my dog, so I’ll assume it is (or classify it as) a dog. This may not be a correct classification, but it’s more correct than classifying it […]
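The dog example above amounts to classification by similarity to stored prototypes. A hedged sketch, with invented feature values (leg count, size relative to my dog) and class prototypes:

```python
# Sketch of nearest-prototype classification: assign an observation to
# the class whose stored prototype it most resembles. The features and
# prototype values below are invented for illustration.

def classify(features, prototypes):
    """Return the label of the prototype closest to the features."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(prototypes, key=lambda label: distance(features, prototypes[label]))

# Features: (leg count, size relative to my dog).
prototypes = {"dog": (4, 1.0), "bird": (2, 0.2), "horse": (4, 5.0)}
# The hairy four-legged animal about my dog's size classifies as a dog.
label = classify((4, 1.1), prototypes)
```

As the post notes, the answer may be wrong, but it is the least-wrong choice given the prototypes on hand.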

03 Jul Do Yawl do Petri Nets?

Reactive vs Transformational Systems

Where do you draw a line? In geometry, digital theory, language, and time, patterns tend to be linear: they unfold as distinct sequences. The sequences in these domains either contribute to the meaningfulness of the patterns or, in the case of time, are the foundation of the patterns. Any logic that focuses on these sequential patterns is linear logic. Temporal Logic […]

06 May Impulse Waves in Layers

Waves

Layered Model

Just as the brain has areas with three to six distinct layers, a typical artificial neural system (ANS) also has several layers. The example at right shows a network with three layers that illustrates a neural network’s distributed architecture. The uniform circles connected by lines are symbolic of the state of an ANS at […]
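A layered pass like the one described above can be sketched as repeated weighted sums followed by a squashing function; the weights below are invented placeholders, not values from the post's figure.

```python
# Sketch of a forward pass through a small layered network: each layer's
# units sum their weighted inputs, then squash the result. Weight values
# are illustrative only.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(layer_input, weight_matrix):
    """One layer: weighted sum per output unit, then a squashing function."""
    return [sigmoid(sum(w * x for w, x in zip(row, layer_input)))
            for row in weight_matrix]

hidden_w = [[0.5, -0.4], [0.3, 0.8]]   # input layer  -> hidden layer
output_w = [[1.2, -0.6]]               # hidden layer -> output layer
hidden = forward([1.0, 0.5], hidden_w)
output = forward(hidden, output_w)
```

The "distributed" character shows up in how every input value influences every hidden unit, and every hidden unit influences the output.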

05 May Learning from Errors

Error

If at first you don’t succeed, try – try again. We humans are pretty good at learning from our mistakes. In fact, some suggest that whatever doesn’t kill you makes you stronger. Today I’d like to riff on that theme a bit and talk about ways in which machines can implement learning from errors. Error Minimization […]
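The standard machine analogue of "try – try again" is error minimization: nudge a weight in whatever direction shrinks the error, and repeat. A minimal sketch (the data, learning rate, and model y = w·x are invented for illustration):

```python
# Sketch of learning from errors by gradient descent on squared error:
# each pass, measure the signed mistake and correct the weight a little.
# Samples, learning rate, and the linear model are illustrative only.

def train(samples, w=0.0, lr=0.1, epochs=50):
    """Fit y = w * x by repeatedly correcting the prediction error."""
    for _ in range(epochs):
        for x, y in samples:
            error = w * x - y          # signed mistake on this sample
            w -= lr * error * x        # step downhill on squared error
    return w

# Samples drawn from y = 2x; repeated correction drives w toward 2.
w = train([(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)])
```

Each update is a small act of learning from a mistake; over many tries the mistakes shrink toward zero.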