21 Jun The Coming Revolution
Another Revolution in Computing – Knowledge Processing
Where cognition and computation converge… are we on the brink of the coming revolution? As James Bailey puts it, “The reason today’s electronic computers seem benign is that the true electronic revolution has not happened yet.” Bailey compares our current phase of computerization to the stage of history “when muscle tasks were moving from humans to animals” (Bailey, 1996). Bailey reminds us of Henry David Thoreau’s commentaries from Walden Pond on how the industrial revolution reshaped that era’s world. He suggests that today’s “tasks of the mind” are comparable to yesterday’s “muscle tasks,” asserting that we cannot “assume that the former tasks of the mind will remain unchanged after they are reassigned to electronic circuits” (ibid.).
A key question to consider is: Which tasks of the mind can be successfully and safely delegated to artificial devices, machines, gadgets, and appliances? The key question behind the Understanding Context blog is narrower: How can we build automated systems that accurately understand and translate human communication, and what is the role of context in understanding? On the way there, we will briefly touch on other tasks of the mind, but I’ll try not to stray far from the main point.
|Understanding Context Cross-Reference
|Click on these Links to other posts and glossary/bibliography references
|Allusion of Clarity
|Varieties of Nerve Cells
|Bailey 1996 McCreary 2014 Muehlhauser 2013
|Walden Pond Scoble 2014
|Singh 1966 Hawkins 2004
Those of us involved in cybernetics already know a truth about automating tasks of the mind: we are only part way there. There are many such tasks we can automate persuasively today, and many others we can’t even understand, much less automate. If a look at some of the provocative titles in the Understanding Context bibliography doesn’t convince you that the revolution has begun, hopefully some of my blog posts will. As the cyber-social complexion of our culture evolves, it will change how we live our lives and perform future “tasks of the mind.”
The current social collaboration revolution, transforming human interaction over the internet and mobile devices, highlights the progress we have made in automating tasks that were once the exclusive province of face-to-face or mind-to-mind communication, as well as the importance of “context” in this progress. Thoreau, who found such peace in solitude, may be turning in his grave, or he may be following tweets, Facebook posts, and “likes” to learn how readers receive his published works.
Correlations Between Brain and Computer
In 1943, McCulloch and Pitts proposed that human neurons were like logic gates in a computer, and proceeded to model thinking machines. This approach has not yet yielded the fruit of understanding (Hawkins 2004 p.15).
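To make the McCulloch-Pitts idea concrete, here is a minimal Python sketch (my illustration, not from the original post): a neuron is treated as a threshold unit over binary inputs, and choosing different thresholds turns the same unit into different logic gates.

```python
def mp_neuron(inputs, threshold):
    """McCulloch-Pitts neuron: output 1 if enough binary inputs are active."""
    return 1 if sum(inputs) >= threshold else 0

# Different thresholds turn the same unit into different logic gates:
def AND(a, b): return mp_neuron([a, b], threshold=2)  # both inputs must fire
def OR(a, b):  return mp_neuron([a, b], threshold=1)  # either input suffices

print(AND(1, 1), AND(1, 0))  # 1 0
print(OR(0, 1), OR(0, 0))    # 1 0
```

The elegance of the model is also its limitation: real neurons are not clean binary gates, which is part of why, as Hawkins notes, this approach alone has not yielded understanding.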
Of all the correlations that have been drawn between the human brain and modern parallel computers, one is particularly relevant: we only make use of a fraction of either’s capabilities. Our cranial apparatus is so complex that, by the popular (and much-disputed) estimate, tapping even ten percent of our brain power counts as good. The machines we have invented, and understand quite well, should do much better. It is time to remedy their chronic underutilization. The Understanding Context blog describes and suggests ways to tap the capabilities of computers, including parallel computers and parallel techniques. The research I have posted and will post describes both biological and mechanical information-processing devices or machines. Along the way, you will be challenged to rethink old assumptions about cognition and computing as you are introduced to computer modeling based on the structure and processes of the greatest computing machine ever known: the human brain.
Association and Specialization
Here are some of the parallels between the brain and the computer that this blog explores and develops:
• The brain divides its big tasks between specialized areas; computers can do the same thing.
• Brain processes take place in billions of distinct neurons; these processes can be imitated using parallel or distributed computer systems.
• Reasoning processes involve emotional input; cybernetic models should provide for gut-level functionality.
• Learning and memory rely on explicit associations between things; computers can use associative structures and processes. Models can be built on an associative information theory that can perform many brain tasks, including automatically interpreting language.
• Language relies on association at multiple levels, each of which must be understood if the computer is to interpret accurately.
Object-oriented techniques, and extensions of these techniques in a distributed, object-centered model, help implement associative models in modern computers. Truly intelligent machines must be able to do all of these things.
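As a toy illustration of the associative structures mentioned above (my sketch; the class and method names are hypothetical, not from the blog), an associative store can link concepts with weighted, bidirectional connections and recall neighbors by association strength, much as repeated co-occurrence strengthens links in memory:

```python
from collections import defaultdict

class AssociativeMemory:
    """Toy associative store: concepts linked by weighted, bidirectional associations."""
    def __init__(self):
        self.links = defaultdict(dict)

    def associate(self, a, b, weight=1.0):
        # Strengthen the link in both directions, as repeated co-occurrence would.
        self.links[a][b] = self.links[a].get(b, 0.0) + weight
        self.links[b][a] = self.links[b].get(a, 0.0) + weight

    def recall(self, concept):
        # Return associated concepts, strongest association first.
        return sorted(self.links[concept], key=self.links[concept].get, reverse=True)

mem = AssociativeMemory()
mem.associate("bank", "river")
mem.associate("bank", "money", weight=2.0)
print(mem.recall("bank"))  # ['money', 'river']
```

Even this trivial structure hints at how context could work: which association wins for “bank” depends on which neighboring concepts are active.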
I have undertaken to look at correlations between human and mechanical brains for a simple reason: my experience in solving complex automation problems for companies and people suggests that we can do much more. “Some business problems require new thinking and technology to provide the best solution” (McCreary 2014 p.8). New thinking, and new combinations of existing approaches, are my focus in the later sections of Understanding Context.
Controversy or Utopian Dream?
Beyond the controversy over whether computers will ever be able to think like the human brain, some technicians, designers, and information scientists argue that the brain is a poor model for computing. This may be because computers are digital (internally designed to process only two values: 1 and 0) and the brain is analog (able to process a continuum of values — see posts tied to Biological Networking). Many other scientists choose to look to biology for clues to help them improve the quality of information-processing tools. Since the human brain is the most advanced and complex information-processing system known, they (and I) presume that more accurate and comprehensive models of the brain and its exact data-processing mechanisms will help us design and implement better computers. Unfortunately, because the mechanisms the brain uses for data storage and processing are not fully understood, it is difficult to say how close current models come to that goal.
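The digital/analog contrast can be shown in a few lines (my illustration, with a sigmoid standing in for graded analog response): a digital unit collapses its input to one of two values, while an analog unit answers with a value anywhere on a continuum.

```python
import math

def digital_neuron(x):
    """Digital: collapses any input to one of exactly two values."""
    return 1 if x >= 0.5 else 0

def analog_neuron(x):
    """Analog: responds with a graded value on a continuum (sigmoid used here)."""
    return 1 / (1 + math.exp(-10 * (x - 0.5)))

for x in (0.2, 0.45, 0.55, 0.8):
    print(x, digital_neuron(x), round(analog_neuron(x), 3))
```

Inputs of 0.45 and 0.55 look identical to nothing-or-all digital logic once thresholded apart, but the analog unit preserves how close each one was to the boundary — information a brain-inspired model may need.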
Many, like Luke Muehlhauser, foresee super-intelligent machines that far surpass human reasoning capabilities. But my intent is to focus on the present and what can be done to solve real problems for individuals and companies, especially the problem of communication: enabling machines to understand what you mean when you say anything. To achieve that, the machines will need not only intelligence but the ability to adapt when they’re wrong or ignorant: self-learning, self-healing, self-aware. Today we only give lip service to “self-healing” and automatic learning systems, while the brain does these things in your sleep. We can only hope to pull out some good ideas and make them part of our model for intelligent, and maybe someday, sentient computing.
|Click below to look in each Understanding Context section
|Perception and Cognition
|Language and Dialog
|Apps and Processes
|The End of Code