12 Nov
Context Powers Backward Chaining Logic
A popular success-strategies book suggests that if we “Begin with the End in Mind,” we are likely to get where we’re going more consistently. We wander less if we think about what we want at the end from the very first steps of our journeys. Context helps us do that.
Human behaviorists and philosophers have described different approaches people use for making decisions – especially the difficult ones. Teleology is a philosophical approach to decisions that focuses on the outcomes or consequences, and tailors the actions to achieve the prescribed outcomes. Deontology is the opposing philosophical approach: it focuses on the duties of the individual and makes decisions that correspond with prescribed duties, regardless of the outcomes. Computer system behavior, especially inference techniques, can be guided from the outcome, backward, or from the present state, forward.
Niccolò di Bernardo dei Machiavelli was a Florentine writer, historian, politician, diplomat, philosopher, and humanist during the Renaissance. Among his ideas was the perspective that some end outcomes are so important that they justify whatever means are necessary to achieve them. This is a teleological approach, and could be said to use backward chaining logic.
Understanding Context Cross-Reference
Links to other posts and glossary/bibliography references:

| Prior Post | Next Post |
|---|---|
| Seeking a Universal Theory of Knowledge | Robot Neurons: Analog versus Digital |

| Term | References |
|---|---|
| Context inference | Fiammante 2010; Forgy 1982 |
| Decisions | S. Covey; Haley 2008 |
| Teleology logic | Kant 1781; Chisholm 2004 |
Top-Down: Backward Chaining Logic
Backward chaining logic focuses on the consequences of actions, using rules as a basis for complex problem resolution. It may operate like this:
In this model, an outcome or “resolution” of a problem is associated with a set of rules that are logical propositions applied against facts. The fact names represent the variables in the formulas, and the values are evaluated as matching (true) or not (false). The Rules Iterator (as shown above) can be smart enough to organize nested rules when the resolution of one rule depends on the resolutions of one or more other rules. The Fact Evaluator does the matching. The Fact Accumulator is responsible, when needed, for going to the user or to external sources to gather additional information that may contribute to the resolution. The Resolution Arbiter observes the state of the machine, and when sufficient information is present to deliver a resolution, it returns that information set as a solution. If the rules and/or the facts are exhausted before a resolution is achieved, the arbiter is responsible for delivering the bad news.
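The roles above can be sketched in a few lines of Python. This is a minimal illustration, not the actual engine the post describes: the rule format, fact names, and `prove`/`ask` functions are all hypothetical, and the Rules Iterator, Fact Evaluator, Fact Accumulator, and Resolution Arbiter are collapsed into one recursive function.

```python
RULES = {
    # goal: list of alternative premise sets (hypothetical example rules);
    # a goal holds if every premise in any one set can itself be proved
    "grant_loan": [["good_credit", "stable_income"]],
    "good_credit": [["score_above_700"]],
}

def prove(goal, facts, ask):
    """Try to establish `goal` by chaining backward through RULES.

    `facts` maps known fact names to True/False (the Fact Evaluator's store).
    `ask` is called for unknown base facts (the Fact Accumulator's job).
    The returned True/False is the Resolution Arbiter's verdict.
    """
    if goal in facts:                    # fact already evaluated
        return facts[goal]
    if goal in RULES:                    # Rules Iterator: handle nested rules
        for premises in RULES[goal]:
            if all(prove(p, facts, ask) for p in premises):
                facts[goal] = True
                return True
        facts[goal] = False              # rules exhausted: deliver the bad news
        return False
    facts[goal] = ask(goal)              # unknown base fact: ask an external source
    return facts[goal]

# Example: only "stable_income" must be asked; the rest chains backward.
known = {"score_above_700": True}
print(prove("grant_loan", known, ask=lambda f: f == "stable_income"))  # True
```

Note how the engine starts from the desired outcome (`grant_loan`) and works backward to the facts, asking for outside input only when a base fact is missing.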
Bottom-Up: Forward Chaining Logic
Behaviorally, it is often simpler to make each successive step in a journey based on the present facts. When unexpected obstacles appear, you often have to make adjustments, such as dismounting and removing a fallen tree in the middle of your route. Communication is also like a journey with unknown obstacles around every corner. The “intentions” of the speaker are unknown to the hearer (the person responsible for understanding intent) when the communication begins, thus the hearer must forward chain from one word, phrase and sentence to the next, not knowing in advance what the outcome will be.
Immanuel Kant frames reasoning in terms of human behavior, prescribing that one must apply reason to guide decisions based on the current situation (empiricism), one’s known or prescribed duties, and a moral perspective on the rights of others. Pure reason, he contends, is sufficient to get a person through life without having immediate consequences or an end goal in mind. In many respects, communication requires a similar kind of bottom-up reasoning, chaining forward from the most recent utterance – even from the previous word to the next – to puzzle out the intent of the speaker or writer.
Forward Chaining Logic may look something like this:
The engine (such as a Rete engine) uses pre-established sequences. In the case of cascading rules, the engine uses branching: it begins at the first step, uses the rules and facts associated with that sequence to determine the validity of the rules’ premises, then proceeds based on the results of the evaluation. The Fact Evaluator and Fact Accumulator are used the same way as in the backward chaining example. When the end goal is not known from the beginning (as is often the case in language understanding – each person starts knowing their own intent, but usually not the other speakers’), forward chaining may be the best option.
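A forward chaining loop can be sketched the same way, starting from the present facts rather than a goal. Again the rule format and fact names are hypothetical, and this naive fire-until-fixpoint loop omits what makes a real Rete engine efficient: the network that avoids re-testing every rule against every fact on each cycle.

```python
rules = [
    # (premises, conclusion): if all premises are in working memory,
    # assert the conclusion (hypothetical language-understanding rules)
    ({"word:bank", "topic:finance"}, "sense:financial_institution"),
    ({"sense:financial_institution", "word:deposit"}, "intent:make_deposit"),
]

def forward_chain(facts):
    """Fire rules repeatedly until no new fact can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)    # new fact enters working memory
                changed = True
    return facts

derived = forward_chain({"word:bank", "topic:finance", "word:deposit"})
print("intent:make_deposit" in derived)  # True
```

The engine never needed to know the final intent in advance: each newly derived fact simply enabled the next rule, which is what makes this direction a fit for interpreting an unfolding utterance.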
As we build a universal theory of knowledge that can support language understanding, we must account for both kinds of process: bottom-up forward chaining, necessary because we don’t know what a person will say next, and top-down backward chaining, needed because we must be able to predict what the person is going to say next at least 80% of the time, or we’ll never keep up. The top-down processes are governed by context: that is, knowing what can be or can occur within the circumscribed confines of the context we believe governs the current communication.
This backward chaining logic, in some respects, interferes with accuracy in understanding because hearers impose their own understanding of the context on the words, possibly diverging from the speaker’s intent. Thus, the most effective listeners are also good at chaining forward, often asking clarifying questions to avoid misunderstanding the speaker’s intent. This is not to say that backward chaining is not an essential component of cognition. The argument between top-down “Rationalism” and bottom-up “Empiricism” really ended with Kant as both sides acknowledged that they are two sides of the same coin.
The system that most accurately and effectively reads the minds of its users determines their intent through a combination of backward chaining inference, based on context, and forward chaining inference, based on the unpredictable words that stream on in the continuum of communication. I will clarify this with examples in upcoming posts.