
23 Feb Inference in Knowledge Apps

In Section 5 we discussed different kinds of knowledge, including existential or hierarchical knowledge and causal knowledge. In Section 7 we discussed modeling approaches and search techniques that can be applied to any kind of knowledge. We saw that causal knowledge can be modeled as chains of causes and effects, and that existential knowledge can be modeled in tree structures or taxonomies. Both causal chains, such as the one represented in the fishbone diagram at right, and taxonomies are good structures for applying rules to perform inference. Hard-coded branching is brittle because it allows only a limited set of branches or cases, each with a limited set of variables; constraint-based heuristics with arbitrarily large rule sets are far more flexible, especially at the edges of knowledge, where a best-guess answer is needed.

In a Section 7 post on this subject (Context Powers Backward Chaining Logic), I discussed context and goal-driven inference. That post describes the justification and architecture for chaining inference. Here I'd like to take the discussion a little further, give some examples, and tie the process into my Universal Information Theory.

Understanding Context Cross-Reference
Click on these links to other posts and glossary/bibliography references

Section 8 #20: AI Apps and Processes

Table of Context

Prior Post: Just In Time Knowledge
Next Post: Seven Reasons Businesses are Implementing Semantics

Definitions: knowledge, modeling, inference, rules, existential, taxonomies
References: Davies 2009, Graesser 1990, BeInformed, Haley 2008, Slagle 1963, Hammond 1970

Forward Chaining Inference

Forward chaining is a process of working from the known to the unknown. For example, we know where we are, we know where we need to go, and we know where to find a map, but we don’t know what roads we will need to take to get to our destination. Forward chaining would perform the following steps:

  1. Get the map
  2. Mark current location on the map (A)
  3. Mark destination on the map (B)
  4. Find most direct route between (A) and (B)
  5. Write down all the routes, mileages, landmarks and turns.
  6. The list of routes, mileages, landmarks and turns is the desired result: we got there using forward-chaining inference. A minimal code sketch of this data-driven process follows the list.
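
To make the idea concrete, here is a minimal forward-chaining sketch in Python. The fact and rule names (have_map, route_found, and so on) are illustrative stand-ins for the map example rather than any particular rules engine; the point is simply that rules fire whenever their premises are already known, adding new facts until nothing more can be derived.

# Each rule is (premises, conclusion); all names are illustrative only.
rules = [
    ({"have_map", "know_current_location"}, "location_marked"),
    ({"have_map", "know_destination"}, "destination_marked"),
    ({"location_marked", "destination_marked"}, "route_found"),
    ({"route_found"}, "directions_written"),
]

def forward_chain(facts, rules):
    """Work from the known to the unknown: fire rules until no rule adds a new fact."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

known = {"have_map", "know_current_location", "know_destination"}
print("directions_written" in forward_chain(known, rules))  # True: the goal is reached from the data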

Backward Chaining

Backward chaining is another useful inference technique: it works from a candidate solution back to the problem. When the list of possible solutions is short and the list of problems is long, it is sometimes the preferred method. It is also very useful when a few possible solutions occur far more frequently than the rest; by beginning with the most frequent solution, you can often finish very quickly with a good answer.

The way to get from the solution to the problem is to test the problem against the solution. This is done at the detail level, where you match data in the input or other sources against the data needed to achieve a goal, that is, to satisfy its constraints. Because constraints can be broken down into sub-goals to an arbitrary depth, the approach is as flexible as you need it to be. Once all the minimum constraints of the problem match the solution, you are done.
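
For comparison, here is a minimal backward-chaining sketch, again in Python with invented names rather than any vendor's API. The engine starts from the goal, uses the rules to break it into sub-goals, and succeeds when every sub-goal bottoms out in a known fact, which mirrors the constraint-matching described above.

# Each goal maps to lists of sub-goal sets that would satisfy it; names are illustrative.
rules = {
    "route_found": [{"location_marked", "destination_marked"}],
    "location_marked": [{"have_map", "know_current_location"}],
    "destination_marked": [{"have_map", "know_destination"}],
}
facts = {"have_map", "know_current_location", "know_destination"}

def prove(goal):
    """A goal holds if it is a known fact, or if every sub-goal of some rule for it can be proven."""
    if goal in facts:
        return True
    return any(all(prove(g) for g in subgoals) for subgoals in rules.get(goal, []))

print(prove("route_found"))  # True: the solution's constraints all match the data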

I worked with BeInformed USA on several enterprise-grade capabilities using their backward-chaining software with powerful results. BeInformed provides topological modeling with a goal-driven rules engine and powerful solution development tools. We were able to solve arbitrarily complex problems using agile strategies to build capability after capability on the foundation of the same model, saving significant up-front development costs and achieving greater speed-to-market than traditional programming.

Rule-Based Reasoning

Reasoning is how humans draw conclusions from a set of premises. There are three categories of reasoning:

  • Inductive reasoning derives general rules from specific observations; it is less certain than deduction and is a common human strategy. Induction can be used to derive a general rule from a known set of premises and applicable conclusions.
    • Example: I’ve seen a few hairy dogs with four legs, so that unrecognized animal with four legs must be a dog.
  • Deductive reasoning combines premises to draw conclusions where, if the premises are true, the conclusion must also be true. Deduction can be used to derive a conclusion from a known rule and a known set of premises.
    • Example: All hairy animals with four legs are mammals, AND that hairy animal has four legs, thus that hairy animal is a mammal.
  • Abductive reasoning tries to explain a conclusion by guessing plausible premises. Abduction can derive possible premises from a known set of rules and applicable conclusions (the sketch after this list contrasts it with deduction).
    • Example: That hairy animal with four legs in the distance is probably a dog.
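
The difference between deduction and abduction is easy to see in a few lines of illustrative Python (the rule and feature names are invented for this example): the same rule is used forward, from premises to a guaranteed conclusion, and backward, from an observed conclusion to premises that would merely explain it.

# One rule, used in two directions; names are illustrative only.
rule = {"premises": {"hairy", "four_legs"}, "conclusion": "mammal"}
observed = {"hairy", "four_legs"}

# Deduction: if every premise holds, the conclusion must hold.
if rule["premises"] <= observed:
    print("Deduce:", rule["conclusion"])

# Abduction: given the conclusion, guess premises that would explain it (plausible, not certain).
if rule["conclusion"] == "mammal":
    print("Possible explanation:", rule["premises"])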

Inference is like human reasoning in that it deals with premises and conclusions. Basically, an inference engine deals with two things:

  1. Rules – Rules often take the form of propositions or patterns that bind premises and conclusions together.
  2. Facts – These are contained in the data to be processed and can serve as the substance of premises.

How do we use rules to reason? In Section 5 we saw how observing cause and effect and making generalizations are ways we reason about the "facts" we encounter. If we encounter an animal, for instance, we look at its size, shape and color and infer its biological species. When a fireman is called to a fire, he/she looks for clues as to what kind of fire it is, then tries to put the fire out using the best available techniques. We can teach computers to use these same kinds of rules to identify animals and fires. Below, we show samples of code in which facts, rules and user input are compared to interpret things. These are extremely limited examples, but they show how you can build instructions or rules into a computational system so it can perform simple reasoning tasks.

The more rules you teach a computer, the more interpretation it should be able to perform successfully. Unfortunately, such systems are often limited by the sheer processing power it takes to perform inference. The distinction between facts and rules is like the distinction between data and code: facts are data, and rules are like code in that they perform functions on the data. A significant difference, however, is that many well-designed expert systems treat the rules as data as well, so that users can add rules as needed without a programmer expanding the application. The code must then be able to interpret rules as well as facts.
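
As a rough illustration of rules-as-data (a sketch, not the design of any particular product), the snippet below loads rules from a text format at run time, so a user could add a rule by editing data rather than code; the engine simply interprets whatever rules it is given, just as it interprets facts.

import json

# Rules arrive as data, not code; a user can append to this text without reprogramming.
rule_text = """
[
  {"if": {"burning_material": "wood"},   "then": "Type A fire. Try to douse it with water"},
  {"if": {"burning_material": "grease"}, "then": "Type B fire. Try to douse it with CO2"}
]
"""
rules = json.loads(rule_text)

def interpret(fact, rules):
    """The engine treats rules the same way it treats facts: as data to be matched."""
    for rule in rules:
        if all(fact.get(k) == v for k, v in rule["if"].items()):
            return rule["then"]
    return "No matching rule"

print(interpret({"burning_material": "grease"}, rules))  # Type B fire. Try to douse it with CO2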

Examples of Rules

Here are some examples of rules written in logical clause form (CLIPS-style syntax). See if you can reverse engineer them to work out what they do.

; Classify a fire by the material that is burning.
(defrule determine_type "figure out what kind of fire by material burning" (emergency fire)
=> (printout t "What's burning?") (assert (burning_material =(read) ) ))
(defrule type_A_combustible "" (burning_material paper | wood | cloth)
=> (printout t "Type A fire. Try to douse it with water" crlf))
(defrule type_B_combustible "" (burning_material flammable_liquid | grease | other_inflam_liquid)
=> (printout t "Type B fire. Try to douse it with CO2" crlf))
(defrule type_C_combustible "" (burning_material something_electrical & (plugged_in | turned_on))
=> (printout t "Type C fire. Cut power then fight fire" crlf))
(defrule type_D_combustible "" (burning_material magnesium | sodium | potassium | titanium | zirconium)
=> (printout t "Type D fire. Fight fire with chemical extinguisher" crlf))

The primitive interface of this little rule set is a question: "What's burning?" As long as the answer fits within the predicted set of answers, the input will cause one of the rules to "fire" and deliver an answer.

; Animal-guessing dialogue: each answer is asserted as a fact, which triggers the next question or a guess.
(deffacts status "start the program with a fact" (begin))

(defrule startup "" (begin) => (printout t "Is it very big?" crlf)
(assert (question "Is it very big?" =(read) ) ))

(defrule not_big "" (question "Is it very big?" no) => (printout t "Does it squeak?" crlf)
(assert (question "Does it squeak?" =(read) ) ))

(defrule big "" (question "Is it very big?" yes) => (printout t "Does it have a long neck?" crlf)
(assert (question "Does it have a long neck?" =(read) ) ))

(defrule not_squeak "" (question "Does it squeak?" no) => (printout t "I guess it's a squirrel!" crlf) )

(defrule squeak "" (question "Does it squeak?" yes) => (printout t "I guess it's a mouse!" crlf) )

(defrule short_neck "" (question "Does it have a long neck?" no) => (printout t "Does it have a trunk?" crlf)
(assert (question "Does it have a trunk?" =(read) ) ))

(defrule long_neck "" (question "Does it have a long neck?" yes) => (printout t "I guess it's a giraffe!" crlf) )

(defrule no_trunk "" (question "Does it have a trunk?" no) => (printout t "Does it like to be in water?" crlf)
(assert (question "Does it like to be in water?" =(read) ) ))

(defrule trunk "" (question "Does it have a trunk?" yes) => (printout t "I guess it's an elephant!" crlf) )

(defrule hydrophobic "" (question "Does it like to be in water?" no) => (printout t "I guess it's a rhino!" crlf) )

(defrule hydrophilic "" (question "Does it like to be in water?" yes) => (printout t "I guess it's a hippo!" crlf) )

Rules and Knowledge

Let's go back to the hardware and software of cognition for a moment. We proposed a universal structure of information governed by contextual associations. When we consider the possible approaches we can use for capturing and representing rules, natural structures may turn out to be useful for building smarter, more human-like systems. According to Joe's Theory of Everything (JTE), relations between data items are the fabric of science. "What are scientific laws?" ask Hammond, et al., answering their own question, and ours as well: "For our purposes, it is enough to say that relations among facts, when sufficiently well established, become laws" (1970, p. 6). Much or all of business, science, and knowledge, then, can be described as links between facts.

What are the facts? Generally, facts can apply to objects, concrete and abstract, and events. These events and the objects involved are the stuff of science. Statistical analysis is one way of examining the interaction of events and objects in order to answer questions like “what are the business rules,” “what are the scientific laws,” and “what are the facts.” Analysis can be very useful because it gives us perspective on events that have occurred. It can be even more useful when application of this perspective enables us to prepare for and respond to new or future situations, which leads to two critical questions:

  1. How do we design a system in such a way that it can both represent known facts and learn new facts?
  2. How can we structure facts in such a way that they can be interpreted as rules that can be used in processing? (A small sketch of one possibility follows.)
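
One illustrative answer to the second question, in the spirit of the Hammond quote above, is to record relations among facts and promote the well-established ones to rules. The threshold and relation names below are assumptions made for the sake of the example, not a prescription.

from collections import Counter

# Observed (cause, effect) pairs; relations among facts, when sufficiently well established, become laws.
observations = [
    ("smoke", "fire"), ("smoke", "fire"), ("smoke", "fire"),
    ("smoke", "fog"),
]

pair_counts = Counter(observations)
cause_counts = Counter(cause for cause, _ in observations)

learned_rules = {
    (cause, effect)
    for (cause, effect), n in pair_counts.items()
    if n / cause_counts[cause] >= 0.7  # "sufficiently well established" threshold (an assumption)
}

print(learned_rules)  # {('smoke', 'fire')} -- a learned, revisable rule, not a certainty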
