Backward chaining





Backward chaining (or backward reasoning) is an inference method described colloquially as working backward from the goal. It is used in automated theorem provers, inference engines, proof assistants, and other artificial intelligence applications.[1]


In game theory, researchers apply it to (simpler) subgames to find a solution to the game, in a process called backward induction. In chess, it is called retrograde analysis, and it is used to generate endgame tablebases for computer chess.


Backward chaining is implemented in logic programming by SLD resolution; both are based on the modus ponens inference rule. Backward chaining is one of the two most commonly used methods of reasoning with inference rules and logical implications – the other is forward chaining. Backward chaining systems usually employ a depth-first search strategy, as Prolog does.[2]









How it works


Backward chaining starts with a list of goals (or a hypothesis) and works backwards from the consequent to the antecedent to see if any data supports any of these consequents.[3] An inference engine using backward chaining would search the inference rules until it finds one with a consequent (Then clause) that matches a desired goal. If the antecedent (If clause) of that rule is not known to be true, it is added to the list of goals: for the original goal to be confirmed, one must also supply data that confirms this new rule.
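
As a concrete sketch, a minimal propositional backward chainer can be written in a few lines of Python. The representation and the function name backward_chain are illustrative assumptions, not taken from any real engine; in particular, a real system would unify variables (the X of the rules below) and guard against cyclic rule bases. Note that the recursion explores sub-goals depth-first, the strategy noted above for Prolog.

    # Minimal backward-chaining sketch (propositional: goals are plain strings).
    # A rule is a pair (antecedents, consequent): "If all antecedents – Then consequent".
    def backward_chain(goal, rules, facts):
        """Return True if `goal` can be proved from `facts` using `rules`."""
        if goal in facts:
            return True                     # the goal is a known fact
        for antecedents, consequent in rules:
            if consequent == goal:          # the Then clause matches the goal...
                # ...so each If clause becomes a new sub-goal to prove.
                if all(backward_chain(a, rules, facts) for a in antecedents):
                    return True
        return False                        # no fact or rule establishes the goal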


For example, suppose a new pet, Fritz, is delivered in an opaque box along with two facts about Fritz:



  • Fritz croaks

  • Fritz eats flies


The goal is to decide whether Fritz is green, based on a rule base containing the following four rules (see the encoding after the list):



An example of backward chaining.




  1. If X croaks and X eats flies – Then X is a frog


  2. If X chirps and X sings – Then X is a canary


  3. If X is a frog – Then X is green


  4. If X is a canary – Then X is yellow
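
In the illustrative representation of the sketch above, this rule base and the two given facts can be encoded as plain data. The sketch is propositional, so Fritz is substituted for X up front; a real engine would do this by unification at query time.

    rules = [
        ({"Fritz croaks", "Fritz eats flies"}, "Fritz is a frog"),    # rule 1
        ({"Fritz chirps", "Fritz sings"},      "Fritz is a canary"),  # rule 2
        ({"Fritz is a frog"},                  "Fritz is green"),     # rule 3
        ({"Fritz is a canary"},                "Fritz is yellow"),    # rule 4
    ]
    facts = {"Fritz croaks", "Fritz eats flies"}                      # given about Fritz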


With backward reasoning, an inference engine can determine whether Fritz is green in four steps. To start, the query is phrased as a goal assertion that is to be proved: "Fritz is green".


1. Fritz is substituted for X in rule #3 to see if its consequent matches the goal, so rule #3 becomes:


 If Fritz is a frog – Then Fritz is green

Since the consequent matches the goal ("Fritz is green"), the rules engine now needs to see if the antecedent ("Fritz is a frog") can be proved. The antecedent therefore becomes the new goal:


 Fritz is a frog

2. Again substituting Fritz for X, rule #1 becomes:


 If Fritz croaks and Fritz eats flies – Then Fritz is a frog

Since the consequent matches the current goal ("Fritz is a frog"), the inference engine now needs to see if the antecedent ("Fritz croaks and Fritz eats flies") can be proved. The antecedent therefore becomes the new goal:


 Fritz croaks and Fritz eats flies

3. Since this goal is a conjunction of two statements, the inference engine breaks it into two sub-goals, both of which must be proved:


 Fritz croaks
Fritz eats flies

4. The inference engine sees that both of these sub-goals were given as initial facts. Therefore, the conjunction is true:


 Fritz croaks and Fritz eats flies

therefore the antecedent of rule #1 is true and the consequent must be true:


 Fritz is a frog

therefore the antecedent of rule #3 is true and the consequent must be true:


 Fritz is green

This derivation therefore allows the inference engine to prove that Fritz is green. Rules #2 and #4 were not used.
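
Running the query against the sketch and the encoded rule base above reproduces exactly this derivation:

    print(backward_chain("Fritz is green", rules, facts))   # True  – rules #3 and #1 fire
    print(backward_chain("Fritz is yellow", rules, facts))  # False – "Fritz chirps" cannot be proved

The second query fails for the same reason rules #2 and #4 went unused above: no fact or rule establishes that Fritz chirps or sings.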


Note that the goals always match the affirmed versions of the consequents of implications (not the negated versions, as in modus tollens), and that their antecedents are then treated as new goals (not as conclusions, as in affirming the consequent), which ultimately must match known facts (usually defined as consequents whose antecedents are always true); thus, the inference rule used is modus ponens.
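
In inference-rule notation, the three patterns distinguished above are:

    \frac{P \to Q \qquad P}{Q}\ \text{(modus ponens)}
    \qquad
    \frac{P \to Q \qquad \neg Q}{\neg P}\ \text{(modus tollens)}
    \qquad
    \frac{P \to Q \qquad Q}{P}\ \text{(affirming the consequent; invalid)}

Backward chaining only superficially resembles the third pattern: when a goal Q matches a consequent, P is not concluded but becomes a new sub-goal that must itself be proved.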


Because the list of goals determines which rules are selected and used, this method is called goal-driven, in contrast to data-driven forward-chaining inference. The backward chaining approach is often employed by expert systems.
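
For contrast, a data-driven forward chainer over the same illustrative representation ignores the goal while deriving: it repeatedly fires any rule whose If clauses are all known, growing the set of established facts to a fixed point, and only then checks whether the goal is among them.

    def forward_chain(goal, rules, facts):
        """Data-driven: derive consequents from known facts until nothing changes."""
        known = set(facts)
        changed = True
        while changed:
            changed = False
            for antecedents, consequent in rules:
                if consequent not in known and antecedents <= known:
                    known.add(consequent)   # fire the rule, record its Then clause
                    changed = True
        return goal in known

On the Fritz rule base this derives "Fritz is a frog" and "Fritz is green" whether or not anyone asked, whereas the backward chainer above touches only rules whose consequents match a pending (sub-)goal.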


Programming languages such as Prolog, Knowledge Machine and ECLiPSe support backward chaining within their inference engines.[4]



See also



  • Backtracking

  • Backward induction

  • Forward chaining

  • Opportunistic reasoning



References




  1. ^ Feigenbaum, Edward (1988). The Rise of the Expert Company. Times Books. p. 317. ISBN 0-8129-1731-6.


  2. ^ Michel Chein; Marie-Laure Mugnier (2009). Graph-based knowledge representation: computational foundations of conceptual graphs. Springer. p. 297. ISBN 978-1-84800-285-2.


  3. ^ Definition of backward chaining as a depth-first search method: Russell & Norvig 2009, p. 337.



  4. ^ Languages that support backward chaining: Russell & Norvig 2009, p. 339.




External links


  • Backward chaining example


