PLN Backward Chaining


Backward chaining is used, or planned to be used, in OpenCog for

  1. OpenPsi planning,
  2. improving fitness estimates of MOSES models,
  3. generally finding subtler patterns that specialized algorithms like the pattern miner miss.

For background on PLN, see Probabilistic logic networks.

Theory


Backward chaining: given a target, an atom that may or may not contain free variables, the system tries to find an inference tree that proves the target, grounding the free variables if necessary.

Because the target determines which rules are selected and used, this method is considered goal-driven, in contrast to data-driven forward-chaining inference. Backward chaining is also considered goal-directed or hypothesis-driven inference, because inferences are not performed until the system is asked to prove a particular goal (i.e. to answer a question).

Backward chaining is used when you want to work backwards from the consequent to the antecedent. The consequent is usually the part that follows 'then' in a proposition.

Examples of antecedents & consequents:

  • If P, then Q.

Q is the consequent of this hypothetical proposition. P is the antecedent.

  • If X is a mammal, then X is an animal.

Here, "X is an animal" is the consequent, and "X is a mammal" the antecedent.

  • If OpenCog can think, then the Singularity is near.

"the Singularity is near" is the consequent, and "OpenCog can think" is an antecedent.

A logical example of Backward Chaining

Here we will try to satisfy the goal of determining whether 'Plato is mortal' or not.
Known facts:

  • Plato is a philosopher
  • Plato likes to wrestle

The goal is to find out whether Plato is mortal based on a rule base:

  1. If X is a philosopher and likes to wrestle then X is a man
  2. If X can fly and lives in the sky then X is a god
  3. If X is a god then X is immortal
  4. If X is a man then X is mortal

Using backwards reasoning we can determine whether Plato is mortal in four steps. Rule 4 has the consequent 'X is mortal', which matches our goal of finding out whether 'Plato is mortal'. Now look at the antecedent of rule 4 (the part just after the 'If'): it is 'X is a man'. Substituting Plato for X, this antecedent becomes the new goal: finding out whether 'Plato is a man'.
Rule 1 has the consequent 'X is a man', and its antecedents are 'X is a philosopher' and 'X likes to wrestle'.
These antecedents become two subgoals.
Given the two facts 'Plato is a philosopher' and 'Plato likes to wrestle', both subgoals are satisfied.
So rule 1 fires, establishing that Plato is a man; this in turn satisfies the antecedent of rule 4, whose consequent 'X is mortal' matches our goal. Therefore Plato is indeed mortal.
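
To make the control flow concrete, here is a minimal, self-contained sketch of this reasoning in plain Guile Scheme. This is not the PLN/URE implementation: the rule and fact encodings and the prove procedure are invented for illustration, and there is no loop detection or general unification, just substitution of a single constant for X.

(use-modules (srfi srfi-1))  ; for any, every

; Rules as (consequent . antecedents) pairs; the symbol X is the only
; variable, and each pattern is a (predicate subject) list.
(define rules
  '(((man X)      . ((philosopher X) (wrestles X)))   ; rule 1
    ((god X)      . ((flies X) (lives-in-sky X)))     ; rule 2
    ((immortal X) . ((god X)))                        ; rule 3
    ((mortal X)   . ((man X)))))                      ; rule 4

; Known facts.
(define facts
  '((philosopher Plato)
    (wrestles Plato)))

; Substitute a constant for the variable X in a pattern.
(define (ground pattern const)
  (map (lambda (term) (if (eq? term 'X) const term)) pattern))

; Backward chaining: a goal is proven if it is a known fact, or if it
; matches the grounded consequent of some rule all of whose grounded
; antecedents (the new subgoals) can be proven recursively.
(define (prove goal)
  (or (member goal facts)
      (any (lambda (rule)
             (let ((const (cadr goal)))
               (and (equal? (ground (car rule) const) goal)
                    (every prove
                           (map (lambda (ante) (ground ante const))
                                (cdr rule))))))
           rules)))

(prove '(mortal Plato))  ; => truthy: Plato is mortal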

More theory

Here is a short explainer on Backward Chaining by Jocelyn Ireson-Paine

Also see the Wikipedia Page on Backward Chaining

Practice

The following example is a simplified version of the frog example and relies on some of its files. So first go to

cd <ATOMSPACE_ROOT>/examples/rule-engine/frog

and fire up guile

guile

then set the logging level to debug (this will be handy for understanding what the backward chainer does)

(use-modules (opencog logger))
(cog-logger-set-level! "debug")

In this example we have a black box; we know there is something in it, that it makes croaking sounds, and that it eats flies. The objective is to find the color of the thing in the black box. We have the following relations defined:

  • If X croaks and X eats flies - Then X is a frog
  • If X is a frog - Then X is green

Let's say the thing in the black box is named Fritz; from the above relations we need to deduce its color.

Now load the rule-engine module, as well as the knowledge and rule bases

(use-modules (opencog rule-engine))
(load "knowledge-base.scm")
(load "rule-base.scm")

Let's have a look at these files. Open knowledge-base.scm. You may observe ImplicationScopeLinks such as

(ImplicationScope (stv 1.0 1.0)
   (TypedVariable
      (Variable "$X")
      (Type "ConceptNode"))
   (And
      (Evaluation
         (Predicate "croaks")
         (Variable "$X"))
      (Evaluation
        (Predicate "eats_flies")
        (Variable "$X")))
   (Inheritance
      (Variable "$X")
      (Concept "frog")))

encoding the relationship

  • If X croaks and X eats flies - Then X is a frog
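
The second relation, "If X is a frog - Then X is green", is presumably encoded along the same lines; a sketch of what it should look like (check knowledge-base.scm for the exact atoms and truth values):

(ImplicationScope (stv 1.0 1.0)
   (TypedVariable
      (Variable "$X")
      (Type "ConceptNode"))
   (Inheritance
      (Variable "$X")
      (Concept "frog"))
   (Inheritance
      (Variable "$X")
      (Concept "green")))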

At the end of that same file there are facts, expressed with EvaluationLinks, relating Fritz and other animals to their attributes.
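
For instance, the fact that Fritz croaks would look something like the following (again, see the file for the exact form):

(Evaluation (stv 1.0 1.0)
   (Predicate "croaks")
   (Concept "Fritz"))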

You may want to take a look at rule-base.scm, which loads the rules, associates them with a rule base (called ci-rbs) and sets some URE parameters (such as URE:maximum-iterations). It is worth exploring the rule files conditional-instantiation-meta-rule.scm and fuzzy-conjunction-introduction-rule.scm as well. Don't worry if you don't understand them; just read the comments at the top of the files, which explain what the rules do.

What the backward chainer is going to do is apply these rules over the knowledge base to find out what is green. We are about to call the backward chainer, but let's first define the target

(define target
  (InheritanceLink (VariableNode "$what") (ConceptNode "green"))
)

meaning you want atoms that have an inheritance relationship with the concept green. To make the backward chainer's life easier and avoid nonsensical answers (such as the target itself), let's restrict the type of the variable $what

(define vardecl
  (TypedVariable (VariableNode "$what") (TypeNode "ConceptNode"))
)

We may now call the backward chainer as follows

(cog-bc ci-rbs target #:vardecl vardecl)

The last, optional argument is the focus set, which here is left empty, meaning the backward chainer will consider the entire atomspace. You should get the following result

$4 = (SetLink
   (InheritanceLink (stv 1 1)
      (ConceptNode "Fritz")
      (ConceptNode "green")
   )
)

If you wish to understand in more depth how the backward chainer came up with the answer, open the log file opencog.log in the same directory. Search for the messages logged by the URE components (containing the string "[URE]") and try to make sense of what you see. :-) In particular, you may find that the inference trees evolved by the backward chainer are logged right after the message "With inference tree:". For instance, the inference tree corresponding to the solution is

                          [12543950459016998847][1] [15852562040974555944][1]
                          ======fuzzy-conjunction-introduction-formula=======
             [14398977004689637744][1] [15383395919935236078][1]
             ------conditional-full-instantiation-formula-------
[15603332628456784367][1] [13829542973489658864][1]
------conditional-full-instantiation-formula-------
             [13520875333812722983][1]

where the numbers correspond to the UUIDs of the atoms (target, intermediary targets and premises) involved in the inference. If you search for these UUIDs you can find the corresponding atoms in the log file.

Quiz

1. Backward Chaining...

is a technique of using linked lists where new elements are added to the back
is a technique of using linked lists where new elements are added to the front
It is one of the two main methods of reasoning when using an inference engine
It can be described logically as repeated application of modus ponens
is a technique for defending against someone with a morning star
is a system that awaits new facts and tries applying conditions as soon as they arrive
is an inference method that can be described (in lay terms) as working backward from the goal(s)
starts with the known facts and asserts new facts
starts with goals, and works backward to determine what facts must be asserted so that the goals can be achieved

2. When would backward chaining be appropriate?

When you want to start with a list of goals (or a hypothesis) and work backwards from the consequent to the antecedent
When you have data available, and can use inference rules to extract more data to meet final goal
When you have a clearly defined final goal, and want to use inference rules to work backwards from it
When you are feeling lazy
When you want to determine whether the IF clause (antecedent) of a rule is true, and to do so you can search inference rules for a matching THEN clause (consequent)

3. It is important to make sure each step in the task is written into the program.

True
False

4. Backward Chaining is..

data-driven
goal-driven
hypothesis driven
goal-directed inference
data-directed inference


*If you, dear reader, think of some good quiz questions, add them to the discussion page of this wiki page

Info

Maintained by Misgana, Nil (Practice)

Theory/Quiz added by Adam

Priority: Medium

  • add a good example (William Ma should pick the example)