Unified rule engine


The unified rule engine (URE for short) is a generic OpenCog rule engine operating on the AtomSpace. Two chaining modes are currently supported: Forward Chaining and Backward Chaining. It is mostly built on top of the Pattern Matcher. For more history about its design see URE historical discussion.

Configuring and using the URE

For configuration and usage see URE Configuration.

Forward Chainer

The forward chainer applies rules from premises to conclusions. It currently uses a rather brute-force algorithm: select sources and rules somewhat randomly, apply them to produce conclusions, re-insert the conclusions as new sources, and iterate until a stopping criterion has been met.

Forward Chainer Algorithm

The forward chainer is initialized with:

• A set of sources, that may or may not contain variables, from which to start the chaining process.
• Optionally a focus set, to restrict rule application to a subset of the atomspace.
• A rule-base, or rule-based system, defining a set of rules and parameters, amongst which are:
  • Fitness functions over sources and rules, to drive the inference control.
  • (Optional) Other parameters for termination and finer control. See URE Parameters for more details.

The high-level pseudocode of the forward chainer is as follows:

1. Initialize the potential-source set with the initial source atoms.
2. Randomly select the next source from the potential-source set (according to a source fitness function based distribution).
3. Amongst all valid rules, randomly select one (Thompson Sampling based on the truth value of the rule).
4. Unify the selected rule with the selected source, and apply it to generate conclusions.
5. Insert the conclusions to the potential-source set.
6. Check if one of the stopping criteria has been met. If so, exit and return the potential-source set, otherwise repeat from step 2.
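The loop above can be sketched in Python. All helper names here (`source_weight`, `rule_weight`, `unifies_with`, `apply`) are hypothetical stand-ins for the URE's fitness functions and pattern-matcher-based rule application, not actual URE APIs:

```python
import random

def forward_chain(sources, rules, max_steps, source_weight, rule_weight):
    """Sketch of the forward chainer loop; helper names are illustrative."""
    # Step 1: initialize the potential-source set with the initial sources
    potential_sources = set(sources)
    for _ in range(max_steps):  # stopping criterion: number of steps
        # Step 2: source selection, weighted by the source fitness function
        candidates = list(potential_sources)
        source = random.choices(
            candidates, weights=[source_weight(s) for s in candidates])[0]
        # Step 3: rule selection (the URE uses Thompson Sampling here)
        valid_rules = [r for r in rules if r.unifies_with(source)]
        if not valid_rules:
            continue
        rule = random.choices(
            valid_rules, weights=[rule_weight(r) for r in valid_rules])[0]
        # Steps 4-5: apply the rule and insert conclusions as new sources
        potential_sources |= set(rule.apply(source))
    # Step 6: on exit, return the potential-source set
    return potential_sources
```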

Fitness Functions

At the moment the fitness functions are crudely based on probability estimates of the given source or rule leading to a desired forward inference chain.

For a source, the probability estimate is calculated by compounding the probabilities of the rules applied to it so far (how much compounding takes place can be tweaked to control breadth versus depth expansion).
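As an illustration only (the URE's exact formula is not given here), such a compounded estimate might look like the following, where a `compounding` exponent below 1 damps the product, favoring breadth, while 1.0 gives the plain product, favoring depth along strong chains:

```python
def source_fitness(rule_probs, compounding=1.0):
    """Compound the probabilities of the rules applied so far to reach a
    source.  Illustrative formula only, not the URE's actual one."""
    fitness = 1.0
    for p in rule_probs:
        fitness *= p ** compounding
    return fitness
```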

For a rule, the probability estimate is expressed by the truth value of the rule (see Adding Rules), with the uncertainty taken into account via Thompson Sampling.
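Thompson Sampling over rule truth values can be sketched as follows. The mapping from a (strength, confidence) truth value to Beta distribution parameters is a simplifying assumption for illustration, not the URE's exact scheme:

```python
import random

def thompson_select(rules):
    """Pick a rule by Thompson sampling over its truth value (sketch).

    Each rule's TV is treated as a Beta distribution whose mean is the
    strength and whose concentration grows with the confidence; one sample
    is drawn per rule and the best sample wins, so low-confidence rules
    still get explored occasionally.
    """
    def sample(strength, confidence, prior=1.0, max_count=100.0):
        count = prior + confidence * max_count   # pseudo-observation count
        alpha = 1.0 + strength * count
        beta = 1.0 + (1.0 - strength) * count
        return random.betavariate(alpha, beta)
    return max(rules, key=lambda r: sample(*r.tv))
```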

Rule Unification and Application

Unifying source and rule, as required in step 4, means unifying the source pattern with one of the rule's premise patterns. For instance, if the source is:

```
InheritanceLink
   ConceptNode "John"
   VariableNode "$X"
```

and the rule is (a simplified version of deduction tailored for this example):

```
BindLink
   VariableList
      VariableNode "$A"
      VariableNode "$B"
      VariableNode "$C"
   AndLink
      InheritanceLink
         VariableNode "$A"
         VariableNode "$B"
      InheritanceLink
         VariableNode "$B"
         VariableNode "$C"
   ExecutionOutputLink
      GroundedSchemaNode "scm: deduction-formula"
      ListLink
         InheritanceLink
            VariableNode "$A"
            VariableNode "$C"
         InheritanceLink
            VariableNode "$A"
            VariableNode "$B"
         InheritanceLink
            VariableNode "$B"
            VariableNode "$C"
```

The rule premises are

```
InheritanceLink
   VariableNode "$A"
   VariableNode "$B"
```

and

```
InheritanceLink
   VariableNode "$B"
   VariableNode "$C"
```

The source here can be unified with both premises. Either with the first one, where

```
VariableNode "$A" -> ConceptNode "John"
VariableNode "$B" -> VariableNode "$X"
```

or with the second one, where

```
VariableNode "$B" -> ConceptNode "John"
VariableNode "$C" -> VariableNode "$X"
```

Either way the rule can be applied over the AtomSpace, but before that it is rewritten to be specialized for that source. Assuming the first unification is selected, the specialized rule is:

```
BindLink
   VariableList
      VariableNode "$B"
      VariableNode "$C"
   AndLink
      InheritanceLink
         ConceptNode "John"
         VariableNode "$B"
      InheritanceLink
         VariableNode "$B"
         VariableNode "$C"
   ExecutionOutputLink
      GroundedSchemaNode "scm: deduction-formula"
      ListLink
         InheritanceLink
            ConceptNode "John"
            VariableNode "$C"
         InheritanceLink
            ConceptNode "John"
            VariableNode "$B"
         InheritanceLink
            VariableNode "$B"
            VariableNode "$C"
```

By doing that we restrict the rule application to the selected source, decreasing both its computational cost and the pollution of the AtomSpace with irrelevant conclusions.
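The unification and specialization just described can be sketched in miniature. Terms are nested tuples such as `('Inheritance', '$A', '$B')`, with strings starting with `$` acting as variables; this toy representation and both function names are illustrative, not part of the URE:

```python
def unify(pattern, source, bindings=None):
    """One-sided syntactic unification of a premise pattern with a source.

    Only pattern-side variables are bound, which suffices to illustrate
    source/premise unification; the URE's unifier is more general.
    """
    bindings = dict(bindings or {})
    if isinstance(pattern, str) and pattern.startswith('$'):
        if pattern in bindings and bindings[pattern] != source:
            return None                      # conflicting binding
        bindings[pattern] = source
        return bindings
    if isinstance(pattern, tuple) and isinstance(source, tuple) \
            and len(pattern) == len(source):
        for p, s in zip(pattern, source):
            bindings = unify(p, s, bindings)
            if bindings is None:
                return None
        return bindings
    return bindings if pattern == source else None

def substitute(term, bindings):
    """Rewrite a term with the bindings, specializing the rule."""
    if isinstance(term, tuple):
        return tuple(substitute(t, bindings) for t in term)
    return bindings.get(term, term)
```

For example, unifying the premise `('Inheritance', '$A', '$B')` with the source `('Inheritance', 'John', '$X')` binds `$A` to `John` and `$B` to `$X`; substituting those bindings into the conclusion pattern specializes the rule as shown above.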

Termination

For now termination is determined by

1. the number of forward steps applied,
2. whether all combinations of sources and rules have been exhausted.

More termination criteria may be introduced in the future involving how well the inference goal is fulfilled, etc.

Backward Chainer

Conceptually, one may see Backward Chaining as a search for forward inferences that would lead to the target. That is in practice how the Backward Chainer algorithm works: it evolves inference trees that may prove, by construction, the given target.

Back-Inference Tree

The structure holding the evolving inference trees is called the Back-Inference Tree (BIT for short), and is inspired by Ari Heljakka's C++ Backward Chainer design. The discussion that led to this design can be found in this [issue]. A BIT is an and-or tree with a target at its root. The and-branches represent the multiplicity of premises that must be fulfilled in order to apply a rule, while the or-branches represent the multiplicity of rules that can match a target. In practice, however, the BIT is merely a collection of inference trees (with and-branches only) living in the AtomSpace.

Inference Tree

More specifically, an inference tree (called an and-BIT, as it is a tree with and-branches only) is a BindLink that looks like a rule, with the exception that the rewrite term may be composed of multiple formula applications. For instance, the inference tree representing a deduction application to obtain the target (in Scheme format, to display the atom hash values as well)

```
(InheritanceLink
  (VariableNode "$X") ; [6809909406030619949][1]
  (ConceptNode "D") ; [246112806454457922][1]
) ; [13295355732252075959][1]
```

would be

```
(BindLink
  (VariableList
    (VariableNode "$X") ; [6809909406030619949][1]
    (TypedVariableLink
      (VariableNode "$B-6266d6f2") ; [4097372290580364298][6]
      (TypeNode "ConceptNode") ; [3788388250666256890][1]
    ) ; [11215517665988510382][6]
  ) ; [15424546366970522031][6]
  (AndLink
    (InheritanceLink
      (VariableNode "$B-6266d6f2") ; [4097372290580364298][6]
      (ConceptNode "D") ; [246112806454457922][1]
    ) ; [16015351290941397556][6]
    (InheritanceLink
      (VariableNode "$X") ; [6809909406030619949][1]
      (VariableNode "$B-6266d6f2") ; [4097372290580364298][6]
    ) ; [17146615216377982335][6]
  ) ; [12118664779578609880][6]
  (ExecutionOutputLink
    (GroundedSchemaNode "scm: deduction-formula") ; [5481571512010723578][1]
    (ListLink
      (InheritanceLink
        (VariableNode "$X") ; [6809909406030619949][1]
        (ConceptNode "D") ; [246112806454457922][1]
      ) ; [13295355732252075959][1]
      (InheritanceLink
        (VariableNode "$X") ; [6809909406030619949][1]
        (VariableNode "$B-6266d6f2") ; [4097372290580364298][6]
      ) ; [17146615216377982335][6]
      (InheritanceLink
        (VariableNode "$B-6266d6f2") ; [4097372290580364298][6]
        (ConceptNode "D") ; [246112806454457922][1]
      ) ; [16015351290941397556][6]
    ) ; [12005927857233859901][6]
  ) ; [10461083420299819741][6]
) ; [16509059222210966642][6]
```

which looks like a rule after unification. This FCS (Forward Chaining Strategy) may be represented graphically as an upside-down and-tree

```
[17146615216377982335][6] [16015351290941397556][6]
-----------------deduction-formula-----------------
             [13295355732252075959][1]
```

where the premises are above the line and the conclusion (the target) below it.

An inference tree expanded from it, where an additional deduction occurs, may be

```
(BindLink
  (VariableList
    (VariableNode "$X") ; [6809909406030619949][1]
    (TypedVariableLink
      (VariableNode "$B-6266d6f2") ; [4097372290580364298][6]
      (TypeNode "ConceptNode") ; [3788388250666256890][1]
    ) ; [11215517665988510382][6]
    (TypedVariableLink
      (VariableNode "$B-6229393a") ; [6185394777777469381][6]
      (TypeNode "ConceptNode") ; [3788388250666256890][1]
    ) ; [15556655485509547465][6]
  ) ; [17281223568524105048][6]
  (AndLink
    (InheritanceLink
      (VariableNode "$B-6229393a") ; [6185394777777469381][6]
      (ConceptNode "D") ; [246112806454457922][1]
    ) ; [11133117073607658831][6]
    (InheritanceLink
      (VariableNode "$B-6266d6f2") ; [4097372290580364298][6]
      (VariableNode "$B-6229393a") ; [6185394777777469381][6]
    ) ; [12731261225409633207][6]
    (InheritanceLink
      (VariableNode "$X") ; [6809909406030619949][1]
      (VariableNode "$B-6266d6f2") ; [4097372290580364298][6]
    ) ; [17146615216377982335][6]
  ) ; [16361759001634035305][6]
  (ExecutionOutputLink
    (GroundedSchemaNode "scm: deduction-formula") ; [5481571512010723578][1]
    (ListLink
      (InheritanceLink
        (VariableNode "$X") ; [6809909406030619949][1]
        (ConceptNode "D") ; [246112806454457922][1]
      ) ; [13295355732252075959][1]
      (InheritanceLink
        (VariableNode "$X") ; [6809909406030619949][1]
        (VariableNode "$B-6266d6f2") ; [4097372290580364298][6]
      ) ; [17146615216377982335][6]
      (ExecutionOutputLink
        (GroundedSchemaNode "scm: deduction-formula") ; [5481571512010723578][1]
        (ListLink
          (InheritanceLink
            (VariableNode "$B-6266d6f2") ; [4097372290580364298][6]
            (ConceptNode "D") ; [246112806454457922][1]
          ) ; [16015351290941397556][6]
          (InheritanceLink
            (VariableNode "$B-6266d6f2") ; [4097372290580364298][6]
            (VariableNode "$B-6229393a") ; [6185394777777469381][6]
          ) ; [12731261225409633207][6]
          (InheritanceLink
            (VariableNode "$B-6229393a") ; [6185394777777469381][6]
            (ConceptNode "D") ; [246112806454457922][1]
          ) ; [11133117073607658831][6]
        ) ; [10363704109909197645][6]
      ) ; [18042231709829933293][6]
    ) ; [14032808276122395638][6]
  ) ; [12487963839188355478][6]
) ; [12956955273435071231][6]
```

where the second premise

```
(InheritanceLink
  (VariableNode "$B-6266d6f2") ; [4097372290580364298][6]
  (ConceptNode "D") ; [246112806454457922][1]
) ; [16015351290941397556][6]
```

has been replaced by a deduction application. This expanded inference tree may be graphically represented by

```
          [12731261225409633207][6] [11133117073607658831][6]
          -----------------deduction-formula-----------------
[17146615216377982335][6]           [16015351290941397556][6]
------------------------deduction-formula------------------------
                  [13295355732252075959][1]
```

All atoms of an inference tree (at least the ones in the graphical representation) are called BIT-nodes. Amongst the BIT-nodes, all backwardly expandable ones are called premises; here they are [17146615216377982335][6], [12731261225409633207][6] and [11133117073607658831][6].

Backward Chainer Algorithm

Similarly to the Forward Chainer, the Backward Chainer is initialized with:

• A target, that may or may not contain variables, from which to start the back chaining process.
• Optionally, a focus set, to restrict rule application to a subset of the atomspace.
• A rule-base (rule-based system) defining a set of rules and parameters for termination and control. See URE Parameters for more details.

The high-level pseudocode of the backward chainer is as follows:

1. Expand the BIT. The first time, initialize the BIT with the target; otherwise:
   1. Randomly select an inference tree (according to its probability estimate of success).
   2. Randomly select a premise of that inference tree (uniformly, for now).
   3. Randomly select a rule amongst the valid ones (according to its second order probability of success).
   4. Unify the selected rule with the selected premise, and expand the inference tree.
2. Fulfill the BIT.
   1. Select an inference tree (typically the one just expanded).
   2. Apply its corresponding BindLink to the atomspace, and insert the results, if any, into the solution set.
3. Reduce the BIT. To keep the BIT from growing indefinitely, randomly remove the inference trees that are the least likely to be expanded.
4. Check if one of the stopping criteria has been met. If so, exit and return the solution set; otherwise repeat from step 1.
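The expand/fulfill/reduce cycle can be sketched in Python. The BIT is kept as a plain list of inference trees; `expand`, `fulfill` and `tree_weight` stand in for the URE's rule unification, BindLink execution and fitness estimate, so all names here are illustrative only:

```python
import random

def backward_chain(target, rules, max_steps, expand, fulfill, tree_weight,
                   max_trees=100):
    """Sketch of the backward chainer loop over an evolving BIT."""
    bit = [target]               # first time: initialize the BIT with the target
    solutions = set()
    for _ in range(max_steps):   # stopping criterion: number of BIT expansions
        # 1. Expand: pick a tree by fitness; premise and rule selection
        #    are delegated to expand()
        tree = random.choices(bit, weights=[tree_weight(t) for t in bit])[0]
        new_tree = expand(tree, rules)
        if new_tree is not None:
            bit.append(new_tree)
        # 2. Fulfill: run the (typically just expanded) tree's BindLink
        solutions |= set(fulfill(new_tree if new_tree is not None else tree))
        # 3. Reduce: drop the least promising trees if the BIT grows too big
        if len(bit) > max_trees:
            bit.sort(key=tree_weight, reverse=True)
            del bit[max_trees:]
    # 4. On exit, return the solution set
    return solutions
```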

Fitness Functions

One may invoke the backward chainer with different goals in mind. The most common are, for instance:

1. Maximize the confidence of the target
2. Maximize the strength of the target

For the moment, the only fitness function implemented is the one associated with the goal of maximizing the confidence.

Target and Rule Unification

Unifying a rule and target, as required in step 1.4, means unifying the rule conclusion with the target. Unification works in the same way as in the forward chainer, and a specialized rule is created as a result of the unification. The only difference is that it is performed between the target and the rule conclusion, instead of between the source and the rule premises.

Termination

For now termination is solely determined by the number of BIT expansions. More termination criteria may be introduced in the future involving how well the inference goal is fulfilled, etc.

Mixed Forward and Backward Chaining [Not implemented yet]

Given that the forward and backward chaining processes as described above are so similar, it is unsurprising that the two can be combined together into a single “forward/backward chaining” process. We may assume the forward/backward chaining process is initialized with:

• A set of one or more source/target atoms.
• Fitness functions.
• A rule-base.
• Optionally, a focus-set.

High-level pseudocode for the forward/backward chaining process would be as follows.

Initialize the potential-source-target list with the initial source/target atoms, then repeat:

1. Choose the next source/target from the potential-source-target list (via stochastic selection based on weights that are assigned to potential sources or targets based on multiple factors).
2. Choose whether to apply a forward or backward step to the source/target (via some criterion, which may be a random selection, or may be a sophisticated judgment based on the context of the inference process).
3. Choose a rule to apply to the source/target (selecting from among the admissible rules in the rule-base via stochastic selection, based on the weights of the rules in the current context). Also, if a forward step was chosen, select a position for the source to assume among the premises of the rule (in general this would also be done based on the weights of the rule positions in the current context).
4. Choose premises for the rule (all the premises in the case of a backward step; additional premises in the case of a forward step) by applying the Pattern Matcher with an appropriate callback and a potential restriction to the focus-set (and, in the case of a forward step, a potential restriction to the potential-source-target list).
5. In the case of a forward step, apply the rule to the selected source premise and the chosen additional premises, and generate a conclusion.
6. Check if one of the stopping criteria has been met. If so, exit the loop and stop repeating.
7. Push the result onto the potential-source-target list: for a forward step, the generated conclusion; for a backward step, the selected premises.
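Since this mixed process is not implemented, the following is only a speculative sketch of the loop above; every helper (`weight`, `choose_direction`, `forward_step`, `backward_step`) is a hypothetical placeholder:

```python
import random

def fb_chain(atoms, rules, max_steps, weight, choose_direction,
             forward_step, backward_step):
    """Speculative sketch of the combined forward/backward loop.

    forward_step(atom, rule) returns a generated conclusion;
    backward_step(atom, rule) returns the selected premises.
    """
    pool = list(atoms)                    # potential-source-target list
    for _ in range(max_steps):            # stopping criterion: step count
        # Step 1: weighted stochastic selection of the next source/target
        atom = random.choices(pool, weights=[weight(a) for a in pool])[0]
        # Step 3: stochastic rule selection (uniform in this sketch)
        rule = random.choice(rules)
        # Step 2 + steps 5/7: direction choice, then push the result
        if choose_direction(atom) == 'forward':
            pool.append(forward_step(atom, rule))    # push the conclusion
        else:
            pool.extend(backward_step(atom, rule))   # push the premises
    return pool
```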