URE Control Rules

From OpenCog

Control Rules as Cognitive Schematics

As part of its Control policy, the URE can make use of control rules to calculate the weight of selecting each candidate rule next, based on the state of the current inference.

Control rules are specialized cognitive schematics

Context & Action ⟹ Goal

where the context is a property of the current inference, the action is an inference rule applied to a given premise (for backward chaining) or conclusion (for forward chaining) of the current inference tree, and the goal is the goodness of the produced inference.

More specifically, a control rule can be represented with an ImplicationLink

ImplicationLink <TV>
    <pattern>
    <good-produced-inference>

where

<pattern> expresses whether the inference tree being expanded, the node (premise or conclusion) from which it is expanded, and the rule used for expansion have some properties, or follow some patterns.

<good-produced-inference> expresses whether the produced inference is good, according to some measure of goodness.
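To make that shape concrete, here is a minimal Python sketch of a control rule as a context/action predicate paired with a truth value. All names here are hypothetical illustrations; the URE actually represents control rules as Atomese, not Python objects.

```python
from dataclasses import dataclass
from typing import Callable

# A simplified (strength, confidence) truth value, loosely modeled on
# PLN-style simple truth values; the real URE uses Atomese TruthValues.
@dataclass
class TruthValue:
    strength: float    # probability that the expansion yields a good inference
    confidence: float  # how much evidence backs that probability

# A control rule: if the pattern holds for (inference tree, expanded node,
# inference rule), then the expansion is good with the given truth value.
@dataclass
class ControlRule:
    pattern: Callable[[object, object, str], bool]  # context & action test
    tv: TruthValue                                  # goodness estimate

# Hypothetical example: "expanding with modus ponens is usually good".
rule = ControlRule(
    pattern=lambda tree, node, rule_name: rule_name == "modus-ponens",
    tv=TruthValue(strength=0.8, confidence=0.5),
)
print(rule.pattern(None, None, "modus-ponens"))  # → True
```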

Learning Control Rules

Control rules can be handwritten, but most interestingly they can be learned by OpenCog. Currently the main way they are learned is via pattern mining by looking for conjunctive patterns such as

<inference-pattern> And <node-pattern> And <rule-pattern> And <good-produced-inference>

as well as

<inference-pattern> And <node-pattern> And <rule-pattern> And Not <good-produced-inference>

which are then transformed into control rules.
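The following sketch shows one plausible way such mined counts could become a control rule's truth value: count how often the conjunctive pattern co-occurred with a good versus a bad produced inference over a history of past expansions, and map the counts to a PLN-style simple truth value. The count-to-confidence mapping (total / (total + K)) and the constant K are assumptions for illustration; the URE's actual formulas may differ.

```python
# Turn mined pattern counts over an expansion history into a truth value.
K = 100  # hypothetical lookahead constant for the confidence mapping

def control_rule_tv(history, pattern):
    """history: list of (tree, node, rule, good) tuples; pattern: predicate
    over (tree, node, rule). Returns (strength, confidence)."""
    matches = [entry for entry in history if pattern(*entry[:3])]
    total = len(matches)
    if total == 0:
        return (0.5, 0.0)  # no evidence: maximally unconfident
    positives = sum(1 for (_, _, _, good) in matches if good)
    return (positives / total, total / (total + K))

# Hypothetical mined history: deduction succeeded in 3 of 4 expansions.
history = [("t1", "n1", "deduction", True),
           ("t2", "n2", "deduction", True),
           ("t3", "n3", "deduction", False),
           ("t4", "n4", "deduction", True),
           ("t5", "n5", "modus-ponens", False)]
print(control_rule_tv(history, lambda t, n, r: r == "deduction"))
# strength 0.75, confidence 4/104
```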

Aggregating Control Rules

When multiple control rules apply to the same inference rule, which often happens when the rules are learned, they need to be aggregated in order to properly estimate the weight of that inference rule. For that, Bayesian model averaging, in particular a specialized form of Solomonoff Operator Induction [1], is currently used.
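As a rough illustration of the aggregation step, here is a confidence-weighted model average over the applicable control rules' truth values. This is a deliberately crude stand-in, not Solomonoff Operator Induction itself; the uninformative prior of 0.5 is likewise an assumption.

```python
# Aggregate several control rules' estimates for one inference rule by
# confidence-weighted model averaging (a simplification of the URE's
# Solomonoff-operator-induction-based scheme).

def aggregate(tvs):
    """tvs: list of (strength, confidence) pairs from the control rules
    that apply to a given inference rule. Returns an aggregate weight."""
    total_conf = sum(c for _, c in tvs)
    if total_conf == 0:
        return 0.5  # no evidence at all: fall back to an uninformative prior
    # Treat each control rule as a model: weight its predicted success
    # probability by the amount of evidence (confidence) backing it.
    return sum(s * c for s, c in tvs) / total_conf

print(aggregate([(0.9, 0.8), (0.3, 0.2)]))  # → 0.78
```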


Once all rule weights have been estimated for the next rule selection, selection is performed with [Thompson Sampling], just as it is with statically defined rule weights.
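The selection step can be sketched as follows: each rule's weight is treated as a Beta distribution over its success probability (here parameterized by success/failure pseudo-counts, an assumption for illustration), a value is sampled from each, and the rule with the highest draw is expanded. This naturally balances exploiting well-performing rules against exploring uncertain ones.

```python
import random

def thompson_select(rules):
    """rules: dict mapping rule name -> (alpha, beta) pseudo-counts.
    Samples each rule's success probability and picks the best draw."""
    draws = {name: random.betavariate(a, b) for name, (a, b) in rules.items()}
    return max(draws, key=draws.get)

random.seed(0)
rules = {"deduction": (8, 2),       # strong evidence of success
         "modus-ponens": (2, 8),    # strong evidence of failure
         "abduction": (1, 1)}       # no evidence: uniform prior
picks = [thompson_select(rules) for _ in range(1000)]
# "deduction" is picked most often, but the others still get explored.
print(max(set(picks), key=picks.count))  # → deduction
```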


For an example of control rule learning and usage, see inference control learning.