URE Control Rules
Control Rules as Cognitive Schematics
Control rules are specialized cognitive schematics of the form
Context & Action ⟹ Goal
where the context is a property of the current inference, the action is an inference rule applied to a given premise (for backward chaining) or conclusion (for forward chaining) of the current inference tree, and the goal is the goodness of the produced inference.
More specifically, a control rule can be represented with an ImplicationLink

  ImplicationLink <TV>
    AndLink
      <inference-pattern>
      <node-pattern>
      <rule-pattern>
    <good-produced-inference>
where the conjunction of <inference-pattern>, <node-pattern> and <rule-pattern> expresses whether the inference tree being expanded, the node (premise or conclusion) from which it is expanded, and the rule used for expansion have certain properties, or follow certain patterns, and <good-produced-inference> expresses whether the produced inference is good, according to some measure of goodness.
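As an illustration, such a control rule can be modeled as a nested expression. The following is a simplified Python sketch, not the actual Atomese API; the truth-value numbers and pattern strings are hypothetical placeholders:

```python
# Hypothetical nested-tuple model of a control rule ImplicationLink.
# This is a stand-in for Atomese, for illustration only.

def implication(tv, antecedent, consequent):
    """Build an ImplicationLink with a (strength, confidence) TV."""
    return ("ImplicationLink", tv, antecedent, consequent)

control_rule = implication(
    tv=(0.9, 0.4),  # (strength, confidence) -- made-up values
    antecedent=("AndLink",
                "<inference-pattern>",
                "<node-pattern>",
                "<rule-pattern>"),
    consequent="<good-produced-inference>",
)
```

The AndLink gathers the context and action patterns; the consequent is the goal, mirroring the Context & Action ⟹ Goal schematic above.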
Learning Control Rules
Control rules can be handwritten, but most interestingly they can be learned by OpenCog. Currently the main way they are learned is via pattern mining by looking for conjunctive patterns such as
  <inference-pattern> And <node-pattern> And <rule-pattern> And <good-produced-inference>
  <inference-pattern> And <node-pattern> And <rule-pattern> And Not <good-produced-inference>

which are then transformed into control rules.
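For instance, the truth value of the resulting implication can be estimated from the match counts of the positive and negative conjunctive patterns. This is a hedged sketch; the exact counting and confidence formula used by OpenCog may differ:

```python
def control_rule_tv(n_good, n_bad, k=800.0):
    """Estimate (strength, confidence) of Context & Action => Goal
    from the number of matches of the positive pattern (goal reached)
    and of the negative pattern (goal not reached).
    The confidence formula n / (n + k) is a common PLN-style choice,
    assumed here for illustration."""
    n = n_good + n_bad
    strength = n_good / n if n else 0.0
    confidence = n / (n + k)
    return strength, confidence

# 30 matches of the positive pattern, 10 of the negative pattern:
s, c = control_rule_tv(n_good=30, n_bad=10)
# s = 0.75
```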
Aggregating Control Rules
When multiple control rules apply to the same inference rule, which often happens when they are learned, they need to be aggregated in order to properly estimate the weight of that inference rule. For that, Bayesian model averaging, in particular a specialized form of Solomonoff Operator Induction, is currently used.
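As a hedged illustration of the aggregation idea (not the actual Solomonoff Operator Induction implementation), the weight of an inference rule can be estimated by averaging the predictions of the applicable control rules, each weighted by how much it is trusted:

```python
def average_rule_weight(rules):
    """Toy Bayesian-model-averaging-style estimate: each control rule
    predicts a probability of success (strength) and is weighted by its
    confidence. A uniform prior of 0.5 with unit weight dampens the
    estimate when total confidence is low (an illustrative choice, not
    the formula OpenCog uses)."""
    prior, prior_weight = 0.5, 1.0
    num = prior * prior_weight + sum(s * c for s, c in rules)
    den = prior_weight + sum(c for _, c in rules)
    return num / den

# Two learned control rules apply to the same inference rule,
# given as (strength, confidence) pairs:
w = average_rule_weight([(0.9, 0.6), (0.4, 0.2)])
```

The higher-confidence rule dominates the estimate, while the prior keeps the weight from swinging to an extreme on scant evidence.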
For an example of control rule learning and usage, see inference control learning.