OpenCogPrime:HebbianInferenceControl

Hebbian Inference Control

One aspect of inference control is Evaluator choice, which is based on mining contextually relevant information from the InferencePatternRepository (see OpenCogPrime:InferencePatternMining). But what about the Atom choice aspect? This can in some cases be handled via the OpenCogPrime:InferencePatternRepository as well, but it is less likely to serve the purpose there than in the case of Evaluator choice. Evaluator choice is about finding structurally similar inferences in roughly similar contexts and using them as guidance. But Atom choice has a different aspect: it is also about which Atoms have tended to be related to the other Atoms involved in an inference generically, not just in the context of prior inferences but in the context of prior perceptions, cognitions and actions in general.

Concretely, this means that Atom choice must make heavy use of HebbianLinks. The formation of HebbianLinks is discussed in the following chapter, on attention allocation; here it will suffice to convey the basic idea. The discussion of HebbianLinks here should also help clarify the motivation for the HebbianLink formation algorithm presented later. Of course, inference is not the only user of HebbianLinks in the OCP system, but its use of them is fairly representative.

The semantics of a HebbianLink between A and B is, intuitively: in the past, when A was important, B was also important. HebbianLinks are created via two basic mechanisms: pattern mining of associations between importances in the system's history, and PLN inference based on HebbianLinks created via pattern mining (and inference). Thus, saying that PLN inference control relies largely on HebbianLinks is in part saying that PLN inference control relies on PLN. There is a recursion here, but it is not a bottomless one: it bottoms out with HebbianLinks learned via pattern mining.
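To make the pattern-mining mechanism concrete, here is a minimal Python sketch. The representation, importance threshold and learning rate are assumptions for illustration, not OCP's actual data structures: whenever two Atoms are simultaneously important, the strength of the HebbianLink between them is nudged upward; otherwise it decays.

from collections import defaultdict
from itertools import combinations

IMPORTANCE_THRESHOLD = 0.5  # assumed cutoff for "important"
LEARNING_RATE = 0.1         # assumed update rate

hebbian = defaultdict(float)  # (atomA, atomB) -> strength in [0, 1]

def update_hebbian_links(importances):
    """importances: dict mapping atom name -> current importance value.
    Strengthen the HebbianLink between every pair of simultaneously
    important atoms; decay previously formed links that are not
    co-important at this moment."""
    important = {a for a, v in importances.items() if v >= IMPORTANCE_THRESHOLD}
    co_important = {tuple(sorted(p)) for p in combinations(important, 2)}
    for pair in set(hebbian) | co_important:
        target = 1.0 if pair in co_important else 0.0
        hebbian[pair] += LEARNING_RATE * (target - hebbian[pair])

# Example: "cat" and "mouse" are repeatedly important together.
for _ in range(30):
    update_hebbian_links({"cat": 0.9, "mouse": 0.8, "carburetor": 0.1})
print(round(hebbian[("cat", "mouse")], 2))  # approaches 1.0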

As an example of the Atom choices to be made by an individual Evaluator in the course of doing inference, consider that to evaluate (Inheritance A C) via a deduction-based Evaluator, some collection of intermediate nodes for the deduction must be chosen. In the case of higher-order deduction, each deduction may involve a number of complicated subsidiary steps, so perhaps only a single intermediate node will be chosen. This choice of intermediate nodes must be made via context-dependent prior probabilities. Other Evaluators besides deduction involve other, similar choices.

The basic means of using HebbianLinks in inferential Atom choice is simple: if there are Atoms linked via HebbianLinks with the other Atoms in the inference tree, then these Atoms should be given preference in the Evaluator's (bandit-problem-based) selection process.
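In code terms, this amounts to boosting each candidate's weight in the selection lottery by its Hebbian association with the Atoms already in play. A minimal sketch follows; the base prior value, the symmetric dictionary keying, and the helper names are assumptions, not OCP's actual API.

import random

def choose_atom(candidates, tree_atoms, hebbian, base_prior=0.1):
    """Pick a candidate Atom for the inference, weighting each candidate
    by its HebbianLink strengths to the Atoms already in the tree."""
    def weight(atom):
        # HebbianLinks treated as symmetric here, keyed by sorted pair.
        boost = sum(hebbian.get(tuple(sorted((atom, t))), 0.0)
                    for t in tree_atoms)
        return base_prior + boost
    weights = [weight(a) for a in candidates]
    return random.choices(candidates, weights=weights, k=1)[0]

# "wants" is strongly Hebbian-linked to "human", so it is usually chosen.
hebbian = {("human", "wants"): 0.8}
print(choose_atom(["wants", "carburetor"], ["human"], hebbian))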

Along the same lines but more subtly, another valuable heuristic for guiding inference control is "on-the-fly associatedness assessment." If there is a chance to apply the chosen Evaluator via working with Atoms that are:

  • strongly associated with the Atoms in the Atom being evaluated (via HebbianLinks)
  • strongly associated with each other via HebbianLinks (hence forming a cohesive set)

then this should count in favor of that application.
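One plausible way to operationalize this heuristic, purely as a sketch, is to score a candidate Atom set by its association with the Atoms under evaluation plus its internal cohesion. The additive combination below is an assumption; any monotone combination would fit the heuristic.

from itertools import combinations

def strength(hebbian, a, b):
    return hebbian.get(tuple(sorted((a, b))), 0.0)

def associatedness(candidates, targets, hebbian):
    """Association of the candidate set with the Atoms being evaluated,
    plus average pairwise cohesion within the candidate set."""
    to_targets = sum(strength(hebbian, c, t)
                     for c in candidates for t in targets)
    pairs = list(combinations(candidates, 2))
    cohesion = (sum(strength(hebbian, a, b) for a, b in pairs) / len(pairs)
                if pairs else 0.0)
    return to_targets + cohesion

hebbian = {("human", "wants"): 0.8, ("needs", "wants"): 0.7,
           ("human", "needs"): 0.6}
print(associatedness(["wants", "needs"], ["human"], hebbian))  # 2.1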

For instance, it may be the case that, when doing deduction regarding relationships between humans, using relationships involving other humans as intermediate nodes in the deduction is often useful. Formally this means that, when doing inference of the form:

Inheritance (Evaluation human A) B
Inheritance (Evaluation human C) B
|-
Inheritance A C

then it is often valuable to choose B so that:

HebbianLink B human

has high strength. This would follow from the above-mentioned heuristic.

Next, suppose one has noticed a more particular rule: in trying to reason about humans, it is particularly useful to think about their wants. This suggests that in abductions of the above form it is often useful to choose B of the form:

B = SatisfyingSet [ wants( human $X, concept C) ]

This is too fine-grained a cognitive-control intuition to come from simple association-following. Instead, it requires fairly specific data mining of the system's inference history: the recognition of "Hebbian predicates" of the form:

HebbianImplication[ ForAll $B]
  AND
    Inheritance $A human
    Inheritance $B human
    ThereExists $C
      Equivalence
        $B
        SatisfyingSet[$X]
          Evaluation wants ($X, $C)
  AND
    Inheritance $A $B
    Inheritance $C $B

The semantics of:

HebbianImplication X Y

is that when X is being thought about, it is often valuable to think about Y shortly thereafter.
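A rough sketch of the kind of inference-history mining involved follows; the episode record fields and the shape label are hypothetical. The idea is to scan successful inference episodes of a given shape and test how often the intermediate term satisfied a candidate feature; if the support is high enough, a corresponding Hebbian predicate is proposed.

def mine_hebbian_predicate(history, goal_shape, feature, min_support=0.5):
    """Propose a HebbianImplication-style predicate if the feature held
    for the intermediate term in most successful episodes of this shape."""
    relevant = [ep for ep in history
                if ep["shape"] == goal_shape and ep["success"]]
    if not relevant:
        return None
    support = sum(feature(ep["intermediate"])
                  for ep in relevant) / len(relevant)
    if support >= min_support:
        return ("HebbianImplication", goal_shape, feature.__name__)
    return None

def is_wants_set(term):
    return term.startswith("SatisfyingSet[wants")

history = [
    {"shape": "human-human-abduction", "success": True,
     "intermediate": "SatisfyingSet[wants($X, money)]"},
    {"shape": "human-human-abduction", "success": True,
     "intermediate": "SatisfyingSet[wants($X, fame)]"},
]
print(mine_hebbian_predicate(history, "human-human-abduction", is_wants_set))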

So, doing inference control according to heuristics like "think about humans in terms of their wants" requires a kind of backward-chaining inference that combines Hebbian implications with PLN inference rules. PLN inference says that to assess the relationship between two people, one approach is abduction. Hebbian learning adds that, when setting up an abduction between two people, a useful precondition is that the intermediate term in the abduction regards wants. A check can then be made for whether there are any relevant intermediate terms regarding wants in the system's memory.
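In outline, and with hypothetical representations throughout, such a check might look as follows: the Hebbian rule narrows the search for abduction intermediates to terms about wants, and memory is then queried for matching terms.

HUMANS = {"Alice", "Bob"}

# "When the goal relates two humans, intermediates about wants are useful."
hebbian_rules = [{
    "when": lambda goal: goal[1] in HUMANS and goal[2] in HUMANS,
    "suggests": lambda atom: atom.startswith("wants:"),
}]

def abduction_intermediates(goal, memory, rules):
    """goal: ('Inheritance', A, C). Abduction needs some B with
    Inheritance A B and Inheritance C B; Hebbian rules narrow the
    search to the kinds of B that have tended to help."""
    useful = [r["suggests"] for r in rules if r["when"](goal)]
    return [b for b in memory if any(f(b) for f in useful)]

memory = ["wants:money", "wants:fame", "color:red"]
print(abduction_intermediates(("Inheritance", "Alice", "Bob"),
                              memory, hebbian_rules))
# ['wants:money', 'wants:fame']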

What we see here is that the overall inference control strategy can be quite simple. For each Evaluator that could be applied, a check is made for whether there is any relevant Hebbian knowledge regarding the general constructs involved in the Atoms this Evaluator would manipulate. If so, the prior probability of this Evaluator is increased for the purposes of the Evaluator-choice bandit problem. Then, if the Evaluator is chosen, the specific Atoms this Evaluator would involve in the inference can be summoned up, and the relevant Hebbian knowledge regarding these Atoms can be utilized.
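The two-level strategy can be sketched as follows. This is illustrative Python; the Evaluator class and its relevance function are assumptions, not OCP's actual interfaces.

import random

class Evaluator:
    def __init__(self, name, prior, relevance):
        # relevance(target, hebbian) -> how strongly Hebbian knowledge
        # supports applying this Evaluator to the target (>= 0).
        self.name, self.prior, self.relevance = name, prior, relevance

def pick_evaluator(evaluators, target, hebbian):
    """Boost each Evaluator's prior by its relevant Hebbian knowledge,
    then make a bandit-style weighted draw among the Evaluators."""
    weights = [ev.prior * (1.0 + ev.relevance(target, hebbian))
               for ev in evaluators]
    return random.choices(evaluators, weights=weights, k=1)[0]

hebbian = {("human", "wants"): 0.8}
deduction = Evaluator("deduction", 0.3,
                      lambda t, h: h.get(("human", "wants"), 0.0))
induction = Evaluator("induction", 0.3, lambda t, h: 0.0)
print(pick_evaluator([deduction, induction], "Inheritance Alice Bob",
                     hebbian).name)  # usually "deduction"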

To take another similar example, suppose we want to evaluate:

Inheritance pig dog

via the deduction Evaluator (which also carries out induction and abduction). There are a lot of possible intermediate terms, but a reasonable heuristic is to ask a few basic questions about them: How do they move around? What do they eat? How do they reproduce? How intelligent are they? Some of these standard questions correspond to particular intermediate terms, e.g. the intelligence question partly boils down to computing:

Inheritance pig intelligent

and:

Inheritance dog intelligent

So a link:

HebbianImplication animal intelligent

may be all that's needed to guide inference to asking this question. This HebbianLink says that when thinking about animals, it's often interesting to think about intelligence. This should bias the deduction Evaluator to choose intelligent as an intermediate node for inference.
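A sketch of that bias, assuming a hypothetical table of directed HebbianImplications and some shared category knowledge: an intermediate term is proposed whenever an implication's source concept covers both endpoints of the Inheritance being evaluated.

# Directed HebbianImplication links: "when thinking about the source,
# it is often worth thinking about the target shortly after."
hebbian_implications = {"animal": ["intelligent"]}
categories = {"pig": {"animal"}, "dog": {"animal"}}

def propose_intermediates(a, b):
    """Suggest intermediate terms for evaluating Inheritance a b, using
    HebbianImplications whose source category covers both endpoints."""
    shared = categories.get(a, set()) & categories.get(b, set())
    return [t for cat in shared for t in hebbian_implications.get(cat, [])]

print(propose_intermediates("pig", "dog"))  # ['intelligent']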

On the other hand, the "what do they eat" question is subtler, and boils down to asking: find $X so that, when:

  R($X) = SatisfyingSet[$Y] eats ($Y,$X)

holds, then we have:

Inheritance pig R($X)

and:

Inheritance dog R($X)

In this case, a HebbianLink from animal to eat would not really be fine-grained enough. Instead we want a link of the form:

HebbianImplication 
    Inheritance $X animal
    SatisfyingSet[$Y] eats ($Y,$X)

This says that when thinking about an animal, it's interesting to think about what that animal eats.
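As a sketch, with a hypothetical eats relation and with Inheritance into R($X) approximated as set membership, the instantiated question can be answered as follows.

# Hypothetical eats relation: (eater, eaten) pairs.
eats = {("pig", "slops"), ("dog", "slops"), ("dog", "kibble")}

def R(x):
    """R($X) = SatisfyingSet[$Y] eats($Y, $X): everything that eats X."""
    return {y for (y, food) in eats if food == x}

def shared_foods(a, b):
    """Find $X with 'Inheritance a R($X)' and 'Inheritance b R($X)',
    approximated here as membership of a and b in R($X)."""
    return [x for x in {food for (_, food) in eats}
            if a in R(x) and b in R(x)]

print(shared_foods("pig", "dog"))  # ['slops']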

The deduction Evaluator, when choosing which intermediate nodes to use, needs to look at the scope of available HebbianLinks and HebbianPredicates and use them to guide its choice. And if no good intermediate nodes are available, it may report that it does not have enough experience to assess, with any confidence, whether it can come up with a good conclusion. As a consequence of the bandit-problem dynamics, it may then be allocated reduced resources, or another Evaluator may be chosen altogether.
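Finally, a sketch of that fallback behavior, where the confidence floor is an assumed parameter: if no candidate intermediate has enough Hebbian support, the Evaluator defers rather than guessing, leaving the bandit layer free to shift resources elsewhere.

CONFIDENCE_FLOOR = 0.2  # assumed minimum Hebbian support

def pick_or_defer(candidates, targets, hebbian):
    """Return the best-supported intermediate node, or None when support
    is too weak, signalling "insufficient experience" to the bandit layer."""
    def support(c):
        return sum(hebbian.get(tuple(sorted((c, t))), 0.0) for t in targets)
    best = max(candidates, key=support, default=None)
    if best is None or support(best) < CONFIDENCE_FLOOR:
        return None  # defer: reduce resources or switch Evaluator
    return best

print(pick_or_defer(["carburetor"], ["pig", "dog"], {}))  # None -> defer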

<< Integrative Inference