PLN Design, 2013

This page predates the newer, rule-engine-based URE Chainer Design. It describes a design with a more "ambient" implementation of the PLN chainer, apparently intended to be built on top of the now-obsolete Python version.

Introduction

This page sketches a design for an OpenCog-based PLN implementation that, compared to previous PLN implementations, looks less like a traditional logic system, and more like “a way of directing OpenCog’s broader cognitive dynamic to do PLN inference.”

NOTE: At the moment this page is both sketchy and preliminary, basically just some notes by Ben in need of review by others before it can be taken as a guide for implementation.

The downside of this sort of implementation is that it may be slower for the sorts of inferences that traditional logic engines are good at. The upside is that it should allow PLN to work together more easily with other OpenCog processes, thus (hopefully) enabling PLN to work better on problems involving a large number of uncertain knowledge items, and other kinds of inference typical in commonsense human reasoning but not well dealt with by traditional logic engines.

Some conceptual inspiration for the approach taken here can be found in "A System-Theoretic Analysis of Focused Cognition, and its Implications for the Emergence of Self and Attention" (Ben Goertzel, Novamente LLC, November 4, 2006).

Implementation Guide

New PLN Implementation Guide, 2013 provides some suggestions regarding how to implement the various aspects of the new PLN in a sensible order.

A tutorial on how to use the new implementation is available in opencog/python/pln/README.md.

Elements of PLN

This section reviews relevant basics of PLN. The same material is reviewed more thoroughly in many other places.

Rules and Formulas

The key elements of PLN that must be understood in order to follow this design sketch are

  1. Rules
  2. Formulas

A PLN rule has

  1. Input: A tuple of Atoms (which must satisfy certain criteria, specific to the Rule)
  2. Output: A tuple of Atoms

(This is very general. In all existing cases I can think of, the output is a single Atom, and the input is a single Atom or a pair of Atoms.)

The prototypical example is the DeductionRule. Its input must look like

X_Link A B
X_Link B C

And its output then looks like

X_Link A C

Here, X_Link may be any of InheritanceLink, SubsetLink, ImplicationLink or ExtensionalImplicationLink.

A PLN formula goes along with a PLN rule, and tells the uncertain truth value of the output, based on the uncertain truth value of the input. For example, if we have

X_Link A B <s1>
X_Link B C <s2>

then the standard PLN deduction formula tells us

X_Link A C <s3>

with

s3 = s1 * s2 + (1 - s1) * (sC - sB * s2) / (1 - sB)

where sB and sC denote the term probabilities (node strengths) of B and C. (This is the standard independence-based PLN deduction formula.)

In this example, the uncertain truth value of each Atom is given as a single “strength” number. In general, uncertain truth values in PLN may take multiple forms, such as

  • Single strength values like .8, which may indicate probability or fuzzy truth value, depending on the Atom type
  • (strength, confidence) pairs like (.8, .4)
  • (strength, count) pairs like (.8, 15)
  • indefinite probabilities like (.6, .9, .95), which indicate credible intervals of probabilities
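
To make this concrete, here is a minimal Python sketch of the standard deduction formula above, for (strength, confidence) truth values. The function name and the confidence combination (simply multiplying confidences) are assumptions made for illustration, not the actual PLN formula code:

def simple_deduction_formula(sAB, sBC, sB, sC, cAB=1.0, cBC=1.0):
    # Independence-based PLN deduction: estimate the strength of
    # (X_Link A C) from (X_Link A B), (X_Link B C) and the term
    # probabilities of B and C.
    if sB >= 1.0:
        return sC, min(cAB, cBC)  # degenerate case: avoid division by zero
    sAC = sAB * sBC + (1.0 - sAB) * (sC - sB * sBC) / (1.0 - sB)
    sAC = max(0.0, min(1.0, sAC))  # clamp to [0, 1]
    return sAC, cAB * cBC  # simplistic (assumed) confidence combination

# "people are animals" <0.9>, "animals breathe" <0.95>, with assumed
# term probabilities s(animal) = 0.1, s(breathe) = 0.2:
print(simple_deduction_formula(0.9, 0.95, 0.1, 0.2))  # ~(0.867, 1.0)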

Forward and backward chaining

Typical patterns of usage of PLN are forward-chaining and backward-chaining inference.

Forward chaining basically means:

  1. Start with a pool (a list) of Atoms of interest
  2. Apply PLN rules to these Atoms, to generate new Atoms, hopefully also of interest
  3. Add these new Atoms to the pool, and return to Step 2

EXAMPLE: “People are animals” and “animals breathe” are in the pool of Atoms. These are combined by the Deduction rule to form the conclusion “people breathe”.
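
As a rough illustration, a minimal forward-chaining loop might be sketched in Python as follows; here the pool is a Python set, and the Rule interface (.arity, .matches(), .apply()) is a hypothetical stand-in for the real PLN rule classes:

import itertools

def forward_chain(pool, rules, max_steps=100):
    # Repeatedly apply rules to tuples of Atoms from the pool, adding
    # any novel conclusions back into the pool (steps 1-3 above).
    for _ in range(max_steps):
        new_atoms = set()
        for rule in rules:
            for inputs in itertools.permutations(pool, rule.arity):
                if rule.matches(inputs):
                    conclusion = rule.apply(inputs)
                    if conclusion not in pool:
                        new_atoms.add(conclusion)
        if not new_atoms:
            break  # fixed point: no new conclusions were generated
        pool |= new_atoms
    return pool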

Backward chaining falls into two cases.

  1. Truth value query. Given a target Atom whose truth value is not known (or is too uncertainly known), plus a pool of Atoms, find a way to estimate the truth value of the target Atom, via combining the Atoms in the pool using the inference Rules
    EXAMPLE: The target is “do people breathe” (InheritanceLink people breathe)…. The truth value of the target is estimated via doing the inference “People are animals, animals breathe, therefore people breathe.”
  2. Variable fulfillment query. Given a target Link (Atoms may be Nodes or Links) with one or more VariableAtoms among its targets, figure out what Atoms may be put in place of these VariableAtoms, so as to give the target Link a high strength × confidence (i.e. a "high truth value"). (A toy sketch of this kind of query is given after this list.)
    EXAMPLE: The target is “what breathes?”, i.e. “InheritanceLink $X breathe”.
    Direct lookup into the AtomSpace reveals the Atom "InheritanceLink animal breathe", indicating that the slot $X may be filled by "animal". Inference reveals "InheritanceLink people breathe", so that the slot $X may also be filled by "people"….
    EXAMPLE: the target is “what breathes and adds”, ie “(InheritanceLink $X breathe) AND (InheritanceLink $X add)”.
    Inference reveals that the slot $X may be filled by “people” but not “cats” or “computers.”
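
The following toy Python sketch illustrates case 2, the variable fulfillment query, over a stand-in AtomSpace of (link_type, source, target, strength, confidence) tuples; the '$' variable convention and the min_sc threshold are assumptions for illustration only:

def fulfill(template, atomspace, min_sc=0.5):
    # Find bindings for '$'-variables in `template` such that the
    # instantiated link exists with high strength * confidence.
    link_type, src, tgt = template
    results = []
    for (lt, a, b, s, c) in atomspace:
        if lt != link_type or s * c < min_sc:
            continue
        binding, ok = {}, True
        for want, got in ((src, a), (tgt, b)):
            if want.startswith('$'):
                if binding.setdefault(want, got) != got:
                    ok = False  # same variable bound inconsistently
            elif want != got:
                ok = False
        if ok:
            results.append(binding)
    return results

toy = [("InheritanceLink", "animal", "breathe", 0.9, 0.9),
       ("InheritanceLink", "people", "breathe", 0.9, 0.8)]
print(fulfill(("InheritanceLink", "$X", "breathe"), toy))
# [{'$X': 'animal'}, {'$X': 'people'}]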

Common-sense inference may involve a combination of backward chaining and forward chaining.

The hardest part of inference is “inference control” – that is, knowing which among the many possible inference steps to take, in order to obtain the desired information (in backward chaining) or to obtain interesting new information (in forward chaining). In an Atomspace with a large number of (often quite uncertain) Atoms, there are many, many possibilities and powerful heuristics are needed to choose between them. The best guide to inference control is some sort of induction based on the system’s past history of which inferences have been useful, of course….

Sketch of an Atomspace-focused PLN design

The Atomspace AttentionalFocus as PLN's Working Memory

Every Atom in the Atomspace has an STI (ShortTermImportance) value…. The AttentionalFocus is defined as the set of Atoms with STI above a certain threshold (the AttentionalFocusBoundary). Let us assume that the “active pool of Atoms” for PLN is the AttentionalFocus…. This approach integrates PLN and ECAN (economic attention allocation, the process that updates STI and LTI=LongTermImportance values) right from the start….

Representing PLN Rules as GroundedSchemaNodes

Next, let us represent inference rules as GroundedSchemaNodes…. So, for instance, the PLN Deduction Rule becomes a GroundedSchemaNode (the PLNDeductionRule node referred to below).

The actual PLN Rules and Formulas are then packed into the internal execution methods of GroundedSchemaNodes. In the current PLN code, each inference rule has a Rule class and a separate Formula class. This separation seems worth maintaining…. So then, e.g., the PLNDeductionRule GroundedSchemaNode would invoke a function of the general form

Link PLNDeductionRule(Link L1, Link L2)

which calculates the deductive consequence of two links. This function would in turn invoke a function of the form

TruthValue PLNDeductionFormula(TruthValue tAB, TruthValue tBC, TruthValue tA, TruthValue tB, TruthValue tC)

which in turn might invoke functions such as

SimpleTruthValue SimplePLNDeductionFormula(SimpleTruthValue tAB, SimpleTruthValue tBC, SimpleTruthValue tA, SimpleTruthValue tB, SimpleTruthValue tC)

IndefiniteTruthValue IndefinitePLNDeductionFormula(IndefiniteTruthValue tAB, IndefiniteTruthValue tBC, IndefiniteTruthValue tA, IndefiniteTruthValue tB, IndefiniteTruthValue tC)
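
A minimal Python sketch of this Rule/Formula separation, reusing the simple_deduction_formula sketch from earlier; the tuple-based link representation and the term_tv strength lookup are assumptions for illustration:

def pln_deduction_rule(L1, L2, term_tv):
    # Rule level: check that the inputs have the required form
    # (X_Link A B), (X_Link B C), build the output Atom, and delegate
    # the numeric work to the separate formula function.
    (lt1, A, B1, (sAB, cAB)) = L1
    (lt2, B2, C, (sBC, cBC)) = L2
    if lt1 != lt2 or B1 != B2:
        return None  # inputs do not satisfy the rule's input criteria
    tv = simple_deduction_formula(sAB, sBC, term_tv[B1], term_tv[C],
                                  cAB, cBC)
    return (lt1, A, C, tv)  # the deduced link: X_Link A C <tv>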

Recording Executed PLN Inferences in the Atomspace

Once an inference has been carried out, it can be represented in the Atomspace, e.g. as

ExecutionLink
    GroundedSchemaNode: PLNDeductionRule
    ListLink
        HypotheticalLink
            InheritanceLink people animal <tv1>
        HypotheticalLink
            InheritanceLink animal breathe <tv2>
    HypotheticalLink
        InheritanceLink people breathe <tv3>

In the above, e.g.

InheritanceLink people animal

is used as shorthand for

InheritanceLink C1 C2

where C1 and C2 are ConceptNodes representing “people” and “animal” respectively.

Note we can also have records of inferences involving variables, such as

ExecutionLink
    GroundedSchemaNode: PLNDeductionRule
    ListLink
        HypotheticalLink
            InheritanceLink $V1 animal <tv1>
        HypotheticalLink
            InheritanceLink animal breathe <tv2>
    HypotheticalLink
        InheritanceLink $V1 breathe <tv3>

where $V1 is a specific VariableNode.

Anatomy of a Single Inference Step

A single inference step, then, may be viewed as follows:

  1. Choose an inference rule R and a tuple of Atoms that collectively match the input conditions of the rule
  2. Apply the chosen rule R to the chosen input Atoms
  3. Create an ExecutionLink recording the output found
  4. In addition to retaining this ExecutionLink in the Atomspace, also save a copy of it to the InferenceRepository (this is not needed for the very first implementation, but will be very useful once PLN is in regular use.)

(The InferenceRepository is a special Atomspace that exists just to save a record of PLN inferences. It can be mined, after the fact, to learn inference patterns, which can be used to guide future inferences.)
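
Sketched in Python, with plain lists standing in for the Atomspace and the InferenceRepository, and an assumed rule object exposing .name and .apply():

def single_inference_step(rule, inputs, atomspace, inference_repository):
    output = rule.apply(*inputs)           # step 2: apply rule R
    if output is None:
        return None
    record = ("ExecutionLink", rule.name,  # step 3: record the inference
              ("ListLink",) + tuple(("HypotheticalLink", a) for a in inputs),
              ("HypotheticalLink", output))
    atomspace.append(output)
    atomspace.append(record)               # retain the record in the Atomspace
    inference_repository.append(record)    # step 4: copy for later mining
    return output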

Basic Forward and Backward Inference Steps

The choice of an inference step, at the microscopic level, may be done in a number of ways, of which perhaps the simplest are:

  • Basic forward step. Choose an Atom A1, then choose a rule R. If R only takes one input, then apply R to A1. If R applies to two Atoms, then find another Atom A2 so that (A1, A2) may be taken as the inputs of R.
  • Basic backward step. Choose an Atom A1, then choose a rule R. If R takes only one input, then find an Atom A2 so that applying R to A2, yields A1 as output. If R takes two inputs, then find two Atoms (A2, A3) so that applying R to (A2, A3) yields A1 as output.

Given a target Atom such as

A1 = Inheritance $v1 breathe

the VariableAbstractionRule will do inferences such as

ExecutionLink
    VariableAbstractionRule
    HypotheticalLink
        Inheritance people breathe
    HypotheticalLink
        Inheritance $v1 breathe

This allows the basic backward step to carry out variable fulfillment queries as well as truth value queries. We may encapsulate these processes in the Atomspace as

GroundedSchemaNode: BasicForwardInferenceStep
GroundedSchemaNode: BasicBackwardInferenceStep

which take as input some Atom A1….

… and also as

GroundedSchemaNode: AttentionalForwardInferenceStep
GroundedSchemaNode: AttentionalBackwardInferenceStep

which automatically choose the Atom A1 they start with, via choosing some Atom within the AttentionalFocus, with probability proportional to STI.

Forward chaining, in its simplest form, then becomes the process of repeatedly executing the AttentionalForwardInferenceStep SchemaNode.

Backward chaining, in the simplest case (we will discuss more complex cases below), becomes the process of

  1. Repeatedly executing the BasicBackwardInferenceStep SchemaNode, starting from a given target Atom
  2. Concurrently, repeatedly executing the AttentionalBackwardInferenceStep SchemaNode, to ensure that backward inference keeps occurring, regarding Atoms that were created via Step 1

Inside the BasicForwardStep or BasicBackwardStep schema, there are two choices to be made: choosing a rule R, and then choosing additional Atoms A2 and possibly A3. The choice of the rule R should be made probabilistically, choosing each rule with probability proportional to a certain weight associated with each rule. Initially we can assign these weights generically, by hand, separately for each application domain. Later on they should be chosen adaptively, based on information mined from the InferenceRepository, regarding which rules have been better in which contexts.

The choice of the additional Atoms A2 and A3 is subtler, and should be done using STI values as a guide:

  1. First the AttentionalFocus is searched, to find all the Atoms there that fit the input criteria of the rule R. Among all the Atoms found, an Atom is chosen with probability proportional to STI.
  2. If the AttentionalFocus doesn’t contain anything suitable, then an effort may be made to search the rest of the Atomspace to find something suitable. If multiple candidates are found within the amount of effort allotted, then one should be chosen with probability proportional to STI.

If an Atom A is produced as output of a forward inference step, or is chosen as the input of a backward inference step, then the STI of this Atom A should be incremented. This will increase the probability of A being chosen for ongoing inference. In this way, attention allocation is used to guide the course of ongoing inference.
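
Both choices can be sketched as roulette-wheel selection in Python; rule_weight, sti and fits_input are assumed interfaces, not the actual API:

import random

def pick_proportional(items, weight):
    # Choose an item with probability proportional to weight(item).
    return random.choices(items, weights=[weight(x) for x in items], k=1)[0]

def choose_rule_and_input(rules, rule_weight, attentional_focus, sti):
    # Pick a rule with probability proportional to its weight, then an
    # Atom fitting its input criteria, with probability proportional to STI.
    rule = pick_proportional(rules, rule_weight)
    candidates = [a for a in attentional_focus if rule.fits_input(a)]
    if not candidates:
        return rule, None  # fall back to searching the rest of the Atomspace
    return rule, pick_proportional(candidates, sti)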

Interaction of Forward and Backward Inference

Starting from a target, a series of backward inferences can figure out ways to estimate the truth value of that target, or fill in the variables within that target.

However, once the backward-going chain of inferences is done (to some reasonable degree of satisfaction), there is still the remaining task of using all the conclusions drawn during the series of backward inferences, to actually update the target.

Elegantly, it seems this can be done via forward inference. So if forward and backward inference are both operating concurrently on the same pool of Atoms, it is forward inference that will propagate the information learned during backward chaining inference, up to the target of the backward chain.

Coordinating Variable Bindings

Probably the nastiest subtlety that comes up in a PLN implementation is the coordination of the values assigned to variables, across different micro-level inferences that are supposed to be coordinated together as part of the same macro-level inference.

For a very simple example, suppose we have a truth-value query with target

A1 = InheritanceLink Bob rich

Suppose the deduction rule R is chosen.

Then if we can find (A2, A3) that look like, say,

A2 = InheritanceLink Bob owns_mansion
A3 = InheritanceLink owns_mansion rich

then our problem is solved.

But what if there is no such simple solution in the Atomspace available? Then we have to build something like

A2 = InheritanceLink Bob $v1
A3 = InheritanceLink $v1 rich

and try to find something that works to fill in the variable $v1….

But this is tricky, because $v1 is now subject to two constraints (A2 and A3). So, suppose A2 and A3 are both created as a result of applying BasicBackwardInferenceStep to A1, and thus A2 and A3 both get high STI values. Then both A2 and A3 are going to be acted on by AttentionalBackwardInferenceStep. But as A2 and A3 are derived from other inputs via backward inference, it is necessary that the values assigned to $v1 in the context of A2 and in the context of A3 remain consistent with each other.

Note that, according to the operation of the Atomspace, the same VariableAtom will be used to represent $v1 no matter where it occurs.

For instance, it will be problematic if one inference rule schema tries to instantiate $v1 with “owns_mansion”, but another tries to instantiate $v1 with “lives_in_Manhattan”.

That is, we don’t want to find

InheritanceLink Bob lives_in_mansion
InheritanceLink lives_in_mansion owns_mansion
|-
InheritanceLink Bob owns_mansion

which binds $v1 to owns_mansion

and

InheritanceLink lives_in_Manhattan lives_in_top_city
InheritanceLink lives_in_top_city rich
|-
InheritanceLink lives_in_Manhattan rich

which binds $v1 to lives_in_Manhattan

We want A2 and A3 to be derived in ways that bind $v1 to the same thing.

The most straightforward way to avoid confusion in this sort of context is to introduce an additional kind of inference step:

  • Variable-guided backward step. Choose a set V of VariableNodes (which may just be a single VariableNode $v1), and identify the set S_V of all Atoms involving any of the variables in V.
    1. Firstly: If V divides into two sets V1 and V2, so that no Atom contains variables in both V1 and V2, then launch separate variable-guided backward steps for V1 and V2. (This step is "Problem Decomposition".)
    2. Carry out the basic backward step for all the Atoms in S_V, but restricting the search for Atoms A2, A3 in such a way that each of the variables in V is consistently instantiated. This is a non-trivial optimization, and more will be said about this below.
  • Variable-guided backward step, Atom-triggered. Choose an Atom A1. Identify the set V of VariableNodes targeted by A1, and then do a variable-guided backward step starting from V.

This variable guidance may, of course, be incorporated into the AttentionalBackwardInferenceStep as well…. In this case, backward chaining becomes the process of

  1. Repeatedly executing the VariableGuidedBackwardInferenceStep SchemaNode, starting from a given target Atom
  2. Concurrently, repeatedly executing the AttentionalVariableGuidedBackwardInferenceStep SchemaNode, to ensure that backward inference keeps occurring, regarding Atoms that were created via Step 1

The hard work here is then done in step 2 of the Variable-Guided Backward Step, which has to search for multiple Atoms, to fulfill the requirements of multiple inference rules, in a way that keeps variable instantiations consistent. But this same difficulty exists in a conventional backward chaining framework; it is just arranged differently, and not as neatly encapsulated.
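
Continuing the toy representation from the fulfill() sketch earlier, the consistency requirement in step 2 can be illustrated by intersecting the candidate fillers that each template allows for the shared variable (hard-coded to $v1 here for brevity):

def consistent_fillers(templates, atomspace):
    # Keep only fillers of '$v1' that satisfy every template at once.
    candidate_sets = []
    for t in templates:
        bindings = fulfill(t, atomspace, min_sc=0.0)
        candidate_sets.append({b['$v1'] for b in bindings})
    return set.intersection(*candidate_sets)

toy = [("InheritanceLink", "Bob", "owns_mansion", 0.8, 0.9),
       ("InheritanceLink", "Bob", "lives_in_Manhattan", 0.9, 0.9),
       ("InheritanceLink", "owns_mansion", "rich", 0.7, 0.8)]
print(consistent_fillers([("InheritanceLink", "Bob", "$v1"),
                          ("InheritanceLink", "$v1", "rich")], toy))
# {'owns_mansion'}  -- lives_in_Manhattan fails the second constraint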

An Example of Problem Decomposition

Here is an example of a case where, given a problem of finding values to assign a set of variables to make a set of expressions hold simultaneously, the appropriate course is to divide the set of expressions into two separate parts. Suppose we have the six expressions

E1 = Inheritance ( $v1,  Animal)

E2 = Evaluation( $v1, ($v2, Bacon) )

E3 = Inheritance( $v2, $v3)

E4 = Evaluation( Eat, ($v3, $v1) )

E5 = Evaluation (Eat, ($v7, $v9) )

E6 = Inheritance $v9 $v6

Since the set {E1, E2, E3, E4} doesn't share any variables with {E5, E6}, there is no reason to consider them all as one problem. Rather we will do better to decompose it into two problems, one involving {E1, E2, E3, E4} and one involving {E5, E6}.

In general, given a set of expressions, one can divide it into subsets, where each subset S has the property that: for every variable v contained in S, all occurrences of v within the expression set are in expressions contained in S.
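
As a sketch, this decomposition is just a connected-components computation on the "shares a variable" graph; the expression names and the vars_of lookup below are illustrative:

def decompose(expressions, vars_of):
    # Group expressions into connected components of the
    # variable-sharing graph; each component is an independent problem.
    components = []  # list of (exprs, vars) pairs
    for expr in expressions:
        vs, merged, rest = set(vars_of(expr)), [expr], []
        for comp_exprs, comp_vars in components:
            if comp_vars & vs:       # shares a variable: merge
                merged.extend(comp_exprs)
                vs |= comp_vars
            else:
                rest.append((comp_exprs, comp_vars))
        components = rest + [(merged, vs)]
    return [sorted(exprs) for exprs, _ in components]

# The six expressions above, reduced to their variable sets:
exprs = {'E1': {'$v1'}, 'E2': {'$v1', '$v2'}, 'E3': {'$v2', '$v3'},
         'E4': {'$v3', '$v1'}, 'E5': {'$v7', '$v9'}, 'E6': {'$v9', '$v6'}}
print(decompose(sorted(exprs), exprs.get))
# [['E1', 'E2', 'E3', 'E4'], ['E5', 'E6']]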

Example of Casting a Variable Assignment Problem as an Optimization Problem

Suppose we have the four expressions

E1 = Inheritance ( $v1,  Animal)
E2 = Evaluation( $v2, ($v1, Bacon) )
E3 = Inheritance( $v2, $v3)
E4 = Evaluation( Enjoy, ($v1, $v3) )

where Animal, Bacon and Enjoy are specific Atoms… Suppose the task at hand is to find values for ($v1, $v2, $v3) that will make all of these expressions confidently true.

If there is some assignment

($v1, $v2, $v3) = (A1,A2, A3)

ready to hand in the Atomspace, that fulfills the equations E1, E2, E3, E4, then the pattern matcher will find it!

For instance,

($v1, $v2, $v3) = (Cat, Eat, Chew)

would work here, since

E1 = Inheritance ( Cat,  Animal)
E2 = Evaluation( Eat, (Cat, Bacon) )
E3 = Inheritance( Eat, Chew)
E4 = Evaluation( Enjoy, (Cat, Chew) )

are all reasonably true.

If there is no such assignment ready to hand, then one is faced with a search problem.

I suggest this can be approached as an optimization problem, of maximizing a function

f($v1, $v2, $v3) = sc(E1) * sc(E2) * sc(E3) * sc(E4)

where

sc(A) = A.strength * A.confidence

f is then a function with signature

f: Atom^3 ==> float

f can then be optimized by a host of optimization algorithms. For instance a genetic algorithm approach might work, but a BOA (Bayesian Optimization Algorithm) approach would probably be better.

In a GA approach, mutation would work as follows. Suppose one had a candidate

($v1, $v2, $v3) = (A1,A2, A3)

Then one could mutate this candidate by (for instance) replacing A1 with some other Atom that is similar to A1, e.g. connected to A1 with a high-weight SimilarityLink in the Atomspace.
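
Here is a Python sketch of this fitness function and of a similarity-based mutation operator; evaluate_tv and similar_atoms are assumed hooks into the Atomspace (looking up the truth value of an instantiated expression, and the high-weight SimilarityLink neighbors of an Atom, respectively):

import random

def fitness(assignment, expressions, evaluate_tv):
    # f = product over expressions of strength * confidence, under the
    # candidate variable assignment.
    f = 1.0
    for expr in expressions:
        s, c = evaluate_tv(expr, assignment)
        f *= s * c
    return f

def mutate(assignment, similar_atoms):
    # Replace one variable's Atom with a similar Atom, e.g. one joined
    # to it by a high-weight SimilarityLink in the Atomspace.
    var = random.choice(list(assignment))
    child = dict(assignment)
    neighbors = similar_atoms(assignment[var])
    if neighbors:
        child[var] = random.choice(neighbors)
    return child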

Backward Chaining via Nested Optimization

Given this framework that does inference involving variables via using optimization to solve simultaneous equations of logical expressions with overlapping variables, “backward chaining” becomes the iterative launch of repeated optimization problems, each one defined in terms of the previous ones.

I will illustrate this via continuing with the (E1, E2, E3, E4) example from above… Suppose one found an assignment

($v1, $v2, $v3) = (A1,A2, A3)

that worked for every equation except E3. Then there is the problem of finding some way to make

E3 = Inheritance( A2, A3)

work….

For instance, what if we have found the assignment

($v1, $v2, $v3) = (Cat, Eat, Chase)

In this case, we have

E1 = Inheritance ( Cat,  Animal)  -- YES!
E2 = Evaluation( Eat, (Cat, Bacon) ) -- YES!
E3 = Inheritance( Eat, Chase) -- NOT REALLY ;-(
E4 = Evaluation( Enjoy, (Cat, Chase) ) -- YES!

so the assignment works for every equation except E3. Then there is the problem of finding some way to make

E3 = Inheritance( Eat, Chase)

work…. But if the truth value of Inheritance( Eat, Chase) has a low strength and high confidence, this may seem hopeless, so this assignment may not get followed up on.

On the other hand, suppose we have found the assignment

($v1, $v2, $v3) = (Cat, Eat, SocialActivity)


In this case, for a particular reasoning system, we might have

E1 = Inheritance ( Cat,  Animal)  -- YES!
E2 = Evaluation( Eat, (Cat, Bacon) ) -- YES!
E3 = Inheritance( Eat, SocialActivity) -- UNKNOWN
E4 = Evaluation( Enjoy, (Cat, SocialActivity) ) -- YES!

The above would hold if the reasoning system knew that cats enjoy social activities, but did not know whether eating is a social activity. In this case, the reasoning system would have reason to launch a new inference process aimed at assessing the truth value of

E3 = Inheritance( Eat, SocialActivity)

This is backward chaining: Launching a new inference process to figure out a question raised by another inference process. For instance, in this case the inference engine might: Choose an inference Rule (let's say it's Deduction, for simplicity), and then look for $v4 so that

Inheritance Eat $v4
Inheritance $v4 SocialActivity

are both true…. In this case one has spawned a new Variable-Guided Backward Inference problem, which must be solved in order to make ($v1, $v2, $v3) = (Cat, Eat, SocialActivity) an acceptable solution for the problem of {E1, E2, E3, E4}….

Or it might choose the Induction rule, and look for $v4 so that

Inheritance $v4 Eat
Inheritance $v4 SocialActivity

Maybe then it would find that $v4=Dinner works, because it knows that

Inheritance Dinner Eat
Inheritance Dinner SocialActivity

But maybe $v4=Dinner doesn’t boost the truth value of

E3 = Inheritance( Eat, SocialActivity) 
 

high enough. In that case it may keep searching for more information about E3 in the context of this particular variable assignment. It might choose Induction again, and discover e.g. that

Inheritance Lunch Eat
Inheritance Lunch SocialActivity

In this example, I've assumed that some non-backward-chaining search mechanism (like a GA) found a solution that almost works, so that backward chaining is only needed on E3. But of course, one could backward chain on all of E1, E2, E3, E4 simultaneously – or various subsets thereof. For a simple example, suppose one backward chains on

E1 = Inheritance ( $v1,  Animal)

E3 = Inheritance( $v2, SocialActivity)

simultaneously. Then one is seeking, say, ($v4, $v5) so that

Inheritance $v1 $v5

Inheritance $v5 Animal

Inheritance $v2 $v4

Inheritance $v4 SocialActivity

This adds no complexity, as the four relations partition into two disjoint sets of two. Separate chaining processes may be carried out for E1 and E3.

On the other hand, for a slightly more complex example, what if we backward chain on

E2 = Evaluation( $v2, ($v1, Bacon) )

E3 = Inheritance( $v2, SocialActivity)

simultaneously? (Assuming that a decision has already been made to explore the possibility $v3 = SocialActivity.) Then we have a somewhat more complex situation. We are trying to find $v2 that is a SocialActivity, so that $v1 likes to do $v2 in conjunction with Bacon.

If the Member2Evaluation rule is chosen for E2 and the Deduction rule is chosen for E3, then we have

E5 = Inheritance $v2 $v6

E6 = Inheritance $v6 SocialActivity
 
E7 = Member ($v1, Bacon) (SatisfyingSet $v2)

and if the Inheritance2Member rule is then chosen for E7, we have

E5 = Inheritance $v2 $v6

E6 = Inheritance $v6 SocialActivity

E8 = Inheritance ($v1, Bacon) (SatisfyingSet $v2)

and if Deduction is then chosen for E8 then we have

E5 = Inheritance $v2 $v6

E6 = Inheritance $v6 SocialActivity

E9 = Inheritance ($v1 , Bacon) $v8

E10 = Inheritance $v8 (SatisfyingSet $v2)

... and, woo hoo, we have expanded our search to involve more variables… we now get to deal with

E1 = Inheritance ( $v1,  Animal)

E4 = Evaluation( Enjoy, ($v1, SocialActivity) )

E5 = Inheritance $v2 $v6

E6 = Inheritance $v6 SocialActivity

E9 = Inheritance ($v1 , Bacon) $v8

E10 = Inheritance $v8 (SatisfyingSet $v2)

or some such … i.e. we have expanded our problem to include more and more simultaneous logical equations in more and more variables! Which is not necessarily a terrible thing, but it does get complicated…

We might find, for example, that $v1 = Pig, $v6 = Dance, $v2 = Waltz, $v8 = PiggyWaltz:

E1 = Inheritance ( Pig,  Animal)

E4 = Evaluation( Enjoy, (Pig, SocialActivity) )

E5 = Inheritance Waltz Dance

E6 = Inheritance Dance SocialActivity

E9 = Inheritance (Pig , Bacon) PiggyWaltz

E10 = Inheritance PiggyWaltz (SatisfyingSet Waltz)

Here PiggyWaltz is a special dance that pigs do with their Bacon, as a SocialActivity!

Of course, this example is extremely contrived. Real inference examples will rarely be this simple, and will not generally involve Nodes that have simple English names. This example is just for illustration of the concepts involved.

Combining Backward and Forward Inference Steps with Attention Allocation to Achieve the Same Effect as Backward Chaining (and Even Smarter Inference Dynamics)

Backward chaining is a powerful heuristic, but I believe one can achieve the same effect (and even smarter inference dynamics) via a combination of:

  • heuristic search to satisfy simultaneous expressions
  • boosting the STI of expressions being searched
  • importance spreading (of STI)
  • ongoing background forward inference

These mechanisms can combine to yield the same basic effect as backward chaining, but without explicitly doing backward chaining… The basic idea is: when a system of expressions involving variables is explored using a GA or whatever other optimization process is deployed, these expressions also get their STI boosted…

Then the Atoms with high STI are explored by the forward inference process, which is always acting in the background on the Atoms in the Atomspace… Other Atoms related to these also get STI via importance spreading, and these other related Atoms are then acted on by forward inference as well. This forward chaining will then lead to the formation of new Atoms, which may make the solution of the system of expressions easier the next time it is visited by the backward inference process. So, in the above example:

  • E1, E2, E3, E4 will all get their STI boosted
  • Other Atoms related to these (Animal, Bacon and Enjoy) will also get their STI boosted
  • These other Atoms will get forward inference done on them
  • This forward inference will then yield new Atoms that can be drawn on when the solution of the expression-system E1, E2, E3, E4 is pursued the next time

So, for example, if the system did not know that eating is a social activity, it might learn this during forward inference on SocialActivity. The fact that SocialActivity has high STI would cause forward inferences such as

Inheritance Dinner Eat
Inheritance Dinner SocialActivity
|-
Inheritance Eat SocialActivity

to get done. These forward inferences would then produce links that could simply be found by the pattern matcher when trying to find variable assignments to satisfy {E1, E2, E3, E4}.
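
One cycle of this combined dynamic might be sketched as follows, with a Python set standing in for the Atomspace, sti a dict of STI values, and spread_importance and forward_step assumed stand-ins for the ECAN and forward-inference MindAgents:

def ambient_cycle(atomspace, sti, open_problems, boundary,
                  spread_importance, forward_step):
    # 1. Boost the STI of the expressions under active search (e.g. E1..E4).
    for expr in open_problems:
        sti[expr] = sti.get(expr, 0.0) + 10.0
    # 2. Spread importance to related Atoms (ECAN-style).
    spread_importance(sti)
    # 3. Background forward inference on the resulting AttentionalFocus.
    for atom in [a for a in atomspace if sti.get(a, 0.0) > boundary]:
        new_atom = forward_step(atom)  # may build e.g. Inheritance Eat SocialActivity
        if new_atom is not None:
            atomspace.add(new_atom)    # now findable by the pattern matcher
            sti[new_atom] = sti.get(atom, 0.0)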


Breakdown into MindAgents

To make this sort of PLN dynamic work, we basically require a number of MindAgents to be operating "ambiently" in the background whenever inference is occurring:

  • attentional forward chaining (i.e. each time this MindAgent is invoked, it chooses high-STI Atoms and does basic forward chaining on them)
  • attention allocation (importance updating is critical, Hebbian learning is also useful)
  • attentional (variable guided) backward chaining

On top of this ambient inference, we may then have query-driven backward chaining inferences submitted by other processes (via these launching backward inference steps and giving the associated Atoms lots of STI). The ambient inference processes will help the query-driven inference processes to get fulfilled.

Programming Language Issues

The original PLN implementation was in C++. The current PLN implementation is in Python. Currently the Atomspace is in C++, and Python is the only other language bound to the Atomspace in a way that makes it easy to create new MindAgents.

For the new implementation outlined here,

  • it would be easiest to implement the rules and formulas in C++; these are simple code anyway. Alternatively, one could keep the current Python code for rules and formulas and wrap it in C++ GroundedSchemaNodes.
  • the control code, i.e. the basic and variable-guided forward and backward inference steps, could be implemented in Python (probably better) or C++