URE historical discussion

From OpenCog

This is an old discussion about the URE. For a more recent specification you may want to take a look at Unified rule engine.

The idea of moving to a single unified rule engine for various OpenCog rule-based processes keeps coming up: e.g. RelEx, RelEx2Frame, PLN,...

Existing "General" Rule Engines

From Ben's, Linas', Moshe's emails...

It seems that the first step in this regard should be to study existing open-source rule engines a bit....

I really doubt any of them out there could serve our purposes ... but perhaps looking at these generalized rule engines should help us in designing our own.... (And of course there's always the remote chance one of them will prove adequate...)

Most open-source rule engines are based on the Rete algorithm.

There is also Rete II, which I don't understand. Rete has many drawbacks, e.g. it performs poorly when matching rules against dynamic data.

The best-known open-source rule engine is DROOLS, which is in Java. I used something called RC++ years ago, but I suspect it hasn't been maintained.

DROOLS currently only does forward chaining, but they say a backward chainer will be supported in the next release.

It's big and not easily grokked. A quick skim through the docs hints that it implements the equivalent of what we call the AtomSpace; they call it "WorkingMemory". It seems to assume crisp logic.

The "facts" that are inserted are Java objects. This is what Drools calls OO RETE. For instance, if one inserts into WorkingMemory a Java object with two properties like this:

 Person p = new Person();
 p.setFirstName( "John" );
 p.setLastName( "Doe" );
 wm.insert( p );

this asserts the 100% truth of the property assignments firstName and lastName on p .

User:Jvanel, a Drools expert, added this :) .

It also separates the RuleBase (where the rules are kept) from the WorkingMemory, whereas we want the two to be mashed into one. Whether Drools would allow the two to intermingle is unclear.

If I was starting from scratch, and was a big Java fan, then Drools would be the way to go. ... but err, my gut instinct is to stay away.

OTOH, Jess has a more annoying license, making it unsuitable for OSS use, but it has a backward chainer...

The classic is CLIPS, which also has an improved closed-source version.

Does Alchemy have anything that resembles what we would call a rule engine? Their web page makes no hint of this, and yet there's that slippery word "inference" ...

Basic Requirements

Anyway ... if none of these or others like them proves suitable (which is what I suspect), it seems one path we might take is to develop our own generic rule engine as a separate module, i.e. a rule engine which uses rules stored in the AtomSpace format but other than that operates generically without specialization to any particular OpenCog AI process.

We'd then need to decide what chaining algorithms to use ... presumably we could make the forward and backward chaining algorithms pluggable, so that different MindAgents could potentially invoke the OpenCogRuleEngine and specify different chaining algorithms (e.g. some chaining algorithms might provide better real-time response whereas others might be smarter if you have more time available...)
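The pluggable-chainer idea could be sketched roughly as follows. This is a hypothetical illustration, not OpenCog code: the names Chainer, ForwardChainer, RuleEngine, and the rule's apply() method are all invented here, and the forward chainer is just a naive fixpoint loop.

```python
# Hypothetical sketch: pluggable chaining algorithms for a generic rule engine.
# A MindAgent could construct the engine with whichever chainer suits its
# time/quality tradeoff; none of these classes exist in OpenCog.
from abc import ABC, abstractmethod

class Chainer(ABC):
    @abstractmethod
    def run(self, rules, atoms):
        """Apply rules to atoms and return the derived set of atoms."""

class ForwardChainer(Chainer):
    def run(self, rules, atoms):
        derived = set(atoms)
        changed = True
        while changed:                      # naive fixpoint: loop until quiescent
            changed = False
            for rule in rules:
                for new_atom in rule.apply(derived):
                    if new_atom not in derived:
                        derived.add(new_atom)
                        changed = True
        return derived

class RuleEngine:
    def __init__(self, chainer: Chainer):   # the chaining algorithm is pluggable
        self.chainer = chainer

    def infer(self, rules, atoms):
        return self.chainer.run(rules, atoms)
```

A smarter real-time chainer would bound the loop or prioritize rules; the point is only that the engine delegates the chaining strategy to a plugged-in object.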

Linas: That would be my impression. The rule-engine would need to be peppered with lots of little call-outs, so as to compare atoms appropriately at various times.

That is, the rule engine *algorithm* would need to be completely detached from the data representation that it operates on. I get the feeling that some rule engines don't detach in this way. Other requirements:

  • Doesn't assume crisp logic, i.e. provides callbacks where we can plug in calculations to meld confidence, truth, importance, etc.
  • Doesn't have a compile step: i.e. doesn't assume that the rules come from some file, and are compiled into code.
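To illustrate the first requirement, here is a toy Python sketch (all names invented) of a melding callback being plugged in. The formula shown is a deliberately crude stand-in, not an actual PLN truth-value formula.

```python
def pln_like_and(tv1, tv2):
    # Toy strength/confidence melding -- a placeholder, NOT the real PLN math.
    s1, c1 = tv1
    s2, c2 = tv2
    return (s1 * s2, min(c1, c2))

def evaluate_antecedents(truth_values, meld=pln_like_and):
    """Fold a rule's antecedent truth values through the pluggable callback,
    so the engine itself never assumes crisp true/false matching."""
    tv = truth_values[0]
    for t in truth_values[1:]:
        tv = meld(tv, t)
    return tv
```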

Advanced requirements

1) I took a declarative approach (probably easier in Lisp, but doable in C++) that seems to have been successful in decreasing the incremental complexity of adding a new rule to the system. Instead of manually telling the system when to apply rules and in what order, a rule declaration has an associated type, preconditions, and postconditions (i.e. "rule X assumes that rule Y has already been applied (precondition), and if rule Z has already been applied, then it need not be reapplied after rule X has been applied (postcondition)"). The central controller synthesizes all of this information into a DAG, and does a topological sort to obtain the actual order that rules are applied in for some type.
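The DAG-plus-topological-sort step above can be sketched in a few lines of Python (the rule names are hypothetical; a real system would hang type and postcondition information off each node as well):

```python
# Sketch: each rule declares its preconditions (rules that must be applied
# first); the controller builds a DAG and topologically sorts it to obtain
# the order in which rules are applied.
from graphlib import TopologicalSorter

# rule -> set of rules that must already have been applied (preconditions)
preconditions = {
    "Y": set(),
    "X": {"Y"},        # "rule X assumes rule Y has already been applied"
    "Z": {"X"},
}

order = list(TopologicalSorter(preconditions).static_order())
# 'order' lists each rule after all of its preconditions
```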

2) You really want to have some mechanism for keeping track of when a subtree is simplified according to some rule, to avoid having to reapply expensive rules to trees that are in fact already simplified. There are two basic ways to do this: a cache (i.e. a hash table) mapping trees to their simplifications, and annotations on subtrees indicating whether they are simplified, and according to which rules. Unfortunately both have their drawbacks (Richard Fateman has some nice papers kvetching about this, IIRC), the main one in both cases being that "simplified" is in fact context-dependent (e.g. X AND Y is simplified on its own, but not if it is embedded in a clause such as X OR (X AND Y)). I chose the latter, and suffice it to say that a fair bit of finesse is required to get things fully bug-free...
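The first option (the hash-table cache) can be sketched as below; trees are plain nested tuples here, and the single rewrite rule is invented for illustration. Note that this naive version exhibits exactly the context-dependence problem described above, since the cache key is the subtree alone:

```python
# Sketch: a cache mapping (sub)trees to their simplified forms, so an
# expensive rule is never reapplied to a tree we've already simplified.
simplify_cache = {}

def simplify(tree):
    if tree in simplify_cache:              # already simplified this subtree
        return simplify_cache[tree]
    if isinstance(tree, tuple) and tree[0] == "AND" and tree[1] == tree[2]:
        result = simplify(tree[1])          # toy rule: X AND X -> X
    elif isinstance(tree, tuple):
        result = (tree[0],) + tuple(simplify(c) for c in tree[1:])
    else:
        result = tree                       # leaf: nothing to do
    simplify_cache[tree] = result
    return result
```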

3) There is lots of know-how from the folks that built computer algebra systems (stretching back 40+ years) that is worth reading up on...

Representing Rules in the AtomSpace

See main article Expressing PLN Rules As Atoms

What PLN calls "rules" is slightly different from what most rule engines call "rules". Or rather, PLN has a separate layer of base rules that alter and synthesise axioms in the system.

One way to represent the PLN rules in the AtomSpace, while also allowing them to use specific formulas for the consequent, is to use the following representation (Deduction rule example):

     InheritanceLink $A $B
     InheritanceLink $B $C
     ExOutLink GetTruthValue (InheritanceLink $A $C)
        GroundedSchemaNode: DeductionFormula
           ExOutLink GetTruthValue $A
           ExOutLink GetTruthValue $B
           ExOutLink GetTruthValue $C
           ExOutLink GetTruthValue (InheritanceLink $A $B)
           ExOutLink GetTruthValue (InheritanceLink $B $C)
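For illustration, the strength part of such a DeductionFormula might be computed with the usual independence-based PLN deduction formula. The Python function below is a sketch with an invented name, it ignores confidence, and the degenerate-case handling is simplified:

```python
# Sketch of a deduction strength computation: given the strengths of A, B, C
# and of Inheritance A->B and B->C, estimate the strength of Inheritance A->C
# under an independence assumption (standard PLN deduction formula).
def deduction_strength(sA, sB, sC, sAB, sBC):
    if sB >= 1.0:           # degenerate case: B covers everything
        return sC
    return sAB * sBC + (1.0 - sAB) * (sC - sB * sBC) / (1.0 - sB)
```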

Specific Design Requirements

There are some aspects related to the architecture of the new Unified Rule Engine which must be considered, given the acquired experience at developing and making use of a Rule Engine for Embodiment:

  • Action execution callback: it would be great if a listener system were developed to notify a list of listener objects that a specific action was selected and executed in a given cycle.
  • Avoiding repeating the same action: maybe it would be a good solution to provide a mechanism that permits consulting a history of executed actions and using that information as preconditions, rule strengths, etc. Maybe creating a minimal Common Sense reasoning engine to do that work can be a good solution.

Current plans

Linas is currently (Sept 2009) making plans to implement some of the ideas above. The core ideas/steps are as follows:

  • The current query/pattern matcher can evaluate implication links, and has a sufficiently rich set of callbacks to provide the flexibility described above. Unfortunately, it is slow, as it is essentially a "naive" rule engine and provides no caching, etc. Linas believes he knows how to add Rete-algo-type caching structures to vastly improve performance (possibly in exchange for significant mem usage). However:
  • To implement the Rete-algo data structures, a new set of indexes are required for the atomtable. Currently, the atom table provides 5 "hard-wired" indexes, and no pluggable index infrastructure. So a preliminary step is to add pluggable indexes (and convert the existing indexes to that infrastructure).
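The pluggable-index idea can be sketched as follows; everything here is hypothetical (the real atomtable is C++ and its five hard-wired indexes look nothing like this), but it shows the shape of the proposed infrastructure: user-registered index objects that the table keeps up to date, so Rete-style caching structures could later be added as just another index.

```python
# Sketch: an atom table with pluggable indexes. Atoms are plain dicts here.
class Index:
    def insert(self, atom): ...
    def lookup(self, key): ...

class TypeIndex(Index):
    """Example index: atoms grouped by their 'type' field."""
    def __init__(self):
        self._by_type = {}
    def insert(self, atom):
        self._by_type.setdefault(atom["type"], []).append(atom)
    def lookup(self, key):
        return self._by_type.get(key, [])

class AtomTable:
    def __init__(self):
        self._indexes = {}
    def register_index(self, name, index):   # the pluggable part
        self._indexes[name] = index
    def add(self, atom):
        for idx in self._indexes.values():   # every index sees every insert
            idx.insert(atom)
    def index(self, name):
        return self._indexes[name]
```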

Simplified ImplicationLinks

Currently, writing ImplicationLinks by hand is quite tedious, as the syntax is not very compact. Thus, a separate, distinct project would be to create a new, more compact language that can be quickly/trivially translated into ImplicationLinks. Linas believes that a very good choice would be patterned on the syntax of Prolog. This is in part because current RelEx output is very prolog-ish already, and because most of the casual email discussions already tend to use a casual prolog-like syntax. So the idea is to formalize this a bit -- it's already the way that people are thinking about things.
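As a toy illustration of the kind of translation involved (the surface syntax here is entirely made up, using "&" between premises to sidestep parsing commas inside terms, and nested lists stand in for actual Atoms):

```python
# Sketch: translate a Prolog-ish clause "head :- p1 & p2 & ..." into a
# nested-list stand-in for an ImplicationLink over an AndLink of premises.
def clause_to_implication(clause):
    head, body = [s.strip() for s in clause.split(":-")]
    premises = [p.strip() for p in body.split("&")]
    return ["ImplicationLink", ["AndLink"] + premises, head]
```

A real translator would of course need a proper parser, variable handling, and truth-value annotations; this only shows why the translation is considered "quick/trivial" in principle.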

Note: the Schelog project allows embedding of prolog into Scheme as a first-class guest. Unfortunately, this can't really work within OpenCog, but something similar would be nice.

Anyway, prolog is almost but not quite the right approach. Answer set programming (ASP) has a syntax nearly identical to prolog, and can be thought of as "prolog that runs in parallel and finds all answers, instead of terminating at the first one". One reason it can do this is that ASP has a much sounder theoretical foundation than prolog (based on stable model semantics), and as a result can apply very fast algorithms (the DPLL algorithm, SAT solvers) to gain much better performance than is possible with prolog.

Ben's 2013 Design Sketch for a Unified Rule Engine

(from a 2013 email by Ben G)

Here are my initial thoughts on requirements for an OpenCog rule engine. This is based on lots of prior thinking and discussion of the topic.

  • 1) Rules will be specified as ImplicationLinks or PredictiveImplicationLinks between Atoms. Textually, this could be done by making each rule a Scheme expression connoting an ImplicationLink or PredictiveImplicationLink.
  • 2) Mechanisms will be provided for backward chaining from a given target, or forward chaining from a set of premises
  • 3) The control mechanism, i.e. the policy for choosing which rules to apply in a given case, must be pluggable. That is, when you invoke the rule engine, you get to choose which of the available control policies will be used.
  • 4) I'm not sure how control policies should be specified. Making them C++ objects would be a simple course.
  • 5) Rules may have metadata associated with them, which may be referenced within control policies. Since rules are expressed as Links, the metadata may be stored in the Atomspace as well.
  • 6) There should be a standard way to express exclusivity between rules, and priority levels of rules --- but both of these things should be relative to a given control policy (i.e. rule X might have higher priority than rule Y with respect to policy A, but not policy B)
  • 7) Some reasonable standard default control policy should be provided, maybe something Viterbi-like as Linas has suggested...
  • 8) The rule engine should be associated with some way of keeping track of which rules have been applied to which Atoms. This information may be used by some control policies.


With this kind of framework in place, creating a new "rule engine" in OpenCog would be a matter of

  • A) writing a Scheme file expressing one's rules in Atom format
  • B) if necessary, writing a custom control policy (perhaps with associated exclusion or prioritization of rules)

Probably the subtlest issue in all this is the control mechanism/policy, i.e. the choice of which rules to apply in which situations. For example

  • In the new PLN chainer, a motivating principle is to use STI (as determined by attention allocation dynamics) to drive the choice of which rules to apply to which Atoms, at each stage
  • In Shujing's planner, one of her "secret sauces" is a set of simple but effective heuristics for which rules to choose when. These rules are particular for *planning*, and wouldn't be appropriate for reasoning in general

Of course, an even funkier step would be to have the control policies written as Atoms. But I don't think we need to go there, at this point. The above would be a large step forward.

I don't see anything extremely difficult in the above. But it has to be fast and the software design has to be simply usable by folks who want to make custom control policies.

Note that the above design sketch intentionally does NOT try to force every AI process in OpenCog to use the same scheme for choosing what rules to apply, or to use the same rules. Rather, it seeks to keep the diversity where it's important (in the rule-bases and control policies), and abstract the routine plumbing of rule interpretation and rule application so it doesn't need to be re-coded over and over.

Preliminary Enumeration of Key Structures

A rough, preliminary indication of structures (some of these may become software classes?) to be associated with the rule engine framework is:

  • RuleBase: a rule-base consists of a list of Atoms considered as rules, and a set of Atoms to which the rules should be potentially applied (which could be the whole Atomspace, or a specific set of Atoms). A RuleBase is also associated with a set of priority and exclusion relationships among rules.
  • Control policy: this tells the rule engine how to choose which rule to apply in a given case, AND how to guide the search for Atoms fulfilling the precondition of a given rule
  • Rule application: the application of a rule should be done by the PatternMatcher, probably extended a bit from its current form
  • Chainers: a generic forward chainer and backward chainer may be written, which can act on any specified rule base according to any specified control policy
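The RuleBase structure above might be sketched as a simple record type; the field names here are guesses for illustration, not an actual OpenCog class:

```python
# Sketch: a RuleBase bundles the rules, the Atoms they may apply to, and the
# per-rule-base priority and exclusion relationships described above.
from dataclasses import dataclass, field

@dataclass
class RuleBase:
    rules: list                                       # Atoms considered as rules
    focus: set                                        # Atoms rules may be applied to
    priorities: dict = field(default_factory=dict)    # rule -> priority level
    exclusions: set = field(default_factory=set)      # mutually exclusive rule pairs
```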