From OpenCog

Recognizing And Creating Self-Referential Structures

This is an overly brief page covering a large and essential topic: how an OCP system will be able to recognize and create large-scale self-referential structures in itself.

More raw material exists that will in future be used to expand this page into a more thorough treatment.

Essential Self-Referential Structures

Some of the most essential structures underlying human-level intelligence are self-referential in nature. These include:

  • the phenomenal self (see Thomas Metzinger's book "Being No One")
  • the will
  • reflective awareness

A casual but mathematical discussion of will and reflective awareness in terms of self-referential structures may be found at:

There, the following recursive definitions are given:

"S is conscious of X" is defined as: The declarative content that {"S is conscious of X" correlates with "X is a pattern in S"}
"S wills X" is defined as: The declarative content that {"S wills X" causally implies "S does X"}

Similarly, we may posit:

"X is part of S's self" is defined as: The declarative content that {"X is a part of S's self" correlates with "X is a persistent pattern in S over time"}

(which has the fun, elegant implication that "self is to long-term memory as awareness is to short-term memory").
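These recursive definitions can be read as fixed-point equations on truth values. The following Python sketch illustrates this for the "will" definition; the update rule (a damped iteration of an implication-strength estimate) and the function names are assumptions made purely for illustration, not part of any OCP API:

```python
# Hypothetical sketch: "S wills X" := {"S wills X" causally implies
# "S does X"} read as a fixed-point equation on truth values.
# `implication_strength` is a crude placeholder for a real PLN
# implication truth value, and the damped update is an assumption.

def implication_strength(antecedent: float, consequent: float) -> float:
    """Crude P(consequent | antecedent)-style score, capped at 1."""
    if antecedent == 0.0:
        return 1.0
    return min(1.0, consequent / antecedent)

def fixed_point_will(does_x: float, iterations: int = 50) -> float:
    """Iterate tv <- average(tv, strength(tv => does_x)) until stable."""
    tv = 0.5  # agnostic prior for "S wills X"
    for _ in range(iterations):
        tv = (tv + implication_strength(tv, does_x)) / 2  # damped update
    return tv

print(fixed_point_will(0.9))  # converges to a stable self-consistent value
```

The point of the sketch is only that a definition which mentions itself can still have a well-defined, computable truth value, reached as the limit of an iteration.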

Relatedly, one may posit multiple similar processes that are mutually recursive, e.g.

S is conscious of T and U
T is conscious of S and U
U is conscious of S and T
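This mutual recursion can likewise be read as a coupled fixed-point system. The sketch below is a made-up illustration (the averaging update rule is an assumption, not anything from the OCP design): each agent's awareness level is repeatedly updated from the other two until the triple stabilizes.

```python
# Hypothetical sketch: three mutually recursive "conscious of" relations
# treated as a coupled iteration. Each value is updated from the other
# two; the averaging rule is purely illustrative.

def mutual_awareness(steps: int = 100):
    s, t, u = 0.9, 0.1, 0.5  # arbitrary initial awareness levels
    for _ in range(steps):
        s, t, u = (t + u) / 2, (s + u) / 2, (s + t) / 2
    return s, t, u

print(mutual_awareness())  # the triple settles to a common fixed point
```

Under this particular update the mean of the three values is preserved and the differences decay, so the system converges; the qualitative point is that mutually recursive definitions can jointly stabilize rather than regress infinitely.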

The cognitive importance of this sort of mutual recursion is discussed in the paper:

which was published in the journal "Cybernetics and Human Knowing" in 2007.

None of these are things that should be programmed into an artificial mind. Rather, they must emerge in the course of a mind's self-organization in connection with its environment.

However, a mind may be constructed so that, by design, these sorts of important self-referential structures are encouraged to emerge.

Encouraging the Recognition of Self-Referential Structures in the AtomTable

How can we do this: encourage an OCP instance to recognize complex self-referential structures that may exist in its AtomTable? This is important because, by the same logic as map formation, if these structures are explicitly recognized when they exist, they can then be reasoned about and otherwise further refined, which will cause them to exist more definitively ... and hence to be explicitly recognized as yet more prominent patterns ... and so on. The same virtuous cycle by which ongoing map recognition and encapsulation is supposed to lead to concept formation may be posited at the level of complex self-referential structures, leading to their refinement, development and ongoing growth in complexity.

One really simple way is to encode self-referential operators in the Combo vocabulary that is used to represent the procedures grounding GroundedPredicateNodes.

That way, one can recognize self-referential patterns in the AtomTable via standard OCP methods like MOSES and IntegrativeProcedureAndPredicateLearning, so long as one uses Combo trees that are allowed to include self-referential operators at their nodes. All that matters is that one is able to take one of these Combo trees, compare it to an AtomTable, and assess the degree to which that Combo tree constitutes a pattern in that AtomTable.

But how can we do this? How can we match a self-referential structure like:

  EvaluationLink will (S,X)
    EvaluationLink will (S,X)
    EvaluationLink do (S,X)

against an AtomTable or portion thereof?

(note the shorthand notation; e.g.

EvaluationLink will (S,X)

really means

EvaluationLink
    PredicateNode "will"
    ListLink
        S
        X
)
The question is whether there is some "map" of Atoms (some set of PredicateNodes) willMap, so that we may infer the SMEPH relationship:

  EvaluationEdge willMap (S,X)
    EvaluationEdge willMap (S,X)
    EvaluationEdge doMap (S,X)

as a statistical pattern in the AtomTable's history over the recent past. (Here, doMap is defined to be the map corresponding to the built-in "do" predicate.)

If so, then this map willMap, may be encapsulated in a single new Node (call it willNode), which represents the system's will. This willNode may then be explicitly reasoned upon, used within concept creation, etc. It will lead to the spontaneous formation of a more sophisticated, fully-fleshed-out will map. And so forth.
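The encapsulation step itself is structurally simple, as the following sketch suggests. The class names and methods here (Node, AtomTable, encapsulate_map) are illustrative placeholders, not the real OCP API: the idea is just that a recognized map, a set of PredicateNodes behaving coherently, is collapsed into one new Atom that later inference can manipulate directly.

```python
# Hypothetical sketch of map encapsulation: a recognized "willMap"
# (a set of predicate nodes) is collapsed into a single new node,
# so that the map can be reasoned on as one explicit Atom.
# All names are illustrative, not OCP API.

from dataclasses import dataclass, field

@dataclass(frozen=True)
class Node:
    name: str

@dataclass
class AtomTable:
    atoms: set = field(default_factory=set)
    member_links: dict = field(default_factory=dict)  # node -> members

    def encapsulate_map(self, map_name, members):
        """Create one node standing for the whole map of members."""
        node = Node(map_name)
        self.atoms.add(node)
        self.member_links[node] = frozenset(members)
        return node

table = AtomTable()
will_map = {Node("pred_17"), Node("pred_42"), Node("pred_99")}
will_node = table.encapsulate_map("willNode", will_map)
print(will_node.name, len(table.member_links[will_node]))  # willNode 3
```

The interesting work is in *recognizing* the map; once recognized, encapsulating it is just bookkeeping of this sort.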

Now, what is required for this sort of statistical pattern to be recognizable in the AtomTable's history? EquivalenceEdges (which, note, must be part of the Combo vocabulary in order for MOSES-related algorithms to recognize patterns involving them) must be defined according to the logic of hypersets rather than the logic of sets. What is fascinating is that this is no big deal: the AtomTable software structures support this automatically; it is just not the way most people are used to thinking about things. There is no reason, in terms of the AtomTable, not to create self-referential structures like the one given above.

The next question is how to calculate the truth values of structures like the above. The truth value of a hyperset structure turns out to be an infinite-order probability distribution, a complex sort of mathematical beast which is, however, not as nasty in practice as one might expect. These probabilities are described at:

Infinite-order probability distributions are partially ordered, so one can compare the extent to which two different self-referential structures apply to a given body of data (e.g. an AtomTable) by comparing the infinite-order distributions that constitute their truth values. In this way, one can recognize self-referential patterns in an AtomTable, and carry out encapsulation of self-referential maps. This sounds very abstract and complicated, but the class of infinite-order distributions defined in the above-referenced papers is actually manipulated via simple matrix mathematics, so nothing all that abstruse is involved in practice.
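The construction of these distributions belongs to the referenced papers and is not reproduced here; the sketch below only illustrates the *comparison* step under a simplifying assumption. Suppose each structure's truth value has been summarized as a small matrix; then one candidate partial order is elementwise dominance, under which two truth values are comparable exactly when one matrix dominates the other in every entry. The matrices below are made-up placeholders.

```python
# Hypothetical sketch of comparing matrix-valued truth values under an
# elementwise partial order. Whether this is the partial order used for
# infinite-order distributions is an assumption for illustration only.

def dominates(a, b):
    """True iff a >= b elementwise (one direction of the partial order)."""
    return all(x >= y for row_a, row_b in zip(a, b)
               for x, y in zip(row_a, row_b))

def compare(a, b):
    if dominates(a, b) and dominates(b, a):
        return "equal"
    if dominates(a, b):
        return "a > b"
    if dominates(b, a):
        return "b > a"
    return "incomparable"  # partial order: not every pair compares

tv1 = [[0.9, 0.4], [0.3, 0.8]]  # made-up truth-value matrices
tv2 = [[0.7, 0.4], [0.2, 0.6]]
tv3 = [[0.8, 0.5], [0.4, 0.1]]
print(compare(tv1, tv2))  # a > b   (tv1 dominates tv2 entrywise)
print(compare(tv1, tv3))  # incomparable (neither dominates)
```

The "incomparable" outcome is the characteristic feature of a partial order: some pairs of self-referential patterns simply cannot be ranked against each other.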

Finally, there is the question of how these hyperset structures are to be logically manipulated within PLN. The answer is that regular PLN inference can be applied perfectly well to hypersets, but some additional hyperset operations may also be introduced, such as "hyperset convolution" — material on this will be added to this page at a later date.

Clearly, with this highly speculative component of the OCP design we have veered rather far from anything the human brain could plausibly be doing in detail. For some ideas about how the brain might do this kind of thing, see the paper "How Might Probabilistic Logic Emerge from the Brain?" in the Proceedings of the AGI-08 conference. That paper explores how the brain may embody self-referential structures like the ones considered here, by using the hippocampus to encode whole neural nets as inputs to other neural nets. Regarding infinite-order probabilities, the brain is certainly wired to carry out matrix manipulations, so it is not completely outlandish to posit that it could be doing something mathematically analogous. All in all, it does seem plausible to me that the brain could be doing something roughly analogous to what I've described here, even though the details would obviously be very different.

The ideas on this page have been fleshed out in more detail in various notes existing on Ben Goertzel's hard drive, and will be disseminated and polished in more detail when the time is right! At the moment (July 2008) there is a lot of more basic work on OCP to be done before we get to this stuff.