OpenCogPrime:Emergence

From OpenCog

Emergence in OpenCog Prime

The Atoms, MindAgents and Units within OCP are critical to the system's intended intelligence, but they are critical as "means to an end." The essence of the OCP system's mind, if the system operates as intended, will be the emergent dynamical structures arising in the Atom-network as it evolves over time. These, not the specific software structures "wired into" the system by its designers and programmers, will be the stuff of the OCP system's "mind" as subjectively self-perceived.

As an important example, one of the key things we hope to see via teaching OCP in the AGISim environment is the adaptive emergence, within the system's knowledge base, of an active and effectively evolving "phenomenal self." The process of the emergence of the self may, we hypothesize, be productively modeled in terms of the processes of forward and backward synthesis discussed above. This point is made carefully in (Goertzel, 2006) and just briefly summarized here.

What is ventured there is that the dynamic pattern of alternating forward and backward synthesis may play a fundamental role in cognition. Put simply, forward synthesis creates new mental forms by combining existing ones. Then, backward synthesis seeks simple explanations for the forms in the mind, including the newly created ones; and, this explanation itself then comprises additional new forms in the mind, to be used as fodder for the next round of forward synthesis. Or, to put it yet more simply:

⇒ Combine ⇒ Explain ⇒ Combine ⇒ Explain ⇒ Combine ⇒

This sort of dynamic may be expressed formally, in an OCP context, as a dynamical iteration on the space of Atoms. One may then speak about attractors of this iteration: fixed points, limit cycles and strange attractors. And one may hypothesize that some key emergent cognitive structures are strange attractors of this iteration. That is, the iterative dynamic of combination and explanation leads to the emergence of certain complex structures that are, in essence, maintained when one recombines their parts and then seeks to explain the recombinations. These structures are built in the first place through iterative recombination and explanation, and then survive in the mind because they are conserved by this process. They then continually guide the construction and destruction of various other temporary mental structures that are not so conserved. Specifically, we suggest that both self and attentional focus may be viewed as strange attractors of this iteration. Here we will focus only on self.
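To make the notion of an attractor of this iteration concrete, here is a minimal toy sketch in Python. It is not OCP code: `forward_synthesis`, `backward_synthesis`, and the representation of mental "forms" as frozensets of tokens are all illustrative assumptions, chosen only to show a Combine ⇒ Explain loop that can settle on a set of forms conserved by the iteration.

```python
import random

def forward_synthesis(forms, rng, n_new=4):
    """Combine: create new forms by uniting randomly chosen existing ones."""
    pool = list(forms)
    combined = {rng.choice(pool) | rng.choice(pool) for _ in range(n_new)}
    return forms | combined

def backward_synthesis(forms, keep=6):
    """Explain: compress the population, keeping the forms whose tokens recur
    most, with a penalty on size (a crude stand-in for 'simple explanation')."""
    counts = {}
    for form in forms:
        for tok in form:
            counts[tok] = counts.get(tok, 0) + 1
    def simplicity_weighted_support(form):
        return sum(counts[t] for t in form) / (1 + len(form))
    return set(sorted(forms, key=simplicity_weighted_support, reverse=True)[:keep])

def iterate(forms, steps=50, seed=0):
    """Run Combine => Explain repeatedly; stop early at a conserved set of
    forms, i.e. a fixed point of the iteration."""
    rng = random.Random(seed)
    for step in range(steps):
        next_forms = backward_synthesis(forward_synthesis(forms, rng))
        if next_forms == forms:   # conserved under the iteration: an attractor
            return forms, step
        forms = next_forms
    return forms, steps
```

On this analogy, a structure like the self or the attentional focus would correspond to a form-set that the loop conserves: recombining its parts and then re-explaining them regenerates the same structure.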

The "self" in this context refers to the "phenomenal self" (Metzinger, 2004) or "self-model." That is, the self is the model that a system builds internally, reflecting the patterns observed in the (external and internal) world that directly pertain to the system itself. As is well known in everyday human life, self-models need not be completely accurate to be useful; and in the presence of certain psychological factors, a more accurate self-model may not necessarily be advantageous. But a severely inaccurate self-model will lead to a poorly functioning system that is unable to act effectively toward the achievement of its own goals.

The value of a self-model for any intelligent system carrying out embodied agentive cognition is obvious. And beyond this, another primary use of the self is as a foundation for metaphors and analogies in various domains. A self-model can in many cases form a self-fulfilling prophecy (to make an obvious double entendre!). Actions are generated based on one's model of what sorts of actions one can and/or should take; and the results of these actions are then incorporated into one's self-model. If a self-model proves a generally bad guide to action selection, this may never be discovered, unless said self-model includes the knowledge that semi-random experimentation is often useful.

In what sense, then, may it be said that self is an attractor of iterated forward-backward synthesis? Backward synthesis infers the self from observations of system behavior. The system asks: what kind of system might we be, in order to give rise to the behaviors we observe ourselves carrying out? Based on asking itself this question, it constructs a model of itself, i.e. it constructs a self. Then, this self guides the system's behavior: it builds new logical relationships between its self-model and various other entities, in order to guide its future actions toward achieving its goals. Based on the new behaviors induced via this constructive, forward-synthesis activity, the system may then engage in backward synthesis again and ask: what must we be now, in order to have carried out these new actions? And so on.
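This self-as-attractor loop can also be caricatured in code: the agent selects actions from its current self-model (the forward step), then re-infers the model from the behavior it observes itself producing (the backward step). Every name here (`select_action`, `infer_self_model`, the two-action world) is a hypothetical toy, not part of OCP; it also builds in the "semi-random experimentation" caveat noted above, so a bad self-model can eventually be corrected.

```python
import random

ACTIONS = ["explore", "exploit"]

def select_action(self_model, rng, epsilon=0.1):
    """Forward step: act according to what the self-model says we succeed at,
    with occasional semi-random experimentation to escape a bad model."""
    if rng.random() < epsilon or not self_model:
        return rng.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: self_model.get(a, 0.5))

def infer_self_model(history):
    """Backward step: 'what kind of agent must we be to have produced these
    behaviors?' -- here, simply the mean observed outcome of each action."""
    model = {}
    for action in ACTIONS:
        outcomes = [r for a, r in history if a == action]
        model[action] = sum(outcomes) / len(outcomes) if outcomes else 0.5
    return model

def run(world, steps=200, seed=0):
    """Alternate acting-from-the-self-model and re-inferring the self-model."""
    rng = random.Random(seed)
    history, self_model = [], {}
    for _ in range(steps):
        action = select_action(self_model, rng)
        history.append((action, world(action, rng)))
        self_model = infer_self_model(history)
    return self_model
```

After enough cycles the inferred model stabilizes: new actions generated from it produce observations that re-infer essentially the same model, which is the (loose) sense in which the self-model is conserved by the iteration.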

Our hypothesis is that after repeated iterations of this sort during infancy and early childhood, a kind of self-reinforcing attractor emerges: a self-model that is resilient and does not change dramatically when new instances of action- or explanation-generation occur. This is not strictly a mathematical attractor, though, because over a long period of time the self may well shift significantly. But for a mature self, many hundreds of thousands or millions of forward-backward synthesis cycles may occur before the self-model is dramatically modified. For relatively long periods of time, small changes within the context of the existing self may suffice to allow the system to control itself intelligently.

And of course the phenomenal self is not the only structure that must emerge in this sort of way, if OCP is to give rise to powerful AGI. Another example of an emergent structure is the Attentional Focus -- the "moving bubble of attention" that constitutes the system's focus of conscious attention at a point in time. Another example is the Mind Ontology: Dual Network, consisting of harmonized and synchronized hierarchical and heterarchical patterns of activity. All these structures must emerge and self-perpetuate, grow and mature based on the coordinated activity of MindAgents on Atom-networks situated in Units, if OCP is to make the leap from being a software system to being an autonomous and reflective intelligence.