OpenCog Prime Cognitive Architecture
Cognitive architecture is the aspect of AGI design that seeks to answer questions such as:
- What parts should a mind-system be decomposed into?
- How should these parts relate to each other?
- How should these parts work together to cause the system to carry out purposeful actions in the world?
Division into High-Level Functional Units
In the OpenCog Prime design, cognitive architecture is based on conceiving the mind as divided into multiple cognitive units, where each unit
- focuses on a certain type of information or a certain way of allocating attention to information
- processes all four major types of knowledge in a synthetic way
The most important cognitive units handle:
- Sensation in various modalities
- General “background” cognition
- Focused cognition
- Social modeling (self and other)
At a high level OpenCog Prime's cognitive architecture is based on the state of the art in cognitive psychology and cognitive neuroscience. Most cognitive functions are distributed across the whole system, yet principally guided by some particular module. The following is a high level architecture diagram for OpenCog Prime:
The diagram by itself doesn't tell you much: the meat of the design is in what lies inside the boxes, and in how the insides of the boxes dynamically interact with each other and give rise to system-wide attractors and other emergent dynamic phenomena.
Tying Cognitive Architecture in with Software Architecture
Each of the high-level cognitive units in the above diagram may be implemented on the software level as an OpenCog Unit, consisting of potentially multiple machines sharing a common AtomSpace, as shown in the following figure:
Goal-Driven Cognitive Dynamics
The combined action of OpenCog Prime's functional sub-units follows the basic logic of animal behavior:
Enact a procedure so that
Context & Procedure ==> Goals
i.e. at each moment, based on its observations and memories, the system chooses to enact procedures that it estimates (based on the properties of the current context) will enable it to achieve its goals, over the time-scales these goals refer to.
For the details of this process, see the page on OpenCogPrime Psyche.
The "procedures" in the above paragraphs may be either Procedure objects or more complex procedures formed by the coordinated enaction of multiple Procedure objects.
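The "Context & Procedure ==> Goals" selection logic can be sketched roughly as follows. This is an illustrative toy, not the actual OCP implementation: the `Procedure` class and `success_estimates` table are hypothetical stand-ins for the system's context-sensitive estimates of which procedures will achieve its goals.

```python
# Hypothetical sketch of procedure selection: Context & Procedure ==> Goals.
# All names here are illustrative, not the real OpenCog API.

class Procedure:
    def __init__(self, name, success_estimates):
        self.name = name
        # Estimated probability of achieving each goal in a given context,
        # keyed by (context, goal) pairs.
        self.success_estimates = success_estimates

def choose_procedure(procedures, context, goals):
    """Pick the procedure whose estimated goal achievement, weighted by
    goal importance, is highest in the current context."""
    def score(proc):
        return sum(weight * proc.success_estimates.get((context, goal), 0.0)
                   for goal, weight in goals.items())
    return max(procedures, key=score)

procedures = [
    Procedure("walk_to_door", {("indoors", "reach_exit"): 0.9}),
    Procedure("ask_for_help", {("indoors", "reach_exit"): 0.3}),
]
best = choose_procedure(procedures, "indoors", {"reach_exit": 1.0})
print(best.name)  # walk_to_door
```

In the real system the estimates would come from inference over the current context rather than a static table, but the selection principle is the same.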
Explicit versus Implicit Goals
There is an important distinction between explicit goals and implicit goals:
- Explicit goals: the objective-functions the system explicitly chooses actions in order to maximize, which are represented in Nodes or Links that are explicitly marked as Goals.
- Implicit goals: the objective-functions the system actually does habitually maximize, in practice. These can be seen as the results of actions taken by the system.
For a system that is both rational and capable with respect to its goals in its environment, these will be essentially the same; the system won't do anything except what it explicitly aims to do. But in many real cases they may be radically different; the system will do things incidentally.
As an example, the system might have an explicit goal of walking across a room, but an implicit goal of tripping over an obstacle (if, in practice, tripping is what its actions habitually bring about).
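The distinction can be made operational: the explicit goal is a declared objective, while the implicit goal is whatever outcome the system's behavior actually maximizes. The following toy sketch (all names hypothetical) infers an implicit goal from a log of action outcomes:

```python
# Illustrative contrast between explicit and implicit goals.
# The outcome log and goal names are hypothetical.
from collections import Counter

explicit_goal = "cross_room"

# What the system's actions actually brought about, step by step.
action_outcomes = ["step_forward", "step_forward", "trip_on_obstacle",
                   "step_forward", "trip_on_obstacle"]

# The implicit goal is the outcome the system habitually maximizes in practice.
implicit_goal, _ = Counter(action_outcomes).most_common(1)[0]

# For a rational, capable system the two coincide; here they do not.
rational = (implicit_goal == explicit_goal)
print(implicit_goal, rational)  # step_forward False
```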
Not all of OCP's dynamics are explicitly guided by the pursuit of explicitly defined goals; for example, ambient processes such as attention allocation and routine inference proceed continually without reference to any explicit goal. This is largely for computational efficiency reasons.
A few key points about goals are as follows:
- A sufficiently intelligent system is continually creating new subgoals of its current goals, a process called subgoal refinement.
- Some intelligent systems may be able to replace their top-level supergoals with new ones.
- Goals may operate on radically different time-scales.
- Humans habitually experience “subgoal alienation” -- what was once a subgoal of some other goal, becomes a top-level goal in itself. An AGI need not be so prone to this phenomenon.
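Subgoal refinement and subgoal alienation can be pictured with a simple goal tree. This is a conceptual sketch with hypothetical names, not OCP's actual goal representation (which uses Nodes and Links in the AtomSpace):

```python
# Minimal sketch of subgoal refinement and subgoal alienation.
# The Goal class is illustrative, not the real OCP goal representation.

class Goal:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.subgoals = []

    def refine(self, subgoal_name):
        """Subgoal refinement: derive a new subgoal of this goal."""
        sub = Goal(subgoal_name, parent=self)
        self.subgoals.append(sub)
        return sub

def alienate(goal):
    """Subgoal alienation: detach a subgoal and treat it as top-level."""
    if goal.parent is not None:
        goal.parent.subgoals.remove(goal)
        goal.parent = None
    return goal

top = Goal("reach_exit")
sub = top.refine("open_door")        # subgoal of reach_exit
subsub = sub.refine("grasp_handle")  # subgoal of open_door

alienate(sub)  # "open_door" becomes a top-level goal in itself
print(sub.parent is None)  # True
```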
The cognitive architecture within which OCP's representational, learning and reasoning mechanisms exist is a fairly simple one. An OCP instance is divided into a set of Units, each of which contains an AtomTable containing a hypergraph of Nodes and Links, and also a set of MindAgent objects embodying various cognitive processes. The MindAgents are perpetually cycled through, carrying out recurrent actions and creating Task objects that carry out processor-intensive one-time actions. Different Units deal with different high-level cognitive functions and may contain different mixes of MindAgents, or at least differently-tuned MindAgents. Conceptually, the idea is that each Unit uses the same knowledge representations and cognitive mechanisms to achieve a particular aspect of intelligence, such as perception, language learning, abstract cognition, action selection, procedure learning, etc.
Note that a Unit may span several machines or may be localized on a single machine: in the multi-machine case, Nodes on one machine may link to Nodes living in other machines within the same unit. On the other hand, Atoms in one Unit may not directly link to Atoms in another Unit; though different Units may of course transport Atoms amongst each other. This architecture is workable to the extent that Units may be defined corresponding to pragmatically distinct areas of mental function, e.g. a Unit for language processing, a Unit for visual perception, etc.
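The Unit/MindAgent/Task scheduling pattern described above can be sketched as follows. The class names follow the text, but the implementation details (the scheduling loop, the toy `CountAgent`) are illustrative assumptions, not the actual OpenCog code:

```python
# Hypothetical sketch of a Unit cycling through its MindAgents.
# Class names follow the text; implementation details are illustrative.
from collections import deque

class AtomTable:
    def __init__(self):
        self.atoms = {}  # atom name -> attributes (toy stand-in for a hypergraph)

class MindAgent:
    def run(self, atomtable, task_queue):
        raise NotImplementedError

class CountAgent(MindAgent):
    """Toy agent: each cycle, schedules a one-time Task that counts atoms."""
    def run(self, atomtable, task_queue):
        task_queue.append(lambda: len(atomtable.atoms))

class Unit:
    def __init__(self, agents):
        self.atomtable = AtomTable()
        self.agents = agents
        self.tasks = deque()

    def cycle(self):
        # Recurrent actions: every MindAgent gets a slice each cycle.
        for agent in self.agents:
            agent.run(self.atomtable, self.tasks)
        # Processor-intensive one-time actions run as Tasks, then are discarded.
        results = [task() for task in self.tasks]
        self.tasks.clear()
        return results

unit = Unit([CountAgent()])
unit.atomtable.atoms["cat"] = {}
print(unit.cycle())  # [1]
```

A real Unit would distribute the AtomTable across machines and schedule agents by priority rather than in a fixed round-robin, but the cycle structure is the same.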
The full cognitive architecture has not yet been implemented, but an architecture capable of supporting it is in place, along with various key components. The following diagram illustrates an Experiential Learning Focused OCP Configuration:
Many different OCP configurations are possible within the overall design, but our plan is to work with a specific OCP configuration, intended for experiential learning and more specifically, for an OCP system that controls a real or simulated body that is perceiving and acting in some world. The breakdown into units conceived for this purpose, roughly indicated in this figure, is based loosely on ideas from cognitive science, and is fairly similar to that presented in other integrative AGI architectures proposed by Stan Franklin (2006), Aaron Sloman (1999) and others. However, the specifics of the breakdown have been chosen with care, with a view toward ensuring that the coordinated dynamics of the mechanisms and units will be able to give rise to the emergent structures and dynamics associated with intelligence, including a sophisticated self-model and an appropriately adaptive moving focus of attention.
Currently we are doing limited work with physical robotics, and so we value work using OCP to control a simple simulated body in 3D simulation worlds such as OpenSim. It would also be possible to construct OCP configurations unrelated to any kind of embodiment; for instance, we have designed a configuration intended specifically for mathematical theorem-proving. However, as argued in The Hidden Pattern (Publications#HiddenPattern), we believe that pursuing some form of embodiment is likely the best way to approach AGI. This is not because intelligence intrinsically requires embodiment, but rather because physical environments present a host of useful cognitive problems at various levels of complexity, and also because understanding of human beings and human language will probably be much easier for an AGI that shares human grounding in physical environments.
OCP's experiential learning configuration centers around a Unit called the Central Active Memory, which is the primary cognitive engine of the system. There is also a Unit called the Global Attentional Focus, which deals with Atoms that have been judged particularly important and subjects them to intensive cognitive processing. There are Units dealing with sensory processing and motor control; and then Units dealing with highly intensive PLN or PEL-based pattern recognition, using control mechanisms that are not friendly about ceding processor time to other cognitive processes. Each Unit may potentially span multiple machines; the idea is that communication within a Unit must be very rapid, whereas communication among Units may be slower.
The focus on experiential learning leads to yet another way of categorizing the OCP system's cognitive activities: goal-driven versus ambient. This classification is orthogonal to the control/forward/backward trichotomy discussed above. Ambient cognitive activity includes for instance:
- MindAgents that carry out basic PLN operations on the AtomTable, deriving obvious conclusions from existing knowledge.
- MindAgents that carry out basic perceptual activity, e.g. recognizing coherent objects in the perceptual stimuli coming into the system.
- MindAgents related to attention allocation and assignment of credit.
- MindAgents involved in moving Atoms between disk and RAM.
Goal-driven activity, on the other hand, involves an explicitly maintained list of goals that is stored in the Global Attentional Focus and Central Active Memory. Two key processes are involved:
- Learning SchemaNodes that, if activated, are expected to lead to goal achievement.
- Activating SchemaNodes that, if activated, are expected to lead to goal achievement.
The goal-driven learning process is ultimately a form of backward-chaining learning, but subtler than usual backward chaining due to its interweaving of PLN and PEL and its reliance on multiple cognitive Units.
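The two goal-driven processes (learning SchemaNodes expected to achieve goals, and activating them) can be sketched as follows. The `SchemaNode` class and the dictionary standing in for learned goal-schema associations are hypothetical simplifications of what the system actually represents as Atoms:

```python
# Hypothetical sketch of the two goal-driven processes from the text:
# learning SchemaNodes expected to achieve goals, and activating them.
# Names and data structures are illustrative, not the real OpenCog API.

class SchemaNode:
    def __init__(self, name, action):
        self.name = name
        self.action = action  # executable procedure the schema embodies

    def activate(self):
        return self.action()

# Explicit goal list, as maintained in the Global Attentional Focus.
goal_list = ["greet_user"]

# Process 1 ("learning"): associate each goal with a SchemaNode that,
# if activated, is expected to lead to goal achievement. In OCP this
# association would be learned via PLN/PEL backward chaining.
learned = {"greet_user": SchemaNode("say_hello", lambda: "hello")}

# Process 2 ("activation"): execute the schemata bound to current goals.
outputs = [learned[g].activate() for g in goal_list if g in learned]
print(outputs)  # ['hello']
```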