1. Knowledge Creation in OpenCog Prime
2. Algorithms for Procedural Knowledge Creation
3. Algorithms for Declarative Knowledge Creation
4. General Cognitive Dynamics
5. Attention Allocation
6. Summary of Some Synergies Between Knowledge Creation Processes
Knowledge Creation in OpenCog Prime
Learning, reasoning, invention and creativity are all aspects of knowledge creation. New knowledge is nearly always created via judicious combination and variation of old knowledge, but due to the phenomenon of "emergence," this can lead to the impression of radical novelty. Superior capability for knowledge creation is the main thing that separates humans from other animals. Knowledge creation allows the construction of context-specific knowledge representations using the basic representational mechanisms available.
The OpenCog Prime approach to knowledge creation involves, firstly, positing distinct knowledge creation mechanisms corresponding to the four major subtypes of knowledge:
- Declarative: Probabilistic logical inference (the Probabilistic Logic Networks formalism), conceptual blending, and statistical pattern mining
- Procedural: Probabilistic evolutionary program learning (the MOSES and PLEASURE algorithms)
- Episodic: Internal simulation (the Third Life simulation world)
- Sensory: Memory-driven simulation of sensory memory
Algorithms for Procedural Knowledge Creation
The key algorithm for procedural knowledge creation used in the OCP design is the MOSES Probabilistic Evolutionary Learning algorithm. MOSES combines the power of two leading AI paradigms: evolutionary and probabilistic learning. As well as its use in AGI, MOSES has a successful track record in bioinformatics, text and data mining, and virtual agent control.
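To make the "probabilistic evolutionary" idea concrete, the following is a toy estimation-of-distribution loop in the general spirit of MOSES: score a population of candidates, fit a probabilistic model to the best of them, and sample the next population from that model. This is a deliberately simplified sketch (a PBIL-style univariate model over bit-vectors rather than MOSES's program trees and demes); all names are illustrative, not the actual MOSES API.

```python
import random

def eda_optimize(score, length=8, pop_size=60, generations=30, seed=0):
    """Toy estimation-of-distribution loop: illustrates the 'probabilistic
    evolutionary learning' idea behind MOSES, not MOSES itself."""
    rng = random.Random(seed)
    probs = [0.5] * length                    # marginal model over candidate bits
    best, best_score = None, float("-inf")
    for _ in range(generations):
        # sample a population from the current probabilistic model
        pop = [[1 if rng.random() < p else 0 for p in probs] for _ in range(pop_size)]
        elite = sorted(pop, key=score, reverse=True)[: pop_size // 5]
        for c in elite:
            s = score(c)
            if s > best_score:
                best, best_score = c, s
        # re-estimate the model from the elite, smoothed toward the old model
        for i in range(length):
            freq = sum(c[i] for c in elite) / len(elite)
            probs[i] = 0.9 * freq + 0.1 * probs[i]
    return best, best_score

# Example: maximize the number of 1-bits ("one-max")
best, s = eda_optimize(sum)
```

In MOSES proper, the model is built over normalized program trees within a deme, but the score/model/sample cycle is the same basic shape.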
An alternative approach, complementary rather than contradictory to MOSES, is also being explored: the PLEASURE algorithm.
Algorithms for Declarative Knowledge Creation
The key algorithm for declarative knowledge creation used in the OCP design is Probabilistic Logic Networks (PLN):
- The first general, practical integration of probability theory and symbolic logic.
- Extremely broad applicability. Successful track record in bio text mining, virtual agent control.
- Based on mathematics described in Probabilistic Logic Networks, published by Springer in 2008.
- Grounding of natural language constructs is provided via inferential integration of data gathered from linguistic and perceptual inputs.
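As a small illustration of the PLN formalism, here is the standard independence-based deduction strength rule from the Probabilistic Logic Networks book, which estimates the strength of A→C from A→B and B→C. This sketch handles only strength values; PLN truth values also carry confidence, which is omitted here, and the function name is mine.

```python
def pln_deduction_strength(s_ab, s_bc, s_b, s_c):
    """Independence-based PLN deduction: estimate s(A->C) from s(A->B),
    s(B->C) and the term probabilities s(B), s(C).
    Simplified: ignores truth-value confidence."""
    if s_b >= 1.0:
        return s_bc  # degenerate case: B covers everything
    return s_ab * s_bc + (1 - s_ab) * (s_c - s_b * s_bc) / (1 - s_b)
```

For example, with s(A→B) = s(B→C) = 1 the rule yields s(A→C) = 1, as it should for crisp implication.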
In addition to PLN, OCP contains multiple heuristics for Atom creation, including "blending" of existing Atoms, clustering, and other heuristics; see OpenCogPrime:SpeculativeConceptFormation.
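A blending heuristic of this kind can be sketched very simply: keep the properties two concepts share, plus a sample of the properties that distinguish them, yielding a novel candidate concept. This is a toy illustration of the general idea; the set-of-properties representation and function name are mine, not the actual OCP Atom API.

```python
import random

def blend_concepts(a_props, b_props, rng=None):
    """Toy conceptual-blending heuristic: shared properties are kept,
    plus a sample of each input concept's distinctive properties."""
    rng = rng or random.Random(0)
    shared = a_props & b_props
    distinctive = sorted(a_props ^ b_props)    # properties unique to one input
    rng.shuffle(distinctive)
    # take roughly half the distinctive properties to form the blend
    return shared | set(distinctive[: max(1, len(distinctive) // 2)])
```

In OCP the resulting blend would be created as a new Atom whose usefulness is then judged by the system's other cognitive processes.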
Declarative Knowledge Creation via Natural Language Processing
See page on OpenCogPrime:NLP
General Cognitive Dynamics
The knowledge creation mechanisms corresponding to the four knowledge types may be subsumed under a single, universal mathematical scheme of “iterated cognitive transformation”.
For a detailed discussion of this general perspective, see http://www.goertzel.org/dynapsyc/2006/ForwardBackward.htm and the wiki topic MindOntology:Focused_Cognitive_Process.
However, none of the knowledge-subtype-specific knowledge creation mechanisms can stand on its own, except for simple or highly specialized problems.
To support general intelligence, the four basic knowledge creation mechanisms must richly interact and support each other. This interaction must occur within each cognitive unit in the overall Cognitive Architecture.
In computer science language, we may say that each of these mechanisms is subject to combinatorial explosion, and that they must be interconnected in such a way that they can help each other with pruning. Understanding the KC mechanisms as manifestations of the same universal cognitive dynamic aids in working out the details of these interactions.
Regulating all this knowledge creation across a large knowledge base requires robust mechanisms for OpenCogPrime:AttentionAllocation — allocation of both processing and memory.
The allocation of attention based on identified patterns of goal-achievement is known as Credit Assignment.
Attention allocation is itself a subtle AI problem integrating declarative, procedural and episodic knowledge and knowledge creation.
The OpenCogPrime approach to attention allocation involves artificial economics, with two separate currencies:
- STI (short-term importance) currency, corresponding to processor time.
- LTI (long-term importance) currency, corresponding to RAM.
Each node or link in the NCE’s knowledge network is tagged with a probabilistic truth value, and also with an “attention value”, containing Short-Term Importance and Long-Term Importance components.
An artificial-economics-based process is used to update these attention values dynamically — a complex, adaptive nonlinear process.
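The economic flavor of this process can be sketched as a rent-and-wage scheme: Atoms earn STI ("wages") when they prove useful, and pay "rent" on their STI each cognitive cycle, while LTI accumulates more slowly and governs eviction from RAM. This is a minimal illustrative sketch under those assumptions; the class and method names are mine, not the actual OpenCog attention-allocation API.

```python
class AttentionValue:
    """Per-Atom attention value with STI and LTI components."""
    def __init__(self, sti=0.0, lti=0.0):
        self.sti = sti   # short-term importance: claim on processor time
        self.lti = lti   # long-term importance: claim on RAM

class AttentionBank:
    def __init__(self, rent=1.0, wage=10.0):
        self.rent, self.wage = rent, wage
        self.atoms = {}

    def stimulate(self, name):
        """Pay an Atom a 'wage' of STI for being useful just now."""
        av = self.atoms.setdefault(name, AttentionValue())
        av.sti += self.wage
        av.lti += 0.1 * self.wage   # usefulness also slowly builds LTI

    def tick(self):
        """One cognitive cycle: every Atom pays rent on its STI."""
        for av in self.atoms.values():
            av.sti = max(0.0, av.sti - self.rent)

    def forgettable(self, threshold=0.5):
        """Low-LTI Atoms are candidates for eviction from RAM."""
        return [n for n, av in self.atoms.items() if av.lti < threshold]
```

The nonlinearity in the real system comes from importance spreading between linked Atoms and adaptive rent/wage levels, which this sketch omits.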
Attention Allocation & Knowledge Creation
Patterns among currency values may be used as raw material for knowledge creation, a process called Map Formation.
Aspects of this process, corresponding to the four key types of knowledge, include:
- Declarative map formation: Formation of a new concept grouping together concepts that have often been active together (often had high STI at the same time) … a clustering problem.
- Procedural map formation: Formation of a new procedure whose execution is predicted to generate a time series of STI values similar to one frequently historically observed … a procedure learning problem.
- Episodic map formation: Formation of new episodic memories with the property that experiencing/remembering these episodes would generate a time-series of STI values similar to one frequently historically observed … an optimization problem.
- Sensory map formation: Formation of new sensory memories with the property that perceiving these sensations would generate a pattern of STI values similar to one frequently historically observed.
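The declarative case above can be sketched as a clustering computation over STI histories: atoms whose importance time-series are strongly correlated (i.e. that were often important together) are grouped into a candidate new concept. A real implementation would use proper clustering over the AtomSpace; this greedy pairwise version, with names of my own choosing, is just an illustration.

```python
from itertools import combinations

def declarative_maps(sti_history, min_corr=0.8):
    """Toy declarative map formation: group atoms whose STI time-series
    are strongly correlated."""
    def corr(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        vx = sum((x - mx) ** 2 for x in xs) ** 0.5
        vy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (vx * vy) if vx and vy else 0.0
    maps = []
    for a, b in combinations(sorted(sti_history), 2):
        if corr(sti_history[a], sti_history[b]) >= min_corr:
            maps.append({a, b})
    return maps
```

Each returned group would become a new concept Atom linked to its members, making a previously implicit activity pattern explicit.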
The following diagram illustrates the map formation process: Atoms associated in a dynamic "map" may be grouped to form new Atoms, so that the AtomSpace comes to explicitly represent patterns in itself.
Summary of Some Synergies Between Knowledge Creation Processes
A detailed discussion of the synergies between the different knowledge creation processes in OpenCog Prime may be found at OpenCogPrime:EssentialSynergies. The next few paragraphs give just a brief overview of the topic.
How Declarative KC helps Procedural KC
MOSES and PLEASURE both require:
- Procedure normalization, which involves execution of logical rules and which, in the case of complex programmatic constructs, may require nontrivial logical inference.
- Probabilistic modeling of the factors distinguishing good from bad procedures (for a certain purpose), which may benefit from the capability of advanced probabilistic inference to incorporate diverse historical factors.
How Procedural KC helps Declarative KC
State-of-the-art logical reasoning engines (probabilistic or not) falter when it comes to "inference control" of complex inferences, or inferences over large knowledge bases. At any given point in a chain of reasoning, they know which inference steps are correct to take, but not which ones are potentially useful for achieving the overall inference goal. When logical inference gets stuck in reasoning about some concept, one recourse is to use procedure learning to figure out new procedural rules for distinguishing that concept from others. These rules may then be fed into the inference process, adding new information that often allows greater-confidence inference control.
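A minimal sketch of this idea: learn, from positive and negative examples of a concept, a simple discriminating rule that can then be handed back to the inference engine as a new, crisp piece of evidence. Here a single-feature classifier stands in for the procedure that MOSES would learn; the representation and names are illustrative assumptions.

```python
def learn_discriminator(positives, negatives):
    """Toy procedural rule learning in support of inference control: find
    the single feature that best separates instances of a concept
    (positives) from non-instances (negatives), each given as a set of
    feature names."""
    features = set().union(*positives, *negatives)
    def accuracy(f):
        hits = sum(f in p for p in positives) + sum(f not in n for n in negatives)
        return hits / (len(positives) + len(negatives))
    best = max(features, key=accuracy)
    return best, accuracy(best)
```

The learned feature (e.g. "barks" for the concept dog) can then be asserted as a high-confidence implication for the reasoner to exploit.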
How Episodic KC helps Declarative KC
When an inference process can’t tell which of many possible directions to choose, another potential solution is to rely on life-history: on the memory-store of prior inferences done in related situations.
In inference control as elsewhere, what worked in the past is often a decent guide to what will work in the future.
How Declarative KC helps Episodic KC
An embodied organism experiences a great number of episodes during its lifetime, and without some kind of abstract organization, it will only be able to call these to mind via simple associative cueing. Declarative knowledge creation, acting on the episodic memory store, forms ontologies of episodes, allowing memory access based on abstract as well as associative cues. Furthermore, episodes are not generally stored in the memory in complete detail. Declarative knowledge is used to fill in the gaps — an aspect of “the constructive nature of memory”.
How Procedural KC helps Episodic KC
Suppose the mind wants to re-create an episode it doesn't recall in full detail, or to create an episode it never actually experienced.
In many cases this episode will involve other agents with their own dynamic behaviors.
Procedure-learning mechanisms are used to infer the processes governing other agents’ behaviors, which is needed to simulate other agents.
How Episodic KC helps Procedural KC
MOSES and PLEASURE require probabilistic modeling of the factors distinguishing good from bad procedures (for a certain purpose), which may benefit from simple associative priming based on episodic memory of what procedures worked before in similar circumstances.
Interactions with Sensory KC
The three other forms of KC benefit from sensory KC simply via acting on the input it provides, in the same manner as they act on direct sensory input. And sensory KC works primarily via having the sensory cognitive units stimulated by other cognitive units generating "mock stimuli", e.g.:
- Daydreaming fake sensations and experiences one never had (episodic).
- Using imagistic thought to help guide mathematical reasoning (declarative/procedural).