OpenCogPrime:Knowledge Creation

Knowledge Creation in OpenCog Prime

Learning, reasoning, invention and creativity are all aspects of knowledge creation. New knowledge is nearly always created via judicious combination and variation of old knowledge; but, due to the phenomenon of "emergence," this can give the impression of radical novelty. Superior capability for knowledge creation is the main thing that separates humans from other animals. Knowledge creation also enables the construction of context-specific knowledge representations from the basic representational mechanisms available.

The OpenCog Prime approach to knowledge creation involves, firstly, positing distinct knowledge creation mechanisms corresponding to the four major subtypes of knowledge:

  1. Sensory: Memory-driven simulation of sensory memory
  2. Declarative:
    1. Probabilistic logical inference
    2. Probabilistic Logic Networks formalism
    3. Clustering
    4. Conceptual Blending
    5. Statistical Pattern Mining
  3. Procedural: Probabilistic Evolutionary Program Learning
    1. MOSES, PLEASURE algorithms
  4. Episodic: Internal simulation
    1. Third Life simulation world

Algorithms for Procedural Knowledge Creation

The key algorithm for procedural knowledge creation used in the OCP design is the MOSES Probabilistic Evolutionary Learning algorithm. MOSES combines the power of two leading AI paradigms: evolutionary and probabilistic learning. As well as its use in AGI, MOSES has a successful track record in bioinformatics, text and data mining, and virtual agent control.
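
The full MOSES algorithm is considerably more sophisticated, but the flavor of combining evolutionary search with probabilistic modeling can be conveyed by a minimal PBIL-style sketch in Python; all names and parameters below are illustrative, not part of MOSES:

  import random

  def pbil_sketch(fitness, length=16, pop=50, rate=0.1, gens=100):
      """Minimal PBIL-style loop: evolve a probability vector over bit
      positions instead of only mutating individuals, illustrating the
      'evolutionary + probabilistic' combination that MOSES builds on."""
      probs = [0.5] * length                      # probabilistic model of good solutions
      for _ in range(gens):
          population = [[int(random.random() < p) for p in probs]
                        for _ in range(pop)]      # sample candidates from the model
          best = max(population, key=fitness)     # evolutionary selection
          probs = [(1 - rate) * p + rate * b      # shift the model toward the winner
                   for p, b in zip(probs, best)]
      return probs

  # Example: maximize the number of 1-bits.
  print([round(p, 2) for p in pbil_sketch(fitness=sum)])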

An alternative approach, complementary rather than contradictory to MOSES, is also being explored: the PLEASURE algorithm.

Algorithms for Declarative Knowledge Creation

The most complex algorithm for declarative knowledge creation in OCP is the Probabilistic Logic Networks (PLN) engine; see OpenCogPrime:ProbabilisticLogicNetworks.

  • The first general, practical integration of probability theory and symbolic logic.
  • Extremely broad applicability, with a successful track record in biological text mining and virtual agent control.
  • Based on mathematics described in Probabilistic Logic Networks, published by Springer in 2008.
  • Grounding of natural language constructs is provided via inferential integration of data gathered from linguistic and perceptual inputs.
KnowCreate1.png
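
As a concrete taste of the probability/logic integration, PLN's independence-based deduction rule estimates the strength of Inheritance A→C from the strengths of A→B and B→C plus the term probabilities. The Python sketch below implements that one formula, following the form given in the PLN book; the function name and calling convention are illustrative, not the actual engine API:

  def pln_deduction_strength(sAB, sBC, sB, sC):
      """Estimate s(A->C) from s(A->B), s(B->C) and the term
      probabilities s(B), s(C), under PLN's independence assumption."""
      if sB >= 1.0:                # degenerate case: B covers everything
          return sC
      return sAB * sBC + (1 - sAB) * (sC - sB * sBC) / (1 - sB)

  # "cats are mammals" (0.9), "mammals are animals" (0.95):
  print(pln_deduction_strength(sAB=0.9, sBC=0.95, sB=0.2, sC=0.3))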

In addition to PLN, OCP contains multiple heuristics for Atom creation, including "blending" of existing Atoms (illustrated below), as well as clustering and other heuristics; see OpenCogPrime:SpeculativeConceptFormation.

KnowCreate2.png
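
A minimal sketch of the blending heuristic, assuming a toy representation of concepts as sets of properties; the real speculative concept formation code operates on Atoms with truth and attention values:

  import random

  def blend(concept_a, concept_b):
      """Toy conceptual blending: the new concept keeps the properties the
      parents agree on, plus a sampled mix of their distinctive ones."""
      shared = concept_a & concept_b
      distinctive = sorted((concept_a | concept_b) - shared)
      borrowed = set(random.sample(distinctive, k=len(distinctive) // 2))
      return shared | borrowed

  pegasus = blend({"has_wings", "flies", "feathered"},
                  {"four_legs", "gallops", "maned"})
  print(pegasus)   # a mix of bird and horse properties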

Declarative Knowledge Creation via Natural Language Processing

See the page on OpenCogPrime:NLP.

General Cognitive Dynamics

The knowledge creation mechanisms corresponding to the four knowledge types may be subsumed under a single, universal mathematical scheme of “iterated cognitive transformation”.

KnowCreate3.png

For a detailed discussion of this general perspective, see http://www.goertzel.org/dynapsyc/2006/ForwardBackward.htm and the wiki topic MindOntology:Focused_Cognitive_Process.

However, none of the knowledge-subtype-specific knowledge creation mechanisms can stand on its own, except for simple or highly specialized problems.

To support general intelligence, the four basic knowledge creation mechanisms must richly interact and support each other. This interaction must occur within each cognitive unit in the overall Cognitive Architecture.

In computer science language, we may say that each of these mechanisms is subject to combinatorial explosion, and they must be interconnected in such a way that they can help each other with pruning. Understanding the KC mechanisms as manifestations of the same universal cognitive dynamic aids in working out the details of these interactions.

Attention Allocation

Regulating all this knowledge creation across a large knowledge base requires robust mechanisms for OpenCogPrime:AttentionAllocation — allocation of both processing and memory.

The allocation of attention based on identified patterns of goal-achievement is known as Credit Assignment.

Attention allocation is itself a subtle AI problem integrating declarative, procedural and episodic knowledge and knowledge creation.

The OpenCogPrime approach to attention allocation involves artificial economics, with two separate currencies:

  1. STI (short-term importance) currency, corresponding to processor time.
  2. LTI (long-term importance) currency, corresponding to RAM.

Each Atom in the AtomTable has an AttentionValue consisting of OpenCogPrime:STI and OpenCogPrime:LTI currency values, along with (in most cases) a probabilistic truth value.

The following figure illustrates the semantics of STI and LTI.

KnowCreate4.png

Each node or link in OCP's knowledge network is tagged with a probabilistic truth value, and also with an "attention value" containing Short-Term Importance and Long-Term Importance components.

An artificial-economics-based process is used to update these attention values dynamically — a complex, adaptive nonlinear process.
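
A minimal sketch of one cycle of such an economy, assuming hypothetical parameter names (rent, wage); the actual importance-updating dynamics are considerably more elaborate:

  def update_attention(atoms, rewarded, rent=1.0, wage=10.0):
      """Toy artificial economy: every Atom pays STI 'rent' each cycle,
      and Atoms that contributed to goal achievement earn STI 'wages'.
      Currency is conserved by routing payments through a central bank."""
      bank = 0.0
      for atom in atoms:
          atom["sti"] -= rent          # everyone pays to stay in active memory
          bank += rent
      for atom in rewarded:
          atom["sti"] += wage          # useful Atoms get paid
          bank -= wage
          atom["lti"] += 1.0           # and become less likely to be forgotten
      return bank                      # surplus stays in the bank for later wages

  a, b = {"sti": 5.0, "lti": 0.0}, {"sti": 5.0, "lti": 0.0}
  update_attention([a, b], rewarded=[a])
  print(a, b)   # a gained importance, b paid rent and lost a little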

Attention Allocation & Knowledge Creation

Patterns among currency values may be used as raw material for knowledge creation, a process called Map Formation.

Aspects of this process, corresponding to the four key types of knowledge, include:

  1. Declarative map formation: Formation of a new concept grouping together concepts that have often been active together (often had high STI at the same time) … a clustering problem (sketched after this list).
  2. Procedural map formation: Formation of a new procedure whose execution is predicted to generate a time series of STI values similar to one frequently historically observed … a procedure learning problem.
  3. Episodic map formation: Formation of new episodic memories with the property that experiencing/remembering these episodes would generate a time-series of STI values similar to one frequently historically observed … an optimization problem.
  4. Sensory map formation: Formation of new sensory memories with the property that perceiving these sensations would generate a pattern of STI values similar to one frequently historically observed.
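
A minimal sketch of the declarative case (item 1 above), assuming each Atom's STI history is available as a time series; a real implementation would use more robust clustering over the AtomSpace:

  from itertools import combinations
  from statistics import correlation   # Python 3.10+

  def form_declarative_maps(sti_history, threshold=0.8):
      """Group Atoms whose STI time series are strongly correlated,
      i.e. Atoms that tend to be important at the same time."""
      maps = []
      for a, b in combinations(sorted(sti_history), 2):
          if correlation(sti_history[a], sti_history[b]) > threshold:
              for group in maps:
                  if a in group or b in group:
                      group.update((a, b))
                      break
              else:
                  maps.append({a, b})
      return maps   # each set is a candidate new concept Node

  history = {"cat": [9, 1, 8, 1], "dog": [8, 2, 9, 1], "tax": [1, 9, 2, 8]}
  print(form_declarative_maps(history))   # [{'cat', 'dog'}]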

The following diagram illustrates the map formation process. Atoms associated in a dynamic "map" may be grouped to form new Atoms, so that the AtomSpace explicitly represents patterns in itself.

KnowCreate5.png

Summary of Some Synergies Between Knowledge Creation Processes

A detailed discussion of the synergies between the different knowledge creation processes in OpenCog Prime may be found at OpenCogPrime:EssentialSynergies. The next few paragraphs give a brief overview of the topic.

How Declarative KC helps Procedural KC

MOSES and PLEASURE both require:

  1. Procedure normalization, which involves the execution of logical rules and which, for complex programmatic constructs, may require nontrivial logical inference (a sketch follows this list).
  2. Probabilistic modeling of the factors distinguishing good from bad procedures (for a given purpose), which may benefit from the capability of advanced probabilistic inference to incorporate diverse historical factors.
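
A minimal sketch of the normalization step, assuming a toy s-expression representation of Boolean procedures; MOSES' actual reduction machinery handles a much richer normal form:

  def normalize(expr):
      """Recursively apply simple rewrite rules: cancel double negation,
      flatten nested ANDs, and drop duplicate arguments."""
      if not isinstance(expr, tuple):
          return expr                          # a bare variable
      op, *args = expr
      args = [normalize(a) for a in args]
      if op == "not" and isinstance(args[0], tuple) and args[0][0] == "not":
          return args[0][1]                    # not(not(x)) -> x
      if op == "and":
          flat = []
          for a in args:                       # and(x, and(y, z)) -> and(x, y, z)
              flat.extend(a[1:] if isinstance(a, tuple) and a[0] == "and" else [a])
          flat = sorted(set(flat), key=str)    # canonical order, no duplicates
          return flat[0] if len(flat) == 1 else ("and", *flat)
      return (op, *args)

  print(normalize(("and", "x", ("and", "y", "x"))))   # ('and', 'x', 'y')
  print(normalize(("not", ("not", "z"))))             # 'z'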

How Procedural KC helps Declarative KC

State-of-the-art logical reasoning engines (probabilistic or not) falter when it comes to "inference control" of complex inferences, or inferences over large knowledge bases. At any given point in a chain of reasoning, they know which inference steps are correct to take, but not which ones are potentially useful for achieving the overall inference goal. When logical inference gets stuck in reasoning about some concept, one recourse is to use procedure learning to figure out new procedural rules for distinguishing that concept from others. These rules may then be fed into the inference process, adding new information that often allows greater-confidence inference control.
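
A minimal sketch of this loop, assuming concept members are described by feature dictionaries; a one-feature threshold rule stands in for learned procedural knowledge, and its training accuracy stands in for the truth value handed back to inference:

  def learn_concept_rule(positives, negatives, feature):
      """Stand-in for procedure learning: find the threshold on one
      feature that best separates members of a concept from non-members,
      reporting an accuracy usable as the strength of a new logical rule."""
      best_acc, best_t = 0.0, None
      for t in sorted(x[feature] for x in positives + negatives):
          correct = sum(p[feature] >= t for p in positives) + \
                    sum(n[feature] < t for n in negatives)
          acc = correct / (len(positives) + len(negatives))
          if acc > best_acc:
              best_acc, best_t = acc, t
      return {"rule": f"{feature} >= {best_t}", "strength": best_acc}

  cats = [{"weight": 4.1}, {"weight": 5.0}]
  dogs = [{"weight": 9.0}, {"weight": 20.0}]
  print(learn_concept_rule(dogs, cats, "weight"))   # weight >= 9.0, strength 1.0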

How Episodic KC helps Declarative KC

When an inference process can’t tell which of many possible directions to choose, another potential solution is to rely on life-history: on the memory-store of prior inferences done in related situations.

In inference control as elsewhere, what worked in the past is often a decent guide to what will work in the future.
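
A minimal sketch, assuming an episodic store of past inference attempts keyed by context; all names here are illustrative:

  from collections import defaultdict

  class InferenceHistory:
      """Episodic guidance for inference control: remember which rule
      worked in which context, and rank rules by past success there."""
      def __init__(self):
          self.record = defaultdict(lambda: [0, 0])   # (context, rule) -> [wins, tries]

      def remember(self, context, rule, succeeded):
          stats = self.record[(context, rule)]
          stats[0] += bool(succeeded)
          stats[1] += 1

      def suggest(self, context, rules):
          def past_success(rule):
              wins, tries = self.record[(context, rule)]
              return (wins + 1) / (tries + 2)          # Laplace-smoothed success rate
          return max(rules, key=past_success)

  h = InferenceHistory()
  h.remember("biology", "deduction", True)
  h.remember("biology", "abduction", False)
  print(h.suggest("biology", ["deduction", "abduction"]))   # deduction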

How Declarative KC helps Episodic KC

An embodied organism experiences a great number of episodes during its lifetime, and without some kind of abstract organization, it will only be able to call these to mind via simple associative cueing. Declarative knowledge creation, acting on the episodic memory store, forms ontologies of episodes, allowing memory access based on abstract as well as associative cues.

Furthermore, episodes are not generally stored in memory in complete detail. Declarative knowledge is used to fill in the gaps, an aspect of "the constructive nature of memory".
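
A minimal sketch of the gap-filling step, assuming episodes are stored as partial records and declarative knowledge supplies typical defaults for each kind of episode:

  def reconstruct(episode, declarative_defaults):
      """Constructive memory: fields missing from a stored episode are
      filled in from declarative knowledge about what is typical."""
      typical = declarative_defaults.get(episode.get("kind"), {})
      return {**typical, **episode}    # recorded details override the defaults

  defaults = {"restaurant_visit": {"paid_bill": True, "sat_at_table": True}}
  partial = {"kind": "restaurant_visit", "companion": "Alice"}
  print(reconstruct(partial, defaults))
  # the recalled episode includes plausible details that were never stored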

How Procedural KC helps Episodic KC

Suppose the mind wants to re-create an episode that it doesn't recall in full detail, or to construct an episode that it never actually experienced.

In many cases this episode will involve other agents with their own dynamic behaviors.

Procedure-learning mechanisms are used to infer the processes governing other agents’ behaviors, which is needed to simulate other agents.
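
A minimal sketch, assuming another agent's behavior is observable as (state, action) pairs; a simple frequency model stands in for genuine procedure learning:

  from collections import Counter, defaultdict

  def infer_policy(observations):
      """Fit a simple state->action model of another agent from observed
      behavior, usable to drive that agent inside a simulated episode."""
      counts = defaultdict(Counter)
      for state, action in observations:
          counts[state][action] += 1
      return {s: acts.most_common(1)[0][0] for s, acts in counts.items()}

  seen = [("greeted", "smile"), ("greeted", "smile"), ("insulted", "leave")]
  policy = infer_policy(seen)
  print(policy["greeted"])   # 'smile', ready for use in internal simulation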

How Episodic KC helps Procedural KC

MOSES and PLEASURE require probabilistic modeling of the factors distinguishing good from bad procedures (for a certain purpose), which may benefit from simple associative priming based on episodic memory of what procedures worked before in similar circumstances.
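
A minimal sketch of such priming, assuming episodes record a context and the procedure that succeeded in it; the retrieved procedures could, for instance, seed the initial population of a MOSES run:

  def prime_procedures(episodes, current_context, top_k=2):
      """Associative priming: rank past successful procedures by the
      overlap between their episode's context and the current context."""
      def overlap(ep):
          return len(ep["context"] & current_context)
      ranked = sorted(episodes, key=overlap, reverse=True)
      return [ep["procedure"] for ep in ranked[:top_k]]

  episodes = [
      {"context": {"door", "locked"}, "procedure": "use_key"},
      {"context": {"door", "open"},   "procedure": "walk_through"},
      {"context": {"window"},         "procedure": "climb"},
  ]
  print(prime_procedures(episodes, {"door", "locked", "night"}))
  # ['use_key', 'walk_through'], candidate seeds for a procedure-learning run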

Interactions with Sensory KC

The three other forms of KC benefit from sensory KC simply by acting on the input it provides, in the same manner as they act on direct sensory input. Sensory KC itself works primarily by having the sensory cognitive units stimulated by other cognitive units generating "mock stimuli", e.g.:

  1. Daydreaming fake sensations and experiences one never had (episodic).
  2. Using imagistic thought to help guide mathematical reasoning (declarative/procedural).