A glossary of terms commonly used in the context of OpenCog & OpenCogPrime. Terms that have AGI-generic meanings are marked with a star (★). Where terms are used in multiple ways within the OpenCog sphere, more than one meaning is given.
Abduction: The PLN abduction rule, a specific First-Order PLN rule (if A implies C, and B implies C, then maybe A implies B), which embodies a simple form of abductive inference. OpenCog may also carry out abduction, as a general process, in other ways.
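As a toy illustration of the rule's structural pattern (the function name and the pair-based encoding of implications are invented for this sketch; the real PLN rule also computes a truth value for each candidate), abduction-style candidate generation might look like:

```python
from collections import defaultdict

def abduction_candidates(implications):
    """Given pairs (X, C) read as 'X implies C', propose candidate
    implications (A, B) for premises A, B sharing a conclusion C.
    The PLN truth-value formula for the candidates is omitted here."""
    by_conclusion = defaultdict(list)
    for premise, conclusion in implications:
        by_conclusion[conclusion].append(premise)
    candidates = set()
    for premises in by_conclusion.values():
        for a in premises:
            for b in premises:
                if a != b:
                    candidates.add((a, b))
    return candidates
```

For instance, from "birds fly" and "planes fly" the sketch proposes both "birds are planes" and "planes are birds"; the truth-value formulas would then assign these candidates (low) strengths.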
Action Selection★: The selecting of an Action in real time, without Deliberation. In minds with an elementary Cognitive Cycle, this generally refers to the selecting of an action during a single cognitive cycle. Action selection is at the center of Cognition, according to Stan Franklin's action selection paradigm in his book Artificial Minds http://www.msci.memphis.edu/~franklin/paradigm.html.
Active Schema Pool: The set of Schema currently in the midst of Schema Execution. A data structure within OpenCogPrime that contains the Schema that are currently checked every Cognitive Cycle regarding whether any of their component modules is worthy of execution (see Action Selection).
Adaptive Control System★: A System that, as it carries out external Actions and thus gains Feedback from its environment, also carries out internal Actions involving modifying its internals based on its judgements of how it will better achieve its Goals.
Adaptive Inference Control: Algorithms or heuristics for guiding PLN inference, that cause inference to be guided differently based on the context in which the inference is taking place, or based on aspects of the inference that are noted as it proceeds.
AGI Preschool★: A virtual world or robotic scenario roughly similar to the environment within a typical human preschool, intended for AGIs to learn in via interacting with the environment and with other intelligent agents
AGI System★: A software system (or a hardware system) that displays a considerable degree of General Intelligence. Strictly speaking, no AGI System yet exists, but we do have a number of proto-AGI Systems, some of which are described under AGI Projects.
Artificial Intelligence (AI)★: A branch of computer science and engineering that deals with intelligent behavior, learning, and adaptation in machines. The term Artificial Intelligence was first used in 1955 by John McCarthy to propose the 1956 Dartmouth Summer Research Project on Artificial Intelligence. See also Artificial General Intelligence and Wikipedia:Artificial_intelligence.
Atom: The basic entity used in OpenCog as an element for building representations. Some Atoms directly represent patterns in the world or mind, others are components of representations. There are two kinds of Atoms: Nodes and Links. See also OpenCog Atom types and the category Atom type.
Atom, Frozen: See Atom, Saved
Atom, Realized: An Atom that exists in RAM at a certain point in time
Atom, Saved: An Atom that has been saved to disk or other similar media, and is not actively being processed
Atom, Serialized: An Atom that is serialized for transmission from one software process to another, or for saving to disk, etc.
Atom2Link: A part of OpenCog’s language generation system, that transforms appropriate Atoms into words connected via link parser link types.
Atomspace: A collection of Atoms, comprising the central part of the memory of an OpenCog instance.
Attention★: The process of focusing mental activity (Actions) on some particular subset of a Mind; the process of bringing content to consciousness. In an intelligent system, this is the aspect of the system’s dynamics that guides which parts of the system's memory and functionality get more computational resources at a certain point in time.
Attention Allocation: The cognitive process concerned with managing the parameters and relationships guiding what the system pays attention to, at what points in time. This is a term inclusive of Importance Updating and Hebbian Learning.
Attentional Currency: Short Term Importance and Long Term Importance values are implemented in terms of two different types of artificial money, STICurrency and LTICurrency. Theoretically these may be converted to one another.
Attentional Focus: The Atoms in an OpenCog Atomspace whose ShortTermImportance values lie above a critical threshold (the AttentionalFocus Boundary). The Attention Allocation subsystem treats these Atoms differently. Qualitatively, these Atoms constitute the system’s main focus of attention during a certain interval of time, i.e. its moving bubble of attention.
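A minimal sketch of the boundary test (the function and the name-to-STI mapping are illustrative stand-ins for OpenCog's actual AttentionValue machinery):

```python
def attentional_focus(sti, boundary):
    """Return the names of atoms whose ShortTermImportance exceeds
    the AttentionalFocus boundary; 'sti' maps atom name -> STI value."""
    return {name for name, value in sti.items() if value > boundary}
```

In the real system the boundary itself may be adjusted dynamically, so the "bubble" grows and shrinks over time.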
Autopoietic System: An autonomous, self-producing system with self-defined boundaries; cells and organisms are examples. As self-producing systems they contain all the organizational information necessary for their own development and continuation, though they depend on structural inputs. Such systems tend to be homeostatic, have finite trajectories, are relatively predictable, and rely on transmitted self-organization. The term originated with Maturana and Varela (1980; see also Varela et al. 1974).
Backward Chainer: A piece of software, wrapped in a MindAgent, that carries out backward chaining inference using PLN
CIM-Dynamic: Concretely-Implemented Mind Dynamic, a term for a cognitive process that is implemented explicitly in OpenCog (as opposed to allowed to emerge implicitly from other dynamics). Sometimes a CIM-Dynamic will be implemented via a single MindAgent, sometimes via a set of multiple interrelated MindAgents, occasionally by other means. See also OpenCogPrime:Concretely-Implemented Mind Dynamics
Cognition★: The entire, complex process via which a Mind chooses what to do next on the basis of sensory input and/or internally generated content. This is an imprecise term; sometimes it means any process closely related to intelligence, and sometimes it’s used specifically to refer to more abstract reasoning/learning/etc., as distinct from lower-level perception and action.
Cognitive Architecture★: A high level conceptual architecture design of a Mind which specifies how the mind is broken down into Functionally Specialized Units. The logical division of an AI system like OpenCog into interacting parts and processes representing different conceptual aspects of intelligence. It’s different from the software architecture, though of course certain cognitive architectures and certain software architectures fit more naturally together.
Cognitive Control Process★: A general class of Cognitive Processes, encompassing any cognitive process that is mainly concerned with exerting Control over some other subset of the Mind. A Cognitive Control Process may be either Focused or Global, depending on whether it aims to control a small or large section of the Mind in which it is embedded. See also Glocal.
Cognitive Cycle★: The basic “loop” of operations that an AGI system, used to control an agent interacting with a world, goes through rapidly each “subjective moment.” It minimally involves perceiving data from the world, storing data in memory, and deciding what if any new actions need to be taken based on the data perceived. It may also involve other processes like deliberative thinking or metacognition.
Cognitive Cycle: A typical OpenCog cognitive cycle lasts <1 second. Not all OpenCog processing needs to take place within a cognitive cycle.
Cognitive Equation: The principle, identified in Ben Goertzel’s 1994 book Chaotic Logic, that minds are collections of pattern-recognition elements, that work by iteratively recognizing patterns in each other and then embodying these patterns as new system elements. This is seen as distinguishing mind from “self-organization” in general, as the latter is not so focused on continual pattern recognition. Colloquially this means that “a mind is a system continually creating itself via recognizing patterns in itself.”
Cognitive Synergy★: The phenomenon by which different cognitive processes, controlling a single agent, work together in such a way as to help each other be more intelligent. Typically, if one has cognitive processes that are individually susceptible to combinatorial explosions, cognitive synergy involves coupling them together in such a way that they can help one another overcome each other’s internal combinatorial explosions.
Cognitive Synergy: The CogPrime design is reliant on the hypothesis that its key learning algorithms will display dramatic cognitive synergy when utilized for agent control in appropriate environments.
Cognitive Science★: Cognitive science is usually defined as the scientific study either of mind or of intelligence (e.g. Luger 1994). Practically every formal introduction to cognitive science stresses that it is a highly interdisciplinary research area in which psychology, neuroscience, linguistics, philosophy, computer science, anthropology, and biology are its principal specialized or applied branches. Therefore we may distinguish cognitive studies of either human or animal brains, mind and intelligence. See also Wikipedia:Cognitive_science.
Cognitive Structure★: A temporally localized pattern, or else a pattern extended over time. For example, a Cognitive Process is a specific kind of cognitive structure: one that is specifically dynamical in nature. Many of the most interesting cognitive structures are subcategories of Cognitive Processes.
CogServer: A component of OpenCog that wraps up an Atomspace and a number of MindAgents, along with other mechanisms like a Scheduler for controlling the activity of the MindAgents, and code for importing and exporting data from the Atomspace.
Combo: The programming language used internally by MOSES to represent the programs it evolves. SchemaNodes may refer to Combo programs, whether the latter are learned via MOSES or via some other means. The textual realization of Combo resembles LISP with less syntactic sugar. Internally a Combo program is represented as a program tree. Combo has been described as "LISP with a bad haircut," and with a bunch of primitives designed specifically for interaction with other aspects of the OpenCog system.
Composer: In the PLN design, a rule is denoted a composer if it needs premises for generating its consequent. See generator.
CogBuntu: An Ubuntu Linux remix that contains all required packages and tools to test and develop OpenCog.
Competition for Attention★: The competition to decide which perceptual information and local associations will receive more Attention. In humans and other animals, information carrying more affect has an advantage in this competition. In OpenCogPrime, the competition for attention is specifically managed by a process called Economic Attention Allocation. See also Global Cognitive Process.
Concept Creation: A general term for cognitive processes that create new ConceptNodes, PredicateNodes or concept maps representing new concepts.
Conceptual Blending★: The Cognitive Process by which two or more Concepts are combined to form a new one, via judiciously combining pieces of the old concepts. This may occur in OpenCog in many ways, among them the explicit use of a ConceptBlending MindAgent, that blends two or more ConceptNodes into a new one. There are particular sorts of heuristics commonly used in human minds for carrying out this operation, and many of these embody fundamental information-theoretic insights. The details and importance of conceptual blending have been discussed considerably in the Cognitive Science literature, e.g. in the book The Way We Think by Gilles Fauconnier and Mark Turner. See also Blending.
Concrete Operational Stage★: Based on the Piagetian Stages; as compared to the Infantile Stage and Pre-Operational Stage, more abstract logical thought is applied to the physical world at this stage. Among the feats achieved here are: reversibility -- the ability to undo steps already done; conservation -- understanding that properties can persist in spite of appearances; theory of mind -- an understanding of the distinction between what I know and what others know. (If I cover my eyes, can you still see me?) Complex concrete operations, such as putting items in height order, are easily achievable. Classification becomes more sophisticated, yet the mind still cannot master purely logical operations based on abstract logical representations of the observational world. In the theory of Developmental Stages of Uncertain Logic Based AI Systems, the Concrete Operational stage may be characterized as a stage where the logical inference faculty operates as an Adaptive Control System, in the specific sense that the "pruning function" used to control Forward Chaining Inference and Backward Chaining Inference utilizes knowledge gained from experience carrying out inferences in various contexts.
Confidence: A component of an OpenCog/PLN TruthValue, which is a scaling into the interval [0,1] of the weight of evidence associated with a truth value. In the simplest case (of a probabilistic Simple Truth Value), one uses confidence c = n / (n+k), where n is the weight of evidence and k is a parameter. In the case of an Indefinite Truth Value, the confidence is associated with the width of the probability interval.
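A minimal sketch of the simple confidence formula (the default k = 800 is illustrative here; configurations may use other values of the parameter):

```python
def confidence(n, k=800.0):
    """Scale weight of evidence n into [0,1) via c = n / (n + k),
    where k is a personality parameter of the system."""
    return n / (n + k)
```

Note that confidence grows monotonically with evidence but never reaches 1: no finite amount of evidence yields complete certainty.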
Confidence Decay: The process by which the confidence of an Atom decreases over time, as the observations on which the Atom’s truth value is based become increasingly obsolete. This may be carried out by a special MindAgent. The rate of confidence decay is subtle and contextually determined, and must be estimated via inference rather than simply assumed a priori.
Consciousness★: CogPrime is not predicated on any particular conceptual theory of consciousness. Informally, the AttentionalFocus is sometimes referred to as the “conscious” mind of a OpenCogPrime system, with the rest of the Atomspace as “unconscious” – but this is just an informal usage, not intended to tie the OpenCogPrime design to any particular theory of consciousness. The primary originator of the CogPrime design (Ben Goertzel) tends toward panpsychism, as it happens.
Context★: In addition to its general common-sensical meaning, in OpenCogPrime the term Context also refers to an Atom that is used as the first argument of a ContextLink. The second argument of the ContextLink then contains Links or Nodes, whose TruthValues are calculated as restricted to the context defined by the first argument. For instance, (ContextLink USA (InheritanceLink person obese <.3>)).
Core: The MindOS portion of OpenCog, comprising the Atomspace, the CogServer, and other associated “infrastructural” code.
Corrective Learning★: When an agent learns how to do something, by having another agent explicitly guide it in doing the thing. For instance, teaching a dog to sit by pushing its butt to the ground.
CSDLN★ (Compositional Spatiotemporal Deep Learning Network): A hierarchical pattern recognition network, in which each layer corresponds to a certain spatiotemporal granularity, the nodes on a given layer correspond to spatiotemporal regions of a given size, and the children of a node correspond to sub-regions of the region the parent corresponds to. Jeff Hawkins' HTM is one example of a CSDLN, and Itamar Arel's DeSTIN (currently used in OpenCog) is another.
Declarative Memory★: Stores the meanings of concepts, i.e., the definitions of concepts/patterns. For example, the definition of "face" in terms of eyes, nose, mouth, and their relative positions, etc.
Deduction★: In general, this refers to the derivation of conclusions from premises using logical rules. In PLN in particular, this often refers to the exercise of a specific inference rule, the PLN Deduction rule (A implies B, B implies C, therefore A implies C).
Deep Learning★: Learning in a network of elements with multiple layers, involving feedforward and feedback dynamics, and adaptation of the links between the elements. An example deep learning algorithm is DeSTIN, which is being integrated with OpenCog for perception processing.
Defrosting: Restoring, into the RAM portion of an Atomspace, an Atom (or set thereof) previously serialized onto disk or other storage medium.
Demand: In OpenCogPrime’s OpenPsi subsystem, this term is used in a manner inherited from the Psi model of motivated action. A Demand in this context is a quantity whose value the system is motivated to adjust. Typically the system wants to keep the Demand between certain minimum and maximum values. An Urge develops when a Demand deviates from its target range.
Deme: In MOSES, an “island” of candidate programs, closely clustered together in program space, being evolved in an attempt to optimize a certain fitness function. The idea is that within a deme, programs are generally similar enough that reasonable syntax-semantics correlation obtains.
Derived Hypergraph★: The SMEPH hypergraph obtained via modeling a system in terms of a hypergraph representing its internal states and their relationships. For instance, a SMEPH vertex represents a collection of internal states that habitually occur in relation to similar external situations. A SMEPH edge represents a relationship between two SMEPH vertices (e.g. a similarity or inheritance relationship). The terminology “edge/vertex” is used in this context, to distinguish from the “link/node” terminology used in the context of the Atomspace.
Dialogue★: Linguistic interaction between two or more parties.
Dialogue Control: The process of determining what to say at each juncture in a dialogue. This is distinguished from the linguistic aspects of dialogue, language comprehension and language generation. Dialogue control applies to Psynese or Lojban, as well as to human natural language.
Dimensional Embedding★: The process of embedding entities from some non-dimensional space (e.g. the Atomspace) into an n-dimensional Euclidean space. This can be useful in an AI context because some sorts of queries (e.g. “find everything similar to X”, “find a path between X and Y”) are much faster to carry out among points in a Euclidean space, than among entities in a space with less geometric structure.
Disembodied Mind★: A mind with no body at all, and no need for one. Examples would be: Google, were it to become generally intelligent; A mathematical theorem-proving system that became so adept it deserved the label General Intelligence.
Distributed Atomspace: An implementation of an Atomspace that spans multiple computational processes; generally this is done to enable spreading an Atomspace across multiple machines.
Dual Network★: A network of mental or informational entities with both a hierarchical structure and a heterarchical structure, and an alignment between the two structures such that each one helps with the maintenance of the other. Dual Network structures are emergent, dynamically evolving networks of Knowledge Components that superpose: a logical network composed of probabilistic Inheritance relationships (a network in which A is the child of B if "A inherits from B"); and an associative Hebbian Network in which A and B are connected if A and B tend to be utilized together.
Dual Network: The notion of an Emergent dual network is critical to the theory underlying the CogPrime design; it is hypothesized to be a critical emergent structure, that must emerge in a mind (e.g. in an AtomSpace) in order for it to achieve a reasonable level of human-like general intelligence (and possibly to achieve a high level of pragmatic general intelligence in any physical environment). The Dual Network archetype manifests itself in OpenCogPrime largely as follows: Hierarchical links are Inheritance relationships defined using PLN semantics and managed/modified by PLN; Associative, heterarchical links are HebbianLinks created based on common activity and utilized by Economic Attention Allocation; The flexible interoperation of inference and attention allocation is critical to maintaining a functioning emergent dual network structure.
Dynamical Phenomenon: Dynamical Systems Theory describes a variety of phenomena that occur in systems that change over time; many of these phenomena are of fundamental importance for intelligent systems. Examples are: Chaos, Emergence, Attractors, Feedback, Homeostasis, Control, Emergent Structures.
Economic Goal Selection: Goals are not special objects within OpenCogPrime; rather, any Atom may be taken as a Goal. There is a Pool of Supergoals, which are continually allocated STICurrency and LTICurrency merely for being supergoals.
Efficient Pragmatic General Intelligence: A formal, mathematical definition of general intelligence (extending the pragmatic general intelligence), that ultimately boils down to: the ability to achieve complex goals in complex environments using limited computational resources (where there is a specifically given weighting function determining which goals and environments have highest priority). More specifically, the definition weighted-sums the system’s normalized goal-achieving ability over (goal, environment) pairs, where the weights are given by some assumed measure over (goal, environment pairs), and where the normalization is done via dividing by the (space and time) computational resources used for achieving the goal.
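Schematically (the notation here is invented for illustration; the formal version appears in Goertzel's papers on pragmatic general intelligence), the definition has the shape:

```latex
\Pi_{\mathrm{eff}}(\pi) \;=\; \sum_{(g,e)} \nu(g,e)\,\frac{\Psi_{\pi}(g,e)}{R_{\pi}(g,e)}
```

where $\nu(g,e)$ is the assumed weighting measure over (goal, environment) pairs, $\Psi_{\pi}(g,e)$ is the system $\pi$'s normalized ability to achieve goal $g$ in environment $e$, and $R_{\pi}(g,e)$ is the space and time computational resources it uses to do so.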
Elegant Normal Form (ENF)★: Used in MOSES, this is a way of putting programs in a normal form while retaining their hierarchical structure. This is critical if one wishes to probabilistically model the structure of a collection of programs, which is a meaningful operation if the collection of programs is operating within a region of program space where syntax-semantics correlation holds to a reasonable degree. The Reduct library is used to place programs into ENF.
Embodied Communication Prior: The class of prior distributions over goal/environment pairs, that are imposed by placing an intelligent system in an environment where most of its tasks involve controlling a spatially localized body in a complex world, and interacting with other intelligent spatially localized bodies. It is hypothesized that many key aspects of human-like intelligence (e.g. the use of different subsystems for different memory types, and cognitive synergy between the dynamics associated with these subsystems) are consequences of this prior assumption. This is related to the Mind-World Correspondence Principle.
Embodiment★: The use of an AI software system to control a spatially localized body in a complex (usually 3D) world. There are also possible “borderline cases” of embodiment, such as a search agent on the Internet. In a sense any AI is embodied, because it occupies some physical system (e.g. computer hardware) and has some way of interfacing with the outside world.
Emergence★: A property or pattern in a system is emergent if it arises via the combination of other system components or aspects, in such a way that its details would be very difficult (not necessarily impossible in principle) to predict from these other system components or aspects.
Emergent Structure★: A Cognitive Structure that is intended to emerge from the dynamics operating in an OpenCogPrime system, either spontaneously or in the course of the system's interactions with an environment, possibly an environment including human teachers. Only a subset of the cognitively relevant structures and dynamics associated with the Novamente system fall into this category; others fall into the category of Concretely Implemented Structures.
Emotion★: Emotions are system-wide responses to the system’s current and predicted state. Dorner’s Psi theory of emotion contains explanations of many human emotions in terms of underlying dynamics and motivations, and most of these explanations make sense in a OpenCogPrime context, due to OpenCogPrime’s use of OpenPsi (modeled on Psi) for motivation and action selection.
Episodic Knowledge★: Knowledge about episodes in an agent’s life history, or the life-history of other agents. OpenCogPrime includes a special dimensional embedding space only for episodic knowledge, easing organization and recall.
Event★: An event is, simply, a logical predicate that maps time-points or time-intervals into truth values. Events may undergo Initiation or Termination at particular time-points or time-intervals. Events also have certain optional properties, e.g.: Event Persistence, the property of maintaining a roughly constant truth value after being initiated, until terminated; and Event Continuity, for non-persistent events, the property of maintaining continuous rather than discontinuous variation in truth value over time. These are not intrinsic properties of events; events may gain or lose persistence or continuity during the course of their existence.
Evolutionary Learning★: Learning that proceeds via the rough process of iterated differential reproduction based on fitness, incorporating variation of reproduced entities. MOSES is an explicitly evolutionary-learning-based portion of OpenCogPrime; but OpenCogPrime’s dynamics as a whole may also be conceived as evolutionary.
Exemplar★: In the context of imitation learning, when a supervisor wants to teach an OpenCog-controlled agent a behavior by imitation, he/she gives the agent an exemplar. To teach an agent "fetch" for instance, the supervisor throws a stick, runs to it, grabs it with his/her mouth, and returns to his/her initial position.
Exemplar (in the context of MOSES): A candidate chosen as the core of a new deme, or as the central program within a deme, to be varied by representation building for ongoing exploration of program space.
Explicit Knowledge Representation★: Knowledge representation in which individual, easily humanly identifiable pieces of knowledge correspond to individual elements in a knowledge store (elements that are explicitly there in the software and accessible via very rapid, deterministic operations)
Extension★: In PLN, the extension of a node refers to the instances of the category that the node represents. In contrast is the intension.
Fact★: A logical statement, e.g. "Kicks(john,mary,t1)", meaning that John kicks Mary at time t1.
Feeling★: Each of a mind's "feelings" is a label for a specific type of Arousal. Feelings are important as components of Drives. Examples: hunger, fear, urge to mate, pleasure, pain. Related: Experience, Emotion, Reinforcement.
Fishgram (Frequent and Interesting Sub-hypergraph Mining): A pattern mining algorithm for identifying frequent and/or interesting sub-hypergraphs in the Atomspace.
First-Order Inference (FOI): The subset of PLN that handles Logical Links not involving VariableAtoms or higher-order functions. The other aspect of PLN, Higher-Order Inference, uses Truth Value formulas derived from First-Order Inference.
Forgetting★: The facility of an AGI system to discard unused knowledge.
Forgetting: The process of removing Atoms from the in-RAM portion of AtomSpace, when RAM gets short and they are judged not as valuable to retain in RAM as other Atoms. This is commonly done using the LTI values of the Atoms (removing lowest LTI-Atoms, or more complex strategies involving the LTI of groups of interconnected Atoms). May be done by a dedicated Forgetting MindAgent. VLTI may be used to determine the fate of forgotten Atoms.
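The simplest LTI-based strategy might be sketched as follows (the Atom stand-in class and function are invented for illustration; the real Forgetting MindAgent can apply more complex, group-aware strategies and consult VLTI):

```python
from dataclasses import dataclass

@dataclass
class Atom:
    name: str
    lti: float  # LongTermImportance (stand-in for a real AttentionValue)

def forget(atoms, max_atoms):
    """Keep only the max_atoms highest-LTI atoms in RAM,
    dropping the lowest-LTI ones when memory runs short."""
    if len(atoms) <= max_atoms:
        return list(atoms)
    return sorted(atoms, key=lambda a: a.lti, reverse=True)[:max_atoms]
```

Forgotten Atoms need not be destroyed: depending on VLTI they may be Frozen to disk for possible later Defrosting.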
Formal Stage★: Based on the Piagetian Stages; abstract deductive reasoning (the process of forming and then testing hypotheses, and systematically reevaluating and refining solutions) develops at this stage, as does the ability to reason about purely abstract concepts without reference to concrete physical objects. This is adult human-level intelligence. Note that the capability for formal operations is intrinsic in Logic Based AGI Projects, but in-principle capability is not the same as pragmatic, grounded, controllable capability. In the theory of Developmental Stages of Uncertain Logic Based AI Systems, the Formal stage may be characterized as a stage where the logical inference faculty operates as a Symbolically Interactive System, in the specific sense that the "pruning function" used to control Forward Chaining Inference and Backward Chaining Inference utilizes the results of explicit logical inference about the optimal way to control inference in various relevant contexts.
Forward Chainer: A control mechanism (MindAgent) for PLN inference, that works by taking existing Atoms and deriving conclusion from them using PLN rules, and then iterating this process. The goal is to derive new Atoms that are interesting according to some given criterion.
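A toy sketch of the forward-chaining control loop (rules are modeled here as plain functions from a knowledge base to derived atoms; real PLN rules also compute truth values, and rule/premise choice is guided by an interestingness criterion):

```python
def forward_chain(atoms, rules, max_steps=100):
    """Repeatedly apply each rule to the current knowledge base,
    adding derived atoms, until a fixed point or step limit."""
    kb = set(atoms)
    for _ in range(max_steps):
        derived = set()
        for rule in rules:
            derived |= rule(kb)
        if derived <= kb:   # nothing new was produced
            break
        kb |= derived
    return kb

def deduction(kb):
    """Toy deduction rule over pairs: (a, b) and (b, c) yield (a, c)."""
    return {(a, c) for (a, b) in kb for (b2, c) in kb if b == b2}
```

Iterating the toy deduction rule over a chain of implications derives the transitive closure.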
Frame2Atom: A simple system of hand-coded rules for translating the output of RelEx2Frame (logical representation of semantic relationships using FrameNet relationships) into Atoms
Freezing: Saving Atoms from the in-RAM Atomspace to disk.
Friendly AI★: A nontechnical term introduced by Eliezer Yudkowsky to refer to an AI that is "benevolent to humans" -- not killing us nor torturing us, but helping us and basically letting us live happily, perhaps even helping us to do so. Guaranteeing AGI Friendliness in the situation where AGIs are undergoing Strong Self-Modification and rapidly improving their own intelligence beyond the human level is a tough problem, to say the least.
General Intelligence★: Often used in an informal, commonsensical sense, to mean the ability to learn and generalize beyond specific problems or contexts. Has been formalized in various ways as well, including formalizations of the notion of “achieving complex goals in complex environments” and “achieving complex goals in complex environments using limited resources.” Usually interpreted as a fuzzy concept, according to which absolutely general intelligence is physically unachievable, and humans have a significant level of general intelligence, but far from the maximally physically achievable degree.
Generalized Hypergraph: A hypergraph with some additional features, such as links that point to links, and nodes that are seen as “containing” whole sub-hypergraphs. This is the most natural and direct way to mathematically/visually model the Atomspace.
Global Cognitive Process: A general class of Cognitive Processes, encompassing all such processes that operate on a vast majority of Knowledge Components in a Mind on a reasonably rapid time scale (but not necessarily within, e.g., a single Cognitive Cycle). See also Competition for Attention, Forgetting.
Global Distributed Memory: Memory that stores items as implicit knowledge, with each memory item spread across multiple components, stored as a pattern of organization or activity among them.
Glocal Memory★: The storage of items in memory in a way that involves both localized and global, distributed aspects.
Goal★: An outcome or state of affairs that an intelligent system, in practice, strives to achieve.
Goal: An Atom representing a function that a system (like OpenCog) is supposed to spend a certain non-trivial percentage of its attention optimizing. The goal, informally speaking, is to maximize the Atom’s truth value.
Goal, Implicit★: A goal that an intelligent system, in practice, strives to achieve; but that is not explicitly represented as a goal in the system’s knowledge base.
Goal, Explicit★: A goal that an intelligent system explicitly represents in its knowledge base, and expends some resources trying to achieve.
Goal, Explicit: Goal Nodes (which may be Nodes or, e.g., ImplicationLinks) are used for this purpose in OpenCog.
Goal Context: An actual or hypothetical Goal, together with a Codelet Coalition of Procedures which, together, are estimated by a Mind to be plausibly likely to lead to the accomplishment of the Goal.
Goal-Driven Learning: Learning that is driven by the cognitive schematic – i.e. by the quest of figuring out which procedures can be expected to achieve a certain goal in a certain sort of context.
Hebbian Learning: An aspect of Attention Allocation, centered on creating and updating HebbianLinks, which represent the simultaneous importance of the Atoms joined by the HebbianLink.
Hebbian Links: Links recording information about the associative relationship (co-occurrence) between Atoms. These include symmetric and asymmetric HebbianLinks.
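The co-occurrence semantics of a symmetric HebbianLink can be illustrated with a small, hypothetical update rule in Python; the real HebbianLink update in OpenCog differs in detail.

```python
# Hedged sketch: a HebbianLink's strength drifts toward the observed
# rate at which both of its Atoms are simultaneously "important"
# (e.g. both in the AttentionalFocus).  Illustrative only.

def hebbian_update(strength, a_active, b_active, rate=0.1):
    """Nudge link strength toward 1 on co-activity, toward 0 otherwise."""
    target = 1.0 if (a_active and b_active) else 0.0
    return strength + rate * (target - strength)

s = 0.5
for _ in range(20):
    s = hebbian_update(s, True, True)  # repeated co-occurrence
assert s > 0.9  # strength has climbed toward 1
```

An asymmetric HebbianLink would instead track, e.g., the rate at which B becomes important shortly after A does, so the update for A→B and B→A would differ.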
Heterarchical Network: A network of linked elements in which the semantic relationships associated with the links are generally symmetrical (e.g. they may be similarity links, or symmetrical associative links). This is one important sort of subnetwork of an intelligent system; see Dual Network.
Hierarchical Network: A network of linked elements in which the semantic relationships associated with the links are generally asymmetrical, and the parent nodes of a node have a more general scope and some measure of control over their children (though there may be important feedback dynamics too). This is one important sort of subnetwork of an intelligent system; see Dual Network.
Hillclimbing★: A general term for greedy, local optimization techniques, including some relatively sophisticated ones that involve “mildly nonlocal” jumps.
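A minimal greedy hillclimbing loop might look like the following sketch (illustrative only; a "mildly nonlocal" variant would occasionally try a much larger perturbation).

```python
import random

# Generic hillclimbing sketch: repeatedly try a local perturbation of
# the current candidate and keep it only if it scores strictly better.

def hillclimb(score, start, neighbor, steps=1000):
    best = start
    best_score = score(best)
    for _ in range(steps):
        cand = neighbor(best)
        cand_score = score(cand)
        if cand_score > best_score:
            best, best_score = cand, cand_score
    return best

random.seed(0)
# Maximize -(x - 3)^2, i.e. climb toward x = 3 from x = 0.
result = hillclimb(lambda x: -(x - 3.0) ** 2,
                   start=0.0,
                   neighbor=lambda x: x + random.uniform(-0.5, 0.5))
assert abs(result - 3.0) < 0.2
```

Because only improving moves are accepted, this converges to a local optimum of the score function; for the single-peaked function above, that is also the global optimum.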
Huge-Resources Mind★: The mind of an AI that is theoretically constructible (unlike an Infinite-Resources Mind), but would require infeasibly much computational power. Designing huge-resources minds is vastly simpler than designing Modest-Resources Minds. This is intuitively obvious, and Hutter and Schmidhuber have shown it convincingly with their AIXItl and Goedel Machine designs. Another way to say this is that the hard problems of AGI design are ultimately all about computational (space and time) efficiency.
Human-Level Intelligence★: General intelligence that’s “as smart as” human general intelligence, even if in some respects quite unlike human intelligence. An informal concept, which generally doesn’t come up much in OpenCogPrime work, but is used frequently by some other AI theorists.
Human-Like Intelligence: General intelligence with properties and capabilities broadly resembling those of humans, but not necessarily precisely imitating human beings.
Hypergraph: A conventional hypergraph is a collection of nodes and links, where each link may span any number of nodes. OpenCog makes use of generalized hypergraphs (the Atomspace is one of these).
Imitation Learning★: Learning via copying what some other agent is observed to do.
Implication: Often refers to an ImplicationLink between two PredicateNodes, indicating an (extensional, intensional or mixed) logical implication.
Implicit Knowledge Representation★: Representation of knowledge via having easily humanly identifiable pieces of knowledge correspond to the pattern of organization and/or dynamics of elements, rather than via having individual elements correspond to easily humanly identifiable pieces of knowledge.
Importance: A generic term for the Attention Values associated with Atoms. Most commonly these are STI (short term importance) and LTI (long term importance) values. Other importance values corresponding to various different time scales are also possible. In general an importance value reflects an estimate of the likelihood an Atom will be useful to the system over some particular future time-horizon. STI is generally relevant to processor time allocation, whereas LTI is generally relevant to memory allocation.
Importance Decay: The process of Atoms’ importance values (e.g. STI and LTI) decreasing over time, if the Atoms are not utilized. Importance decay rates may in general be context-dependent.
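As a hedged illustration, exponential decay with a configurable half-life is one simple way such decay could work; the function name and parameter values here are illustrative, not OpenCog's actual ones.

```python
# Sketch of exponential importance decay: an unused Atom's STI halves
# every `half_life` cognitive cycles.  Illustrative parameters only.

def decay_sti(sti, cycles, half_life=100.0):
    """STI remaining after `cycles` cognitive cycles without use."""
    return sti * 0.5 ** (cycles / half_life)

assert abs(decay_sti(100.0, 100) - 50.0) < 1e-9   # one half-life
assert decay_sti(100.0, 300) < 13.0               # three half-lives ~ 12.5
```

Context-dependent decay would amount to choosing a different half-life per Atom or per context.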
Importance Spreading: A synonym for Importance Updating, intended to highlight the similarity with “activation spreading” in neural and semantic networks.
Importance Updating: The CIM-Dynamic that periodically (frequently) updates the STI and LTI values of Atoms based on their recent activity and their relationships.
Imprecise Truth Value★: Peter Walley’s imprecise truth values are intervals [L,U], interpreted as lower and upper bounds of the means of probability distributions in an envelope of distributions. In general, the term may be used to refer to any truth value involving intervals or related constructs, such as indefinite probabilities.
Indefinite Probability: An extension of a standard imprecise probability, comprising a credible interval for the means of probability distributions governed by a given second-order distribution.
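The idea can be illustrated with a hedged Monte Carlo sketch: given n positive observations out of N, place a second-order Beta distribution over the mean and report an interval [L, U] containing (say) 90% of its mass. This mirrors the concept of an indefinite probability; the actual PLN machinery differs in detail.

```python
import random

# Hedged sketch: credible interval for the mean of a Beta second-order
# distribution, estimated by sampling.  Illustrative only.

def credible_interval(n_pos, n_total, b=0.9, samples=20000, seed=1):
    rng = random.Random(seed)
    # Beta(n_pos + 1, n_neg + 1) posterior under a uniform prior.
    draws = sorted(rng.betavariate(n_pos + 1, n_total - n_pos + 1)
                   for _ in range(samples))
    lo = draws[int(samples * (1 - b) / 2)]
    hi = draws[int(samples * (1 + b) / 2)]
    return lo, hi

L, U = credible_interval(8, 10)   # 8 positives out of 10 observations
assert 0.0 <= L < U <= 1.0
assert L < 0.8 < U                # the observed frequency lies inside
```

More evidence (larger N at the same frequency) shrinks the interval, which is exactly the intuition indefinite probabilities are meant to capture.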
Indefinite Truth Value: An OpenCog TruthValue object wrapping up an indefinite probability
Induction★: The process of heuristically inferring that what has been seen in multiple examples will be seen again in new examples. In PLN, a specific inference rule (A -> B, A -> C, therefore B -> C). Induction in the broad sense may be carried out in OpenCog by methods other than PLN induction. When emphasis needs to be laid on the particular PLN inference rule, the phrase PLN Induction is used.
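As a hedged sketch, PLN obtains the induction strength by Bayes-inverting the first premise and then applying the deduction rule. The Python version below works on strength values only (real PLN truth values also carry confidence), and the formulas assume independence.

```python
# Hedged sketch of PLN-style induction on strength values only.
# PLN derives induction (A->B, A->C |- B->C) as inversion + deduction.

def deduction(sAB, sBC, sB, sC):
    """Strength of A->C from A->B and B->C (independence assumption)."""
    return sAB * sBC + (1 - sAB) * (sC - sB * sBC) / (1 - sB)

def inversion(sAB, sA, sB):
    """Strength of B->A from A->B, via Bayes' rule."""
    return sAB * sA / sB

def induction(sAB, sAC, sA, sB, sC):
    """Strength of B->C from A->B and A->C."""
    sBA = inversion(sAB, sA, sB)          # B->A
    return deduction(sBA, sAC, sA, sC)    # B->A, A->C |- B->C

s = induction(sAB=0.9, sAC=0.8, sA=0.2, sB=0.3, sC=0.5)
assert 0.0 <= s <= 1.0
```

With the example numbers, inversion gives sBA = 0.9 * 0.2 / 0.3 = 0.6, and deduction then yields s = 0.65.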
Inductive Learning Module: Performs "data-mining" functions. Given some facts in Working Memory, discover general rules that cover them. For example, from many instances of women having long hair, generalize (with some probability) that women have long hair.
Inductive Bias★: Inductive bias refers to knowledge that a mind is provided with at its creation, which helps it solve certain classes of Problems significantly better than it could have done without this knowledge. Eric Baum, in his book What is Thought?, has argued that much of human intelligence consists of inductive bias provided via the genome. Chomsky made this same argument regarding language in particular, years ago. Baum's current work involves attempting to create AGI by combining fairly simple Learning and Reasoning mechanisms with human-coded modules serving the role that inductive bias is hypothesized to provide for humans.
Inference★: The process of deriving conclusions from assumptions according to rules of Logic. In an OpenCog context, this often refers to the PLN inference system. Inference in the broad sense is distinguished from general learning via some specific characteristics, such as the intrinsically incremental nature of inference: it proceeds step by step. The most commonly discussed type of inference is Deductive Inference but other kinds such as Inductive Inference and Abductive Inference are equally important. There is also a distinction between First Order Inference and Higher Order Inference, which has a different significance within different Logic systems. One may also refer to Commonsense Inference, meaning inference about everyday human life type events rather than for instance Mathematical Theorem Proving. See also Predictive Inference, Postdictive Inference, Temporal Inference, Belief Revision.
Infantile Stage★: Based on the Piagetan Stages, the stage of development of a Mind that is concerned mainly with learning basic things required of any Autonomous Agent with Intelligence, such as: how to effectively integrate Perception, Action and Cognition within a Cognitive Cycle; how to construct a Self via interactions with the world and other minds (see Intersubjective Reality). In this stage a mind develops basic world-exploration driven by instinctive actions. Reward-driven reinforcement of actions learned by imitation, simple associations between words and objects, actions and images, and the basic notions of time, space, and causality are developed. The most simple, practical ideas and strategies for action are learned. In the theory of Developmental Stages of Uncertain Logic Based AI Systems, the Infantile stage may be characterized as a stage where the logical inference faculty operates as a stand-alone system without sensitive adaptation to its environment based on experience.
Inference Control★: A cognitive process that determines what logical inference rule (e.g. what PLN rule) is applied to what data, at each point in the dynamic operation of an inference process.
Infinite-Resources Mind★: This term refers to the minds of intelligent systems like Marcus Hutter's AIXI, that can only exist if supplied with infinite computational resources. See also Modest-Resources Mind and Huge-Resources Mind.
Information★: An informal concept that has been formalized in a variety of different ways, each with strengths and weaknesses. There are also various mathematical and conceptual relationships between the different formalizations. In thermodynamics and communication theory, the information content of a communication channel or probability distribution may be defined as the negative of its Entropy. The Algorithmic Information of an entity is defined as the length of the shortest self-delimiting program for producing that entity (note that this depends on an assumption regarding the Universal Turing Machine the program is running on; an assumption that is irrelevant as program length goes to infinity but potentially very relevant in practical cases). In Pattern Theory, information may be conceptualized as a synonym for Pattern in some entity that is recognized by some Observer. In this context, information is almost but not quite synonymous with Knowledge. The latter consists of Information that is stored within a Mind in a way that is usable by that mind for making judgments (i.e., it must be part of the mind's Memory).
Integrative AGI★: An AGI architecture, like CogPrime, that relies on a number of different powerful, reasonably general algorithms all cooperating together. This is different from an AGI architecture that is centered on a single algorithm, and also different from an AGI architecture that expects intelligent behavior to emerge from the collective interoperation of a number of simple elements (without any sophisticated algorithms coordinating their overall behavior).
Integrative Cognitive Architecture: A cognitive architecture intended to support integrative AGI.
Intelligence: An informal, natural language concept. “General intelligence” is one slightly more precise specification of a related concept; “Universal intelligence” is a fully precise specification of a related concept. Other specifications of related concepts made in the particular context of OpenCogPrime research are the pragmatic general intelligence and the efficient pragmatic general intelligence.
Intension★: In PLN, the intension of a node consists of Atoms representing properties of the entity the node represents.
Intentional memory: A system’s knowledge of its goals and their subgoals, and associations between these goals and procedures and contexts (e.g. cognitive schematics).
Internal Simulation World: A simulation engine used to simulate an external environment (which may be physical or virtual), used by an AGI system as its “mind’s eye” in order to experiment with various action sequences and envision their consequences, or observe the consequences of various hypothetical situations. Particularly important for dealing with episodic knowledge.
IRC Learning (Imitation, Reinforcement, Correction): Learning via interaction with a teacher, involving a combination of imitating the teacher, getting explicit reinforcement signals from the teacher, and having one’s incorrect or suboptimal behaviors corrected by the teacher in real time. This is a large part of how young humans learn.
Knowledge Base★: A shorthand for the totality of knowledge possessed by an intelligent system during a certain interval of time (whether or not this knowledge is explicitly represented). Put differently: this is an intelligence’s total memory contents (inclusive of all types of memory) during an interval of time.
Knowledge Component: A generic term for a Pattern (within a Mind) forming a "coherent piece of mental Information," which could be a Percept, a Concept, a Procedure, etc. See also Mental Pattern, Statement, Cognitive Map.
Knowledge Representation (KR)★: A system for representing Knowledge inside the Memory of an intelligent system. Put differently, KR has to do with the nature and organization of Knowledge Components in a Mind. Every mind must have some system of KR, but the system could be highly complex and dynamic ... or it could be simple, static, and easy to describe.
Language: Languages are complex Semiotic systems for communicating between minds. Learning human language is among the most difficult tasks an AGI System faces, because human language is, after all, specialized for humans. The correct strategy to achieve AGI understanding of language is controversial. Approaches include: injecting logical knowledge about language into an AGI's knowledge base; teaching an AGI language via pure conversation; having an AGI learn language via analysis of text; teaching an AGI language via its embodiment in a simulated or physical robot; teaching an AGI one of the Constructed Languages as a means of giving it enough knowledge of Semantics and Pragmatics that it can then learn the Syntax of natural language; various combinations of the previous methods. See also Natural Language, Syntax, Linguistic Semantics, Pragmatics, Phonology.
Language Comprehension★: The process of mapping natural language speech or text into a more “cognitive”, largely language-independent representation.
Language Comprehension: In OpenCog this has been done by various pipelines consisting of dedicated natural language processing tools, e.g. the pipeline text ==> Link Parser ==> RelEx ==> RelEx2Frame ==> Frame2Atom ==> Atomspace; and alternatively the pipeline text ==> Link Parser ==> Link2Atom ==> Atomspace. It would also be possible to do language comprehension purely via PLN and other generic OpenCog processes, without using specialized language processing tools.
Language Generation★: The process of mapping (largely language-independent) cognitive content into speech or text.
Language Generation: In OpenCog this has been done by various pipelines consisting of dedicated natural language processing tools, e.g. a pipeline: Atomspace ==> NLGen ==> text; or more recently Atomspace ==> Atom2Link ==> surface realization ==> text. It would also be possible to do language generation purely via PLN and other generic OpenCog processes, without using specialized language processing tools.
Learning★: The Process of a System gaining Information about some source of data (e.g. a table of numbers, or a complex environment, or the system itself) via interacting with it. The process of a system adapting based on experience, in a way that increases its intelligence (its ability to achieve its goals). The theory underlying CogPrime doesn’t distinguish learning from reasoning, associating, or other aspects of intelligence.
Learning Algorithm★: An algorithm that performs Learning. The creation of machine learning algorithms has been a very active part of AI over the last few decades. Most of this work has been focused on Narrow AI, yet in some AGI approaches, some of these machine learning algorithms may still be useful for various purposes, especially when integrated with each other or with other sorts of algorithms. See also Evolutionary Algorithm, Neural Net Learning Algorithm, Statistical Learning Algorithm, Frequent Itemset Mining, Statistical Pattern Mining, Clustering Algorithm.
Learning Server: In some OpenCog configurations, this refers to a software server that performs “offline” learning tasks (e.g. using MOSES or hillclimbing), and is in communication with an Operational Agent Controller software server that performs real-time agent control and dispatches learning tasks to and receives results from the Learning Server.
Linguistic Links: A catch-all term for Atoms explicitly representing linguistic content, e.g. WordNode, SentenceNode, CharacterNode….
Link: A type of Atom, representing a relationship among one or more Atoms. Links and Nodes are the two basic kinds of Atoms.
Link Parser: A natural language syntax parser, created by Sleator and Temperley at Carnegie-Mellon University, and currently used as part of OpenCog’s natural language comprehension and natural language generation system.
Link2Atom: A system for translating link parser links into Atoms. It attempts to resolve precisely as much ambiguity as needed in order to translate a given assemblage of link parser links into a unique Atom structure.
Lobe: A term sometimes used to refer to a portion of a distributed Atomspace that lives in a single computational process. Often different lobes will live on different machines.
Localized Memory★: Memory that stores each item using a small number of closely-connected elements.
Logic★: Usually refers to FOL (first-order predicate logic). FOL includes predicates, variables, functions, and the universal and existential quantifiers. For some knowledge representation purposes, using a subset of FOL may be desirable; one such subset is Horn logic. In an OpenCog context, Logic usually refers to a set of formal Rules for translating certain combinations of Atoms into “conclusion” Atoms. The paradigm case at present is the PLN probabilistic logic system, but OpenCog can also be used together with other logics.
Logical Links: Atoms whose truth values are primarily determined or adjusted via logical Rules, e.g. PLN’s InheritanceLink, SimilarityLink, ImplicationLink, etc. The term isn’t usually applied to other links like HebbianLinks whose semantics isn’t primarily logic-based, even though these other links can be processed via (e.g. PLN) logical inference via interpreting them logically.
Lojban: A constructed human language, with a completely formalized syntax and a highly formalized semantics, and a small but active community of speakers. In principle this seems an extremely good method for communication between humans and early-stage AGI systems.
Lojban++: A variant of Lojban that incorporates English words, enabling more flexible expression without the need for frequent invention of new Lojban words.
Long Term Importance (LTI): A value associated with each Atom, indicating roughly the expected utility to the system of keeping that Atom in RAM rather than saving it to disk or deleting it. It’s possible to have multiple LTI values pertaining to different time scales, but so far practical implementation and most theory has centered on the option of a single LTI value.
LTI: Long Term Importance
Map: A collection of Atoms that are interconnected in such a way that they tend to be commonly active (i.e. to have high STI, e.g. enough to be in the AttentionalFocus, at the same time)
Map Encapsulation: The process of automatically identifying maps in the Atomspace, and creating Atoms that “encapsulate” them; the Atom encapsulating a map would link to all the Atoms in the map. This is a way of making global memory into local memory, thus making the system’s memory glocal and explicitly manifesting the “cognitive equation.” This may be carried out via a dedicated MapEncapsulation MindAgent.
Map Formation: The process via which maps form in the Atomspace. This need not be explicit; maps may form implicitly via the action of Hebbian Learning. It will commonly occur that Atoms frequently co-occurring in the AttentionalFocus, will come to be joined together in a map.
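Map formation and encapsulation can be sketched together: scan snapshots of the AttentionalFocus, find Atoms that are frequently co-active, and union them into candidate maps (which an encapsulation step would then link to a new Atom). This is a hedged illustration; real OpenCog map encapsulation is more sophisticated (e.g. frequent subgraph mining), and the names here are illustrative.

```python
from collections import Counter
from itertools import combinations

# Hedged sketch: grow "maps" from pairs of Atoms that frequently
# co-occur in AttentionalFocus snapshots.  Illustrative only.

def find_maps(focus_snapshots, min_count=3):
    pair_counts = Counter()
    for snapshot in focus_snapshots:
        for pair in combinations(sorted(snapshot), 2):
            pair_counts[pair] += 1
    # Union the frequently co-active pairs into candidate maps.
    maps = []
    for (a, b), count in pair_counts.items():
        if count >= min_count:
            for m in maps:
                if a in m or b in m:
                    m.update((a, b))
                    break
            else:
                maps.append({a, b})
    return maps

snapshots = [{"cat", "pet", "fur"}, {"cat", "pet"},
             {"cat", "pet", "dog"}, {"dog", "bone"}]
assert {"cat", "pet"} in find_maps(snapshots)
```

Encapsulation would then create a new Atom linked to every member of each returned map, turning the implicit (global) pattern into explicit (local) memory.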
Memory★ | search : The property of a Mind wherein Patterns experienced within the mind at a previous point in time, in a certain previous context, may be re-elicited in the mind in the absence of that entire context. In many Minds, memory is subdivided into multiple Memory Subsystems, for reasons of practical computational efficiency.
Memory types: Refers to the different types of memory that are embodied in different data structures or processes. In the CogPrime architecture, e.g. declarative (semantic), procedural, attentional, intentional, episodic, sensorimotor.
Metacognition★: Thinking about thinking. Among biological species, it seems likely that metacognition occurs only in humans. Metacognition may be an extremely important part of general intelligence; in one hypothesis (Ben Goertzel, others), metacognition is the critical aspect of the transition to the Formal Developmental Stage from the Concrete Operational Developmental Stage. See also Focused Cognitive Process.
Mind-World Correspondence Principle: The principle that, for a mind to display efficient pragmatic general intelligence relative to a world, it should display many of the same key structural properties as that world. This can be formalized by modeling the world and mind as probabilistic state transition graphs, and saying that the categories implicit in the state transition graphs of the mind and world should be inter-mappable via a high-probability morphism.
Mind OS: A synonym for the OpenCog Core
MindAgent: An OpenCog software object, residing in the CogServer, that carries out some processes in interaction with the Atomspace. A given conceptual cognitive process (e.g. PLN inference, Attention allocation, etc.) may be carried out by a number of different MindAgents designed to work together.
Mindspace: A model of the set of states of an intelligent system as a geometrical space, imposed by assuming some metric on the set of mind-states. This may be used as a tool for formulating general principles about the dynamics of generally intelligent systems.
Modulators: Parameters in the Psi model of motivated, emotional cognition, that modulate the way a system perceives, reasons about and interacts with the world.
MOSES (Meta-Optimizing Semantic Evolutionary Search): An algorithm for procedure learning, which in the current implementation learns programs in the Combo language. MOSES is an evolutionary learning system, which differs from typical genetic programming systems in multiple aspects including: a subtler framework for managing multiple “demes” or “islands” of candidate programs; a library called Reduct for placing programs in Elegant Normal Form; and the use of probabilistic modeling in place of, or in addition to, mutation and crossover as means of determining which new candidate programs to try.
Motoric: Pertaining to the control of physical actuators, e.g. those connected to a robot. May sometimes be used to refer to the control of movements of a virtual character as well.
Narrow AI★: A kind of AI system that displays Specialized Intelligence in some domain. Most AI software in history, and under development, fits into this category. Many Narrow AI systems are based on the use of individual Learning Algorithms, customized for particular domains. See also Specialized Intelligence vs Artificial General Intelligence.
NARS (Non-Axiomatic Reasoning System): According to its creator, Pei Wang, "NARS is a general-purpose reasoning system, coming from my study of Artificial Intelligence (AI) and Cognitive Sciences (CogSci). What makes NARS different from conventional reasoning systems is its ability to learn from its experience and to work with insufficient knowledge and resources. NARS attempts to uniformly explain and reproduce many cognitive facilities, including reasoning, learning, planning, etc, so as to provide a unified theory, model, and system for AI as a whole. The ultimate goal of this research is to build a thinking machine." For a discussion of some of the types of reasoning that NARS's logic system can carry out see the entry on Inference.
Natural Language Comprehension: See Language Comprehension
Natural Language Generation: See Language Generation
Natural Language Processing: See Language Processing
Natural Language Processor: Performs the mapping of natural language to and from the internal knowledge representation. This mapping may also be defined by modifiable rules, thus allowing language learning.
NLGen: Software for carrying out the surface realization phase of natural language generation, via translating collections of RelEx output relationships into English sentences. Was made functional for simple sentences and some complex sentences; not currently under active development, as work has shifted to the related Atom2Link approach to language generation.
Node: A type of Atom. Links and Nodes are the two basic kinds of Atoms. Nodes, mathematically, can be thought of as “0-ary links”. Some types of Nodes refer to external or mathematical entities (e.g. WordNode, NumberNode); others are purely abstract, e.g. a ConceptNode is characterized purely by the Links relating it to other atoms. GroundedPredicateNodes and GroundedSchemaNodes connect to explicitly represented procedures (sometimes in the Combo language); ungrounded PredicateNodes and SchemaNodes are abstract and, like ConceptNodes, purely characterized by their relationships.
Node Probability: Many PLN inference rules rely on probabilities associated with Nodes. Node probabilities are often easiest to interpret in a specific context, e.g. the probability P(cat) makes obvious sense in the context of a typical American house, or in the context of the center of the sun. Without any contextual specification, P(A) is taken to mean the probability that a randomly chosen occasion of the system’s experience includes some instance of A.
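The context-free interpretation can be sketched in a couple of lines of Python (illustrative only; "occasions of experience" are modeled here as plain sets of observed concepts).

```python
# Hedged sketch of a context-free node probability: the fraction of
# "experience occasions" that include an instance of the concept.

def node_probability(concept, occasions):
    return sum(concept in occ for occ in occasions) / len(occasions)

occasions = [{"cat", "sofa"}, {"dog"}, {"cat"}, {"tv"}]
assert node_probability("cat", occasions) == 0.5
```

A contextual probability like P(cat | typical American house) would simply restrict the occasion set to those drawn from that context before computing the same fraction.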
Novamente Cognition Engine (NCE): A proprietary proto-AGI software system, the predecessor to OpenCog. Many parts of the NCE were open-sourced to form portions of OpenCog, but some NCE code was not included in OpenCog; and now OpenCog includes multiple aspects and plenty of code that was not in NCE.
OpenCog★: A software framework intended for development of AGI systems, and also for narrow-AI applications using tools that have AGI applications. Co-designed with the CogPrime cognitive architecture, but not exclusively bound to it.
OpenCog Self: The Self-Model in OpenCog Prime consists of a set of Atoms embodying logical relationships, largely formed by PLN inference acting on the results of Map Formation. Economic Attention Allocation will sometimes cause the system to simultaneously assign high Short Term Importance to a number of Atoms related to the Self-Model, which encourages links to form between these Atoms, thus making the Self-Model a richer and more fully interconnected pattern, and giving map formation more self-related patterns of collective activation to work with.
Operational Agent Controller (OAC): In some OpenCog configurations, this is a software server containing a CogServer devoted to real-time control of an agent (e.g. a virtual world agent, or a robot). Background, offline learning tasks may then be dispatched to other software processes, e.g. to a Learning Server.
Pattern Mining★: Pattern mining is the process of extracting an (often large) number of patterns from some body of information, subject to some criterion regarding which patterns are of interest. Often (but not exclusively) it refers to algorithms that are rapid or “greedy”, finding a large number of simple patterns relatively inexpensively.
Pattern Recognizer: (aka Concept Recognizer) Recognizes patterns in Working Memory. For example, if 2 eyes, 1 nose, 1 mouth on a head is presented as Facts in WM, then the "face" pattern is recognized and the result is added to WM as a new fact.
Patternism★: The philosophical principle holding that, from the perspective of engineering intelligent systems, it is sufficient and useful to think about mental processes in terms of (static and dynamical) patterns.
Perception★: The Process of assigning Meaning to incoming Sensory Stimuli, thus creating Percepts; the process of understanding data from sensors. When natural language is ingested in textual format, this is generally not considered perceptual.
Perception: Perception may be taken to encompass both pre-processing that prepares sensory data for ingestion into the Atomspace, processing via specialized perception processing systems like DeSTIN that are connected to the Atomspace, and more cognitive-level process within the Atomspace that is oriented toward understanding what has been sensed.
Perceptual Learning: Learning recognition of new objects, new categorizations of objects, new relationships between objects, and so forth. This leads to improvement of both Top-Down Perception and Bottom-Up Perception.
Piagetan Stages★: A series of stages of cognitive development hypothesized by developmental psychologist Jean Piaget, which are easy to interpret in the context of developing OpenCogPrime systems. The basic stages are: Infantile, Pre-Operational, Concrete Operational, and Formal. Post-formal stages have been discussed by theorists since Piaget and seem relevant to AGI, especially advanced AGI systems capable of strong self-modification.
Plan★: A set of Statements regarding Actions to be taken, as a (parallel, and/or sequential) group, in order to have a plausible chance of achieving some Goal. The process of formulating plans, aka Planning, is an important instance of Deliberation.
PLN: Probabilistic Logic Networks
PLN, First-Order: See First-Order Inference
PLN, Higher-Order: See Higher-Order Inference
PLN Rules: A PLN Rule takes as input one or more Atoms (the “premises”, usually Links), and outputs an Atom that is a “logical conclusion” of those Atoms. The truth value of the conclusion is determined by a PLN Formula associated with the Rule.
PLN Formulas: A PLN Formula, corresponding to a PLN Rule, takes the TruthValues corresponding to the premises and produces the TruthValue corresponding to the conclusion. A single Rule may correspond to multiple Formulas, where each Formula deals with a different sort of TruthValue.
Pragmatic General Intelligence: A formalization of the concept of general intelligence, based on the concept that general intelligence is the capability to achieve goals in environments, calculated as a weighted average over some fuzzy set of goals and environments.
Predicate Evaluation: The process of determining the Truth Value of a predicate, embodied in a PredicateNode. This may be recursive, as the predicate referenced internally by a GroundedPredicateNode (and represented via a Combo program tree) may itself internally reference other PredicateNodes.
Pre-Operational Stage★: Based on the Piagetan Stages, the stage of development of a Mind that involves the ability to solve fairly complex problems. At this stage we see the formation of mental representations, mostly poorly organized and un-abstracted, building mainly on intuitive rather than logical thinking. Word-object and image-object associations become systematic rather than occasional. Simple syntax is mastered, including an understanding of subject-argument relationships. One of the crucial learning achievements here is “object permanence”: infants learn that objects persist even when not observed. However, a number of cognitive failings persist with respect to reasoning about logical operations, and abstracting the effects of intuitive actions to an abstract theory of operations.
Probabilistic Logic Networks (PLN): A mathematical and conceptual framework for reasoning under uncertainty, integrating aspects of predicate and term logic with extensions of imprecise probability theory. OpenCog’s central tool for symbolic reasoning.
Procedure★: A Knowledge Component containing specific instructions regarding how to carry out an Action or Behavior. See also Knowledge Component, Procedure Learning, Procedural Memory, Schema.
Procedural Knowledge★: Knowledge regarding which series of actions (or action-combinations) are useful for an agent to undertake in which circumstances. In OpenCogPrime these may be learned in a number of ways, e.g. via PLN or via HebbianLearning of Schema Maps, or via explicit learning of Combo programs via MOSES or hillclimbing. Procedures are represented as SchemaNodes or Schema Maps.
Procedure Evaluation/Execution★: A general term encompassing both Schema Execution and Predicate Evaluation, both of which are similar computational processes involving manipulation of Combo trees associated with ProcedureNodes.
Procedure Node: A SchemaNode or PredicateNode.
Psi★: A model of motivated action, and emotion, originated by Dietrich Dörner and further developed by Joscha Bach, who incorporated it in his proto-AGI system MicroPsi.
Psynese: A system enabling different OpenCog instances to communicate without using natural language, via directly exchanging Atom subgraphs, using a special system to map references in the speaker’s mind into matching references in the listener’s mind.
Psynet Model: An early version of the theory of mind underlying OpenCogPrime, referred to in some early writings on the Webmind AI Engine and Novamente Cognition Engine. The concepts underlying the psynet model are still part of the theory underlying OpenCogPrime, but the name has been deprecated as it never really caught on.
Region Connection Calculus: A mathematical formalism describing a system of basic operations among spatial regions. Used in OpenCogPrime as part of spatial inference, to provide relations and rules to be referenced via PLN and potentially other subsystems.
Reinforcement Learning★: Learning procedures via experience, in a manner explicitly guided to cause the learning of procedures that will maximize the system’s expected future reward.
Reinforcement Learning: OpenCogPrime does this implicitly whenever it tries to learn procedures that will maximize some Goal whose Truth Value is estimated via an expected reward calculation (where “reward” may mean simply the Truth Value of some Atom defined as “reward”). Goal-driven learning is more general than reinforcement learning as thus defined; and the learning that CogPrime does, which is only partially goal-driven, is yet more general.
RelEx: A software system used in OpenCog as part of natural language comprehension, to map the output of the link parser into more abstract semantic relationships. These more abstract relationships may then be entered directly into the Atomspace, or they may be further abstracted before being entered into the Atomspace, e.g. by RelEx2Frame rules.
RelEx2Frame: A system of rules for translating RelEx output into Atoms, based on the FrameNet ontology. The output of the RelEx2Frame rules makes use of the FrameNet library of semantic relationships. The current (2012) RelEx2Frame rule-base is problematic and the RelEx2Frame system is deprecated as a result, in favor of Link2Atom. However, the ideas embodied in these rules may be useful; if cleaned up, the rules might profitably be ported into the Atomspace as ImplicationLinks.
Reflective Stage★: Based on the Piagetian Stages, and also called the Post-Formal Stage, this is the stage of development of a Mind that is characterized by thorough reflection on, and modification of, internal structures, based on both formal and intuitive modeling thereof. It is controversial whether any humans have truly mastered the Reflective stage. Followers of various meditative and pedagogical practices claim Reflective-stage abilities, but such claims are not as yet considered verified. In an AGI context, this corresponds roughly to a powerful capability for self-modification of internal software structures and cognitive dynamics, though it does not necessarily entail full Strong Self-Modification. In the theory of Developmental Stages of Uncertain Logic Based AI Systems, the Reflective stage may be characterized as a stage where the logical inference faculty possesses the capability to update its own logical inference rules and formulas, and to modify its high-level inference control procedures, in a context-appropriate way. I.e., the Inference Control subsystem, and the mind as a whole, become Reflectively Interactive Systems.
Representation Building: A stage within MOSES, wherein a candidate Combo program tree (within a deme) is modified by replacing one or more tree nodes with alternative tree nodes, thus obtaining a new, different candidate program within that deme. This process currently relies on hand-coded knowledge regarding which types of tree nodes a given tree node should be experimentally replaced with (e.g. an AND node might sensibly be replaced with an OR node, but not so sensibly replaced with a node representing a “kick” action).
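As a toy sketch of this kind of neighborhood generation: the replacement table and tuple-based program encoding below are hypothetical illustrations, not the actual MOSES data structures.

```python
# Illustrative sketch only: a toy neighborhood generator in the spirit of
# MOSES representation building. Hand-coded knowledge records which node
# types may sensibly replace which (e.g. a boolean connective may be
# swapped for another boolean connective, but not for a motor action).
REPLACEMENTS = {
    "and": ["or"],
    "or": ["and"],
    "not": [],
}

def neighbors(tree):
    """Yield candidate programs obtained by replacing one tree node."""
    op, args = tree[0], tree[1:]
    # Replace the node at the root of this subtree.
    for alt in REPLACEMENTS.get(op, []):
        yield (alt, *args)
    # Recurse into child subtrees.
    for i, arg in enumerate(args):
        if isinstance(arg, tuple):
            for new_arg in neighbors(arg):
                yield (op, *args[:i], new_arg, *args[i + 1:])

# ("and", "x", ("or", "y", "z")) has neighbors
# ("or", "x", ("or", "y", "z")) and ("and", "x", ("and", "y", "z")).
```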
Request for Services (RFS): In OpenCogPrime’s Goal-driven action system, an RFS is a package sent from a Goal Atom to another Atom, offering it a certain amount of STI currency if it is able to deliver to the goal what it wants (an increase in its Truth Value). RFSs may be passed on, e.g. from goals to subgoals to sub-subgoals, but eventually an RFS reaches a Grounded SchemaNode, and when the corresponding Schema is executed, the payment implicit in the RFS is made.
Robot Preschool: An AGI Preschool in our physical world, intended for robotically embodied AGIs.
Robotic Embodiment: Using an AGI to control a robot. The AGI may be running on hardware physically contained in the robot, or may run elsewhere and control the robot via networking methods such as WiFi.
Rule: A general logical statement, i.e. one with variables. For example, "forall x: Man(x) -> Mortal(x)" means that all men are mortal.
Scheduler: Part of the CogServer that controls which processes (e.g. which MindAgents) get processor time, at which point in time.
Schema: A script describing a process to be carried out. This may be explicit, as in the case of a GroundedSchemaNode, or implicit, as in the case of Schema Maps or ungrounded SchemaNodes. Alternatively: a Codelet embodied as a small program that carries out some process (perhaps externally, in a simulation world or NLP interface connected to OpenCog; or, in the case of a so-called Cognitive Schema, internally). Schemata are produced, within OpenCog, either by Evolutionary Learning (MOSES) or by Predicate Schematization acting on sets of Statements learned via Logic (PLN).
Schema Encapsulation: The process of automatically recognizing a Schema Map in an Atomspace, and creating a Combo (or other) program embodying the process carried out by this Schema Map, and then storing this program in the Procedure Repository and associating it with a particular SchemaNode. This translates distributed, global procedural memory into localized procedural memory. It’s a special case of Map Encapsulation.
Schema Execution: The process of “running” a Grounded Schema, similar to running a computer program. Or, phrased alternately: the process of executing the Schema embodied in a learned (or, in rare cases, hard-wired) Procedure within OpenCog. This may be recursive, as the schema referenced internally by a Grounded SchemaNode (and represented via a Combo program tree) may itself internally reference other Grounded SchemaNodes. Running a Schema is carried out by the "Combo interpreter", in combination with the Economic Action Selection MindAgent. The Combo interpreter executes individual code modules within schemata; the economic action selection process figures out which modules to execute at what times, based on the intrinsic relationships between the modules and other criteria.
Schema, Grounded: A Schema that is associated with a specific executable program (either a Combo program or, say, C++ code).
Schema Map: A collection of Atoms, including SchemaNodes, that tend to be enacted in a certain order (or set of orders), thus habitually enacting the same process. This is a distributed, globalized way of storing and enacting procedures.
Schema, Ungrounded: A Schema that represents an abstract procedure, not associated with any particular executable program.
Schematic Implication: A general, conceptual name for implications of the form ((Context AND Procedure) IMPLIES Goal)
SegSim: A name for the main algorithm underlying the NLGen language generation software. The algorithm is based on segmenting a collection of Atoms into small parts, and matching each part against memory to find, for each part, cases where similar Atom-collections already have known linguistic expression.
Self-Generating System: An abstract model of complex systems, which consists of a set of components that act on and combine with each other to create new components, recursively. It is similar to a Component System but is explicitly understood to be computable. A brief discussion of the relevance of these concepts for consciousness, by Allan Combs, is at http://www.goertzel.org/dynapsyc/1995/COMBS.html. The cognitive processes of Forward Synthesis and Backward Synthesis are specific manifestations of self-generating systems. The Cognitive Equation is a hypothesis regarding which kinds of self-generating systems will be able to give rise to General Intelligence given moderate computational resources. See also Complex System, Component System.
Self-Model★: A Cognitive Structure within a Mind, consisting of a pragmatic, approximate model of the Mind as a whole. The experience of the Self-Model within the Subjective Reality of the mind has been called the Phenomenal Self.
Self-Modification: A term generally used for AI systems that can purposefully modify their core algorithms and representations. Formally and crisply distinguishing this sort of Strong self-modification from “mere” learning is a tricky matter.
Sensorimotor: Pertaining to sensory data, motoric actions, and their combination and intersection.
Sensory: Pertaining to data received by the AGI system from the outside world. In an OpenCogPrime system that perceives language directly as text, the textual input will generally not be considered as “sensory” (on the other hand, speech audio data would be considered as “sensory”).
Short Term Importance: A value associated with each Atom, indicating roughly the expected utility to the system of devoting attention (e.g. processor time) to that Atom in the near future. It’s possible to have multiple STI values pertaining to different time scales, but so far practical implementation and most theory has centered on the option of a single STI value.
Similarity: A link type indicating the probabilistic similarity between two different Atoms. Generically this is a combination of Intensional Similarity (similarity of properties) and Extensional Similarity (similarity of members).
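In the crisp (non-fuzzy, fully confident) case, extensional similarity reduces to a Jaccard-style set ratio; a minimal sketch, with illustrative member sets (the concept and member names are hypothetical):

```python
# Illustrative sketch: extensional similarity between two concepts,
# treated here as crisp member sets. PLN's actual ExtensionalSimilarity
# handles fuzzy, uncertain membership, but the crisp case reduces to
# the ratio |A ∩ B| / |A ∪ B|.

def extensional_similarity(a, b):
    members_a, members_b = set(a), set(b)
    union = members_a | members_b
    if not union:
        return 0.0
    return len(members_a & members_b) / len(union)

cats = {"tom", "felix", "garfield"}
pets = {"tom", "felix", "rex"}
# extensional_similarity(cats, pets) == 2 / 4 == 0.5
```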
Simple Truth Value: A TruthValue involving a pair (s, d) indicating strength s (e.g. probability or fuzzy set membership) and confidence d. The confidence d may be replaced by other options such as a count n or a weight of evidence w.
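A minimal sketch of the count/confidence relationship, assuming the standard PLN mapping c = n / (n + k); the value of the "personality parameter" k used below is illustrative, as defaults vary by implementation.

```python
# Minimal sketch of a SimpleTruthValue. The count-to-confidence mapping
# c = n / (n + k) is the standard PLN formula; K's value here is an
# assumption for illustration.

K = 800.0  # "lookahead" personality parameter; exact default varies

def confidence_from_count(n, k=K):
    return n / (n + k)

def count_from_confidence(c, k=K):
    return k * c / (1.0 - c)

class SimpleTruthValue:
    def __init__(self, strength, count):
        self.strength = strength  # probability or fuzzy membership, in [0, 1]
        self.count = count        # amount of evidence observed

    @property
    def confidence(self):
        return confidence_from_count(self.count)

tv = SimpleTruthValue(strength=0.9, count=800)
# tv.confidence == 800 / 1600 == 0.5
```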
Simulation World: See Internal Simulation World
SMEPH (Self-Modifying Evolving Probabilistic Hypergraphs): A style of modeling systems, in which each system is associated with a derived hypergraph.
SMEPH Edge: A link in a SMEPH derived hypergraph, indicating an empirically observed relationship (e.g. inheritance or similarity) between two SMEPH Vertices.
SMEPH Vertex: A node in a SMEPH derived hypergraph representing a system, indicating a collection of system states empirically observed to arise in conjunction with the same external stimuli.
Society of Mind★: The concept, introduced by Marvin Minsky, that the mind may be viewed as a population of agents called Critics, each of which carries out some specialized function, and which collaborate to produce overall Cognitive action. The Society of Mind theory comes along with a specialized vocabulary for describing aspects of AI systems and Minds in general, some of which are fairly standard (Frames) and others of which are more idiosyncratic.
Spatiotemporal Inference: PLN reasoning including Atoms that explicitly reference spatial and temporal relationships.
Specialized Intelligence★: The capability of a system to carry out some instance of Problem Solving that is considered difficult. This includes the intelligence displayed by Narrow AI programs like Deep Blue or Google that are highly tailored to do just one sort of thing.
STI: Short Term Importance.
Strength: The main component of a TruthValue object, lying in the interval [0,1], referring either to a probability (in cases like InheritanceLink, SimilarityLink, EquivalenceLink, ImplicationLink, etc.) or a fuzzy value (as in MemberLink, EvaluationLink).
Strong Self-Modification★: The ability of a mind to modify all of its cognitive processes in all of their aspects of operation. Humans cannot do this, no matter how self-actualized they may become. AGIs, properly designed and educated, should be able to, one day. Strong self-modification presents a serious challenge in terms of ensuring Friendly AI. In the OpenCogPrime context, the term is generally used synonymously with Self-Modification.
Subsymbolic★: Involving processing of data using elements that have no correspondence to natural language terms or abstract concepts, and that are not naturally interpreted as symbolically “standing for” other things. Often used to refer to processes such as perception processing or motor control, which are concerned with entities like pixels or commands like “rotate servomotor 15 by 10 degrees theta and 55 degrees phi.” The distinction between “symbolic” and “subsymbolic” is conventional in the history of AI, but seems difficult to formalize rigorously; logic-based AI systems, for instance, are typically considered “symbolic”.
Supercompilation★: A technique for program optimization, which globally rewrites a program into a usually very different looking program that does the same thing. A prototype supercompiler was applied to Combo programs with successful results.
Surface Realization★: The process of taking a collection of Atoms and transforming them into a series of words in a (usually natural) language. A stage in the overall process of language generation.
Symbol Grounding★: Any means by which a symbolic representation becomes non-arbitrary because of its binding to sensory experience. For example the word "apple" can be an arbitrary signifier. The mapping of the word "apple" to a node that detects apples from the vision sensors enables the binding of the word to sensory experience. Also, the mapping of a symbolic term into perceptual or motoric entities that help define the meaning of the symbolic term. For instance, the concept “Cat” may be grounded by images of cats, experiences of interactions with cats, imaginations of being a cat, etc.
Syntax-Semantics Correlation: In the context of MOSES and program learning more broadly, this refers to the property via which distance in syntactic space (distance between the syntactic structure of programs, e.g. if they’re represented as program trees) and semantic space (distance between the behaviors of programs, e.g. if they’re represented as sets of input/output pairs) are reasonably well correlated. This can often happen among sets of programs that are not too widely dispersed in program space. The Reduct library is used to place Combo programs in Elegant Normal Form, which increases the level of syntax-semantics correlation between them. The programs in a single MOSES deme are often closely enough clustered together that they have reasonably high syntax-semantics correlation.
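The two distance notions can be sketched concretely on tiny boolean programs; their statistical correlation over a population of programs is the quantity in question. The string encoding and token-based syntactic distance below are hypothetical stand-ins (MOSES works with Combo trees and behavioral vectors).

```python
# Toy illustration of syntactic vs. semantic distance on 2-input
# boolean programs.
from itertools import product

def syntactic_distance(s, t):
    """Crude syntactic distance: number of differing tokens."""
    a, b = s.split(), t.split()
    return sum(x != y for x, y in zip(a, b)) + abs(len(a) - len(b))

def truth_table(fn):
    """Semantics of a 2-input boolean program: its full truth table."""
    return tuple(fn(x, y) for x, y in product([False, True], repeat=2))

def semantic_distance(f, g):
    """Hamming distance between truth tables."""
    return sum(a != b for a, b in zip(truth_table(f), truth_table(g)))

programs = {
    "x and y": lambda x, y: x and y,
    "x or y":  lambda x, y: x or y,
    "x":       lambda x, y: x,
}
# "x and y" vs. "x or y": one token apart, two truth-table entries apart.
```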
System Activity Table: An OpenCog component that records information regarding what a system did in the past.
Temporal Inference★: Reasoning that heavily involves Atoms representing temporal information, e.g. information about the duration of events, or their temporal relationship (before, after, during, beginning, ending). As implemented in OpenCogPrime, it makes use of an uncertain version of the Allen Interval Algebra.
Truth Value★: A package of information associated with an Atom, indicating its degree of truth. SimpleTruthValue and IndefiniteTruthValue are two common, particular kinds. Multiple truth values associated with the same Atom from different perspectives may be grouped into CompositeTruthValue objects.
Universal Intelligence★: A technical term introduced by Shane Legg and Marcus Hutter, describing (roughly speaking) the average capability of a system to carry out computable goals in computable environments, where goal/environment pairs are weighted via the length of the shortest program for computing them.
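In symbols (following Legg and Hutter's notation), the universal intelligence of an agent π can be written as:

```latex
% Universal intelligence of agent \pi: the expected total reward
% V^{\pi}_{\mu} achieved in each computable environment \mu in the
% class E, weighted by 2^{-K(\mu)} where K(\mu) is the Kolmogorov
% complexity of (the shortest program computing) \mu.
\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}
```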
Unit: A Unit, in OpenCogPrime, is a functionally specialized component of the system (including the example of the Central Cognition Unit which is specialized to integrate information from all other Units). The overall CogPrime cognitive architecture explains how the functions of OpenCogPrime are subdivided into separate Units. The Single-Unit OpenCog Architecture explains how Cognition is architected within a single Unit.
Very Long Term Importance (VLTI): A bit associated with Atoms, which determines whether, when an Atom is forgotten (removed from RAM), it is saved to disk (frozen) or simply deleted.
Virtual AGI Preschool★: A virtual world intended for AGI teaching/training/learning, bearing broad resemblance to the preschool environments used for young humans.
Virtual Embodiment★: Using an AGI to control an agent living in a virtual world or game world, typically (but not necessarily) a 3D world with broad similarity to the everyday human world.
Webmind AI Engine: A predecessor to the Novamente Cognition Engine and OpenCog, developed 1997-2001 – with many similar concepts (and also some different ones) but quite different algorithms and software architecture.
Working Memory★: A conduit for information exchange among modules. WM is passive in the sense that it does not modify its contents.