From OpenCog

Knowledge Representation

In a general sense, knowledge representation in AI refers to the patterns in an AI system (or in some cases, patterns emergent between the AI system and its environment) that correlate with patterns the system has experienced, and/or patterns the system is likely to enact or experience in future.

In this very general sense, the easiest way to formally define KR is via reference to an ensemble of AIs (so that, across this ensemble, one can define correlations between experienced/enacted patterns and internal patterns).

In many AI designs, however, KR is much more explicit than the above description suggests. In traditional, symbolic AI systems, there are explicit, logical, human-comprehensible representations within the system's knowledge base that reflect the system's prior or future experiences in a transparent way. In AI systems built according to other methodologies, however, there need not be any explicit KR in this sense.

A key point for AGI is that, in an AGI system that is going to remotely approach human-level intelligence, knowledge representation is necessarily going to be mixed up with meta knowledge representation: the ability to create new contextually-appropriate representations. For instance, if an AGI is taught to play chess, rather than being programmed to do it, it's going to have to somehow conceive its own representation scheme(s) for internally representing chess pieces and their relationships. If an AGI is taught language, rather than having its linguistic knowledge fully preprogrammed, then it's going to have to somehow conceive its own representations for the linguistic constructs it has learned. And so forth. Specific domains generally demand specialized KR, in order to enable reasonably efficient domain-specific intelligent action selection; and an AGI by its nature cannot have preprogrammed specialized KR for every domain it confronts.

In particular, the KR scheme of an AGI that deals with the same sorts of problems that the human mind does, must be able to handle meta-KR in domains ranging across perceptual, motoric, procedural, declarative, episodic, abstract, linguistic, introspective, etc.

Four Kinds of Knowledge

One powerful approach to knowledge representation (and the one taken in OpenCog Prime) is to decompose knowledge into 4 major subtypes:

  1. Sensory = data coming into the system
  2. Procedural = actions carried out by the system (internally or externally)
  3. Episodic = memories of the system’s experience
  4. Declarative = knowledge abstracted from the system’s experience, actions and sensations
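The four subtypes above can be sketched as minimal data containers. This is purely illustrative: the class names and fields below are hypothetical stand-ins, not actual OpenCog structures.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class SensoryDatum:
    modality: str          # e.g. "vision"
    payload: tuple         # raw data coming into the system

@dataclass
class Procedure:
    name: str
    body: Callable         # an action the system can carry out

@dataclass
class Episode:
    timestamp: float
    events: List[str]      # a remembered sequence of experiences

@dataclass
class Declaration:
    statement: str
    strength: float        # abstracted, uncertain knowledge

# One store per knowledge subtype, holding example entries
memory = {
    "sensory":     [SensoryDatum("vision", (3, 4))],
    "procedural":  [Procedure("wave", lambda: "waving")],
    "episodic":    [Episode(0.0, ["saw ball", "kicked ball"])],
    "declarative": [Declaration("balls are kickable", 0.9)],
}
```

The point of the decomposition is that each store invites different access patterns: procedures are executed, episodes are replayed in order, declarations are queried and revised.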

While the OpenCog framework does not enforce this distinction, it does provide useful tools for managing these types of knowledge in specific ways. For instance:

  1. the OpenCogPrime:AtomTable is a flexible method of storing declarative knowledge, as well as some sorts of sensory, procedural and episodic knowledge;
  2. the OpenCog Procedure Repository is a flexible method of storing Procedures that are represented as simple LISP-like program trees (note that the index of a procedure in the repository is the Atom Handle of the OpenCogPrime:Node representing the OpenCogPrime:Procedure in the OpenCogPrime:AtomTable).
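A minimal sketch of the handle-indexed repository idea, with program trees modeled as nested tuples. The repository class, the tiny evaluator, and the handle value are all hypothetical; the real Procedure Repository and Combo tree format differ.

```python
class ProcedureRepository:
    """Stores program trees keyed by the Atom Handle of the
    node representing the procedure (here, just an integer)."""

    def __init__(self):
        self._procs = {}           # handle -> program tree

    def add(self, handle, tree):
        self._procs[handle] = tree

    def get(self, handle):
        return self._procs[handle]

def evaluate(tree, env):
    """Evaluate a tiny LISP-like tree: ("op", arg, ...) with
    variables and numeric literals at the leaves."""
    if not isinstance(tree, tuple):
        return env.get(tree, tree)     # variable lookup or literal
    op, *args = tree
    vals = [evaluate(a, env) for a in args]
    if op == "+":
        return sum(vals)
    if op == "*":
        out = 1
        for v in vals:
            out *= v
        return out
    raise ValueError(f"unknown operator: {op}")

repo = ProcedureRepository()
repo.add(42, ("+", "x", ("*", "x", 2)))      # handle 42 -> x + x*2
result = evaluate(repo.get(42), {"x": 5})    # 5 + 5*2 = 15
```

Keying the repository by Atom Handle means declarative knowledge *about* a procedure (stored in the AtomTable) and the procedure's executable body (stored in the repository) stay linked through a single identifier.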

The OpenCogPrime:AtomTable as a software structure may be used in many different ways. As used in OpenCog Prime, it bridges the gap between subsymbolic (neural net) and symbolic (logic / semantic net) representations, achieving the advantages of both, and synergies resulting from their combination. It is a container consisting of nodes and links (a weighted, labeled hypergraph), which are labeled with both

  1. Probabilistic weights, like an uncertain semantic network (these are OpenCogPrime:TruthValue objects)
  2. Hebbian weights, like an attractor neural network (these are OpenCogPrime:AttentionValue objects)

It is not quite a neural net, not quite a semantic net: the best of both worlds!
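The dual labeling can be sketched as follows. The class names mirror the ones above but do not implement the real C++ classes; the field names (strength, confidence, sti) are simplified assumptions.

```python
class TruthValue:
    """Probabilistic weight: how strongly, and with how much
    evidence, the atom is held to be true."""
    def __init__(self, strength, confidence):
        self.strength = strength
        self.confidence = confidence

class AttentionValue:
    """Hebbian weight: how important the atom currently is."""
    def __init__(self, sti=0.0):
        self.sti = sti             # short-term importance

class Atom:
    def __init__(self, name, tv, av):
        self.name, self.tv, self.av = name, tv, av

class Link(Atom):
    """A link is itself an atom, labeling a relationship among
    other atoms -- giving a weighted, labeled hypergraph."""
    def __init__(self, name, outgoing, tv, av):
        super().__init__(name, tv, av)
        self.outgoing = outgoing   # the atoms this link connects

cat    = Atom("cat",    TruthValue(0.9, 0.8),  AttentionValue(1.0))
animal = Atom("animal", TruthValue(0.95, 0.9), AttentionValue(0.2))
isa = Link("Inheritance", [cat, animal],
           TruthValue(0.98, 0.7), AttentionValue(0.5))
```

Inference processes read and update the truth values, while attention-allocation processes read and update the attention values over the same graph; that shared substrate is where the synergy comes from.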

In OpenCog Prime, episodic knowledge is stored largely via special extensions to the AtomTable such as the OpenCogPrime:SpaceServer and OpenCogPrime:TimeServer, together with the Mind's Eye internal simulation environment. The Space and Time servers allow Atoms to be indexed by absolute or relative space and time, which allows the AtomTable to be used to store sets of events constituting the outlines of episodes. These episode-outlines may then be internally replayed in the Mind's Eye, thus constituting episodic knowledge.
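A sketch of the time-indexing idea, using the standard-library bisect module to keep events sorted by timestamp. The class name echoes the TimeServer above, but the interface is invented for illustration.

```python
import bisect

class TimeServer:
    """Indexes atoms (here, plain string handles) by timestamp,
    so an episode outline can be retrieved and replayed in order."""

    def __init__(self):
        self._times = []       # sorted timestamps
        self._events = []      # handles, parallel to _times

    def add(self, t, handle):
        i = bisect.bisect(self._times, t)
        self._times.insert(i, t)
        self._events.insert(i, handle)

    def between(self, t0, t1):
        """Handles of events with timestamp in [t0, t1]."""
        lo = bisect.bisect_left(self._times, t0)
        hi = bisect.bisect_right(self._times, t1)
        return self._events[lo:hi]

ts = TimeServer()
ts.add(3.0, "kicked ball")     # insertion order need not be
ts.add(1.0, "saw ball")        # chronological order
ts.add(2.0, "ran to ball")
episode = ts.between(1.0, 2.5)
```

A range query like `between` is exactly what replaying an episode outline requires: give me everything that happened in this interval, in order.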

Sensory knowledge in OCP is to be represented as a combination of Atoms and more specialized representations. For instance, if an OCP based system is perceiving polygonal objects in a simulation world, it may be supplied with a PolygonTable which stores a collection of object-descriptions, in which each object is represented as a collection of coordinates (the corners of the polygons that constitute the object). Each perceived or remembered object may then be represented as an Atom, and the Atom Handle of this Atom is used as an index into the PolygonTable, which contains information regarding the polygons making up the object.
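The PolygonTable arrangement can be sketched like this, with Atom Handles again modeled as integers. The method names and the handle value are assumptions made for the example.

```python
class PolygonTable:
    """Stores detailed polygon geometry outside the atom space,
    keyed by the Atom Handle of the perceived object."""

    def __init__(self):
        self._polys = {}   # handle -> list of polygons (corner lists)

    def store(self, handle, polygons):
        self._polys[handle] = polygons

    def corners(self, handle):
        return self._polys[handle]

table = PolygonTable()
cube_handle = 7   # handle of the Atom representing the object
table.store(cube_handle, [
    [(0, 0), (1, 0), (1, 1), (0, 1)],   # one face as a corner list
])
n_corners = len(table.corners(cube_handle)[0])
```

The division of labor is the same as with procedures: symbolic reasoning manipulates the compact Atom, while perception and rendering processes dereference the handle to get the full geometric detail.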

This decomposition of knowledge into four primary subtypes appears relevant to the human brain, as evidenced by cognitive science and neuroscience research. It is also natural in terms of existing computer science technologies and mathematical formalisms.

However, one may also isolate additional knowledge subtypes and build in specialized KR mechanisms for them. For instance, there is some literature arguing that the human brain contains special mechanisms for handling social knowledge. In the current OpenCog Prime design, this is handled via specialized mechanisms embedded within the AtomTable. For instance, when the system encounters another agent, it creates an OpenCogPrime:AgentNode corresponding to that agent, and there is a Theory of Mind OpenCogPrime:MindAgent that specifically updates these AgentNodes. In this case, there is a dynamic distinction between AgentNodes and ordinary declarative knowledge; there is not a dramatic representational difference on the software level, but there is a dramatic representational difference on the deeper level in which knowledge representation is considered as internal patterns correlating with experienced patterns.
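The "dynamic rather than structural" distinction can be illustrated in miniature: AgentNodes live in the same container as everything else, and only the update process singles them out. Both classes and the update rule are hypothetical simplifications.

```python
class AgentNode:
    """An ordinary node, distinguished only by its type and by
    which processes act on it."""
    def __init__(self, name):
        self.name = name
        self.believed_goals = []

class TheoryOfMindAgent:
    """A process that sweeps the shared atom space but updates
    only AgentNodes -- a dynamic, not structural, distinction."""
    def run(self, atomspace):
        for atom in atomspace:
            if isinstance(atom, AgentNode):
                # illustrative update: attribute a default goal
                if not atom.believed_goals:
                    atom.believed_goals.append("explore")

atomspace = [AgentNode("Bob"), "ordinary-declarative-node"]
TheoryOfMindAgent().run(atomspace)
```

On the software level, `AgentNode("Bob")` is just another entry in the container; what makes it "social knowledge" is the specialized dynamics applied to it.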

Probability as Glue

While each of the four types of knowledge mentioned above may be represented and manipulated separately, it’s also important to be able to convert knowledge between the various types.

For this purpose, having a cross-type mathematical formalism and conceptual framework is highly valuable from an AGI point of view. One powerful approach is to use probability theory as the “lingua franca” enabling integration of the different knowledge types. This is the approach taken in OpenCog.
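As a toy example of probability serving as glue between types, consider distilling episodic memories into a declarative statement with a probabilistic truth value. The frequency-based strength and count-based confidence below follow the common simple-truth-value pattern, but the exact formulas are an assumption for illustration.

```python
# Episodic knowledge: individual remembered experiences
episodes = [
    {"saw": "ball", "outcome": "kickable"},
    {"saw": "ball", "outcome": "kickable"},
    {"saw": "ball", "outcome": "not-kickable"},
]

# Count the evidence relevant to a candidate declarative statement
n = sum(1 for e in episodes if e["saw"] == "ball")
k = sum(1 for e in episodes
        if e["saw"] == "ball" and e["outcome"] == "kickable")

# Declarative knowledge: "balls are kickable", with an uncertain
# truth value derived from the episodic record.
strength = k / n              # observed frequency: 2/3
confidence = n / (n + 1)      # grows with evidence; constant 1 assumed
```

The same numbers flow the other way too: a declarative truth value can bias which procedures are tried and which episodes are judged worth storing, which is why a shared probabilistic currency pays off.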

This approach is fairly well aligned with the mainstream of academic AI research: probabilistic AI of various sorts has been increasingly popular and successful in recent years.