# OpenCog By Way of Wikipedia

One way to gain general background relevant for understanding OpenCog is via Wikipedia...

(To be clear: the pages linked below don't say anything about OpenCog directly. But they fill you in on ideas related to OpenCog -- stuff that, if you understand it, will help you understand what we're doing with OpenCog a lot better.... No pretense of completeness is made, and Wikipedia pages are always subject to change. Reader beware ... and enjoy!)

-- The **"atomspace"** is a graph database whose nodes and links carry types.

https://en.wikipedia.org/wiki/Graph_database

https://en.wikipedia.org/wiki/Type_theory
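
If you want something concrete to fiddle with, here is a tiny Python sketch of the idea of a typed graph store with deduplicated atoms. This is NOT the actual OpenCog API; the names (`ToyAtomSpace`, etc.) are made up for illustration:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Atom:
    """A typed atom: a Node has a name, a Link has outgoing atoms."""
    atom_type: str
    name: str = ""
    out: tuple = ()

class ToyAtomSpace:
    """A toy typed graph store: atoms are deduplicated by (type, name, out)."""
    def __init__(self):
        self.atoms = set()

    def add(self, atom_type, name="", out=()):
        atom = Atom(atom_type, name, tuple(out))
        self.atoms.add(atom)  # set semantics: identical atoms merge
        return atom

    def by_type(self, atom_type):
        return [a for a in self.atoms if a.atom_type == atom_type]

aspace = ToyAtomSpace()
cat = aspace.add("ConceptNode", "cat")
animal = aspace.add("ConceptNode", "animal")
inh = aspace.add("InheritanceLink", out=(cat, animal))
```

Note the key property: links point at other atoms (including other links), so the store is a typed hypergraph, not just a table of edges.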

-- Many of the Atom types are for representing **logical assertions**; in this respect the atomspace vaguely resembles the Datalog subset of Prolog.

https://en.wikipedia.org/wiki/Datalog
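
To make the Datalog flavor concrete, here is a minimal bottom-up (forward-chaining) Datalog evaluation in Python. This is purely illustrative; the atomspace represents assertions as typed Atoms, not Python tuples:

```python
# Toy bottom-up Datalog evaluation: facts are tuples, and one recursive
# rule derives ancestor(X, Z) from parent(X, Y) and ancestor(Y, Z).
facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}

def derive(facts):
    """Apply the rules to a fixed point (naive bottom-up evaluation)."""
    derived = set(facts)
    # Base rule: ancestor(X, Y) :- parent(X, Y).
    derived |= {("ancestor", x, y) for (p, x, y) in facts if p == "parent"}
    while True:
        # Recursive rule: ancestor(X, Z) :- parent(X, Y), ancestor(Y, Z).
        new = {("ancestor", x, z)
               for (p, x, y) in derived if p == "parent"
               for (a, y2, z) in derived if a == "ancestor" and y2 == y}
        if new <= derived:  # fixed point reached
            return derived
        derived |= new

closure = derive(facts)
```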

-- Among other knowledge, the atomspace can (and should be able to) store the kind of data that Cyc stores, in a vaguely similar way, except that the atomspace generalizes Cyc's two-valued (true/false) truth values to general floating-point truth values.

https://en.wikipedia.org/wiki/Cyc

-- Floating-point "truth values" attached to Atoms can be interpreted as "**probabilities**" (enabling Bayesian inference), as fuzzy-logic truth values, or as weights in Markov networks (e.g. hidden Markov models) and artificial neural networks. By contrast, Cyc's two-valued logic has to use "microtheories" to resolve inconsistencies.

https://en.wikipedia.org/wiki/Bayesian_network

https://en.wikipedia.org/wiki/Markov_logic_network

https://en.wikipedia.org/wiki/Hidden_Markov_model

https://en.wikipedia.org/wiki/Theory_(mathematical_logic)
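
As a toy illustration of floating-point truth values, here is a confidence-weighted revision of two (strength, confidence) estimates of the same statement. The formula below is a simplified stand-in invented for illustration, not OpenCog's actual revision rule:

```python
# Illustrative only: merging two floating-point "truth values" for the
# same statement by confidence-weighted averaging.
def revise(s1, c1, s2, c2):
    """Merge two (strength, confidence) estimates of one statement."""
    if c1 + c2 == 0:
        return 0.0, 0.0
    strength = (s1 * c1 + s2 * c2) / (c1 + c2)   # weighted average
    confidence = min(1.0, c1 + c2 - c1 * c2)     # evidence accumulates
    return strength, confidence

# Two independent estimates of "cats are friendly":
s, c = revise(0.8, 0.5, 0.6, 0.25)
```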

The above gives a flavor of what the atomspace can hold and represent. We (the OpenCog team) have experimented with all of the above, but have not yet scaled any of it up in a big way (except in a few narrow commercial applications). Two big items that work with the atomspace are:

-- **PLN, a probabilistic logic reasoner** that performs forward- and backward-chaining inference in a probabilistic manner.

https://en.wikipedia.org/wiki/Inference

https://en.wikipedia.org/wiki/Forward_chaining

https://en.wikipedia.org/wiki/Backward_chaining
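
Here is a sketch of one forward-chaining step using a simplified, independence-based deduction formula of the kind PLN employs. The term-probability "marginals" and all the numbers are invented for illustration:

```python
def deduction_strength(sAB, sBC, sB, sC):
    """Estimate P(C|A) from P(B|A), P(C|B) and the marginals P(B), P(C),
    assuming independence outside B (a simplified PLN-style deduction)."""
    if sB >= 1.0:
        return sC
    return sAB * sBC + (1 - sAB) * (sC - sB * sBC) / (1 - sB)

def forward_chain(links, marginals):
    """One forward-chaining pass over Inheritance links {(A, B): strength}."""
    derived = dict(links)
    for (a, b), sAB in links.items():
        for (b2, c), sBC in links.items():
            if b2 == b and (a, c) not in derived:
                derived[(a, c)] = deduction_strength(
                    sAB, sBC, marginals[b], marginals[c])
    return derived

links = {("cat", "mammal"): 0.95, ("mammal", "animal"): 0.98}
marginals = {"cat": 0.01, "mammal": 0.1, "animal": 0.2}
out = forward_chain(links, marginals)
```

Running this derives a ("cat", "animal") link whose strength is close to, but not exactly, the product of the two premise strengths; the correction term accounts for the ways A can reach C outside of B.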

-- **MOSES, a genetic programming learner**, used for learning. Don't confuse genetic algorithms with genetic programming: genetic programming uses the genetic algorithm to evolve programs. In the case of OpenCog, it learns graphs (which represent logical expressions, i.e. simple programs).

https://en.wikipedia.org/wiki/Genetic_programming
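
Here is a stripped-down flavor of program evolution: learning a boolean expression tree that computes XOR. Note the honest caveat in the comments: this toy uses elitism plus random restarts where real genetic programming (and MOSES) would use mutation and crossover:

```python
import random

# Toy program evolution: expression trees are nested tuples of the form
# ("and"|"or", left, right), ("not", child), or a leaf variable "a"/"b".
random.seed(0)

def evaluate(tree, env):
    if isinstance(tree, str):
        return env[tree]
    op = tree[0]
    if op == "not":
        return not evaluate(tree[1], env)
    left, right = evaluate(tree[1], env), evaluate(tree[2], env)
    return (left and right) if op == "and" else (left or right)

def random_tree(depth):
    if depth == 0 or random.random() < 0.3:
        return random.choice(["a", "b"])
    op = random.choice(["and", "or", "not"])
    if op == "not":
        return ("not", random_tree(depth - 1))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

CASES = [{"a": a, "b": b} for a in (False, True) for b in (False, True)]

def fitness(tree):  # number of input cases where the tree matches XOR
    return sum(evaluate(tree, env) == (env["a"] != env["b"]) for env in CASES)

def evolve(generations=30, pop_size=40):
    pop = [random_tree(3) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        # Caveat: elitism plus fresh random trees stands in for the
        # mutation and crossover operators of real genetic programming.
        pop = pop[:10] + [random_tree(3) for _ in range(pop_size - 10)]
    return max(pop, key=fitness)

best = evolve()
```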

-- The **pattern matcher**. This is a lower-level tool. It is possibly mis-named, as it is far more powerful than an "ordinary" pattern matcher: it is really a subgraph-isomorphism solver. Call it a graph satisfiability solver, or maybe a graph query language, like SQL but for graphs. It is vaguely DPLL-like in how it works.

https://en.wikipedia.org/wiki/SPARQL

https://en.wikipedia.org/wiki/Pattern_matching

https://en.wikipedia.org/wiki/Satisfiability

https://en.wikipedia.org/wiki/Subgraph_isomorphism_problem

https://en.wikipedia.org/wiki/DPLL_algorithm

https://en.wikipedia.org/wiki/Boolean_satisfiability_problem
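
The brute-force essence of subgraph matching can be sketched in a few lines. The real pattern matcher is far smarter than this exhaustive search, but the problem it solves is the same: find bindings for the pattern's variables such that the pattern's structure appears in the graph:

```python
from itertools import permutations

def match(pattern_edges, graph_edges, variables, nodes):
    """Naive subgraph match: try every injective variable->node assignment
    and keep those under which all pattern edges appear in the graph.
    Non-variable names in the pattern act as constants."""
    solutions = []
    for combo in permutations(nodes, len(variables)):
        binding = dict(zip(variables, combo))
        if all((binding.get(a, a), binding.get(b, b)) in graph_edges
               for (a, b) in pattern_edges):
            solutions.append(binding)
    return solutions

graph = {("cat", "mammal"), ("dog", "mammal"), ("mammal", "animal")}
# Pattern: ?X -> mammal  (find everything directly linked to "mammal")
sols = match([("?X", "mammal")], graph,
             ["?X"], ["cat", "dog", "mammal", "animal"])
```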

-- A **natural language subsystem.** This includes components for dependency parsing and for turning dependency parses into logical expressions:

https://en.wikipedia.org/wiki/Dependency_grammar

https://en.wikipedia.org/wiki/Argument_(linguistics)

https://en.wikipedia.org/wiki/Thematic_relation

https://en.wikipedia.org/wiki/Meaning%E2%80%93text_theory

https://en.wikipedia.org/wiki/Conceptual_graph

and also tools for **language generation** (e.g. microplanning and surface realization):

https://en.wikipedia.org/wiki/Natural_language_generation
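
Here is a toy of the parse-to-logic step: given (head, relation, dependent) triples for "Alice eats apples", produce a predicate-argument form. The relation names follow common dependency-grammar conventions; the output format is invented for illustration and is much simpler than what the actual pipeline produces:

```python
# A dependency parse of "Alice eats apples" as (head, relation, dependent).
parse = [("eats", "nsubj", "Alice"), ("eats", "obj", "apples")]

def to_logic(parse):
    """Build predicate(subject, object) from subject/object dependencies."""
    verb = parse[0][0]
    subj = next(d for h, r, d in parse if r == "nsubj")
    obj = next((d for h, r, d in parse if r == "obj"), None)
    return f"{verb}({subj}, {obj})" if obj else f"{verb}({subj})"

logical_form = to_logic(parse)
```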

-- The **flow of attention** through the system is regulated by ECAN, Economic Attention Networks, a customized variant of attractor neural networks:

https://en.wikipedia.org/wiki/Spreading_activation

https://en.wikipedia.org/wiki/Hopfield_network
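
The core dynamic of spreading activation is easy to sketch: a fraction of each node's attention diffuses along its links every step. This toy conserves total attention but ignores ECAN's actual economic mechanics:

```python
def spread(attention, links, rate=0.2, steps=3):
    """Diffuse a fraction of each node's attention along directed links.
    Flows are computed from a snapshot so ordering doesn't matter."""
    att = dict(attention)
    for _ in range(steps):
        nxt = dict(att)
        for src, dst in links:
            flow = rate * att[src]
            nxt[src] -= flow
            nxt[dst] += flow
        att = nxt
    return att

attention = {"cat": 100.0, "mammal": 0.0, "animal": 0.0}
links = [("cat", "mammal"), ("mammal", "animal")]
result = spread(attention, links)
```

After a few steps, attention has leaked from "cat" through "mammal" to "animal", while the total stays constant; that conservation is the hook for the "economic" framing.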

-- **Creativity** is achieved in the system via many means; one is concept blending:

https://en.wikipedia.org/wiki/Conceptual_blending
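
A crude nod to concept blending: merge two concepts' feature sets while dropping pairs of features declared incompatible. Real conceptual blending also maps relational structure between mental spaces; everything below (features, conflict list) is invented for illustration:

```python
# Pairs of features that cannot coexist in a blended concept.
CONFLICTS = {frozenset({"alive", "mechanical"})}

def blend(a_feats, b_feats):
    """Union the feature sets, then drop both sides of any conflict."""
    merged = set(a_feats) | set(b_feats)
    for pair in CONFLICTS:
        if pair <= merged:
            merged -= pair  # drop both sides of an incompatible pair
    return merged

pet = {"furry", "friendly", "alive"}
robot = {"mechanical", "programmable"}
robo_pet = blend(pet, robot)  # a new concept neither parent contained
```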

-- **Action selection** (how does the system choose what to do?) is handled via OpenPsi, a customized version of the Psi model of motivated action:

https://en.wikipedia.org/wiki/Psi-Theory
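
Here is a toy of demand-driven action selection in the Psi spirit: score each action by how much it is expected to satisfy the most urgent demands, and pick the best. All demands, actions, and numbers are made up; the real Psi model tracks much more (arousal, resolution level, and so on):

```python
# Current satisfaction level of each demand, on a 0..1 scale.
demands = {"energy": 0.2, "novelty": 0.7, "safety": 0.9}
TARGET = 1.0

# Expected effect of each action on each demand (illustrative numbers).
actions = {
    "eat":     {"energy": 0.6},
    "explore": {"novelty": 0.3, "safety": -0.1},
    "rest":    {"energy": 0.2, "safety": 0.05},
}

def urgency(demand):
    return TARGET - demands[demand]  # bigger gap -> more urgent

def score(effects):
    """Weight each expected gain by the urgency of the demand it serves."""
    return sum(urgency(d) * gain for d, gain in effects.items())

chosen = max(actions, key=lambda a: score(actions[a]))
```

With the numbers above, "eat" wins because the energy demand is furthest from its target.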

-- Experimentation is ongoing with DeSTIN and other **deep machine learning** algorithms, to recognize patterns in perceptual data and feed these patterns into the atomspace:

https://en.wikipedia.org/wiki/Deep_learning

-- Goal-driven learning of procedures is an important aspect of OpenCog learning (alongside other learning algorithms); this is a kind of **reinforcement learning**:

https://en.wikipedia.org/wiki/Reinforcement_learning
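
The textbook core of reinforcement learning is the tabular Q-learning update, sketched below. OpenCog's goal-driven procedure learning is related in spirit, not identical in form:

```python
def q_update(q, state, action, reward, next_state, actions,
             alpha=0.5, gamma=0.9):
    """Move Q(s, a) toward reward + gamma * max over a' of Q(s', a')."""
    best_next = max(q.get((next_state, a), 0.0) for a in actions)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

q = {}
# One step of experience: taking "right" in state "s0" earned reward 1.0.
q_update(q, "s0", "right", 1.0, "s1", actions=("left", "right"))
```

Repeated updates like this let estimated action values propagate backward from rewarding outcomes toward the choices that led to them.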