AI Documentation


This page gives pointers to online documentation, within this wiki site, explaining aspects of OpenCog's AI code (mostly currently implemented stuff, but some "intended for implementation" stuff here and there also). See also the sister page on Infrastructure Documentation.

For pointers to books and papers describing the theory underlying OpenCog AI or its aspects, see the Background Publications page; or see the CogPrime Overview article for a summary of Dr. Goertzel's AGI theories that inspired OpenCog originally and currently guide a significant portion of OpenCog development.

Some aspects of the CogPrime theory underlying OpenCog are not addressed on this page, because they don't exist in the code yet. Key examples are episodic memory, and map formation (meaning the recognition of attractor patterns in the system's dynamics, and the encapsulation of these patterns as Atoms). So this is not a review of the OpenCog design or theory (though it links to some of these), but a quick guided tour of all the AI stuff implemented in the code right now.

After the initial CogPrime-architecture subsection, the subsections on this page are arranged in alphabetical order -- NOT in the order in which a newbie should proceed in terms of understanding OpenCog. For general getting-started guidance see the Getting Started page. See also the Code Maturity Guide for an indication of the state of development of various parts of the system.

Another useful resource is the OpenCog Glossary, which provides a guide to the lingo that has developed for discussing various aspects of OpenCog and related AI issues.

If you're an OpenCog developer, feel free to add new sections here, or new links within existing sections. If you have an idea that's not yet implemented in OpenCog, though, this probably isn't the place for it; you should instead create a new wiki page and add it to the Ideas category.

CogPrime AGI Design

The CogPrime Overview article gives a several-dozen-page overview of the broad "CogPrime" AGI design that Ben Goertzel had in mind when initiating the OpenCog project.

This is the best place to get a concise overview of the CogPrime vision regarding how the various other AI structures and processes described on this page are intended to fit together into an overarching whole.

For more depth on these ideas, see the books linked from the Background_Publications page.

In published literature on these topics, the name "OpenCogPrime" or OCP is often used to refer to the OpenCog implementation of the CogPrime architecture. In practical communication within the OpenCog project, however, the term "OpenCog" is often used loosely, sometimes to refer to the OpenCog general-purpose software platform, and sometimes to the OpenCogPrime implementation or the CogPrime design.

CogPrime and other uses of OpenCog

It's worth noting that CogPrime is not the only kind of AGI architecture that can be built with the OpenCog framework, nor the only use for the other AI structures and processes discussed below on this page. However, CogPrime does have a special status in relation to OpenCog, as it was the main inspiration for the creation of the OpenCog framework in the first place.

In fact, the current practical uses of OpenCog AI algorithms are not especially CogPrime-ish, in that CogPrime is focused on synergetic interactions between components, whereas current applications tend to use single components of the system in application-specific ways. Most likely this is a result of the early stage of development of the whole OpenCog system, and we look forward to seeing practical applications that leverage more of the integrative CogPrime concepts as system development progresses. However, non-CogPrime uses of OpenCog are also encouraged -- the general attitude of the OpenCog community is that all uses of OpenCog should be encouraged, as they may end up leading to worthwhile applications, or pushing in interesting R&D directions.

AtomSpace

The Atomspace is the center of OpenCog -- a weighted, labeled hypergraph for storing any kind of knowledge.

See

  • Atoms
  • Atom types
  • Why Hypergraphs? -- A blog post by key OpenCog engineer Linas Vepstas discussing why hypergraphs are a good, general-purpose mode of knowledge representation

Current Atomspace code is at https://github.com/opencog/opencog/tree/master/opencog/atomspace
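
To give the flavor of this, here is a minimal sketch (in Scheme, the usual scripting interface; the node names are invented for the example, and the exact constructor syntax has varied a bit across versions) of putting weighted, labeled knowledge into the Atomspace:

(use-modules (opencog))

; Two labeled nodes, linked; the (stv strength confidence) values are
; the "weights" -- SimpleTruthValues attached to the Atoms.
(define cat (ConceptNode "cat" (stv 0.9 0.8)))
(define animal (ConceptNode "animal"))
(InheritanceLink cat animal (stv 0.95 0.9))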

An example of code that imports complex structured data (some biology ontologies) into the Atomspace is at: AGI-BIO knowledge import.

Concept Formation

"Concept Formation", in a CogPrime context, refers to the speculative creation of new ConceptNode representing potentially useful new concepts. This may potentially be done by many different methods.

Chapter 20 of Engineering General Intelligence, vol. 2, which is linked from here, describes various possible methods of concept creation. The ones listed here are the ones that currently seem most promising for near-term exploration.

Clustering

Clustering is a tried-and-true method for forming concepts from data. Some specific design ideas for implementing clustering for concept formation in OpenCog are on the Clustering page.
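
To give a concrete (toy) picture of the "cluster into concept" move, here is a sketch in Scheme; it is not the actual OpenCog clustering code, the node names are invented, and a real implementation would also threshold on the strength of each SimilarityLink:

(use-modules (opencog) (srfi srfi-1))

; Hypothetical data: SimilarityLinks around a seed concept.
(define seed (ConceptNode "dog"))
(SimilarityLink seed (ConceptNode "wolf"))
(SimilarityLink seed (ConceptNode "coyote"))

; Encapsulate the seed's similarity-neighborhood as a new concept.
(define cluster (ConceptNode "dog-cluster"))
(for-each
  (lambda (sim)
    (for-each (lambda (a) (MemberLink a cluster))
              (cog-outgoing-set sim)))
  (filter (lambda (l) (eq? (cog-type l) 'SimilarityLink))
          (cog-incoming-set seed)))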

Clustering is also intended to play a key role in the unsupervised language learning experiments we intend to run in OpenCog in the hopefully-not-too-distant future (as of March 2015); these experiments are described at

http://arxiv.org/abs/1401.3372

Blending

Concept blending is a deep idea from the cognitive psychology of creativity, which some thinkers say underlies all human creativity. The basic idea is that new concepts are formed via judiciously blending pieces from previously existing concepts in the same mind. The subtle point is what "judiciously" really means here.

A pretty good list of references is at: http://markturner.org/blending.html

Some basic ideas about implementation of blending in OpenCog are on the Blending page.

Outside of OpenCog, blending has been implemented in a few different AI software systems; one example is http://www.cc.gatech.edu/~riedl/pubs/iccc12.pdf

Predicate Conceptualization

Another good way to form concepts is via patterns extracted from predicate-argument relationships stored in the Atomspace.

For instance, if we have many links in the Atomspace of the form


EvaluationLink
  PredicateNode "in"
  ConceptNode "Beijing"
  ConceptNode "China"

EvaluationLink
  PredicateNode "in"
  ConceptNode "women"
  ConceptNode "China"

etc., then a ConceptNode can be formed consisting of all $X such that


EvaluationLink
  PredicateNode "in"
  ConceptNode $X
  ConceptNode "China"


These ConceptNodes can then be fed into PLN inference and conclusions drawn about them. These conclusions will lead to new links being formed, which can then be mined to form new concepts, etc.
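
A minimal sketch of how this might be done with the pattern matcher (in Scheme; the BindLink syntax has varied across OpenCog versions, the "in-China" node name is invented, and conventional Atomese wraps EvaluationLink arguments in a ListLink, unlike the schematic examples above):

(use-modules (opencog) (opencog exec))

; Find every $X located "in" China, and record each grounding as a
; member of a newly formed concept.
(cog-execute!
  (BindLink
    (VariableNode "$X")
    (EvaluationLink
      (PredicateNode "in")
      (ListLink (VariableNode "$X") (ConceptNode "China")))
    (MemberLink (VariableNode "$X") (ConceptNode "in-China"))))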

DeSTIN machine perception system

DeSTIN is a machine perception framework, based on deep learning principles (in one phrase that's been used, it's a "compositional spatiotemporal deep learning network").

It was developed first by Itamar Arel and his grad students at the University of Tennessee Knoxville, and then brought into the OpenCog fold.

Currently DeSTIN is implemented for computer vision (image and video processing), but in concept it should apply to a variety of sensory data types. Similar architectures have been used by many researchers for audio data processing, for example, and also for processing of other time series data (e.g. biological, financial,...).

Currently DeSTIN runs separately from the main OpenCog system. (However, code has been written to extract frequent patterns from a collection of DeSTIN states, and these patterns have been experimentally imported into the OpenCog Atomspace and used as input for PLN inference. But this was a research prototype experiment not done with general-purpose, reusable code.)

A C version of DeSTIN is here:

https://github.com/opencog/destin

and that page also contains links to the various publications on DeSTIN and its intended future integration into OpenCog.

Currently more effort is focused on developing a Python version of DeSTIN,

https://github.com/opencog/python-destin

utilizing the software infrastructure from Theano, which is also used within pylearn2. One aspiration is that pydestin could eventually be integrated into pylearn2.

As of Feb 2015, the python version does not yet contain all the functionality of the C version; but it is easier to extend and modify.

Dimensional Embedding

The dimensional embedding module serves to map the Atomspace (or appropriately defined subsets of the Atomspace, e.g. all ConceptNodes, etc.) into an N-dimensional space.

This may be useful because many types of queries can be more effectively executed in an N-dimensional space than in a hypergraph (e.g. "find everything that is somewhat similar to a particular target Atom A", or "find a path between Atom A and Atom B").
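
As a sketch of the general approach (not a transcription of the module's code): one simple way to build such an embedding is to pick N "pivot" Atoms and map each Atom to the vector of its similarity-strengths to the pivots, so that queries reduce to vector arithmetic. The sim procedure below is a stand-in for whatever similarity measure is used:

(use-modules (opencog) (srfi srfi-1))

; Hypothetical pivot-based embedding into N dimensions.
(define pivots (list (ConceptNode "p1") (ConceptNode "p2") (ConceptNode "p3")))

(define (embed atom sim)
  (map (lambda (p) (sim atom p)) pivots))

; Euclidean distance between embedded atoms: "find everything similar
; to Atom A" becomes a nearest-neighbour search in this space.
(define (distance v1 v2)
  (sqrt (fold + 0 (map (lambda (a b) (expt (- a b) 2)) v1 v2))))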

The code is at

https://github.com/opencog/opencog/tree/master/opencog/learning/dimensionalembedding

and the best source for the ideas is Chapter 21 of Engineering General Intelligence, vol. 2, which is linked from here.

The current code has been tested on a prototype application (embedding WordNodes linked by SimilarityLinks connoting co-occurrence frequency) and seemed to perform as expected on that case. It's not that well documented (as of Feb 2015) and possibly needs some improvement to be used in the full generality desired.

Economic Attention Allocation

ECAN is a system for regulating the OpenCog system's attention -- specifically, for continually updating the STI (Short-Term Importance) and LTI (Long-Term Importance) values associated with Atoms, and for creating HebbianLinks based on STI values.

Very roughly, ECAN could be described as an "economic neural network". STI and LTI values spread from Atom to Atom along links, like activation values in a neural network. But the math is different from that of ordinary neural networks, for one thing because the total amount of STI in the system is conserved (as is the total amount of LTI). So STI and LTI can be considered, roughly, as "virtual currencies."
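
A toy illustration of the conservation property (plain Scheme, not the actual ECAN code; the names are invented): when importance spreads, the source is debited by exactly what the targets are credited, so total STI never changes:

; Toy STI ledger: an alist from atom name to STI.
(define sti '((A . 100) (B . 0) (C . 0)))

; Spread a fraction of one atom's STI equally among some others.
(define (spread ledger from tos fraction)
  (let* ((amount (* fraction (assq-ref ledger from)))
         (share (/ amount (length tos))))
    (map (lambda (entry)
           (cons (car entry)
                 (cond ((eq? (car entry) from) (- (cdr entry) amount))
                       ((memq (car entry) tos) (+ (cdr entry) share))
                       (else (cdr entry)))))
         ledger)))

(spread sti 'A '(B C) 0.5)  ; => ((A . 50) (B . 25) (C . 25)) -- total still 100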

The concepts and math underlying ECAN are described in Chapters 5 and 6 of Engineering General Intelligence, vol. 2, which is linked from here.

The ECAN page on this wiki gives a sketchy but accurate discussion.

The current ECAN code (under active development as of March 15 2015, and rapidly changing/improving) is at

https://github.com/opencog/opencog/tree/master/opencog/dynamics/attention

Embodiment

The Embodiment portion of OpenCog, generally speaking, carries out functions such as

  • messaging between OpenCog and external embodiments of an OpenCog agent, such as robots or game characters
  • execution of action plans to be carried out via some external embodiment
  • transformation of perceptions received from some embodiment, into Atoms in the Atomspace

The Embodiment page describes the "legacy" Embodiment module, created between 2008 and 2014, which has in various versions been used to control game characters in several game worlds (CrystalSpace, Multiverse, RealXTend (a fork of OpenSim), Unity3D). It was also forked in 2009 to create code controlling a Nao humanoid robot at Xiamen University.

The legacy Embodiment code is here

https://github.com/opencog/opencog/tree/master/opencog/embodiment

and while there's a lot of complexity and mess there, there are also numerous lessons to be learned from it.

Currently (March 2015) the legacy Embodiment code is considered deprecated, and implementation of a new Embodiment design is underway.

A couple related notes are:

  • Some components like OpenPsi, which aren't really intrinsically part of the "embodiment" of OpenCog (OpenPsi is a more general approach to motivated action), are currently implemented within the legacy Embodiment module; they need to be disentangled
  • Embodiment works closely with the Spacetime module (see code at https://github.com/opencog/opencog/tree/master/opencog/spacetime) which will be retained in the new Embodiment design, though it may get modified somewhat as robotics work proceeds (since in some not-too-deep ways it's specialized for game characters rather than robots)

Embodiment, Perception and Action

Chapter 8 of Engineering General Intelligence, vol. 2, which is linked from here, discusses many general issues related to perception, action and their interaction with cognition in CogPrime.

Generally speaking these issues are intended to be handled in OpenCog by a combination of the new Embodiment module, with DeSTIN (see discussion on this page) for sense perception, and a DeSTIN-like hierarchical-deep-learning approach to action (also see links in DeSTIN section on this page).

Integration of the proposed (March 2015) new Embodiment design with deep learning perception and action will be a next stage of development; first the new Embodiment needs to be made to work and DeSTIN needs to be made to work better.

Language Comprehension Pipeline

The goal of OpenCog's language comprehension pipeline is to map (currently English) sentences into Atom-sets representing the syntactic and (most critically) semantic (and to some extent pragmatic) structure of the sentences.

Link Grammar

The foundation of OpenCog's current NL system is the link grammar and link parser. These were originally developed at Carnegie-Mellon University but development has long since been taken over by Linas Vepstas and others involved with the open source link grammar and OpenCog communities.

Link grammar is a separate project from OpenCog, though closely related. The link grammar page is

http://www.abisource.com/projects/link-grammar/

and there is a separate Link Grammar mailing list which can be found from that page.

The link parser currently deals with multiple languages with various degrees of coverage; OpenCog uses only the English version. If OpenCog language learning research is successful then this will enable "automatic" extension of link grammar to any language with sufficient digitized text.

RelEx

Main page: RelEx

RelEx uses a set of hand-coded rules to map the link parse of a sentence (produced by applying the link parser to the sentence) into "syntactico-semantic" relations that present a more abstract picture of the sentence's semantics. RelEx's output is somewhat like the output of a typical dependency parser, though with more richness in some aspects.

RelEx is Java code, and can be run separately from OpenCog, though it's been designed mainly for use as an input to OpenCog. The code is here:

https://github.com/opencog/relex

The RelEx page gives a pretty good overview, though it may not be up to date in every aspect.

RelEx2Logic

RelEx2Logic maps the output of RelEx into more abstract Atoms that are more suitable for some sorts of semantic inference. It does so using a complex set of hand-coded rules.
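
As a rough illustration (an invented example -- see the representation page linked below for the actual current mappings), a sentence like "Beijing is in China" comes out of the pipeline as something along the lines of:

EvaluationLink
  PredicateNode "in"
  ListLink
    ConceptNode "Beijing"
    ConceptNode "China"

which PLN and other reasoning processes can then operate on directly.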

The RelEx2Logic code is at:

https://github.com/opencog/opencog/tree/master/opencog/nlp/relex2logic

Details on how to use the RelEx2Logic pipeline can be found at the Running Relex2Logic with OpenCog wiki page.

The OpenCog representations for various English expressions are documented at the RelEx2Logic Representation wiki page.

RelEx2Logic Implementation Evolution

Currently (March 2015) RelEx2Logic code consists of two parts:

  • a special RelEx output generator, part of the RelEx Java codebase, which outputs sets of Atoms based on the RelEx output for a sentence
  • Scheme files that postprocess this output in various ways, producing final output suitable for abstract logical inference

There is an intention to replace the former part with Implications in the Atomspace, to be executed by the general-purpose RuleEngine (which did not exist when RelEx2Logic was first created). This may be done soon.

Word Sense Disambiguation

The currently used NLP comprehension pipeline, as described just above, doesn't do any word-sense disambiguation.

A variant of Mihalcea's word sense disambiguation algorithm was implemented in OpenCog: for documentation and code see

https://github.com/opencog/opencog/tree/master/opencog/nlp/wsd

This code hasn't been used for a while and might have some incompatibilities with the current NLP system.

WSD Vision

The Mihalcea algorithm tries to find the most likely word senses by exploring an interconnected network of the possible word senses that could be assigned to the words in a sentence. It does this by initially attaching every possible meaning to each word in a set of sentences, and creating a network where every sense is connected to every other possible word-sense. These links, however, are weighted: some word-senses, quite naturally, are more closely related than others. Initially, every word-sense is equally likely; the goal is to discover that some senses are more likely than others. The Mihalcea algorithm treats this as a linear problem, essentially finding the principal eigenvector of the Frobenius-Perron operator for this network; equivalently, solving the associated Markov chain, or applying the Google PageRank algorithm.
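
A minimal sketch of the core iteration (plain Scheme over an invented toy sense-graph; the real module operates over Atoms in the Atomspace):

(use-modules (srfi srfi-1))

; Toy sense graph: each word-sense maps to weighted neighbours.
(define graph
  '((bank/river   . ((water/liquid . 0.9) (bank/money . 0.1)))
    (bank/money   . ((deposit/put . 0.8) (bank/river . 0.1)))
    (water/liquid . ((bank/river . 0.9)))
    (deposit/put  . ((bank/money . 0.8)))))

(define damping 0.85)

(define (out-weight s) (fold + 0 (map cdr (assq-ref graph s))))

; One PageRank-style update: each sense's score is a damped sum of its
; neighbours' scores, weighted by how strongly the senses are related.
(define (update ranks)
  (map (lambda (entry)
         (cons (car entry)
               (+ (- 1 damping)
                  (* damping
                     (fold + 0
                           (map (lambda (nw)
                                  (* (/ (cdr nw) (out-weight (car nw)))
                                     (assq-ref ranks (car nw))))
                                (cdr entry)))))))
       graph))

; Iterate toward the stationary distribution; higher-ranked senses win.
(define ranks (map (lambda (e) (cons (car e) 1.0)) graph))
(do ((i 0 (+ i 1))) ((= i 30) ranks)
  (set! ranks (update ranks)))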

The vision for WSD is to replace the link-weights and the FP/Markov/PageRank mechanism by reasoning, driven by PLN. That is, rather than saying a-priori that one sense is similar to another, one should be able to deduce that, in the context of all the words in a sentence, some meanings are extremely unlikely.

Language Generation Pipeline

Language generation, in a current OpenCog context, is about taking sets of Atoms and transforming them into grammatical sentences (currently in English) expressing the ideas in the Atom-sets.

Microplanner

The microplanner, roughly, takes large chunks of Atoms and divides them into small, "roughly sentence-sized" chunks of Atoms to be sent to the surface realizer (SuReal) to be linearized into grammatical sentences. It also handles some related business, like inserting anaphora.

A conceptual design for the microplanner, written before the current (March 2015) microplanner was implemented, is on the Microplanner page.

Documentation of the currently implemented microplanner code, along with the code, is in github: opencog/nlp/microplanning.

Surface Realization

The surface realizer (SuReal) takes small sets of Atoms and turns them into grammatical sentences, according to link grammar. It deals purely with matters of syntax, at this stage.

The high-level concept of the algorithm used is described in Chapter 28 of Engineering General Intelligence, vol. 2, which is linked from here.

The code, along with a description of the particular algorithm currently in use, is in github: /opencog/nlp/sureal.
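
Usage is roughly as follows (a sketch; it assumes an OpenCog build with the nlp modules and link-grammar dictionaries loaded, and the exact module names and invocation may differ by version):

(use-modules (opencog) (opencog nlp) (opencog nlp sureal))

; Ask SuReal to realize a sentence from a small Atom-set;
; expected output is something like ("she sings").
(sureal
  (SetLink
    (EvaluationLink
      (PredicateNode "sings")
      (ListLink (ConceptNode "she")))))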

Language Learning

Main page: Language learning

As of March 2015, OpenCog's NL comprehension and generation pipelines rely on hand-coded rule-bases. This is not really in the spirit of OpenCog's underlying ambitions, and it came about for historical reasons (OpenCog's predecessor system Novamente Cognition Engine was partly developed with funding from a source that required rapid NL comprehension functionality, for which reliance on hand-coded rule-bases was the easiest approach at that time...).

R&D is underway aimed at replacing these hand-coded rule-bases with rules learned via unsupervised corpus analysis. The general ideas underlying this work are here:

http://arxiv.org/abs/1401.3372

and a process, coding and status overview is given on the language learning wiki page. The code is in github: /opencog/nlp/learn.

MOSES

Main page: MOSES

MOSES is an automated program learning system, integrating ideas from evolutionary programming, stochastic local search, ensemble learning and Estimation of Distribution Algorithms.

A brief, moderately technical summary of the algorithm is on the MOSES Algorithm page.

The original source for the ideas underlying MOSES is Moshe Looks' 2007 PhD thesis, although the MOSES code has drifted somewhat far from the original ideas in that thesis, in various respects. (Specifically: The core ideas of program tree normalization and demes presented in the thesis are still central in MOSES. Probabilistic modeling plays less of a role in current uses of MOSES, and GP crossover is now utilized.)

The MOSES code is in github.

See also this Guide to the terminology used in MOSES

MOSES is in current heavy use in a few practical commercial applications, mainly leveraging its capability to learn Boolean program trees representing patterns in data.

Some basic MOSES tutorials are here:

These tutorials cover usage of MOSES as a standalone program, run without use of the Atomspace or other OpenCog constructs.

Code for importing Boolean MOSES models (program trees) into the Atomspace is here:

https://github.com/opencog/agi-bio/tree/master/moses-scripts

MOSES makes effective use of multiple processors, and also supports distributed processing (on a network of multiple machines, each with multiple processors).

Bindings for MOSES in other Programming Languages

An R wrapper for MOSES exists, used mainly to support the application of MOSES to analyzing gene expression data. It is not optimally documented but is currently in heavy use:

https://github.com/mjsduncan/Rmoses

A Python wrapper for MOSES also exists, with tutorials at:

Key Points Where Improvement is Needed

In the past MOSES was used to learn small programs controlling virtual characters in game worlds. See this paper and also (for more details) Chapter 4 of Engineering General Intelligence, vol. 2. As of Jan 2015, this application doesn't work with the current Embodiment version; background work is going on aimed at re-enabling this.

As of Jan 2015, the MOSES code for learning program trees with continuous variable inputs seems not to be all that effective. It works for simple problems but doesn't effectively model dependencies between continuous variables. This may not be so hard to fix, but nobody has seriously tried, instead choosing to discretize continuous variables and use the (highly effective) Boolean program evolution functionality.

MOSES has never dealt effectively with general programmatic constructs beyond Boolean and arithmetic operators, loops and conditionals. Reduct (the part of MOSES that does program tree normalization) handles the fold operator, which is known to be universal among primitive recursive functions; one pending (large) task is to make MOSES work well for learning primitive recursive functions expressed using fold (along with logical and arithmetic operators). This may end up requiring fitness functions that look at "execution traces" (partially evaluated program trees) as well as program tree structure and input/output behavior.

OpenPsi (Motivational System)

OpenPsi is an implementation within OpenCog of significant aspects of the Psi model of human motivation and emotion, inspired significantly by aspects of Joscha Bach's MicroPsi AI system. (However, MicroPsi also contains other aspects not included in OpenCog, for example a specific neural-net-like knowledge representation.)

An overview of Psi, oriented toward software implementation and tuning in the robotics context, is Emotion modelling on the Hanson Robotics wiki (see also OpenPsi on Hanson Robotics). That page also contains many useful URLs linking into the relevant literature.

The OpenPsi (2010) wiki page describes the implementation as it was in 2007-2014. That implementation is obsolete, and the code was removed from github in 2015.

Proposed OpenPsi Refactoring

Moving the position of OpenPsi within the codebase

It's not logically necessary for OpenPsi to be associated with "avatar control", as is done in the current codebase for historical reasons. Actually, Psi represents a more general methodology for motivated action.

The number of MindAgents in use could be reduced -- there seems no good reason to have so many different "updating" agents instead of just one. One updating agent plus the action selection agent are probably enough.

Generalizing/extending the action selection process

The current (June 2015) version of OpenPsi's action selection agent has several significant limitations; there is a proposal for Improved Action Selection.

Pattern Matcher

Main page: Pattern matcher

The pattern matcher is a basic OpenCog tool, allowing the AtomSpace to be searched for specific patterns.

Basically, it lets you describe a pattern comprising a set of Atoms, some of which are concrete and some of which are left as variables. It then searches the Atomspace to find concrete Atoms that can fill in the slots left by the variables, in a way that is consistent with the concrete Atoms in the pattern.

The search process the pattern matcher does behind the scenes is controlled by a set of callbacks, which can be customized for particular purposes, as needed. Callbacks are customized in C++, but the pattern matcher is generally invoked (as of March 2015) via Scheme.

This is a query tool vaguely similar to, but significantly more powerful than, the Cypher query language in Neo4j. As well as being used for querying, though, the pattern matcher's primary use is as a tool used inside other AI algorithms, whenever they need to get stuff out of the AtomSpace in a complex way.

Due to its integration with ExecutionOutputLink and GroundedSchemaNode, the pattern matcher can basically be used as an interpreter for executable programs that are implemented in Atoms. In the New Embodiment Design (proposed March 2015) this is proposed to be used to enable complex behaviors for a robot or game character to be specified via Atoms and executed via pattern matching.
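
A small sketch of this execution capability (in Scheme; the say-hello procedure is invented for the example, while ExecutionOutputLink, GroundedSchemaNode and the "scm:" prefix are the standard mechanism):

(use-modules (opencog) (opencog exec))

; A Scheme procedure that an Atom can point at...
(define (say-hello atom)
  (display "hello, ")
  (display (cog-name atom))
  (newline)
  atom)

; ...and an Atomese "program" that calls it when executed.
(cog-execute!
  (ExecutionOutputLink
    (GroundedSchemaNode "scm: say-hello")
    (ListLink (ConceptNode "world"))))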

Pattern Miner

Pattern mining refers to the process of scanning the whole Atomspace, or large parts of it, in search of "interesting" patterns -- for instance, frequent or surprising ones.

Chapter 21 of Engineering General Intelligence, vol. 2, which is linked from here, describes some possible approaches to pattern mining, including evolutionary learning (e.g. MOSES) based pattern mining and greedy pattern mining.

OpenCog's Pattern Miner carries out greedy pattern mining in a relatively scalable way. It is designed to exploit multiple processors effectively, so as to handle even large Atomspaces living on a single machine.

One conceptual challenge raised here is what kinds of patterns the pattern miner should be looking for. Mining frequent patterns is the most straightforward thing to do, but most frequent patterns are very boring. Mining informationally surprising patterns is probably a generally better approach, but then there is a subtle issue regarding how to measure surprisingness.
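
As a toy illustration of one candidate measure (an invented formula for exposition, not necessarily what the Pattern Miner implements): compare a pattern's observed frequency with the frequency expected if its components occurred independently:

; Toy surprisingness, in bits: log-ratio of observed pattern
; probability to the product of its components' probabilities.
(define (surprisingness p-observed p-components)
  (/ (log (/ p-observed (apply * p-components))) (log 2)))

; A pattern seen 4x more often than independence predicts:
(surprisingness 0.04 '(0.1 0.1))  ; => ~2.0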

A distributed version of the pattern miner is needed, to handle data-stores so big that they can't fit in an Atomspace in RAM on a single machine. This is hoped to be implemented in the second half of 2015.

Fishgram

An earlier software module implementing greedy pattern mining in a less scalable way was Fishgram. An academic paper describing some simple experiments done with Fishgram is here:

http://agi-conference.org/2012/wp-content/uploads/2012/12/paper_39.pdf

The class of patterns recognizable by Fishgram, in principle, was not terribly different from the current Pattern Miner. However, Fishgram's internal data structures and algorithms were not scalable so it was only able to effectively deal with small Atomspaces.

Planner

OpenCog contains a planner, sometimes called OCPlanner, with code at

https://github.com/opencog/opencog/blob/master/opencog/embodiment/Control/OperationalAvatarController/OCPlanningAgent.h

which was written by Shujing Ke and integrates logical inference and navigation in a unique way.

The concepts underlying the planner are fairly general in nature, but the planner code is currently written in a way that's fairly specialized to the task of planning actions involving moving around in a game world.

An example of what the planner does: if a game agent needs to get from point A to point B, but there is no navigable path from A to B, then it may break a hole through a wall, or build a bridge with blocks, to get from A to B. This combines logical inference, manipulation and navigation seamlessly in a single planning process.

A presentation describing the basic ideas underlying the planner is at Media:OCPlanner.pdf

The OCPlanner has been integrated with OpenPsi (so as to plan actions that are chosen as important by OpenPsi) and with the (currently, as of March 2015, deprecated) Embodiment module.

It would be desirable to reimplement the OCPlanner, retaining the core logic but replacing the custom rule engine with use of the same forward and backward chainers that PLN uses. As well as simplifying the code, this would enable ECAN attention allocation to help control the planning process, and allow PLN (and other) inference to be done in tight collaboration with the planning process.

PLN (Probabilistic Logic Networks)

PLN is a system for uncertain logical reasoning, integrating quite general (predicate and term) logic with probabilistic and fuzzy management of uncertainty.

The math and concepts underlying PLN are described in several books, which are listed on the Background Publications page: Probabilistic Logic Networks (2008), Real World Reasoning (2011), and parts of Engineering General Intelligence vol.2.
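
To give the flavor of the math, here is the standard independence-based PLN deduction strength formula in executable form (a sketch; the full rule also propagates confidences and checks consistency conditions):

; Deduction: from (Inheritance A B) and (Inheritance B C),
; infer (Inheritance A C) with strength
;   sAC = sAB*sBC + (1 - sAB) * (sC - sB*sBC) / (1 - sB)
; where sB and sC are the term probabilities of B and C.
(define (deduction-strength sAB sBC sB sC)
  (+ (* sAB sBC)
     (/ (* (- 1 sAB) (- sC (* sB sBC)))
        (- 1 sB))))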

The PLN code uses the OpenCog "unified rule engine" to perform forward and backward chaining and can be found here:

https://github.com/opencog/opencog/tree/master/opencog/reasoning/pln

Some documentation on the PLN implementation is in the RuleEngine folder on GitHub:

https://github.com/opencog/opencog/tree/master/opencog/reasoning/RuleEngine

Current research (March 2015) includes using ECAN to guide/control PLN inference, and applying PLN to generalize MOSES models -- both simple examples of the "cognitive synergy" principle underlying OpenCog.

There have been several prior implementations of PLN as well.

Some speculative thought has gone into solving PLN inference problems using Weighted Partial MaxSAT; see Mapping_Atomspace_to_MaxSAT.

Robotics

OpenCog has been used to help control Nao humanoid robots in prototype experiments done in Xiamen in 2009, but that code has long been abandoned. (For instance, in those experiments, OpenCog represented the robot's environment using the 2DSpaceMap structure (since replaced by the 3DSpaceMap), and helped the robot to navigate to get to a goal.)

Work is underway aimed at using OpenCog to control a variety of robots (initially humanoid, but ultimately any robots). As of Feb 2015, this work is still focused on making a robust, ROS-based "cognitive robot control" framework handling perception, decision, control and movement in an effectively integrated way (conceptually and software-wise). This framework is being built with OpenCog integration in mind, and should make OpenCog-based robot control possible in a robust, general, and relatively straightforward way. But OpenCog hasn't been integrated with the framework yet. The next major step is wrapping OpenCog in a ROS node, and figuring out exactly what messages this node should send and receive.

Most of this work is being sponsored by Hanson Robotics; some has been done at Hong Kong Poly U as well. For the relevant codebases see:

https://github.com/hansonrobotics

Conceptual (and some technical) documentation related to this robotics software infrastructure can be found at the Hanson robotics wiki.

Note also that making OpenCog-based robot control work really well will require a number of changes to the Embodiment module.

Rule Engine

OpenCog contains a "unified rule engine", which can be used for execution and chaining of any rules that are expressed as Atoms. It comes along with forward and backward chainers. This rule engine is currently (March 2015) used to implement the PLN logic system, but is intended for much more general utilization.

A design document about this rule engine is [[Unified_rule_engine | here]]; however, this was written before the rule engine was implemented, so it does not correspond exactly to the code. Some more current documentation can be found along with the code, here:

/opencog/reasoning/RuleEngine

/opencog/reasoning/RuleEngine/rule-engine-src

In general, if you want to write code for OpenCog that involves execution or chaining of software rules at some level, please strongly consider using our general-purpose unified RuleEngine rather than embedding a specialized rule engine in your code!