Hands On With OpenCog
This is an outline for an in-depth, hands-on OpenCog tutorial. It is meant to complement and extend a fairly broad set of AtomSpace examples and demos located in the AtomSpace GitHub repo. If you want to learn OpenCog, starting with those examples is a very good choice. It is assumed that you have at least a basic understanding of the Unix command line, with some background in scripting-language programming.
You may also wish to check the OpenCog By Way of Wikipedia page to get some conceptual grounding.
This wiki page, and the related pages, are a WORK IN PROGRESS. Many of the lessons outlined below have not yet been created. We hope they will be. Most of the material referenced in the outline already exists, but it's scattered across other wiki pages, GitHub README files and examples. The goal of the Hands-On tutorials is to take all that material and present it in a systematic way.
If you actually create a tutorial wiki page corresponding to one of the entries on this page, please link to it from this page (treating this page as a "table of contents") and also add the page to the Hands On With OpenCog category. See also the Curation page on how to categorize wiki pages.
Lesson : Building OpenCog (Hands On)
How to download, build and install OpenCog.
Scheme is the primary scripting language used in OpenCog. It's pretty straightforward; using OpenCog and the AtomSpace does not require strong knowledge of Scheme. It can be used quite casually. But having at least a basic acquaintance is very important.
Lesson : Manipulating Atoms in Python
Studying Python should be delayed until much later. Although most programmers know Python, it is not the best way to work with OpenCog: it's awkward, and presents many difficulties. Skip on first reading.
Lesson : Manipulating Atoms in C++
Accessing the AtomSpace and OpenCog in C++ is an advanced topic, and should be skipped on the first, second and third reading. System programmers creating new Atom types will need to know this stuff.
Lesson : Manipulating Atoms in Haskell
Accessing the AtomSpace and OpenCog in Haskell is an advanced topic, and should be skipped on the first, second and third reading.
Searching, Querying, Inferencing, Chaining
Deep in its heart, OpenCog holds the AtomSpace, which is a fancy in-RAM knowledge base. It's designed to hold general "knowledge" in the form of a graph. That is, it is a graph database. It has far more features and functions than other graph databases, and so there's a lot to it, but thinking of it that way is a good way to start.
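To make the graph-database picture concrete, here is a minimal, purely illustrative Python sketch of an AtomSpace-like store. All of the class and method names here are invented for this example; this is not the real OpenCog API.

```python
# Toy sketch of an AtomSpace-like in-RAM graph store.
# This is NOT the real OpenCog API; all names here are illustrative only.

class Atom:
    def __init__(self, atom_type, name=None, outgoing=()):
        self.type = atom_type            # e.g. "ConceptNode", "InheritanceLink"
        self.name = name                 # nodes have names
        self.outgoing = tuple(outgoing)  # links point at other atoms

    def __repr__(self):
        if self.name is not None:
            return f'({self.type} "{self.name}")'
        return f"({self.type} {' '.join(map(repr, self.outgoing))})"

class ToyAtomSpace:
    def __init__(self):
        self.atoms = []

    def add(self, atom_type, name=None, outgoing=()):
        # Deduplicate: the real AtomSpace also guarantees atom uniqueness.
        for a in self.atoms:
            if a.type == atom_type and a.name == name and a.outgoing == tuple(outgoing):
                return a
        atom = Atom(atom_type, name, outgoing)
        self.atoms.append(atom)
        return atom

space = ToyAtomSpace()
cat = space.add("ConceptNode", "cat")
animal = space.add("ConceptNode", "animal")
link = space.add("InheritanceLink", outgoing=(cat, animal))
print(link)  # (InheritanceLink (ConceptNode "cat") (ConceptNode "animal"))
```

The key structural points carried over from the real thing: atoms are typed, links contain other atoms (so the store is really a hypergraph), and adding an atom twice yields the same atom.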
Lesson : The Pattern Matcher
Every good database requires a query language to search it and retrieve answers. Every good query language has a huge number of bells, whistles and features. The AtomSpace is no exception. The engine of the query language is the pattern matcher.
The above is a short tutorial covering a smattering of pattern matcher basics. A complete set of AtomSpace examples and demos, going into much greater detail than here, can be found in the GitHub AtomSpace examples folder. Please, please, please! go through those; they will give you a very good grounding.
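To give a feel for what a pattern-matcher query does, here is a toy Python sketch. The tuple encoding, the `$`-prefixed variables, and the function names are all invented for illustration; this is not the real AtomSpace query API.

```python
# Toy sketch of pattern-matcher-style querying; NOT the real OpenCog API.
# Atoms are nested tuples; strings beginning with "$" are variables.

atomspace = [
    ("Inheritance", "cat", "animal"),
    ("Inheritance", "dog", "animal"),
    ("Inheritance", "animal", "living-thing"),
]

def unify(pattern, atom, bindings):
    """Try to match one pattern against one atom, extending bindings."""
    if isinstance(pattern, str) and pattern.startswith("$"):
        if pattern in bindings:
            return bindings if bindings[pattern] == atom else None
        return {**bindings, pattern: atom}
    if isinstance(pattern, tuple) and isinstance(atom, tuple) and len(pattern) == len(atom):
        for p, a in zip(pattern, atom):
            bindings = unify(p, a, bindings)
            if bindings is None:
                return None
        return bindings
    return bindings if pattern == atom else None

def query(pattern):
    """Return every variable binding for which the pattern matches a stored atom."""
    results = []
    for atom in atomspace:
        b = unify(pattern, atom, {})
        if b is not None:
            results.append(b)
    return results

print(query(("Inheritance", "$x", "animal")))
# [{'$x': 'cat'}, {'$x': 'dog'}]
```

The real pattern matcher does vastly more (nested graphs, type constraints, grounding of multiple interconnected clauses), but the core operation, "find all groundings of variables that make the pattern appear in the store", is the same.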
Lesson : Adding New Atom Types
Many important atoms are both declarative and active: they can be used to declare knowledge, but also do things, when called upon. The prototypical example of this is the GreaterThanLink, which can be used to declare that one thing is greater than another. The axioms describing order are well-known: one can perform reasoning and inference with GreaterThanLink, because one knows what it means. It has a semantics to it. It's well-defined.
Yet it is impossible to store all possible number pairs in the AtomSpace. So if you need to know whether some particular number is greater than some other number, you have to compute it algorithmically. So GreaterThanLink does that too: under the covers, in hard-coded C++ code.
And so it goes. If you have some kind of data, and it describes knowledge, but the only way to describe it is with an algorithm, then you must create a new atom type for it, and you must write C++ code to implement it. (Sure, call out to Python if you wish. Or ROS. Or your local GPU, cell-phone tower or wifi hotspot.) Your new atom type can now be reasoned over, because it has a semantics, but it can also do useful computations, when needed.
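The idea of an atom type backed by a hard-coded procedure can be sketched in a few lines of Python. Everything here (the registry, the decorator, the names) is invented for illustration; it is not the actual C++ atom-type mechanism.

```python
# Toy sketch: an atom type that both declares and computes.
# NOT the real OpenCog mechanism; names are illustrative only.
# A GreaterThan "link" over two numbers is evaluated algorithmically in
# host code, rather than storing an explicit fact for every number pair.

EVALUATORS = {}

def register(atom_type):
    """Associate an atom type with the procedure that evaluates it."""
    def wrap(fn):
        EVALUATORS[atom_type] = fn
        return fn
    return wrap

@register("GreaterThan")
def greater_than(a, b):
    return a > b          # "under the covers": plain host-language code

def evaluate(atom):
    """Evaluate an active atom by dispatching on its type."""
    atom_type, *args = atom
    return EVALUATORS[atom_type](*args)

print(evaluate(("GreaterThan", 5, 3)))   # True
print(evaluate(("GreaterThan", 2, 7)))   # False
```

The declarative side is untouched: `("GreaterThan", 5, 3)` is still just data that a reasoner can inspect and manipulate; the procedural side only kicks in when the atom is actually evaluated.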
Lesson : The OpenCog Rule Engine
The most important lesson to be learned from the query language is that the queries themselves are also graphs, and can be stored in the AtomSpace. This allows one to perform meta-queries: queries of queries, recursively. This allows many nifty new things to be done. This includes building inference engines, rule engines, reasoning systems, theorem provers, path planners and constraint systems.
Many of these can be viewed as the process of assembling a collection of different queries (now called "rules") in such a way that the rules fit together into a coherent whole, the way one might in natural deduction or in a parse. Or a jig-saw puzzle.
The "universal rule engine" or URE is a backward/forward chainer that assembles chains of rules, so that they span a path between premises and conclusions.
The rule engine is built on a query trick: Given an "answer", one can search for all queries that would return that answer. This is done by the DualLink, and is the central powerhouse behind the rule engine.
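The DualLink trick, searching for queries given an answer rather than answers given a query, can be illustrated with a toy inverted search in Python. The rule names and the tuple encoding are made up for this sketch; this is not the real DualLink API.

```python
# Toy sketch of DualLink-style inverted search; NOT the real OpenCog API.
# Instead of matching a pattern against the data, take one ground "answer"
# and find every stored rule pattern that would have matched it.

rules = {
    "mortality-rule": ("Inheritance", "$x", "human"),
    "habitat-rule":   ("LivesIn", "$x", "$place"),
}

def matches(pattern, atom):
    """True if the (flat) pattern would match the ground atom."""
    if len(pattern) != len(atom):
        return False
    return all(p.startswith("$") or p == a for p, a in zip(pattern, atom))

def dual_query(ground_atom):
    """Given an answer, return the names of rules whose pattern fits it."""
    return [name for name, pat in rules.items() if matches(pat, ground_atom)]

print(dual_query(("Inheritance", "socrates", "human")))  # ['mortality-rule']
```

This is exactly the step a chainer needs: given some atom it has just produced or wants to prove, it must discover which rules are applicable to it.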
Lesson : Hands On with Attention Allocation
As a general rule, doing any kind of reasoning or inferencing or chaining promptly leads to a combinatorial explosion of possibilities. One needs to have a strategy for narrowing down the choices and possibilities, homing in on something that seems promising. The current system for doing this in OpenCog is called attention allocation, or ECAN for short.
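The core idea, each atom carries an importance value and processing is restricted to the most important ones, can be sketched in a few lines of Python. The numbers, names and threshold here are invented for illustration; this is not the real ECAN code, which also spreads importance along links and decays it over time.

```python
# Toy sketch of attention-allocation-style pruning; NOT the real ECAN code.
# Each atom carries an importance value; search only expands atoms inside
# the "attentional focus" (importance above a threshold), taming blowup.

importance = {
    "cat": 0.9, "dog": 0.7, "rock": 0.1, "teapot": 0.05, "animal": 0.8,
}

FOCUS_THRESHOLD = 0.5

def attentional_focus():
    """The set of atoms currently worth spending compute on."""
    return {atom for atom, sti in importance.items() if sti >= FOCUS_THRESHOLD}

def stimulate(atom, amount):
    """Boost an atom that proved useful, pulling it into focus."""
    importance[atom] = min(1.0, importance.get(atom, 0.0) + amount)

print(sorted(attentional_focus()))   # ['animal', 'cat', 'dog']
stimulate("rock", 0.6)
print(sorted(attentional_focus()))   # ['animal', 'cat', 'dog', 'rock']
```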
Lesson : PLN by Hand
Probabilistic Logic Networks (PLN) is a specific set of rules of inference describing uncertain, probabilistic reasoning. It is currently the primary means of performing reasoning in OpenCog. It's a collection of rules that run on top of the rule engine.
This tutorial describes how to represent knowledge in such a way that PLN can access it, and perform deduction and inferencing on it. (Of course, you can represent your data (your collection of knowledge, your semantic network) in other ways; but if you do, PLN won't be able to understand it.)
Lesson : PLN Forward Chaining
The rule engine currently has only two modes: forward and backward. This is one mode.
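The shape of forward chaining can be conveyed with a toy Python sketch: start from known facts and apply a rule repeatedly until nothing new appears. This is not the real PLN/URE implementation (which also handles truth values, rule selection and attention); the single transitivity rule and the encoding are invented for illustration.

```python
# Toy sketch of forward chaining; NOT the real PLN/URE implementation.
# Starting from known facts, repeatedly apply a transitivity rule until
# no new conclusions appear.

facts = {
    ("Inheritance", "socrates", "human"),
    ("Inheritance", "human", "mortal"),
}

def forward_chain(facts):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        # Transitivity rule: (A -> B) and (B -> C) implies (A -> C).
        for (_, a, b) in list(facts):
            for (_, b2, c) in list(facts):
                if b == b2 and ("Inheritance", a, c) not in facts:
                    facts.add(("Inheritance", a, c))
                    changed = True
    return facts

derived = forward_chain(facts)
print(("Inheritance", "socrates", "mortal") in derived)   # True
```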
Lesson : PLN Backward Chaining
This is the other mode.
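Backward chaining runs the same rules in the other direction: start from a goal and search for facts or rule applications that would prove it. Again, this is a toy Python sketch with an invented encoding and one transitivity rule, not the real PLN/URE code.

```python
# Toy sketch of backward chaining; NOT the real PLN/URE implementation.
# Start from a goal and work backwards, looking for facts or rule
# applications that would prove it.

facts = {
    ("Inheritance", "socrates", "human"),
    ("Inheritance", "human", "mortal"),
}

def prove(goal, depth=3):
    """Can the goal be proven from the facts plus the transitivity rule?"""
    if goal in facts:
        return True
    if depth == 0:       # crude combinatorial-explosion guard
        return False
    _, a, c = goal
    # Transitivity, run backwards: to prove (A -> C), find some B with
    # (A -> B) a known fact, then recursively prove (B -> C).
    for (_, a2, b) in facts:
        if a2 == a and prove(("Inheritance", b, c), depth - 1):
            return True
    return False

print(prove(("Inheritance", "socrates", "mortal")))   # True
print(prove(("Inheritance", "socrates", "teapot")))   # False
```

Note the depth cutoff: even in this toy, an unguided backward search needs something to stop it from exploring forever, which is precisely where attention allocation earns its keep in the real system.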
Lesson : Hands On with Action Selection
OpenCog has two rule engines: the second one is called OpenPsi. It works on very different principles than the URE. The way the rules are written is quite different, and the way it moves from one rule to the next is quite different. It does share ECAN with the URE as a general guiding system, however. This tutorial goes into that.
Lesson : Hands On with Emotion Modelling
Calling them "emotions" is perhaps a bit mis-named: this is not a good model of human emotions. It is, however, a mechanism by which not only attention can be focused, but also a means of assigning priorities and urgencies, a means of setting goals so that they result in desires to be fulfilled. A means of modulating and balancing different objectives. In short: it is another way of crawling over a collection of individual rules, and picking the next ones to run.
Interacting with External Stuff
ROS is the Robot Operating System. OpenCog is regularly used to control robots. This tutorial assumes you are already familiar with ROS.
Lesson : The OpenCog Visualizer
- Also there is another visualizer, the 'graph description language' - if you're keen, here it is on GitHub
Lesson : Using a Backing Store
The AtomSpace is an in-RAM database. Sometimes you may want to write some of it out to disk, and save it for later. The most robust way to do this is to attach to a commonly available, industry-standard database. They're great for managing data, distributing it across clusters, and doing cloud-type things. The AtomSpace does not try to reinvent this wheel.
The AtomSpace has been designed with a generic "backend" layer, so that data can be saved to any database. A number of these have been tried. The one that works best is PostgreSQL. Some have not worked out very well: we tried some Java-based graph DBs, but the network overhead is a real killer, and they are much too slow.
The most promising future backend is probably Apache Ignite; that's mostly because it has an impressive set of features, and it seems likely that it will interface well with C++ code. Anyway, that does not exist yet.
Lesson : Importing External Knowledge Sources
This requires a basic understanding of Using a Backing Store in OpenCog.
Lesson : Using the REST API
The REST API is hopelessly old, stale and out-of-date. It is also slow. Avoid it like the plague.
Lesson : Using MOSES via the R Wrapper
- Give some simple examples here ... Mike Duncan can make this section!
Lesson : PLN Reasoning on MOSES Output
Natural Language Processing (NLP)
There's a lot to NLP. The tutorials below scratch the surface.
Lesson : The NLP Comprehension Pipeline
Lesson : NL Generation Pipeline
Warning: the pattern miner tutorial needs to be updated for the new pattern miner. Meanwhile see  which contains references to examples at the end.
Lesson : Using The Pattern Miner
Also see the Tutorial of running Pattern Miner in Opencog.
Lesson : Pattern Miner Scheme Functions
Lesson : Hands On with the Time Space Map
This component is in the process of being redesigned; the work is not complete, and so the tutorial won't be either.