Distributed AtomSpace

From OpenCog

This page sketches how distributed operation currently works in the OpenCog AtomSpace. A general review of concepts and definitions can be found on the Distributed AtomSpace Architecture page.

There are several points to keep in mind:

  • Distribution is possible because every atom is assigned a unique 64-bit integer UUID, which identifies the atom no matter what server it may reside on. To avoid UUID collisions, blocks of unused UUIDs may be reserved via a malloc-like mechanism.
  • Atoms are immutable: the name of an atom cannot be changed, nor can its outgoing set be changed. Only the TruthValue and AttentionValue can be changed. Thus, when one atom exists in two or more Atomspaces, it is, in a sense, the "same" atom, but with different TVs and AVs. Hence, the act of sharing atoms is really the act of sharing TVs and AVs, and reconciling their values.
  • The sharing and reconciliation strategy is left up to the user. The Atomspace itself does not know enough to "automatically" distribute an atom. For example: an atom may be some short-term temporary work result that should not be shared. If there are millions of these being created and destroyed all the time, it would be dumb to try to automatically share these. Another example: since the only difference between a local and a remote atom is the truth value, how should these be reconciled? Should the remote version always "win"? The local version? Should they be somehow merged? For these reasons, the AtomSpace only provides tools for sharing, but it is up to the user themselves to do the actual sharing.
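The reconciliation question raised above can be made concrete with a small sketch. The class and merge functions below are hypothetical illustrations of three possible policies (local wins, remote wins, or a confidence-weighted merge); they are not the actual AtomSpace TruthValue API:

```python
# Hypothetical sketch of TV reconciliation policies; not the real AtomSpace API.
from dataclasses import dataclass

@dataclass(frozen=True)
class SimpleTruthValue:
    strength: float    # probability-like estimate, in [0, 1]
    confidence: float  # how much evidence backs the estimate, in [0, 1]

def local_wins(local, remote):
    """Policy 1: the local TV always overrides the remote one."""
    return local

def remote_wins(local, remote):
    """Policy 2: the remote TV always overrides the local one."""
    return remote

def confidence_weighted_merge(local, remote):
    """Policy 3: blend the two strengths, weighted by their confidences."""
    total = local.confidence + remote.confidence
    if total == 0.0:
        return SimpleTruthValue(0.0, 0.0)
    strength = (local.strength * local.confidence +
                remote.strength * remote.confidence) / total
    return SimpleTruthValue(strength, max(local.confidence, remote.confidence))

# A weakly-held local estimate meets a better-supported remote one.
local_tv  = SimpleTruthValue(0.9, 0.2)
remote_tv = SimpleTruthValue(0.5, 0.6)
merged = confidence_weighted_merge(local_tv, remote_tv)
```

Which policy is right depends entirely on the application, which is exactly why the AtomSpace leaves this choice to the user.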

The current design assumes a single, centralized server that has omniscient knowledge of all atoms that it has been given. A peer-to-peer design should also be possible, but it has not yet been explored.
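The malloc-like UUID reservation mentioned earlier fits naturally into this centralized design: the central server hands out contiguous blocks of unused UUIDs, and each client then mints UUIDs locally from its block without further round-trips. A minimal sketch, with hypothetical names and a toy block size:

```python
# Hypothetical sketch of malloc-like UUID block reservation; not the actual
# AtomSpace implementation.
import threading

class CentralUuidAllocator:
    """Central server: hands out non-overlapping blocks of unused UUIDs."""
    def __init__(self, block_size=1 << 20):
        self._next = 1          # UUID 0 reserved as "invalid"
        self._block_size = block_size
        self._lock = threading.Lock()

    def reserve_block(self):
        """Reserve one block; returns (first_uuid, last_uuid), inclusive."""
        with self._lock:
            first = self._next
            self._next += self._block_size
            return first, self._next - 1

class LocalUuidPool:
    """Client side: draws UUIDs from its reserved block, refilling as needed."""
    def __init__(self, allocator):
        self._allocator = allocator
        self._cursor, self._last = allocator.reserve_block()

    def next_uuid(self):
        if self._cursor > self._last:   # block exhausted; reserve another
            self._cursor, self._last = self._allocator.reserve_block()
        uuid = self._cursor
        self._cursor += 1
        return uuid

# Two servers draw from the same allocator; no collisions are possible.
allocator = CentralUuidAllocator(block_size=4)
server_a = LocalUuidPool(allocator)
server_b = LocalUuidPool(allocator)
ids = [server_a.next_uuid() for _ in range(6)] + \
      [server_b.next_uuid() for _ in range(3)]
```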

Other, more adventurous APIs might be possible, such as a publish-subscribe API, where a given AtomSpace can be notified of any changes to atoms matching some given pattern. Note that it is very easy to design and build something that works, but is so horribly slow that it is unusable. Simplicity is keenly important for a distributed algorithm.

A hypothetical, conceptual design for a different sort of Distributed Atomspace is described here: File:Distributed AtomSpace Design Sketch v6.pdf


Currently, atoms are located in one of two locations:

  1. Local (atoms are in CPU-attached local RAM)
  2. Remote (atoms are in a remote, network-attached location)

When working directly with atoms in C++, python or scheme, they are always, by definition, local. To work with an atom that exists on a remote server, it must first be pulled into local storage. This is done with the scheme methods fetch-incoming-set, fetch-atom, store-atom, load-atoms-of-type and barrier (corresponding to the C++ methods fetchIncomingSet(), fetchAtom(), storeAtom(), loadType() and barrier()).


The store-atom call pushes the atom out to a centralized server.

store-atom in the Postgres implementation is non-blocking, and, by default, uses four asynchronous write-back queues.
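A rough sketch of how such non-blocking, queue-based write-back can work is below. The class and names are illustrative only, not the actual Postgres backend code, and the "database write" is just insertion into a set:

```python
# Hypothetical sketch of asynchronous write-back queues, loosely modeled on
# the non-blocking store-atom path; not the real Postgres backend code.
import queue
import threading

class AsyncWriteBack:
    """Fans atom writes out to N background write-back queues."""
    def __init__(self, num_queues=4):
        self.stored = set()
        self._lock = threading.Lock()
        self._queues = [queue.Queue() for _ in range(num_queues)]
        for q in self._queues:
            threading.Thread(target=self._drain, args=(q,), daemon=True).start()

    def _drain(self, q):
        while True:
            atom = q.get()
            with self._lock:
                self.stored.add(atom)   # stand-in for the real SQL write
            q.task_done()

    def store_atom(self, atom):
        """Non-blocking: hash the atom onto one of the queues and return."""
        self._queues[hash(atom) % len(self._queues)].put(atom)

    def barrier(self):
        """Block until every queued write has been flushed."""
        for q in self._queues:
            q.join()

writer = AsyncWriteBack()
for i in range(100):
    writer.store_atom(("ConceptNode", f"atom-{i}"))
writer.barrier()   # all 100 writes are now guaranteed durable
```

The caller returns from store_atom() immediately; only the barrier() call waits for the queues to drain.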


fetch-atom can only be used if you already know what the atom is, or know its UUID. Its practical utility is therefore only to obtain the TruthValue from the remote location; it is perhaps mis-named, and should be called fetch-tv or something like that. In practice, fetch-atom is not very useful, whereas fetch-incoming-set is of central importance and utility.


The fetch-incoming-set call will recursively fetch all of the atoms in which this atom appears. Thus, if remote servers have computed a graph that makes use of this atom, all of those graphs will be pulled into local storage.
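The recursion can be sketched as follows. Here the remote store is approximated by a plain dict mapping each atom to the links that contain it, and the atom names are made up for illustration:

```python
# Hypothetical sketch of a recursive incoming-set fetch; the "remote server"
# is just a dict mapping each atom to the links that contain it.
def fetch_incoming_set(atom, remote_incoming, local_atoms, recursive=True):
    """Pull every link containing `atom` (and, recursively, every link
    containing those links) from the remote table into the local set."""
    for link in remote_incoming.get(atom, ()):
        if link in local_atoms:
            continue              # already fetched; also prevents loops
        local_atoms.add(link)
        if recursive:
            fetch_incoming_set(link, remote_incoming, local_atoms, recursive)

# Remote graph: a link contains "word", and a larger link contains that link.
remote = {
    "word": ["(Eval foo word)"],
    "(Eval foo word)": ["(List (Eval foo word))"],
}
local = {"word"}
fetch_incoming_set("word", remote, local)
# The entire graph built on top of "word" is now local.
```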


The barrier is a synchronization primitive: it can be used to guarantee that all of the current asynchronous store queues have been drained before any fetches are performed.


The load-atoms-of-type call will instantiate all atoms of a given type in the local server. This is useful for bulk transfer of atoms. To be effective, the type must be judiciously chosen: for example, loading all WordNodes might be reasonable, as there are only some 100K words in the English language. Loading all EvaluationLinks is probably unreasonable, as there are surely too many of them to handle.
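Conceptually, the bulk transfer is just a type-filtered copy from the remote store into the local one. A trivial sketch, with made-up atom data:

```python
# Hypothetical sketch of bulk loading by type; the "remote server" is just
# a list of (type, name) pairs with made-up contents.
def load_atoms_of_type(atom_type, remote_atoms, local_atoms):
    """Instantiate every remote atom of the given type locally."""
    for atom in remote_atoms:
        if atom[0] == atom_type:
            local_atoms.add(atom)

remote = [
    ("WordNode", "cat"),
    ("WordNode", "dog"),
    ("EvaluationLink", "eval-1"),
]
local = set()
load_atoms_of_type("WordNode", remote, local)
# Only the WordNodes were pulled in; the EvaluationLink stayed remote.
```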

Database Backends

Currently, there exists a stable and functional backend to PostgreSQL. This backend solution probably scales to a few dozen opencog servers, although this limit has never been pushed, so the true limit is unknown. Actual performance and scalability will depend strongly on the workload. For example, NLP processing will probably scale well. Subgraph mining might scale poorly, because the subgraph algo was never designed to be scalable. The pattern matcher currently searches only the local atomspace; it could be easily extended to search multiple atomspaces (by overriding the default getIncomingSet() callback).
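The callback-override idea can be sketched as follows. These Python classes are hypothetical stand-ins for the C++ pattern-matcher callback classes; the point is only that replacing the incoming-set callback with one that consults several atomspaces widens the search without touching the rest of the matcher:

```python
# Hypothetical sketch of overriding the incoming-set callback so a search
# spans several atomspaces; stand-ins for the C++ callback classes.
class LocalSearch:
    def __init__(self, incoming):
        self.incoming = incoming   # atom -> list of containing links

    def get_incoming_set(self, atom):
        """Default callback: consult only the local atomspace."""
        return list(self.incoming.get(atom, []))

class FederatedSearch(LocalSearch):
    def __init__(self, incoming, remotes):
        super().__init__(incoming)
        self.remotes = remotes     # other atomspaces to consult as well

    def get_incoming_set(self, atom):
        """Overridden callback: union of local and remote incoming sets."""
        result = super().get_incoming_set(atom)
        for remote in self.remotes:
            result.extend(remote.get_incoming_set(atom))
        return result

# One local atomspace, one remote; the search sees links from both.
searcher = FederatedSearch({"cat": ["(Inherit cat animal)"]},
                           [LocalSearch({"cat": ["(Eval likes cat)"]})])
links = searcher.get_incoming_set("cat")
```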

Other popular cloud SQL databases include MariaDB. Popular cloud NoSQL databases include Hypertable, HBase and Cassandra. Popular graph databases include Neo4J (see Neo4j Backing Store).

An excellent discussion of the issues involved in proper database design, and the relationship between SQL and NoSQL databases, is given in the 2011 paper A co-Relational Model of Data for Large Shared Data Banks, by Erik Meijer and Gavin Bierman. This paper suggests a design point to ponder: should a backend store the outgoing set of an atom (and compute the incoming set on the fly), or should it store the incoming set of an atom (and compute the outgoing set on the fly)?
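The two storage choices are duals: each is the other's index inversion. A small sketch of the first choice (store outgoing sets, derive incoming sets on demand), with made-up data:

```python
# Hypothetical sketch of the outgoing/incoming duality: a store that keeps
# only outgoing sets can derive incoming sets by inverting the mapping.
def incoming_from_outgoing(outgoing):
    """Given link -> outgoing set, invert it to atom -> incoming set."""
    incoming = {}
    for link, members in outgoing.items():
        for atom in members:
            incoming.setdefault(atom, set()).add(link)
    return incoming

# The store keeps only outgoing sets (the "key -> value" direction)...
outgoing = {
    "(Eval likes cat)": ["likes", "cat"],
    "(Eval likes dog)": ["likes", "dog"],
}
# ...and computes incoming sets (the "value -> key" direction) on the fly.
incoming = incoming_from_outgoing(outgoing)
```

The opposite choice, storing incoming sets and inverting to recover outgoing sets, is symmetric; the trade-off is which direction of traversal the backend makes cheap.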

A prototype backend for Hypertable was created for GSOC 2009. It never worked correctly, due to a design flaw/misunderstanding -- the incoming set must not be stored as a table; it must be computed from the outgoing set. The flaw should not be hard to fix. The code is here.

At this time, it is not clear what advantage, if any, these other databases might offer to OpenCog. This is because OpenCog does not currently have any algorithms or code that stretch the boundaries of distributed operation. So far, almost no one uses distributed OpenCog.

A proposal exists for a Neo4j Backing Store.