Distributed AtomSpace Architecture Redux

From OpenCog

Obsolete documentation! The current architecture is described on the Distributed AtomSpace page. The ideas presented below are based on several incorrect or invalid assumptions, and the page misrepresents the current architecture: de facto, the current architecture resembles the "desired" architecture. Thus, this page offers an interesting sketch, but is not representative of the actual challenges facing a distributed architecture!

The initial version of this document was written by Cassio Pennachin, October 2011 ... after discussions with Ben Goertzel, Joel Pitt and the OpenCog HK team (Zhenhua Cai, Deheng Huang, Shujing Ke, Jared Wigmore)....

NOTE (late May 2012): Some of the ideas on this page are extended on the page DistributedAtomspace. As of the present time, the ZeroMQ messaging aspects of the design proposed on this page are being implemented by Erwin Joost. The Atomspace refactoring is under discussion, and the specific suggestion of Redis made in the present page is being reconsidered, along the lines discussed in DistributedAtomspace. END OF NOTE.

NOTE User:Linas has some very strong doubts as to whether the architecture below even makes sense. The reservations are based on practical experience with the technologies in question: the ZeroMQ interfaces are literally about 1000x (yes, one thousand!) slower than the C++ interfaces. NoSQL databases such as Redis have a very poor performance profile for storing very small objects, such as truth values, which are 8 or 12 bytes in size: these databases are optimized for much larger objects (web pages, photos, mp3's, etc.) and behave disastrously poorly on AtomSpace-style workloads. This was already explored by GSoC students: Atom store/fetch performance was measured in the hundreds-of-atoms-per-second range, which is absurd, given what PLN would need! Thus, although the design below might seem beguiling, it is in fact completely devoid of any performance estimates. For it to work, one must know how fast one can move Atoms from a central server to the interested process, and back again. Failure to estimate the performance, and to deal with its repercussions, will result in a system that is not usable. END OF NOTE.

Context and Motivation

This document outlines a proposed new architecture for the AtomSpace. The goal is to enable highly distributed and parallel processing of MindAgents in a way that's a lot simpler than what the current architecture naturally leads to. The ideas presented here were motivated by a bunch of exploratory work already done by OpenCog developers (especially Joel Pitt), and by related discussions on the mailing list over the past several months (such as the discussions on key-value pairs). I don't claim exclusive credit for the ideas, and it looks like the project was already moving in a similar direction before I discussed these ideas with Ben. Still, it seems worthwhile to consolidate all these thoughts in a single document for discussion.

Current Scenario

The current OpenCog Prime architecture makes the following assumptions for a large-scale deployment oriented toward real AGI:

  • There is one (or very few) processes per machine (these are the CogServers)
  • Each such process has one AtomSpace (though it could also use a shared back-end that supports distributed storage, such as the existing PostgreSQL back-end).
  • Each such process has a bunch of MindAgents, which are activated on every cognitive cycle, with some priority. There is a scheduler that is responsible for activating the MindAgents, respecting their priorities and other constraints such as desired cycle duration.
  • The MindAgents are executed by a thread pool, and they collaboratively release the CPU (i.e., each agent is supposed to implement its action as short processing slices).
  • Concurrent access to the AtomSpace is enabled through traditional locks between the threads.
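As a concrete illustration of the assumptions above, here is a minimal Python sketch of the current single-process design: a lock-guarded AtomSpace shared by MindAgents that a scheduler activates in priority order each cycle. All class and method names here are illustrative stand-ins, not the actual CogServer API.

```python
import threading

class AtomSpace:
    """Toy stand-in for the in-process AtomSpace, guarded by a single lock."""
    def __init__(self):
        self._atoms = {}
        self._lock = threading.Lock()

    def add(self, handle, atom):
        with self._lock:            # traditional locking between agent threads
            self._atoms[handle] = atom

    def get(self, handle):
        with self._lock:
            return self._atoms.get(handle)

class MindAgent:
    """Agents implement run() as a short slice and yield the CPU voluntarily."""
    def __init__(self, name, priority):
        self.name, self.priority = name, priority

    def run(self, atomspace):
        atomspace.add(self.name, {"visited": True})

def cognitive_cycle(agents, atomspace):
    # The scheduler activates every agent each cycle, highest priority first.
    for agent in sorted(agents, key=lambda a: -a.priority):
        agent.run(atomspace)

aspace = AtomSpace()
agents = [MindAgent("pln", 5), MindAgent("attention", 9)]
cognitive_cycle(agents, aspace)
print(aspace.get("pln"))
```

The complexity the text describes shows up as soon as agents stop being short, collaborative slices: the scheduler and the locking discipline both have to compensate.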

The diagram below depicts the current architecture.

[Image: RevisedAtomspace pic1.png]

The above assumptions aren't necessarily valid for small-scale, application-specific deployments, which are the vast majority of current OpenCog usage. These tend to have one or a few CogServers, with very specialized processing going on in some or all of those servers (e.g. the Learning Server). The architecture for those scenarios is typically ad hoc, because the "conceptually correct" architecture becomes cumbersome. We've historically accepted that, but it's a symptom of the architecture's inadequacy.

This architecture has a number of issues. It makes concurrent processing difficult, because a multi-threaded design like this is complex to get right by its very nature. It makes distributed processing difficult, because we rely on our own internal data structures and communication protocols. It's also complicated to respect agent priorities, especially when external processes (such as a virtual world) are involved: the scheduler becomes a complex beast, and we rely on agent implementations being naturally collaborative, doing their work in small slices with little variation between one activation and the next – a guarantee that's hard to provide in some cases, such as inference and natural language processing.

Of course, we never adequately implemented key aspects of this architecture, such as a smart-enough scheduler and the distributed AtomSpace. This is partially a consequence of prioritization, but also a consequence of the difficulties involved.

Proposed Architecture

This proposed scenario is depicted below.

[Image: RevisedAtomspace pic2.png]

The major goal behind this architecture redesign is to simplify concurrent (and parallel) as well as distributed processing of MindAgents, through the use of new tools and techniques that have become popular in the decade since the original architecture was conceived. The proposal is to do much less, and rely on external components to handle three key aspects:

  1. AtomSpace storage: I propose replacing the existing AtomTable and supporting repositories with a Redis database. Redis is a high-performance key-value DB with limited persistence support and cluster support. Redis would run in a separate process; MindAgents would have an AtomSpace API object they'd use to communicate with the server, using an event-based design similar to Joel's work.
  2. Distribution and communications: I propose using ZeroMQ for communication between MindAgents and the AtomSpace server, as well as for communications between MindAgents.
  3. Scheduling: I propose we either completely remove the CogServer (making each MindAgent its own standalone process) or assume it's always going to be very simplistic (holding an ordered list of MindAgents with the same priority, and no internal concurrency). This moves the burden of scheduling to the OS, and removes the need for complicated multi-threaded development inside the CogServer (there may still be multiple threads, but those would be concerned solely with communications, not processing).
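To make point 1 concrete, here is a hedged sketch of how atoms might be laid out in a Redis-style key-value store. A plain dict stands in for the Redis server, and the key layout (an `atom:<handle>` hash per atom plus a `(type, name)` index) is purely an assumption of this sketch, not a settled schema; with the real redis-py client, the same layout would use `hset`/`hget` on the same keys.

```python
# Dict standing in for a Redis server; with redis-py this would be
# r = redis.Redis() plus hset/hget calls against the same key layout.
store = {}

def add_node(store, handle, node_type, name, strength, confidence):
    # One hash per atom, e.g. key "atom:42" -> {type, name, truth-value fields}.
    store[f"atom:{handle}"] = {
        "type": node_type, "name": name,
        "tv.strength": strength, "tv.confidence": confidence,
    }
    # Secondary index so agents can look atoms up by (type, name).
    store[f"index:{node_type}:{name}"] = handle

def get_node(store, node_type, name):
    handle = store.get(f"index:{node_type}:{name}")
    return None if handle is None else store[f"atom:{handle}"]

add_node(store, 42, "ConceptNode", "cat", 0.9, 0.8)
print(get_node(store, "ConceptNode", "cat")["tv.strength"])
```

Note that this layout needs one round trip per atom field group, which is exactly where the performance concerns in Linas's note above would bite.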

As pointed out by Joel in discussions leading up to the production of the initial version of this page, we will also need some sort of machine-level controller, which allocates priorities to MindAgents based on their utility. I don't think such a controller actually exists in the codebase right now, but a simple controller process per machine could be implemented, dynamically adjusting the priorities of the MindAgent processes. Communications between the controller and the utility-computing MindAgents can use ZeroMQ, just like communications between vanilla MindAgents.
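The controller idea can be sketched as a simple mapping from agent utility to an OS-level priority. The `utility_to_nice` function and the 0–19 nice range are assumptions made for illustration only; a real controller would apply the result to the agent's process with something like `os.setpriority` on Unix.

```python
def utility_to_nice(utility, max_utility):
    # Highest-utility agents get nice 0 (normal); lowest get 19 (background).
    frac = max(0.0, min(1.0, utility / max_utility))
    return round((1.0 - frac) * 19)

def reprioritize(utilities_by_pid, max_utility):
    """Hypothetical per-machine controller step: map utilities to priorities."""
    for pid, utility in utilities_by_pid.items():
        nice = utility_to_nice(utility, max_utility)
        # In a real controller (Unix): os.setpriority(os.PRIO_PROCESS, pid, nice)
        print(f"pid {pid} -> nice {nice}")

reprioritize({1234: 10.0, 1235: 2.5}, max_utility=10.0)
```

Since each MindAgent is its own process in the proposed design, the OS scheduler then does the actual time-slicing, which is the whole point of item 3 above.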

I believe that this architecture has, as an extra benefit, much higher compatibility with current and forthcoming application-specific deployments, and it enables an easier incremental intelligence improvement for those applications. For instance, adding complex PLN inferences doesn’t require re-tuning the scheduler or worrying about either concurrent MindAgent execution or distributed deployment and communications.

Gradually Getting There

If we decide to implement the new architecture outlined above, it looks like there’s a natural incremental path from the current architecture leading to the target one without much disruption.

We can do the migration in the following sequence, where at the end of each step we still should have a fully functional system:

  1. Create an AtomSpace server, which would host the current AtomSpace and provide its current API as a service. At the same time, replace the Atomspace object in the CogServer with an Atomspace client, which provides the existing API to MindAgents, and communicates with the server using ZeroMQ and a protocol to be defined (web services are probably too high-ceremony).
  2. Eliminate or simplify the CogServer, enabling MindAgents to run on their own processes (or as single Agents in a simple CogServer), communicating directly with the Atomspace through the Atomspace client API. At this moment we can create API client implementations in different languages, so MindAgents need no C++ code at all to run. We’d start with a Python implementation, possibly reusing some of the existing Python adapter code.
    • A note on the use of ZeroMQ for communications. We can also use it for MindAgent to MindAgent communications, effectively enabling sequential activation, such as in a perception-cognition-action loop. We're no longer bound by the concept of a cognitive cycle, which seems unnecessary from an AI design perspective anyway.
  3. Replace the Atomspace internals with the Redis-based design. A detailed design is still needed for this, but I believe Redis’ support for data structures will be handy.
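A minimal sketch of step 1's client/server split, with JSON request/reply messages. To keep the example self-contained and runnable, a direct function call stands in for the ZeroMQ REQ/REP socket pair (with pyzmq, the client would `send_string` the request and `recv_string` the reply instead); the message shapes (`op`, `handle`, `atom`) are assumptions of this sketch, since the actual protocol is still to be defined.

```python
import json

# Server side: the AtomSpace server dispatches JSON request messages.
atoms = {}

def handle_request(raw):
    req = json.loads(raw)
    if req["op"] == "add":
        atoms[req["handle"]] = req["atom"]
        return json.dumps({"ok": True})
    if req["op"] == "get":
        return json.dumps({"ok": True, "atom": atoms.get(req["handle"])})
    return json.dumps({"ok": False, "error": "unknown op"})

# Client side: with pyzmq, handle_request(...) would be replaced by a
# send/receive pair over a REQ/REP socket; the API seen by MindAgents
# stays the same either way.
class AtomSpaceClient:
    def add(self, handle, atom):
        return json.loads(handle_request(json.dumps(
            {"op": "add", "handle": handle, "atom": atom})))

    def get(self, handle):
        return json.loads(handle_request(json.dumps(
            {"op": "get", "handle": handle})))["atom"]

client = AtomSpaceClient()
client.add("h1", {"type": "ConceptNode", "name": "cat"})
print(client.get("h1")["name"])
```

Because MindAgents only see the client API, step 1 can swap the transport underneath without touching agent code, which is what makes the migration incremental.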

The last step is the major one, of course, but it only takes place once the communications and distribution layers are already working, so the effort is 100% focused on the Atomspace data structures.

I suspect that once this architecture is in place we’ll want to revise the Atomspace API so it more closely matches the performance contracts of the new database. That would involve changes to the MindAgents, but it can be done in a carefully controlled way later on.

Discussion on the Local Cache

(Initially written by Ben Goertzel)

Discussion on the OpenCog email list following the initial posting of this page focused partly on the value of a very rapid local Atomspace cache (which both Linas and Joel emphasized in discussions).

The basic desire here is to have some sort of software that various MindAgents (or other OpenCog cognitive processes) can use, in real time during their cognitive processing, to store and manipulate their interim knowledge. More precisely, the desire is to have something that

  • Can serve as a local (single-machine) AtomSpace cache within an overhauled Atomspace design like Cassio's
  • Will be fast enough (for access and manipulation) that cognitive processes could use it in place of the specialized data representations they currently use. For instance, MOSES could use it instead of Combo. PLN could use it instead of the BackInferenceTree. Attention allocation could use it instead of a large sparse matrix. Etc.


Detailed design of such a component will require some thought. However, Cassio and Ben suspect that it can be made very fast if:

  • We eliminate multi-threading considerations, assuming that the local cache can be accessed in a single-threaded way
  • We allow direct pointer access of the local cache (or boost smart pointer access, perhaps) while including some behind the scenes synchronization with the hypergraph server.

The latter isn't trivial, but seems possible.
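One way the "behind the scenes synchronization" could look is a single-threaded write-back cache: direct dictionary access locally, a dirty set tracking modified atoms, and a `flush()` that pushes them to the central server. This is a sketch under those assumptions, not a design; the class names and the `FakeServer` stand-in are hypothetical.

```python
class LocalAtomCache:
    """Single-threaded local cache: direct access, dirty-set write-back."""
    def __init__(self, server):
        self._server = server      # stand-in for the central hypergraph server
        self._atoms = {}
        self._dirty = set()

    def get(self, handle):
        if handle not in self._atoms:               # fault in on a miss
            self._atoms[handle] = self._server.get(handle)
        return self._atoms[handle]

    def put(self, handle, atom):
        self._atoms[handle] = atom
        self._dirty.add(handle)                     # sync later, not per write

    def flush(self):
        for handle in self._dirty:                  # behind-the-scenes sync
            self._server.put(handle, self._atoms[handle])
        self._dirty.clear()

class FakeServer:
    def __init__(self): self.atoms = {}
    def get(self, h): return self.atoms.get(h)
    def put(self, h, a): self.atoms[h] = a

server = FakeServer()
cache = LocalAtomCache(server)
cache.put("h1", {"name": "cat"})
assert server.atoms == {}        # not synced until flush
cache.flush()
print(server.atoms["h1"]["name"])
```

Dropping the multi-threading assumption is what makes the fast path here a bare dict lookup; the hard part the text alludes to is deciding when flushes happen and how conflicting writes from other machines are reconciled.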

Comments from Nil on the local cache and MOSES

Here would be my requirements from the perspective of using such a library to represent combo programs.

It would be cool if

  • one can have a pool of programs where sub-programs are uniquely represented (just like atoms in the atomspace)
  • running a program under such representation would be just as fast as in the current one using tree.h (which should be achievable if links are pointers)
  • we rely on some generic key-value representation so that one can store any mutable information in an Atom (the value part). And the type of the key is completely generic as well (perhaps via some template programming).

By doing that MOSES could maintain a pool of programs at lower memory cost. Plus, one can use the mutable value part of an Atom (sub-program) to store information such as a list of outputs obtained by evaluating that sub-program on a list of inputs (thus using the mutable part to cache information at the sub-program level).

So it would save both RAM and CPU.

The drawback would be that when a new program is generated, its subcomponents would have to be retrieved from the AtomSpace so the program can find its place. If the hashing function is well done (lookup in roughly log(|Key|) time), I think this shouldn't be a problem.

PLN and the local cache (Ben's quick comments)

From a PLN view, in order to get rid of the separate BackInferenceTree and just use a generic local Atom cache, it seems all we'd need is flexible capability to create something like a BITNode in the local Atomspace, and then link these together with customized links, and rapidly traverse links between the BITNodes...

So, my quick impression is that if the requirements for Combo trees were met, the requirements for BITs would be met too...


Comment from Jared

"Efficient custom links would be awesome."


Comments from Linas

My requirements are somewhat flexible. "Fast" is a requirement. Ability to sync with handles & atoms is a requirement. I'm pretty sure it should be "thread enabled", i.e. not necessarily thread-safe (i.e. not necessarily idiot-proof), but reasonable if you know what you're doing. Things like key-value and dynamically-created types seem reasonable, but I don't really understand how these requirements mesh with the atomspace. What does it mean for the atomspace to store key-value pairs? How would they be represented?

Response from Ben:

I'm curious for more detail about "thread-enabled" ...

About key/value pairs, of course it would be reasonably straightforward to extend the Atomspace so that each Atom is associated with a hashtable or similar. If this were done in the local cache I suppose it would need to be done in the centralized Atomspace proper as well.... I can see how this would be very helpful for the use cases of MindAgents using the Atomspace local cache as their main real-time data representation.
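Ben's suggestion of attaching a hashtable to each Atom might look like the following sketch, where keys are arbitrary hashable objects (mirroring the "completely generic key type" idea from Nil's list); the `set_value`/`get_value` names are assumptions of this sketch, not an existing API.

```python
class Atom:
    """Atom extended with a generic key-value table, as discussed above."""
    def __init__(self, atom_type, name):
        self.atom_type, self.name = atom_type, name
        self._values = {}          # hashtable attached to each Atom

    def set_value(self, key, value):
        self._values[key] = value

    def get_value(self, key, default=None):
        return self._values.get(key, default)

cat = Atom("ConceptNode", "cat")
cat.set_value(("pln", "strength"), 0.9)   # keys can be arbitrary hashables
cat.set_value("sti", 42)
print(cat.get_value(("pln", "strength")))
```

As Ben notes, if the local cache supports this, the centralized Atomspace would have to represent the same table so the two can stay in sync.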

Apache Storm can also be considered as the framework for distributed computation (i.e. vs ZeroMQ). --Hendy (talk) 04:31, 4 July 2014 (CDT)