OpenCogPrime:DistributedCognition


Distributed Cognition

This page discusses distributed cognitive processing in OCP. The topic is only lightly reviewed here: it is large and deep, but it has more to do with "standard computer science" than with AGI per se. Getting this difficult standard computer science right is critical for achieving AGI on current networks of von Neumann machines, but it does not require dramatic original conceptual innovations going far beyond the current state of the art.

Specialized Strategies for Distributing Particular Cognitive Algorithms

There are many ways to set up distributed processing in OCP, consistent with the above generalities regarding complex AtomSpaces. The following section addresses this issue at a high level: it reviews how a multipart AtomSpace can be divided into functionally distinct units, each corresponding to a particular domain of cognition, such as language processing, visual perception, or abstract reasoning. But there are also specifics of distributed processing that correspond to cognitive algorithms rather than to domains of cognition. The two main cognitive algorithms in OCP are PLN and PEL, and each naturally lends itself to a different style of distribution. Each functionally specialized unit described in the following section may then consist of a network of machines, including subnetworks carrying out distributed PLN or distributed PEL according to distribution architectures appropriate to those algorithms. A detailed description of approaches for distributing PLN or PEL would take us too far afield here, but we give a rough indication below.

In the case of PLN, one useful "trick" may be to take N nodes and partition them among K machines, and then use, on each machine, only the links between the nodes on that machine. There are then cycles of inference and repartitioning. This allows inference to deal with a body of knowledge much larger than can fit on any one machine, but without any particular inference MindAgent ever needing to wait for the Mind OS to make a socket call to find the Atom at the end of a Link.
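
As a rough illustration of this trick, the following Python sketch shows the partition-and-repartition cycle. The Atom and Link structures, the link.endpoints attribute, and the infer_locally() callback are hypothetical stand-ins, not the actual Mind OS API.

 # Illustrative sketch of the partition-and-repartition cycle described above.
 # The link.endpoints attribute and infer_locally() callback are hypothetical.
 import random
 
 def partition_nodes(nodes, k):
     """Randomly assign each node to one of k machines."""
     parts = [[] for _ in range(k)]
     for node in nodes:
         parts[random.randrange(k)].append(node)
     return parts
 
 def local_links(links, node_set):
     """Keep only the links whose endpoints all lie in the same partition."""
     return [link for link in links
             if all(n in node_set for n in link.endpoints)]
 
 def distributed_pln_cycle(nodes, links, k, rounds, infer_locally):
     """Alternate local inference on each partition with repartitioning."""
     for _ in range(rounds):
         for part in partition_nodes(nodes, k):
             part_set = set(part)
             # Each machine runs inference only over links internal to its
             # partition, so no MindAgent blocks on a remote Atom lookup.
             infer_locally(part, local_links(links, part_set))
         # The next round repartitions, so links that crossed partition
         # boundaries this round get a chance to be used later.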

On the other hand, distributed PEL is conceptually an even simpler beast. Evolutionary algorithms like GA/GP, BOA and MOSES lend themselves very naturally to distributed processing: if one is evolving a population of N items (e.g. SchemaNodes), one can simply partition them among K machines and speed up fitness evaluation by a factor of roughly K (minus a small amount of communication overhead, which is insignificant in the usual case of expensive fitness functions). The subtler aspect of distributed PEL is distributing the probabilistic modeling and instance generation phase. This can also be done, though less simply than distributing fitness evaluation; the details depend on which species of probabilistic modeling one is using inside BOA. For instance, if an expensive approach to instance generation is used, then the probabilistic model of fit population elements can be distributed across several machines, with each machine generating a subset of the instances required.
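
The simple part of this scheme, distributed fitness evaluation, is sketched below in Python. A process pool stands in for K networked machines, and evaluate_fitness() is a trivial placeholder for an expensive PEL fitness function; neither is drawn from the actual OCP codebase.

 # Minimal sketch of distributed fitness evaluation; a process pool stands in
 # for K networked machines.
 from multiprocessing import Pool
 
 def evaluate_fitness(candidate):
     # Placeholder: in OCP this might execute a candidate program (e.g. a
     # SchemaNode) and score the result; here it is a trivial stand-in.
     return sum(candidate)
 
 def distributed_fitness(population, k):
     """Split the population among k workers and evaluate in parallel."""
     with Pool(processes=k) as pool:
         return pool.map(evaluate_fitness, population)
 
 if __name__ == "__main__":
     # Roughly k-fold speedup when evaluate_fitness dominates the cost.
     print(distributed_fitness([[1, 2], [3, 4], [5, 6], [7, 8]], k=2))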

Globally Distributed Processing

Finally, we must not omit the possibility of broadly distributed processing, in which OCP intelligence is spread across thousands or millions of machines networked via the Internet. Even if none of these machines is exclusively devoted to OCP, the total processing power may be massive, and massively valuable.

In terms of OCP core formal structures, a globally distributed network of machines carrying out peripheral processing tasks for an OCP system is best considered as its own massive Multipart-AtomSpace. Each computer or local network involved in carrying out peripheral OCP processing is itself a simple AtomSpace. The whole Multipart-AtomSpace that is the globally distributed network must interact with the Multipart-AtomSpace that is the primary OCP system, or perhaps with multiple separate Multipart-AtomSpaces representing several OCP systems drawing on the same pool of Webworld locations.
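
A rough structural sketch of these relationships follows; the Python class names and hostnames are purely illustrative and do not correspond to the OCP implementation.

 # Illustrative structural sketch only; not the OCP implementation.
 class AtomSpace:
     """A simple AtomSpace hosted on one computer or local network."""
     def __init__(self, host):
         self.host = host
         self.atoms = {}
 
 class MultipartAtomSpace:
     """Several AtomSpaces treated as a single logical knowledge store."""
     def __init__(self, parts):
         self.parts = list(parts)
 
 # The globally distributed network of peripheral machines...
 peripheral = MultipartAtomSpace(AtomSpace(h) for h in ["peer-1", "peer-2"])
 # ...interacts with the Multipart-AtomSpace of the primary OCP system
 # (or with several such systems sharing the same pool of machines).
 primary = MultipartAtomSpace([AtomSpace("core-0"), AtomSpace("core-1")])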

The use of this kind of broadly distributed computing resource involves numerous additional control problems, of course, which we will not address here. A simple case is massive global distribution of PEL fitness evaluation. Where fitness evaluation is isolated and depends only on local data, this is extremely straightforward. In the more general case where fitness evaluation depends on knowledge stored in a large AtomSpace, a subtler design is required: each globally distributed PEL subpopulation contains a pool of largely similar genotypes, along with a cache of the relevant parts of the AtomSpace that is continually refreshed during the fitness evaluation process. This can work so long as each globally distributed lobe has a reasonably reliable high-bandwidth connection to a machine containing a large AtomSpace.
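
The following Python sketch illustrates this caching design under the assumption of a hypothetical atomspace_client offering a fetch_relevant_atoms() call; none of these names come from the actual OCP codebase.

 # Sketch of one globally distributed PEL lobe holding a subpopulation and a
 # cache of relevant AtomSpace knowledge. The atomspace_client and its
 # fetch_relevant_atoms() method are hypothetical placeholders.
 class FitnessWorker:
     def __init__(self, subpopulation, atomspace_client, refresh_every=10):
         self.subpopulation = subpopulation   # pool of largely similar genotypes
         self.client = atomspace_client       # link to a machine holding a large AtomSpace
         self.refresh_every = refresh_every
         self.cache = {}
 
     def refresh_cache(self):
         # Pull only the Atoms relevant to this subpopulation's genotypes.
         self.cache = self.client.fetch_relevant_atoms(self.subpopulation)
 
     def evaluate(self, fitness_with_knowledge):
         scores = []
         for i, genotype in enumerate(self.subpopulation):
             if i % self.refresh_every == 0:
                 self.refresh_cache()         # continual refresh during evaluation
             scores.append(fitness_with_knowledge(genotype, self.cache))
         return scores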