From OpenCog

Economic Attention Allocation (ECAN) was an OpenCog subsystem intended to control attentional focus during reasoning. The idea was to allocate attention as a scarce resource (thus, "economic") which would then be used to "fund" some specific train of thought. This system is no longer maintained; it is one of the OpenCog Fossils.

The general idea of attention allocation as a way of allocating compute resources remains an important and unsolved problem within OpenCog. There were multiple reasons to retire the initial implementation:

  • It attempted to micro-manage: each and every Atom was given an AttentionValue. This is a waste of memory and a waste of compute. Attention can be (and should be) managed at the scale of millions of Atoms at a time, as a block, perhaps in some hierarchical, multi-scale fashion, so that computational resources are managed from second to second and minute to minute, instead of one microsecond at a time.
  • There was no effective plug-in system to manage what was being controlled. Such a plug-in system has recently been created: the idea of ProxyNodes. So, for example, an ECAN-guided StorageNode can be created, which can control which Atoms remain in RAM, and which are retired to disk. However, the ProxyNode infrastructure did not exist at the time of the original ECAN implementation.
  • ECAN depended strongly on the Agent subsystem. The Agent subsystem was itself flawed: it attempted to reinvent basic operating system scheduling concepts, but in a naive and unscalable fashion. A replacement for the Agent subsystem does not yet exist; perhaps one could be built on the ProxyNode concept? This remains terra incognita.

Economic Attention Allocation

We have discussed the semantics of STI, LTI and other measures of Atom importance, and have also described how the information collected in the SystemActivityTable may be data-mined to produce information pertinent to adjusting STI and LTI values. In this section we will build on these ideas and describe an approach to rapid, "quick and dirty" dynamic attention allocation in OCP — one that is able to incorporate information from the more sophisticated, more expensive data-mining approach when it is available.

In the economic approach, STI and LTI are represented as currencies. The amount of STI currently possessed by an entity (e.g. an Atom) is intended to be interpreted as roughly proportional to the STI as probabilistically defined above. However, a precise mapping from STI currency stores to probabilistic STI values does not ever need to be calculated in the course of practical attention allocation. Rather, estimations of probabilistic short-term importance may be used to affect STI currency values in other ways.

All "financial agents" within OCP own certain amounts of LTI and STI currency. Financial agents include at least: Units, MindAgents, and Atoms. Financial agents exchange currency with each other according to equations designed to embody judgments about which allocation of currency is likely to optimize overall system effectiveness, and to encourage overall proportionality between the currency store associated with an agent and the (never calculated in practice) probabilistically-defined STI and LTI values associated with that agent.

In the initial design for economic attention allocation within OCP, there is not an explicit competition among financial agents, in the sense that different agents are negotiating with each other or auctioning services to each other, each one seeking to obtain its own economic advantage. This sort of dynamic might be worth introducing into OCP later, but is not being suggested for initial exploration. Rather, the only "economic" thing about the currencies in the initial design is that they are conserved. If one agent's STI currency store is increased, another one's must be commensurately decreased. This conservation makes the overall dynamics of the system easier to understand and regulate, in a number of ways.

The "Attentional Focus" (AF) in this approach is simply defined as the set of Atoms with STI greater than a certain fixed threshold. This is important because some dynamics act differently on Atoms in AF than on other ones.
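The AF definition above can be sketched in a few lines of Python. The `Atom` class and `attentional_focus` function here are illustrative names, not the actual OpenCog API:

```python
# Minimal sketch of the AttentionalFocus: the set of Atoms whose STI
# exceeds a fixed threshold. Class and function names are hypothetical.

class Atom:
    def __init__(self, name, sti=0.0, lti=0.0):
        self.name = name
        self.sti = sti   # short-term importance currency store
        self.lti = lti   # long-term importance currency store

def attentional_focus(atoms, threshold):
    """Return the set of Atoms with STI strictly greater than the threshold."""
    return {a for a in atoms if a.sti > threshold}
```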

Within this basic framework, a number of kinds of currency exchange must occur. In the following I will qualitatively describe these, along with some other dynamics that are important for causing or regulating currency exchange.

Most of the discussion will pertain to generic STI and LTI values; however, the exact same mechanisms may be utilized for the MindAgent-specific STI and LTI values attached to Atoms. The only difference is that MindAgents themselves do not receive MindAgent-specific STI and LTI, so for these values the economics is entirely between the Atoms and the Lobe.

The basic logic of attentional currency flow to be described in this section is as follows. The flow occurs for STI and LTI separately:

  • Each Unit is its own economy; there is no flow of currency between Units. Units contain central banks, and when a Unit's central bank carries out a financial transaction, we will say that the Unit carries out the transaction.
  • Currency flows from Units to Atoms when a Unit rewards its component Atoms with wages for being utilized by MindAgents or by helping to achieve system goals
  • Currency flows from Atoms to Units when Atoms pay Units rent for occupying the AtomTable
  • Currency flows from Units to MindAgents when Units reward MindAgents for helping achieve system goals
  • Currency flows from MindAgents to Units when MindAgents pay Units rent for the processor time they utilize
  • Currency flows from Atoms to Atoms via importance spreading which utilizes HebbianLinks

The discussion here will mostly be qualitative. See the document AttentionAllocationEquations for specific, currently implemented equations along the lines discussed here.

(NOTE: This page may be obsolete in some of its details. Joel Pitt needs to update it to accord with the recent implementation of Attention Allocation he's done. Take it away, Joel!!!)

Atom Stimulation

In the economic approach to attention allocation, when an Atom is used by a MindAgent, it should get some economic reward for this (both STI and LTI). The reward may be greater if the Atom has been extremely useful to the MindAgent, especially if it's been extremely useful in helping the MindAgent achieve some important system goal.

One way to implement this would be for MindAgents to give some of their currency directly to the Atoms they use; but we have decided not to go with this approach initially, though it may be useful in the future. It seems overcomplex to have MindAgents restricted in the number of Atoms they can use based on their STI currency store. Instead, as noted below in the discussion of MindAgent scheduling, in the current approach a MindAgent's currency store controls its processor-time utilization, which indirectly affects the number of Atoms it can use.

Instead, in the initial approach, MindAgents give "stimuli" to Atoms when using them. Then, each core cycle, during the importance updating process (see below), Atoms are rewarded proportionately to the amount of stimulus they have received (out of the Unit's fund of currency).

Importantly, this is where estimates of utility made by data mining the SystemActivityTable may come into play. A MindAgent may give a certain default amount of stimulus to an Atom merely for being utilized, but, if data mining has revealed that the Atom actually was important for what the MindAgent did, then two things may happen:

  • The Unit may disburse a significantly greater amount of stimulus to the Atom
  • The MindAgent-specific importances of the Atom may be incremented, via sending out MindAgent-specific stimuli
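The stimulus-then-wages mechanism described above can be sketched as follows. The `StimulusLedger` name and its methods are illustrative assumptions, not the actual implementation: MindAgents record stimulus against Atoms during a cycle, and at importance-updating time the Unit disburses wages from its fund in proportion to stimulus received.

```python
from collections import defaultdict

# Sketch: stimulus accumulates per Atom during a cycle; wages are then paid
# out of the Unit's fund proportionally to stimulus. Hypothetical names.

class StimulusLedger:
    def __init__(self):
        self.stimulus = defaultdict(float)

    def stimulate(self, atom, amount=1.0):
        """Called by a MindAgent whenever it uses an Atom; data mining of the
        SystemActivityTable could justify a larger amount."""
        self.stimulus[atom] += amount

    def pay_wages(self, unit_fund, sti):
        """Disburse unit_fund among stimulated Atoms, proportionally."""
        total = sum(self.stimulus.values())
        if total == 0:
            return 0.0
        for atom, s in self.stimulus.items():
            sti[atom] = sti.get(atom, 0.0) + unit_fund * s / total
        self.stimulus.clear()   # stimulus is consumed each cycle
        return unit_fund
```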

Atom Importance Updating

Next, the ImportanceUpdating MindAgent loops through all Atoms in the AtomTable and updates their STI and LTI values according to two principles:

  • rewarding of Atoms (aka paying wages to Atoms) for getting stimuli (which are delivered by MindAgents and also based on SystemActivityTable data-mining, as mentioned above)
  • charging of "rent" to Atoms

LTI rent payment occurs for all Atoms every time the ImportanceUpdating MindAgent carries out its loop through the AtomTable. On the other hand, STI rent payment occurs only for Atoms in the AttentionalFocus.

Initially, a constant rent rate has been assumed, although this assumption will be lifted in future versions. For instance, it makes sense for Atoms that consume a large amount of memory to be charged a differentially higher LTI rent.

The charging of rent has the effect of a time decay of Atom importance values, which is consistent with the probabilistic semantics of importance values: both STI and LTI are intended to measure recent importance, though STI works with a more stringent definition of recency. The rewarding of wages compensates for the time decay by adding new importance to the Atoms that have been found useful in recent past.
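A single pass of the rent-charging side of this update can be sketched as below. This is an illustrative simplification (names, data layout, and rates are assumptions): LTI rent is charged to every Atom, while STI rent is charged only to Atoms inside the AttentionalFocus.

```python
# Sketch of one ImportanceUpdating rent pass over the AtomTable.
# Every Atom pays LTI rent; only AF members (STI above threshold) pay
# STI rent. Collected rent returns to the Unit's central bank.

def charge_rent(atoms, sti_rent, lti_rent, af_threshold):
    collected_sti = collected_lti = 0.0
    for atom in atoms:                      # atoms: list of dicts
        atom['lti'] -= lti_rent             # LTI rent for all Atoms
        collected_lti += lti_rent
        if atom['sti'] > af_threshold:      # STI rent only inside the AF
            atom['sti'] -= sti_rent
            collected_sti += sti_rent
    return collected_sti, collected_lti
```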

Homeostatic mechanisms

The dynamics of economic attention allocation is complex, and some special mechanisms are necessary to keep the dynamics of STI and LTI updating from becoming overly volatile:

  • Adjustment of compensation or rental rates: the amount of currency given to an Atom for each unit of stimulus, or the amount of rent charged to each Atom per cycle, may need to be adjusted periodically if Atom STI or LTI values overall seem to be drifting systematically in some particular direction.
  • Enforcement of caps: maximum and minimum values on the STI and LTI of any Atom are enforced. No one can get too rich or go into too much debt.
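Both mechanisms can be sketched in a few lines; the gain, targets, and cap values here are illustrative assumptions, not tuned parameters from the implementation:

```python
# Sketch of the two homeostatic mechanisms: (1) nudge the rent rate when the
# mean STI drifts from a target level, and (2) clamp every value into a band.

def adjust_rent(rent, mean_sti, target_sti, gain=0.1):
    """Raise rent when importance drifts upward, lower it when it drifts down."""
    drift = (mean_sti - target_sti) / max(abs(target_sti), 1.0)
    return rent * (1.0 + gain * drift)

def enforce_caps(values, lo=-100.0, hi=100.0):
    """No Atom may get too rich or go too far into debt."""
    return [min(max(v, lo), hi) for v in values]
```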

Hebbian Learning

Hebbian learning in OCP refers to the creation or truth-value-updating of HebbianLinks. As noted above, the general semantics of a HebbianLink from A to B is that when A is important, B is often also important. We may also have PredictiveHebbianLinks that have a time delay associated with them, similar to PredictiveImplicationLinks.

HebbianLink truth values may be updated by data mining on the SystemActivityTable; these updates will have fairly high weight of evidence, generally speaking. On the other hand, there are also lower-confidence heuristics for updating HebbianLink truth values. For instance, a simple heuristic is:

  • If A and B are both in the AF at a certain point in time, then this is a piece of evidence in favor of the HebbianLink pointing from A to B.
  • If, at a certain point in time, A is in the AF but B is not, then this is a piece of evidence against the HebbianLink pointing from A to B.

Of course this heuristic with its binary nature is somewhat crude, and more sensitive heuristics may easily be imagined.
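The binary heuristic can be sketched as a simple evidence counter over snapshots of the AF; the function name and the frequency-style truth value here are illustrative assumptions rather than the actual TruthValue update formula:

```python
# Sketch of the crude binary heuristic for HebbianLink(A -> B): each AF
# snapshot containing A is one observation; it counts for the link if B is
# also present, and against it otherwise.

def hebbian_evidence(af_snapshots, a, b):
    """Return (strength, observation_count) for the HebbianLink from a to b."""
    positive = total = 0
    for af in af_snapshots:         # af_snapshots: iterable of sets of Atoms
        if a in af:                 # only snapshots containing A are evidence
            total += 1
            if b in af:
                positive += 1
    return (positive / total if total else 0.0, total)
```

A more sensitive heuristic would weight each snapshot by how far A's and B's STI values actually exceed the AF threshold, rather than treating membership as binary.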

What the SystemActivityTable data-mining will catch, and this simplistic approach will miss, are Hebbian predicates embodying criteria such as

"If both A and B are important at time T, and C is important at time T+1, then D is likely to be important at time T+2"

Ultimately, PLN inference may be used to help determine the truth values of HebbianLinks and Hebbian predicates. This will allow the attention allocation process to be smarter than a simple reinforcement learning system.

Importance Spreading

Next, importance spreading refers to the phenomenon wherein an Atom decides to pass some of its currency to other Atoms to which it is related. This is in a sense a poor man's inference — it is preferable conceptually and ultimately more accurate to use PLN inference to infer one Atom's importance from the importance of other suitably related Atoms. But, computationally, doing this much inference is not always feasible. So, in the current design, the inference goes into the truth values of HebbianLinks — and HebbianLinks are then utilized within a non-inferential process of importance spreading, conducted by a separate ImportanceSpreading MindAgent.

For instance, if there is a Hebbian link from A to B, then A may want to pass some of its currency to B, depending on various factors including how much currency A has, and how many Hebbian links point out from A. Of course, the equations governing this must ensure that A doesn't pass too much of its currency to other Atoms, or A will have no currency left for itself. One approach is for each Atom to allocate a maximum percentage of its importance to spreading, and then spread this along its outgoing HebbianLinks proportionately.
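The proportional-spreading scheme just described can be sketched as follows; the function name, data layout, and the 20% cap are illustrative assumptions. Note that currency is conserved: whatever the source Atom loses, its Hebbian neighbors gain.

```python
# Sketch of importance spreading: an Atom gives away at most a fixed
# fraction of its STI, divided among its outgoing HebbianLinks in
# proportion to link strength.

def spread_importance(sti, links, source, max_fraction=0.2):
    """links: dict mapping target Atom -> HebbianLink strength from source."""
    total_strength = sum(links.values())
    if total_strength == 0:
        return
    budget = sti[source] * max_fraction          # cap on what source gives away
    for target, strength in links.items():
        share = budget * strength / total_strength
        sti[source] -= share
        sti[target] = sti.get(target, 0.0) + share
```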

The ultimate goal of this process is to heuristically approximate what would be achieved by using inference to estimate the implications of various Atoms' importance for various other Atoms' importance.

An interesting question is what happens, in OCP, when importance is spread to schemata or predicates. These schemata or predicates may spread some of their importance to their component schemata or predicates, but in order to guide this, it may be useful to have a specialized process for building HebbianLinks between Procedures and their component Procedures. The HebbianLink from a procedure P to a component procedure P1 should be given an STI proportional to the "importance" of P1 to P, which may be measured in various ways, e.g. by how much P's behavior changes if P1 is removed from P.

MindAgent Importance and Scheduling

So far we have discussed economic transactions between Atoms and Atoms, and between Atoms and Units. MindAgents have played an indirect role, via spreading stimulation to Atoms which causes them to get paid wages by the Unit. Now it is time to discuss the explicit role of MindAgents in economic transactions. This has to do with the integration of economic attention allocation with the Scheduler that schedules MindAgents.

This integration may be done in many ways, but one simple approach is:

  1. When a MindAgent utilizes an Atom, this results in sending stimulus to that Atom. (Note that we don't want to make MindAgents pay for using Atoms individually; that would penalize MA's that use more Atoms, which doesn't really make much sense.)
  2. MindAgents then get currency from the Lobe periodically, and get extra currency based on usefulness for goal achievement as determined by the credit assignment process. The Scheduler then gives more processor time to MindAgents with more STI.
  3. However, any MindAgent with LTI above a certain minimum threshold will get some minimum amount of processor time (i.e. get scheduled at least once each N cycles).
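Steps 2 and 3 above can be sketched as a share computation; the function name, the floor-share value, and the renormalization step are illustrative assumptions about how a scheduler might consume the STI/LTI values:

```python
# Sketch of STI-driven scheduling: processor-time shares are proportional
# to MindAgent STI, but any agent whose LTI clears a minimum threshold is
# guaranteed a small floor share, so it gets scheduled periodically.

def schedule_shares(agents, lti_floor_threshold, floor_share=0.01):
    """agents: dict name -> (sti, lti). Returns dict name -> share of time."""
    total_sti = sum(sti for sti, _ in agents.values()) or 1.0
    shares = {}
    for name, (sti, lti) in agents.items():
        share = sti / total_sti
        if lti >= lti_floor_threshold:       # guaranteed minimum service
            share = max(share, floor_share)
        shares[name] = share
    norm = sum(shares.values())              # renormalize so shares sum to 1
    return {n: s / norm for n, s in shares.items()}
```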

As a final note: In a multi-Lobe Unit, the Unit may use the different LTI values of MA's in different Lobes to control the distribution of MA's among Lobes: e.g. a very important (LTI) MA might get cloned across multiple Lobes.