Attention allocation

Attention allocation within OpenCog (also referred to as ECAN) weights pieces of knowledge relative to one another, based on what has been important to the system in the past and what is currently important. Attention allocation has several purposes:

  1. To guide the process of working out which knowledge should be kept in memory, which should be stored locally on disk, and which can be stored in a distributed fashion on other machines.
  2. To guide the forgetting process: deleting knowledge that is deemed to be of no further use, or that has been integrated into the system in other ways (e.g. raw perceptual data).
  3. To guide reasoning carried out by PLN. During inference, the combinatorial explosion of potential inference paths becomes a bane to effective and efficient reasoning. By providing an ordering of which paths should be tried first, and potentially also a cutoff (knowledge with too low an importance is ignored), inference should become more tractable; a sketch of this idea follows the list.
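
A minimal sketch of the third point, assuming a hypothetical Candidate structure and rankCandidates() helper (this is not the actual PLN interface): candidate premises are ordered by short-term importance, and anything below a cutoff is dropped before inference considers it.

  #include <algorithm>
  #include <string>
  #include <vector>

  // Hypothetical stand-in for an atom handle paired with its short-term importance.
  struct Candidate {
      std::string atomName;
      double sti;   // short-term importance
  };

  // Drop candidates whose STI falls below the cutoff, then order the rest by
  // STI (most important first), so inference explores the most attended
  // knowledge first and ignores unimportant atoms entirely.
  std::vector<Candidate> rankCandidates(std::vector<Candidate> candidates,
                                        double stiCutoff)
  {
      candidates.erase(
          std::remove_if(candidates.begin(), candidates.end(),
                         [stiCutoff](const Candidate& c) { return c.sti < stiCutoff; }),
          candidates.end());
      std::sort(candidates.begin(), candidates.end(),
                [](const Candidate& a, const Candidate& b) { return a.sti > b.sti; });
      return candidates;
  }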

Status

Currently, attention allocation is implemented for keeping track of the importance of atoms. The overall design of OpenCog calls for keeping track of MindAgent importance as well: MindAgents confer attention on the atoms they use, and are then rewarded with importance funds when they achieve system goals. MindAgents can currently store attention, but the mechanism for rewarding them for fulfilling system goals is not yet implemented (in part because the goal system is not yet implemented).

Entities involved

Overview

This section presents how the flow of attention allocation works.

  1. Rewarding "useful" atoms:
    1. Atoms are given stimulus by a MindAgent if they've been useful in achieving the MindAgent's goals.
    2. This stimulus is then converted into Short- and Long-Term Importance (STI and LTI) by the ImportanceUpdatingAgent.
  2. STI is spread between atoms along HebbianLinks, either by the ImportanceDiffusionAgent or the ImportanceSpreadingAgent.
  3. The HebbianLinkUpdatingAgent updates the HebbianLink truth values, based on whether linked atoms are in the Attentional Focus or not.
  4. The ForgettingAgent removes atoms relative to a threshold LTI, removing either those below it or those above it. A simplified sketch of this cycle follows the list.
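
The sketch below walks through one simplified pass of this flow. The types and member names (Atom, HebbianLink, AttentionBank, the rate parameters) are invented for illustration and are not the real OpenCog classes; only the order of operations follows the list above.

  #include <cstddef>
  #include <string>
  #include <vector>

  // Toy model of the attention-allocation cycle.
  struct Atom {
      std::string name;
      double sti = 0.0;       // short-term importance
      double lti = 0.0;       // long-term importance
      double stimulus = 0.0;  // stimulus granted by MindAgents this cycle
  };

  struct HebbianLink {
      std::size_t a, b;       // indices into the atom table
      double strength = 0.5;  // truth-value strength of the link
  };

  struct AttentionBank {
      std::vector<Atom> atoms;
      std::vector<HebbianLink> links;
      double focusBoundary = 10.0;   // atoms with STI above this are "in focus"
      double forgetThreshold = 1.0;  // atoms with LTI below this are removed

      // 1. Convert accumulated stimulus into STI and LTI
      //    (the ImportanceUpdatingAgent's role).
      void convertStimulus(double stiPerStim, double ltiPerStim) {
          for (auto& a : atoms) {
              a.sti += a.stimulus * stiPerStim;
              a.lti += a.stimulus * ltiPerStim;
              a.stimulus = 0.0;
          }
      }

      // 2. Exchange a fraction of STI between the endpoints of each
      //    HebbianLink (the ImportanceDiffusionAgent's role).
      void diffuseSti(double diffusionRate) {
          std::vector<double> delta(atoms.size(), 0.0);
          for (const auto& l : links) {
              double fromA = atoms[l.a].sti * diffusionRate * l.strength;
              double fromB = atoms[l.b].sti * diffusionRate * l.strength;
              delta[l.a] += fromB - fromA;
              delta[l.b] += fromA - fromB;
          }
          for (std::size_t i = 0; i < atoms.size(); ++i) atoms[i].sti += delta[i];
      }

      // 3. Nudge HebbianLink strengths toward 1 when both endpoints are in the
      //    attentional focus, toward 0 otherwise (the HebbianLinkUpdatingAgent's role).
      void updateHebbianLinks(double learningRate) {
          for (auto& l : links) {
              bool bothInFocus = atoms[l.a].sti > focusBoundary &&
                                 atoms[l.b].sti > focusBoundary;
              double target = bothInFocus ? 1.0 : 0.0;
              l.strength += learningRate * (target - l.strength);
          }
      }

      // 4. Forget atoms whose LTI has fallen below the threshold
      //    (the ForgettingAgent's role). A real implementation would also
      //    remove or re-index links touching forgotten atoms; omitted here.
      void forget() {
          std::vector<Atom> kept;
          for (const auto& a : atoms)
              if (a.lti >= forgetThreshold) kept.push_back(a);
          atoms.swap(kept);
      }
  };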

Specification for future work

MindAgent specific stimulus

Stimulus (the quantity that MindAgents use to reward useful atoms) should be converted to short term importance (STI) after each MindAgent runs.

This change to when the conversion occurs is because the amount of STI to award per unit of stimulus depends on the amount of STI the MindAgent has available to distribute. This prevents a MindAgent that isn't particularly effective (in achieving system goals) from inflating the importance of the atoms it uses, since those atoms are not important to the overall OpenCog instance.

At the moment, the stimulus -> STI/LTI conversion is simply done by the ImportanceUpdatingAgent (along with rent collection and taxation when necessary). For the above stimulus -> STI/LTI scheme, it'd probably be better to implement the conversion as a separate process; one plausible form of the per-atom wage is sketched below.
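
For example (an assumed rule, not a settled design, and stiWage is an invented name), the STI paid per unit of stimulus could be made proportional to the funds the agent currently holds:

  // An agent holding agentStiFunds, which handed out totalStimulus units of
  // stimulus this cycle, pays each atom in proportion to the stimulus that
  // atom received. An ineffective agent (low funds) therefore cannot inflate
  // the importance of the atoms it uses.
  double stiWage(double agentStiFunds, double totalStimulus, double atomStimulus)
  {
      if (totalStimulus <= 0.0) return 0.0;
      return agentStiFunds * (atomStimulus / totalStimulus);
  }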

I'm thinking:

  • Add an AttentionValue and a stimulus map to the MindAgent class.
  • Split the ImportanceUpdatingAgent into a RentCollectionAgent and a WageAgent. The WageAgent needs to be called after every other MindAgent has completed its cycle (assuming the MindAgent has distributed any stimulus or has STI to pay its atoms with).
  • Move the atom stimulus map from the AtomSpace to a separate map per MindAgent.
  • In the MindAgent base class, provide a method rewardAtoms() which calls the WageAgent. This method would also signal to the WageAgent who the caller was, so that the correct stimulus map and amount of STI for the reward could be worked out (see the sketch after this list).
  • It's up to the MindAgent to figure out the appropriate time to reward the atoms it uses. Usually at the end of a run cycle.
  • This scheme would prevent us from batching/preprocessing the importance updates; that would be left as one of the tasks of the WageAgent. Alternatively, we could let the server itself collect the stimulus/importance data from each agent and pass it to the WageAgent.
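
A rough sketch of how these pieces might fit together is given below, under the assumption that wages are paid in proportion to stimulus. MindAgent, WageAgent, AttentionValue and rewardAtoms() come from the list above; everything else (Handle as a string, StimulusMap, AtomStiTable, payWages, stimulateAtom) is invented for illustration.

  #include <map>
  #include <string>

  struct AttentionValue {
      double sti = 0.0;   // STI funds the agent can pay out as wages
      double lti = 0.0;
  };

  using Handle = std::string;                     // placeholder for an atom handle
  using StimulusMap = std::map<Handle, double>;   // per-agent stimulus map
  using AtomStiTable = std::map<Handle, double>;  // stand-in for atom STI in the AtomSpace

  class MindAgent;

  // Converts a single agent's accumulated stimulus into STI wages, drawing on
  // that agent's own funds. Rent collection would live in a separate
  // RentCollectionAgent (not shown).
  class WageAgent {
  public:
      explicit WageAgent(AtomStiTable& atomSti) : atomSti_(atomSti) {}
      void payWages(MindAgent& agent);
  private:
      AtomStiTable& atomSti_;
  };

  // Base class carrying the per-agent AttentionValue and stimulus map
  // proposed above.
  class MindAgent {
  public:
      explicit MindAgent(WageAgent& wages) : wages_(wages) {}
      virtual ~MindAgent() = default;

      // Agents call this while running to note which atoms were useful.
      void stimulateAtom(const Handle& h, double amount) { stimulus_[h] += amount; }

      // Called by the agent itself, typically at the end of its run cycle;
      // passing *this tells the WageAgent who the caller is, so the correct
      // stimulus map and STI funds are used.
      void rewardAtoms() { wages_.payWages(*this); }

      StimulusMap& stimulus() { return stimulus_; }
      AttentionValue& attentionValue() { return av_; }

  private:
      AttentionValue av_;
      StimulusMap stimulus_;
      WageAgent& wages_;
  };

  void WageAgent::payWages(MindAgent& agent) {
      double total = 0.0;
      for (const auto& kv : agent.stimulus()) total += kv.second;
      if (total <= 0.0) return;
      double funds = agent.attentionValue().sti;
      for (const auto& [h, stim] : agent.stimulus())
          atomSti_[h] += funds * (stim / total);  // wage proportional to stimulus
      agent.attentionValue().sti = 0.0;           // the agent's funds are spent
      agent.stimulus().clear();
  }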

Launchpad bug report.

HebbianLinkMining

Importance diffusion

  • Make it possible for importance to spread beyond a single HebbianLink in a single diffusion of importance. E.g. try implementing a Lévy flight for importance spread (use a Lévy distribution to work out the proportion of importance that gets spread beyond a single link); a sketch follows. Note: depending on the density of HebbianLinks, this could be computationally impractical. Also, PLN may be used to do inference on HebbianLinks, which would make this redundant.
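
A minimal sketch of the heavy-tailed draw, assuming a Pareto sample as a stand-in for a true Lévy distribution and an invented levyHopCount() helper: most draws keep the importance within one link, but occasionally it is allowed to travel much further.

  #include <cmath>
  #include <random>

  // Sample how many HebbianLinks a packet of importance may travel in one
  // diffusion step. The hop count follows a Pareto (power-law) distribution
  // with tail index alpha, capped at maxHops to keep the spread tractable.
  int levyHopCount(std::mt19937& rng, double alpha = 1.5, int maxHops = 10)
  {
      std::uniform_real_distribution<double> uniform(0.0, 1.0);
      double u = 1.0 - uniform(rng);                        // u in (0, 1]
      double hops = std::ceil(std::pow(u, -1.0 / alpha));   // >= 1, heavy-tailed
      return hops > maxHops ? maxHops : static_cast<int>(hops);
  }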

Rent updates

  • Gradually update rent when AtomSpace funds go outside their homeostatic bounds. Use the time since the last rent change to work out the weighting for combining the new value with the old; a sketch follows.
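
One assumed form of that time-weighted blend (blendedRent and halfLife are invented names, and the formula is only a suggestion): the longer it has been since rent last changed, the more the newly computed rent dominates.

  #include <algorithm>

  // Blend the newly computed rent with the old rent. With
  // secondsSinceLastChange = 0 the rent barely moves; as it grows the weight
  // approaches 1 and the new value takes over. halfLife is the time at which
  // old and new are weighted equally.
  double blendedRent(double oldRent, double targetRent,
                     double secondsSinceLastChange, double halfLife = 60.0)
  {
      double w = secondsSinceLastChange / (secondsSinceLastChange + halfLife);
      w = std::clamp(w, 0.0, 1.0);   // guard against negative time values
      return w * targetRent + (1.0 - w) * oldRent;
  }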

Launchpad bug report.