Hands On with Attention Allocation

Theory

So, what is Attention Allocation?


Attention allocation within OpenCog weights pieces of knowledge relative to one another, based on what has been important to the system in the past and what is currently important. Attention allocation has several purposes:

  1. To guide the process of working out what knowledge should be stored in memory, what should be stored locally on disk, and what can be stored in a distributed fashion on other machines.
  2. To guide the forgetting process, i.e. deleting knowledge that is deemed no longer useful (or that has been integrated into the system in other ways, e.g. raw perceptual data).
  3. To guide reasoning carried out by other modules such as PLN and MOSES. During the inference process in PLN, for example, the combinatorial explosion of potential inference paths becomes a bane to effective and efficient reasoning. By providing an ordering of which paths should be used first, and potentially also a cutoff (knowledge with too low an importance is ignored), inference should become more tractable (see the sketch after this list).
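
The ordering-plus-cutoff idea can be pictured with a minimal sketch in Python. This is purely illustrative and not the actual PLN or ECAN code; the Atom class, the sti field and the AF_BOUNDARY threshold are names assumed for this example.

  # Illustrative sketch: order candidate premises by short-term importance
  # (STI) and discard anything below an assumed attentional-focus cutoff.
  from dataclasses import dataclass

  @dataclass
  class Atom:                 # hypothetical stand-in for an AtomSpace Atom
      name: str
      sti: float              # short-term importance

  AF_BOUNDARY = 10.0          # assumed attentional-focus threshold

  def rank_candidates(atoms):
      """Return candidates above the cutoff, most important first."""
      in_focus = [a for a in atoms if a.sti >= AF_BOUNDARY]
      return sorted(in_focus, key=lambda a: a.sti, reverse=True)

  atoms = [Atom("cat", 55.0), Atom("mat", 12.0), Atom("quark", 1.5)]
  for a in rank_candidates(atoms):
      print(a.name, a.sti)    # "quark" is ignored: its STI is below the cutoff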


Key Terms

Attention

  • Attention: The process of focusing mental activity (Actions) on some particular subset of a Mind. Attention is the process of bringing content to consciousness. The aspect of an intelligent system’s dynamics focused on guiding which aspects of the system's memory & functionality get more computational resources at a certain point in time.
  • Attention Allocation: The cognitive process concerned with managing the parameters and relationships guiding what the system pays attention to, at what points in time. This is a term inclusive of Importance Updating and Hebbian Learning.
  • Attentional Currency: Short Term Importance and Long Term Importance values are implemented in terms of two different types of artificial money, STICurrency and LTICurrency. Theoretically these may be converted to one another.
  • Attentional Focus: The Atoms in an OpenCog Atomspace whose ShortTermImportance values lie above a critical threshold (the AttentionalFocus Boundary). The Attention Allocation subsystem treats these Atoms differently. Qualitatively, these Atoms constitute the system’s main focus of attention during a certain interval of time, i.e. its moving bubble of attention.
  • Attentional Memory: A system’s memory of what it’s useful to pay attention to, in what contexts. In PrimeAGI this is managed by the attention allocation subsystem.

Importance

  • Importance: A generic term for the Attention Values associated with Atoms. Most commonly these are STI (short term importance) and LTI (long term importance) values. Other importance values corresponding to various different time scales are also possible. In general an importance value reflects an estimate of the likelihood an Atom will be useful to the system over some particular future time-horizon. STI is generally relevant to processor time allocation, whereas LTI is generally relevant to memory allocation.
  • Importance Decay: The process of Atoms’ importance values (e.g. STI and LTI) decreasing over time, if the Atoms are not utilized. Importance decay rates may in general be context-dependent.
  • Importance Spreading: A synonym for Importance Updating, intended to highlight the similarity with “activation spreading” in neural and semantic networks.
  • Importance Updating: The CIM-Dynamic that periodically (frequently) updates the STI and LTI values of Atoms based on their recent activity and their relationships.
  • Long Term Importance (LTI): A value associated with each Atom, indicating roughly the expected utility to the system of keeping that Atom in RAM rather than saving it to disk or deleting it. It’s possible to have multiple LTI values pertaining to different time scales, but so far practical implementation and most theory has centered on the option of a single LTI value.
  • Short Term Importance (STI): A value associated with each Atom, indicating roughly the expected utility to the system of devoting processor time to that Atom in the near future. It’s possible to have multiple STI values pertaining to different time scales, but so far practical implementation and most theory has centered on the option of a single STI value (see the sketch after this list).
  • Very Long Term Importance (VLTI): A bit associated with Atoms, which determines whether, when an Atom is forgotten (removed from RAM), it is saved to disk (frozen) or simply deleted.
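
The importance values defined above can be pictured with a small sketch. The field names, decay rates and decay rule below are assumptions for illustration, not the actual OpenCog data structures or update equations.

  # Illustrative sketch of an attention-value record (STI, LTI, VLTI) and of
  # importance decay for an Atom that was not utilized during a cycle.
  from dataclasses import dataclass

  @dataclass
  class AttentionValue:
      sti: float          # short-term importance (processor-time allocation)
      lti: float          # long-term importance (memory allocation)
      vlti: bool = False  # very long term: save to disk rather than delete

  def decay(av, sti_rate=0.1, lti_rate=0.01):
      """Decay both importance values; rates could be context-dependent."""
      av.sti -= av.sti * sti_rate
      av.lti -= av.lti * lti_rate
      return av

  av = AttentionValue(sti=100.0, lti=50.0)
  for _ in range(5):          # five cycles without being used
      decay(av)
  print(round(av.sti, 2), round(av.lti, 2))   # both drift downward over time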

Learning

  • Hebbian Learning: An aspect of Attention Allocation, centered on creating and updating HebbianLinks, which represent the simultaneous importance of the Atoms joined by the HebbianLink (a minimal sketch follows this list).
  • Hebbian Links: Links recording information about the associative relationship (co-occurrence) between Atoms. These include symmetric and asymmetric HebbianLinks.
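
A minimal sketch of the Hebbian-learning idea follows. The dictionary representation, the update rate and the rule itself are assumptions chosen for illustration; they are not the actual HebbianLink update code.

  # Illustrative sketch: the strength of a symmetric HebbianLink between two
  # Atoms is nudged upward when both are in the attentional focus at the same
  # time, and nudged downward otherwise.
  hebbian = {}          # (atom_a, atom_b) -> strength in [0, 1]
  RATE = 0.1

  def update_hebbian(a, b, focus):
      key = tuple(sorted((a, b)))
      strength = hebbian.get(key, 0.0)
      target = 1.0 if (a in focus and b in focus) else 0.0
      hebbian[key] = strength + RATE * (target - strength)

  focus = {"cat", "mat"}
  for _ in range(10):
      update_hebbian("cat", "mat", focus)    # co-occur in focus: strengthens
      update_hebbian("cat", "quark", focus)  # do not co-occur: stays weak
  print(hebbian)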

Other

  • Competition for Attention: A competition deciding what perceptual information and local associations will receive more Attention. In humans and other animals, information carrying more affect has an advantage. In PrimeAGI, the competition for attention is specifically managed by a process called Economic Attention Allocation. See also Global Cognitive Process.
  • Economic Action Selection: Action Selection in PrimeAGI is carried out via an artificial-economics approach, closely interacting with the Economic Attention Allocation (ECAN) subsystem.
  • Economic Attention Allocation: ECAN is a general term for the way that attentional dynamics are carried out within PrimeAGI (the economic metaphor is illustrated in the sketch after this list).
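
The economic metaphor can be pictured with a minimal sketch: the system pays STI "wages" to Atoms that have just been useful and collects "rent" from Atoms occupying the attentional focus, so total attentional currency stays roughly conserved. All names, amounts and the threshold below are assumptions for illustration, not the actual ECAN implementation.

  # Illustrative sketch of attentional currency flowing between a central
  # fund and individual Atoms via wages and rent.
  sti = {"cat": 30.0, "mat": 15.0, "quark": 2.0}
  bank = 100.0                        # central fund of STICurrency
  WAGE, RENT, AF_BOUNDARY = 10.0, 5.0, 10.0

  def pay_wages(useful_atoms):
      """Reward Atoms that were just used by some cognitive process."""
      global bank
      for a in useful_atoms:
          sti[a] += WAGE
          bank -= WAGE

  def collect_rent():
      """Charge rent only to Atoms currently in the attentional focus."""
      global bank
      for a, value in sti.items():
          if value >= AF_BOUNDARY:
              sti[a] -= RENT
              bank += RENT

  pay_wages(["cat"])
  collect_rent()
  print(sti, bank)    # the sum of all STI plus the bank stays constant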

Media

A video tutorial on Economic Attention Allocation in Artificial Intelligence by Ben Goertzel (recorded at the Hong Kong Cogathon, December 2016).

Practice

Commands

start-ecan: Starts the ECAN system, assuming libattention has been loaded via the loadmodule command or specified in the preload list of the configuration file.

agents-active: Lists the currently running agents.

agents-stop <agent-id>: Stops an agent.

agents-start <agent-id>: Starts an agent.

list-ecan-params: Shows the list of ECAN parameters.

set-ecan-param <param-name> <param-value>: Sets a value for an ECAN parameter.

How to start the attention system

Inside the OpenCog shell, follow these steps:

  1. loadmodule PATH_TO/libattention.so
  2. start-ecan
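
Once ECAN is running, the other commands listed above can be used from the same shell to inspect and tune it. For example (the agent id, parameter name and value are placeholders; take them from the output of agents-active and list-ecan-params):

  agents-active
  list-ecan-params
  set-ecan-param <param-name> <param-value>
  agents-stop <agent-id>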

Agents

  • Importance Spreading Agents
    • Attentional Focus (AF) Importance Spreading Agent
    • Whole AtomSpace (WA) Importance Spreading Agent
  • Rent Collection Agent
    • Attentional Focus (AF) Rent Collection Agent
    • Whole AtomSpace (WA) Rent Collection Agent
  • Hebbian Link Creation Agent
  • Hebbian Link Updating Agent
  • Forgetting Agent
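
The following sketch illustrates two of the agents listed above: an importance spreading agent that diffuses STI from Atoms in the attentional focus to their neighbours along HebbianLinks, and a forgetting agent that removes Atoms whose LTI has fallen below a threshold, saving to disk only those whose VLTI bit is set. All names, values and update rules are assumptions for illustration, not the real agent code.

  # Illustrative sketch of an importance spreading agent and a forgetting agent.
  atoms = {
      "cat":   {"sti": 60.0, "lti": 40.0, "vlti": True},
      "mat":   {"sti":  4.0, "lti":  5.0, "vlti": False},
      "quark": {"sti":  1.0, "lti":  0.5, "vlti": False},
  }
  hebbian = {("cat", "mat"): 0.8}     # assumed HebbianLink strengths in [0, 1]
  AF_BOUNDARY, SPREAD_FRACTION, LTI_FORGET = 10.0, 0.2, 1.0

  def af_importance_spreading_agent():
      """Diffuse a fraction of STI from focus Atoms along HebbianLinks."""
      for (a, b), strength in hebbian.items():
          if atoms[a]["sti"] >= AF_BOUNDARY:
              amount = atoms[a]["sti"] * SPREAD_FRACTION * strength
              atoms[a]["sti"] -= amount
              atoms[b]["sti"] += amount

  def forgetting_agent():
      """Remove low-LTI Atoms; 'freeze' to disk those flagged with VLTI."""
      for name in [n for n, v in atoms.items() if v["lti"] < LTI_FORGET]:
          if atoms[name]["vlti"]:
              print("saving", name, "to disk")
          del atoms[name]

  af_importance_spreading_agent()     # "mat" gains STI from "cat"
  forgetting_agent()                  # "quark" is forgotten outright
  print(atoms)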

Walkthrough

Quiz

1. Attention allocation within OpenCog:

  • weights knowledge graphs against each other
  • weights pieces of knowledge relative to one another, based on what has been important to the system in the past and what is currently important
  • is closely tied to MOSES, maps salient Emotion Models to preference functions and ranks pieces of knowledge accordingly
  • is implemented for keeping track of the importance of atoms
  • is a technique to increase the economy of attention of humans towards the efforts of OpenCog



Info

Priority: High

Tutorial Creators: To Be Determined -- Misgana or Matt Ikle; can figure this out ;-) (see the discussion page for more info)

How to start diff process - identify key parameters etc…

Smokes example?

Should come after the visualizer tute?