Hopfield network emulator

Introduction

The Hopfield net is a simple, idealized model of attractor neural net dynamics. It is easily amenable to mathematical analysis, but is not in itself suitable for either detailed neural modeling or practical computational intelligence.

Hopfield nets can be used to store memories on a set of nodes connected by weighted links. These links are updated such that the memories are attractors of the network: if you present the network with a pattern it doesn't know, it will update and converge to the nearest memorized pattern.

Please see Ben Goertzel's Continuous Learning in Sparsely Connected Hopfield Nets for a more thorough description.

A Hopfield network can store approximately 0.14N patterns (N = number of nodes) before saturation. Trying to store more than this results in the network not converging to clearly defined attractors, as the network creates new attractors that are combinations of the patterns being stored.
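
For comparison, here is a minimal sketch of a classical Hopfield network outside OpenCog, using the standard Hebbian weight rule and asynchronous updates; the structure and names are illustrative and are not taken from the emulator's code.

#include <vector>

// Minimal classical Hopfield network: Hebbian storage and asynchronous recall.
struct HopfieldNet {
    int n;
    std::vector<std::vector<double> > w;    // symmetric weight matrix, w[i][i] = 0

    explicit HopfieldNet(int nodes)
        : n(nodes), w(nodes, std::vector<double>(nodes, 0.0)) {}

    // Standard Hebbian rule: w_ij += x_i * x_j / n for each stored pattern,
    // with bits x_i in {-1, +1}.
    void store(const std::vector<int>& x) {
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                if (i != j) w[i][j] += double(x[i]) * x[j] / n;
    }

    // Asynchronous recall: repeatedly set each unit to the sign of its input
    // field until the state stops changing, i.e. an attractor is reached.
    std::vector<int> recall(std::vector<int> s, int maxSweeps) const {
        for (int sweep = 0; sweep < maxSweeps; sweep++) {
            bool changed = false;
            for (int i = 0; i < n; i++) {
                double field = 0.0;
                for (int j = 0; j < n; j++) field += w[i][j] * s[j];
                int next = (field >= 0) ? 1 : -1;
                if (next != s[i]) { s[i] = next; changed = true; }
            }
            if (!changed) break;
        }
        return s;
    }
};

Storing noticeably more than 0.14N patterns in such a network makes recall degrade, which is the saturation behaviour described above.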

The aim of creating this example in OpenCog is to test Attention allocation dynamics: Hopfield network nodes are represented by ConceptNodes, the links are represented by HebbianLinks, and the Short term importance (STI) of each node indicates its activation. The nodes whose STI is greater than the Attentional focus boundary make up the pattern that the network currently displays.
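
As a rough illustration of that readout (plain C++ with made-up names rather than actual AtomSpace calls), a node counts as part of the displayed pattern when its STI is above the attentional focus boundary:

#include <vector>

// Read the currently displayed pattern off the grid nodes: a node counts as
// "active" (bit = 1) when its short-term importance is above the attentional
// focus boundary.  Plain-C++ illustration, not the emulator's AtomSpace code.
std::vector<int> readPattern(const std::vector<int>& nodeSTI, int focusBoundary) {
    std::vector<int> bits(nodeSTI.size());
    for (size_t i = 0; i < nodeSTI.size(); i++)
        bits[i] = (nodeSTI[i] > focusBoundary) ? 1 : 0;
    return bits;
}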

Of particular interest is the possibility of having continuous learning. The "palimpsest scheme" of continuous learning (Parisi, G. (1986). "A memory that forgets" J. Phys. A 19, L617-L620) allows about 0.05N rolling patterns to be stored.

Through the application of the ForgettingAgent to the HebbianLinks connecting the nodes of the pattern, and the addition of new random links, it should be possible to forget the older imprinted patterns and replace them with new ones.
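
A rough sketch of that forget-and-replace idea using plain data structures follows; the names and the LTI-based selection here are illustrative assumptions, not the ForgettingAgent's actual interface.

#include <algorithm>
#include <cstdlib>
#include <vector>

// Illustrative only: drop the links with the lowest long-term importance
// (LTI), then top the network back up with new random links, so that old
// patterns fade while capacity for new ones is retained.
struct RandomLink { int src, dst; double lti; };

void forgetAndReplenish(std::vector<RandomLink>& links, size_t desired,
                        size_t forgetCount, int nNodes) {
    std::sort(links.begin(), links.end(),
              [](const RandomLink& a, const RandomLink& b) { return a.lti < b.lti; });
    links.erase(links.begin(), links.begin() + std::min(forgetCount, links.size()));
    while (links.size() < desired) {
        int s = std::rand() % nNodes;
        int t = std::rand() % (nNodes - 1);
        if (t >= s) t++;                       // avoid self-links, as in the code below
        RandomLink l = { s, t, 0.0 };
        links.push_back(l);                    // new link starts with zero LTI
    }
}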

Process

The Hopfield network emulator (HNE) makes use of the following process.

Initial setup

On start-up, the HNE creates a number of ConceptNodes on which the patterns to be learned will be presented:

// Create nodes
for (int i=0; i < this->width; i++) {
    for (int j=0; j < this->height; j++) {
        nodeName = "Hopfield_";
        nodeName += to_string(i) + "_" + to_string(j);
        Handle h = atomSpace->addNode(CONCEPT_NODE, nodeName.c_str());
        // We don't want the forgetting process to remove
        // the atoms perceiving the patterns
        atomSpace->setVLTI(h,AttentionValue::NONDISPOSABLE);
        hGrid.push_back(h);
    }
}

A number of HebbianLinks are also randomly distributed to connect these nodes (the number is specified either directly or by the density parameter). This is not the most efficient way to do it, but it suffices:

    while (amount > 0) {
        int source, target;
        HandleSeq outgoing;
        HandleEntry* he;

        source = rng->randint(hGrid.size());
        target = rng->randint(hGrid.size()-1);
        if (target >= source) target++;

        outgoing.push_back(hGrid[source]);
        outgoing.push_back(hGrid[target]);
        he = atomSpace->getAtomTable().getHandleSet(outgoing, (Type*) NULL,
            (bool*) NULL, outgoing.size(), SYMMETRIC_HEBBIAN_LINK, false);
        if (he) {
            MAIN_LOGGER.log(Util::Logger::FINE,
                "Trying to add %d -> %d, but already exists %s",
                source, target, TLB::getAtom(he->handle)->toString().c_str());
            delete he;
        } else {
            atomSpace->addLink(SYMMETRIC_HEBBIAN_LINK, outgoing);
            amount--;
        }
    }

By default, symmetric Hebbian links are added.

Imprinting a pattern

A "pattern" in this context is a bitmap. Patterns are generally imprinted several times each (since HebbianLink weights update gradually).

The process is as follows (a rough sketch follows the list):

  1. All nodes have their AttentionValue reset.
  2. The pattern is taken and the corresponding nodes in OpenCog are stimulated.
  3. The ImportanceUpdatingAgent is run. This converts the stimulus into STI and LTI through the payment of wages. Those atoms in the Attentional focus should correspond to active bits in the pattern.
  4. If the number of HebbianLinks in the network is less than the desired amount (due to forgetting), new random HebbianLinks are added to make up the difference.
  5. The HebbianLearningAgent is run to update the weights to reflect the correlations between nodes in the pattern.
  6. Importance is spread between nodes by the ImportanceSpreadingAgent (this step isn't actually necessary).
  7. A proportion of nodes with the lowest LTI are forgotten.
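
The following rough, standalone sketch covers the core of one imprint cycle (steps 1-3 and 5), with plain arrays standing in for the AtomSpace and MindAgents; all names and the simple learning rule are illustrative assumptions, not the emulator's actual code.

#include <algorithm>
#include <vector>

// Plain stand-in for the grid of ConceptNodes and their HebbianLinks.
struct Net {
    std::vector<int> sti;                       // per-node short-term importance
    std::vector<std::vector<double> > weight;   // HebbianLink strengths
    int focusBoundary;
    double learningRate;

    explicit Net(int n)
        : sti(n, 0), weight(n, std::vector<double>(n, 0.0)),
          focusBoundary(0), learningRate(0.1) {}
};

void imprintCycle(Net& net, const std::vector<int>& pattern, int stimulus) {
    // 1. Reset attention values.
    std::fill(net.sti.begin(), net.sti.end(), 0);
    // 2-3. Stimulate the nodes of the active bits; stimulus is paid out as STI.
    for (size_t i = 0; i < pattern.size(); i++)
        if (pattern[i]) net.sti[i] += stimulus;
    // 5. Hebbian update: strengthen links between co-active nodes, weaken others.
    for (size_t i = 0; i < pattern.size(); i++) {
        for (size_t j = 0; j < pattern.size(); j++) {
            if (i == j) continue;
            bool bothActive = net.sti[i] > net.focusBoundary
                           && net.sti[j] > net.focusBoundary;
            double target = bothActive ? 1.0 : 0.0;
            net.weight[i][j] += net.learningRate * (target - net.weight[i][j]);
        }
    }
    // Steps 4, 6 and 7 (link top-up, importance spreading, forgetting) are omitted.
}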

Retrieving a pattern

The retrieval process is similar to imprinting (again, a rough sketch follows the list):

  1. A cue pattern is taken and the corresponding nodes in OpenCog are stimulated. This cue pattern may be exactly the same as one in memory or somewhat mutated.
  2. The ImportanceUpdatingAgent is run as above, except that link updating is turned off first with ImportanceUpdatingAgent::setUpdateLinksFlag(false). This is restored to true after the MindAgent is run.
  3. The ImportanceSpreadingAgent is run a number of times.
  4. Repeat from step 1 a number of times.
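
A matching standalone sketch of retrieval, reusing the Net struct from the imprinting sketch above; the names and the spreading rule are illustrative assumptions rather than the ImportanceSpreadingAgent's actual behaviour.

#include <algorithm>
#include <vector>

// Rough standalone sketch of retrieval: the cue is stimulated, the link
// weights are left untouched, and STI is spread along the existing links so
// that the nodes of the stored pattern rise above the focus boundary.
void retrieveCycle(Net& net, const std::vector<int>& cue, int stimulus, int spreadSteps) {
    std::fill(net.sti.begin(), net.sti.end(), 0);             // reset attention
    for (size_t i = 0; i < cue.size(); i++)
        if (cue[i]) net.sti[i] += stimulus;                   // steps 1-2: stimulate cue nodes
    for (int s = 0; s < spreadSteps; s++) {                   // step 3: spread importance
        std::vector<double> incoming(net.sti.size(), 0.0);
        for (size_t i = 0; i < net.sti.size(); i++)
            for (size_t j = 0; j < net.sti.size(); j++)
                incoming[j] += net.weight[i][j] * std::max(0, net.sti[i]);
        for (size_t j = 0; j < net.sti.size(); j++)
            net.sti[j] += static_cast<int>(incoming[j]);
    }
}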

Usage

For a detailed description of command line options see Hopfield network emulator usage.

Learning modes

Unlike traditional Hopfield networks, whose link weights are calculated directly from the patterns one wishes to store, the OpenCog version works iteratively. As such, the order in which patterns are presented and retrieved impacts the performance of the Hopfield network emulator.

Training one pattern after the other

In this scenario, patterns are imprinted for N cycles each, one after another. After each imprint cycle, a retrieval process is run to observe the progress of the network.
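
A tiny illustrative driver for this mode, reusing the imprintCycle and retrieveCycle sketches from earlier (again, not the emulator's actual code):

#include <vector>

// Sequential ("one after the other") training: each pattern gets a fixed
// number of imprint cycles, with a retrieval test after every cycle.
void trainSequential(Net& net, const std::vector<std::vector<int> >& patterns,
                     int cyclesPerPattern, int stimulus) {
    for (size_t p = 0; p < patterns.size(); p++) {
        for (int c = 0; c < cyclesPerPattern; c++) {
            imprintCycle(net, patterns[p], stimulus);
            retrieveCycle(net, patterns[p], stimulus, 1);     // observe progress
        }
    }
}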

Interleaved training

This trains and tests the patterns one after another, in the following manner:

Pattern 1: T T T T T T H H H H H H
Pattern 2: - - - T T T T T T H H H
Pattern 3: - - - - - - T T T T T T

Each column is an imprinting/testing cycle, and this example has an interleave of 3, i.e. the overlap between one pattern and the next. A "T" indicates that the pattern is imprinted AND tested, an "H" indicates that the pattern is only tested (a "-" means the pattern is ignored in that cycle). The upshot is that we can see how performance falls off once a pattern is no longer being imprinted by observing the results in the "H" positions.
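
A small sketch that generates this schedule, assuming the interleave value is the number of imprinting cycles by which consecutive patterns overlap and that each pattern is imprinted for a fixed number of cycles (parameter names are illustrative):

#include <string>
#include <vector>

// Builds the T/H/- schedule shown above.  Pattern k starts imprinting
// (imprintCycles - interleave) cycles after pattern k-1, so consecutive
// patterns overlap for `interleave` imprinting cycles; with an interleave
// of 0 this degenerates to the sequential mode.
std::vector<std::string> interleaveSchedule(int nPatterns, int interleave,
                                            int imprintCycles, int totalCycles) {
    std::vector<std::string> rows(nPatterns, std::string(totalCycles, '-'));
    for (int k = 0; k < nPatterns; k++) {
        int start = k * (imprintCycles - interleave);
        if (start < 0) start = 0;
        for (int c = start; c < totalCycles; c++)
            rows[k][c] = (c < start + imprintCycles) ? 'T' : 'H';
    }
    return rows;
}

For example, interleaveSchedule(3, 3, 6, 12) reproduces the three rows shown above.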

It's possible that interleaved training doesn't allow sufficient imprinting of each pattern, since HebbianLink weights are updated gradually and strengthen over repeated imprinting. In this case, the interleave mode of the Hopfield network emulator is still useful: it can act like the sequential mode when given an interleave value of 0, while still testing how performance falls off for patterns that are no longer being imprinted.

Palimpsest training

This method follows the training scheme of Storkey's 1998 paper "Palimpsest memories: a new high-capacity forgetful learning rule for Hopfield networks".

Here we repeat Storkey's definition of "Relative/absolute palimpsest storage":

Store m patterns in the network. For each pattern, starting with the most recent and moving back through time, check whether it is recalled with an error within a given tolerance. Stop when one pattern fails this test. The total number of patterns recalled within the tolerance is called the palimpsest storage for m patterns. For absolute palimpsest storage, this tolerance level is zero. For relative palimpsest storage, the tolerance level is small, but non-zero.

This is essentially what the palimpsest training scheme in the Hopfield network emulator does. You specify palimpsest learning with the "-P" or "--palimpsest" option, followed by the tolerance * 100. So for a tolerance of 5%, such that 95% of bits in a pattern need to be correctly recalled for the memory to count towards palimpsest storage, you'd use:

./hopfield -p 50 --palimpsest 5

Note that the -p at the start is lowercase and represents the number of patterns to try to store (i.e. the m in Storkey's definition). The --palimpsest option is what sets the training scheme. The output then looks like:

palimpsest tolerance 5
cycle 0 ,	   1
cycle 1 ,	   1
cycle 2 ,	   0
...
cycle 50 ,        2

This displays the cycle (or pattern #, or Storkey's "memory loading") and the palimpsest storage, i.e. the number of previous sequential patterns that can be recalled within the 5% tolerance. In this case it is very low because by default the Hopfield network emulator only uses a small network (and the amount of storage depends on the number of nodes in the network).
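
A short sketch of how palimpsest storage can be computed from per-pattern recall errors, following Storkey's definition quoted above (the function and parameter names are illustrative):

#include <vector>

// Palimpsest storage as defined in the quote above: walk back from the most
// recently stored pattern and count how many in a row are recalled within
// the error tolerance, stopping at the first failure.  recallErrorPercent
// holds the recall error (% of wrong bits) per pattern, oldest first.
int palimpsestStorage(const std::vector<double>& recallErrorPercent, double tolerancePercent) {
    int count = 0;
    for (std::vector<double>::const_reverse_iterator it = recallErrorPercent.rbegin();
         it != recallErrorPercent.rend(); ++it) {
        if (*it > tolerancePercent) break;     // first failure ends the count
        count++;
    }
    return count;
}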

Other things to note from Storkey are:

  • the cue pattern for retrieval has only 1 bit error in the entire pattern - this can be set up by using the option "-e 1".
  • the stored patterns have a mean of 0. Storkey doesn't mention the density of generated patterns (the number of +1/-1 values relative to zeroes), although the mention of "random independent" patterns indicates that -1, 0, and 1 were equally possible values for bits. This means there is a 1/3 chance of any bit in a pattern being set to 1, which would be equivalent to specifying the Hopfield network emulator option "-g 0.33". The way patterns are imprinted in the Hopfield network emulator is such that bits can be 1 or 0 (but not -1). The nodes equal to 0 are instead given a negative STI value, such that their summed negative STI balances the summed positive STI of all the active nodes (see the sketch below).
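
A sketch of that STI assignment (names and values are illustrative; integer arithmetic makes the balance approximate):

#include <vector>

// Active bits get a positive STI; the inactive (0) bits share a negative STI
// whose total roughly balances the total positive STI of the active bits.
std::vector<int> patternToSTI(const std::vector<int>& pattern, int activeSTI) {
    int nActive = 0;
    for (size_t i = 0; i < pattern.size(); i++) nActive += pattern[i];
    int nInactive = static_cast<int>(pattern.size()) - nActive;
    int inactiveSTI = (nInactive > 0) ? -(activeSTI * nActive) / nInactive : 0;
    std::vector<int> sti(pattern.size());
    for (size_t i = 0; i < pattern.size(); i++)
        sti[i] = pattern[i] ? activeSTI : inactiveSTI;
    return sti;
}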

Example Usage

Performance

Hopfield Network Experiment Results

Future ideas

Ideas to explore, both to improve performance and potentially to deepen understanding:

  • Construct a ConceptNode to represent each pattern imprinted and then connect it with HebbianLinks to the nodes activated by the pattern (don't connect the ConceptNodes to each other).
  • Use HebbianLink#Symmetric Hebbian links connecting more than two nodes. Instead of using multiple links to represent a pattern, represent it with one. The difficulty here is that nodes that are "active" in multiple patterns will cause importance to leak between them during the importance spreading phase. To counteract this, several links, each representing part of the pattern, could be used, such that the outgoing sets of the SymmetricLinks for each pattern don't intersect.