OpenPsi

OpenPsi is an implementation, within OpenCog, of significant aspects of the Psi model of human motivation and emotion, inspired largely by Joscha Bach's MicroPsi AI system. (However, MicroPsi also contains aspects not included in OpenCog, for example a specific neural-net-like knowledge representation.)

An overview of Psi, oriented toward software implementation and tuning in the robotics context, is the Emotion Modelling page on the Hanson Robotics wiki. That page also contains many useful URLs linking into the relevant literature.

The current implementation can be found on GitHub, here: opencog/openpsi. Very minimalist examples can be found here: examples/openpsi; these examples only illustrate the core "decider" subsystem, rather than the entire integrated robot.

The previous version of OpenPsi, as it worked in the 2007-2014 time-frame, is documented on the OpenPsi (2010) page. The source code that implemented that older version was removed from the GitHub repositories in 2015.

Current status, notes

The current OpenPsi system really consists of two parts: a rule-base, and a mechanism for selecting rules. The rules are of the generic form

context + action -> goal

The "decider" picks among these rules to select a handful to be performed. The selection/decision process is probabilistic, and is driven by demands, modulated by modulators.

For example, a fork-lift robot might have the strategic goals of keeping the oldest boxes on the north end of the warehouse, while also minimizing distance traveled. Time-varying "demands" might be the arrival of shipments at the loading dock, and new dispatch orders. Tactical actions might include temporary re-stacking of boxes to get them out of the way. All of this involves modulation, demands, actions and goals, but has nothing to do with human emotion. It's a generic rule-selection system that folds together strategy, planning and demands, which together modulate action.
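To make the rule/decider split concrete, here is a minimal, self-contained sketch in Python. It is not the OpenPsi API (the real implementation is Scheme/Atomese code in opencog/openpsi); every name, the warehouse state and the demand numbers below are invented for illustration. It only shows the generic shape of the mechanism: context + action -> goal rules, weighted by time-varying demands, with a modulator-like "temperature" controlling how random the choice is, using the fork-lift scenario above.

# Minimal, self-contained sketch of the "decider" idea described above.
# This is NOT the OpenPsi API -- the real code is Scheme/Atomese in
# opencog/openpsi -- and all names, state and numbers here are hypothetical.

import random
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Rule:
    context: Callable[[dict], bool]   # is the rule applicable in this situation?
    action: Callable[[], None]        # what to do if the rule is selected
    goal: str                         # which demand/goal the action serves
    weight: float = 1.0               # static strength of the rule

def select_rules(rules: List[Rule], situation: dict, demands: Dict[str, float],
                 temperature: float = 1.0, how_many: int = 1) -> List[Rule]:
    """Probabilistically pick a handful of applicable rules.  Each applicable
    rule is weighted by the urgency of the demand its goal serves; the
    'temperature' stands in for a modulator that makes the choice more or
    less random."""
    applicable = [r for r in rules if r.context(situation)]
    weights = [(r.weight * demands.get(r.goal, 0.0)) ** (1.0 / temperature)
               for r in applicable]
    if not applicable or sum(weights) == 0.0:
        return []
    # Sampling with replacement; good enough for a sketch.
    return random.choices(applicable, weights=weights, k=how_many)

# The fork-lift warehouse scenario, purely as illustration.
rules = [
    Rule(context=lambda s: s["boxes_at_dock"] > 0,
         action=lambda: print("move newly arrived boxes off the dock"),
         goal="clear-loading-dock"),
    Rule(context=lambda s: s["oldest_box_location"] != "north",
         action=lambda: print("restack so the oldest boxes end up on the north end"),
         goal="keep-oldest-boxes-north"),
    Rule(context=lambda s: s["pending_dispatch_orders"] > 0,
         action=lambda: print("fetch boxes for the pending dispatch order"),
         goal="fill-dispatch-orders"),
]

situation = {"boxes_at_dock": 3,
             "oldest_box_location": "south",
             "pending_dispatch_orders": 1}

# Time-varying demands: a shipment just arrived, so clearing the dock is urgent.
demands = {"clear-loading-dock": 0.9,
           "keep-oldest-boxes-north": 0.4,
           "fill-dispatch-orders": 0.6}

for rule in select_rules(rules, situation, demands, temperature=0.5, how_many=2):
    rule.action()

In the real system the rules, demands and goals are represented as Atoms in the AtomSpace; the sketch above only captures the selection logic.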

The current code base is a Frankensteinian mashup of two unrelated ideas: a generic rule-processing system, and a simplistic model of human emotions. These two need to be divorced from one another.

The generic OpenPsi system attempts to achieve certain strategic goals through the selection of a tactical set of actions to be taken, in response to time-varying and situation-varying demands. The tactical actions are carried out by selecting sets of rules, which are then processed by the rule engine.

The code, as implemented, does not quite do the above. Worse, it is shot through with labels suggesting human emotions, such as "happiness" and "sadness", which have nothing at all to do with the selection of tactical actions to achieve strategic goals (other than that humans also try to make themselves "happy" by accumulating wealth, power, fame, drugs or whatever). A fork-lift, for example, also needs to perform tactical actions to achieve strategic goals in the face of time-varying demands; but the fork-lift isn't going to be "happy" or "sad" while it does this, although the factory owner might be.

Realistically, the current deployment of OpenPsi within OpenCog is to solve these kinds of strategic-goal problems, and NOT to model human emotional state. We really need to fix the code to divorce these two concepts from one another.

The central location defining the current robot model is /opencog/eva, and the code that drives it can be found in /opencog/openpsi/openpsi.scm#L15-L23. OpenPsi (a.k.a. the decider) is not closely coupled with the emotion-modeling module; at present the modulators do not influence how action selection is made.
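If the decider and the modulators were coupled, the coupling would presumably look something like the following. This is a hypothetical sketch only (again Python, not the actual code, and the numbers are invented): Psi-style modulators such as the selection threshold and arousal would bias which of the demand-weighted candidate actions gets picked.

# Hypothetical sketch only: what it might look like if the modulators did
# influence action selection.  The current code does not do this; all names
# and numbers are invented for illustration.

import random
from typing import Dict, List, Tuple

def pick_action(candidates: List[Tuple[str, float]],
                modulators: Dict[str, float]) -> str:
    """candidates are (action-name, demand-weighted score) pairs.  A Psi-style
    'selection threshold' prunes weak candidates (making behaviour more
    persistent), while 'arousal' sharpens or flattens the remaining choice
    (making it more or less impulsive)."""
    threshold = modulators.get("selection-threshold", 0.0)
    arousal = modulators.get("arousal", 0.5)
    viable = [(name, score) for name, score in candidates if score >= threshold]
    if not viable:
        return "do-nothing"
    # Higher arousal concentrates the choice on the top-scoring action.
    names, scores = zip(*viable)
    weights = [s ** (1.0 + 4.0 * arousal) for s in scores]
    return random.choices(names, weights=weights, k=1)[0]

print(pick_action([("clear-dock", 0.8), ("restack", 0.3), ("dispatch", 0.6)],
                  {"selection-threshold": 0.4, "arousal": 0.7}))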

Other misc notes

This section to be cleaned up later:

We need to explain how these pieces all fit together ...