OpenPsi (2010)

This page documents OpenPsi as it worked in the 2007-2014 timeframe. This documentation is obsolete, and the code that it describes was removed from the GitHub repo in 2015. See OpenPsi for a description of the current work.

OpenPsi is an implementation within OpenCog of significant aspects of the Psi model of human motivation and emotion, inspired largely by Joscha Bach's MicroPsi AI system. (However, MicroPsi also contains other aspects not included in OpenCog, for example a specific neural-net-like knowledge representation.)


Overview of MicroPsi (in the context of OpenPsi)

Demands

Demands correspond to physiological needs, such as:

  • Water
  • Energy

or more abstract needs, such as:

  • Certainty (i.e. how confident the agent is in its knowledge)
  • Competence (how well the agent can solve the problems it encounters)
  • Affiliation

More demands can also be defined.

Goals

The primary goal is to maintain these demands within certain ranges.

Let's say the min and max of each demand are 0 and 1. That way a demand can be represented as a GroundedPredicateNode. As an example, here is a goal which is to maintain the GroundedPredicateNode EnergyDemands within [0.6, 0.9]:

EvaluationLink
    WithinRange
    ListLink
        EnergyDemands
        0.6
        0.9

(One may define WithinRange as a fuzzy predicate for more flexible reasoning).
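For concreteness, here is a minimal sketch of what such a fuzzy predicate might look like in plain Scheme. This is only an illustration under simple assumptions (satisfaction 1.0 inside the range, linear drop-off outside), and the function name fuzzy-within-range is hypothetical; it is not the actual implementation used in the embodiment code.

 ; Illustrative sketch of a fuzzy "within range" predicate.
 ; Returns 1.0 when value lies inside [min-val, max-val] and decays
 ; linearly towards 0.0 as the value moves away from the range.
 (define (fuzzy-within-range value min-val max-val)
   (let ((dist (cond ((< value min-val) (- min-val value))
                     ((> value max-val) (- value max-val))
                     (else 0.0))))
     (max 0.0 (- 1.0 (* 2.0 dist)))))

 ; Example: an energy level of 0.5 against the target range [0.6, 0.9]
 ; (fuzzy-within-range 0.5 0.6 0.9)  =>  0.8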

Personality

Demands must be kept within certain ranges; these ranges define (at least partially) the personality of the agent. For instance, a very zen person can feel fine without eating for days. I think the dynamics of the modulators (see below) should be part of the personality as well.

Regulation of Demands

When these Demands go out of range for a certain period of time, they become Urges. Once Urges appear, the agent must use its intelligence to take care of them (see below). But before Demands actually become Urges, it might be possible to have some automatic process try to regulate them; this would be taken care of by a DemandRegulator MindAgent. Obviously in a biological body there is a lot of automatic regulation going on, but it is not clear that this would be useful for a virtual agent or even a simplistic robot.

Action

Decision

In OpenPsi, action decision is made based on Cognitive Schematics

Context & Procedure → Goal <p>

where <p> denotes the truth value attached to the schematic. Ideally, PLN could be used directly to find or infer these cognitive schematics.

Interruption

It is good for the process to be able to re-evaluate the fitness of its action during execution, and even to momentarily interrupt a procedure's execution.

Emotion

In MicroPsi, an emotion is a state of (all?) the modulators in the system. A modulator is a parameter that modulates how a cognitive component works. For instance:

  • Resolution level affects perception; a low resolution level causes the system to spend less effort processing perceptions
  • Certainty affects, for instance, inference; if that parameter is low, then less emphasis is placed on certainty to guide the inference

That is just a small example; there can be many modulators.

A system where these modulators are well regulated will be more efficient than a system with badly regulated or fixed modulators.

Implementation of OpenPsi

Story of OpenPsi

RuleEngine was the core of the controlling system that preceded OpenPsi, and is now obsolete. This section is a short history of how OpenPsi gradually took over from that system. To understand how it worked, see the Architecture section below.

From RuleEngine to OpenPsi

In Embodiment, RuleEngine refers to the main engine that updates the knowledge of the agent (writing atoms into the AtomSpace) and takes actions. Embodiment's RuleEngine is not a MindAgent; it lives inside the Embodiment CogServer (the OAC, or Operational Avatar Controller). The code is located under

 opencog/embodiment/Control/OperationalAvatarController

RuleEngine does the following:

  1. decides which action to take (RuleEngine::processNextAction())
  2. updates the feelings (RuleEngine::processRules(), called from RuleEngine::processNextAction()!!!)
  3. updates the relations (RuleEngine::processRules(), called from RuleEngine::processNextAction()!!!)

The OAC also has MindAgents, like the ImportanceDecayAgent for managing attention and ActionSelectionAgent that wraps RuleEngine::processNextAction() (via OAC::schemaSelection()).

Currently "feelings" in the embodiment code is a mixture of Demands and Modulators, so the code and the terminology must be fixed accordingly. I suppose that the modes (PLAYING, LEARNING, SCAVENGER_HUNT) should be seen as modulators.

So currently the heart of the Embodiment code is not very elegant; what we need to do is clearly separate Demands, Modulators, Relations and Actions.

So the RuleEngine should be split into 4 MindAgents:

  1. DemandUpdater
  2. ModulatorUpdater
  3. RelationUpdater
  4. ActionSelection

Let's recapitulate their functions:

  • DemandUpdater updates the demands (how hungry the agent is)
  • ModulatorUpdater controls the dynamics of the agent's emotions (perhaps how angry the agent is; note that "angry" should correspond either to a modulator or, more likely, to a configuration of modulators, but this is just an example).
  • RelationUpdater updates relations (if the pet licks X it becomes familiar with X; a Familiar(self, X) relation is then created)
  • ActionSelection, ideally using PLN, looks for or infers cognitive schematics and chooses one according to their weights (calculating the weights can depend on the Certainty modulator).

Both DemandUpdater and ModulatorUpdater are fairly mechanical: the set of rules does not change, or changes only rarely, so they can just rely on a loop that evaluates each rule and performs the updates. However, RelationUpdater would rely on the ForwardChainer, while ActionSelection would rely on the BackwardChainer. These last tasks are difficult in nature, and the knowledge involved keeps changing (with more knowledge accumulating over time), so they will have to be coupled with Attention Allocation to work successfully.
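To make the mechanical loop concrete, here is a toy sketch in Scheme; the updater names follow the xxxDemandUpdater convention described later on this page, but the stub bodies and the looping code are purely illustrative assumptions.

 ; Toy sketch of the "mechanical" update loop. Each updater is a
 ; zero-argument Scheme function (following the xxxDemandUpdater naming
 ; convention); the stub bodies here are placeholders.
 (define (EnergyDemandUpdater)    0.7)   ; placeholder level
 (define (CertaintyDemandUpdater) 0.4)   ; placeholder level

 (define demand-updaters
   (list (cons "Energy"    EnergyDemandUpdater)
         (cons "Certainty" CertaintyDemandUpdater)))

 ; Evaluate every updater; in the real system the new level would be
 ; written back into the AtomSpace.
 (define (update-all-demands)
   (for-each (lambda (entry) ((cdr entry))) demand-updaters))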

Uploading Agent's Knowledge on Cognitive Schematic

In order to use PLN to find cognitive schematics, the SchemaRules (currently used by the ActionSelectionAgent, i.e. the RuleEngine) must be translated into the AtomSpace.

That knowledge (relating context, called precondition in the code, and action) is actually already uploaded into the AtomSpace; see RuleEngine::addRuleToAtomSpace(). But it is not in the right format; the notion of Goal is completely absent. What gets uploaded is something like

Context => Procedure

and what we want is

context & Procedure => Goal

Architecture

OpenPsi consists of five mind agents (written in C++) and a bunch of scripts (Scheme and Combo scripts). The C++ code of the mind agents listed below can be found at

 opencog/embodiment/Control/OperationalAvatarController/
  1. PsiModulatorUpdaterAgent
  2. PsiDemandUpdaterAgent
  3. PsiFeelingUpdaterAgent
  4. PsiActionSelectionAgent
  5. PsiRelationUpdaterAgent

These agents themselves are quite simple, except for PsiActionSelectionAgent. Their main job is just invoking the corresponding scripts at suitable times.

PsiModulatorUpdaterAgent

Modulators are a group of parameters controlling the dynamics of the agent.

The format of a modulator in the AtomSpace is:

 AtTimeLink
     TimeNode "timestamp"
     SimilarityLink (stv 1.0 1.0)
         NumberNode: "modulator_value"
         ExecutionOutputLink
             GroundedSchemaNode: xxxModulatorUpdater
             ListLink (empty)

There are four modulators in OpenPsi (also defined by PSI_MODULATORS in the config file):

  • Activation
  • Resolution
  • SecuringThreshold
  • SelectionThreshold

For each modulator, there is a corresponding updating Scheme function named xxxModulatorUpdater (where xxx is the modulator name), in

 opencog/embodiment/modulator_updaters.scm

These updaters are invoked during cognitive cycles (not every cycle; the frequency of mind agent execution is controlled by PSI_MODULATOR_UPDATER_CYCLE_PERIOD in the config file).
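As an illustration only, a modulator updater might look roughly like the sketch below. The real updaters in modulator_updaters.scm take no arguments (note the empty ListLink in the format above) and read the relevant state themselves; here the current value is passed explicitly, and the decay-towards-baseline dynamics is an assumption.

 ; Hypothetical sketch following the xxxModulatorUpdater naming convention.
 ; Drifts the Activation modulator 10% of the way back towards a baseline
 ; on each update; the actual rules in modulator_updaters.scm differ.
 (define activation-baseline 0.5)

 (define (ActivationModulatorUpdater current-value)
   (+ current-value (* 0.1 (- activation-baseline current-value))))

 ; Example: (ActivationModulatorUpdater 0.9)  =>  0.86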

PsiDemandUpdaterAgent

This mind agent is responsible for updating the demands.

The complete list of demands in OpenPsi is as follows (also defined by PSI_DEMANDS in the config file):

  • Energy
  • Water
  • Integrity
  • Affiliation
  • Certainty
  • Competence

It should be noted that some of the demands are not enabled, such as Water and Affiliation, since our virtual world is not ready to present these demands.

Here we should make a clear distinction between demand levels and demand satisfactions. For each demand, there is a target range [min_level, max_level]. If the current demand level falls within the target range, we say the demand is satisfied (and the demand satisfaction is 1.0); otherwise the demand is not satisfied (and the demand satisfaction is below 1.0). A fuzzy logic function (a Scheme function named fuzzy_within) calculates demand satisfactions based on the current demand value and the target range.

For each demand, there is a corresponding demand updater (a Scheme function named xxxDemandUpdater, where xxx is the demand name) in

 opencog/embodiment/demand_updaters.scm

Like modulator updaters, demand updaters use a similar format:

 AtTimeLink
     TimeNode "timestamp"
     SimilarityLink (stv 1.0 1.0)
         NumberNode: "demand_value"
         ExecutionOutputLink
             GroundedSchemaNode: xxxDemandUpdater
             ListLink (empty)

However, the demand updaters above only update the demand levels, not the demand satisfactions! The fuzzy logic function fuzzy_within handles demand satisfactions.

We use the structure below in the AtomSpace to relate the demand goal, current level, target range (suitable minimum and maximum levels) and fuzzy logic function (fuzzy_within).

 SimultaneousEquivalenceLink
     EvaluationLink
         (SimpleTruthValue indicates how well the demand is satisfied)
        (ShortTermImportance indicates the urgency of the demand)
         PredicateNode: "demand_name_goal" 
         ListLink (empty)
     EvaluationLink
         GroundedPredicateNode: "fuzzy_within"
         ListLink
             NumberNode: "min_acceptable_value"
             NumberNode: "max_acceptable_value"
             ExecutionOutputLink
                 GroundedSchemaNode: "demand_schema_name"
                 ListLink (empty)

The implementation of the fuzzy_within function is in the file

 opencog/embodiment/psi_util.scm

PsiDemandUpdaterAgent invokes the corresponding Scheme functions to update demand levels and demand satisfactions during cognitive cycles. The frequency of this mind agent's execution is controlled by PSI_DEMAND_UPDATER_CYCLE_PERIOD in the config file.

PsiFeelingUpdaterAgent

Feelings are stored in the AtomSpace in the following format:

 EvaluationLink
     (SimpleTruthValue indicates the intensity of the feeling)
     PredicateNode "feelingName"
     ListLink
         petHandle

For the moment we only implement five emotions (also defined by PSI_FEELINGS in the config file):

  • fear
  • anger
  • happiness
  • excitement
  • sadness

For each feeling, there is a corresponding feeling updater (a Scheme function named xxxFeelingUpdater, where xxx is the feeling name) in

 opencog/embodiment/feeling_updaters.scm

As the table below illustrates, feelings are recognized as different regions in the space spanned by the modulators and pleasure.

 Emotion       ||    Activation    Resolution    SecuringThreshold    SelectionThreshold    Pleasure
 ======================================================================================================
 anger         ||        H             L                H                      L                L
 ------------------------------------------------------------------------------------------------------ 
 fear          ||        H             L                L                      H                L 
 ------------------------------------------------------------------------------------------------------ 
 happiness     ||        H             L                H                      H                H
 ------------------------------------------------------------------------------------------------------ 
 sadness       ||        L             H                L                      L                L
 ------------------------------------------------------------------------------------------------------ 
 excitement    ||        H             L                                       L                EH/EL  
 ------------------------------------------------------------------------------------------------------ 
 pride         ||                      L                H                      H                H 
 ------------------------------------------------------------------------------------------------------ 
 love          ||                      EL               EH                     EH               EH 
 ------------------------------------------------------------------------------------------------------ 
 hate          ||        EH            EL               EH                                      EL
 ------------------------------------------------------------------------------------------------------ 
 gratitude     ||                                                              H                H
 ------------------------------------------------------------------------------------------------------
                             ( H = high, L = low, M = medium, E = extremely )

PsiFeelingUpdaterAgent calls the corresponding feeling updaters during cognitive cycles. The execution frequency is controlled by PSI_FEELING_UPDATER_CYCLE_PERIOD in the config file.
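To give a rough feel for how this works, the sketch below computes an anger intensity from the modulator and pleasure values, following the "anger" row of the table (high Activation and SecuringThreshold; low Resolution, SelectionThreshold and Pleasure). It is a simplified illustration with a hypothetical argument list; the actual functions in feeling_updaters.scm are more involved.

 ; Illustrative sketch only: anger intensity as the average agreement of
 ; the current state with the "anger" row of the table above.
 ; All inputs are assumed to lie in [0, 1].
 (define (angerFeelingUpdater activation resolution securing-threshold
                              selection-threshold pleasure)
   (/ (+ activation                      ; Activation: high
         (- 1.0 resolution)              ; Resolution: low
         securing-threshold              ; SecuringThreshold: high
         (- 1.0 selection-threshold)     ; SelectionThreshold: low
         (- 1.0 pleasure))               ; Pleasure: low
      5.0))

 ; Example: (angerFeelingUpdater 0.9 0.2 0.8 0.1 0.1)  =>  0.86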

PsiActionSelectionAgent

This mind agent handles intention selection, action planning and action execution. It works as follows:

  // Step 1: Check the state of the currently running action
  if (current_action is still running) {
      if (current_action succeeded) {
          increase the truth value of the psi rule related to the current action;
      }
      else if (current_action failed) {
          decrease the truth value of the psi rule related to the current action;
      }
      else if (current_action timed out) {
          abort the current action;
          decrease the truth value of the psi rule related to the current action;
      }
  }

  // Step 2: Do intention selection and action planning
  //         if no more actions are available
  if (action_list is empty && step_list is empty) {
      pick a demand;
      do action planning for the selected demand;
  }

  // Step 3: Get a new action
  if (action_list is not empty) {
      pop an action from the action list;
  }
  else {
      pop a step from the step list;
      pop an action from the current step;
  }

  // Step 4: Execute the action
  Execute the current action;

Step 1: Check the state of the currently running action

Step 2: Do intention selection and action planning if no more actions are available

The Scheme function do_planning in the file below handles both tasks.

 opencog/embodiment/action_selection.scm

For the moment, do_planning always selects the demand with the lowest satisfaction via get_most_critical_demand_goal. We will introduce random selection controlled by the SelectionThreshold modulator later.
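A toy sketch of this selection is given below; it simply picks the demand with the lowest satisfaction from an association list of made-up values, whereas the real get_most_critical_demand_goal operates over the AtomSpace structures described earlier.

 ; Toy sketch: return the (name . satisfaction) pair with the lowest
 ; satisfaction. demands is a non-empty association list.
 (define (most-critical-demand demands)
   (let loop ((remaining (cdr demands)) (best (car demands)))
     (cond ((null? remaining) best)
           ((< (cdar remaining) (cdr best))
            (loop (cdr remaining) (car remaining)))
           (else (loop (cdr remaining) best)))))

 ; Example:
 ; (most-critical-demand '(("Energy" . 0.9) ("Certainty" . 0.3) ("Competence" . 0.7)))
 ;   =>  ("Certainty" . 0.3)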

After intention selection, do_planning does action planning for the selected demand via make_psi_plan. Finally, it stores all the planning results in the AtomSpace, including the planning result (success or failure), the psi rules (ImplicationLinks) used, the actions, and the selected demand goal, in the format below:

 EvaluationLink
     (SimpleTruthValue indicates whether the planning succeeds or fails)
     PredicateNode "plan_success"
         ListLink (empty)
 ReferenceLink
     ConceptNode "plan_rule_list" 
     ListLink
         psi_rule_1
         psi_rule_2
         ...
 ReferenceLink
     ConceptNode "plan_action_list"
     ListLink
         step_1
         step_2
         ...
 ReferenceLink
     ConceptNode "plan_selected_demand_goal"
     EvaluationLink
         PredicateNode "demand_name"
         ListLink (empty)

All the psi rules (ImplicationLinks) used by do_planning can be found in

 opencog/embodiment/unity_rules.scm 

There are also many utility functions in the file

 opencog/embodiment/rules_core.scm

Step 3: Get a new action

Here we should make a clear distinction between steps and actions. For each psi rule, there is a single step, while each step may contain several actions. Actions can be grouped together by means of an AndLink, OrLink or SequentialAndLink, forming a single step.

For instance, the following psi rule contains a single step (i.e. a SequentialAndLink) but three actions (goto_obj, grab and eat).

 (ForAllLink (cog-new-av 1 1 1)
     (ListLink
         (VariableNode "$var_food") 
     ) 
     (add_rule (stv 1.0 1.0) EnergyDemandGoal 
         (SequentialAndLink
             (add_action (GroundedSchemaNode "goto_obj") (VariableNode "$var_food") (NumberNode "2") )
             (add_action (GroundedSchemaNode "grab") (VariableNode "$var_food") )
             (add_action (GroundedSchemaNode "eat") (VariableNode "$var_food") )
         )    
         GetFoodGoal
     )
 ) 

PsiActionSelectionAgent::getPlan gets the step list of the current plan, not the actions.

PsiActionSelectionAgent::getActions returns the corresponding actions given the handle of a step. It handles AndLink, OrLink and SequentialAndLink automatically.
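Conceptually the decomposition is simple; the toy Scheme sketch below shows the idea over a nested-list stand-in for a step, and is not the actual C++ implementation.

 ; Toy sketch: a "step" is represented here as a list whose head is the
 ; link type and whose tail is the outgoing set of actions.
 (define (step->actions step)
   (if (and (pair? step)
            (member (car step) '(AndLink OrLink SequentialAndLink)))
       (cdr step)         ; the step groups several actions
       (list step)))      ; the step is itself a single action

 ; Example:
 ; (step->actions '(SequentialAndLink (goto_obj food 2) (grab food) (eat food)))
 ;   =>  ((goto_obj food 2) (grab food) (eat food))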

Step 4: Execute the action

This is done by PsiActionSelectionAgent::executeAction, which encapsulates all the messy work.

Actions currently available are defined in the file below (all of them are implemented in the unity world):

 scripts/embodiment/unity_action_schema.combo

A complete list of all the actions can be found in the file below (many of them are not implemented in the unity world):

 scripts/embodiment/all_action_schema.combo

PsiRelationUpdaterAgent

There is some experimental code, but this mind agent is not functional at the moment. We should make it work once a new version of PLN is implemented.

Plan

  1. Perhaps the first step would be to fix the SchemaRule representation so we get cognitive schematics; for that, the SchemaRules (present in /opencog/embodiment/pet_rules.lua) must be rewritten, and the C++ code that loads them must be corrected accordingly. I am not sure one needs to keep the current format of SchemaRules (Lua tuples); I wonder whether one could not write them in Scheme instead and load the Scheme code at start-up. That way, there would be no need to adapt RuleEngine::addRuleToAtomSpace() at all.
  2. The RuleEngine code taking care of action selection (RuleEngine::processNextAction()) must of course be changed so that it uses PLN with the new cognitive schematic format.
  3. Eventually it would be good to split the RuleEngine into the various MindAgents mentioned above; this is perhaps less urgent, but I think at this stage it would be better done now than later.
  4. As noted above, feelings should be correctly split into Demands and Modulators (they can still be labeled feelings on the user end, though). So the FEELINGS_RULES must be split as well, and the C++ code must be adapted.
  5. Finally, one can experiment with the modulator dynamics and so on.

An aspect which has not been discussed at all is attention allocation. It is clear that, in order to infer cognitive schematics efficiently, ECAN and other sophisticated processes will have to be integrated; once the plan presented above has been completed, this will be the next step.

References

  • Zhenhua Cai, Ben Goertzel, Changle Zhou, Yongfeng Zhang, Min Jiang, Gino Yu, "Dynamics of a computational affective model inspired by Dörner's PSI theory", Cognitive Systems Research, 2011 (SciVerse ScienceDirect, http://www.elsevier.com/locate/cogsys)