OpenCogPrime:GoalFormation

Goal Formation

Goal formation in OCP is done via PLN inference. In general, what PLN does for goal formation is to look for predicates that can be proved to probabilistically imply the existing goals. These new predicates will then tend to receive RFSs, according to the logic of RFSs that will be outlined later.
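
A minimal sketch of this nomination step is given below. This is illustrative Python, not OCP code: the Implication record, the nominate_subgoals helper and the two thresholds are assumptions made for this example, and the usage lines borrow the reward-token scenario developed in the rest of this section.

from dataclasses import dataclass

@dataclass
class Implication:
    antecedent: str    # candidate predicate, e.g. an observed event
    consequent: str    # an existing goal predicate
    strength: float    # estimated P(consequent | antecedent)
    confidence: float  # how much evidence supports that estimate

def nominate_subgoals(goals, implications, min_strength=0.8, min_confidence=0.5):
    # Keep candidates whose implication to some existing goal is both
    # strong and well supported; these become subgoal nominees.
    return [imp.antecedent for imp in implications
            if imp.consequent in goals
            and imp.strength >= min_strength
            and imp.confidence >= min_confidence]

goals = {"Pleasure", "RecentLearning"}
learned = [Implication("give(teacher, me, reward_token)", "Pleasure", 0.9, 0.7)]
print(nominate_subgoals(goals, learned))   # ['give(teacher, me, reward_token)']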

As an example of the goal formation process, consider a simple initial ubergoal (not an adequate one, just an illustrative example):

AND
   Pleasure
   RecentLearning

where Pleasure and RecentLearning are FeelingNodes assessing the amount of "pleasure" stimulus and the amount of recent learning that the system has undertaken. It may then be learned that whenever the teacher gives the system a reward token, this gives it pleasure:

SimultaneousAttraction
   EvaluationLink give (teacher, me, reward_token)
   Pleasure
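
The sketch below shows one way a relationship of this kind could be estimated from experience. The difference-of-conditional-probabilities measure used here is only an assumed stand-in for the actual PLN semantics of SimultaneousAttraction, and the observation history is invented.

def simultaneous_attraction(observations):
    # observations: (reward_given, pleasure_felt) boolean pairs sampled
    # over the same moments of time.
    with_reward = [felt for given, felt in observations if given]
    without_reward = [felt for given, felt in observations if not given]
    p_with = sum(with_reward) / len(with_reward)
    p_without = sum(without_reward) / len(without_reward)
    return p_with - p_without   # > 0 means pleasure tends to accompany the reward

# Pleasure almost always accompanies the reward token, and rarely occurs otherwise.
history = ([(True, True)] * 9 + [(True, False)]
           + [(False, True)] * 2 + [(False, False)] * 8)
print(simultaneous_attraction(history))   # approximately 0.7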

This information allows the system (the Goal Formation CIM-Dynamic) to nominate the atom:

EvaluationLink give (teacher, me, reward_token)

as a goal (a subgoal of the original ubergoal). This is an example of goal refinement, one of many ways that PLN can create new goals from existing ones.
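
To see why such a nomination advances the original ubergoal, consider the toy calculation below. It assumes, purely for illustration, that the AND ubergoal's satisfaction is a fuzzy conjunction computed as the minimum of its components' truth values; OCP's actual truth-value semantics may differ, and the numbers are invented.

def and_satisfaction(components):
    # Degree to which the conjunctive ubergoal is currently satisfied,
    # under the assumed minimum-based fuzzy AND.
    return min(components.values())

before = {"Pleasure": 0.2, "RecentLearning": 0.6}
# Achieving the subgoal give(teacher, me, reward_token) is expected, via the
# learned SimultaneousAttraction, to raise the Pleasure component.
after = {"Pleasure": 0.8, "RecentLearning": 0.6}

print(and_satisfaction(before))   # 0.2
print(and_satisfaction(after))    # 0.6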