EmbodimentLanguageComprehension ReferenceResolution

From OpenCog

This page describes the ideas and specifications for the Reference Resolution rules used in embodiment disambiguation, via RelEx.

Categories

The Reference Resolution rules are classified into categories; each category groups rules that relate to a specific task. 

Entity resolution

These rules are responsible for identifying which object was mentioned in a sentence spoken by an avatar.

Command resolution

Sentences that use imperative verbs are converted into actions and then executed by the agent. These rules must therefore identify the meaning of the avatar's sentence and execute a specific built-in or composed agent action. Composed actions are written in combo and can be found in the .combo files in the scripts/embodiment directory inside the OpenCog repository.

Built-in actions

  • goxx : Move to a given position
    • goto_obj(destination speed) : move, at a given speed, from the current position to a near to the destination object position
    • gonear_obj(destination speed) : like goto_obj, but stops near the destination instead of at it
    • gobehind_obj(object speed) : move to a position behind a given object, given the current agent and object position
    • step_towards(object direction) : direction can be TOWARDS or AWAY (Note: the step seems way too far, more than 10 meters)
    • step_backward (Note: the step seems way too far, more than 10 meters)
    • step_forward (Note: the step seems way too far, more than 10 meters)
    • go_behind(object1 object2 speed)
    • Frame: #Motion

Frame instance example:

#Motion:Theme = Fido
#Motion:Goal = blueBall_99
#Motion:Manner = walking | running | approach | step_to

// used by go_behind, gobehind_obj and gonear_obj
#Locative_relation:Figure = Fido
#Locative_relation:Ground = Object
#Locative_relation:Relation_type = near | behind

In order to resolve the motion and the motion manner, we added the following concept vars:

$Motion
go
move
run
walk
step

$Motion_walking
walk

$Motion_running
run

$Motion_step
step


We also added the following mapping rules to create the correct 
frames that allow us to identify the goal of the motion and its manner:

; Motion
# IF $Imperative_relation($Motion) ^ NOT _to-do($var0,$Motion) THEN ^1_Motion:Theme($Motion,you)
# IF $Imperative_relation($Motion) ^ _to-do($var0,$Motion) THEN ^1_Motion:Theme($Motion,$var0)
# IF to($Motion,$var0)  THEN ^1_Motion:Goal($Motion,$var0)
# IF _amod($Motion,away) THEN ^1_Motion:Goal($Motion,away)
# IF _obj($Motion,$var0) ^ NOT date($var0) ^ NOT $var0=$Time THEN ^1_Motion:Goal($Motion,$var0)

; Motion Manner
# IF $Imperative_relation($Motion_walking) THEN ^1_Motion:Manner($Motion_walking,walking)
# IF $Imperative_relation($Motion_running) THEN ^1_Motion:Manner($Motion_running,running)
# IF $Imperative_relation($Motion_step) THEN ^1_Motion:Manner($Motion_step,step_to)
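The mapping rules above can be sketched procedurally. The following is an illustrative Python sketch, not actual RelEx code: `resolve_motion_frame` is a hypothetical helper, and the relation tuples stand in for parser output (`IMPERATIVE` marks the imperative relation, `_to-do` names an explicit addressee).

```python
# Sketch of applying the Motion mapping rules to RelEx-style relations.
MOTION_VERBS = {"go", "move", "run", "walk", "step"}   # $Motion concept var
MANNER = {"walk": "walking", "run": "running", "step": "step_to"}

def resolve_motion_frame(relations):
    """relations: list of (name, arg0, arg1) tuples from the parser."""
    frame = {}
    rels = {(n, a, b) for n, a, b in relations}
    verbs = {a for n, a, _ in rels if n == "IMPERATIVE" and a in MOTION_VERBS}
    for verb in verbs:
        # Theme: the addressed subject if a _to-do relation exists, else "you"
        subject = next((a for n, a, b in rels if n == "_to-do" and b == verb), "you")
        frame["Theme"] = (verb, subject)
        # Goal: the "to" preposition target or the direct object
        goal = next((b for n, a, b in rels if n in ("to", "_obj") and a == verb), None)
        if goal is not None:
            frame["Goal"] = (verb, goal)
        # Manner: derived from the specific motion verb used
        if verb in MANNER:
            frame["Manner"] = (verb, MANNER[verb])
    return frame

# "Fido, walk to the ball."
rels = [("IMPERATIVE", "walk", None), ("_to-do", "Fido", "walk"), ("to", "walk", "ball")]
print(resolve_motion_frame(rels))
# {'Theme': ('walk', 'Fido'), 'Goal': ('walk', 'ball'), 'Manner': ('walk', 'walking')}
```

This mirrors Cases 1-6 below: the Theme falls back to "you" for a bare imperative, and only run/walk/step contribute a Manner element.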

Relex experiments:

Case 1) Fido, go to the ball.
^1_Motion:Theme(go,Fido)
^1_Motion:Goal(go,ball)

Case 2) Fido, move to the ball.
^1_Motion:Theme(move,Fido)
^1_Motion:Goal(move,ball)

Case 3) Fido, run to the ball.
^1_Motion:Theme(run,Fido)
^1_Motion:Goal(run,ball)
^1_Motion:Manner(run,running)

Case 4) Fido, walk to the ball.
^1_Motion:Theme(walk,Fido)
^1_Motion:Goal(walk,ball)
^1_Motion:Manner(walk,walking)

Case 5) Fido, step to the ball.
^1_Motion:Theme(step,Fido)
^1_Motion:Goal(step,ball)
^1_Motion:Manner(step,step_to)

Case 6) Go to the ball.
^1_Motion:Theme(go,you)
^1_Motion:Goal(go,ball)

Command resolution rule:

ImplicationLink
   AndLink
      InheritanceLink 
         DefinedFrameNode "#Motion"
         VariableNode "$var0"
      InheritanceLink
         DefinedFrameElementNode "#Motion:Theme"
         VariableNode "$var1"
      InheritanceLink
         DefinedFrameElementNode "#Motion:Goal"
         VariableNode "$var2"
      InheritanceLink
         DefinedFrameElementNode "#Motion:Manner"
         VariableNode "$var3"

      FrameElementLink
         VariableNode "$var0" 
         VariableNode "$var1" 
      FrameElementLink
         VariableNode "$var0" 
         VariableNode "$var2" 
      FrameElementLink
         VariableNode "$var0" 
         VariableNode "$var3" 

      EvaluationLink
         VariableNode "$var1"
         SemeNode "Fido"
      EvaluationLink
         VariableNode "$var2"
         VariableNode "$var4"
      EvaluationLink
         VariableNode "$var3"
         ConceptNode "walking"

   EvaluationLink
      ExecutionLink
         GroundSchemaNode "goto_obj"
         ListLink
            VariableNode "$var4"
            NumberNode "2.0" // walk speed
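The logic this ImplicationLink encodes can be sketched as follows. This is an illustrative Python stand-in for the Atomese pattern match, assuming frames are represented as plain dicts; `execute_schema` is a hypothetical stand-in for GroundedSchemaNode dispatch, not a real API.

```python
# Sketch: once a #Motion frame naming Fido as Theme with a "walking" Manner
# is matched, the Goal is grounded and goto_obj is executed at walking speed.
WALK_SPEED = 2.0  # speed argument from the rule's ExecutionLink

def resolve_command(frame, execute_schema):
    # Mirrors the AndLink preconditions of the ImplicationLink above
    if (frame.get("type") == "#Motion"
            and frame.get("Theme") == "Fido"
            and frame.get("Manner") == "walking"
            and "Goal" in frame):
        return execute_schema("goto_obj", [frame["Goal"], WALK_SPEED])
    return None

executed = []
resolve_command(
    {"type": "#Motion", "Theme": "Fido", "Manner": "walking", "Goal": "ball_99"},
    lambda name, args: executed.append((name, args)),
)
print(executed)  # [('goto_obj', ['ball_99', 2.0])]
```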


Frame instance example:

#Motion:Theme = Fido
#Motion:Goal = redBall_01

#Relative_time:Focal_occasion = now
#Relative_time:Focal_participant = Sally (owner avatar)
#Relative_time:Landmark_occasion = dog_park
#Relative_time:Interval = indefinitely


  • rotate_left
  • rotate_right

Frame instance example:

#Moving_in_place:Theme = Fido
#Moving_in_place:Direction = Counterclockwise (left) | Clockwise (right)

Frame instance example:

#Self_motion:Area = air
#Self_motion:Direction = up
#Self_motion:Goal = jump up
  • turn_to_face(object)

Frame instance example:

#Moving_in_place:Theme = Fido
#Moving_in_place:Direction = Sally(object)

Frame instance example:

#Manipulation:Agent = Fido
#Manipulation:Bodypart_of_agent = mouth
#Manipulation:Entity = blueBall_99
#Manipulation:Depictive = grab

Add concept var for grab detection:

$Manipulation_grab
grab

In order to detect the subject of an imperative sentence, we changed the 
following frame rule. This is important for detecting that in the sentence 
"Fido, grab the ball", the subject is Fido rather than "you".

Frame changed FROM

# IF $Imperative_relation($Manipulation) THEN ^1_Manipulation:Agent($Manipulation,you)

TO

# IF $Imperative_relation($Manipulation) ^ NOT _to-do($var0,$Manipulation) THEN ^1_Manipulation:Agent($Manipulation,you)
# IF $Imperative_relation($Manipulation) ^ _to-do($var0,$Manipulation) THEN ^1_Manipulation:Agent($Manipulation,$var0)
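The rule change above reduces to one decision: does a _to-do relation name an explicit addressee for the imperative verb? A minimal sketch, assuming the same illustrative relation tuples as before:

```python
# Sketch of the imperative-subject rule: the Agent is "you" unless a _to-do
# relation names an explicit addressee, as in "Fido, grab the ball."
def imperative_agent(verb, relations):
    """Return the Agent for an imperative use of `verb`."""
    for name, subject, target in relations:
        if name == "_to-do" and target == verb:
            return subject          # explicit addressee: "Fido"
    return "you"                    # bare imperative: "Grab the ball."

assert imperative_agent("grab", [("_to-do", "Fido", "grab")]) == "Fido"
assert imperative_agent("grab", []) == "you"
```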

Added the following frames in order to identify the grab depictive:
; Grab
# IF $Imperative_relation($Manipulation_grab) ^ NOT _to-do($var0,$Manipulation_grab) THEN ^1_Manipulation:Depictive($Manipulation_grab,grab)
# IF $Imperative_relation($Manipulation_grab) ^ _to-do($var0,$Manipulation_grab) THEN ^1_Manipulation:Depictive($Manipulation_grab,grab)

Relex experiments:

Case 1) Fido, grab the ball.
^1_Manipulation:Entity(grab,ball)
^1_Manipulation:Agent(grab,Fido)
^1_Manipulation:Depictive(grab,grab)

Case 2) Grab the ball.
^1_Manipulation:Entity(grab,ball)
^1_Manipulation:Agent(grab,you)
^1_Manipulation:Depictive(grab,grab)

Case 3) Grab it.
^1_Manipulation:Entity(grab,it)
^1_Manipulation:Agent(grab,you)
^1_Manipulation:Depictive(grab,grab)
; anaphora resolution (it, ball)

4) Grab the ball with your mouth.
^1_Manipulation:Entity(grab,ball)
^1_Manipulation:Agent(grab,you)
^1_Manipulation:Bodypart_of_agent(with,mouth)
^1_Manipulation:Depictive(grab,grab)

Locative resolution: To perform locative resolution, the 
Locative_relation frame is used. We added the term next_to to the $Locative_relation concept 
vars. The Locative_relation frame output depends on the RelEx parser, 
since it needs the $Locative_relation(var0, var1) 
(e.g. next_to(ball,barrel)) RelEx output:

5) Grab the ball near the barrel.
^1_Manipulation:Entity(grab,ball)
^1_Manipulation:Agent(grab,you)
^1_Manipulation:Depictive(grab,grab)
^1_Locative_relation:Figure(ball,barrel)
^1_Locative_relation:Distance(near,near)

6) Grab the ball next to the bone.
^1_Manipulation:Entity(grab,ball)
^1_Manipulation:Agent(grab,you)
^1_Manipulation:Depictive(grab,grab)
^1_Locative_relation:Figure(ball,bone)
^1_Locative_relation:Distance(next_to,near)
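A Locative_relation frame lets entity resolution disambiguate between several candidates of the same name: "the ball near the barrel" keeps only balls close to a barrel. The following sketch is hypothetical, the object records and the 2-meter "near" threshold are illustrative and not part of the actual spatial server API:

```python
import math

NEAR_THRESHOLD = 2.0  # meters; illustrative value for "near"

def distance(a, b):
    return math.dist(a["pos"], b["pos"])

def resolve_figure(objects, figure_name, ground_name, relation="near"):
    """Keep only Figure candidates satisfying the relation to some Ground."""
    grounds = [o for o in objects if o["name"] == ground_name]
    return [o for o in objects
            if o["name"] == figure_name
            and relation == "near"
            and any(distance(o, g) <= NEAR_THRESHOLD for g in grounds)]

world = [
    {"id": "ball_01", "name": "ball", "pos": (0.0, 0.0)},
    {"id": "ball_02", "name": "ball", "pos": (9.0, 0.0)},
    {"id": "barrel_07", "name": "barrel", "pos": (1.0, 0.5)},
]
print([o["id"] for o in resolve_figure(world, "ball", "barrel")])  # ['ball_01']
```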
  • nudge_to(object1 object2)
    • seems not to nudge the object, only to execute goto_obj actions

Frame instance example:

#Motion_directional:Area = dog park
#Motion_directional:Direction = down
#Motion_directional:Goal = floor
#Motion_directional:Source = Fido
#Motion_directional:Theme = ball_01
#Motion_directional:Depictive = drop

It requires the following concept var to identify the drop action:

$Motion_directional_down
drop

Frame rules added to the motion_directional for drop:

# IF _obj($Motion_directional_down,$var0) THEN ^1_Motion_directional:Theme($Motion_directional_down,$var0)
# IF verb($Motion_directional_down) THEN ^1_Motion_directional:Direction($Motion_directional_down,down)
# IF $Imperative_relation($Motion_directional_down) ^ NOT _to-do($var0,$Motion_directional_down) THEN ^1_Motion_directional:Source($Motion_directional_down,you)
# IF $Imperative_relation($Motion_directional_down) ^ _to-do($var0,$Motion_directional_down) THEN ^1_Motion_directional:Source($Motion_directional_down,$var0)
# IF $Imperative_relation($Motion_directional_down) THEN ^1_Motion_directional:Depictive($Motion_directional_down,drop)

Relex Experiments:

1) Fido, drop the ball.
^1_Motion_directional:Direction(drop,down)
^1_Motion_directional:Theme(drop,ball)
^1_Motion_directional:Goal(drop,floor)
^1_Motion_directional:Source(drop,Fido)
^1_Motion_directional:Depictive(drop,drop)

2) Drop the ball.
^1_Motion_directional:Direction(drop,down)
^1_Motion_directional:Theme(drop,ball)
^1_Motion_directional:Goal(drop,floor)
^1_Motion_directional:Source(drop,you)
^1_Motion_directional:Depictive(drop,drop)

3) Drop it.
^1_Motion_directional:Direction(drop,down)
^1_Motion_directional:Theme(drop,it)
^1_Motion_directional:Goal(drop,floor)
^1_Motion_directional:Source(drop,you)
^1_Motion_directional:Depictive(drop,drop)
; anaphora resolution (it, ball)

4) Drop.
^1_Motion_directional:Direction(drop,down)
^1_Motion_directional:Goal(drop,floor)
^1_Motion_directional:Depictive(drop,drop)
^1_Motion_directional:Source(drop,you)
  • sniff
  • sniff_at(object)
  • sniff_avatar_part(avatar avatar_part)
    • behaves like sniff
  • eat(object)
  • drink(object)
  • beg
  • hide_face
  • look_up_turn_head
  • sit
  • stretch
  • run_in_circle
  • trick_for_food
  • pee
  • poo
  • bark
  • bark_at(object)
  • lick
  • lick_at(object)
  • growl
  • growl_at(object)
  • whine
  • whine_at
  • fearful_posture
  • tap_dance
  • lean_rock_dance
  • anticipate_play
  • back_flip
  • widen_eyes
  • shake_head
  • wag
  • bite(object)
  • kick_left
  • kick_right
  • look_at(object)
    • look at object for a few seconds
  • group_command(command, target, <parameters>... ) : broadcast a command to the other agents
  • receive_latest_group_commands : receive group commands broadcast by the other agents
Not implemented yet
  • random_step
  • heel
  • jump_towards(object)
  • sniff_pet_part(pet pet_part)
  • tail_flex(contin)
  • pet(object)
  • kick(object)
  • scratch_self(part)
    • part can be NOSE|RIGHT_EAR|LEFT_EAR|NECK|RIGHT_SHOULDER|LEFT_SHOULDER
  • chew(object)
  • scratch_ground_back_legs
  • scratch_other(agent)
    • the pet turns instantly toward the thing to scratch
    • (not tested properly, since the pet is supposed to go to the agent first)
  • lie_down
  • speak
  • belch
  • move_head(contin contin contin)
  • clean
  • sleep
  • bare_teeth
  • bare_teeth_at(object)
  • play_dead
  • vomit
  • move_left_ear(direction)
    • direction in TWITCH|PERK|BACK
  • move_right_ear(direction)
    • direction in TWITCH|PERK|BACK
  • dream(object)

Proxy-PAI Attributes setup

Inside the processMapInfo method of the PerceptionActionInterface, some structures must be built to represent the properties of a real object that is inside the map and perceived by the agent. Here follows an example:

Suppose that a message containing the following blip arrived at PAI:

    <blip timestamp="2007-06-18T20:15:00.000-07:00">
        <entity id="ball_99" name="Ball" type="accessory"/>
        <position x="15" y="164" z="14"/>
        <rotation pitch="0.01" roll="0.02" yaw="2.53"/>
        <velocity x="1.0" y="1.0" z="0.0"/>
      <properties>
        <property name="detector" value="false" />
        <property name="length" value="2" />
        <property name="width" value="2" />
        <property name="height" value="2" />
        <property name="color" value="red" />
        <property name="texture" value="soft" />
      </properties>
    </blip>

So, a structure of properties must be created inside the AtomTable to represent the characteristics of this object.

AccessoryNode "ball_99"
SemeNode "ball_99"

// representing color
ConceptNode "red"
WordNode "red"

ReferenceNode
   ConceptNode "red"
   WordNode "red"

// representing texture
ConceptNode "soft"
WordNode "soft"

ReferenceNode
   ConceptNode "soft"
   WordNode "soft"

// connecting to the SemeNode
ReferenceNode
   AccessoryNode "ball_99"
   SemeNode "ball_99"

// referencing the SemeNode
ReferenceNode
   SemeNode "ball_99"
   WordNode "ball"

// creating the predicates that show color and texture
// P.S.: use AtomTableUtil::setPredicateValue to set up the predicate value
// remember that the presence of the color in the object must be defined by the predicate's TruthValue. Perhaps setPredicateValue will not accept values below 0.5; this must be checked.
EvaluationLink
   PredicateNode "color"
   ListLink
      AccessoryNode "ball_99"
      ConceptNode "red"
   
EvaluationLink
   PredicateNode "texture"
   ListLink
      AccessoryNode "ball_99"
      ConceptNode "soft"
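The parsing step that precedes the AtomTable construction above can be sketched in Python for illustration; the real PAI code is C++ and writes Atoms rather than dicts, so `parse_blip` and the dict result are assumptions, not the actual implementation.

```python
import xml.etree.ElementTree as ET

# Abridged copy of the example blip above
BLIP = """<blip timestamp="2007-06-18T20:15:00.000-07:00">
    <entity id="ball_99" name="Ball" type="accessory"/>
    <position x="15" y="164" z="14"/>
    <properties>
      <property name="color" value="red"/>
      <property name="texture" value="soft"/>
    </properties>
</blip>"""

def parse_blip(xml_text):
    """Extract the entity identity and its property name/value pairs."""
    root = ET.fromstring(xml_text)
    entity = root.find("entity").attrib
    props = {p.get("name"): p.get("value")
             for p in root.find("properties").findall("property")}
    # In PAI, each property becomes an EvaluationLink such as
    # (EvaluationLink (PredicateNode "color") (ListLink ball_99 red))
    return {"id": entity["id"], "type": entity["type"], "properties": props}

info = parse_blip(BLIP)
print(info["id"], info["properties"]["color"])  # ball_99 red
```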