MultiVerse User Manual (Embodiment)

From OpenCog

Quick (but not too much) start

Installing and Configuring OpenCog

Status: Unknown. If anyone has the info, or at least a bzr revision to start from, please add it here. Until that information is added, the rest of this document is useless to new developers.

Installing and configuring Multiverse (only on Windows).

  1. Download and Install the Sun Java Runtime or Development Kit, version 1.6
  2. Download and Install the Multiverse Client
  3. Download and Install the Multiverse tools
  4. Download and configure the world assets
    1. Unpack the assets in any directory.
    2. Get another assets package:
      1. Unpack it inside the directory where you unpacked the first assets package
    3. Run Multiverse Viewer (From multiverse tools)
      1. Click File->Designate Asset Repository...
      2. Click "Browse And Add Directory" and select the directory where you unpacked the assets
      3. Click Ok and close Multiverse Viewer
  5. Go to C:\Program Files\Multiverse Client\bin\
    1. Right-click MultiverseClient.exe and select "Send To -> Desktop (create shortcut)"
    2. Right-click the shortcut and select Properties
    3. Put the following command on the Target: "C:\Program Files\Multiverse Client\bin\MultiverseClient.exe" --use_default_repository --frames_between_sleeps 2 --world_settings_file "C:\Program Files\Multiverse Client\Worlds\world_settings.xml"
      1. Click Ok to confirm the changes
  6. Edit the file C:\Program Files\Multiverse Client\Worlds\world_settings.xml. It must contain the following text:
  <loopback_world_response world_id="sampleworld">
    <account id_number="1"/>
    <server hostname="" port="5041"/>
  </loopback_world_response>

Please note that the hostname is the address of the Multiverse Proxy running on our server. Just so you know, there are basically three services running on the server side: 1) the Multiverse Proxy; 2) OpenCog, which controls the PetBrain; 3) NLGen, which generates the English sentences that answer the user's questions.

Ok. Now you can:

  1. Launch Multiverse Client by using the shortcut you've configured. It will open the Character Selection screen.
  2. Choose Suzy or Sally and click "Play". A help screen will appear after the simulation starts; close it.
  3. Put the mouse over the console (at the bottom left) and press Enter to give it focus (ESC will remove the focus from the console).
  4. type /loadagent

Fido, the default pet, will appear in front of your avatar. Now you can "talk" to the agent via the console. The agent "understands" basically two types of sentences: commands and questions.


  1. Grab: the agent will go to a given item and grab it
    1. e.g. grab the <item>
  2. Go to: the agent will go to the position of a given object
    1. e.g. go to the <object>
  3. Drop: the agent will drop the item it is holding
    1. e.g. drop the <item>
    2. e.g. drop it

That is, the agent can execute three different commands.
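For example, assuming the stick and the tree from the Available Objects list below are present in the world, a command session typed in the console could look like this:

 grab the stick
 go to the tree
 drop it

Remember that commands are only requests, so the agent may delay or forget them if it has higher-priority needs.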


The agent is capable of answering questions related to physiological needs, emotions and spatial perceptions.

  1. Available physiological needs
    1. hunger (hungry)
    2. thirst (thirsty)
    3. poo urgency (poo)
    4. pee urgency (pee)
  2. Available emotions
    1. happiness (happy)
    2. fear (fearful)
    3. pride (proud)
    4. love (loving)
    5. hate (hateful)
    6. anger (angry)
    7. gratitude (grateful)
    8. excitement (excited)
  3. Available Spatial relations
    1. in front of
    2. below
    3. above
    4. behind
    5. beside
    6. inside
    7. near
    8. far from
    9. outside
    10. on the left of
    11. on the right of
  1. Yes/No questions
    1. Do you want to <poo|pee>?
    2. Are you <happy|fearful|proud|loving|hateful|angry|grateful|excited>?
    3. Is the <object1> {in front of|below|above|behind|beside|inside|near|far from|outside|on the left of|on the right of} the <object2>?
    4. Is the ball between the <object1> and the <object2>?
  2. Discursive questions
    1. What is {in front of|below|above|behind|beside|inside|near|far from|outside|on the left of|on the right of} the <object>?
    2. What is between the <object1> and the <object2>?
    3. Where is the <object>?
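For instance, the following questions (typed in the console) follow the patterns above; the answers depend on the current state of the world:

 Are you hungry?
 Do you want to pee?
 Is the robot above the box?
 Where is the ball?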
Available Objects

tree, fountain, stick (item), box, barrel, bowl, house, ball (item), bear (item), weight (item), robot (item)


Example questions and answers:

What is inside the box?
the green ball is inside the large box
What is above the box?
the robot is above the large box
What is below the robot?
the large box is below the robot
What is between the fountain and the tree?
the red ball is between the active fountain and the tree

Please notice that:

  1. Only items can be held (grabbed) by the agent.
  2. The robot is above the box.
  3. There is a green ball inside the box.
  4. A command sent to the agent will not necessarily be executed. Commands are only requests. For instance, if the agent needs to eat, drink, pee, poo, etc., the command execution will be delayed, and the request may even be forgotten by the agent if the higher-priority actions take too long to finish. In that case, you need to send another command request.
  5. There is an issue in the MV-Proxy that keeps a "ghost" of the last loaded agent after the MV Client logs out and logs in again. You can give the agent a name, like "/loadagent Pet1", "/loadagent Pet2", etc., to let you continue the tests. Or, alternatively, you can send me an email and I will restart the Proxy to clean up the "ghosts".
  6. If you want more info about the application, go to:

Learning Session

The user can teach tricks to the agent. To do that, it is necessary to put the agent in Learning Mode. There are some basic commands that you need to know in order to put the agent in Learning Mode and teach it:

LEARN <trick name>

It will put the agent in Learning mode

I WILL <trick name>

It determines the start of an exemplar (a set of actions that will be used by the agent to try to imitate the avatar)

DONE <trick name>

It determines the end of an exemplar


TRY

The agent will select a trick candidate to execute (a single exemplar can produce more than one candidate; each candidate is a combination of the actions used in the exemplar). Each time you type TRY, the Learning Server will take the next candidate from the candidates list and send it to the agent to be executed. You can approve or decline the attempt by giving reinforcements to the agent. The command TRY is just a request; it does not mean that the trick will be executed immediately. It is possible that the agent has a higher-priority action to execute, and, if so, your TRY request may be forgotten by the agent. In that case, you will need to send another TRY command.
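For example, assuming an exemplar of a trick named foo has already been given (see the session example at the end of this page), you could cycle through the candidates like this:

 try
 try
 good boy

The first try executes the first candidate; typing try again requests the next one, and "good boy" approves the candidate that was just executed.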


GOOD BOY

Positive reinforcement


Negative reinforcement


The Learning Server will send the same candidate again instead of the next candidate in the list.


STOP LEARN <trick name>

Puts the agent back into Playing Mode.

DO <learned trick name>

Only available in Playing Mode. It is a request to the agent to execute a learned trick.


For example, to teach a trick named foo:

learn foo
i will foo

the avatar:

 1) goes to a given ball
 2) grabs it by selecting the ball with the right 
    mouse button and typing /grab in the console
 3) drops the ball by typing /drop


done foo
good boy
stop learn foo
do foo

Basic Avatar commands when using Multiverse:

/grab (the target object must be selected and the avatar must be near it)