From OpenCog

Embodiment is a system of OpenCog components designed to control an avatar, whether that avatar is a character in a virtual world or a robot in the real world.

This part of OpenCog is currently in flux and under heavy development. Please see New_Embodiment_Module,_March_2015 for the new Embodiment architecture, which, at the time of writing (August 2015), is largely but not entirely in place.

The remaining information on this page is not guaranteed to be current, and much of it is known to be obsolete. If you want to work on this part of the system, please discuss it on the email list or Slack. By the end of 2015, hopefully well before, the new Embodiment module should be in place and this wiki page will be replaced!



The Embodiment system uses a proxy to connect to the world where the agent's body lives. The proxy provides an interface for collecting perceptions from the world and for commanding the agent's body through action plans.
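The proxy role described above can be sketched as a small interface: the world pushes perceptions in, the agent pulls them out and sends back action plans. This is a minimal illustrative sketch only; the names (WorldProxy, Percept, ActionPlan) are hypothetical and do not reflect the actual Embodiment API.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch of the proxy described above.
# Class and method names are illustrative, not the real Embodiment API.

@dataclass
class Percept:
    """A single observation collected from the world."""
    predicate: str        # e.g. "near"
    arguments: List[str]  # e.g. ["avatar", "ball"]

@dataclass
class ActionPlan:
    """An ordered list of primitive actions for the agent's body."""
    actions: List[str] = field(default_factory=list)

class WorldProxy:
    """Mediates between the agent's mind and the world its body lives in."""

    def __init__(self) -> None:
        self._pending: List[Percept] = []

    def world_event(self, predicate: str, *args: str) -> None:
        # The world (virtual or robotic) pushes a perception into the queue.
        self._pending.append(Percept(predicate, list(args)))

    def collect_percepts(self) -> List[Percept]:
        # The agent drains all queued perceptions.
        percepts, self._pending = self._pending, []
        return percepts

    def send_action_plan(self, plan: ActionPlan) -> int:
        # Forward the plan to the body; here we just report how many
        # actions were dispatched.
        return len(plan.actions)

proxy = WorldProxy()
proxy.world_event("near", "avatar", "ball")
percepts = proxy.collect_percepts()
plan = ActionPlan(actions=["walk_to ball", "grab ball"])
sent = proxy.send_action_plan(plan)
```

The two-sided shape (collect perceptions, dispatch plans) is the key design point: the agent's reasoning never talks to the world directly, so the same mind can drive either a virtual body or a physical robot behind a different proxy.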

Note: The Embodiment system traditionally called the agents being controlled "Pets", because the project it was founded on focused on virtual pets. However, we are in the process of renaming them to "Avatars". If you see old references to pets, please feel free to change them (you can ask on the discussion list or IRC room first if you are unsure).


Components and Modules

Feelings and Predicates

Functionalities and features

System usage and tests


Combo-related pages

Behavior-related pages

Occams Razor

Ben's notes and ideas

These are pages written by Ben that may still be useful for further work:

Language Comprehension

(This material is all semi-obsolete; it may never be used again, because the NLP comprehension pipeline has been massively upgraded since this old NLP/Embodiment work was done.)