Embodiment (2010 version)

From OpenCog


THIS PAGE IS OBSOLETE

Embodiment is a system of components for OpenCog that are designed to control an avatar, although that avatar may be in a virtual world or a robot in the real world.

This page documents Embodiment as it existed in the 2007-2014 timeframe. The code that implemented this version was removed from GitHub in 2015.

Please see New Embodiment Module, March 2015 for the new Embodiment architecture which, at time of writing in August 2015, is largely but not entirely in place.

The remainder of this page is obsolete and no longer reflects the current codebase.

OLD INFORMATION BELOW!!

The Embodiment system uses a proxy to connect to the world where the agent's body lives. The proxy provides an interface for collecting perceptions from the world and for commanding the agent's body through action plans.

Note: The Embodiment system traditionally called the agents being controlled "Pets", because the project it grew out of was focused on virtual pets. However, we are in the process of renaming them to "Avatar". If you see old references to pets, please feel free to change them (you can ask on the discussion list or IRC room first if you are unsure).

Architecture

Components and Modules

Feelings and Predicates

Functionalities and features

System usage and tests

Miscellaneous

Combo-related pages

Behavior-related pages

Occams Razor

Ben's notes and ideas

These pages, written by Ben, may still be useful for further work:

Language Comprehension

(This material is all semi-obsolete; the NLP comprehension pipeline has been massively upgraded since this old NLP/Embodiment work was done, so it may never be used again.)

Others