Embodiment (2018 Archive)

From OpenCog

Historical record. Work on Embodiment ceased in 2019. The documentation below contains some good ideas and some interesting designs; however, none of it is being actively developed. This page stands as a historical record of how things were in 2018.


The current Embodiment design is in flux. It is primarily aimed at powering the Hanson Robotics virtual avatars and physical robots.

The proper implementation of embodiment requires "understanding the real world". This can be accomplished by creating internal models of the real world (i.e. within the AtomSpace), and then analyzing what is happening in the real world by analyzing what is happening to the internal model of it. For example, logical inference can be applied to what is known and represented in the AtomSpace. Behavior need not be logical; it can be driven by moods and feelings; these moods are driven by the internal state of the machine, that internal state being held in the AtomSpace. Similarly, vocalizations can be created based on what is believed about the world, as represented in the internal model of the world. In addition, reasoning about facial expressions, arms and body, viz. "self-awareness", takes place on the internal representation of the body and its position and orientation in the real world.
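The perceive-model-infer-act loop described above can be sketched in plain Python. This is only an illustration of the idea, not the actual system: the names `WorldModel`, `infer_mood` and `vocalize` are hypothetical, and the real design stores these beliefs as Atoms in the AtomSpace rather than as Python tuples.

```python
from dataclasses import dataclass, field

# Hypothetical stand-in for the AtomSpace: a set of believed facts
# about the world, each a (subject, predicate, object) triple.
@dataclass
class WorldModel:
    facts: set = field(default_factory=set)

    def perceive(self, subject, predicate, obj):
        """Update the internal model from a new observation."""
        self.facts.add((subject, predicate, obj))

    def holds(self, subject, predicate, obj):
        """Query the internal model, not the raw sensor stream."""
        return (subject, predicate, obj) in self.facts

def infer_mood(model: WorldModel) -> str:
    """Toy inference: the mood is derived from the internal model,
    so behavior is driven by internal state rather than directly by
    the external world."""
    if model.holds("visitor", "expression", "smiling"):
        return "happy"
    if model.holds("room", "occupancy", "empty"):
        return "bored"
    return "neutral"

def vocalize(model: WorldModel, mood: str) -> str:
    """Generate a vocalization from beliefs plus the current mood."""
    if mood == "happy":
        return "Hello! It's nice to see you smiling."
    if mood == "bored":
        return "Is anyone there?"
    return "I'm watching and thinking."

if __name__ == "__main__":
    model = WorldModel()
    model.perceive("visitor", "expression", "smiling")
    mood = infer_mood(model)
    print(mood)                     # happy
    print(vocalize(model, mood))
```

Note that nothing in the action side touches the sensors directly: perception writes into the model, and inference and vocalization read only from it, which is the point of the internal-model architecture.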

A prototype of the world-model system can be found here, on GitHub. The architecture of the world-model system is described in this PDF.

Current development is focused on layering the ghost chat system onto the world model, so that the world-representation and the self-representation are more closely tied to the chat subsystem.

Earlier Versions

Earlier versions of the embodiment design can be found here: