There are multiple half-finished concepts for action orchestration in OpenCog.
- The Action Orchestrator, for multiplexing different streams of robot control commands, e.g. so that the robot will not both smile and frown at the same time. As of March 2016, the code is under heavy development, and is here on GitHub. This orchestrator is upstream of the self-model; the self-model records the robot's current knowledge of what it is currently doing. The self-model is also on GitHub.
- Economic Action Selection, described on this page.
- Improved Action Selection, also described on this page.
- The first-pass Action Orchestrator design on the Hanson Robotics wiki site.
All four of these ideas will need to be further articulated, refined, and made to work with one another.
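To make the multiplexing idea concrete, here is a minimal sketch (not the actual OpenCog code) of an orchestrator that admits action requests from multiple streams, using hypothetical mutual-exclusion groups so that conflicting actions such as "smile" and "frown" never run together, while a simple self-model dictionary records what the robot currently believes it is doing. All names here are illustrative assumptions, not part of the real Action Orchestrator API.

```python
from dataclasses import dataclass, field

# Hypothetical conflict groups: actions sharing a group are mutually exclusive.
CONFLICT_GROUPS = {
    "smile": "face", "frown": "face",      # facial expressions conflict
    "nod": "head", "shake_head": "head",   # head gestures conflict
}

@dataclass
class Orchestrator:
    # Toy self-model: records what the robot is currently doing, per group.
    active: dict = field(default_factory=dict)  # group -> action name

    def request(self, action: str) -> bool:
        """Admit the action unless a conflicting one is already running."""
        group = CONFLICT_GROUPS.get(action, action)
        if group in self.active:
            return False  # a conflicting action is in progress; reject
        self.active[group] = action
        return True

    def finish(self, action: str) -> None:
        """Mark the action as completed, freeing its conflict group."""
        group = CONFLICT_GROUPS.get(action, action)
        if self.active.get(group) == action:
            del self.active[group]

orch = Orchestrator()
orch.request("smile")   # admitted: face group is free
orch.request("frown")   # rejected: face group already busy
orch.request("nod")     # admitted: head group is independent
orch.finish("smile")    # face group freed; "frown" could now be admitted
```

A real orchestrator would also need priorities and preemption (an urgent gesture interrupting an idle animation), which this sketch deliberately omits.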
See also: Execution management