OpenCog HK 2014 High Level Goals

From OpenCog

2014 Planning Summary for OpenCog Hong Kong Project (including tasks for iCog staff allocated to support OpenCog HK)

(initial version by Ben Goertzel, Nov 2013; heavily revised by Ben early May 2014)

This document gives a brief summary of the 2014 goals for the OpenCog Hong Kong project. Lots of things are omitted or glossed over; this is only intended as a very high level summary.

Very High Level View

At the very high level, what we're after during 2014 and early 2015 is:

  • An OpenCog game character with reasonable learning ability in the Unity3D game world context, and a bit of creativity as well
  • A game character showing simple social modeling in the game world, with some capability for deception based on knowledge of what other agents can see and what they cannot
  • A TurtleBot type robot that can use OpenCog to move around the room with a bit of understanding, e.g. following people, going to objects in categories it recognizes, etc.
  • OpenCog to control the Hanson or Hanson-ish humanoid robots for some reasonable subset of the TurtleBot functionality
  • A generally capable OpenCog-based NL dialogue system
  • Basic NL dialogue capability in the context of what an OpenCog embodied agent is perceiving and doing

Specific Goals and Tentative Task Breakdowns

1) 3D Game World

Our high-level game-world goal for 2014 and early 2015 is to get

  • planning / navigating
  • uncertain reasoning about objects and their relationships
  • pattern mining
  • dialogue
  • social modeling

working together in the game world, to enable game characters to autonomously carry out a variety of simple, "intelligent" behaviors.

In pragmatic terms, two high level goals are

  • To get a game-AI demonstration nice enough that we have something interesting to show game-AI industry people, indicating that what we have is different from typical game AIs
  • To have interesting enough behaviors to motivate open source contributors to help improve the AI and the game world

The first three items on the above list are things we've been thinking about and working on for a while in the game-AI context. Dialogue is being worked on separately by the dialogue-system team, and can fairly easily be integrated with the game world. Social modeling matters partly because it is needed for the robotics applications of our collaborating firm Hanson Robotics, and partly because a strong potential funder is keen to see a game character demonstrably capable of simple acts of deception in the game world by the end of 2014.
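To make the social-modeling idea concrete, here is a minimal sketch of the kind of visibility check an agent might use to decide whether deception is even possible: hide something only when a rival cannot see the hiding spot. All the names and the flat 2D geometry are hypothetical illustrations; in the actual system this knowledge would live in OpenCog's Atomspace and be handled by inference rather than raw Python.

```python
import math

def can_see(observer, target, obstacles, fov_radius=10.0):
    """Crude visibility test: target is visible if it is within range and
    no obstacle lies close to the straight segment between the two points."""
    ox, oy = observer
    tx, ty = target
    if math.hypot(tx - ox, ty - oy) > fov_radius:
        return False
    dx, dy = tx - ox, ty - oy
    seg_len2 = dx * dx + dy * dy
    for bx, by in obstacles:
        if seg_len2 == 0:
            continue
        # Project the obstacle onto the observer->target segment
        t = max(0.0, min(1.0, ((bx - ox) * dx + (by - oy) * dy) / seg_len2))
        px, py = ox + t * dx, oy + t * dy
        if math.hypot(bx - px, by - py) < 1.0:  # assumed obstacle "radius"
            return False
    return True

def should_hide_object(rival, cache_spot, obstacles):
    """Deceive only if the rival cannot currently see the cache spot."""
    return not can_see(rival, cache_spot, obstacles)
```

In a real game character this predicate would feed into the planner as one condition among many; the point is just that "what can the other agent see?" is a cheap, queryable relation.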

More concretely, some previous thinking on game-world AI goals is on this page: ... The tasks described below should enable these target behaviors, among others.

On the operational level, we also have the goal of enabling a reliable, stable live demonstration of OpenCog in a Unity3D game world by early fall 2014.

See OpenCog HK 2014 Game World Task Breakdown

2) Robotics

Our robotics goals for 2014 and early 2015 are fairly modest on the AI side, simply because getting things to work on physical robots is hard.

The high-level goal is to get a robot to do some basic things, controlled via OpenCog, so as to create a platform that can be used for ongoing, more advanced OpenCog robotics work. So we want, for example, to get a simple robot (a TurtleBot or some variant) to

  • navigate using OpenCog's planner
  • chat using first OpenDial, then (via the same interface) OpenCog's dialogue system
  • be controllable via commands that are uttered in English and then translated into robot control commands using OpenCog's NL comprehension system
  • reason about other agents' perceptions, in simple ways, using OpenCog's inference engine

We want to achieve this in a flexible ROS-based architecture that also works with David Hanson's robots and can be extended to other robots as well.
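The third bullet above, translating uttered English into robot control commands, can be sketched very roughly as a lookup from a parsed (verb, object) pair to a named robot action. Everything here is a made-up stand-in: the keyword "parse" substitutes for OpenCog's NL comprehension pipeline, and the action names substitute for messages that would actually be published on ROS topics.

```python
# Hypothetical (verb, object-category) -> robot action table.
# All entries are invented for illustration.
ACTION_TABLE = {
    ("follow", "person"): "track_and_follow",
    ("go", "ball"): "navigate_to_object",
    ("stop", None): "halt",
}

def parse_command(utterance):
    """Crude keyword 'parse' standing in for real NL comprehension."""
    words = utterance.lower().split()
    verb = next((w for w in words if w in {"follow", "go", "stop"}), None)
    obj = next((w for w in words if w in {"person", "ball"}), None)
    return verb, obj

def translate(utterance):
    """Map an utterance to a robot action name, or None if unrecognized."""
    return ACTION_TABLE.get(parse_command(utterance))
```

The real value of this decomposition is that the speech, NL, and motor sides only meet at the (verb, object) interface, so each can be upgraded independently.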

See OpenCog HK 2014 Robotics Task Breakdown

3) Dialogue System

Our goal here, by the end of 2014, is to have a basic OpenCog-based dialogue system with all the parts in place and the whole working holistically, for the three use cases of

  • standalone dialogue
  • dialogue with a game character
  • dialogue (via speech) with a robot

After this, the next phase of effort will involve improving the components of the system (by further hand-engineering, or in some cases by replacing hand-engineered components with learned ones), and customizing the system for particular functionalities.
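Swapping a hand-engineered component for a learned one without disturbing the rest of the pipeline presupposes a shared interface between the two. A minimal sketch of that design, with entirely hypothetical class and method names:

```python
from abc import ABC, abstractmethod

class ResponseGenerator(ABC):
    """Common interface, so a component can be replaced without touching callers."""
    @abstractmethod
    def respond(self, utterance: str) -> str: ...

class RuleBasedResponder(ResponseGenerator):
    """Stand-in for a hand-engineered component."""
    RULES = {"hello": "Hi there!", "bye": "Goodbye!"}
    def respond(self, utterance):
        return self.RULES.get(utterance.lower().strip(), "I don't understand.")

class LearnedResponder(ResponseGenerator):
    """Stand-in for a learned replacement; the model is just a callable here."""
    def __init__(self, model):
        self.model = model
    def respond(self, utterance):
        return self.model(utterance)

def run_turn(responder: ResponseGenerator, utterance: str) -> str:
    # The dialogue loop depends only on the interface, not the implementation.
    return responder.respond(utterance)
```

The same pattern covers the OpenDial-first, OpenCog-later plan for the robot mentioned above: both sit behind one interface, and the caller never knows which is active.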

See OpenCog HK 2014 Dialogue System Task Breakdown

4) Generic AI Tasks

Alongside tasks focused on dialogue, robotics or game AI, we have some generic AI tasks to undertake as well -- tasks that will serve multiple AI functionalities and applications going forward.

See OpenCog HK 2014 Generic AI Task Breakdown

5) Infrastructure

In addition to tasks related to dialogue, robotics, game AI, or AI in general, there are some purely infrastructural software-engineering tasks that need doing to keep OpenCog from becoming more of a mess than it already is. Collectively these should yield an OpenCog system that is a better platform for all sorts of AI development going forward.

See OpenCog HK 2014 Infrastructure Task Breakdown