OpenCog HK 2014 Robotics Task Breakdown
Contents
- 1 Task 1: Get the Turtlebot stack fully running
- 2 Task 2: Get an OpenDial dialogue system, together with speech I/O, working on the robot
- 3 Task 3: Get a robot navigating using Shujing's planner
- 4 Task 4: Get a robot navigating using Shujing's planner, including moving lightweight obstacles when needed
- 5 Task 5: Supervised learning based identification of 5 different object types based on robot vision
- 6 Task 6: Using DeSTIN to enhance object classification
- 7 Task 7: Extend Jamie's robotics API
- 8 Task 8: Integrate Jamie's API into robot-control architecture
- 9 Task 9: Implement ROS-based robot control/perception architecture (from Goertzel/Hanson vision document)
- 10 Task 10: Integrate and test face identification/tracking software on Turtlebot
- 11 Task 11: Integrate and test salience detection software on Turtlebot
- 12 Task 12: Integrate various functions into unified ROS-based architecture
- 13 Task 13: Make NLP-based interface to Jamie's API, using RelEx2Logic
- 14 Task 14: Make the robot recognize where a person is looking
- 15 Task 15: Get OpenCog to model what the robot sees that person X can see
- 16 Task 16: Get the OpenCog-powered robot to recognize who is who
- 17 Task 17: Get the OpenCog-powered robot to answer questions regarding which people know the location of which objects
- 18 Task 18: Get the OpenCog-powered robot to fetch named objects
- 19 Task 19: Prepare ITF demo using Turtlebot
Task 1: Get the Turtlebot stack fully running
Most likely this will be done on a newly purchased TurtleBot2, but if that doesn't work out, then on the existing iRobot Create + Kinect infrastructure.
Tentatively assigned to: Jamie. Current time estimate: 2 weeks. Tentatively scheduled for: end of June.
Task 2: Get an OpenDial dialogue system, together with speech I/O, working on the robot
This is basically about connecting OpenDial with ROS, since Man Hin is creating the OpenDial-based dialogue system itself.
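A rough sketch of the ROS side of such a bridge is below. Since OpenDial runs on the JVM, this assumes a custom plugin on the Java side exposing the dialogue manager as newline-delimited UTF-8 strings over a local TCP socket; the port, framing, and topic names are all illustrative, not anything OpenDial or our stack actually provides yet.

```python
#!/usr/bin/env python
# Minimal sketch of a ROS <-> OpenDial bridge. Assumes a hypothetical
# OpenDial-side plugin serving newline-delimited strings on localhost:7777;
# that port and framing are made up for illustration.
import socket
import rospy
from std_msgs.msg import String

def main():
    rospy.init_node('opendial_bridge')
    reply_pub = rospy.Publisher('dialogue_reply', String, queue_size=10)

    sock = socket.create_connection(('localhost', 7777))
    stream = sock.makefile('rw')

    def on_speech(msg):
        # Forward the recognized utterance to the dialogue manager...
        stream.write(msg.data + '\n')
        stream.flush()
        # ...and publish whatever reply comes back, for the TTS node to speak.
        reply = stream.readline().strip()
        if reply:
            reply_pub.publish(String(data=reply))

    rospy.Subscriber('speech_recognition', String, on_speech)
    rospy.spin()

if __name__ == '__main__':
    main()
```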
Tentatively assigned to: Jamie. Current time estimate: 2 weeks. Tentatively scheduled for: early July.
Task 3: Get a robot navigating using Shujing's planner
This requires getting a map from the Kinect's output into the 3D SpaceMap, and then getting the planner's commands out to the robot.
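For the "commands out" half, a minimal sketch of a plan-execution node is below. The /spacemap_plan topic and its symbolic step vocabulary are assumptions, standing in for whatever interface Shujing's planner ends up exposing.

```python
#!/usr/bin/env python
# Sketch: turn symbolic plan steps from the planner into velocity commands
# for the Turtlebot base. Topic name and step vocabulary are placeholders.
import rospy
from geometry_msgs.msg import Twist
from std_msgs.msg import String

cmd_pub = None

def on_plan_step(msg):
    # Translate a symbolic plan step into a velocity command for the base.
    twist = Twist()
    if msg.data == 'forward':
        twist.linear.x = 0.2      # m/s, a conservative Turtlebot speed
    elif msg.data == 'turn_left':
        twist.angular.z = 0.5     # rad/s
    elif msg.data == 'turn_right':
        twist.angular.z = -0.5
    # 'stop' falls through to an all-zero Twist
    cmd_pub.publish(twist)

if __name__ == '__main__':
    rospy.init_node('plan_executor')
    cmd_pub = rospy.Publisher('cmd_vel', Twist, queue_size=10)
    rospy.Subscriber('spacemap_plan', String, on_plan_step)
    rospy.spin()
```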
Tentatively assigned to: Jamie. Current time estimate: 2 weeks. Tentatively scheduled for: May / early June.
Task 4: Get a robot navigating using Shujing's planner, including moving lightweight obstacles when needed
As a shortcut, we may label some obstacles with a tag specifically indicating that they are movable.
Tentatively assigned to: Jamie / Mandeep. Current time estimate: 1 month. Tentatively scheduled for: late July / early August.
Task 5: Supervised learning based identification of 5 different object types based on robot vision
As a first experiment we can just do this using a Waffles neural net on the output of the Kinect, using supervised learning on 5 classes of objects commonly seen in the robot's environment.
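The plan is to use Waffles for the actual experiments; purely to make the pipeline concrete, here is a sketch of the same workflow using scikit-learn's MLPClassifier as a stand-in neural net. The class list, feature files, and extract_features stub are all placeholders.

```python
# Illustrative only: scikit-learn stands in here for Waffles so the data
# flow is concrete. Feature extraction from segmented Kinect depth patches
# (extract_features) and the .npy training files are placeholders.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

CLASSES = ['cup', 'book', 'ball', 'chair', 'shoe']   # hypothetical 5 classes

def extract_features(depth_patch):
    """Placeholder: reduce a segmented depth patch to a fixed-size vector."""
    return np.asarray(depth_patch, dtype=float).ravel()[:256]

# X: one feature vector per labeled snapshot; y: index into CLASSES
X = np.load('features.npy')        # assumed pre-extracted training data
y = np.load('labels.npy')
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)
clf.fit(X_train, y_train)
print('held-out accuracy:', clf.score(X_test, y_test))
```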
Tentatively assigned to: Mong. Current time estimate: 2-3 months. Tentatively scheduled for: June-August.
Task 6: Using DeSTIN to enhance object classification
This involves re-doing Task 5, but using DeSTIN states as input features to the neural net classifier, and seeing what improvement one gets.
Tentatively assigned to: Mong (intern). Current time estimate: 2-3 months. Tentatively scheduled for: Sep-Nov.
Task 7: Extend Jamie's robotics API
Jamie's Python API needs to have some functions added to enable it to initiate arbitrary robot actions.
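A hypothetical sketch of the kinds of additions meant here; the actual function names and underlying transport are Jamie's to decide.

```python
# Hypothetical sketch only -- none of these names exist yet in Jamie's API.
class RobotAPI(object):
    def move_to(self, x, y, frame='map'):
        """Drive the base to (x, y) in the given coordinate frame."""
        raise NotImplementedError

    def turn(self, radians):
        """Rotate in place by the given angle (positive = counterclockwise)."""
        raise NotImplementedError

    def say(self, text):
        """Route text to the speech synthesizer."""
        raise NotImplementedError

    def execute(self, action_name, **params):
        """Generic entry point so new actions can be added without changing
        callers -- this is the 'arbitrary robot actions' hook."""
        raise NotImplementedError
```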
Tentatively assigned to: Mandeep / Jamie. Current time estimate: 2 weeks. Tentatively scheduled for: early July.
Task 8: Integrate Jamie's API into robot-control architecture
We want the RC component of our robotics architecture to be able to take commands issued via Jamie's Python API.
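One plausible shape for this, sketched below: the RC component keeps a registry mapping action names (matching the hypothetical API sketch under Task 7) to handler functions, so commands arriving as simple dicts can be dispatched without hard-coding the command set. Everything here is illustrative.

```python
# Sketch of how the RC component might dispatch API-issued commands.
# The print statements stand in for real navigation/speech back-ends.
HANDLERS = {}

def handler(name):
    def register(fn):
        HANDLERS[name] = fn
        return fn
    return register

@handler('move_to')
def handle_move_to(x, y, frame='map'):
    print('sending navigation goal (%.2f, %.2f) in %s' % (x, y, frame))

@handler('say')
def handle_say(text):
    print('speaking: %s' % text)

def dispatch(command):
    """command: dict like {'action': 'move_to', 'params': {...}}"""
    fn = HANDLERS.get(command['action'])
    if fn is None:
        raise ValueError('unknown action: %r' % command['action'])
    return fn(**command.get('params', {}))

dispatch({'action': 'say', 'params': {'text': 'hello'}})
```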
Tentatively assigned to: Mandeep. Current time estimate: 2 weeks. Tentatively scheduled for: late July.
Task 9: Implement ROS-based robot control/perception architecture (from Goertzel/Hanson vision document)
This involves implementing a "walking skeleton" version of the architecture, with all the components in place, and able to communicate with external components using ZeroMQ.
Initial applications will be the Turtlebot, and David Hanson's robot faces.
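A minimal sketch of two skeleton components talking over ZeroMQ, which is the pattern each box in the architecture would follow; the topic string and message payload are placeholders.

```python
# Walking-skeleton sketch: two components exchanging messages over ZeroMQ.
# Run publisher() and subscriber() in separate processes.
import time
import zmq

def publisher():
    ctx = zmq.Context()
    sock = ctx.socket(zmq.PUB)
    sock.bind('tcp://*:5556')
    while True:
        # e.g. the perception component announcing a detection
        sock.send_string('percept {"type": "face", "x": 0.4, "y": 0.1}')
        time.sleep(0.1)

def subscriber():
    ctx = zmq.Context()
    sock = ctx.socket(zmq.SUB)
    sock.connect('tcp://localhost:5556')
    sock.setsockopt_string(zmq.SUBSCRIBE, 'percept')
    while True:
        print(sock.recv_string())
```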
Tentatively assigned to: Mandeep. Current time estimate: 1 month. Tentatively scheduled for: May through early June.
Task 10: Integrate and test face identification/tracking software on Turtlebot
We must also decide which OSS face identification/tracking software to use.
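As a baseline while we evaluate candidates, plain OpenCV face detection with the stock Haar cascade is easy to stand up; a sketch is below. Identification (who the face is) would be layered on top by whichever package we pick; the camera index and cascade path are assumptions about the local install.

```python
# Baseline face detection with OpenCV's stock Haar cascade.
import cv2

# Path to the cascade file that ships with OpenCV (adjust to the install).
cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

cap = cv2.VideoCapture(0)   # the Kinect RGB stream would arrive via ROS instead
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.2, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow('faces', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
```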
Tentatively assigned to: someone in Addis, plus a bit of Mandeep / Jamie. Current time estimate: 2 weeks. Tentatively scheduled for: June.
Task 11: Integrate and test salience detection software on Turtlebot
We must also decide which salience detection software to use.
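One concrete candidate is the spectral-residual method of Hou & Zhang (CVPR 2007), which is simple enough to implement directly and cheap enough to run on the Turtlebot; a sketch follows.

```python
# Spectral-residual saliency (Hou & Zhang 2007), implemented directly:
# the "residual" of the log amplitude spectrum, transformed back to the
# image domain, highlights salient regions.
import cv2
import numpy as np

def spectral_residual_saliency(gray):
    small = cv2.resize(gray, (64, 64)).astype(np.float64)
    spectrum = np.fft.fft2(small)
    log_amp = np.log1p(np.abs(spectrum))
    phase = np.angle(spectrum)
    # The residual is the log spectrum minus its local average.
    residual = log_amp - cv2.blur(log_amp, (3, 3))
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    sal = cv2.GaussianBlur(sal, (9, 9), 2.5)
    sal = (sal - sal.min()) / (sal.max() - sal.min() + 1e-12)
    return cv2.resize(sal, (gray.shape[1], gray.shape[0]))
```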
Tentatively assigned to: someone in Addis, plus a bit of Mandeep / Jamie. Current time estimate: 2 weeks. Tentatively scheduled for: June.
Task 12: Integrate various functions into unified ROS-based architecture
The unified ROS-based architecture can be used, at this stage, to integrate
- Navigation using Shujing's planner
- face detection/tracking and salience detection
- various Turtlebot-specific functionality
Tentatively assigned to: Mandeep. Current time estimate: 2 weeks. Tentatively scheduled for: early August.
Task 13: Make NLP-based interface to Jamie's API, using RelEx2Logic
This will give a flexible NLP-based way to command robots to do simple things.
This may require adding some new RelEx2Logic rules, however, as RelEx2Logic hasn't been specifically designed or tested for the kinds of sentences that will need to be translated into Jamie's Python commands.
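Only the last step is sketched below: mapping an already-extracted predicate-argument structure onto a call to Jamie's API. The relation format and verb table are illustrative, not actual RelEx2Logic output.

```python
# Toy sketch of the final mapping step only. Verb table, argument format,
# and the command dicts are all illustrative.
VERB_TO_ACTION = {
    'go': 'move_to',
    'turn': 'turn',
    'say': 'say',
}

def relation_to_command(verb, args):
    action = VERB_TO_ACTION.get(verb)
    if action is None:
        raise ValueError('no robot action for verb %r' % verb)
    return {'action': action, 'params': args}

# "Go to the kitchen" -> after parsing and grounding 'kitchen' to coordinates:
print(relation_to_command('go', {'x': 3.0, 'y': 1.5}))
```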
Tentatively assigned to: Eyob, in Addis. Current time estimate: 2 months. Tentatively scheduled for: June/July.
Task 14: Make the robot recognize where a person is looking
This will let the robot make basic models of a person's subjective state of visual knowledge at a point in time.
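A common cheap approach is to estimate head pose from a few facial landmarks with cv2.solvePnP against a generic 3D head model, and treat the head's forward axis as a gaze proxy; a sketch is below. The model coordinates are the rough generic values widely used for this trick, and the crude focal-length guess should be replaced by a proper Kinect RGB calibration.

```python
# Head-pose estimation from 6 facial landmarks against a generic 3D head
# model (coordinates in millimetres, approximate).
import cv2
import numpy as np

MODEL_POINTS = np.array([
    (0.0, 0.0, 0.0),           # nose tip
    (0.0, -330.0, -65.0),      # chin
    (-225.0, 170.0, -135.0),   # left eye, left corner
    (225.0, 170.0, -135.0),    # right eye, right corner
    (-150.0, -150.0, -125.0),  # mouth, left corner
    (150.0, -150.0, -125.0),   # mouth, right corner
])

def head_pose(image_points, frame_size):
    """image_points: 6x2 array of the landmarks above, in pixels."""
    h, w = frame_size
    focal = w  # crude focal-length guess; calibrate the camera properly
    camera_matrix = np.array([[focal, 0, w / 2.0],
                              [0, focal, h / 2.0],
                              [0, 0, 1]], dtype=np.float64)
    dist_coeffs = np.zeros((4, 1))  # assume negligible lens distortion
    ok, rvec, tvec = cv2.solvePnP(MODEL_POINTS,
                                  np.asarray(image_points, dtype=np.float64),
                                  camera_matrix, dist_coeffs)
    return rvec, tvec  # rvec's rotation gives the head's facing direction
```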
Tentatively assigned to: Mandeep. Current time estimate: 1 month. Tentatively scheduled for: September.
Task 15: Get OpenCog to model what the robot sees that person X can see
I.e., get the robot to make basic models of a person's subjective state of visual knowledge at a point in time.
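A first-cut geometric model might look like the sketch below: person X "can see" an object if it falls inside a viewing cone around their facing direction (from Task 14) and within some range. The angles and range are placeholder values, and occlusion by furniture is ignored.

```python
# Toy visibility model: cone-of-view check, no occlusion handling.
import numpy as np

def can_see(person_pos, facing_dir, object_pos, fov_deg=100.0, max_range=6.0):
    to_obj = np.asarray(object_pos, float) - np.asarray(person_pos, float)
    dist = np.linalg.norm(to_obj)
    if dist == 0 or dist > max_range:
        return dist == 0
    facing = np.asarray(facing_dir, float)
    facing = facing / np.linalg.norm(facing)
    cos_angle = np.dot(to_obj / dist, facing)
    return cos_angle >= np.cos(np.radians(fov_deg / 2.0))

# Person at the origin facing +x; a ball 2m ahead, slightly left -> visible
print(can_see((0, 0), (1, 0), (2.0, 0.5)))   # True
```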
Tentatively assigned to: Mandeep. Current time estimate: 1 month. Tentatively scheduled for: October.
Task 16: Get the OpenCog-powered robot to recognize who is who
If a person tells the robot "I am Jamie" and another person tells the robot "I am Alex", then the robot should be able to associate the name with the image of that person, and track where each specific person is moving in the room during a single encounter.
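The bookkeeping half of this might look like the following sketch, with face descriptors supplied by whichever face-identification package Task 10 settles on; the distance threshold is a made-up number.

```python
# Sketch of name <-> face binding. Descriptor extraction is delegated to
# the (to-be-chosen) face-ID package; the threshold is illustrative.
import numpy as np

class PersonRegistry(object):
    def __init__(self):
        self.people = {}   # name -> face descriptor vector

    def introduce(self, name, descriptor):
        """Called when someone says 'I am <name>' while being tracked."""
        self.people[name] = np.asarray(descriptor, float)

    def identify(self, descriptor, threshold=0.6):
        """Return the best-matching known name, or None."""
        best, best_dist = None, threshold
        for name, known in self.people.items():
            d = np.linalg.norm(known - np.asarray(descriptor, float))
            if d < best_dist:
                best, best_dist = name, d
        return best
```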
Tentatively assigned to: Mandeep. Current time estimate: 2 months. Tentatively scheduled for: Dec-Jan.
Task 17: Get the OpenCog-powered robot to answer questions regarding which people know the location of which objects
Where does Bob think the red ball is? Does Jane know where the blue box is?
The robot should be able to answer this kind of question if it understands which parts of the room a given person can see, and knows where that person has been looking.
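In toy form, the required reasoning reduces to a store of "person P last saw object O at position X" facts, fed by the visibility model from Task 15; a sketch is below. Note that "where Bob thinks the ball is" means where Bob last saw it, which can differ from where it actually is now.

```python
# Toy epistemic store: record sightings from the perception loop, then
# answer who-knows-what questions by lookup.
class BeliefStore(object):
    def __init__(self):
        self.beliefs = {}   # (person, object) -> position last seen there

    def observe(self, person, obj, position):
        """Person saw obj at position (call this whenever can_see fires)."""
        self.beliefs[(person, obj)] = position

    def knows_location(self, person, obj):
        return (person, obj) in self.beliefs

    def believed_location(self, person, obj):
        return self.beliefs.get((person, obj))

store = BeliefStore()
store.observe('Bob', 'red ball', (2.0, 1.0))
print(store.knows_location('Jane', 'blue box'))     # False
print(store.believed_location('Bob', 'red ball'))   # (2.0, 1.0)
```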
Tentatively assigned to: Mandeep. Current time estimate: 3 months. Tentatively scheduled for: Feb-April 2015.
Task 18: Get the OpenCog-powered robot to fetch named objects
Get the red ball. Get the blue sock. Etc. If the robot can recognize objects in certain classes, and has a grabber arm that can pick up objects in those classes, then it should be able to fetch specified objects.
This requires some basic NLP, and will surely require plenty of experimentation and trial and error as well. It also requires getting the robot arm working; i.e., if the arm we buy doesn't come with inverse kinematics supplied, that will need to be implemented or integrated.
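If we end up having to supply inverse kinematics ourselves, the planar two-link case at least has a closed-form solution; a sketch is below, with placeholder link lengths. A real arm with more degrees of freedom would want a proper IK library instead.

```python
# Closed-form IK for a 2-link planar arm (link lengths in metres are
# placeholders). Raises if the target is outside the reachable annulus.
import math

def two_link_ik(x, y, l1=0.3, l2=0.25):
    """Return (shoulder, elbow) joint angles reaching (x, y), elbow-down."""
    d2 = x * x + y * y
    cos_elbow = (d2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    if abs(cos_elbow) > 1.0:
        raise ValueError('target out of reach')
    elbow = math.acos(cos_elbow)             # elbow-down solution
    shoulder = math.atan2(y, x) - math.atan2(l2 * math.sin(elbow),
                                             l1 + l2 * math.cos(elbow))
    return shoulder, elbow

print(two_link_ik(0.4, 0.2))
```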
Tentatively assigned to: a new intern and Alex, with help from Mandeep and Jamie. Current time estimate: 4 months. Tentatively scheduled for: Sep-Dec.
Task 19: Prepare ITF demo using Turtlebot
In November there will be a demo of the robots to the ITF, our funding source. So we must set aside a month of robotics effort to put together the best demonstration we can at that time.
Tentatively assigned to: Mandeep & Mong, with some help from Alex and Jamie. Current time estimate: 1 month. Tentatively scheduled for: Oct.