OpenCog GSOC 2009 Summary

The OpenCog Project was pleased to participate in Google Summer of Code 2009.

Our official project page for Google Summer of Code 2009 is here.

This marks our second consecutive annual participation in this excellent mentoring program. This summer Google generously sponsored 9 students to work with us, and all but one successfully completed their proposed projects.

Our student projects this summer covered a diverse range of topics and this page summarizes them briefly with appropriate links. Many of the projects led to fully successful and complete results that were integrated into the main OpenCog system and are now in practical use; some others led to interesting research results that will be useful only after further work has been done, but this is to be expected where "research coding" projects are concerned. All in all we consider this to be an extremely successful GSoC summer, and we would like to thank Google, our mentors and our students for making this a productive and memorable summer.

While a great deal of excellent work was done, the highlight was perhaps Samir Araujo's work connecting OpenCog's virtual pet control infrastructure with OpenCog's RelEx language comprehension system. Some videos highlighting Samir's work may be found at

OpenCog Framework Projects

The goal of this project was to write Python language bindings for the OpenCog Framework's API, so that users could then write Mind Agents (and other applications) in the Python programming language. We had a choice between using a binding generator and writing the bindings by hand with a helper library. The latter was chosen, and we decided to use Boost.Python to write the bindings. As of today, many of the important classes have been exposed and are ready for experimental use, although we still need tests, nicer documentation, and a more Pythonic interface to the classes.
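To illustrate the kind of programming the bindings are meant to enable, here is a self-contained pure-Python sketch of the Mind Agent pattern. The class and method names (MindAgent, run, and the scheduling loop) mirror OpenCog's C++ conventions but are simplified assumptions for this sketch, not the real bound API.

```python
# Minimal pure-Python sketch of the Mind Agent pattern the bindings
# enable. Names are illustrative, not OpenCog's actual API.

class MindAgent:
    """Base class: the server calls run() once per cognitive cycle."""
    def run(self, atomspace):
        raise NotImplementedError

class CountingAgent(MindAgent):
    """Toy agent that counts how many cycles it has been run."""
    def __init__(self):
        self.cycles = 0

    def run(self, atomspace):
        self.cycles += 1

def cognitive_loop(agents, atomspace, n_cycles):
    """Simplified stand-in for the CogServer's scheduling loop."""
    for _ in range(n_cycles):
        for agent in agents:
            agent.run(atomspace)

agent = CountingAgent()
cognitive_loop([agent], atomspace={}, n_cycles=5)
print(agent.cycles)  # -> 5
```

With real bindings, a user would subclass the exposed MindAgent class in the same way and register the agent with the CogServer instead of calling a local loop.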


The goal of this project was to experiment with using a BigTable-style distributed database to create persistent storage for OpenCog's in-RAM knowledge store, the AtomTable.

Doing a thorough job of this requires first linking to the BigTable to implement a simple save/load API for Atom Handles, and then using that implementation to provide just-in-time data persistence for the AtomTable. That is, the AtomTable must be modified, in conjunction with the ECAN system, to keep in memory only those atoms which are currently most important -- thus turning the AtomTable into an AtomCache of the larger, persistent BigTable.
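The intended AtomCache behavior can be sketched as an importance-based cache that evicts to a backing store and faults atoms back in on demand. This is a hypothetical illustration of the design described above; the class and field names are invented, and a plain dict stands in for the BigTable.

```python
# Hypothetical sketch of the AtomCache idea: the in-RAM table holds only
# the most-important atoms; the rest are evicted to a persistent backing
# store (here, a dict standing in for BigTable) and reloaded on demand.

class AtomCache:
    def __init__(self, backing_store, capacity):
        self.store = backing_store      # persistent store stand-in
        self.ram = {}                   # handle -> (atom, importance)
        self.capacity = capacity

    def add(self, handle, atom, importance):
        self.ram[handle] = (atom, importance)
        if len(self.ram) > self.capacity:
            # Evict the least-important atom to the backing store
            victim = min(self.ram, key=lambda h: self.ram[h][1])
            self.store[victim] = self.ram.pop(victim)

    def get(self, handle):
        if handle not in self.ram:
            # Fault the atom back in from persistent storage,
            # evicting some other low-importance atom if needed
            self.ram[handle] = self.store.pop(handle)
            if len(self.ram) > self.capacity:
                victim = min((h for h in self.ram if h != handle),
                             key=lambda h: self.ram[h][1])
                self.store[victim] = self.ram.pop(victim)
        return self.ram[handle][0]

store = {}
cache = AtomCache(store, capacity=2)
cache.add(1, "ConceptNode cat", 0.9)
cache.add(2, "ConceptNode dog", 0.1)
cache.add(3, "ConceptNode bird", 0.5)
print(sorted(cache.ram))  # handle 2, least important, was evicted
print(cache.get(2))       # reloaded transparently from the store
```

In the real design, importance values come from the ECAN attention-allocation system rather than being supplied by the caller.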

During this summer's work, Hypertable persistence of Atoms was achieved. This code includes implementations of all of the functions in the BackingStore.h API, and supports storage and retrieval of importance and truth values. However, the desired modifications of the AtomTable were not undertaken, and this has been left for ongoing work.

Code: lp:~jeremy-schlatter/opencog/hypertable

OpenCog AI Projects

The objective of this project was to add support for Language Comprehension to the Virtual Agents controlled by OpenCog. These agents act in the Multiverse or RealXTend virtual worlds; most of our work this summer was in Multiverse, though in principle the system can be adapted to work in any virtual environment.

Two key aspects were considered:

  • Anaphor resolution: the Virtual Agents are capable of "listening" to sentences written in English and, using their knowledge about the physical environment in which they are situated, identifying the elements mentioned in a given sentence.

  • Command resolution: the Virtual Agent is capable of understanding what a verbal command means and executing it.
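The reference-resolution idea behind the first aspect can be sketched as matching a referring expression against the agent's knowledge of nearby objects. This toy example is only a conceptual illustration; the real system works from RelEx's parse output and OpenCog's AtomTable, not string matching.

```python
# Toy sketch of reference resolution: match a noun phrase from an
# English sentence against the agent's knowledge of the environment.
# The entity records below are invented for illustration.

environment = [
    {"id": "ball_1", "type": "ball", "color": "red"},
    {"id": "ball_2", "type": "ball", "color": "blue"},
    {"id": "stick_7", "type": "stick", "color": "brown"},
]

def resolve_reference(noun, modifiers, environment):
    """Return ids of entities whose type matches the noun and whose
    attributes match every modifier mentioned in the sentence."""
    return [e["id"] for e in environment
            if e["type"] == noun
            and all(m in e.values() for m in modifiers)]

# "Grab the red ball" -> noun "ball", modifier "red"
print(resolve_reference("ball", ["red"], environment))  # ['ball_1']
```

When more than one candidate matches (e.g. "the ball" with two balls present), the real system can draw on further context, such as which object the speaker recently mentioned or is near.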

The project was fully implemented and integrated with OpenCog, and some examples of the functionality are shown in the videos at

Final Documentation:


Revisions: 3236, 3237, 3238, 3248 and 3251
Multiverse Proxy:
Revisions: 7, 8, 49, 50

  • Ruiting Lian - Natural Language Generation using RelEx and the Link Parser

The goal of this project was to take the semi-functional natural language generation component of OpenCog (NLGen) and refactor and improve it into a functional English language generation subsystem. This was accomplished via refactoring and rewriting a significant percentage of the code. The NLGen system is now capable of generating simple English sentences, and it has been wrapped in a standalone server and integrated with OpenCog's CogServer, allowing OpenCog to generate such sentences based on the semantic knowledge in its AtomTable knowledge store.

Work on NLGen is ongoing, aimed at allowing it to more adeptly generate complex sentences from complex sets of semantic Atoms.


The aim of this project was to find classes of words which behave syntactically similarly, using the link parser to define features of a word. Among other goals, we intended to increase coverage of the link parser in this way.

The challenge was that a word can have thousands of features. In spite of this, we succeeded in forming classes using clustering techniques and dimensionality reduction, together with careful analysis. These classes were integrated with the link parser, succeeding at the goal of improving its coverage!
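The core idea can be sketched as grouping words whose connector-feature sets overlap strongly. The feature sets and similarity threshold below are invented for illustration; real link-grammar words can have thousands of such features, which is why dimensionality reduction was needed.

```python
# Illustrative sketch: cluster words whose link-parser connector
# features overlap strongly (Jaccard similarity). Feature sets are
# invented for illustration only.

features = {
    "cat":  {"Ds-", "Ss+", "Os-"},
    "dog":  {"Ds-", "Ss+", "Os-"},
    "run":  {"Sp-", "I-"},
    "jump": {"Sp-", "I-"},
    "the":  {"D+"},
}

def jaccard(a, b):
    return len(a & b) / len(a | b)

def cluster(features, threshold=0.5):
    """Greedy single-pass clustering: put each word into the first
    cluster whose representative is similar enough, else start a
    new cluster."""
    clusters = []
    for word, feats in features.items():
        for c in clusters:
            if jaccard(feats, features[c[0]]) >= threshold:
                c.append(word)
                break
        else:
            clusters.append([word])
    return clusters

print(cluster(features))  # [['cat', 'dog'], ['run', 'jump'], ['the']]
```

The actual project used more sophisticated clustering and dimension-reduction methods than this greedy pass, but the underlying notion of "syntactically similar" is the same: shared connector features.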


The goal of this project was to make MOSES smarter by integrating Building Block Hill Climbing (BBHC) and Simulated Annealing (SA) into MOSES's optimization step.

The project was partially finished and work on it is ongoing. SA was implemented for MOSES, but the BBHC method is still not integrated. Another uncompleted task, expected to be completed in September or October 2009, was to make the current hill-climbing within MOSES support continuous variables.
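For readers unfamiliar with the technique, here is a generic simulated-annealing sketch of the kind of optimization step that was added: worse candidate solutions are accepted probabilistically at high temperature to escape local optima, and acceptance tightens as the temperature cools. This is a self-contained illustration on a toy bitstring problem, not MOSES's actual code.

```python
import math
import random

# Generic simulated-annealing sketch (illustration only, not MOSES code):
# accept improvements always; accept worsenings with Boltzmann
# probability exp(delta / temp), cooling the temperature each step.

def simulated_annealing(score, start, neighbour, t0=10.0, cooling=0.95,
                        steps=500, rng=random.Random(0)):
    current, best = start, start
    temp = t0
    for _ in range(steps):
        cand = neighbour(current, rng)
        delta = score(cand) - score(current)
        if delta >= 0 or rng.random() < math.exp(delta / temp):
            current = cand
        if score(current) > score(best):
            best = current
        temp *= cooling  # geometric cooling schedule
    return best

# Toy problem: maximize the number of 1-bits in a bitstring
def score(bits):
    return sum(bits)

def neighbour(bits, rng):
    i = rng.randrange(len(bits))
    return bits[:i] + [1 - bits[i]] + bits[i + 1:]

best = simulated_annealing(score, start=[0] * 12, neighbour=neighbour)
print(score(best))  # should reach or closely approach the optimum of 12
```

In MOSES the candidate solutions are program encodings rather than raw bitstrings, and the scoring function is the program's fitness, but the accept/cool loop has the same shape.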

In all, this summer project made a good start on a difficult problem, and the work will continue!


The purpose of this project was to extend MOSES, which typically evolves program trees, to evolve recurrent neural networks. The challenge was to find a suitable representation and reduction rules such that MOSES could efficiently evolve solutions to challenging benchmark tasks.

While much thought went into choosing appropriate representation and reduction rules, which were then implemented, the results at the end of the project were not better than existing methods.

However, due to time limitations not all the obvious avenues for improvement and development were tried -- for instance, replacement of univariate search with multivariate search (such as simulated annealing) within MOSES was not attempted. So we look forward to seeing this work continued in a subsequent GSoC project or otherwise.

Preliminary Documentation Media:Moses_rnn_doc.pdf


OpenBiomind Project

OpenBiomind is already a powerful tool for geneticists. However, it need not be limited to analyzing gene sequences, microarray data, and related genetic datasets: neurobiologists are also generating large datasets. The goal of this project was to extend OpenBiomind to analyze such neurobiological data.

The NIfTI-1 format for fMRI data was integrated with OpenBiomind, and the libsvm library was also integrated, so as to enable SVM classification of fMRI datasets. A great deal of time was spent on a class that can stream datasets from disk, so that OpenBiomind can analyze datasets of any size. Based on this work, some initial experiments were done using OpenBiomind to analyze test fMRI datasets. However, high-accuracy results were not yet obtained, because there was not sufficient time during the summer to implement data reduction algorithms such as PCA, which are needed to provide machine learning algorithms such as SVM with a sufficiently small number of features. Once the data reduction algorithms are added, we anticipate OpenBiomind will be a powerful tool for fMRI data analysis.
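The missing data-reduction step matters because fMRI scans have far more voxels (features) than samples, which overwhelms SVM-style learners. PCA is the method named above; as a simpler, self-contained stand-in, the sketch below keeps only the k highest-variance features, a crude but common first-pass reduction. All names and the toy data are invented for illustration.

```python
from statistics import pvariance

# Simple stand-in for the data reduction step discussed above:
# keep only the k features with the highest variance across samples.
# (The project intends PCA; this is a simpler illustrative reduction.)

def select_top_variance_features(samples, k):
    """samples: list of equal-length feature vectors (one per scan).
    Returns the indices of the k highest-variance features, sorted."""
    n_features = len(samples[0])
    variances = [pvariance([s[i] for s in samples])
                 for i in range(n_features)]
    ranked = sorted(range(n_features),
                    key=lambda i: variances[i], reverse=True)
    return sorted(ranked[:k])

def reduce(samples, keep):
    """Project each sample onto the kept feature indices."""
    return [[s[i] for i in keep] for s in samples]

# Four toy "scans" with five "voxels"; only voxels 1 and 3 vary.
scans = [
    [1.0, 0.2, 5.0, 10.0, 3.0],
    [1.0, 0.9, 5.0, 20.0, 3.0],
    [1.0, 0.1, 5.0, 30.0, 3.0],
    [1.0, 0.8, 5.0, 40.0, 3.0],
]
keep = select_top_variance_features(scans, k=2)
print(keep)                    # [1, 3]
print(reduce(scans, keep)[0])  # [0.2, 10.0]
```

The reduced vectors, rather than the raw voxel arrays, would then be fed to the SVM classifier; PCA would additionally decorrelate the kept dimensions, which plain variance selection does not.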