GSoC 2015 Project Ideas
This page gathers some project suggestions for OpenCog's participation in Google Summer of Code 2015.
If you're looking for stuff to do, see Suggested Projects instead.
These are not the only projects with a chance of being accepted for GSoC 2015, but they are projects that somebody qualified is particularly eager to mentor. You may propose any project: many of the ideas listed here are poorly formed, and really need to be fleshed out in much greater detail. Being able to do so would be particularly noteworthy!
To apply, see GSoC Student Application Template.
The projects here are not listed in any sort of priority order.
- 1 AI Infrastructure
- 1.1 Implement a Neo4J Backing Store for OpenCog
- 1.2 Web UI for OpenCog NLP pipeline
- 1.3 Performance Measurement Suite
- 1.4 OpenCog Visualization Workbench
- 1.5 Haskell Bindings for OpenCog (Possible mentor: Nil Geisweiller)
- 1.6 Prototype Haskell Atomspace and/or MOSES Deme
- 1.7 User defined indexes for the Atomspace
- 1.8 Support higher-order types in Atoms
- 2 Embodiment
- 3 AI Algorithms
- 3.1 AtomSpace
- 3.2 MOSES
- 3.3 Probabilistic Logic Networks
- 3.4 Natural Language Processing
- 3.4.1 Unsupervised Language Learning.
- 3.4.2 Microplanning
- 3.4.3 PLN Inference on extracted semantic relationships
- 3.4.4 Implementing limited-window search in the link parser
- 3.4.5 Implement simplified Word grammar parsing in the Atomspace
- 3.4.6 Extending the link parser to handle Quoted text, long-range structure, hierarchical structure
- 3.4.7 Enhancing the NL comprehension pipeline to handle biological and biomedical texts
- 3.5 Machine Vision
- 3.6 Concept Formation
- 4 Other Stuff
Implement a Neo4J Backing Store for OpenCog
See Neo4j_Backing_Store for some ideas.
Web UI for OpenCog NLP pipeline
We have a nice NL comprehension pipeline that parses English sentences into semantic graphs, and also turns semantic graphs into English sentences.
It would be good to have a Web UI that demonstrates this functionality and allows anyone who wishes to do so to use it online. This could be integrated with the Atomspace visualizer as well.
Performance Measurement Suite
Many contemplated changes to the OpenCog infrastructure have a real impact on execution time and the amount of memory used. A performance suite would collect into one place a large variety of different test cases, instrument them properly so as to measure speed and memory usage, and then report the results, in a completely automated fashion. An existing but simplistic benchmark utility for the AtomSpace exists in opencog/benchmark.
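As a sketch of what one automated benchmark case might look like, the snippet below times a workload and records its peak memory using Python's standard timeit and tracemalloc modules. The insert_many workload is a hypothetical stand-in for a real AtomSpace operation driven through the bindings; none of these names come from the existing opencog/benchmark utility.

```python
import timeit
import tracemalloc

def benchmark(name, func, repeat=3):
    """Time `func` (best of `repeat` runs) and record its peak
    memory usage, returning one automated result record."""
    best = min(timeit.repeat(func, repeat=repeat, number=1))
    tracemalloc.start()
    func()
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return {"name": name, "seconds": best, "peak_bytes": peak}

# Hypothetical workload standing in for a real AtomSpace test case,
# e.g. bulk node insertion through the bindings.
def insert_many():
    store = {}
    for i in range(10000):
        store[("ConceptNode", str(i))] = i
    return store

result = benchmark("insert_many", insert_many)
print(result["name"], result["seconds"] > 0, result["peak_bytes"] > 0)
```

A real suite would run many such records and report them together, so that regressions in speed or memory show up automatically.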
OpenCog Visualization Workbench
The AtomSpace visualiser only addresses one aspect of a UI for understanding OpenCog dynamics. A generic GUI workbench, which can handle the parsing of log files and dynamic configuration of a running OpenCog instance, would be useful.
Other features that such a workbench might have:
- graphing dynamics through time:
- total atoms of various types
- distribution of truthvalues
- distribution of atom importance (STI/LTI)
- average coordination number (the number of links each atom participates in)
- network diameter
- # of disjoint clusters
- visualising and controlling expansion of the backward inference tree of PLN.
- active MindAgents and their resource usage (memory, CPU)
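Several of the statistics listed above are straightforward graph computations. As an illustrative sketch (on a toy adjacency-list graph, not the real Atomspace API), the average coordination number and network diameter might be computed like this:

```python
from collections import deque

def avg_degree(adj):
    """Average coordination number: mean number of links per atom."""
    return sum(len(nbrs) for nbrs in adj.values()) / len(adj)

def diameter(adj):
    """Longest shortest path over a small connected graph, via BFS
    from every node (fine for a workbench display, not for huge graphs)."""
    def eccentricity(src):
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        return max(dist.values())
    return max(eccentricity(n) for n in adj)

# Toy atom graph: a path A - B - C - D.
g = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
print(avg_degree(g), diameter(g))
```

Sampling or incremental maintenance of these statistics would be needed to graph them through time on a large, changing Atomspace.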
This could also be integrated with, or made to incorporate the functions of, the OpenPsi GUI that Zhenhua Cai has developed. That UI uses:
- zeromq messages for monitoring state changes from OpenCog.
- implementation in Python.
- pyqt for widgets.
Haskell Bindings for OpenCog (Possible mentor: Nil Geisweiller)
We would like to have Haskell bindings comparable to the existing Scheme and Python bindings. This should not be extremely hard for someone with strong knowledge of both Haskell and C++. Though as a first step, instead of binding Haskell directly to C++, it could be easier to bind it to Scheme (or even Python), generating and interpreting the corresponding Scheme code and communicating with OpenCog via the Scheme shell.
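To illustrate the bind-via-Scheme idea, here is a minimal sketch (in Python for brevity; the real binding would be in Haskell) of the serialization half of such a bridge: the foreign-language side renders atom constructions as Scheme source text, which would then be shipped to the Scheme shell for evaluation. The scm_atom helper is hypothetical, not part of any existing API.

```python
def scm_atom(atom_type, name=None, outgoing=()):
    """Render an atom construction as Scheme source text: either a
    node with a name, or a link with an outgoing set of sub-expressions."""
    if name is not None:
        return '(%s "%s")' % (atom_type, name)
    return "(%s %s)" % (atom_type, " ".join(outgoing))

# Build the text of an InheritanceLink between two ConceptNodes;
# a binding would send this string to the Scheme shell to evaluate.
expr = scm_atom("InheritanceLink", outgoing=(
    scm_atom("ConceptNode", "cat"),
    scm_atom("ConceptNode", "animal"),
))
print(expr)
```

The other half of the bridge, parsing the shell's replies back into host-language values, is the harder part and is where most of the design work would go.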
Prototype Haskell Atomspace and/or MOSES Deme
To explore the possibility of implementing components of OpenCog in Haskell in the future, we would like to see prototype versions of the Atomspace and of a MOSES deme (evolving primitive recursive functions expressed using fold) implemented in Haskell.
User defined indexes for the Atomspace
The AtomSpace is essentially a certain kind of graph database. Like other databases, it should allow users to define the kinds of structures that they need to be able to quickly locate. This could be done with a user-defined index.
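A minimal sketch of the idea, using plain tuples in place of real Atoms (the UserIndex class and its methods are illustrative assumptions; a real implementation would hook into AtomSpace insertion and removal):

```python
class UserIndex:
    """Index atoms by a user-supplied key function, so that lookups
    by that key avoid a scan over the whole atom store."""
    def __init__(self, key_fn):
        self.key_fn = key_fn
        self.buckets = {}

    def insert(self, atom):
        self.buckets.setdefault(self.key_fn(atom), []).append(atom)

    def lookup(self, key):
        return self.buckets.get(key, [])

# Index atoms by their type (here, the first tuple element).
by_type = UserIndex(lambda atom: atom[0])
by_type.insert(("ConceptNode", "cat"))
by_type.insert(("ConceptNode", "dog"))
by_type.insert(("PredicateNode", "chases"))
print(len(by_type.lookup("ConceptNode")))
```

The user-facing design question is how such key functions get registered and type-checked, which is why this ties into the type checking proposal below.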
This requires work to be done in conjunction with the type checking proposal. The type checker, including unit tests, should take no more than two or three weeks for someone unfamiliar with the Atomspace.
This is mostly straight-ahead coding, little/no AI research needed. The hard part will be understanding the existing code base, to understand where/how to slot this in.
Support higher-order types in Atoms
- Caution: Adding type checking should take no more than 2-3 weeks for someone inexperienced with opencog. Adding full support for SignatureLink in various subsystems should take no more than a few more weeks. What more is needed to implement this proposal?
Suppose you have a SchemaNode S with behavior like (using a Haskell-like notation, where A-->B means a function mapping A to B)
[[Number --> Concept] --> [Concept --> Concept] ] --> Number
This SchemaNode S has output type SchemaNode. However, this is not that precise a statement. S actually outputs SchemaNodes that have particular input restrictions and a particular output type. These can be described with the SignatureLink.
So, ideally, a SchemaNode or PredicateNode would come with an optional SignatureLink object indicating what higher-order type they embody.
The SignatureLink object is a DAG indicating a higher-order type. One could introduce a TypeMapLink that associates signatures with specific atoms, and then store higher-order type objects in the Atomspace.
Right now type restrictions are left implicit in the Atomspace, which is generally OK since we're mostly dealing with simple cases at the moment. e.g. an AndLink between two or more ConceptNodes outputs a ConceptNode, so AndLink has type
List(Concept) --> Concept
However, to extend PLN, concept creation and other functions to deal with higher-order types effectively, explicit typing and type checking will probably be needed. This will be useful for abstract reasoning, not just mathematical reasoning but e.g. for analogical reasoning between significantly different domains, or experiential learning of symbol groundings or linguistic rules.
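As a sketch of what explicit higher-order type checking could look like, the snippet below encodes signatures as nested tuples and checks one application against the signature from the example above. The tuple encoding and the apply_type function are illustrative assumptions, not an existing OpenCog API.

```python
# A higher-order type is either a base type name (a string) or a
# triple ("->", argument_type, result_type).
NUM = "Number"
CON = "Concept"

def apply_type(fn_type, arg_type):
    """Return the output type of applying a schema with signature
    `fn_type` to an argument of type `arg_type`; raise TypeError on
    a mismatch.  An illustrative checker, not an existing API."""
    kind, arg, result = fn_type
    assert kind == "->", "can only apply function types"
    if arg != arg_type:
        raise TypeError("expected %r, got %r" % (arg, arg_type))
    return result

# The signature from the text:
#   [[Number --> Concept] --> [Concept --> Concept]] --> Number
sig = ("->", ("->", ("->", NUM, CON), ("->", CON, CON)), NUM)

# Applying S to a schema of the right type yields a Number.
out = apply_type(sig, ("->", ("->", NUM, CON), ("->", CON, CON)))
print(out)
```

A SignatureLink would play the role of the nested tuple here, with the checker walking two DAGs instead of two tuples.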
Enable OpenCog to control the Eva Robot from Hanson Robotics
There is a simulator for the Eva humanoid robot head created by Hanson Robotics. The robot head is currently controlled by Owyl behavior trees.
As a first step toward controlling the robot intelligently by OpenCog, it would be useful to
- port the Owyl behavior trees into OpenCog, re-expressing them as combinations of Atoms
- create a ROS bridge between OpenCog and the Eva robot, enabling appropriate messages to be passed back and forth
Build a Minecraft interface for OpenCog
Build an integration between OpenCog and a stable, well-documented open-source Minecraft client API (such as Mineflayer or Spock), in order to allow use of any server that implements the Minecraft protocol, including the official Minecraft server as well as open-source implementations. This existing standard environment could then be used to build simulation environments for Artificial General Intelligence experiments, without extensive time being spent engineering new simulation software.
The project would involve writing interface code between a Minecraft client API and OpenCog and would also involve replacing some of the existing OpenCog embodiment code that implements messaging with external systems with a simplified and modernized implementation. The project would be an opportunity to utilize the software design concepts of message queues, loose coupling and object serialization for external consumers. Ideally, a candidate should have experience in both Python and C++.
For more ideas about what this would look like, refer to this thesis paper on Minecraft as a simulation environment for Artificial General Intelligence: http://tmi.pst.ifi.lmu.de/thesis_results/0000/0073/thesis_final.pdf Note that this thesis was written in the context of a different project, and many of the implementation decisions contained within will be different from this project proposal; however, the document does include many nice explanations and examples that give a good general overview of the problem space.
- Alternative: use Spock to get Python 3 bindings to Minecraft, and then use rospy to provide a ROS interface to Minecraft...
Write a reduct library for the AtomSpace (Possible mentor: Nil Geisweiller)
MOSES already relies on a powerful program reduction library, which reduces programs into a normal form so as to avoid re-evaluating semantically equivalent programs such as (X and Y), (Y and X), (Y and (Y and Z)), etc. The idea is to do the same for the AtomSpace, which likewise admits redundant representations of the same piece of knowledge, such as

 AndLink
     ConceptNode "A"
     ConceptNode "B"

versus

 AndLink
     ConceptNode "A"
     AndLink
         ConceptNode "A"
         ConceptNode "B"
So the idea is to start writing a library to enable such reduction. There are several ways to go about it. Once the Haskell bindings are available, writing such a tool would likely be much easier, thanks to Haskell's support for pattern matching. One could write a mock Haskell AtomSpace and then start writing reduction rules, as a first exploratory step. Alternatively, the tool could be written in Scheme, relying on Linas's Pattern Matcher.
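As a first exploratory sketch of what such reduction rules might look like (in Python rather than Haskell or Scheme, with trees as nested tuples; all names here are illustrative), the function below flattens nested AndLinks, removes duplicates, and sorts operands, so that semantically equivalent trees normalize to the same form:

```python
def reduce_and(tree):
    """Normalize an 'and' tree: recursively reduce children, flatten
    nested ands, drop duplicate operands, and sort operands, so that
    semantically equal trees compare equal."""
    if not isinstance(tree, tuple):
        return tree  # a leaf (variable or concept name)
    op, *args = tree
    args = [reduce_and(a) for a in args]
    if op == "and":
        flat = []
        for a in args:
            if isinstance(a, tuple) and a[0] == "and":
                flat.extend(a[1:])  # flatten (and X (and Y Z))
            else:
                flat.append(a)
        return ("and",) + tuple(sorted(set(flat), key=repr))
    return (op,) + tuple(args)

# (X and Y), (Y and X) and (Y and (Y and X)) all normalize alike.
a = reduce_and(("and", "X", "Y"))
b = reduce_and(("and", "Y", "X"))
c = reduce_and(("and", "Y", ("and", "Y", "X")))
print(a == b == c)
```

A real library would carry a whole rule set (absorption, distribution, contin-algebra rules, etc.) and apply it to Atoms rather than tuples; this shows only the flatten/dedupe/sort core of one rule.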
Provide extensible Combo and Reduct API
The Combo programming language is a small, lisp-like language used inside of MOSES for representing the programs that MOSES learns. Combo is used instead of lisp, scheme, python or any other popular language because the most important step of evolving program trees is being able to reduce them, to find simpler, smaller but otherwise equivalent program trees. None of the existing, popular programming languages have a publicly available, usable API to perform program reduction (even though this is, de facto, a part of what compilers and byte-code machines do). Thus, in order to be able to perform reduction, Combo was invented.
Combo expressions are trees consisting of operators that combine terms; terms may be constants, variables or trees. Combo currently supports four kinds of algebras: boolean algebras, where all terms are true-or-false valued; the "contin" algebra, where all terms are real-valued; an action-perception algebra, which is useful for evolving small programs that control robot actions in response to perception stimuli; and a neural-net algebra used to represent simple neural nets.
A core issue within the current combo/reduct infrastructure is that there is no easy way to add new kinds of algebras into the system, without hard-coding them into assorted overly-fat C++ structures and header files. This makes extending combo difficult, and the resulting system fragile and over-weight. What is needed is a way of creating pluggable extensions to combo/reduct. This requires making the type system pluggable, the vertex object pluggable, and the assorted reduction subsystems pluggable.
A successful refactoring would move the neural-net and perception-action parts of the code into their own subsystems/extensions. A good place to start might be to finish abstracting away the "ant" example -- this is a demo where a virtual ant learns navigation patterns that are effective for foraging (finding virtual food).
Add Library of Known Routines
Currently, MOSES constructs programs out of a very minimal collection of functions and operators; for example, boolean expressions are constructed entirely from and, or, not. However, learning of programs could happen much faster if more complex primitives were provided: for example, xor, a half-adder, a multiplexer, etc. Rather than hard-code such functions into combo, this project calls for creating a generic library that could hold such functions, and the infrastructure to allow MOSES to sample from the library when building exemplars.
This is a needed step, on the way to implementing the pleasure algorithm below, as well as for transfer learning, etc.
Proof that it is working would be defining xor in some file (as Combo, viz. xor($1,$2) := and(or($1,$2), not(and($1,$2)))) and then showing that this definition was actually used in solving the k-parity problem.
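To make the intended library definition concrete, here is a small sketch (in Python, with the Combo primitives modeled as plain functions; the parity3 helper is illustrative) that defines xor exactly as above and verifies that a 3-parity function built from it matches the brute-force definition of parity:

```python
from itertools import product

# The three Combo boolean primitives, modeled as Python functions.
def _and(x, y): return x and y
def _or(x, y): return x or y
def _not(x): return not x

# The library definition from the text:
#   xor($1,$2) := and(or($1,$2), not(and($1,$2)))
def xor(x, y):
    return _and(_or(x, y), _not(_and(x, y)))

def parity3(a, b, c):
    """k-parity for k=3, expressed via the library's xor."""
    return xor(xor(a, b), c)

# Verify against the brute-force definition: true iff an odd
# number of inputs is true.
ok = all(parity3(a, b, c) == ((a + b + c) % 2 == 1)
         for a, b, c in product([False, True], repeat=3))
print(ok)
```

The actual project would express xor in Combo syntax in a library file and have MOSES draw on it while building exemplars, rather than hand-wiring it as done here.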
Currently, the combo boolean primitives are and, or, not. Moses can learn to model input data (tables of boolean values, with hundreds or thousands of rows, and dozens or hundreds of columns) by randomly assembling and trying out trees of these three boolean ops.
If the input data just happens to be exclusive-or truth tables, or multiplexer tables, or adders, or whatever, then, given enough time and effort, MOSES can discover the tree expressions that correspond to the truth tables. Unfortunately, it can take a huge (exponentially, combinatorially large) amount of time to do this.
So, the core idea is to short-cut the learning process by enriching the three primitives and/or/not with some extra ones. That way, instead of trying out random combinations of and/or/not, it would explore random combinations of and/or/not/other-stuff.
There are two ways to do this:
1) hand-code some new a-priori trees.
2) automatically discover and remember useful trees.
The second way is strongly preferred. So, that is the big picture. But it's open season on all of the details. How does discovery happen? What gets stored after being discovered? What format is used to store things? After being discovered, how does MOSES draw upon this knowledge base? Does it keep track of how useful some bit of "knowledge" is? How? What modifications are needed to the random-tree generator to make this all work?
These all need answers, and all need to be converted into C++ code, and it all needs to be done in a way that is efficient, well-designed, doesn't perturb the existing code base by too much, looks elegant, etc.
AtomSpace-ish Pleasure Algorithm (Possible mentor: Nil Geisweiller)
Start implementing (even partially) a version of the PLEASURE algorithm on the AtomSpace. Since MOSES models can be exported to the AtomSpace, many processes can take place there, such as the Pattern Miner, PLN, etc. The idea is to attempt to leverage that to implement one or a few ideas of the PLEASURE algorithm for program learning.
A related goal is to cause MOSES to generalize across problem instances, so that what it has learned across multiple problem instances can be used to help prime its learning in new problem instances. This can be done by extending the probabilistic model building step to span multiple generations, but this poses a number of subsidiary problems, and requires integration of some sort of sophisticated attention allocation method into MOSES to tell it which patterns observed in which prior problem instances to pay attention to.
A much simpler approach is to exchange the MOSES inner loop with the outer loop. Unfortunately, this would be a major tear-up of the code base. Exchanging the order of the loops would allow an endless supply of new stimulus to be continuously processed by MOSES, in essence performing continuous, ongoing transfer learning. This should be contrasted with the current design: the input is a fixed-size, fixed-content table, and during training, it is iterated over and over, within the inner loop. If the order of the loops were reversed, then equivalent function could be obtained simply by replaying the same input endlessly. However, there would now be the option to NOT replay the same input over and over; rather, it could slowly mutate, and the learned structures would mutate along with it. This is a much more phenomenologically and biologically correct approach to learning from input stimulus.
MOSES: Improved hBOA
MOSES consists of four critical aspects: deme management, program tree reduction, representation-building, and population modeling. For the latter, the hBOA algorithm (invented by Martin Pelikan in his 2002 PhD thesis) is currently used, but we've found it not to be optimal in this context. So there is room for experimentation in replacing hBOA with a different algorithm; for instance, a variant of simulated annealing has been suggested, as has a pattern-recognition approach similar to LZ compression. A student with some familiarity with evolutionary learning, probability theory and machine learning may enjoy experimenting with alternatives to hBOA so as to help turn MOSES into a super-efficient automated program learning framework. It already works quite well, dramatically outperforming GP, but we believe that with some attention to the hBOA component it can be improved dramatically.
Probabilistic Logic Networks
Uncertain temporal reasoning using fuzzy Allen Interval Algebra
Code exists for doing temporal reasoning within PLN, using a fuzzy/probabilistic version of Allen Interval Algebra. But it's not integrated into the main PLN system, and some issues remain. This needs to be thought through again.
PLN: Reasoning about Biological Interactions
There are many databases denoting interactions between genes, proteins, chemical compounds and other biological entities. Some of these have recently been imported into OpenCog's AtomSpace format (after some scripting to reformat them), so they are ready to be reasoned about using PLN.
An example of the value of this kind of reasoning would be: using analogical inference to approximately transfer knowledge from one organism to another, e.g. from flies or mice to humans.
Implement and integrate Imprecise Probability truth value math
The theory of PLN includes truth value formulas based on "indefinite probabilities", a kind of imprecise (interval) probability. But the current code doesn't use these. Implementing truth value formulas using indefinite probabilities would be a good project for someone with a math background, an interest in logic, and some C++ or python programming skill.
Planning via Temporal Logic & Consistency-Checking
A general and elegant way to do planning using PLN would be
- Use explicit temporal logic, so that the temporal dependencies between different actions are explicitly encoded in Atoms using relations like Before, After, During, etc.
- Improve the backward chainer so that, when it spawns a new Link in its chaining process, it checks whether this link is logically consistent with other promising Links occurring elsewhere in the backward chaining tree... (where if A and B are uncertain, logical inconsistency means that (A and B) has a much lower truth value strength than one would expect via an independence assumption.)
The search for logical consistency can be done heuristically, via starting at the top of the back-inference tree and working down. If quick inference says that a certain Link L is consistent with the link at a certain node N in the BIT, then consistency-checking with the children of N is probably not necessary.
This approach would subsume the heuristics generally used in planning algorithms into a generally sensible method for inference control...
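The independence-based notion of inconsistency mentioned above can be sketched numerically: under independence one expects the strength of (A and B) to be near the product of the strengths of A and B, so a joint strength far below that product signals inconsistency. The function below and its tolerance ratio are illustrative assumptions, not PLN's actual truth value formulas.

```python
def consistent(p_a, p_b, p_ab, tolerance=0.5):
    """Heuristic consistency check for uncertain links A and B:
    flag inconsistency when the observed strength of (A and B)
    falls far below the independence expectation P(A) * P(B).
    `tolerance` is an assumed tunable ratio."""
    expected = p_a * p_b
    return p_ab >= tolerance * expected

# Plausible pair: joint strength near the independence product.
print(consistent(0.8, 0.7, 0.5))   # 0.5  >= 0.5 * 0.56
# Contradictory pair: joint strength far below expectation.
print(consistent(0.8, 0.7, 0.05))  # 0.05 <  0.5 * 0.56
```

In the backward chainer, such a check would be run between a newly spawned Link and promising Links elsewhere in the back-inference tree, pruning from the top down as described above.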
Natural Language Processing
Unsupervised Language Learning.
A proposal for this is described in the paper: Learning Language from a Large (Unannotated) Corpus on ArXiv. This project is described in greater detail on the language learning wiki page. There are several GSOC-2015 Candidate Projects described on that page.
Microplanning
A microplanner for OpenCog has recently been implemented; it divides Atom-sets representing cognitive content into smaller Atom-sets suitable to be turned into sentences, and also deals with the insertion of anaphora. While the current microplanner basically works in simple situations, it's quite crude, and could use extension and generalization. This is a good project for a student who is a strong programmer and interested in the intersection of linguistics and cognition.
PLN Inference on extracted semantic relationships
Currently the RelEx2Logic (R2L) subsystem of OpenCog takes in a sentence, and outputs a set of logical relationships expressing the semantics of the sentence. It is possible to take the logical relationships extracted from multiple sentences, and combine them using a logical reasoning engine, to see what conclusions can be derived. Thus, for example, the English-language input "Aristotle is a man. Men are mortal." should allow the deduction of "Aristotle is mortal."
Some prototype experiments along these lines were performed in 2006, using sentences contained in PubMed abstracts (see paper). But no systematic software approach was ever implemented.
This project is appropriate for a student who is interested in both computational linguistics and logical inference, and has some knowledge of predicate logic.
A simple but detailed example of language-based inference using RelEx and PLN is given here: File:RelEx PLN Example Inference.pdf. This can be done better now, using RelEx2Logic.
See also NLP-PLN-NLGen pipeline.
Implementing limited-window search in the link parser
Implement left-to-right, limited-window search through the parse space. The parser currently examines sentences as a whole, which means that the parsing of long sentences becomes very slow (approximately as N^3, with N the number of words in the sentence). Implementing a "window" to limit searches for connections between distant words should dramatically improve parse performance. Not only that, but it makes the parser more "neurologically plausible", by limiting difficult, long-range correlations between words. The window algorithm can be thought of as a kind of Viterbi decoder.
This project is technically challenging, and requires a reasonable grounding in the theory of context-free grammars and at least a passing acquaintance with the ideas of chart parsing, the forward-backward algorithm, and the Viterbi algorithm. It requires a deep dive into the C code that implements the parse algorithms of Link Grammar. This is the most critical outstanding fix needed for Link Grammar; slow performance on long sentences is the biggest thing holding back Link Grammar at this time.
This would be a fairly difficult task, but might be right for a student with appropriate skills and background.
Implement simplified Word grammar parsing in the Atomspace
The idea here would be to do parsing in the Atomspace rather than in separate external software like the link parser -- but using the link grammar dictionary. (The link grammar dictionary has already been fed into the Atomspace as Nodes and Links.)
The parsing process I'm envisioning would proceed forward through a sentence, and when it encounters a word W, would try to find a way to link W to words occurring before W in the sentence, consistent with the links already drawn among the words before W. If this is not possible, the process would then backtrack and explore alternate linkages among the words occurring before W.
Implementation-wise, this in-Atomspace parser would be a chainer somewhat similar to the existing PLN backward chainer, but customized for the parsing process.
This project would require some intuition for linguistics, plus good C++ skills (e.g. it will require making a custom callback for the Pattern Matcher in C++).
This is in a loose sense a "word grammar style" parser, but using the link grammar dictionary. The first version would utilize only syntactic links; but of course, since it's all taking place in the Atomspace, there is also the option to use semantic information generated mid-parse from syntactic links to prioritize possible linkages.
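The backtracking process described above can be sketched as follows: a toy Python model, not the Pattern Matcher implementation the project calls for. Each word links to one earlier word, links may not cross, and the parser backtracks when a word cannot be attached; the allowed-pair set stands in for the link grammar dictionary, and all names here are illustrative.

```python
def crosses(l1, l2):
    """Two links (as index pairs) cross iff they interleave."""
    (a, b), (c, d) = sorted([l1, l2])
    return a < c < b < d

def parse(words, allowed):
    """Left-to-right backtracking linker: each word after the first
    links to one earlier word, preferring the nearest; on failure,
    backtrack and try alternate linkages for earlier words."""
    def extend(i, links):
        if i == len(words):
            return links
        for j in range(i - 1, -1, -1):
            if (words[j], words[i]) in allowed:
                link = (j, i)
                if all(not crosses(link, l) for l in links):
                    result = extend(i + 1, links + [link])
                    if result is not None:
                        return result
        return None  # no attachment works: backtrack

    return extend(1, [])

# Toy dictionary of permitted head-word pairs.
allowed = {("the", "cat"), ("cat", "sat")}
print(parse(["the", "cat", "sat"], allowed))
```

The real project replaces the allowed-pair check with link grammar connector matching against the dictionary already stored in the Atomspace, and the recursion with a chainer using a custom Pattern Matcher callback.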
Extending the link parser to handle Quoted text, long-range structure, hierarchical structure
Currently, Link Grammar is unable to properly handle quoted text and dialogue. A mechanism is needed to disentangle the quoting from the quoted text, so that each can be parsed appropriately. This might be done with some amount of pre-processing (e.g. in RelEx), or possibly within Link Grammar itself.
It's somewhat unclear how to handle this within Link Grammar. It is somewhat related to the problems of morphology (parsing words as if they were "mini-sentences"), idioms (phrases that are treated as if they were single words), and set-phrase structures (if ... then ..., not only ... but also ...), which have a long-range structure similar to quoted text (he said ...).
Enhancing the NL comprehension pipeline to handle biological and biomedical texts
The OpenCog NLP pipeline -- the link parser, RelEx and RelEx2Logic -- handles a variety of simple English sentences well. However, parsing typical biomedical texts such as PubMed abstracts requires additional attention. Biomedical entity lists can easily be integrated with the link parser (and this has been done before), but attention must also be given to adjusting the link grammar dictionary and RelEx rule-bases to handle the particular syntactic and semantic constructs that are important in biological texts. This is heavier on linguistic intuition than programming skill, though it does require running complex software and editing linguistic rule-files with obscure syntax.
Using DeSTIN and SVM or Neural Nets to Effectively Classify Images
DeSTIN can be used as a feature generator for supervised image classification using tools like SVM, Neural Nets or MOSES.
Some work has been done on this, using e.g. the CIFAR image classification corpus. But the accuracy achieved so far is not that high. It would be good to have someone focus on tweaking and tuning DeSTIN, and doing appropriate image preprocessing, to get the classification accuracy higher. This is useful because it is a way of teaching us how to adjust DeSTIN to make it work better in general.
Connecting DeSTIN and OpenCog
Here is one AI-intensive idea for a project involving DeSTIN (by Ben Goertzel):
I want to feed input from a webcam into DeSTIN, and then (in abstracted form) into OpenCog. Toy problems like MNIST are not really interesting to me, because my research goal in this context is robot vision. So scalability is important.
There will be a "smart interface" between DeSTIN and OpenCog, that will basically
-- record a database of DeSTIN state vectors
-- identify patterns in that database using machine learning algorithms
-- output these patterns into OpenCog's probabilistic/symbolic knowledge base
The goal would be to get really simple stuff to work, like:
-- feed DeSTIN pictures of cars, motorcycles, bicycles, unicycles and trucks (videos would be great, but let's start with pictures)
-- see if the whole system results in OpenCog getting ConceptNodes for car, motorcycle, bicycle, unicycle and truck, with appropriate logical links between them like
Inheritance car motor_vehicle <.9>
Inheritance truck motor_vehicle <.9>
Inheritance bicycle motor_vehicle <.1>
Similarity bicycle unicycle <.7>
Similarity bicycle motorcycle <.7>
Similarity motorcycle car <.6>
Similarity bicycle car <.3>
Note that while I'm using English terms here, in this experiment the OpenCog ConceptNodes would have no English names and English words would not be involved at all -- I'm just using the English terms as shorthands.
This would be a first example of the bridging of DeSTIN's subsymbolic knowledge and OpenCog's symbolic knowledge, which would be pretty cool and a good start toward further work on intelligent vision ;)
Concept Creation from Predicates
The project here is to implement, and experiment with, creation of concepts from predicates and experimentation with these. For instance, if we have many links in the Atomspace of the form
 EvaluationLink
     PredicateNode "in"
     ConceptNode "Beijing"
     ConceptNode "China"

 EvaluationLink
     PredicateNode "in"
     ConceptNode "women"
     ConceptNode "China"

etc.
then a ConceptNode can be formed consisting of all $X so that
 EvaluationLink
     PredicateNode "in"
     ConceptNode $X
     ConceptNode "China"
These ConceptNodes can then be fed into PLN inference and conclusions drawn about them. These conclusions will lead to new links being formed, which can then be mined to form new concepts, etc.
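A minimal sketch of this extensional concept-formation step, with EvaluationLinks modeled as plain triples (form_concept is an illustrative name, not an existing API):

```python
def form_concept(evaluations, predicate, fixed):
    """Collect all $X such that EvaluationLink predicate ($X, fixed)
    holds: extensional concept creation from a predicate."""
    return {x for (pred, x, y) in evaluations
            if pred == predicate and y == fixed}

# Toy EvaluationLinks, as (predicate, arg1, arg2) triples.
evals = [
    ("in", "Beijing", "China"),
    ("in", "women", "China"),
    ("in", "Paris", "France"),
]
in_china = form_concept(evals, "in", "China")
print(sorted(in_china))
```

In the Atomspace the result set would become the extension of a new ConceptNode (with MemberLinks or similar), ready to be fed into PLN as described above.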
Implement conceptual blending as a heuristic for combining concepts, with a fitness function for a newly formed concept incorporating the quality of inferential conclusions derived from the concepts, and the quality of the MOSES classification rules learned using the concept.
The project is to integrate a clustering algorithm with OpenCog, so that a MindAgent operates in an ongoing way to create and maintain Atoms representing clusters in the Atomspace.
Lojban Interface to OpenCog
Lojban, a constructed language based on formal logic but suited for everyday human communication, would make an excellent tool for communication between humans and AIs. There is a Lojban parser, but it only does grammaticality checking; it doesn't export parse structures in a form that can be imported into an AI system like OpenCog.
So the project here is to build a Lojban comprehension and generation pipeline, which is much easier than doing this for any natural language. This is suitable for anyone who knows Lojban and is able to deal with complex formal grammar parsers.