Code Maturity Guide
This page summarizes information on the relative maturity levels of various parts of the OpenCog codebase. It was created by Ben Goertzel in late January 2014, and was most recently systematically updated by Ben in late 2014.
Definition of Terms
Inspired somewhat by earlier comments from Matt Chapman, we use the following criteria to describe code maturity levels here.
For current OpenCog development purposes, define an "end-user" as someone capable of operations such as:
- basic unix shell commands
- executing a python script with appropriate environment variables set
- running the Scheme shell
- running mind-agents from the cogserver console
That is, we are not talking about product end users, but rather about would-be OpenCog developers, or AI researchers or software developers interested in using OpenCog in some software project, etc.
The maturity levels used are:
- PROVEN: Proven in a real application. Provides functionality to an end-user, has been used in commercial products or services (or nonprofit products or services of commercial-grade quality).
- WORKS WELL: At or near the level of PROVEN, but hasn't actually been used in practical applications to a significant degree yet.
- WORKS: Provides functionality to an end-user, either independently or in concert with the Atomspace/Cogserver. No big problems with the functionality, but the code maybe isn't quite professional-grade and/or the functionality still has quirks. But it basically works.
- KINDA WORKS: Provides functionality to an end-user, either independently or in concert with the Atomspace/Cogserver. But this functionality may still be "research grade" and it's not quite clear when it will work effectively or not. To be in this status, there must at least be some test cases that can be run, and functionality evaluated.
- ALMOST WORKS: Doesn't currently work, or can't quickly/easily be verified to work. Currently broken, unmaintained, or not yet exposing any functionality to an end-user, but there is code in a publicly accessible repository -- and it either used to work, or was left off with very little remaining to be done.
- BROKEN: Code exists but is badly broken, and it would be a great deal of work to do anything with it.
- BADLY INCOMPLETE: Someone started but didn't come near to finishing.
- DOESN'T EXIST: No publicly available code yet.
- OBSOLETE: May have worked once upon a time, but the code has been archived or deleted. This happens when people lose interest and stop maintaining the code; it then bit-rots and fails to compile/run. A common reason for losing interest is that, in the end, the code never worked all that well to begin with.
Of course there will be many intermediate cases; the above is intended as a rough guide for the perplexed rather than as a scientific and rigorous ontology. One could quibble endlessly about these labels, but we just need some rough categories for practical use, so such argument seems beside the point.
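Purely for illustration, the ordering of these levels (from least to most mature, following the list above read bottom-to-top) can be sketched in a few lines of Python. The enum and helper below are hypothetical aids for thinking about the scale, not part of the OpenCog codebase:

```python
from enum import IntEnum

class Maturity(IntEnum):
    """Rough maturity scale from the definitions above; higher is more mature.
    (Hypothetical helper, not part of OpenCog.)"""
    OBSOLETE = 0
    DOESNT_EXIST = 1
    BADLY_INCOMPLETE = 2
    BROKEN = 3
    ALMOST_WORKS = 4
    KINDA_WORKS = 5
    WORKS = 6
    WORKS_WELL = 7
    PROVEN = 8

def at_least(level: Maturity, threshold: Maturity) -> bool:
    """True if a component meets or exceeds the given maturity threshold."""
    return level >= threshold

# Per the definitions, KINDA WORKS is the lowest level that guarantees
# runnable test cases:
print(at_least(Maturity.KINDA_WORKS, Maturity.KINDA_WORKS))   # True
print(at_least(Maturity.ALMOST_WORKS, Maturity.KINDA_WORKS))  # False
```

This is just one way to make the "rough guide" concrete; the boundary judgment calls discussed above still apply.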
Component Maturity Table
The below should be considered approximate, and if you are a developer who has better knowledge of some OpenCog component than is represented below, please make appropriate updates!
The components in the table are listed in order of decreasing maturity of their most mature subcomponent.
(Please nobody be offended if you don't like the label affixed to the component you're working on. These categories are not intended as value judgments, only as rough guides, and of course components may reside near a boundary between categories where judgment calls have to be made. This is why there is a comments field in the table.)
Components that are designed but not implemented are generally not listed here; this category includes most of the CogPrime AGI design. The nonexistent components that are listed here appear because someone might have reason to believe they exist (e.g. a version existed once but was never checked into the repository).
| Component | Subcomponent | Maturity | Comments |
|-----------|--------------|----------|----------|
| MOSES | for Boolean programs | PROVEN | PLOS ONE, 28 Jan 2014, "Predicting the Risk of Suicide by Analyzing the Text of Clinical Notes". MOSES has also been used for financial prediction, for biology data analysis via a number of firms, and for marketing data mining. |
| MOSES | for arithmetic programs | WORKS | Works as software, but the AI results are not so smart. |
| MOSES | for programs containing game-world actions, conditionals and loops | ALMOST WORKS | Used to work well, but is unmaintained; small accumulated bugs must be fixed. |
| MOSES | for programs containing recursion or local variables | DOESN'T EXIST | |
| MOSES | MOSES-Atomspace interaction | ALMOST WORKS | This used to work but hasn't been tried for years, so far as I know; I'm sure some fixing is needed. |
| Pattern Matcher | | WORKS | Has been used in a host of AI applications by now. |
| Pattern Miner | | KINDA WORKS | Has been run on some test databases with OK results. Needs to be made more scalable, and to use better "surprisingness" measures. |
| RelEx | | PROVEN | Used in a government-funded question-answering system in 2005, a Japanese/English learning system in 2008, and a big-data search engine in 2010. |
| Scheme shell | | WORKS WELL | |
| Unity3D Game World | | OBSOLETE | |
| Embodiment | for the Unity3D game world | OBSOLETE | |
| Embodiment | for robots | ALMOST WORKS | A lot of work has been done toward connecting OpenCog with robots using ROS, the initial use case being running the planner to guide robot movement, with maps created via Kinect Fusion. |
| OpenPsi | | WORKS | OpenPsi works fine, though there is some hacky code in there that would be nice to clean up. |
| Planner | | WORKS | Currently the planner learns plans for the Unity3D game world. It can be customized to learn other sorts of plans by implementing new "rules" within its C++ code. |
| DeSTIN | core DeSTIN perception hierarchy | OBSOLETE | Runs on test cases; with appropriate, domain-specific parameter tuning it can be used as a feature generator in computer-vision applications. |
| DeSTIN | frequent subtree mining | KINDA WORKS | Some experimental work was done by Ted Sanders; it is not integrated into any DeSTIN workflow for automatic use. |
| DeSTIN | DeSTIN-to-OpenCog pipeline | DOESN'T EXIST | This pipelining was done "by hand" once for a research paper. |
| Cython interface | | WORKS WELL | This interface, allowing MindAgents written in Python to interact with the C++ Atomspace and Cogserver, works and is in use, but has a number of known bugs and oddities. |
| PLN | | KINDA WORKS | Test examples can be run; tuning and improvement (mainly of inference control) are likely needed to make it do useful things in practice. |
| ECAN | | KINDA WORKS | There are no known problems with the ECAN code, but it has only been used for some demo examples and academic experiments some years ago. |
| Atomspace visualizer | | KINDA WORKS | The visualizer works, but the visualizations often look tangled up, so work on layout is needed before it is really useful. |
| Concept Blending | | OBSOLETE | There is some partial code, but it uses pretty much random heuristics to do the blending. The framework is there to be finished. |
| RelEx2Logic | | KINDA WORKS | Under active development and fairly functional, but still partial. |
| NLGen | NLGen/SegSim | ALMOST WORKS | Worked OK, with limitations, but hasn't been tried for a long time, and the approach is considered obsolete. |
| NLGen | Surface Realization | ALMOST WORKS | Can generate English expressions from simple Atom structures; needs extension. |
| NLGen | Microplanning | KINDA WORKS | Can turn some sets of Atoms into sequences of Atom-sets to be fed to Surface Realization for verbalization; does some anaphor insertion. |