Running MOSES for Automated Program Learning
One of the main uses of MOSES is automated program learning.
The first part of this tutorial is conceptual; jump forward to the Hands On Tutorial if you want to skip the theory.
Why gain a theoretical understanding of MOSES? Arguably, to apply MOSES usefully you need good intuitions about its general principles as well as its inner workings.
Automated Program Learning
- What distinguishes Program Learning from Machine Learning?
Program Learning is a sub-field of Machine Learning. 'Programs' are:
- Well Specified (sentences in Natural Language don't count)
- Compact (specifying a lot in a small amount of space)
- Combinatorial (made up of sub-programs and able to call other programs)
Many aspects of ML don't have these properties.
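These properties can be illustrated with a short sketch (Python used for illustration; the function names here are hypothetical, not part of MOSES):

```python
# A program is well specified, compact, and combinatorial:
# 'quadruple' is built by reusing 'double' as a sub-program.
def double(x):
    return 2 * x

def quadruple(x):
    return double(double(x))  # compact: a lot of behavior in little space

print(quadruple(3))  # -> 12
```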
Types of Program Learning
One type of Program Learning is to find a good function (ideally an optimal one) that predicts a specified outcome.
What does this function do? From the example inputs and outputs, we can see that the function returns the 2nd value in the list supplied.
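The behavior just described can be written down as a hypothetical learned program (Python used for illustration; MOSES itself would express this as a combo program):

```python
# A "learned" program matching the description above:
# given a list, it returns the 2nd value.
def learned_program(xs):
    return xs[1]  # 0-indexed, so index 1 is the 2nd element

print(learned_program([7, 4, 9]))  # -> 4
```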
Program Learning is often formulated as an induction problem: given some space of hypotheses (a set of candidate programs) and some data, what is the posterior distribution? What are the likely interesting points? Given examples of a function's behavior, you want to find a generalization of that function.
- Maximize a scoring function over all programs in some program space
- Search for likely points or infer the distribution
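The "maximize over a program space" formulation can be sketched minimally in Python. The enumerated program space, the scoring function, and all names below are illustrative assumptions, not MOSES's actual representation:

```python
# Toy program space: each candidate is a function from a list to a value.
program_space = {
    "first":  lambda xs: xs[0],
    "second": lambda xs: xs[1],
    "last":   lambda xs: xs[-1],
}

# Example data: (input, expected output) pairs sampled from the unknown function.
examples = [([3, 1, 4], 1), ([5, 9, 2], 9), ([6, 5], 5)]

def score(program):
    # Fraction of examples the candidate program gets right.
    return sum(program(xs) == y for xs, y in examples) / len(examples)

# Maximize the score over all programs in the space.
best_name = max(program_space, key=lambda name: score(program_space[name]))
print(best_name)  # -> second
```

Real program spaces are far too large to enumerate, which is why MOSES searches them with an evolutionary process rather than brute force.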
OpenCog's Automated Program Learning
What is meant by automated program learning in the context of OpenCog?
As described in the section on MOSES terminology, in MOSES a program refers to a combo program: a tree structure of operators, variables and values. MOSES automatically 'learns' new programs through evolutionary programming.
The initial program (a tree structure, often based on an exemplar) is first peppered with parameters, or tunable knobs, whose combinations of values (instances, in MOSES terminology) are mutated into a population of programs; such a population is referred to as a deme. The best programs are kept (using a scoring function) and used as exemplars from which new representations are constructed, and a deme is created from each representation. We now have a population of demes, referred to as a metapopulation, which can be fed into the next round of evolution. A MOSES evolutionary cycle in abstract: start with a program -> build a representation by adding knobs at random locations -> create a deme by randomly tuning the knobs of all these programs -> score the programs and promote the best to exemplars -> create new demes from the exemplars -> repeat until optimization is complete.
To clarify the representation-building process: representations are derived from an exemplar by inserting additional placeholders (knobs) at random locations. A representation combined with knob settings is a program. A deme of these programs, exploring a variety of different combinations of knob settings, is created; programs which pass the selection criteria become exemplars, and new demes are seeded from those exemplars.
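The cycle above can be sketched as a toy loop. Everything here (the linear "program", the two knobs, the mutation and scoring schemes) is a simplified illustration, not the actual MOSES implementation:

```python
import random

random.seed(0)

# Toy "program": predict y = a*x + b; the two knobs are the constants a and b.
def make_program(a, b):
    return lambda x: a * x + b

# Target behavior to learn: y = 3*x + 2.
examples = [(x, 3 * x + 2) for x in range(5)]

def score(knobs):
    # Negative squared error over the examples: higher is better.
    program = make_program(*knobs)
    return -sum((program(x) - y) ** 2 for x, y in examples)

def create_deme(exemplar, size=20):
    # Mutate the exemplar's knob settings to get a deme of instances.
    a, b = exemplar
    return [(a + random.randint(-2, 2), b + random.randint(-2, 2))
            for _ in range(size)]

exemplar = (0, 0)  # initial exemplar's knob settings
for generation in range(30):
    deme = create_deme(exemplar)      # create a deme around the exemplar
    best = max(deme, key=score)       # score the instances in the deme
    if score(best) > score(exemplar): # promote only strict improvements
        exemplar = best               # best program becomes the new exemplar

print(exemplar)
```

The loop should drift toward knob settings (3, 2); MOSES does the analogous thing over discrete combo trees, with many demes evolving in parallel in a metapopulation.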
An example combo tree, using @ for function application:

        @
       / \
      @   8
     / \
    *   @
       / \
      @   5
     / \
    +   3

This tree represents (3 + 5) * 8.
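A combo program is a tree of operators and values, and evaluating it is a simple recursive walk. A minimal Python sketch, encoding (3 + 5) * 8 (the nested-tuple representation is illustrative, not actual combo syntax):

```python
import operator

OPS = {"+": operator.add, "*": operator.mul}

# A program tree: ("op", left, right) nodes with numeric leaves.
# This one encodes (3 + 5) * 8.
tree = ("*", ("+", 3, 5), 8)

def evaluate(node):
    if not isinstance(node, tuple):
        return node  # leaf: a plain value
    op, left, right = node
    return OPS[op](evaluate(left), evaluate(right))

print(evaluate(tree))  # -> 64
```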
Hands On Tutorial
1) (Optionally) Read through the MOSES terminology so that the documentation makes sense
2) (Optionally) Read through the MOSES documentation.
3) Work through the examples below:
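A common first exercise is learning a Boolean function from a truth table. MOSES's table mode reads a CSV file with a header row naming the features and the target column. The sketch below prepares such a file in Python; the command shown in the comment is a typical invocation from the MOSES documentation, but check `moses --help` in your build for the exact flags:

```python
import csv

# Build the truth table for out = (a AND b) OR c,
# in the header-plus-rows CSV layout MOSES's table mode expects.
rows = [["a", "b", "c", "out"]]
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            rows.append([a, b, c, int((a and b) or c)])

with open("data.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)

# With the moses binary built and on your PATH, a typical run names
# the input file and the target column, e.g.:
#   moses -i data.csv -u out
print(len(rows) - 1)  # -> 8 data rows
```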
Other uses for MOSES include time-series learning.
The Git repository is located here: https://github.com/opencog/moses
Created by Adam Ford
MOSES experts: Nil, Eyob (in Addis), Misgana, Mike Dunkan, Ben Goertzel
Priority: Medium