OpenCogPrime:FitnessEstimationViaIntegrativeIntelligence


Fitness Estimation via Integrative Intelligence

If instance generation is very cheap and fitness evaluation is very expensive (as is the case in many applications of evolutionary learning in OCP), one can accelerate evolutionary learning via a 'fitness estimation' approach. Given a fitness function embodied in a predicate P, the goal is to learn a predicate Q so that:

  1. Q is much cheaper than P to evaluate, and
  2. There is a high-strength relationship:
Similarity Q P

or else

ContextLink C (Similarity Q P)

where C is a relevant context.
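
To make the notion of a "high-strength" Similarity concrete: the strength of the relationship between Q and P can be estimated empirically, by checking how often Q orders candidate solutions the same way P does, over a sample of candidates that P has already scored. The following minimal Python sketch (hypothetical names, not part of any OpenCog API) illustrates one such estimate:

  import itertools
  import random

  def similarity_strength(p_scores, Q, max_pairs=1000):
      """Estimate the strength of (Similarity Q P) by rank agreement.

      p_scores maps already-evaluated candidates to their fitness under
      the expensive P. Returns the fraction of sampled candidate pairs
      on which the cheap Q agrees with P's ordering: 1.0 is perfect
      agreement, 0.5 is chance level.
      """
      pairs = list(itertools.combinations(p_scores, 2))
      random.shuffle(pairs)
      agree = total = 0
      for a, b in pairs[:max_pairs]:
          dp = p_scores[a] - p_scores[b]  # P already paid for; just look up
          dq = Q(a) - Q(b)                # Q is cheap, evaluate on the fly
          if dp == 0 or dq == 0:
              continue                    # ignore ties
          agree += (dp > 0) == (dq > 0)
          total += 1
      return agree / total if total else 0.0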

Given such a predicate Q, one could proceed to optimize P by ignoring evolutionary learning altogether and just repeatedly following the algorithm:

  • Randomly generate N candidate solutions.
  • Evaluate each of the N candidate solutions according to Q.
  • Take the k << N solutions that satisfy Q best, and evaluate them according to P.

with Q being improved, at each iteration, based on the new evaluations of P that are done. Of course, this would not be as good as incorporating fitness estimation into an overall evolutionary learning framework.
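
As a rough illustration of this loop, here is a minimal Python sketch; generate, Q, and P are hypothetical stand-ins for the candidate generator, the estimator predicate, and the true specification predicate:

  import heapq

  def optimize_with_estimator(generate, Q, P, n=10000, k=20, rounds=50):
      """Spend the expensive P only on candidates the cheap Q likes best.

      generate() draws one random candidate; Q and P map a candidate to a
      fitness score (higher is better). Returns the best candidate found
      and the archive of true P-evaluations, which can later be used to
      refit or replace Q.
      """
      archive = {}  # candidate -> true fitness under P
      for _ in range(rounds):
          batch = [generate() for _ in range(n)]
          shortlist = heapq.nlargest(k, batch, key=Q)  # cheap screening
          for c in shortlist:
              archive[c] = P(c)  # expensive ground-truth evaluation
          # (an update of Q from the growing archive would go here)
      return max(archive, key=archive.get), archive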

Heavy utilization of fitness estimation may be appropriate, for example, if the entities being evolved are schemata intended to control an agent's actions in a real or simulated environment. In this case, in order to evaluate P(S), the specification predicate P has to actually use the schema S to control the agent in the environment. So one may search for Q's that do not involve any simulated environment, but are constrained to be relatively small predicates involving only cheap-to-evaluate terms (e.g. one may allow standard combinators, numbers, strings, ConceptNodes, and predicates built up recursively from these). Such a Q is then an abstract predictor of concrete environmental success.
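
For instance, a cheap structural Q of this sort might score a schema on features such as its size or its use of particular primitives, while P would have to actually run the schema in the environment. The following toy Python sketch conveys the contrast; the Schema type and the particular features are illustrative assumptions, not OCP structures:

  from dataclasses import dataclass

  @dataclass(frozen=True)
  class Schema:
      """Toy stand-in for an evolved control program."""
      primitives: tuple  # names of the primitive operations the schema uses

  def Q_structural(schema):
      """Cheap estimator: scores the schema from its structure alone,
      never touching any environment. The particular features (a size
      penalty plus a bonus for a 'look-ahead' primitive) are illustrative
      assumptions only; the true P would instead have to run the schema
      in the real or simulated environment."""
      size_penalty = -0.01 * len(schema.primitives)
      lookahead_bonus = 0.5 if "look-ahead" in schema.primitives else 0.0
      return size_penalty + lookahead_bonus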

We have left open the all-important question of how to find the "specification-approximating predicate" Q.

One approach is to use evolutionary learning itself. In this case, one maintains a population of predicates that are candidates for Q; the fitness of each candidate is judged by how well it approximates P over the set of candidate solutions that have already been evaluated against P. If one uses evolutionary learning to evolve the Q's, then one is learning a probabilistic model of the set of Q's, which tries to predict what sort of Q's will better solve the optimization problem of approximating P's behavior. Of course, using evolutionary learning for this purpose potentially initiates an infinite regress, but the regress can be stopped by, at some level, finding Q's using a technique that builds no probabilistic model, such as straightforward genetic programming, or a simple probabilistic-model-building technique like standard BOA programming.
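
Concretely, the meta-level fitness function for candidate Q's might be as simple as the following Python sketch, where archive records the P-values of all candidate solutions evaluated so far (names are hypothetical):

  def estimator_fitness(Q, archive):
      """Meta-level fitness of a candidate estimator Q: the negated mean
      squared error between Q's predictions and the true P-values recorded
      in the archive of already-evaluated solutions (higher is better)."""
      errors = [(Q(c) - p_value) ** 2 for c, p_value in archive.items()]
      return -sum(errors) / len(errors)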

Another approach to finding Q is to use inference based on background knowledge. Of course, this is complementary rather than contradictory to using evolutionary learning for finding Q. There may be information in the knowledge base that can be used to "analogize" regarding which Q's may match P. Indeed, this will generally be the case in the example given above, where P involves controlling actions in a simulated environment but Q does not.

An important point is that, if one uses a certain predicate Q1 for fitness estimation, the evidence one gains by trying Q1 on numerous fitness cases may be utilized in future inferences regarding other predicates Q2 that may serve the role of Q. So, once inference gets into the picture, the quality of fitness estimators may progressively improve via ongoing analogical inference based on the internal structures of previously attempted fitness estimators.