
(initial version of this page by Ben Goertzel, August 2012)

A rough outline of an initial, heuristic approach to concept blending is as follows:

  • 1) Choose a pair of concepts C1 and C2 that have a nontrivially strong HebbianLink between them, but not an extremely high-strength SimilarityLink between them [i.e. the concepts should have something to do with each other, but not be extremely similar; blends of extremely similar things are boring]. These parameters may be tuned.
  • 2) Form a new concept C3, which has some of C1's links and some of C2's links.
  • 3) If C3 has obvious contradictions, resolve them by pruning links. (For instance, if C1 inherits from alive to degree .9 and C2 inherits from alive to degree .1, then one of these two TruthValue versions of the inheritance link to alive has to be pruned...)
  • 4) For each of C3's remaining links L, build a vector indicating everything L or its targets are associated with (via HebbianLinks or other links). This is basically a list of "what's related to L". Then assess whether the links that came from C1 and the links that came from C2 share many common associations.
  • 5) If the filter in step 4 is passed, let the PLN forward chainer derive some conclusions about C3, and see if it comes up with anything interesting (e.g. anything with a surprising truth value, anything getting high STI, etc.)

Steps 1 and 2 should be repeated over and over...

Step 5 is basically "cognition as usual" -- i.e. by the time the blended concept is thrown into the Atomspace and subjected to Step 5, it's being treated the same as any other ConceptNode.
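Steps 2 and 3 of the meta-algorithm can be sketched in plain Python. Everything here is a stand-in, not the real Atomspace API: concepts are modeled as dicts mapping link descriptions to truth-value strengths, and the contradiction check is the simplest one mentioned above (the exact same link asserted with very different truth values).

```python
import random

# Toy sketch of steps 2-3 of the blending meta-algorithm.
# Concepts are dicts from link descriptions to truth-value strengths;
# this is illustrative only, not the real Atomspace representation.

CONTRADICTION_GAP = 0.5  # assumed threshold for an "obvious contradiction"

def blend(c1, c2, fraction=0.7, rng=None):
    """Form C3 from a random subset of each parent's links (step 2),
    pruning obvious contradictions as they arise (step 3)."""
    rng = rng or random.Random(0)  # seeded for reproducibility
    c3 = {}
    for parent in (c1, c2):
        for link, tv in parent.items():
            if rng.random() > fraction:
                continue  # this link is not copied into the blend
            if link in c3 and abs(c3[link] - tv) > CONTRADICTION_GAP:
                # Same link, divergent truth values: keep the stronger one.
                c3[link] = max(c3[link], tv)
            else:
                c3[link] = tv
    return c3

car = {"2WD": 0.9, "fuel efficient": 0.8, "alive": 0.1}
jeep = {"4WD": 0.9, "rugged": 0.8, "alive": 0.9}
suv = blend(car, jeep, fraction=1.0)  # copy all links, to show the pruning
print(suv["alive"])  # prints 0.9 -- only one truth value survives
```

Note that this check only catches literal duplicates like the two "alive" truth values; semantic contradictions such as 2WD vs. 4WD would need the deeper inference discussed under step 3 below.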

Obviously this is more of a meta-algorithm than a precise algorithm. Many avenues for variation exist, including:

  • Step 1: heuristics for choosing what to try to blend
  • Step 3: how far do we go in removing contradictions? Do we try simple PLN inference to see whether contradictions are unveiled, or do we just limit the contradiction check to seeing whether the exact same link is given different truth values?
  • Step 4: there are many different ways to build this association-vector. There are also many ways to measure whether a set of association-vectors demonstrates "common associations". Interaction information is one fancy way; there are also obvious simpler ones.
  • Step 5: there are various ways to measure whether PLN has come up with anything interesting
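One of the simpler concretizations of the step-4 filter (average pairwise cosine similarity between association vectors, rather than interaction information) might look like the following sketch; the concept vocabulary and the weights are invented for illustration:

```python
import math

# Step 4 sketch: each link inherited by the blend gets an association
# vector over a fixed concept vocabulary (weights stand in for
# HebbianLink strengths; all numbers here are made up).

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def common_association_score(vecs_from_c1, vecs_from_c2):
    # Average pairwise cosine similarity between association vectors of
    # links inherited from the two parents: a high score suggests the
    # two halves of the blend are associated with related things.
    pairs = [(u, v) for u in vecs_from_c1 for v in vecs_from_c2]
    return sum(cosine(u, v) for u, v in pairs) / len(pairs)

# Assumed vocabulary: [consumers, off-road, styling]
car_link_vecs = [[0.9, 0.0, 0.6]]   # e.g. a link like "easy to drive"
jeep_link_vecs = [[0.1, 0.9, 0.5]]  # e.g. a link like "rugged"
score = common_association_score(car_link_vecs, jeep_link_vecs)
```

A blend would pass the filter when this score exceeds some tuned threshold; interaction information is the fancier alternative mentioned above.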

SUV Example

To illustrate these ideas, consider the example of the SUV -- a blend of "Car" and "Jeep".

Among the relevant properties of Car are:

  • appealing to ordinary consumers
  • fuel efficient
  • fits in most parking spots
  • easy to drive
  • 2 wheel drive

Among the relevant properties of Jeep are:

  • 4 wheel drive
  • rugged
  • capable of driving off road
  • high clearance
  • open or soft top

Obviously, if we want to blend Car and Jeep, we need to choose properties of each that don't contradict each other. We can't give the Car/Jeep both 2 wheel drive and 4 wheel drive. 4 wheel drive wins for Car/Jeep because sacrificing it would get rid of "capable of driving off road", which is critical to Jeep-ness; whereas sacrificing 2WD doesn't kill anything that's really critical to car-ness.

On the other hand, having a soft top would really harm "appealing to consumers", which from the view of car-makers is a big part of being a successful car. But getting rid of the soft top doesn't really harm other aspects of jeep-ness in any serious way.

However, what really made the SUV successful was that "rugged" and "high clearance" turned out to make SUVs look funky to consumers, thus fulfilling the "appealing to ordinary consumers" feature of Car. In other words, the presence of the links

  • looks funky ==> appealing to ordinary consumers
  • rugged & high clearance ==> looks funky

made a big difference. This is the sort of thing that gets figured out once one starts doing PLN inference on the links associated with a candidate blend.
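The role of those two links can be mimicked with a toy forward chainer -- plain modus ponens over crisp facts rather than PLN's probabilistic machinery, but enough to show how the "appealing" conclusion emerges from the blend's inherited features (the rules come straight from the two links above):

```python
# Toy forward chainer: crisp modus ponens, not PLN, but it shows how
# the "appealing" conclusion falls out of the blend's features.

RULES = [
    ({"rugged", "high clearance"}, "looks funky"),
    ({"looks funky"}, "appealing to ordinary consumers"),
]

def forward_chain(facts, rules):
    """Repeatedly apply rules whose premises all hold, until nothing new."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

suv_features = {"rugged", "high clearance", "4 wheel drive"}
derived = forward_chain(suv_features, RULES)
print("appealing to ordinary consumers" in derived)  # prints True
```

Neither parent concept satisfies the first rule on its own; only the blend, having inherited both "rugged" and "high clearance", triggers the chain.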

However, if one views each feature of the blend as a probability distribution over concept space (e.g., indicating how closely associated that feature is with each other concept, say via HebbianLinks), then we see that the mutual information (and more generally the interaction information) between the features of the blend is a quick estimate of how likely it is that inference will lead to interesting conclusions via reasoning about the combination of features the blend possesses.
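As a concrete (if simplified) version of that estimate: treat each feature's associations as a distribution over a small concept vocabulary, form their joint distribution, and compute the mutual information. The joints below are invented for illustration; a perfectly correlated pair of features yields 1 bit, an independent pair yields 0:

```python
import math

# Mutual information between two features' association distributions.
# joint[i][j] = P(feature A associates with concept i AND feature B
# associates with concept j); the numbers are invented for illustration.

def mutual_information(joint):
    px = [sum(row) for row in joint]            # marginal of feature A
    py = [sum(col) for col in zip(*joint)]      # marginal of feature B
    mi = 0.0
    for i, row in enumerate(joint):
        for j, p in enumerate(row):
            if p > 0:
                mi += p * math.log2(p / (px[i] * py[j]))
    return mi

correlated = [[0.5, 0.0], [0.0, 0.5]]    # features always co-occur
independent = [[0.25, 0.25], [0.25, 0.25]]
print(mutual_information(correlated))    # prints 1.0
print(mutual_information(independent))   # prints 0.0
```

Under this reading, a high score between features drawn from the two parents hints that PLN reasoning about their combination is likely to turn up something interesting; interaction information extends the same idea to three or more features.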