Frame

A Frame is a classical GOFAI notion: it consists of Slots and default values for the slots, used to capture expectations about a frequently encountered class of situations.

For instance, the Frame for a family might include slots for the mother, the father, the son, the daughter, the home they live in, the family pet, the family car, etc.
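Purely to illustrate the classical notion (and not how OpenCog actually stores frames, as discussed below), such a Frame could be written down explicitly as a bundle of slot/filler relations attached to a single frame Atom. All of the names in this sketch are hypothetical:

; Illustrative only: a hypothetical explicit "family" Frame with two slots
; (mother, pet), each filled by a default value. None of these names are
; part of any actual OpenCog schema.
(EvaluationLink
    (PredicateNode "family-mother")
    (ListLink
        (ConceptNode "family-frame-1")
        (ConceptNode "default-mother")))

(EvaluationLink
    (PredicateNode "family-pet")
    (ListLink
        (ConceptNode "family-frame-1")
        (ConceptNode "default-pet")))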

Implicit Frames

There seems little doubt that something like Frames exists in the human Mind, but the question is whether it exists explicitly or implicitly.

Implicit Frames in OpenCogPrime

In OpenCogPrime, for instance, the analogue of a Frame would be a Map consisting of a set of Atoms that

  • are all describing relationships involving the same Concept Atom (e.g. the Atom representing "family")
  • are tightly interlinked via HebbianLinks, indicating that they are frequently used together

In this approach, there is no need for a separate "Frame" data structure; frames are fuzzy and emerge via the cooperation of Inference and Economic Attention Allocation dynamics.
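For example, a fragment of such an implicit "family" map might look like the following, where the EvaluationLinks describe relationships involving the family Concept Atom and the HebbianLinks record that these Atoms tend to be used together. The predicate and concept names here are illustrative, not a fixed schema:

; Illustrative fragment of an implicit "family" map; all names are examples.
(EvaluationLink
    (PredicateNode "mother-of")
    (ListLink
        (ConceptNode "family")
        (ConceptNode "mother")))

(EvaluationLink
    (PredicateNode "lives-in")
    (ListLink
        (ConceptNode "family")
        (ConceptNode "home")))

; HebbianLinks marking that these Atoms are frequently important at the same time.
(HebbianLink
    (ConceptNode "family")
    (ConceptNode "mother"))

(HebbianLink
    (ConceptNode "family")
    (ConceptNode "home"))

No single Atom here is the Frame; the map as a whole plays that role.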

Frame-based vs. non-frame-based

OpenCog uses a non-frame-based representation for the subject and object of a verb, and a frame-based representation for the verb's other arguments (e.g. prepositional phrases or relative clauses) [1], as can be seen in the following example:

(EvaluationLink
    (PredicateNode "like")
    (ListLink
        (ConceptNode "I")
        (EvaluationLink
            (PredicateNode "eat")
            (ListLink
                (ConceptNode "I")
                (ConceptNode "bass@12345")))))

While using a frame-based representation for everything would be more consistent, it would require more Atoms and the rewriting of existing rules. A predecessor of OpenCog employed a frame-based format (similar to FrameNet [2]), using semantic role labeling to designate verb arguments (a rough sketch is given below). Disambiguating these roles, however, is often not self-evident (e.g. figuring out whether an argument is the patient or the agent), and the frame-based representation does not add any meaning beyond the existing style. Furthermore, as long as the mappings are relatively simple and mutually consistent, PLN should be able to convert between the different representations to do any meaningful inference. In the near future, ML techniques should help in disambiguating and differentiating verbs, e.g. telling "Ben eats lunch" apart from "Ben appears ridiculous".
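For comparison, a role-labeled, FrameNet-style encoding of the "I eat bass" example above might look roughly like the sketch below. The frame and role names ("Ingestion", "Ingestor", "Ingestibles") follow FrameNet conventions, and the instance name "eat@67890" is made up for this sketch; the predecessor system's exact Atom types differed, so this only conveys the flavor of the representation:

; Rough, hypothetical sketch of a role-labeled (frame-based) encoding.
; Frame/role names follow FrameNet; "eat@67890" is an invented instance name.
(EvaluationLink
    (PredicateNode "Ingestion:Ingestor")
    (ListLink
        (ConceptNode "eat@67890")
        (ConceptNode "I")))

(EvaluationLink
    (PredicateNode "Ingestion:Ingestibles")
    (ListLink
        (ConceptNode "eat@67890")
        (ConceptNode "bass@12345")))

Note that every argument, including the subject, now needs its own role Atom, which is part of why this style requires more Atoms than the ListLink ordering used above.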