Linguistic interpretation


This interim page is a "work in progress" (Dec 3, 2013) -- it contains some suggestions regarding the Atomese representation of the output of RelEx2Logic.

Linas says: "From what I can tell, this is entirely consistent with previous work. For example, I think the old WSD work can be revived, and then used to trivially generate the "interpretations" below. So, for example, the WSD code generates things like this:

  WordSenseLink
     WordInstanceNode "bark, the word instance in parse 3 of sentence 55"
     WordSenseNode "bark, the sense #3 in wordnet"

It should be straightforward to create the corresponding InterpretationLinks and Nodes and ContextLinks below, given the above. The WSD code is just one way of obtaining the above; not the best or the most accurate way, but a simple, proven and more-or-less direct way ... so the below looks like the right framework; it can support different algos for obtaining the most-likely interpretations."
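For illustration only, the conversion Linas describes might look something like the sketch below, which anticipates the InterpretationNode / ContextLink representation proposed further down this page. The helper function and all of the node names are hypothetical; nothing like this exists as an API.

 ; Hypothetical glue (not an existing API): given the pieces of a WSD-style
 ; WordSenseLink, produce the interpretation-scoped representation defined
 ; later on this page.  All names here are illustrative.
 (define (word-sense->interpretation interp word-inst instance-concept sense)
   ; Within the given interpretation, the word instance refers to an
   ; instance-specific concept ...
   (ContextLink interp (ReferenceLink word-inst instance-concept))
   ; ... and that concept inherits from the chosen word sense.
   (InheritanceLink instance-concept sense))

 ; Example, reusing the WSD output quoted above:
 (word-sense->interpretation
   (InterpretationNode "some_interpretation_of_parse_3_of_sentence_55")
   (WordInstanceNode "bark, the word instance in parse 3 of sentence 55")
   (ConceptNode "bark_instance_in_parse_3_of_sentence_55")
   (WordSenseNode "bark, the sense #3 in wordnet"))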

Example Sentence

The example to be considered here is the good old classic

 I saw the man with the telescope

Note that this has two parses, which are easily found by the link parser, and which result in the RelEx binary links

PARSE 1:
    _obj(see, man)
    _subj(see, I)
    _det(telescope, the)
    with(man, telescope)
    _det(man, the)

PARSE 2:
    with(saw, telescope)
    _obj(saw, man)
    _subj(saw, I)
    _det(telescope, the)
    _det(man, the)

Here we will work with the obvious interpretations of PARSE 1 and PARSE 2, and also with an alternative interpretation of PARSE 2.

In the alternative interpretation of PARSE 2, we interpret the word “saw” as “to cut or divide with a saw”.

(Yeah it would be kinda difficult to saw a man in half with a typical telescope, but people are capable of amazing things; and perhaps it’s a very sharp telescope!!)

What I'll do on this page is give an example of one way the various parses and interpretations of this sentence could be expressed sensibly in OpenCog Atoms.

Notational conventions

First some remarks on the notation to be used on this page...

The format

H = AtomType “name” 

means that the Atom in question has type AtomType and name “name”, and will be referred to via the shorthand “H”.

The name of an Atom is part of the OpenCog system; the shorthand is just for communicating about the Atom on this page.

(Of course, these shorthands are similar to variables in Scheme scripts, for example.)
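For instance, the very first shorthand introduced below could be written in an OpenCog Scheme session like this (just a sketch of the convention; only the Atom itself lives in the Atomspace):

 ; The shorthand "Sentence1" corresponds to an ordinary Scheme binding;
 ; the name "Sentence1" is not part of the Atomspace, the SentenceNode is.
 (define Sentence1 (SentenceNode "I saw the man with the telescope"))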

Generic Syntactic Atoms, applicable to all parses/interpretations

Here are some basic possibilities for representing the words in the sentence, and the sentence itself:

Sentence1 = SentenceNode “I saw the man with the telescope” 

// From here on, $ stands for the parse number (1 or 2)

Parse$ = ParseNode “sentence_1_parse_$”  // Atom representing the $-th parse of the sentence (the name is arbitrary)
ParseLink Parse$ Sentence1   // tells what sentence Parse$ refers to

I$= WordInstanceNode “I_$”
saw$ = WordInstanceNode “saw_$”
man$ = WordInstanceNode “man_$”
with$ = WordInstanceNode “with_$”
telescope$ = WordInstanceNode “telescope_$”

ReferenceLink
    Parse$
    (ListLink I$ saw$ man$ with$ telescope$)

LemmaLink I$ (WordNode “I”)
LemmaLink saw$ (WordNode “saw”)
LemmaLink man$ (WordNode “man”)
LemmaLink with$ (WordNode “with”)
LemmaLink telescope$ (WordNode "telescope")

WordInstanceLink I$ Parse$
WordInstanceLink saw$ Parse$
WordInstanceLink man$ Parse$
WordInstanceLink with$ Parse$
WordInstanceLink telescope$ Parse$


None of the above should be very controversial -- it's just a basic representation of the structure of the parses of the sentence, before any semantic interpretation is done. The full detail of the syntax of any sentence, i.e. the link parse as it is represented in OpenCog format, can be found on the RelEx OpenCog format page.
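For concreteness, here is a sketch of the same structure instantiated for the first parse ($ = 1), written out in Scheme; the parse-node name is arbitrary, and only one word is spelled out for the LemmaLink / WordInstanceLink pattern.

 ; Syntactic atoms for parse 1 ($ = 1) of the example sentence.
 (define Sentence1 (SentenceNode "I saw the man with the telescope"))
 (define Parse1 (ParseNode "sentence_1_parse_1"))   ; the name is arbitrary
 (ParseLink Parse1 Sentence1)

 (define I1         (WordInstanceNode "I_1"))
 (define saw1       (WordInstanceNode "saw_1"))
 (define man1       (WordInstanceNode "man_1"))
 (define with1      (WordInstanceNode "with_1"))
 (define telescope1 (WordInstanceNode "telescope_1"))

 (ReferenceLink Parse1 (ListLink I1 saw1 man1 with1 telescope1))

 ; Lemma and parse-membership links, shown for one word; the other words
 ; follow the same pattern.
 (LemmaLink saw1 (WordNode "saw"))
 (WordInstanceLink saw1 Parse1)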

Semantic Atoms for Parse 1

Here are some suggestions about a possible way to represent the semantics of the sentence.

An InterpretationNode represents a particular interpretation of the sentence, i.e. an assignment of words to meanings:

Interpretation1_1 = InterpretationNode "interpretation_1_first_parse"
InterpretationLink
    Interpretation1_1
    Parse1  //says what parse this particular InterpretationNode refers to


The following represent concepts corresponding to the specific entities or events or relationships mentioned in the sentence

I_1_1 = ConceptNode “I_1_1”
saw_1_1 = PredicateNode “saw_1_1”
man_1_1 = ConceptNode “man_1_1”
with_1_1 = PredicateNode “with_1_1”
telescope_1_1 = ConceptNode “telescope_1_1”

The following indicate that the words in the sentence refer to the above particular concepts, in the context of the given interpretation of the sentence

ContextLink
     Interpretation1_1
     ReferenceLink I1 I_1_1

ContextLink
    Interpretation1_1
    ReferenceLink saw1 saw_1_1

ContextLink
    Interpretation1_1
    ReferenceLink man1 man_1_1

ContextLink
    Interpretation1_1
    ReferenceLink with1 with_1_1

ContextLink
    Interpretation1_1
    ReferenceLink telescope1 telescope_1_1

The following connect the specific concepts/entities/events/relationships mentioned in the sentence to general concepts. The WN and TPP prefixes refer to specific external "dictionary"-type data sources, described below.

InheritanceLink I_1_1 (WordSenseNode “I”)

InheritanceLink saw_1_1 (WordSenseNode “WN_see#1”)

InheritanceLink man_1_1 (WordSenseNode “WN_man#1”)

InheritanceLink with_1_1 (WordSenseNode “TPP_with#1”)

InheritanceLink telescope_1_1 (WordSenseNode “WN_telescope#1”)

Finally, the following links express the relational meaning of the sentence:

EvaluationLink
    saw_1_1
    I_1_1
    man_1_1

EvaluationLink
    with_1_1
    man_1_1
    telescope_1_1
    

Now let me explain the WN and TPP prefixes in the names of the general concepts above.

By WN_see#1 I mean the first sense of the word “see” in WordNet, which I assume as a reference dictionary, since we have already loaded WordNet into OpenCog. See the online WordNet browser

http://wordnetweb.princeton.edu/perl/webwn?c=7&sub=Change&o2=&o0=1&o8=1&o1=1&o7=&o5=&o9=&o6=&o3=&o4=1&i=3&h=00010000000000000000000000000000&s=saw

and select the option “show sense number” to see the sense numbers for different meanings of common words.

On the other hand, note that by saw_1_1 I simply mean an Atom (here a PredicateNode) that embodies a particular instance of the concept “saw.” Actually it could be named “jfjl;kslkjfjjfjf777” instead of saw_1_1; that doesn’t matter. Whereas for an Atom with a WordNet-based name (the WordSenseNodes above), the name actually has some meaning, via its reference to WordNet as an external knowledge source. And for a WordNode, the name matters too, as it tells you what string of characters the WordNode refers to.

Similarly, by (WordSenseNode “TPP_with#1”) I refer to the first sense of “with” in the TPP database of preposition senses; see

http://www.clres.com/prepositions.html#inventory

for pointers. This resource has not been loaded into OpenCog yet, so far as I know; I'm not sure whether any preposition-sense resource has been loaded into OpenCog format at all. In the Novamente Cognition Engine we used our own preposition-sense resource called the LARDict, built by Hugo Pinto, which I can't find on my hard drive right now.

The Mihalcea algo

(section added by Linas July 2014) I think it is important to understand the basic Mihalcea algo here, since it avoids some of the difficulties the above seems to present. What that algo does is suggest the most-likely-correct word sense for an interpretation. It works as follows.

First, create links to ALL possible word-senses:

InheritanceLink saw_1_1 (WordSenseNode “WN_see#1”)
InheritanceLink saw_1_1 (WordSenseNode “WN_see#2”)
InheritanceLink saw_1_1 (WordSenseNode “WN_see#3”)

InheritanceLink man_1_1 (WordSenseNode “WN_man#1”)
InheritanceLink man_1_1 (WordSenseNode “WN_man#2”)
InheritanceLink man_1_1 (WordSenseNode “WN_man#3”)
InheritanceLink man_1_1 (WordSenseNode “WN_man#4”)

InheritanceLink with_1_1 (WordSenseNode “TPP_with#1”)
InheritanceLink with_1_1 (WordSenseNode “TPP_with#2”)

InheritanceLink telescope_1_1 (WordSenseNode “WN_telescope#1”)
InheritanceLink telescope_1_1 (WordSenseNode “WN_telescope#2”)

Each of these InheritanceLinks initially has a TruthValue of "maybe, uncertain". Then, run an algorithm that adjusts each of the TVs to be either more false or more true, with increasing certainty. When the algo completes, the InheritanceLinks with the highest TVs constitute the most-likely interpretation.
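As a sketch, assuming the standard OpenCog Scheme bindings stv and cog-set-tv!, that initialization might look like this (the strength and confidence numbers are just placeholders):

 ; Give every candidate sense-assignment link the same vague starting TV:
 ; strength around 0.5, very low confidence.
 (define maybe-uncertain (stv 0.5 0.01))

 (for-each
   (lambda (link) (cog-set-tv! link maybe-uncertain))
   (list
     (InheritanceLink (PredicateNode "saw_1_1") (WordSenseNode "WN_see#1"))
     (InheritanceLink (PredicateNode "saw_1_1") (WordSenseNode "WN_see#2"))
     (InheritanceLink (PredicateNode "saw_1_1") (WordSenseNode "WN_see#3"))
     (InheritanceLink (ConceptNode "telescope_1_1") (WordSenseNode "WN_telescope#1"))
     (InheritanceLink (ConceptNode "telescope_1_1") (WordSenseNode "WN_telescope#2"))))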

Activation spreading, Page Rank

What does that algo look like? The original Mihalcea algo did this: it computed a "similarity" value between all possible word senses. So, for example, similarity(“WN_telescope#2”, “WN_see#3”) = 0.9, but similarity(“WN_telescope#2”, “WN_see#1”) = 0.1 and similarity(“WN_telescope#1”, “WN_see#3”) = 0.5, and so on. The similarities depend ONLY on WordNet, and NOT on the sentence.

Then, run the Google PageRank algorithm (which is the same as solving a Markov chain for its largest eigenvector, which is the same as finding the Frobenius-Perron eigenvector). I don't want to explain the full algo here (it's not hard, but it does require some study), but it is kind of like "activation spreading". So, for example, pick a random InheritanceLink, say (InheritanceLink telescope_1_1 (WordSenseNode “WN_telescope#2”)), and let's say it has a medium-high TV. Since it has a similarity of 0.9 to (InheritanceLink saw_1_1 (WordSenseNode “WN_see#3”)), "spread" some of the first one's TV onto the second one. By 'spidering' the inheritance links over and over, and spreading the TVs between the strongly-connected ones, some of the TVs will become large (true) and confident, while others will become confidently false. After running for a while, the word senses that "go together" reinforce each other. The ones with the highest (most true, most confident) values are the most likely interpretation.
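Below is a stripped-down sketch of that spreading loop in plain Scheme, with no Atomspace calls and no claim to being the exact Mihalcea/PageRank formulation; it is only meant to show the shape of the computation. The similarity numbers are the made-up ones from the paragraph above, and the damping factor plays the role of PageRank's.

 ; Candidate sense assignments and their current scores (initially uniform).
 ; These stand in for the TVs on the InheritanceLinks above.
 (define scores
   '(("WN_see#1" . 0.5) ("WN_see#3" . 0.5)
     ("WN_telescope#1" . 0.5) ("WN_telescope#2" . 0.5)))

 ; Made-up pairwise sense similarities, as in the text; symmetric, default 0.
 (define similarities
   '((("WN_see#3" "WN_telescope#2") . 0.9)
     (("WN_see#1" "WN_telescope#2") . 0.1)
     (("WN_see#3" "WN_telescope#1") . 0.5)))

 (define (similarity a b)
   (cond ((assoc (list a b) similarities) => cdr)
         ((assoc (list b a) similarities) => cdr)
         (else 0.0)))

 ; One spreading step: each sense's new score is a damped blend of its own
 ; score and the similarity-weighted scores of all the other senses.
 (define damping 0.85)

 (define (spread-once scores)
   (map
     (lambda (entry)
       (let ((incoming
               (apply + (map (lambda (other)
                               (* (similarity (car entry) (car other))
                                  (cdr other)))
                             scores))))
         (cons (car entry)
               (+ (* (- 1 damping) (cdr entry))
                  (* damping incoming)))))
     scores))

 ; Normalize so the scores stay comparable from one pass to the next.
 (define (normalize scores)
   (let ((total (apply + (map cdr scores))))
     (map (lambda (e) (cons (car e) (/ (cdr e) total))) scores)))

 ; Iterate: the mutually-similar senses (WN_see#3, WN_telescope#2) end up
 ; with the highest scores, i.e. they form the most likely interpretation.
 (define (spread-n n scores)
   (if (= n 0) scores (spread-n (- n 1) (normalize (spread-once scores)))))

 (display (spread-n 20 scores)) (newline)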

Similarity and PLN

So: how do you obtain similarity? Mihalcea used several ad-hoc methods; we can do much better. One of her ad-hoc methods was to compute the overlap between word-sense descriptions in WordNet. That works, but ... yuck. We can obtain similarity in many ways; the most powerful is to use PLN.

So, for example, maybe WN_see#1 means 'threaten' ("we'll see about that" means "I'm going to try to stop you") and we can use PLN to deduce that you cannot threaten with a WN_telescope#1 or a WN_telescope#2 ... so the similarity here would be low. But WN_see#3 is "to look", so PLN should be able to deduce that WN_telescope#2 can be used to look. (But WN_telescope#1 cannot, because WN_telescope#1 is like a 'telescoping antenna' ... which maybe can be used to threaten .. ?)

Basically, if PLN can verify that 'common sense facts' about the different WN senses go together, and do not contradict each other, then it can raise the TV for those InheritanceLinks, and lower all the other ones.
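If PLN (or anything else) produces such similarity judgments, one natural place to keep them would be SimilarityLinks between the WordSenseNodes, which a ranking step could then consult. A sketch, with illustrative truth values:

 ; PLN-derived (here, made-up) sense-to-sense similarities, stored so that
 ; a ranking/spreading step can read them back out of the Atomspace.
 (SimilarityLink (WordSenseNode "WN_see#3") (WordSenseNode "WN_telescope#2") (stv 0.9 0.7))
 (SimilarityLink (WordSenseNode "WN_see#1") (WordSenseNode "WN_telescope#2") (stv 0.1 0.7))
 (SimilarityLink (WordSenseNode "WN_see#3") (WordSenseNode "WN_telescope#1") (stv 0.5 0.7))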

Semantic Atoms for Parse 2, first interpretation

Moving on...

Interpretation2_1 = InterpretationNode "interpretation_1 of second parse"
InterpretationLink Interpretation2_1 Parse2

I_2_1 = ConceptNode “I_2_1”
saw_2_1 = PredicateNode “saw_2_1”
man_2_1 = ConceptNode “man_2_1”
with_2_1 = PredicateNode “with_2_1”
telescope_2_1 = ConceptNode “telescope_2_1”

ContextLink
    Interpretation2_1
    ReferenceLink I2 I_2_1

ContextLink
    Interpretation2_1
    ReferenceLink saw2 saw_2_1

ContextLink
    Interpretation2_1
    ReferenceLink man2 man_2_1

ContextLink
    Interpretation2_1
    ReferenceLink with2 with_2_1

ContextLink
    Interpretation2_1
    ReferenceLink telescope2 telescope_2_1

InheritanceLink I_2_1 (WordSenseNode “I”)

InheritanceLink saw_2_1 (WordSenseNode “WN_see#1”)

InheritanceLink man_2_1 (WordSenseNode “WN_man#1”)

InheritanceLink with_2_1 (WordSenseNode “TPP_with#2”)

NOTE: a different sense of “with” here than in Parse 1.

InheritanceLink telescope_2_1 (WordSenseNode “WN_telescope#1”)

EvaluationLink
    saw_2_1
    I_2_1
    man_2_1

EvaluationLink
    with_2_1
    saw_2_1
    telescope_2_1

Semantic Atoms for Parse 2, second interpretation

Interpretation2_2 = InterpretationNode "2nd interpretation of parse 2"
InterpretationLink Interpretation2_2 Parse2

I_2_2 = ConceptNode “I_2_2”
saw_2_2 = PredicateNode “saw_2_2”
man_2_2 = ConceptNode “man_2_2”
with_2_2 = PredicateNode “with_2_2”
telescope_2_2 = ConceptNode “telescope_2_2”

ContextLink
    Interpretation2_2
    ReferenceLink I2 I_2_2

ContextLink
    Interpretation2_2
    ReferenceLink saw2 saw_2_2

ContextLink
    Interpretation2_2
    ReferenceLink man2 man_2_2

ContextLink
    Interpretation2_2
    ReferenceLink with2 with_2_2

ContextLink
    Interpretation2_2
    ReferenceLink telescope2 telescope_2_2

InheritanceLink I_2_2 (WordSenseNode “I”)

InheritanceLink saw_2_2 (WordSenseNode “WN_saw#1”)

NOTE: a different sense of “saw” here than in Interpretation2_1.

InheritanceLink man_2_2 (WordSenseNode “WN_man#1”)

InheritanceLink with_2_2 (WordSenseNode “TPP_with#2”)

InheritanceLink telescope_2_2 (WordSenseNode “WN_telescope#1”)

EvaluationLink
    saw_2_2
    I_2_2
    man_2_2

EvaluationLink
    with_2_2
    saw_2_2
    telescope_2_2

Why So Many Atoms??!!

Note that in the above, for instance, we have separate nodes for

ConceptNode “man_2_1”
ConceptNode “man_2_2”

which may seem odd, as it's basically the same man being referred to in the two interpretations. But this is because, while processing the sentence, it's not yet known whether these will actually turn out to refer to the same concept or not. If no differences between them are found, then PLN inference (or simpler inference heuristics) can determine that they are the same, and ultimately a heuristic can fuse them into the same Atom. But for the purposes of language understanding it's important to distinguish them; otherwise the system will get confused in some cases.
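For example, once inference concludes that the two instances co-refer, the simplest thing a merge heuristic could record is something like the following (just a sketch; whether to use a SimilarityLink, some dedicated identity link, or an outright merge of the Atoms is an open design choice):

 ; "These two instance concepts denote the same man": high strength and
 ; high confidence, so a later heuristic may safely fuse the two Atoms.
 (SimilarityLink
   (ConceptNode "man_2_1")
   (ConceptNode "man_2_2")
   (stv 1.0 0.9))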

Short Term Strategy

The above is all very well, but at the moment (Dec 2013), we want to get RelEx2Logic outputting sensible and reasonably robust Atomese, even though we don't have sense disambiguation hooked up to RelEx2Logic...

So in practice, for now, for each parse output by RelEx / the link parser, we will use only one default interpretation, which will always be interpretation 1 of that parse. But we need to use the right structures so we can deal with multiple interpretations of each parse later on. Also, for now, instead of Atoms like

ConceptNode “WN_saw#1”

we may just use

ConceptNode “saw”

But this is just a crude simplification for now. Later we will need to build a process that replaces these crude Atoms with properly disambiguated ones, or the system won’t really understand what it’s reading.
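In other words, the crude interim form and its eventual disambiguated replacement differ only in which general node the instance inherits from; a sketch (the instance name is illustrative):

 ; Crude interim form, produced without sense disambiguation
 ; (the instance name here is just illustrative):
 (InheritanceLink (PredicateNode "saw_instance_1") (ConceptNode "saw"))

 ; What a later disambiguation pass would replace it with:
 (InheritanceLink (PredicateNode "saw_instance_1") (ConceptNode "WN_saw#1"))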

Another note: we don't *just* want the RelEx2Logic output in the Atomspace. We also want, in general, the syntactic output from the link parser. But there is already code for generating this from RelEx (see RelEx OpenCog format). For many kinds of logical reasoning, the RelEx2Logic output will be enough. But for some kinds of understanding, the system will need to use all the different kinds of output together.

Notational Clarification

This section attempts to minimize confusion by explicitly relating the notation used here to that used in the SV_rule section of the RelEx2Logic Rules page.

For the SV rule (as a simple example), the Scheme helper function as defined on that page is

(define (SV-rule subj_concept subj_instance verb verb_instance)
    ; instance-specific predicate and concept nodes for this parse
    (define new_predicate (PredicateNode verb_instance))
    (define new_concept_subj (ConceptNode subj_instance))
    ; general predicate and concept nodes
    (define verb_node (PredicateNode verb))
    (define subj_node (ConceptNode subj_concept))
    ; link the instances to the general nodes, then state the relation
    (InheritanceLink new_predicate verb_node)
    (InheritanceLink new_concept_subj subj_node)
    (EvaluationLink new_predicate new_concept_subj)
)

[I've removed the ANDLink which seems unnecessary to me at the moment]

Mapping the notation there into the notation on this page, in the context of the example

_subj(smile, Pumpkin)

given there, we'd have

new_predicate is (PredicateNode "smile_1")

new_concept_subj is (ConceptNode "Pumpkin_1")

verb_node is (PredicateNode "smile")

subj_node is (ConceptNode "Pumpkin")

However, the RelEx2Logic output as described on that page doesn't handle the existence of multiple parses and interpretations, nor the links from concepts to words. I guess that too will have to be dealt with, via additional Scheme functions that create appropriate links connecting the output of RelEx2Logic with the regular RelEx Scheme output?
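One possible shape for such glue, purely as a sketch (none of these helpers exist yet; cog-name is the standard Scheme call that returns a node's name), would be a function that creates a default InterpretationNode for a parse and scopes the RelEx2Logic output under it:

 ; Hypothetical glue: given a ParseNode and the list of atoms that
 ; RelEx2Logic produced for that parse, create a default InterpretationNode,
 ; tie it to the parse, and scope the R2L output under it via ContextLinks.
 (define (make-default-interpretation parse-node r2l-atoms)
   (let ((interp (InterpretationNode
                   (string-append (cog-name parse-node) "_interpretation_1"))))
     (InterpretationLink interp parse-node)
     (for-each
       (lambda (atom) (ContextLink interp atom))
       r2l-atoms)
     interp))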