
This page describes a current experiment using hand-coded rules to map RelEx output into sets of relationships utilizing FrameNet and other similar semantic resources.

Note: the Relex2Frame system was never accurate enough for practical use. It has been removed from RelEx version 1.5.0. It last appears in version 1.4.2, which is available for download at the usual sites. The Relex2Frame code was quite large, consisting of thousands of rules and a considerable mass of Java code. Unfortunately, its lack of usefulness meant that it mostly just cluttered things up and made code maintenance difficult, so it was removed.

Semantic Frame Output

Semantic framing, or normalization, provides a higher-level, more abstract, but semantically more tractable description of the parsed sentence. An example of the semantic frame output, for the sentence "Alice looked at the cover of Shonen Jump.", is shown below. Note, for example, the identification of "the cover of Shonen Jump" as a part-whole relationship: the cover is identified as a part of the magazine. The goal of such framing is to assist cognitive reasoning; rather than requiring a large common-sense database to deduce that the cover of a magazine is a part of that magazine, this information can be inferred directly from the sentence itself, based on its linguistic structure and a relatively small set of framing rules.


The framing rules are specified as simple IF..THEN rules, and are evaluated using a simple forward-chaining reasoner. The forward-reasoner code is home-grown; it might make sense to replace it with generic forward-reasoning code, ideally one of the smallest and simplest yet still widely supported and popular open-source rule engines. Anything implementing JSR 94 (the Java Rule Engine API) seems like a good bet.
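
To illustrate what such a forward chainer does, here is a minimal sketch in Python. This is purely hypothetical and is not the actual RelEx2Frame implementation: a rule fires when all of its condition facts are present, its action facts are added, and iteration continues until nothing new is derived.

```python
# Minimal forward-chaining sketch (hypothetical; not the actual RelEx2Frame
# code): a rule fires when all of its condition facts are present, its action
# facts are added, and iteration continues until nothing new is derived.

def forward_chain(facts, rules):
    """facts: iterable of relationship strings; rules: list of (conditions, actions)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, actions in rules:
            if all(c in facts for c in conditions):
                new = set(actions) - facts
                if new:
                    facts |= new
                    changed = True
    return facts

rules = [
    (["imperative(put)"], ["^1_Placing:Agent(put,you)"]),
    (["_obj(put,ball)"], ["^1_Placing:Theme(put,ball)"]),
]
print(sorted(forward_chain(["imperative(put)", "_obj(put,ball)"], rules)))
```

A real rule engine would add pattern matching with variables and conflict resolution; the fixpoint loop above is just the core idea.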

Syntax for FrameNet relationships

The syntax decided upon for describing semantic relationships drawn from FrameNet and other sources is exemplified by the following:

^1_Benefit:Benefitor(give, $var0)

The ^1 indicates the data source; the number 1 indicates that the resource is FrameNet. The "give" indicates the word in the original sentence from which the relationship is drawn, i.e. the word that embodies the given semantic relationship. So far the resources we've utilized are:

  1. FrameNet
  2. Custom relationship names from Novamente.

but using other resources in the future is quite possible.

An example using a custom Novamente relationship would be:


which defines an inheritance relationship: something that is part of Novamente's ontology but not part of FrameNet.

The "Benefit" part of the first example indicates the frame, and the "Benefitor" indicates the frame element. This distinction (frame vs. frame element) is particular to FrameNet; other knowledge resources might use a different sort of identifier. In general, whatever lies between the underscore and the initial parenthesis should be considered particular to the knowledge resource in question, and may have a different format and semantics depending on that resource (but shouldn't contain parentheses or underscores unless they are preceded by an escape character).
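
To make the label format concrete, here is a hypothetical Python parser for labels of this shape. The regular expression, function name, and returned fields are illustrative assumptions, not part of RelEx (and, for simplicity, it ignores the escape-character case mentioned above).

```python
import re

# Hypothetical parser for labels like "^1_Placing:Agent(put,you)", i.e.
# ^<source>_<frame>:<frame-element>(<args>). The frame/frame-element split is
# FrameNet-specific; other knowledge sources may format the middle differently.
LABEL = re.compile(r"\^(\d+)_([^:()]+):([^()]+)\((.*)\)$")

def parse_relationship(text):
    m = LABEL.match(text)
    if m is None:
        raise ValueError("unrecognized relationship: " + text)
    source, frame, element, args = m.groups()
    return {
        "source": int(source),   # 1 = FrameNet, 2 = custom Novamente names
        "frame": frame,
        "element": element,
        "args": [a.strip() for a in args.split(",")],
    }

print(parse_relationship("^1_Locative_relation:Figure(ball)"))
```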

Example FrameNet-izations of Simple Sentences

Note that these were created by hand, rather than by applying transformation rules to the output of RelEx. However, following the creation of this file by hand, mapping rules were created that (when applied by hand) produce similar results. These rules are attached to this page in another file, in rough-draft form.

As an example, consider:

Put the ball on the table

RelEx output:

imperative(Put) [1]
_obj(Put, ball) [1]
on(Put, table) [1]
singular(ball) [1]
singular(table) [1]

FrameNet Mapping Rules:

$var0 = ball
$var1 = table
# IF imperative(put) THEN ^1_Placing:Agent(put,you)
# IF _obj(put,$var0) THEN ^1_Placing:Theme(put,$var0)
# IF on(put,$var1) & _obj(put,$var0) THEN ^1_Placing:Goal(put,$var1) \
 ^1_Locative_relation:Figure($var0) ^1_Locative_relation:Ground($var1)

FrameNet Mapping:

^1_Placing:Agent(put,you)
^1_Placing:Theme(put,ball)
^1_Placing:Goal(put,table)
^1_Locative_relation:Figure(ball)
^1_Locative_relation:Ground(table)

Syntax for FrameNet Mapping Rules

The syntax used for rules mapping RelEx to FrameNet, at the moment, looks like this:

# IF imperative(put) THEN ^1_Placing:Agent(put,you)
# IF _obj(put,$var0) THEN ^1_Placing:Theme(put,$var0)
# IF on(put,$var1) & _obj(put,$var0) THEN ^1_Placing:Goal(put,$var1) \
 ^1_Locative_relation:Figure($var0) ^1_Locative_relation:Ground($var1)

The syntax may get refined a little once the script for processing the rules is made.

Basically, each rule looks like

# IF condition THEN action

where the condition is a series of RelEx relationships, and the action is a series of FrameNet relationships.

The arguments of the relationships may be words, or may be variables, in which case their names must start with $. (An escape character like "\" must be used, I suppose, to handle cases where the character "$" starts a literal word.) The only variables appearing in the action should be ones that appeared in the condition.
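
A sketch of how such variable-binding matching might work, in Python. The (name, argument-list) tuple representation and the function names are assumptions made for this illustration.

```python
# Sketch of matching a rule condition such as "_obj(put,$var0)" against ground
# RelEx relationships, binding $-prefixed variables. The (name, args) tuple
# representation is an assumption made for this illustration.

def match_term(pattern, fact, bindings):
    """Try to match one pattern atom against one fact, extending bindings."""
    pname, pargs = pattern
    fname, fargs = fact
    if pname != fname or len(pargs) != len(fargs):
        return None
    b = dict(bindings)
    for p, f in zip(pargs, fargs):
        if p.startswith("$"):
            if b.get(p, f) != f:
                return None  # variable already bound to something else
            b[p] = f
        elif p != f:
            return None      # literal word mismatch
    return b

def match_condition(patterns, facts):
    """Return every variable binding under which all patterns match some fact."""
    results = [{}]
    for pat in patterns:
        results = [b2 for b in results for fact in facts
                   if (b2 := match_term(pat, fact, b)) is not None]
    return results

facts = [("_obj", ["put", "ball"]), ("on", ["put", "table"])]
condition = [("on", ["put", "$var1"]), ("_obj", ["put", "$var0"])]
print(match_condition(condition, facts))  # one binding: ball and table
```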

A Priori Probabilities For Rules

It may be useful to attach a priori, heuristic probabilities to rules, say

# IF _obj(put,$var0) THEN ^1_Placing:Theme(put,$var0) <.5>

to denote that the a priori probability for the rule is 0.5.

This is a crude mechanism, and it's not yet clear how useful it will be. (The reason it's crude is that the probability of a rule being useful, in reality, depends so much on context.)
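
Parsing such an annotation off a rule line could look like the following sketch; the 1.0 default for unannotated rules is an assumption of this sketch, not something the rule format specifies.

```python
import re

# Sketch of stripping a trailing "<.5>" probability annotation from a rule
# line; the 1.0 default for unannotated rules is an assumption of this sketch.
PROB = re.compile(r"<(\d*\.?\d+)>\s*$")

def split_probability(rule_text):
    m = PROB.search(rule_text)
    if m is None:
        return rule_text, 1.0
    return rule_text[:m.start()].rstrip(), float(m.group(1))

rule = "# IF _obj(put,$var0) THEN ^1_Placing:Theme(put,$var0) <.5>"
print(split_probability(rule))
```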

Exclusions Between Rules

It may be useful to specify that two rules can't semantically-consistently be applied to the same RelEx relationship.

To do this, we need to associate rules with labels, and then specify exclusion relationships such as

# IF on(put,$var1) & _obj(put,$var0) THEN ^1_Placing:Goal(put,$var1) \
  ^1_Locative_relation:Figure($var0) ^1_Locative_relation:Ground($var1) [1]
# IF on(put,$var1) & _subj(put,$var0) THEN ^1_Performing_arts:Performance(put,$var1) \
  ^1_Performing_arts:Performer(put,$var0) [2]

In this example, Rule 1 would apply to "He put the ball on the table", whereas Rule 2 would apply to "He put on a show". The exclusion says that generally these two rules shouldn't be applied to the same situation. Of course some jokes, poetic expressions, etc., may involve applying excluded rules in parallel.

For maximal flexibility, the FrameNet-ization of a RelEx parse would include the application of all applicable rules, regardless of exclusions; the reasoning process that acts on the FrameNet-ization would then make use of the exclusion relationships in the course of its reasoning.
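
That apply-everything-and-record-exclusions policy could be sketched as follows; the numeric rule labels and the data-structure layout are hypothetical choices for this illustration.

```python
# Sketch of the "apply everything, record exclusions" policy described above:
# every applicable rule fires, and the exclusion pairs travel along as data
# for a downstream reasoner instead of suppressing output here. The numeric
# rule labels and the table layout are hypothetical.

def apply_all(rule_outputs, exclusions):
    """rule_outputs: {rule_label: [relations]}; exclusions: set of frozenset label pairs."""
    relations = [(label, rel)
                 for label, rels in rule_outputs.items()
                 for rel in rels]
    fired = {label for label, _ in relations}
    # keep only the exclusion pairs where both rules actually fired
    conflicts = [pair for pair in exclusions if pair <= fired]
    return relations, conflicts

outputs = {
    1: ["^1_Placing:Goal(put,table)"],
    2: ["^1_Performing_arts:Performance(put,table)"],
}
relations, conflicts = apply_all(outputs, {frozenset({1, 2})})
print(relations, conflicts)
```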

Example Rules

Text files attached to this page contain example rules, abstracted by mapping a collection of sentences intended as commands for virtual pets, and by mapping sentences from the Simple English Wikipedia.

At the moment, the rules in the attached files are in a somewhat "raw" state, and are in need of substantial revision. However, they are included here for their evocative value.

As an example of the known issues with the files, the first arguments of FrameNet relations seem to be omitted. For instance, the file has

# IF _subj(understand,$var0) THEN ^1_Grasp:Cognizer($var0)

but it should have

# IF _subj(understand,$var0) THEN ^1_Grasp:Cognizer(understand,$var0)

where the first argument of the FrameNet relationship Grasp:Cognizer is the entity that maps into the Grasp frame, and the second element ($var0) is the entity that maps into the Cognizer frame-element.

So far as I can tell the vast majority of issues with the files seem to be "cosmetic" formal ones like this that can be cleaned up without digging into FrameNet.

Here is a message written by Ben on Nov. 26 (2007) pertaining to the two-argument issue:

For example, I would replace

# IF _subj(show,$var0) THEN ^2_Demonstrate:Agent($var0)
# IF _obj(show,$var0) _obj2(show,$var1) ^ present($var1) THEN \
 ^2_Demonstrate:Action($var1) ^2_Demonstrate:Benefitee($var0)

with

# IF _subj(show,$var0) THEN ^2_Demonstrate:Agent(show,$var0)
# IF _obj(show,$var0) _obj2(show,$var1) ^ present($var1) THEN \
 ^2_Demonstrate:Action(show,$var1) ^2_Demonstrate:Benefitee(show,$var0)

The case where FrameNet is used to formalize a preposition frame can be handled similarly, i.e. one can say

# IF for($var0,$var1) ^ {present($var0) OR past($var0) OR \
 future($var0)} THEN ^2_Benefit:Benefitor($var1) ^2_Benefit:Act($var0)

with

# IF for($var0,$var1) ^ {present($var0) OR past($var0) OR \
 future($var0)} THEN ^2_Benefit:Benefitor(for,$var1) ^2_Benefit:Act(for,$var0)

In cases like

# IF _subj-a($Size,$var0) THEN ^1_Dimension:Measurement($Size) \
 ^2_Dimension:Object($var0)

I think we need to go with something like

# IF _subj-a($Size,$var0) THEN ^1_Dimension:Measurement(size,$Size) \
 ^2_Dimension:Object(size, $var0)

One way to look at my point is that all the relationships produced by a certain rule need to have some commonality that lets us tell they all came from the same rule.

Another, more thorough, way to look at it is as follows. The general syntax of the FrameNet-type relations produced by your rules should be of the form

^knowledge-source_frame-type:frame-element-type(frame-instance, \
  frame-element-instance)

For instance, in the "show" rules above:

  • "show" denotes the frame-instance
  • $var0 denotes the frame-element instance

Anyway this change is necessary for the output of the rules to be used by an AI system in the way that I want them to be: in the AI system, we will apply multiple rules to a sentence, producing a bunch of FN relations for that sentence; and for each of these FN relations, we need to know which frame-instance it corresponds to.
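
The point about frame-instances can be illustrated by grouping relations on their first argument, so a reasoner can tell which relations belong together. The representation here is a hypothetical sketch.

```python
from collections import defaultdict

# Sketch of why the first argument matters: grouping FN relations by their
# frame-instance so a reasoner can tell which relations belong together.
# The (frame-element label, argument list) representation is an assumption.

def group_by_instance(relations):
    groups = defaultdict(list)
    for label, args in relations:
        groups[args[0]].append((label, args[1:]))  # args[0] is the frame-instance
    return dict(groups)

relations = [
    ("^2_Demonstrate:Agent", ["show", "Bob"]),
    ("^2_Demonstrate:Action", ["show", "dance"]),
    ("^2_Benefit:Benefitor", ["for", "Mafia"]),
]
print(group_by_instance(relations))
```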

Finally, one complexity I'll mention is, consider the sentence:

"Bob says killing for the Mafia beats killing for the government"
uncountable(Bob) [6]
present(says) [6]
_subj(says, Bob) [6]
_that(says, beats) [3]
uncountable(killing) [6]
for(killing, Mafia) [3]
singular(Mafia) [6]
definite(Mafia) [6]
hyp(beats) [3]
present(beats) [5]
_subj(beats, killing) [3]
_obj(beats, killing_1) [5]
uncountable(killing_1) [5]
for(killing_1, government) [2]
definite(government) [6]

In this case there are two instances of "for". The script that transforms this into FN relationships will need to take care to distinguish the two different "for"s (or we might want to modify RelEx to make this distinction). It might do something like insert an index after the second "for", as in

uncountable(Bob) [6]
present(says) [6]
_subj(says, Bob) [6]
_that(says, beats) [3]
uncountable(killing) [6]
for(killing, Mafia) [3]
singular(Mafia) [6]
definite(Mafia) [6]
hyp(beats) [3]
present(beats) [5]
_subj(beats, killing) [3]
_obj(beats, killing_1) [5]
uncountable(killing_1) [5]
for_1(killing_1, government) [2]
definite(government) [6]
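
The index-insertion suggested above could be sketched like this; the (name, argument-list) relation representation is again a hypothetical choice.

```python
from collections import defaultdict

# Sketch of the index-insertion suggested above: the second and later
# occurrences of the same relation name get a numeric suffix (for, for_1, ...).

def index_duplicates(relations):
    counts = defaultdict(int)
    out = []
    for name, args in relations:
        n = counts[name]
        counts[name] += 1
        out.append((name if n == 0 else f"{name}_{n}", args))
    return out

relations = [("for", ["killing", "Mafia"]), ("for", ["killing_1", "government"])]
print(index_duplicates(relations))
```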

That way upon applying the rule:

# IF for($var0,$var1) ^ {present($var0) OR past($var0) OR future($var0)} \
 THEN  ^2_Benefit:Benefitor(for,$var1) ^2_Benefit:Act(for,$var0)

we would obtain:

^2_Benefit:Benefitor(for, Mafia) ^2_Benefit:Act(for, killing)
^2_Benefit:Benefitor(for_1, government) ^2_Benefit:Act(for_1, killing_1)

Here the first argument of the FN relationships allows us to correctly associate the different acts of killing with the different benefitors.

Comparatives and Phantom Nodes

A bit of subtlety is needed to deal with sentences like

Mike eats more cookies than Ben.

which RelEx should handle (though at the time of writing, Jan 6, 2007, it does not) as something like

_subj(like, Mike)
_subj(eat, Mike)
_obj(eat, cookie)
more(cookie, $cVar0)

Then we could use a RelEx2FrameNet mapping rule such as:

^2_AsymmetricEvaluativeComparison:ProfiledItem(more, $var1)
^2_AsymmetricEvaluativeComparison:StandardItem(more, $var1_1)
^2_AsymmetricEvaluativeComparison:Valence(more, more)

which embodies the commonsense intuition about comparisons regarding eating.

Note that I introduced a new frame AsymmetricEvaluativeComparison here, by analogy to the FrameNet frame Evaluative_comparison.

Note also that the above rule may be too specialized, though it's not incorrect. One could also try more general rules like

^2_AsymmetricEvaluativeComparison:ProfiledItem(more, $var1)
^2_AsymmetricEvaluativeComparison:StandardItem(more, $var1_1)
^2_AsymmetricEvaluativeComparison:Valence(more, more)

Note that this rule is a little different from our current RelEx2Frame rules, in that it produces output that then needs to be processed by the RelEx2Frame rule-base a second time.


Below are a few comments on how these mapping rules are intended to be used, and on their limitations.

The Need for Inference

Let's suppose we have created a more comprehensive list of rules than the ones contained in the documents attached to this wiki page. And let's suppose that we have created a script that applies these rules to the RelEx relationships output from applying RelEx to a sentence. Then what?

The first conceptual point to be understood is that, in general, there may be multiple rules with identical or overlapping left-hand sides. This means that we may get a whole bunch of FrameNet relationships from a sentence, some correct and some incorrect.

The choice of which of multiple rules to apply in a given context must be made via inference that takes context into account.

This choice will be easier when there is a nonlinguistic context to utilize, e.g. when the sentence interpretation process is occurring in the context of embodied activity in a world.

Note that in reality this choice will be mixed-up with the choice of a RelEx parse, too. So there will be multiple parses, each with multiple interpretations, and contextual inference must be used to choose the (parse, interpretation) combination that makes the most sense.

Creating a Cyc-Like Database via Text Mining

Once there are enough RelEx2FrameNet mapping rules in place, we can use these to create a vaguely Cyc-like database of common-sense rules.

The approach would be as follows:

  • Get a corpus of text
  • Parse the text using RelEx
  • Transform the parses into FrameNet-relationship-sets using mapping rules
  • Rank the parses and FrameNet interpretations using inference or heuristics or both
  • Mine logical relationships among FrameNet relationships from the data thus produced, using greedy data-mining, MOSES, or whatever

These mined logical relationships will then be analogous to the rules the Cyc team have programmed in. For instance, there will be many rules like:

# IF _subj(understand,$var0) THEN ^1_Grasp:Cognizer(understand,$var0)
# IF _subj(know,$var0) THEN ^1_Grasp:Cognizer(know,$var0)

So statistical mining would learn rules like

IF ^1_Mental_property(stupid) & ^1_Mental_property:Protagonist($var0) 
  THEN ^1_Grasp:Cognizer(understand,$var0) <.3>
IF ^1_Mental_property(smart) & ^1_Mental_property:Protagonist($var0) 
  THEN ^1_Grasp:Cognizer(understand,$var0) <.8>

which means that stupid people mentally grasp less than smart people do.

Note that these commonsense rules would come out automatically probabilistically quantified.
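
The probabilistic quantification amounts to estimating a conditional probability from co-occurrence counts, roughly as in the toy sketch below; the corpus here is fabricated purely to show the computation.

```python
# Toy sketch of where the probabilistic quantification would come from:
# estimating P(consequent | antecedent) from sentence-level co-occurrence
# counts. The corpus below is fabricated purely to show the computation.

def rule_probability(sentences, antecedent, consequent):
    """Each sentence is represented as the set of FN relations extracted from it."""
    with_antecedent = [s for s in sentences if antecedent in s]
    if not with_antecedent:
        return 0.0
    return sum(1 for s in with_antecedent if consequent in s) / len(with_antecedent)

corpus = [
    {"^1_Mental_property(smart)", "^1_Grasp:Cognizer(understand)"},
    {"^1_Mental_property(smart)", "^1_Grasp:Cognizer(understand)"},
    {"^1_Mental_property(smart)"},
    {"^1_Mental_property(stupid)", "^1_Grasp:Cognizer(understand)"},
]
p = rule_probability(corpus, "^1_Mental_property(smart)", "^1_Grasp:Cognizer(understand)")
print(round(p, 3))  # 2 of the 3 "smart" sentences also contain the Grasp relation
```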

Note also that to make them come out well, one needs to do some (probabilistic) synonym-matching on nouns, adverbs and adjectives, e.g. so that mentions of "smart", "intelligent", "clever", etc. will all count as instances of the same underlying concept.

By combining probabilistic synonym matching on words with the mapping of RelEx output into FrameNet relationships, and doing statistical mining, it should be possible to build a database like Cyc but far more complete and with coherent probabilistic weightings.

Note that, though this way of building a commonsense knowledge base requires a lot of human engineering, it requires far less than something like Cyc: one just needs to build the RelEx2FrameNet mapping rules, not all the commonsense knowledge relationships directly -- those come from the text.

I don't view this as a solution to the AI problem, but I think it could produce a bunch of useful knowledge to pump into an AI's brain.

I also note that, the better an AI one has, the better one can do the step labeled "Rank the parses and FrameNet interpretations using inference or heuristics or both" above. So there is a potential virtuous cycle here: more commonsense knowledge mined helps create a better AI mind, which helps mine better commonsense knowledge, etc.