Dialogue Controller

A Dialog Controller (DC) is a component of the Language Comprehension module that provides a way of establishing a dialog between two agents. A DC can be prepared to answer questions or to start a new conversation spontaneously. For instance, the internal state of a given agent can be monitored; once a given state is recognized by the monitor (say, the agent is hungry), the corresponding DC will 'communicate' that need to another agent.

How it works

Dialog Controllers are components of the LanguageComprehension module, and all DCs live inside that module. At each update cycle (tick), all DCs are updated, and each DC can produce a sentence to be 'said' to another agent. If more than one DC produces a sentence, a "chooser mechanism" selects the best sentence to be 'said' at that specific moment. Figure 1 shows a diagram of the modules involved in NLP processing.

DialogController.png
Figure 1 - Modules used for NLP processing
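The following sketch illustrates that update-and-choose loop. It assumes a simplified DialogController interface; the real class and loop live in the OpenCog embodiment code and differ in detail.

// Minimal sketch of the per-tick loop, assuming a simplified
// DialogController interface (the real one differs in detail).
#include <iostream>
#include <string>
#include <vector>

struct DialogController {
    virtual ~DialogController() {}
    // Returns a sentence to be 'said', or an empty string if this DC
    // has nothing to say on this tick.
    virtual std::string update() = 0;
};

// Placeholder chooser; in the real system the choice is delegated to
// a Scheme script (see "Choosing the best answer" below).
std::string chooseBestSentence(const std::vector<std::string>& sentences) {
    return sentences.front();
}

void tick(const std::vector<DialogController*>& dcs) {
    std::vector<std::string> produced;
    for (DialogController* dc : dcs) {
        std::string sentence = dc->update();
        if (!sentence.empty()) produced.push_back(sentence);
    }
    if (!produced.empty())
        std::cout << "say: " << chooseBestSentence(produced) << std::endl;
}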

In a general way, we can classify the Dialog Controllers into two categories: Respondents and Talkers. Respondent DCs do some processing only in response to an external stimulus (something was said). Talker DCs do some processing whenever they judge it necessary. For instance:

Respondents
   Question Answerer
   MegaHal ChatBot
   PowerSet
   Arguer

Talkers
   Rambler
   NeedArticulator
   PowerSet QuestionAsker

In fact, this is just a logical classification; it has nothing to do with the way the DCs are invoked, and the treatment given by the LanguageComprehension module is the same for both types. Internally, each DC must check the AtomTable to "see" whether it needs to do something. The Arguer, for example, must check whether something was said and then look into the agent's AtomTable to verify whether there is a contradiction between what was said and what the agent believes; if so, the DC does its job. OTOH, the NeedArticulator will, at each execution, inspect the predicates that represent the agent's physiological needs to verify whether the agent itself needs to ask another agent for something (e.g. food).
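As an illustration, here is a hypothetical Talker in the style of the NeedArticulator, using the simplified interface from the sketch above. The AgentState type and its hungry flag are stand-ins for the real physiological-need predicates stored in the AtomTable.

// Hypothetical Talker: at each execution it inspects the agent's
// (simplified) physiological state and, when a need is detected,
// produces a sentence asking another agent for help.
struct AgentState {
    bool hungry; // stand-in for the hunger predicate in the AtomTable
};

struct NeedArticulatorSketch : public DialogController {
    const AgentState& state;
    explicit NeedArticulatorSketch(const AgentState& s) : state(s) {}

    std::string update() override {
        if (state.hungry)
            return "I am hungry. Can you give me some food?";
        return ""; // no need detected, nothing to say on this tick
    }
};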

Creating a new Dialog Controller

In order to create a new Dialog Controller, one needs to extend the class DialogController, implement the update mechanism of the new DC, and then register the final class into the LanguageComprehension module. There are some examples of already-built DCs in the file opencog/embodiment/Control/OperationalAvatarController/LanguageComprehensionDialogController.cc. If you want to better understand how to create a new DC, please refer to that file.
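The sketch below shows the shape of such an extension, again using the simplified interface from the earlier sketches; the real base class, update signature, and registration call are defined in the files cited above and will differ.

// Sketch of a new DC that greets once and then stays silent.
struct GreeterDialogController : public DialogController {
    bool greeted = false;

    // The update mechanism: invoked at each tick, it may return a
    // sentence to be 'said'.
    std::string update() override {
        if (!greeted) {
            greeted = true;
            return "Hello there!";
        }
        return "";
    }
};

// Registration (hypothetical call; the real LanguageComprehension
// exposes its own method for registering DCs):
// languageComprehension.addDialogController(new GreeterDialogController());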

Choosing the best answer

As stated above, all DCs are executed at each update cycle of the agent, so more than one sentence may be produced to be sent to another agent as speech. After all DCs have been updated, the produced sentences are filtered and just one is chosen. That filtering process is executed by an external script written in Scheme. An AnchorNode is used to mark the latest sentences produced by the DCs, as follows:

ListLink [1.0, 1.0]
   AnchorNode "# Possible Sentences"
   SentenceNode "The green ball is inside the large box"
   SentenceNode "The grass is green"
   SentenceNode "There is a box under the table"

A Scheme function, (choose-sentence), is invoked to select the best sentence among all those produced by the DCs. That function can be found in opencog/embodiment/scm/language-comprehension.scm. It returns a string containing the chosen sentence. Given that this function is a Scheme program, it can easily be modified without recompiling the code; besides, it is possible to define any desired heuristic to select the best answer.
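On the C++ side, invoking the chooser might look like the following sketch. It assumes OpenCog's SchemeEval evaluator; the actual call site inside LanguageComprehension may differ.

// Sketch: calling the Scheme chooser from C++ via OpenCog's Scheme
// evaluator. Setup of the evaluator and AtomSpace is omitted.
#include <opencog/guile/SchemeEval.h>
#include <string>

std::string chooseSentence(opencog::SchemeEval& evaluator) {
    // (choose-sentence) inspects the SentenceNodes attached to the
    // "# Possible Sentences" AnchorNode and returns the chosen one.
    return evaluator.eval("(choose-sentence)");
}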