Probabilistic logic networks

From OpenCog

PLN is a system intended for uncertain inference. Although conceptually interesting and potentially useful, it has been abandoned as of 2021. It is one of the OpenCog Fossils.

There are several reasons for its abandonment:

  • It was built on a fragile subsystem (the URE) that did not leverage the strengths of Atomese. Thus, the URE became difficult to maintain, difficult to debug, and had a raft of performance and scaling issues. It ran slowly, and, to add insult to injury, it had a hard-to-use API.
  • The PLN rules themselves are hand-curated. They are carefully selected by experts, and crafted to perform specific kinds of logical reasoning. As such, they are quite typical of what one might find in books on proof theory, or in expositions of first-order logic. The principal difference is that the PLN rules added the ability to work with both uncertainty and confidence: uncertainty was handled in a Bayesian maximum-likelihood sense, more or less, while confidence collapsed the Bayesian likelihood distribution into a single number. (A sketch of this two-number truth value follows this list.)
  • As hand-curated rules, they suffer as all hand-curated narrow-AI systems do: they must be maintained by human experts. Any one given rule has some chance of being buggy, creating undesired outcomes. Sequences of rules may lead to unforeseen, undesired conclusions. There is an overall sense of fragility that must be constantly battled by human experts tweaking the system.
  • As probabilistic rules, they suffer from a combinatorial explosion of possibilities. Thus reasoning can never be particularly deep, and attention allocation systems must be used to select fruitful directions of inference.
  • The result of decision-making is similar to a decision tree. Yet, from standard machine learning textbooks, one knows that a single decision tree is never as good as a decision-tree forest. The PLN subsystem, being envisioned as a rule system, never provided any infrastructure for managing "reasoning forests", or multiple, independent "reasoning agents".
  • The role of "logical reasoning" within AGI is unclear. It may perhaps be useful at some low level; but certainly, at the human level, this is not how humans perform common-sense reasoning. This is not how children reason about the world around them, nor do courts, judges and juries apply Bayesian inference when determining guilt. As a simplified Bayesian system, PLN can't be used in either of these contexts. Perhaps PLN could have been deployed as a kind of expert system, operating in a limited domain (such as petroleum engineering or medical diagnosis), but such limited-domain reasoning was never pursued.
  • In the end, it seems that the reasoning rules need to be learned, by interacting with the world, rather than hand-coded by experts. Neither the URE, nor PLN itself provided the kind of infrastructure needed for learning rules.
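
As an aside, the two-number truth value mentioned above is easy to sketch. Below is a minimal illustration in Python (not the actual OpenCog API; the names and the default k are assumptions) of a strength/confidence pair, using the standard PLN-style mapping c = n/(n+k) from an evidence count n to a confidence:

  from dataclasses import dataclass

  @dataclass
  class SimpleTruthValue:
      strength: float    # maximum-likelihood probability estimate, in [0, 1]
      confidence: float  # how much evidence backs that estimate, in [0, 1)

  def confidence_from_count(n: float, k: float = 800.0) -> float:
      # Collapse an evidence count n into a single confidence number via
      # c = n / (n + k), where k is a "lookahead" parameter; 800 is a
      # value seen in OpenCog sources, but treat it as an assumption here.
      return n / (n + k)

  # e.g. 30 positive observations out of 40:
  tv = SimpleTruthValue(strength=30 / 40, confidence=confidence_from_count(40))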

Introduction

PLN is a novel conceptual, mathematical and computational approach to uncertain inference. In order to carry out effective reasoning in real-world circumstances, AI software must robustly handle uncertainty. However, previous approaches to uncertain inference do not have the breadth of scope required to provide an integrated treatment of the disparate forms of cognitively critical uncertainty as they manifest themselves within the various forms of pragmatic inference. Going beyond prior probabilistic approaches to uncertain inference, PLN is able to encompass within uncertain logic such ideas as induction, abduction, analogy, fuzziness and speculation, and reasoning about time and causality.

The goal underlying the theoretical development of PLN has been the creation of practical software systems carrying out complex, useful inferences based on uncertain knowledge and drawing uncertain conclusions. PLN has been designed to allow basic probabilistic inference to interact with other kinds of inference such as intensional inference, fuzzy inference, and higher-order inference using quantifiers, variables, and combinators, and to be a more convenient approach than Bayes nets (or other conventional approaches) for the purpose of interfacing basic probabilistic inference with these other sorts of inference.

PLN begins with a term logic foundation, and then adds on elements of probabilistic and combinatory logic, as well as some aspects of predicate logic, to form a complete inference system, tailored for easy integration with software components embodying other (not explicitly logical) aspects of intelligence.

PLN was developed by Ben Goertzel, Matt Iklé, Izabela Freire Goertzel and Ari Heljakka as a cognitive algorithm used by MindAgents within the OpenCog Core. PLN was developed originally for use within the Novamente Cognition Engine.

PLN represents truth values as intervals, but with different semantics than in Imprecise Probability Theory.
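
A rough reading of these interval ("indefinite") truth values, following the treatment in the PLN book (the field names here are illustrative, not the OpenCog API): an interval [L, U] is paired with a credibility level b and a lookahead k, meaning roughly "with probability b, after k more observations the estimated mean will lie within [L, U]":

  from dataclasses import dataclass

  @dataclass
  class IndefiniteTruthValue:
      lower: float        # L: lower bound of the probability interval
      upper: float        # U: upper bound of the probability interval
      credibility: float  # b: chance that the true mean falls in [L, U]
      lookahead: int      # k: number of future observations assumed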

The basic goal of PLN is to provide reasonably accurate probabilistic inference in a way that is compatible with both Term Logic and Predicate Logic, and scales up to operate in real time on large dynamic knowledge bases.
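
To make the term-logic flavor concrete, here is a hedged sketch of PLN's best-known rule, deduction: from Inheritance A B and Inheritance B C, infer Inheritance A C. Only the independence-assumption strength formula is shown; the actual rule also computes a confidence, which is omitted here:

  def deduction_strength(s_ab: float, s_bc: float,
                         s_b: float, s_c: float) -> float:
      # Strength of (Inheritance A C) under an independence assumption:
      #   s_AC = s_AB * s_BC + (1 - s_AB) * (s_C - s_B * s_BC) / (1 - s_B)
      # where s_b and s_c are the node probabilities P(B) and P(C).
      if s_b >= 1.0:
          return s_c  # degenerate case: B covers everything
      return s_ab * s_bc + (1.0 - s_ab) * (s_c - s_b * s_bc) / (1.0 - s_b)

  # Example: if s_AB = 0.8, s_BC = 0.9, P(B) = 0.5, P(C) = 0.6,
  # then s_AC = 0.8*0.9 + 0.2*(0.6 - 0.45)/0.5 = 0.72 + 0.06 = 0.78.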

The current version of PLN has been used in narrow-AI applications such as inferring biological hypotheses from knowledge extracted from biological texts by language processing, and assisting the reinforcement learning of an embodied agent in a simple virtual world as it figured out how to play "fetch".

PLN was previously known as PTL or "Probabilistic Term Logic".

For a more thorough discussion of PLN, see the several published books (some with free PDFs online) linked via Background_Publications.

Old, obsolete implementations

Some historical notes:

  • The first codebase for PLN was written by Izabela Freire Goertzel in 2004-5, but this version only did first-order forward-chaining.
  • The first really thorough code base for PLN was written by Ari Heljakka in 2006. In 2008 Joel Pitt (with assistance from Cesar Mercondes during GSoC) ported this version from the Novamente Cognition Engine to OpenCog.
  • During 2011-2013, Jade O'Neill reimplemented the algorithm in Python.
  • More recently, in 2014, Misgana Bayetta created a new C++ implementation using the Unified Rule Engine; Nil Geisweiller completed it by 2018. It works; it even supports a sophisticated control mechanism (for the backward chainer), enabling inference control meta-learning. However, premise and rule selection based on ECAN does not work well. By 2021, this version had been abandoned.

For suggestions about how to do rule selection better, see the page on Adaptive Rule Selection.