Interactive Atomspaces


Specifications for software tools to assist in the control and visualization of AtomSpaces.

The system consists of a real-time graphical user interface for interacting with, editing, and analyzing OpenCog AtomSpaces. It supports the functionality needed for AGI development, analysis, and experimentation.

It facilitates the comprehension of AGI dynamics and allows a human to guide, tune, and understand the various mind processes in action.

Why is this needed? The evolution of a hypergraph is difficult to grasp, as is understanding its topology. A picture is worth a thousand words, and the alternative is to work with a textual representation of Nodes and Links.

There is also something alluring to outside researchers about a user-friendly interface for controlling complex processes. Many people are visual learners, which makes an OpenCog Workbench a powerful tool for helping them understand exactly what goes on within the system.

This will hopefully also assist interactive debugging and experimentation with OpenCog at the level of mind dynamics (as opposed to just debugging low-level programming bugs). Being able to interact with, tweak, and tune the system as it runs and observe the effects could accelerate the mind-engineering process.

Input Devices

Sensorium (plural: sensoria) refers to the sum of an organism's perception, the "seat of sensation" where it experiences and interprets the environments within which it lives.

http://en.wikipedia.org/wiki/Sensorium

Each input device provides a mapping from its real-time event stream into a layer of the AtomSpace.
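As a rough illustration of such a mapping, the sketch below turns a single key-press event into node and link descriptions for one input layer. The event fields, the layer name, and the (type, name) tuples are illustrative stand-ins, not the actual AtomSpace API.

    # Minimal sketch: map one keyboard event into atoms for an "input layer".
    # The structures below are illustrative stand-ins, not the real bindings.
    import time

    def key_event_to_atoms(key, pressed, layer="KeyboardLayer"):
        """Return (nodes, links) describing one key event."""
        stamp = time.time()
        nodes = [("ConceptNode", layer),
                 ("ConceptNode", "key:" + key),
                 ("NumberNode", str(stamp))]
        links = [("EvaluationLink",
                  ("PredicateNode", "pressed" if pressed else "released"),
                  ("ListLink", "key:" + key, str(stamp)))]
        return nodes, links

    nodes, links = key_event_to_atoms("a", pressed=True)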


Keyboard

  • Boolean map of key event state and history
  • Buffer of typed characters
  • Type rate (historic)

Pointer (Mouse)

  • Pixel x and y
  • World x and y
  • Button states

Multitouch = Multiple Pointers

See: http://ccv.nuigroup.com/ and http://en.wikipedia.org/wiki/Multi-Pointer_X

Camera

  • Snapshot control
  • Video recording mode

  • data degrades progressively in quality (ex: bitrate) as it ages

Optical Object Tracking and Gesture Recognition

Stereoscopic Vision

http://en.wikipedia.org/wiki/Natural_user_interface
http://en.wikipedia.org/wiki/Kinect

Optical Character Recognition

OCR: use an existing open-source library?

Voice

Provides a circular audio buffer

  • data degrades progressively in quality (ex: bitrate) as it ages
  • Each buffer segment provides FFT spectrum and volume analysis

Optional noise filtering
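A minimal sketch of the circular voice buffer described above is given below, assuming fixed-size PCM segments and a simple downsampling policy for ageing data; the buffer capacity and decay rule are assumptions.

    # Circular audio buffer: each segment carries its FFT spectrum and RMS
    # volume; older segments are progressively degraded by downsampling.
    from collections import deque
    import numpy as np

    MAX_SEGMENTS = 256            # assumed buffer capacity (circular)

    class VoiceBuffer:
        def __init__(self):
            self.segments = deque(maxlen=MAX_SEGMENTS)   # oldest dropped first

        def push(self, samples):
            """Append one segment of mono PCM samples and analyse it."""
            samples = np.asarray(samples, dtype=np.float32)
            spectrum = np.abs(np.fft.rfft(samples))          # magnitude spectrum
            volume = float(np.sqrt(np.mean(samples ** 2)))   # RMS volume
            self.segments.append({"samples": samples,
                                  "spectrum": spectrum,
                                  "volume": volume})

        def degrade(self, keep_recent=32):
            """Halve the sample rate of segments older than keep_recent."""
            for age, seg in enumerate(reversed(self.segments)):
                if age >= keep_recent and len(seg["samples"]) > 512:
                    seg["samples"] = seg["samples"][::2]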

Eye Tracking

Information could be displayed or manipulated with the use of the eye. An intelligent visualization system would always consider attentional focus points to create a rich and ever-changing interface.

Visual Attention Tracker: eye movement is a good basis for reverse-engineering what a person may be thinking about; coupled with a navigational interface, it would be possible to interact with computer software simply by looking at the screen.

http://en.wikipedia.org/wiki/Eye_tracking

Mind (Biofeedback and Brainwaves)

  • Arduino EEG
  • Emotiv
  • NeuroSky
  • LightStone


Visualization

Visualizing the OpenCog Atomspace in a useful way is a complex matter and there seems little doubt that many different approaches may work well for different purposes.

Large-scale visualization could be useful for understanding the overall topology of an Atomspace.

On the other hand, small-scale visualization could also be useful for graphically browsing the Atomspace, especially if coupled with an intuitive interface for navigating and filtering the graphs.

Some old thoughts on how to make a small-scale Atomspace visualizer are given in the following documents, which were written in the context of the Novamente Cognition Engine AtomTable, which was similar enough to the current OpenCog Atomspace that the visualization ideas are still basically applicable:

http://www.opencog.org/wiki/Image:Viz_NMGraph.pdf

http://www.opencog.org/wiki/Image:Viz_ExampleDiagrams.pdf

Also, one thing needed for mapping Atoms into existing graph visualizers is a mapping from the Atom hypergraph into a conventional graph. Some code doing this was written by Junfei Guo in 2008; the code and docs are here:

http://www.opencog.org/wiki/Image:UpdateFinalEvaluation.rar

The hypergraph-to-graph transformation is, however, very simple, and the Tulip exporter, the Dotty module, and the Ubigraph module all already do this.
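For reference, here is a minimal sketch of one common way to do this transformation: every Link becomes an ordinary vertex, with an edge to each Atom in its outgoing set. The Atom stand-in used here (handle, is_link, outgoing) is illustrative, not the real AtomSpace API.

    # Hypergraph -> graph: links become vertices; an edge joins a link-vertex
    # to each member of its outgoing set, keeping the position for ordered links.
    def atomspace_to_graph(atoms):
        """atoms: iterable of objects with .handle, .is_link and .outgoing."""
        vertices, edges = set(), []
        for atom in atoms:
            vertices.add(atom.handle)
            if atom.is_link:
                for position, target in enumerate(atom.outgoing):
                    edges.append((atom.handle, target.handle, position))
        return vertices, edges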

I envisage the view of the atomspace looking somewhat like the maps of the Opte Project, at least from a zoomed-out, distant viewpoint. A general page on Atomspace Visualization also exists.

Also, some other visualisation types:

  • Visualising the BIT as it expands (prototype by Jared).
  • MOSES combo tree viewer?


  • Graph Visualization
    • Dynamic Layouts (Multithreaded)
    • Force-Directed
    • Tree
    • Row, Column, Grid, Table
  • UbiGraph compatibility API
  • Sandbox for AGI self-exploration and simulation
  • "Laws of Physics" provides a lowest-common-denominator communication substrate
    • Optics and Vision
    • Dynamics and Collision Contact (touching, pushing, pulling)

There are a number of visualisation toolkits available. VTK is often used for visualizing scientific data, and ParaView is a generic viewer based on it.

It may, however, be worth designing the viewer from scratch (while making use of relevant OpenGL-based libraries, e.g. Ogre), since the interaction with the CogServer is quite specific and involves visualizing highly dynamic data, rather than the relatively static or slowly changing datasets more commonly used with VTK (as far as I can tell; it might not have this limitation!).

See the log visualisation done in Ruby here; this could possibly be used as a prototype. There is also the visualisation language Processing, which is gaining popularity.

Skyrails is a cool application for visualising small (~300 node) graphs. See a video of it in action on YouTube.

Visualization Parameters

Nodes and links can have different themes. These differ in the amount of data they can display visually.

For example, the simplest representation of a node is a filled circle. The colour inside the circle can be mapped to any characteristic of the node: it could be the type (each type should have a different colour associated with it, changeable in the preferences), or it could be one of ShortTermImportance, LongTermImportance, TruthValue, TruthValueConfidence, etc.

More complex representations of nodes would be two half-circles joined together, each side representing a different characteristic; likewise a circle split into three, four, or more sections. Another way to represent a value would be to map it to the size of the node.

Each characteristic should be able to map to a colour gradient. E.g. you could have a filled circle with the colour representing TruthValue. A gradient could be associated with TruthValues like Yellow->Green - atoms that are more green have higher TruthValues.
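A minimal sketch of that gradient mapping, assuming the TruthValue strength has already been normalised to [0, 1]; the colour endpoints follow the yellow-to-green example above.

    # Linear interpolation between two RGB colours according to strength.
    def gradient_colour(strength, low=(1.0, 1.0, 0.0), high=(0.0, 1.0, 0.0)):
        """strength in [0, 1]; low = yellow, high = green."""
        s = max(0.0, min(1.0, strength))
        return tuple(l + s * (h - l) for l, h in zip(low, high))

    # e.g. an atom with strength 0.8 renders mostly green
    print(gradient_colour(0.8))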

The basic idea is that the visualisation is highly configurable, since depending on what you are investigating you'll be interested in certain characteristics more than others. One possible mapping from atom properties to display axes is given in the table below, followed by a rough sketch of it in code.

Atom property      Display axis
Type (ordinal)     Brightness/Transparency
STI                Hue
LTI                Saturation
Strength           Value
Confidence         Size
(unassigned)       Shape/Linestyle
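A minimal sketch of the mapping in the table above, assuming each property has already been normalised to [0, 1]; the size range and the way type is turned into transparency are assumptions.

    # Map normalised atom properties onto HSV colour, transparency and size.
    import colorsys

    def atom_style(sti, lti, strength, confidence, type_rank, n_types):
        hue = sti                                  # STI            -> hue
        saturation = lti                           # LTI            -> saturation
        value = strength                           # strength       -> value
        r, g, b = colorsys.hsv_to_rgb(hue, saturation, value)
        alpha = (type_rank + 1) / float(n_types)   # type (ordinal) -> transparency
        size = 4.0 + 16.0 * confidence             # confidence     -> node size (px)
        return {"rgb": (r, g, b), "alpha": alpha, "size": size}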

Technology

  • C++
  • OpenGL, GLU, GLUT
  • SDL (Simple DirectMedia Layer)
  • FTGL (FreeType GL)
  • OpenMP (Multiprocessing Parallelization)

Requirements

One option is for the tool to have a 3D view implemented in OpenGL. This view displays nodes and the links between them, using an intuitive colouring scheme to visualise AtomTypes, their TruthValues, and AttentionValues. Not all this data need be displayed at once (see Visualization Parameters above).

Another, more adventurous, option would be to use a 3D virtual world like OpenSim, Multiverse, or Second Life to display the data. The basic 3D view notion would be the same as in the OpenGL idea, but the nodes and links would exist in a 3D virtual world, and the user's avatar could walk around and inspect different parts of the data. Data visualisation in virtual worlds is an up-and-coming area. Potentially, one could implement this using a modification of the proxy currently used to interface between the Novamente Pet Brain and virtual world servers like Multiverse.

The tool should be a separate process from OpenCog, accessing an OpenCog instance through a UDP connection. OpenCog should fire updates to the visualisation tool, and the tool should communicate to OpenCog what sort of updates it is interested in.
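A rough sketch of what that channel could look like, assuming simple JSON datagrams; the port number, message fields, and event names are placeholders, not an existing OpenCog protocol.

    # Subscribe to updates from a CogServer over UDP and receive them as JSON.
    import json, socket

    COG_HOST, COG_PORT = "127.0.0.1", 17001        # assumed server address

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    # Tell OpenCog which kinds of updates the tool is interested in.
    subscribe = {"command": "subscribe",
                 "events": ["atom-added", "atom-removed", "tv-changed", "av-changed"],
                 "types": ["ConceptNode", "InheritanceLink"]}
    sock.sendto(json.dumps(subscribe).encode(), (COG_HOST, COG_PORT))

    # Receive update datagrams and hand them to the visualiser.
    while True:
        data, _addr = sock.recvfrom(65535)
        update = json.loads(data)
        print(update["event"], update.get("handle"))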

The tool should also provide some ability to control the execution of OpenCog, providing a debugging tool of sorts. This is for dynamics debugging rather than code debugging.


Update Frequency

Updates should be able to occur at a number of resolutions. The simplest is after every cycle, but a finer resolution, where one can visualise the state after each MindAgent is run, would also be useful. This obviously wouldn't work for multithread/multiprocess OpenCog instances, but for single instances it would be extremely useful while debugging OpenCog dynamics. Whatever the update step is, the tool should display the cycle number and/or the last MindAgent run.

Updated atoms should maintain their positions in the 3D representation of the AtomSpace. New atoms should be dynamically added to the view as they appear.

Atoms that change their TruthValue, AttentionValue, or other characteristics should visually indicate this, perhaps by flashing/glowing briefly (exactly what causes a flash/glow should be configurable).

The ability of the visualisation tool to "listen" for AtomSpace updates may necessitate changes to the design of the AtomSpace. There is currently no way to monitor for changes in an efficient manner.

Filtering

The system generally displays only a subset of the complete AtomSpace at a time. An attention function maps atoms to scalar values, and these scores determine the contents of this subset.

This means it's possible to interactively "tune" what is visualized in analog ways - like a graphical equalizer for data.


Possible filtering criteria (a rough sketch follows the list):

  • By a central or focal atom, showing only atoms within 1-3 hops/links
  • By node and link type
  • By STI or LTI
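A minimal sketch of these ideas: an attention function that scores atoms against a user-tunable threshold, plus a neighbourhood filter around a focal atom. The atom accessors (sti, lti, tv_strength, type_name, neighbours) and the weights are assumptions, not the real bindings.

    # Attention-based and neighbourhood-based filtering of the displayed subset.
    def attention_score(atom, w_sti=1.0, w_lti=0.2, w_strength=0.5):
        return w_sti * atom.sti + w_lti * atom.lti + w_strength * atom.tv_strength

    def visible_atoms(atoms, threshold, allowed_types=None):
        """Yield only the atoms that pass the type and attention filters."""
        for atom in atoms:
            if allowed_types and atom.type_name not in allowed_types:
                continue
            if attention_score(atom) >= threshold:
                yield atom

    def neighbourhood(focus, hops=2):
        """Atoms reachable from a focal atom within a given number of links."""
        frontier, seen = {focus}, {focus}
        for _ in range(hops):
            frontier = {n for a in frontier for n in a.neighbours} - seen
            seen |= frontier
        return seen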


Selecting Atoms

Clicking should let the user select atoms; holding shift should allow a rectangular region to be selected, and ctrl-click should add atoms to the existing group of selected atoms. The user should then be able to group these, such that they are represented by a single atom on the screen (as well as being able to ungroup them again).

Since this is analogous to a form of user-directed Map Encapsulation, it would be useful if the visualiser could somehow identify atom maps that have been generated by OpenCog and use these to hide and show the elements of each map.

For selections and groups made by the user, the tool should allow the user to create a new ConceptNode representing the group.
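A minimal sketch of creating such a group node, assuming the classic opencog.atomspace Python bindings; the use of MemberLink for membership and the naming of the group are choices made here, not a prescribed design.

    # Create a ConceptNode standing for the selection and link each member to it.
    from opencog.atomspace import types

    def group_selection(atomspace, selected_atoms, group_name):
        group = atomspace.add_node(types.ConceptNode, group_name)
        for atom in selected_atoms:
            atomspace.add_link(types.MemberLink, [atom, group])
        return group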

Creating and Editing Atoms

When atoms are selected, their characteristics should be editable.

Input Text and Voice

Input text or voice instantaneously creates string objects and triggers inference.


Interactive Console

Shell (aka console) commands can be instantaneously executed in-place. The shell forms around the first line of input.

An integrated interactive shell should allow the user to make AtomSpace manipulations directly. This might be better as a separate tool, or at least one acting on an OpenCog server through a separate interface (a telnet interface already exists).

Currently the thought is to have a Python-based shell (perhaps an instantiation of IPython) that runs over JSON-RPC to a CogServer. We'll assume that everything is carried out through a single CogServer and leave the question of controlling a distributed version to future versions. That doesn't mean that initial versions won't work with a distributed architecture, just that the workbench will only connect to a single CogServer at a time.
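A rough sketch of what a single call from such a shell could look like. The endpoint URL, port, and method name are placeholders; no JSON-RPC endpoint is assumed to exist in the current CogServer.

    # One JSON-RPC 2.0 call from the workbench shell to a (hypothetical) endpoint.
    import json, urllib.request

    def rpc(method, params, url="http://localhost:17034/rpc", req_id=1):
        payload = {"jsonrpc": "2.0", "id": req_id,
                   "method": method, "params": params}
        req = urllib.request.Request(url, data=json.dumps(payload).encode(),
                                     headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as response:
            return json.loads(response.read())["result"]

    # e.g. evaluate a Scheme snippet on the server (hypothetical method name)
    rpc("scheme.eval", {"expr": '(cog-new-node \'ConceptNode "cat")'})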

AtomSpace Metrics

Various statistics about the AtomSpace should be collectable.

Some very basic ideas (a sketch of collecting a couple of them follows the list):

  • Number of each atom type
  • Number of each type in the Attentional Focus
  • Number of each type that are in memory
  • Distribution of STI/LTI for a given type
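A minimal sketch of the first and last of these metrics over a plain iterable of atoms; the type_name and sti accessors are stand-ins for whatever the bindings expose.

    # Per-type census and a simple STI histogram for one atom type.
    from collections import Counter

    def type_counts(atoms):
        return Counter(a.type_name for a in atoms)

    def sti_histogram(atoms, type_name, bins=10):
        values = [a.sti for a in atoms if a.type_name == type_name]
        if not values:
            return [0] * bins
        lo, hi = min(values), max(values)
        width = (hi - lo) / bins or 1.0
        hist = [0] * bins
        for v in values:
            hist[min(bins - 1, int((v - lo) / width))] += 1
        return hist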

Indexes

What

Who

Where

When

Frequently changed Atoms

Recently changed Atoms

Keep a list of recent changes to the atomspace, a log of sorts, but allow selecting a change to zoom to the affected atom in the visualiser.
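A minimal sketch of such a log: a bounded list of change records, each carrying enough information to jump to the atom; the field names and event kinds are assumptions.

    # Bounded change log with a "zoom to atom" lookup for a selected entry.
    from collections import deque
    import time

    class ChangeLog:
        def __init__(self, maxlen=1000):
            self.entries = deque(maxlen=maxlen)    # oldest entries fall off

        def record(self, handle, kind):
            """kind: e.g. 'added', 'removed', 'tv-changed', 'av-changed'."""
            self.entries.appendleft({"time": time.time(),
                                     "handle": handle,
                                     "kind": kind})

        def zoom_target(self, index):
            """Handle of the atom behind the selected log entry."""
            return self.entries[index]["handle"]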

Packaging

  • Git, Makefile, etc.
  • Package manager packages and installers (.deb, .rpm, etc.)

Applications

Ideas for tools that may be used to inspect and/or manipulate a system's state or dynamics, or serve other purposes: