Comparison between Multiverse and RealXtend for creating Embodiment Demos


THIS PAGE IS OBSOLETE

This document compares the virtual world platforms Multiverse and RealXtend, with the purpose of choosing the one more suitable for building new demos of the OpenCog Embodiment system. Although OpenCog Embodiment is used to control intelligent agents (pets or humanoids) within a virtual world, the idea here is to use PCs (player characters) to play the roles both of themselves and of the agents controlled by the Embodiment system. This way, we intend to produce more controlled and visually appealing videos to show in business meetings.

Aspects to be compared

The following items should be compared:

  1. Set the avatar for the player to any of the 3D models used by Embodiment (i.e., woman or chihuahua).
  2. Execute any animation present in the 3D model used by the avatar and map each animation to a keyboard shortcut.
  3. Execute a few more elaborate actions composed of an animation, sound effects and another primary action (like grab, drop, throw, bite, etc.). This should preferably be done through a GUI component (like a wheel widget with menus shown when clicking on a specific target object), but it may also be done in any other way that allows quick execution of the action.
  4. Allow the PC to walk or run (or walk faster).
  5. Allow kicking or nudging objects around.
  6. Support voice chat (including usage by non-player characters).

Comparison by aspect

For each aspect below, the Multiverse assessment is given first, followed by the RealXtend one.
1. Set avatar 3D model

Multiverse: Yes. The models must be exported to an MV-compatible Ogre format (see http://update.multiverse.net/wiki/index.php/Using_the_3ds_Max_Export_Tool, http://update.multiverse.net/wiki/index.php/Exporting_Models and http://update.multiverse.net/wiki/index.php/Platform_Tutorial_Importing_Animated_Models) and the corresponding files (mesh, skeleton, textures, etc.) must be put in the proper client/sampleworld subdirectories. Besides that, several Python script files (such as character_factory.py and SampleCharacterCreation.py) must be changed to include the new model as an option (or to define specifics like attachment points, attach offsets and others).

RealXtend: Yes. The models must be exported to a ReX-compatible Ogre format (see http://wiki.realxtend.org/index.php/Exporting_Avatars_from_3D_Studio_Max) and the corresponding files (mesh, skeleton, textures, etc.) must be put in the proper RealXtend viewer's media subdirectories. Then the Avatar Generator tool must be used to select the model for a registered user.
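To give an idea of the kind of change involved on the Multiverse side, the sketch below shows how a new model could be registered in a script like character_factory.py. It is illustrative only: the actual API differs between Multiverse versions, and every name in it (AVATAR_MODELS, register_model, addModelOption, setAttachmentOffset) is a hypothetical placeholder, not a real Multiverse call.

    # Illustrative sketch only; every name below is a hypothetical
    # placeholder for the pattern described above, not the real MV API.

    # Selectable avatar models and their exported Ogre assets.
    AVATAR_MODELS = {
        "woman": {
            "mesh": "woman.mesh",
            "skeleton": "woman.skeleton",
            "attach_offsets": {"mouth": (0.0, 0.12, 0.05)},  # made-up offsets
        },
        "chihuahua": {
            "mesh": "chihuahua.mesh",
            "skeleton": "chihuahua.skeleton",
            "attach_offsets": {"mouth": (0.0, 0.03, 0.08)},
        },
    }

    def register_model(factory, name):
        """Make a model selectable at character creation (hypothetical calls)."""
        props = AVATAR_MODELS[name]
        factory.addModelOption(name, props["mesh"], props["skeleton"])
        for point, offset in props["attach_offsets"].items():
            factory.setAttachmentOffset(name, point, offset)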
2. Execute an animation by pressing a key

Multiverse: Yes, almost unrestricted. Key mapping is defined in bindings.txt and its handling is defined in bindings.xml, which calls methods from a Python script (this may send any avatar action, like /tapDance, for example).

RealXtend: One can configure gestures to be used by the avatar. A gesture may be mapped to a single Ogre 3D model animation previously uploaded from a *.skeleton file using the "File->Upload 3D model animation..." menu option. A keyboard shortcut can also be configured for a gesture, but it is restricted to the F2 to F12 keys (optionally combined with the Shift or Ctrl keys). So we have at most 33 gestures that can be activated by pressing a key.
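As a sketch of how such a key-to-animation mapping might look in the Python script called from bindings.xml: only bindings.txt, bindings.xml and the /tapDance-style commands come from Multiverse itself; the handler and the send_avatar_command helper below are hypothetical stand-ins.

    # Hypothetical handler sketch; only the idea of dispatching a /command
    # from a key press comes from Multiverse, the names here are made up.
    ANIMATION_SHORTCUTS = {
        "F5": "tapDance",
        "F6": "bow",
    }

    def send_avatar_command(command):
        # Placeholder: stands in for whatever client call actually issues
        # a chat-style avatar command such as /tapDance.
        print("sending", command)

    def on_key_pressed(key):
        """Dispatch the avatar action mapped to this key, if any."""
        action = ANIMATION_SHORTCUTS.get(key)
        if action is not None:
            send_avatar_command("/" + action)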
3. Execute elaborate actions (usually involving a target object; like grab, kick, eat and others) quickly by using special GUI components

Multiverse: I have not investigated enough about creating a GUI component like a right-click menu on the target object to execute an action on it. Anyway, one can simply select an object with a mouse click and then execute the action on it by pressing the shortcut key for that action, as explained for the previous aspect. Each existing avatar action has already been mapped to a shortcut. More actions (those usually executed by the chihuahua; e.g., back flip) must be added later, though. Also, chihuahua actions would have to be adjusted for when it acts as an avatar (the same way humanoid actions must be adjusted for when the humanoid acts as a Mob).

RealXtend: At first sight, there is no mechanism to synchronize actions, animations and other things (like sound) in ReX. Perhaps this may be possible using Linden script on the client side. More investigation on this is needed. (A sketch of what such a composite action could look like is given after the notes below.)

NOTE1: One can also add previously uploaded sounds to gestures. However, in preliminary tests they could not be played simultaneously with the animation, which makes this feature too limited. Also, even when an animation and a sound are played in sequence, there is a big lag between them. Again, more investigation on this may be needed.

NOTE2: Attachment of objects to the Ogre model is now possible using a RexViewer build (not publicly released) sent by the Rex developers. However, I was not able to use this feature for either the chihuahua or the woman model. I am not sure whether the export from 3ds Max to Ogre done for these models included the attachment points (I guess they were not exported, but I need to double-check that).
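For concreteness, a composite action of the kind discussed above (animation + sound + primary action) could be modeled as in the sketch below. This is only a hedged illustration assuming some client-side scripting layer exists; play_animation, play_sound and apply_primitive are hypothetical stand-ins for whatever the platform exposes, not real Multiverse or RealXtend calls.

    # Hypothetical sketch of a composite action; the three play_/apply_
    # helpers are placeholders, not real platform calls.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class CompositeAction:
        animation: str        # animation name in the model's skeleton
        sound: Optional[str]  # sound asset to play, if any
        primitive: str        # primary action: grab, drop, throw, bite...

    ACTIONS = {
        "eat": CompositeAction("chew", "chomp.wav", "grab"),
        "kick": CompositeAction("kick", None, "nudge"),
    }

    def play_animation(name): print("animating:", name)     # placeholder
    def play_sound(asset): print("playing:", asset)         # placeholder
    def apply_primitive(name, target): print(name, target)  # placeholder

    def execute(action_name, target):
        """Run all parts of a composite action against a selected target."""
        action = ACTIONS[action_name]
        play_animation(action.animation)
        if action.sound is not None:
            play_sound(action.sound)
        apply_primitive(action.primitive, target)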

4. Allow walking and running (or walking faster)

Multiverse: Already implemented. To run, just press Shift together with the Up or Down arrow key (this could be extended to any key used to walk forward or backward, like W and S).

RealXtend: For now, we can only walk and fly. I do not know of any way to walk faster or run yet.
5. Allow kicking or nudging objects around

Multiverse: We knew physics is not a strong aspect of Multiverse (see http://update.multiverse.net/wiki/index.php/Physics). An integration with the Ageia physics system was scheduled for Q3 (3rd quarter) 2008, but by searching for this feature on the MV site and forums I was not able to find anything concrete about it yet (see the message posted on April 27th, 2009 in the following discussion: http://update.multiverse.net/forum/viewtopic.php?t=5554&highlight=physics&theme=multiverse). So, for now, it seems no physics system can be used in MV. This way, anything that involves object movement must be specified and implemented by hand, which is a pain.

RealXtend: There is a way to apply physics to objects so that when one is hit by an avatar (or an NPC) it moves around. However, there are known issues, such as the lack of friction between an object and the floor, which makes the object move forever (with no deceleration) in the same direction.

The physics engine used by RealXtend is ODE (Open Dynamics Engine; see http://www.ode.org/).
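The missing-friction issue above corresponds to the contact friction parameter (mu) in ODE. As a minimal, self-contained illustration (using the PyODE bindings rather than anything RealXtend-specific, with made-up masses and velocities), the sketch below kicks a box across a plane; with mu set to 0.0 it slides forever, which is exactly the behavior reported above, while a nonzero mu makes it decelerate:

    # Minimal PyODE sketch (not RealXtend code): a box sliding on a plane.
    # With mu = 0.0 the box keeps its velocity forever, as described above;
    # a nonzero mu lets contact friction decelerate it.
    import ode

    world = ode.World()
    world.setGravity((0.0, -9.81, 0.0))
    space = ode.Space()
    floor = ode.GeomPlane(space, (0.0, 1.0, 0.0), 0.0)  # the plane y = 0
    contactgroup = ode.JointGroup()

    # A box resting on the floor, given an initial horizontal push ("kick").
    body = ode.Body(world)
    mass = ode.Mass()
    mass.setBox(1.0, 0.5, 0.5, 0.5)  # density, lx, ly, lz (made-up values)
    body.setMass(mass)
    body.setPosition((0.0, 0.25, 0.0))
    body.setLinearVel((2.0, 0.0, 0.0))
    geom = ode.GeomBox(space, (0.5, 0.5, 0.5))
    geom.setBody(body)

    def near_callback(args, g1, g2):
        """Create contact joints for colliding geoms, with friction mu."""
        for contact in ode.collide(g1, g2):
            contact.setMu(0.8)  # friction; 0.0 reproduces the endless slide
            joint = ode.ContactJoint(world, contactgroup, contact)
            joint.attach(g1.getBody(), g2.getBody())

    dt = 1.0 / 60.0
    for _ in range(300):  # five simulated seconds
        space.collide(None, near_callback)
        world.step(dt)
        contactgroup.empty()

    print(body.getLinearVel())  # close to zero with friction enabled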

6. Support for voice chat

Multiverse: MV (version 1.5 or later) has a voice chat system based on Speex, a free software speech codec. However, MV's voice system API (both the high- and low-level ones) does not provide any method to receive voice packets, as would be necessary for the manipulation of voice data by an OpenCog/Embodiment agent (or its Proxy component). Besides, the encoding and decoding of voice packets are done directly by the MV clients. So, it does not seem straightforward to implement this within MV's Embodiment Proxy, which currently lives in the MV servers (not at the client side).

It is possible to create/implement a customized voice group. The VoiceGroup interface provides the sendVoiceFrameToListeners method, which sends voice packets to all current listeners of the given group. However, it seems these voice packets can originate only from MV clients, not from an OpenCog/Embodiment agent living at the server side. Another thing that makes it hard to implement voice for OpenCog/Embodiment agents in Multiverse (even if the Embodiment Proxy were implemented at the client side) is that the MV client (and most of the interface between MV clients and servers) is proprietary code.
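To make the gap concrete, the mock below restates the constraint in Python. The real Multiverse interface is Java; only the VoiceGroup name and its sendVoiceFrameToListeners method come from the MV API, while the agent class and everything else are hypothetical. A server-side agent would need to originate Speex frames, but MV only lets clients do that.

    # Python mock, for illustration only. Only VoiceGroup and
    # sendVoiceFrameToListeners reflect the (Java) Multiverse API; the
    # rest is a hypothetical sketch of what a server-side agent would need.

    class VoiceGroup:
        """Mock of MV's VoiceGroup: fans voice frames out to listeners."""
        def __init__(self):
            self.listeners = []

        def sendVoiceFrameToListeners(self, frame):
            for listener in self.listeners:
                listener.receive(frame)

    class EmbodimentAgentVoice:
        """What an OpenCog/Embodiment agent at the server side would need:
        a way to *originate* Speex-encoded frames. In the real API only MV
        clients can originate frames, so this role has no counterpart."""
        def __init__(self, group):
            self.group = group

        def speak(self, speex_frames):
            # speex_frames would come from, e.g., a TTS engine plus a
            # Speex encoder; both live outside MV's server-side API.
            for frame in speex_frames:
                self.group.sendVoiceFrameToListeners(frame)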

RealXtend: There is little documentation about voice support for the current RealXtend Viewer version (0.4). However, the Rex Viewer comes with widgets for voice chat (a "talk" button and a "list active speakers" button). I do not know more details since I was not able to test that feature while connected alone.

Also, I could find the design document for ReX-NG, which mentions, in its communications section, that voice support is possible with ModRex. Nothing else was found about voice support in RealXtend, but it seems that, as in Multiverse, most voice support is done at the client side only. So the implementation of voice support for OpenCog/Embodiment agents (currently fake avatars/clients at the server side) is not supposed to be straightforward, if it is possible at all.

A good article on how OpenSim supports voice chat is available here. I do not know the current status of OpenSim voice support, though.

Conclusion

For now (May 2009), it seems to me that Multiverse is still the best platform for implementing visual demos of OpenCog's Embodiment features. The aspects compared above should continue to be evaluated, though, since many features are under development in both platforms (mainly in ReX). However, there may be some specific demos that are better implemented with RealXtend. A soccer game demo, for example, might eventually use the ODE physics engine available in RealXtend (although this would still be a challenging and laborious task).

Finally, implementing an agent (bot) with real conversation capability may require a change to the architecture of the Embodiment Proxy (and consequently to the Perception Action Interface within the Embodiment servers) so that it interfaces with the virtual world(s) as a client, not as an intrusive specialized server component (as done currently), which usually impacts performance, scalability and maintenance. In this direction, RealXtend seems to have the advantage of being fully free/open source software. Anyway, further and deeper study on this is required (see Embodiment Proxy as Client for an initial analysis, comments and rough estimates for implementing Embodiment Proxies this way in both the Multiverse and RealXtend platforms).