ReX issues related to Embodiment integration
Questions, issues and comments on aspects of integrating Embodiment into the ReX platform/virtual worlds.
Known issues and comments
Each issue or pending task is described and commented on in the next sections. The tables below summarize and categorize them as follows:
- ReX-dependent: issues whose solution depends on new features from the Rex team
- ReX-help-welcome: issues that can probably be solved without help from the Rex team, but would be solved faster with some help/support from them
- ReX-independent: issues that do not depend on any help from Rex at all
Table of the major issues:
|Issue||Status||Category||Estimated effort|
|SL/OpenSim to ReX/mesh attachment point association||not even started||Rex-dependent||unknown|
|Custom GUI widgets||not even started||Rex-dependent||unknown|
|Synchronization of animations and other operations||not even started||Rex-dependent||unknown|
|Full control for playing mesh animations||partially done||Rex-dependent||unknown|
|Object properties||workaround done||Rex-dependent (for a definite solution)||unknown|
|Object size||workaround done||Rex-dependent (for a definite solution)||unknown|
|Customized avatar bot model||works partially||Rex-help-welcome (for a definite solution)||a couple days (for NM team), a few hours (for Rex team)|
|Playing sounds||works with restrictions||Rex-help-welcome (for a definite solution)||unknown|
|Persistence of object info||works unreliably; info is sometimes lost when the world is reloaded||OpenSim-dependent (maybe a bug)||unknown|
The following table contains some less important issues:
|Issue||Status||Category||Estimated effort|
|Walk action||works with some problems||Rex-help-welcome||unknown|
|Turn action||works with some problems||Rex-help-welcome||1 day (for a hacky solution)|
|Duplication of chat messages||happens in breakpoints in debug mode or when system is too slow (overloaded)||Rex-help-welcome||unknown|
|Delay on sending avatar-action messages||to be investigated||Rex-independent||1 day|
Finally, the next table shows the pending tasks needed to get the ReX-Proxy fully functional (the way MV-Proxy is at the time of writing: October 2, 2008):
|Task||Status||Category||Estimated effort|
|Suitable scenario (like a dog park with the required tubes and fences)||stopped (by Carlos)||Rex-dependent||2 days|
|Add missing pet actions||to be done for all actions already implemented at MV||Rex-independent||1 day|
|Positioning adjustments for some implemented pet actions||to be done for actions that need special positioning before, during or after the animation (sniff, lick, eat, drink, pee, poo, jumpUp, bite, etc.)||Rex-independent||1-3 days (jumpUp and some others may be tricky; should we include nudgeTo?)|
|Change animations for different states||to be done for actions allowed in other states (e.g. wagTail and bark while sitting)||Rex-independent||1 hour|
|Exportation and integration of the custom human model||not done (it was waiting for the woman model/animations to be finished)||Rex-independent||1 day|
|Avatar actions||hacky solution implemented using chat commands (for a few actions only)||Rex-independent (for hacky solution) and Rex-dependent (for a definite solution)||1-2 days (for finishing hacky solution); unknown (for definite solution)|
|Load pet in front of the owner||partially implemented; the pet is now loaded in front of the owner, but not facing it||Rex-independent||1 hour|
|Specify and implement special object types (visible ones that are not sent via map-info and invisible ones that need to be sent via map-info), like the ones needed for representing tubes||partially implemented; currently, only objects whose type (structure, accessory or just object) is set in the description field are sent in map-info messages to PB||Rex-independent||1 day|
|Drop eventually grabbed item on pet unloading||not done||Rex-independent||2 hours|
|Allow to select different chihuahua models||not done||Rex-independent||1 day (for each new exported Chihuahua model with different texture/size, which is another task, but should not be a problem)|
|Send feedback messages to the owner's user only||not done||Rex-help-welcome||unknown|
Object properties
- For objects (i.e., SceneObjectPart/Group), we are currently using the description field of the first Prim (BTW, we are going to use only 1-Prim objects, with rex-mesh models inside). For now, reserved words are used to set/extract properties in/from that text field. Later, we can use a better-defined language to parse the properties from that field. Anyway, this is kind of a hack, of course. According to Tuomo, a more appropriate solution (i.e., proper support for custom object properties) would require changes on both the server and client sides (the latter being more complex). The current reserved words are:
- object, accessory, pet, humanoid, avatar, structure, unknown (mutually exclusive properties, for indicating object type)
- edible, drinkable, pethome, foodbowl, waterbowl (for indicating special object properties used by pet behaviors)
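The reserved-word scheme above can be sketched as follows. This is a minimal illustration in Python (the proxy itself is C#), and the function name parse_description is hypothetical, not the actual proxy code:

```python
# Illustrative sketch of extracting reserved words from a Prim's description
# field. The two sets mirror the reserved words listed above.

OBJECT_TYPES = {"object", "accessory", "pet", "humanoid",
                "avatar", "structure", "unknown"}
SPECIAL_PROPERTIES = {"edible", "drinkable", "pethome", "foodbowl", "waterbowl"}

def parse_description(description):
    """Return (object_type, special_properties) found in the description text."""
    words = set(description.lower().split())
    types = words & OBJECT_TYPES
    # The type words are mutually exclusive; anything ambiguous falls back
    # to "unknown".
    object_type = types.pop() if len(types) == 1 else "unknown"
    return object_type, words & SPECIAL_PROPERTIES

print(parse_description("foodbowl structure"))  # -> ('structure', {'foodbowl'})
```

A better-defined property language could later replace the flat word list without changing this extraction point.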
- For avatars (i.e., ScenePresence, which includes pets), I have tried to use the "about" field of the user profile, but I was not able to access it at the server side (within my RegionModule). For now, we are just mapping avatar types (player or bot) to a predefined set of properties (basically object type and size).
Object size
- For objects, we are using the scale of the first Prim (since we are going to use only 1-Prim objects, with mesh models inside). The scale of all parts may be obtained, but the overall group size depends on how they are linked to each other, which is tricky (they may even overlap each other, for example).
- For avatars, there is no such info yet. For now, we are using a predefined size for each object type (real user, for player avatars; or bot, for the pet). For pets, we could optionally set the size according to the avatar appearance it uses (from a user registered for this purpose).
Customized avatar bot model
- Partially solved. The current solution is kind of a hack. First, we need to create a Rex avatar account (following the steps at http://www.realxtend.org/get.php?id=18 and http://www.realxtend.org/get.php?id=16) using a custom model. Then, you must get the AvatarStorage address created for that account (I have created an !avatarstorage command you can enter in the chat area of RexViewer to obtain the avatar storage URL of the current user; it is also shown at the time you create the user account). Finally, this address is used to set the appearance of the ScenePresence for the bot (this is currently configurable within the EmbodimentProxy.properties file).
- The problem is that when the pet is loaded, it first appears with Jack's appearance, the default model for avatar users. Only after the bot is created is it possible to change its appearance by overriding its avatar storage address member. We could hide the bot until its appearance is changed, but I have not found a way to do that yet. Anyway, I think a definitive solution would be to pass the AvatarStorage address to the ScenePresence constructor, but this would take too much time/effort if I did it myself. So, I think the Rex team should take care of that.
Full control (at server side) for playing custom mesh animations
The Rex team has already provided a way to play custom animations through the implementation of the following method of the Scene class:
public void SendRexPlayAvatarAnimToAll(LLUUID vAgentID, string vAnimName, float vRate, bool vbStopAnim); // rex, new
This method may be called from within a RegionModule (as PVP is) and, therefore, allows playing custom mesh animations commanded by the pet brain running at the back-end or server side (at the client side, gestures are used for playing animations). However, there are several issues and missing features in the current approach, as follows:
- There should be a way to know when an animation commanded this way has finished (the OnStartAnim and OnStopAnim events do not seem to work for this case).
- There should be a way to play an animation in a loop for an indeterminate period until a stop command is sent to the clients. This is useful for specific actions that have arbitrary duration (e.g., the hiding_face animation used for playing scavenger-hunt or hide-and-seek games). In this case we need to play a start animation, then keep playing an idle animation in a loop and, finally, play an end animation to go back to the Stand (or any other state) position.
- Alternatively to the previous item, there should be a way to set a state for the bot/avatar so that it keeps executing a specific idle or walk animation (instead of the “Stand” or “Walk” it usually executes). Examples of animations that run for an indeterminate period of time are:
- grab_idle or grab_stand (when the pet/avatar is holding something)
- sit_grab_idle or grab_siting (when the pet/avatar is holding something while sitting)
- walk_sniffing (when the pet is walking and sniffing on the ground level)
- stand_hiding_face (which should go for an undetermined period of time, until a specific event happens)
- There is also no way to play just part of an animation (i.e., start_offset and end_offset arguments are missing).
- There should be a way to control the speed of each animation (the animRate argument of the method mentioned above currently has no effect).
Synchronization of animations and other operations
This feature consists basically of a way to synchronize the execution of animations and other operations within a larger pet action/behavior (like the concept of Coordinated Effects in Multiverse – see http://update.multiverse.net/wiki/index.php/Coordinated_Effects). The main example of an action that requires this is “grab”. The avatar (both the avatar controlled by a player and the one controlled by a Pet Brain at the server side) must be able to play the grab animation and, at a specific point (a frame/offset of that animation), attach an object to the right attachment point. Alternatively, we can have 2 animations for the grab action (one for before the attach operation and another for after it). Either way, we need to synchronize the execution of these animations and the attachment operation. Performing that synchronization via remote commands at the server side is a very complex (if not impossible) task. So, my suggestion is to do that using client-side scripts that could be commanded by methods at the server side. After the client-side script is executed, the server should be notified somehow (automatically or through specific messages sent back by the script itself).
For now, whenever an animation is played, a timer configured with the known animation length is used. When it expires, the corresponding animation is considered finished. Of course, this is just an ugly temporary hack. For more complex actions (that mix playing animations, sounds and other operations), only the essential operations are done; once the Rex team implements the features that allow synchronization between these operations, we should change them properly.
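The timer workaround above can be sketched like this. This is an illustrative Python sketch (the proxy is C#), and the names play_animation_with_timer, send_play_anim and on_finished are assumptions, not the actual code:

```python
import threading

# Sketch of the temporary hack: since there is no "animation finished" event,
# a timer armed with the known animation length fires the completion handler.

def play_animation_with_timer(send_play_anim, anim_name, anim_length_secs,
                              on_finished):
    """Send the play-animation command, then simply assume the animation is
    finished after anim_length_secs (the hack: no real completion callback)."""
    send_play_anim(anim_name)
    timer = threading.Timer(anim_length_secs, on_finished, args=[anim_name])
    timer.start()
    return timer  # caller may cancel() it if the action is aborted
```

The obvious weakness is that a lagging client may still be mid-animation when the timer fires, which is exactly why a server-visible completion event is requested above.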
Association of SL attachment points to Rex/mesh attachment points:
This is needed for implementing avatar/pet actions like grab, drop, throw, put down and any other that uses attachment points. According to what I heard from some Rex guys, although we can customize the appearance of an avatar by using a generic Ogre mesh (like the default Jack model), the object attachment mechanism was not adapted to this situation at all. So, when an object is attached to an attachment point (any of those available in the action wheel), it is actually attached to an invisible SL “mesh” (a default one) that runs behind the scenes. That is why the attached object keeps floating in the air when we move our custom avatar around. This is more critical when our avatar has a body very different from a human model, as is the case with our pet (a Chihuahua dog). So, there should be a way to associate an SL-like avatar model attachment point with the corresponding custom Ogre-mesh attachment point. For starters, we just need to associate a single attachment point (the mouth, for a dog model; the right hand, for a human model).
Custom GUI widgets
There should be custom GUI widgets for providing better interface for the following features:
- Showing the physiological and emotional indicators for a pet (an image of the panels implemented for Multiverse would be useful here)
- Controlling some specific avatar actions that need some interaction with objects in the world (e.g., pick up, put down, drop, throw, pet a pet, etc.). This may also be useful for commanding actions that do not require any interaction with other objects, though those may be handled by gestures or special keyboard keys instead.
Potentially the Novamente team could create these widgets, but they seem to be the sort of thing that would be relatively simple for someone with experience with the relevant client-side code, whereas the Novamente team would require a significant period of familiarization with that code before doing the work.
Playing sounds at the server-side
This feature is already implemented. Currently, the pet (a bot controlled at the server side) uses the sound assets in its owner's global inventory. The main drawback of this approach is that pets will not be able to play sounds if the owner (the real user) disconnects from the world. The following attempts were made to solve this, but they have not worked yet:
- Get the sound assets from the user account used to set the pet's appearance. This does not work because that user is not really connected to the world.
- Copy the sound assets from the owner's global inventory to the server's db as soon as it loads a pet. This way, all these sound assets would be available on the server. Unfortunately, this has not worked yet. I do not know if the problem is in the method that is supposed to make the copy or in the method that gets the assets from the cache (I bet it is the latter).
Other pet action-specific issues
- walk(target position) action:
- Sometimes the bot goes beyond the target position and then comes back to it. This happens because the current bot position is checked periodically and, depending on how loaded the system is, the bot may already have passed the target position between one check and the next. I noticed this happens more often on less powerful machines, as expected. Anyway, it should be fixed.
- Sometimes the bot makes weird movements (goes under the ground, for example) while going to a destination. I have no idea of what causes that.
- turn(finalAngle) action: the current implementation is based on the same mechanism that makes an ordinary player/avatar turn. Note that when we press the right or left arrow keys to turn our user's avatar, it does not necessarily turn the avatar, but just the camera. I found no way other than this one to change (visually) the bot's rotation. So, although its rotation value changes properly, visually the bot keeps facing the old angle. A hack for solving this would be to turn the avatar a bit past the specified angle, so that the pet visually turns to the right angle, and just after that turn it back to the specified angle, which will not result in any visible turn movement.
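The overshoot-then-correct hack for the turn action can be sketched like this. This is an illustrative Python sketch (the proxy is C#); OVERSHOOT, set_bot_rotation and turn_with_overshoot are assumed names, not the actual implementation:

```python
import math

# Sketch of the proposed hack: rotate slightly past the requested angle and
# then back, so the client actually renders the turn instead of keeping the
# bot facing the old angle.

OVERSHOOT = math.radians(5)  # small extra rotation, immediately undone

def turn_with_overshoot(set_bot_rotation, final_angle):
    """Apply the overshoot-then-correct trick for the turn(finalAngle) action."""
    set_bot_rotation(final_angle + OVERSHOOT)  # forces a visible rotation update
    set_bot_rotation(final_angle)              # settle on the requested angle
```

Whether the second call produces a visible twitch on the client would need to be tested in practice; that is why this is listed as a hacky solution.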
Common world scenario
This feature is half implemented. The world is a full 256 x 256 flat region, 25 meters above the sea. The scenario itself is placed in a 128 x 128 area surrounded by a fake tree wall. Inside it there is a dog house, some toys, food and water bowls, and objects like benches, chairs, street lamps, and trees that the dog can use to pee and poo. The scenario also has a tunnel that avatars and pets can walk through.
This new scenario is stored in the rex_embodiment_proxy repository under the scenario folder. This folder contains the database files for the region, the 1x1 region map and another folder named regions with a modified default.xml file. These files and folders should be put in the OpenSim bin folder (backing up the originals is a good idea in case something goes wrong).
If the world map does not load flat, insert the following commands in the open simulator console window:
terrain load-tile 1x1-island.r32 1 1 1000 1000
terrain fill 25
These commands will load the region map file at coordinates 1000, 1000 (not world coordinates) and then level the terrain to 25 meters above the sea.
- For some still unknown reason, object properties like name, description, etc. are getting lost when the world is unloaded and reloaded later. This information should be saved in OpenSim.db (in the default database configuration, using SQLite). So, sometimes we had to edit that db file directly using a specific db tool (BTW, we have used a trial version of the "Innovetec SQLite Manager" program).
- GLOBAL_POSITION_X, GLOBAL_POSITION_Y and PET_VISION_RADIUS should also be set in the PetaverseProxy.properties file to 128, 0 and 64 respectively (since two opposite corners of the area are at coordinates [128,0] and [256,128]).
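Assuming the usual key = value syntax of the other properties files mentioned in this document, the corresponding PetaverseProxy.properties fragment would look like:

```
GLOBAL_POSITION_X = 128
GLOBAL_POSITION_Y = 0
PET_VISION_RADIUS = 64
```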
Additional comments and troubleshooting
While prototyping the Embodiment Proxy within the Rex platform, I had to deal with several practical issues. Here are some tips:
- When RexViewer cannot connect to the server due to the error: "Login failed. Could not authenticate user. Please check your username and password", make sure the following parameters are set properly in OpenSim.ini:
rex_mode = True
rex_authentication = True
- When you cannot run rexserver/simulator/bin/PreBuild.exe, or when RexServer (the simulator) throws an exception (System.Reflection.ReflectionTypeLoadException) at startup, this may be due to wrong permissions on the executable and dynamic library files (e.g., when they are fetched from the svn repository). You must give execute permission to all *.dll and *.exe files in the simulator's bin folder.
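The permission fix above is usually a one-liner in the shell; as a sketch, a small hypothetical Python helper (not part of the repository) doing the same thing could look like:

```python
import os

# Restore the execute bit on all *.dll and *.exe files under the simulator's
# bin folder, as described above. Returns the list of files it touched.

def make_binaries_executable(bin_dir):
    fixed = []
    for root, _dirs, files in os.walk(bin_dir):
        for name in files:
            if name.lower().endswith((".dll", ".exe")):
                path = os.path.join(root, name)
                # Add rwxr-xr-x bits on top of whatever mode is already set.
                os.chmod(path, os.stat(path).st_mode | 0o755)
                fixed.append(path)
    return fixed
```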
- Physics not working: in the OpenSim.ini file, set the physics parameter to OpenDynamicsEngine, as follows:
#physics = basicphysics
physics = OpenDynamicsEngine
- Asset library: there is an asset library of Rex-compatible meshes available for download at http://www.realxtend.org/get.php?id=6 (realXtend Asset Library 0.2, 25 MB). Just use RexViewer's "File->Upload 3d Model..." menu item to put them in your inventory; then, you can add them to a Prim using the Rex tab of the object properties (when creating/editing an object).
- Changing the log level: in the rex simulator bin directory, open OpenSim.exe.config and change the root log level, as follows:
<root>
  <level value="INFO" />
  <appender-ref ref="Console" />
  <appender-ref ref="LogFileAppender" />
</root>