Embodiment Automated System Tests
The Automated System Tests were originally implemented to let us check, periodically and automatically, whether the system works for a set of specific test scenarios. The idea is to perform manual (usually visual) tests with the system by logging into the Simulated World as a final user (or more than one user, if the test scenario requires it) and performing the actions that lead to the specific test scenario. During the manual test, a component is in charge of generating a set of data that can be used later to reproduce the same test scenario, including the data expected as a response from the system. We call this set of data the Golden Standard.
Unfortunately, the Embodiment system is not really deterministic, since it depends on several external factors (network lag, CPU load, Linux process scheduling, etc.). This makes it hard to validate the tests by checking whether the sequence of messages the system sends and receives during the automated test is equal to the one generated by the manual test that produced the golden standard. Nevertheless, this mechanism has proven valuable for reproducing scenarios for debugging, so we decided to keep it, even though the automated system tests were suspended until a better approach is implemented.
Since the golden standards are used for testing a system, no component of that system should be used to generate them. So, we need to define the system to be tested. In the case of Embodiment, we define the system as the set of processes needed to run the Avatar brain(s), i.e., the Router, the Spawner, the LS and the OAC processes.
This way, the Embodiment Proxy is not defined as part of the system, but as an adapter for connecting the Pet Brain(s) to a specific Virtual World. Currently we have 3 types of Embodiment Proxy, as follows:
- PVPSimulator (which uses AGISimSim on the backend and may be obsolete)
- Multiverse-Proxy (MV-Proxy - the main one)
- RealXtend-Proxy (ReX-Proxy -- which is still quite incomplete)
So, the code for generating the golden standards for automated tests is added to these Proxies, which are implemented in C++, Java and C#, respectively.
Golden Standard Contents
For Embodiment, the golden standards are text files composed simply of a sequence of timestamped messages sent and received between the Proxy and the Embodiment system. As an example, a golden standard file can be generated in a simple session where a pet is loaded, performs a few actions, and is then unloaded.
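Since the golden standard is just a timestamped message trace, reading it back is a simple parsing task. The sketch below assumes a hypothetical line layout (timestamp, direction tag, payload, separated by spaces); the real file format is not reproduced on this page, so all field names and tags here are assumptions.

```python
# Hypothetical sketch of parsing golden standard entries. The assumed
# layout is: "<timestamp> <direction> <payload>", with direction being
# a SENT/RECEIVED tag; the real format may differ.
from typing import NamedTuple

class Entry(NamedTuple):
    timestamp: float
    direction: str   # "SENT" or "RECEIVED" (assumed tags)
    payload: str

def parse_line(line: str) -> Entry:
    """Split one golden standard line into its three assumed fields."""
    ts, direction, payload = line.split(" ", 2)
    return Entry(float(ts), direction, payload)

sample = "1208275200.125 RECEIVED pet-loaded"
entry = parse_line(sample)
```

Keeping the timestamp as a separate field matters later: as explained below, the automated comparison ignores timestamps and looks only at order and contents.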
Golden Standard Generation
For each test scenario, one golden standard file must be generated. This is a manual procedure, which means that a person must run the system and check the success of each test scenario manually while generating the golden standard files. In summary, the following steps are needed in this phase:
- Configure the system and proxy parameters for tests properly, i.e., edit the system's configuration file and the Proxy's configuration file and set the required parameters.
- Start the system (i.e., run pb.sh)
- Start the proxy (MV-Proxy, ReX-Proxy or PVPSim)
- Run the test scenario (connect the avatar(s) to the world, load the pet(s) and perform the avatar actions that lead to the test scenario). The success of the test should be checked manually by the user/tester.
- If the test passes (visually in the Simulated World and with no errors in the embodiment log files), simply stop the Proxy, so that it stops generating entries for the golden standard file. If the test fails, the generated golden standard is not really "golden"; in this case, the code should be fixed until the test passes.
- Then, stop the system (using killpb.sh), since it has already been tested. You can optionally save the generated logs as well (just move the /tmp/$USER/Petaverse folder to a temporary place) so that you can inspect them later, if needed.
At this point you already have the golden standard for a given test scenario. See how to check whether it is OK in the next section. This procedure must be repeated for each test scenario.
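The optional log-saving step above (moving the Petaverse folder to a temporary place) can be sketched as a small helper. This is an illustrative sketch only; the `/tmp/$USER/Petaverse` source path follows the convention mentioned on this page, while the destination layout and the helper name are assumptions.

```python
# Sketch: archive the generated Embodiment logs after a test session,
# following the /tmp/$USER/Petaverse convention used on this page.
# The destination layout is an assumption for illustration.
import getpass
import shutil
import time
from pathlib import Path

def archive_logs(base="/tmp", dest_root="/tmp/saved-logs"):
    """Move the Petaverse log folder to a timestamped backup location."""
    src = Path(base) / getpass.getuser() / "Petaverse"
    dest = Path(dest_root) / time.strftime("Petaverse-%Y%m%d-%H%M%S")
    dest.parent.mkdir(parents=True, exist_ok=True)
    shutil.move(str(src), str(dest))  # whole-folder move, as in the manual step
    return dest
```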
Checking a Golden Standard
After generating a Golden Standard, it may be included in the list of golden standards used for periodic and automatic system tests. However, before committing the new golden standard files to the project repository, they should be tested manually (as done for unit tests). For each test scenario, we need to:
- Restart the system (pb.sh). Make sure the AUTOMATED_SYSTEM_TESTS and UNREAD_MESSAGES_RETRIEVAL_LIMIT parameters are set to 1 in the embodiment.conf file.
- Run the system tester (pbTester), passing the golden standard file name as argument
- Wait for the pbTester program to finish. It should stop if any error happens (an unexpected message from the system or a timeout while waiting for a message from the system)
- Check the /tmp/$USER/Petaverse/Logs/PROXY log file: it should contain no error messages and should end with the following statement: "Automated test passed".
- If everything went fine, the new golden standard file is ok to be included in the automated tests. Otherwise, the problem should be investigated and fixed.
What is actually checked?
One must be aware that checking the validity of a golden standard is not that straightforward, because running a scenario involves the execution and interaction of separate components, possibly in different environments (distributed or not, with different OS and scheduler policies), and is therefore potentially non-reproducible and non-deterministic. Using our own random generator across the project and seeding it identically for the generated and the tested scenario has been a way to reduce that unpredictability, but it is far from enough to eliminate it; the whole system remains non-deterministic. For this reason, when pbTester checks a golden standard against the actual results, it does not check the timestamps of the sent and received messages, but only their order and contents. Eventually, a higher-level comparison between message sequences should be implemented as well, but that is quite complex.
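The comparison described above (order and contents, timestamps ignored) can be sketched as a minimal function. The tuple representation of a message entry is an assumption; the real pbTester comparison works on its own message objects.

```python
# Minimal sketch of the comparison pbTester is described as doing:
# strip the timestamp from each entry, then require that the remaining
# (direction, payload) sequences match exactly, in the same order.
def sequences_match(golden, actual):
    """Compare (timestamp, direction, payload) lists, ignoring timestamps."""
    strip = lambda seq: [(direction, payload) for (_ts, direction, payload) in seq]
    return strip(golden) == strip(actual)

golden = [(1.00, "SENT", "load-pet"), (1.50, "RECEIVED", "pet-loaded")]
actual = [(7.25, "SENT", "load-pet"), (9.10, "RECEIVED", "pet-loaded")]
```

Here `golden` and `actual` differ only in timestamps, so they would be considered a match; any difference in order or payload would fail the test.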
Schedule of Periodic Automated System Tests
A script was created to run the system test for a set of test scenarios (i.e., each one of the golden standard files we generated while running the system tests manually). It was scheduled to run every night rather than for every single committed/pushed revision, because these tests usually take much longer than unit tests. Unfortunately, as already mentioned in the introduction, this was not pragmatic, since the tests failed whenever minor changes were applied to the system, and generating new tests was a pain.
Golden Standard files
The golden standard files used to perform system tests are located inside the directory opencog/embodiment/AutomatedSystemTest/GoldenStandardFiles. They are named as follows: gsfile_0.txt, gsfile_1.txt, gsfile_2.txt and so forth. In the same folder, there is a README.txt that describes the scenario each golden standard file represents and any other relevant information about how the golden standards were generated (the opencog and proxy revisions used in the test, special values for configuration parameters used during the test, etc.). So, if someone wants to include a test scenario in the automated system tests, they must add the generated golden standard file to this folder and describe in the README.txt file what the scenario is and how to reproduce it manually. Also, the new golden standard file must be added to the list of files to be distributed in the scripts/embodiment/makeDistribution script file.
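As a small illustration of the gsfile_N.txt naming convention above, the next free index can be computed mechanically when adding a new scenario. The helper name is an assumption; the file-name pattern is the one documented here.

```python
# Sketch: pick the next file name in the gsfile_<N>.txt sequence
# when adding a new golden standard to the folder.
import re

def next_gsfile_name(existing):
    """Return the next unused name following the gsfile_<N>.txt pattern."""
    indices = [int(m.group(1)) for name in existing
               if (m := re.fullmatch(r"gsfile_(\d+)\.txt", name))]
    return "gsfile_%d.txt" % (max(indices) + 1 if indices else 0)
```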
Running System Test Script
When the makeDistribution script (scripts/embodiment/makeDistribution) is executed with the "dev" option, or with no third argument at all ("dev" is the default), i.e., ./makeDistribution bin TEST_DIR "dev" or ./makeDistribution bin TEST_DIR, it copies all Golden Standard Files and the run_system_test.sh script to the distribution directory. To execute the run_system_test.sh script, which will run pbTester with all Golden Standard Files, you must:
- Change the connection ports for ROUTER, SPAWNER, LS and OACs (MIN_OAC_PORT and MAX_OAC_PORT) in embodiment.conf file;
- Change the connection port of PROXY in test.cfg file.
After embodiment.conf and test.cfg are configured, you can type ./run_system_test.sh to execute the system tests. If any test fails during execution, the script will copy the log files to the /tmp/$USER/SystemTest directory. The script does not stop when a test fails; it stops only after all files have finished executing.
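The control flow of run_system_test.sh as described above (run every golden standard file, keep going on failure, report at the end) can be sketched as follows. The `run_one` callback stands in for the real pbTester invocation and is an assumption for illustration.

```python
# Sketch of the described run_system_test.sh behaviour: run the tester
# on every golden standard file, do not stop on failure, and collect
# the failing files. run_one(path) -> bool stands in for pbTester.
def run_all(gs_files, run_one):
    """Run every golden standard file; return the list of failures."""
    failed = []
    for path in gs_files:
        if not run_one(path):
            failed.append(path)  # the real script also copies logs aside here
    return failed
```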
-- Main.WelterLuigi - 15 Apr 2008