- Tests are run using a test framework called Avocado. The version known to work with the existing tests is 52; if you install it with yum on Boro, that is the version you should get.
- You need to install it on all the cluster nodes that are going to be used to run tests.
- Specifically:
- yum install python2-avocado.noarch
- yum install python2-avocado-plugins-output-html.noarch
- yum install python2-avocado-plugins-varianter-yaml-to-mux.noarch
- yum install python2-aexpect.noarch
- yum install python2-pip
- pip install gitpython (only needed for IOR build)
- pip install pathlib
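- If a parallel shell tool such as clush happens to be available on your cluster (this is only a suggestion, and the node names below are placeholders), the same packages can be installed on all the test nodes in one pass, for example:
clush -w boro-17,boro-18 'sudo yum install -y python2-avocado.noarch python2-avocado-plugins-output-html.noarch python2-avocado-plugins-varianter-yaml-to-mux.noarch python2-aexpect.noarch python2-pip'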
- DAOS should be built normally, following the instructions in the quickstart.md file.
- Tests are in myrepo/src/tests/ftest. At the top level is launch.py, which simplifies running the DAOS tests with avocado. Sub-directories contain categories of tests, e.g. the pool directory contains tests relating to pools.
- A given group of tests is implemented as a combination of a Python file (.py) and a yaml file (.yaml). For example, in the pool directory there is a SimpleCreateDeleteTest.py and a SimpleCreateDeleteTest.yaml. The Python file contains the code that drives the test, and the yaml file contains the test parameters. A single function in the Python file can execute a large number of test cases because it is run for different combinations of inputs as found in the yaml file.
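- The sketch below is illustrative only (the section names and values are made up, not copied from the repository), but it shows the general shape: the yaml file supplies parameter values, the yaml-to-mux varianter expands any !mux sections into separate variants, and the same test method is then run once per resulting combination.
# illustrative yaml sketch, not an actual test file
createtests:
    createmode: !mux
        readwrite:
            mode: 511
        readonly:
            mode: 292
    createsize:
        size: 1073741824
- In the Python file each value is read with avocado's parameter API, e.g. something along the lines of self.params.get("mode", "/run/createtests/createmode/*"); the exact keys and paths differ for each test.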
- The way the tests and servers are launched is not compatible with orterun, so they are launched in single-client mode. This requires setting some extra environment variables. Listed here are all the environment variables I use, including the extras for single-client mode.
DAOS_BASE=/home/my-home-dir/daos_m
export CRT_ATTACH_INFO_PATH=$DAOS_BASE/install/tmp
export DAOS_SINGLETON_CLI=1
export CRT_CTX_SHARE_ADDR=1
export CRT_PHY_ADDR_STR="ofi+sockets"
export ABT_ENV_MAX_NUM_XSTREAMS=64
export ABT_MAX_NUM_XSTREAMS=64
export OFI_INTERFACE=ib0
export D_LOG_FILE=$DAOS_BASE/install/tmp/daos.log
export D_LOG_MASK=DEBUG,RPC=ERR,MEM=ERR
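- A convenient way to manage these (purely a suggestion) is to keep them in a small shell file and source it in each session before launching tests; the file name below is only an example:
source ~/daos_test_env.sh
mkdir -p $DAOS_BASE/install/tmp             # make sure the directory used for attach info and logs exists
env | grep -E 'DAOS|CRT|ABT|OFI|D_LOG'      # quick sanity check that the variables took effect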
- Before running a test you must identify the cluster nodes that are to be used. To do this, edit the yaml file for the test you want to run (src/tests/ftest/*/*.yaml). In the yaml file you will see placeholders for machine names, e.g. boro-A. If the test requires more than one host you will see boro-A, boro-B, etc. using the yaml array syntax (each array item begins with a dash). Edit these names, replacing the A, B, … with real cluster nodes you have reserved, e.g. boro-17. You can replace the whole name as well, e.g. boro-A becomes wolf-18. Depending on how many tests you want to run, you may need to edit a number of the yaml files.
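- As an illustration (the key name and host names here are made up; the actual layout varies per test yaml file), a host list might change from the first form to the second:
hosts:
    - boro-A
    - boro-B
hosts:
    - boro-17
    - boro-18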
- Tests are started with the launch.py script in the myrepo/src/tests/ftest directory along with a test tag. Groups of tests are identified by tags. Using SimpleCreateDeleteTest as an example again, the tests in this file are tagged with 'simplecreate'. So to run these tests you would enter: ./launch.py simplecreate. Tests are given multiple tags of increasing specificity. So SimpleCreateDeleteTest includes the simplecreate tag (most specific) but also the pool tag (least specific). The simplecreate tag is presently associated with dozens of test cases; running the pool tag would run all the pool test cases, currently around 1000. If you were to make the tragic mistake of specifying the all tag, the tests would run for at least a day.
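- For example, assuming a default avocado configuration (the results location shown is the framework default and may differ if you have changed it), a run might look like:
cd myrepo/src/tests/ftest
./launch.py simplecreate      # just the simplecreate-tagged cases
./launch.py pool              # every pool test case, much longer
# results, including the HTML report, are written under ~/avocado/job-results/ by default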
- If you are running a test to reproduce a defect, the writer of the defect will provide the name of the yaml file to edit and the tag to use when running the test.