Using launch.py

The launch.py python script is used by CI to detect and run the functional tests with the avocado test framework. Tests to run or list can be specified either by the path to the python test script relative to the ftest directory (e.g. ./ior/ior_small.py) or by one or more avocado test tags (e.g. ior).

Selecting tests (tags)

One or more test methods in one or more python test files can be selected by using avocado test tags. See Test Tags for details and examples.
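
For example, the matching tests can be listed without being run (a quick sketch reusing the tag and path examples from the introduction, together with the --list option described below):

$ ./launch.py -l ior
$ ./launch.py -l ./ior/ior_small.py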

Listing tests

The --list / -l command line option is used to list any tests that match the tests/tags specified. Manually this can be useful for verifying the use of tags in tests or checking tags before using in a commit pragma.

$ ./launch.py -l datamover
Arguments: Namespace(archive=False, clean=False, discard=False, failfast=False, include_localhost=False, insecure_mode=False, jenkinslog=False, list=True, logs_threshold=None, mode='normal', modify=False, nvme=None, process_cores=False, rename=False, repeat=1, sparse=False, tags=['datamover'], test_clients=None, test_servers=None, verbose=0, yaml_directory=None)
Using PYTHONPATH=/usr/lib/daos/TESTING/ftest/util/apricot:/usr/lib/daos/TESTING/ftest/util:/usr/lib/daos/TESTING/ftest/cart/util:/usr/local/lib64/python3.6/site-packages:/usr/local/lib/python3.6/site-packages:/usr/lib64/python3.6/site-packages:/usr/lib/python3.6/site-packages
Running: fault_status
Total test time: 0s
Running avocado -v
Running with Avocado 69.3
Running avocado list --paginator=off --filter-by-tags=datamover ./
Detected tests:
  ./datamover/dm_large_dir.py
  ./datamover/dm_obj_small.py
  ./datamover/dm_dst_create.py
  ./datamover/dm_posix_types.py
  ./datamover/dm_posix_symlinks.py
  ./datamover/dm_posix_subsets.py
  ./datamover/dm_serial_large_posix.py
  ./datamover/dm_large_file.py
  ./datamover/dm_copy_procs.py
  ./datamover/dm_negative.py
  ./datamover/dm_obj_large_posix.py
  ./datamover/dm_serial_small.py
  ./datamover/dm_posix_meta_entry.py

Test yaml replacements

To allow test portability, certain test requirements are specified in the test yaml with placeholders that launch.py can replace with real values. Launch.py creates a temporary copy of the test yaml file with the modifications applied for the test execution. Currently supported placeholders are:

Category                   Test yaml keyword   launch.py argument

Nodes running servers      test_servers        --test_servers / -ts
Nodes running agents       test_clients        --test_clients / -tc
Server NVMe tier storage   bdev_list           --nvme / -n

The values for the --test_servers and --test_clients arguments can be specified as a list (e.g. "wolf-1,wolf-2,wolf-5") or as a range (e.g. "wolf-[1-2,5]").
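
For example, the following two invocations pass the same three server nodes (hostnames are illustrative):

$ ./launch.py -ts wolf-1,wolf-2,wolf-5 <tags>
$ ./launch.py -ts wolf-[1-2,5] <tags>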

If the --test_clients launch.py argument is not specified, any nodes in the --test_servers argument that have not already been used to replace "test_servers" test yaml entries will be used to replace the "test_clients" test yaml entries.

Optionally, the --discard / -d launch.py argument can be used to remove any placeholders that do not receive a replacement (without this option, unreplaced placeholders produce an error). Verification that the test can be run with the reduced node set is still required.
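
As a sketch, the following run fills the test_servers placeholders from a single node and discards any remaining unfilled placeholders (hostname and tags are illustrative):

$ ./launch.py -d -ts wolf-1 <tags>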

Log and Core File Collection

In CI, launch.py is run with the --archive / -a and --process_cores / -p arguments to collect DAOS log files and any core files generated during the test, respectively. These options can also be used in manual execution, in which case the results are stored on the local host in sub-directories of the ~/avocado/job-results/latest/ directory.

Another useful argument to combine with this feature is --clean / -c, which removes any existing logs before the test runs so that the archived logs are relevant to the test execution.
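
For example, a manual run that cleans old logs first (-c), then archives the DAOS logs (-a) and processes core files (-p) after the test (hostnames and tags are illustrative):

$ ./launch.py -cap -ts wolf-[1-2] <tags>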

Running Tests

In CI, launch.py is run using the -jcrispa, -th 1G, and -ts <node_list> arguments, plus (on HW clusters only) --nvme=auto:Optane. For manual execution it is recommended to:

  • exclude the --jenkinslog / -j argument, since you will typically be running the same test multiple times and will want to retain the default avocado timestamped directory names to keep the results unique. Retain the --rename / -r argument, however, to make test results easier to find.

  • use the -c, -p, and -a arguments in order to archive DAOS logs and core files

  • use --include_localhost / -i to run clients on the local host when the test does not explicitly specify a client

The --sparse / -s argument is a personal preference. Without it, the entire test job.log output is included in the launch.py command output. It is generally advisable to use it when running multiple test methods/variants.

A typical manual run will look like this:

cd <path_to_ftest>
./launch.py -crispa -ts <server_nodes> -tc <client_nodes> -n auto <tags>

Repeating Tests

Execution of all the specified tests can be repeated by using the --repeat / -re argument. Repeated tests will have their test results stored in a numbered sub-directory.
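
For example, a sketch that runs the selected tests three times, producing results in numbered sub-directories:

$ ./launch.py --repeat 3 -ts <server_nodes> <tags>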

Environment Variables

Launch.py uses/modifies the following environment variables when running tests. Typically no changes are needed when running from an RPM install out of the /usr/lib/daos/TESTING/ftest directory, but other test environments may require adjustments.

Environment Variable   Comments

DAOS_TEST_LOG_DIR      Common directory used on all hosts for temporary
                       files. Defaults to /var/tmp/daos_testing.

DAOS_INSECURE_MODE     Defines the default setting for server and agent
                       insecure mode. This value is set by the
                       --insecure_mode / -ins launch.py argument.

OFI_INTERFACE          If not already set, it will be set to the fastest
                       active interface on the host executing launch.py.

PYTHONPATH             Launch.py extends the python path to include the
                       following paths if they are not already included
                       in the definition: util/apricot, util, and
                       cart/util.

PATH                   The ../../.build_vars.json file is read to prepend
                       the PREFIX/bin, PREFIX/sbin, and PREFIX/usr/bin
                       paths to PATH.
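
For example, the temporary file directory can be overridden by exporting the variable before launching (a sketch; the path is hypothetical, and this assumes a pre-set value is honored since DAOS_TEST_LOG_DIR is described as a default):

$ export DAOS_TEST_LOG_DIR=/scratch/daos_testing
$ ./launch.py -ts <server_nodes> <tags>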