
The launch.py Python script is used by CI to detect and run the functional tests using the avocado test framework. Tests to run or list can be specified either by their ftest-relative path to the Python test script (e.g. ./ior/ior_small.py) or by one or more avocado test tags (e.g. ior).

Selecting tests (tags)

One or more test methods in one or more Python test files can be selected using avocado test tags. Tags can be combined to further filter tests with logical AND/OR operations, and a tag can be excluded by preceding it with a '-'. Specifying tags in a comma-separated list (without spaces) acts like an AND, requiring a test to have all of the specified tags. For example, “pr,-hw” (used by default in CI for Functional VM stages) runs any “pr” tagged test that does not also have a “hw” tag. Specifying tags in a space-separated list acts like an OR, selecting tests that have any of the specified tags. For example, “datamover ior” will run any test with a “datamover” tag or an “ior” tag. These two specification styles can be combined, e.g. “pr,hw,large multicontainerdelete”.
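For example, both styles can be previewed with the --list option described below (the tags here are illustrative):

./launch.py -l pr,-hw          # tests tagged “pr” but not “hw” (AND with exclusion)
./launch.py -l datamover ior   # tests tagged “datamover” or “ior” (OR)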

Listing tests

The --list / -l command line option lists any tests that match the specified tests/tags. When run manually, this is useful for verifying the tags applied to tests or for checking tags before using them in a commit pragma.

$ ./launch.py -l datamover
Arguments: Namespace(archive=False, clean=False, discard=False, failfast=False, include_localhost=False, insecure_mode=False, jenkinslog=False, list=True, logs_threshold=None, mode='normal', modify=False, nvme=None, process_cores=False, rename=False, repeat=1, sparse=False, tags=['datamover'], test_clients=None, test_servers=None, verbose=0, yaml_directory=None)
Using PYTHONPATH=/usr/lib/daos/TESTING/ftest/util/apricot:/usr/lib/daos/TESTING/ftest/util:/usr/lib/daos/TESTING/ftest/cart/util:/usr/local/lib64/python3.6/site-packages:/usr/local/lib/python3.6/site-packages:/usr/lib64/python3.6/site-packages:/usr/lib/python3.6/site-packages
Running: fault_status
Total test time: 0s
Running avocado -v
Running with Avocado 69.3
Running avocado list --paginator=off --filter-by-tags=datamover ./
Detected tests:
./datamover/dm_large_dir.py
./datamover/dm_obj_small.py
./datamover/dm_dst_create.py
./datamover/dm_posix_types.py
./datamover/dm_posix_symlinks.py
./datamover/dm_posix_subsets.py
./datamover/dm_serial_large_posix.py
./datamover/dm_large_file.py
./datamover/dm_copy_procs.py
./datamover/dm_negative.py
./datamover/dm_obj_large_posix.py
./datamover/dm_serial_small.py
./datamover/dm_posix_meta_entry.py

Test yaml replacements

To allow test portability, certain test requirements are specified in the test yaml with placeholders that launch.py can replace with real values. Launch.py creates a temporary copy of the test yaml file with these modifications applied for the test execution. The currently supported placeholders are:

Category                 | Test yaml keyword | launch.py argument
-------------------------|-------------------|----------------------
Nodes running servers    | test_servers      | --test_servers / -ts
Nodes running agents     | test_clients      | --test_clients / -tc
Server NVMe tier storage | bdev_list         | --nvme / -n

The values for the --test_servers and --test_clients arguments can be specified either as a comma-separated list (e.g. “wolf-1,wolf-2,wolf-5”) or in bracketed range notation (e.g. “wolf-[1-2,5]”).

If the --test_clients launch.py argument is not specified, any nodes from the --test_servers argument that have not already been used to replace “test_servers” test yaml entries will be used to replace the “test_clients” test yaml entries.
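For example, the two invocations below select the same hosts (the node names are illustrative):

./launch.py -ts wolf-1,wolf-2,wolf-5 -tc wolf-6,wolf-7 <tags>
./launch.py -ts wolf-[1-2,5] -tc wolf-[6-7] <tags>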

Optionally, the --discard / -d launch.py argument can be used to remove any placeholders that do not end up with replacements (which would normally produce an error). When using this option, verify that the test can actually run with the reduced node set.

To view the modified test yaml after the placeholder replacements have been made, use the --modify / -m launch.py argument. Adding the --verbose / -v argument is also useful in this situation.

The modified test yaml file can be saved by using the optional --yaml_directory / -y launch.py argument to specify a directory in which to save the file.
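For example, a command along these lines (the directory and host names are illustrative) shows the placeholder replacements and saves the modified yaml under /tmp/yaml:

./launch.py -m -v -y /tmp/yaml -ts wolf-[1-2] -tc wolf-3 <tags>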

Log and Core File Collection

In CI, launch.py is run with the --archive / -a and --process_cores / -p arguments to collect DAOS log files and any core files generated during the test, respectively. These options can also be used in manual execution, in which case the results are stored on the local host in sub-directories of the ~/avocado/job-results/latest/ directory.

Another useful argument to combine with this feature is --clean / -c. It ensures any existing logs are removed prior to running the test, so the archived logs are relevant to the test execution.

Use of the --archive argument requires the installation of the daos-tests RPM on all nodes.

In order to use --process_cores / -p, the core files need to be located in /var/tmp/ on each host. Use the following command on each host to ensure the core files are written to this location:

echo "/var/tmp/core.%e.%t.%p" > /proc/sys/kernel/core_pattern

Running Tests

In CI, launch.py is run with the -jcrispa, -th 1G, and -ts <node_list> arguments, plus (on HW clusters only) --nvme=auto:Optane. For manual execution it is recommended to:

  • exclude the --jenkinslog / -j argument, since you will typically be running the same test multiple times and will want to retain the default avocado timestamped directory names to keep the results unique. Retain the --rename / -r argument, however, to make test results easier to find.

  • use the -c, -p, and -a arguments in order to archive DAOS logs and core files

  • use --include_localhost / -i to run clients on the local host when the test does not explicitly specify a client

The --sparse / -s argument is a personal preference. Without it, the entire test job.log output will be included in the launch.py command output. It is generally advisable to use it when running multiple test methods/variants.

A typical manual run will look like this:

cd <path_to_ftest>
./launch.py -crispa -ts <server_nodes> -tc <client_nodes> -n auto <tags>

Repeating Tests

Execution of all the specified tests can be repeated by using the --repeat / -re argument. Repeated tests will have their test results stored in a numbered sub-directory.
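For example, this (illustrative) command repeats the selected tests three times:

./launch.py -crispa -ts <server_nodes> -tc <client_nodes> --repeat 3 <tags>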

Environment Variables

Launch.py uses/modifies the following environment variables when running tests. Typically no changes are needed when running from an RPM install out of the /usr/lib/daos/TESTING/ftest directory, but other test environments may require adjustments.

Environment Variable | Comments
---------------------|---------
DAOS_TEST_LOG_DIR    | Common directory used on all hosts for temporary files. Defaults to /var/tmp/daos_testing.
DAOS_INSECURE_MODE   | Defines the default setting for server and agent insecure mode. This value is set by the --insecure_mode / -ins launch.py argument.
OFI_INTERFACE        | If not already set, it will be set to the fastest active interface on the host executing launch.py.
PYTHONPATH           | Launch.py extends the Python path to include the util/apricot, util, and cart/util paths if they are not already included in the definition.
PATH                 | The ../../.build_vars.json file is read to prepend the PREFIX/bin, PREFIX/sbin, and PREFIX/usr/bin paths to PATH.

When running tests on a set of non-homogeneous nodes, the OFI_INTERFACE environment variable may need to be manually set to a common active interface (e.g. “eth0”) before running launch.py.
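For example (the interface name is illustrative):

export OFI_INTERFACE=eth0
./launch.py -crispa -ts <server_nodes> -tc <client_nodes> <tags>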
