IO-500 ISC24
Notes
Changes in this version from the one before include:
Updating build instructions and the io-500 hash
Pre-requisites
DAOS - See Hardware Requirements - DAOS v2.6 for installation and setup instructions
MPI - any version / implementation
clush - See https://clustershell.readthedocs.io/en/latest/install.html for installation
Alternatives are possible, though examples are not provided in these instructions.
Build Paths
These instructions assume the following paths. For simplicity, you can set these variables to the actual locations where you have/want these installed.
After setting these variables, most of the scripts can be "copy-pasted".
MY_DAOS_INSTALL_PATH=${HOME}/install/daos
MY_IO500_PATH=${HOME}/io500
Clone and Build IO-500
Clone the IO-500 repo
git clone https://github.com/IO500/io500.git -b io500-isc24 "${MY_IO500_PATH}" &&
cd "${MY_IO500_PATH}"
Edit prepare.sh to:
Point to the new pfind branch
Build ior with DFS support
Assuming MY_DAOS_INSTALL_PATH is set, you can run:
cat << EOF > io500_prepare.patch
diff --git a/prepare.sh b/prepare.sh
index e38cae6..54dbba5 100755
--- a/prepare.sh
+++ b/prepare.sh
@@ -8,7 +8,7 @@ echo It will output OK at the end if builds succeed
echo
IOR_HASH=bbfea005e2c05726da07171dfe3dfdd6c6011e14
-PFIND_HASH=778dca8
+PFIND_HASH=dfs_find
INSTALL_DIR=\$PWD
BIN=\$INSTALL_DIR/bin
@@ -59,7 +59,7 @@ function get_ior {
function get_pfind {
echo "Preparing parallel find"
- git_co https://github.com/VI4IO/pfind.git pfind \$PFIND_HASH
+ git_co https://github.com/mchaarawi/pfind pfind \$PFIND_HASH
}
function get_schema_tools {
@@ -73,7 +73,7 @@ function build_ior {
pushd "\$BUILD"/ior
./bootstrap
# Add here extra flags
- ./configure --prefix="\$INSTALL_DIR"
+ ./configure --prefix="\$INSTALL_DIR" --with-daos=${MY_DAOS_INSTALL_PATH}
cd src
\$MAKE clean
\$MAKE install
EOF
git apply io500_prepare.patch
Update the Makefile with correct paths
The Makefile needs to be updated to use the actual install location of DAOS.
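With MY_DAOS_INSTALL_PATH set, one possible sketch uses sed; this assumes the io500 Makefile exposes CFLAGS and LDFLAGS variables, so check your checkout and adjust the variable names and the library list as needed:
# Append the DAOS include and library paths to the build flags (variable names assumed)
sed -i "/^CFLAGS/s|$| -I${MY_DAOS_INSTALL_PATH}/include|" Makefile
sed -i "/^LDFLAGS/s|$| -L${MY_DAOS_INSTALL_PATH}/lib64 -ldaos -ldfs -luuid|" Makefile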
Run the prepare.sh script
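For example, from the top of the io500 checkout:
cd "${MY_IO500_PATH}"
./prepare.sh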
Run IO-500
Set up the config file
A sample config-full.ini file for reference: https://github.com/mchaarawi/io500/blob/main/config-full-isc22.ini
If you want to download this:
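For example, fetching the raw file (URL derived from the reference above; adjust the branch or path if it has moved):
wget https://raw.githubusercontent.com/mchaarawi/io500/main/config-full-isc22.ini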
You need to change the result dir in the config file to point to a directory where the results will be stored. This directory must be accessible from rank 0 of the io-500 application, so it can be either:
A shared filesystem (e.g., NFS, dfuse, or Lustre) accessible from the first node in the hostfile where rank 0 is running.
A local file system (/tmp/results) on the first node in the hostfile where rank 0 is running.
After the run is complete, the result files are all stored under this directory.
For a first run, set a short stonewall time (e.g., 5 seconds) just to verify that everything runs fine.
The nprocs setting under the [find] section should be less than or equal to the number of processes you run the entire workflow with (in io500.sh).
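For reference, a trial-run fragment of the ini file might look like the following (section and key names follow the io500 sample configs; verify them against config-full-isc22.ini):
[global]
# the result dir must be reachable from rank 0
resultdir = /path/to/results

[debug]
# short stonewall for a trial run; the official rules require at least 300 seconds
stonewall-time = 5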
Create a DAOS pool and a container of type POSIX
For documentation on creating pools, see Pool Operations - DAOS v2.6.
For documentation on creating containers, see Container Management - DAOS v2.6.
For example:
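(The labels and pool size below are placeholders; size the pool for your system.)
dmg pool create --size=500G io500_pool
daos container create io500_pool io500_cont --type=POSIX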
Set the pool and cont environment variables
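For example, matching the labels used above:
export DAOS_POOL=io500_pool
export DAOS_CONT=io500_cont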
Note that when using Intel MPI, some extra environment variables are required as detailed on:
https://docs.daos.io/v2.0/user/mpi-io/?h=intel+mpi#intel-mpi
Substitute variables in the config file
This will replace $DAOS_POOL, $DAOS_CONT with their actual values.
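One way to do this, assuming the config was saved as config-full-isc22.ini and the variables above are exported (note that envsubst replaces every exported variable reference in the file):
envsubst < config-full-isc22.ini > temp.ini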
Run the io500 in one of two ways:
Run the binary directly with or without the extended mode:
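An illustrative launch (process count, hostfile, and ini file name are examples; the second command adds the extended mode flag):
# standard run
mpirun -np 32 --hostfile ~/hostfile "${MY_IO500_PATH}/io500" temp.ini
# extended run
mpirun -np 32 --hostfile ~/hostfile "${MY_IO500_PATH}/io500" temp.ini --mode=extended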
The extended mode is not required for an official submission and will extend your runtime significantly. After the run completes, you will need to tar up the result dir for that run.
Note that some versions of OpenMPI require the environment variables to be set on the mpirun command line. In that case, add the environment variables mentioned above to the mpirun command line in the following format:
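For example (the variable names here are placeholders for whichever variables your setup needs):
mpirun -x ENV_VAR_1=value1 -x ENV_VAR_2=value2 -np 32 --hostfile ~/hostfile "${MY_IO500_PATH}/io500" temp.ini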
Run the io500.sh script:
This requires mounting dfuse on the launch node only (not all the compute nodes):
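For example (the mount point is arbitrary; any empty directory on the launch node works):
mkdir -p /tmp/io500_dfuse
dfuse --pool="$DAOS_POOL" --container="$DAOS_CONT" --mountpoint=/tmp/io500_dfuse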
Then edit the io500.sh launch script to use your mpirun command, and change the local workdir to add the dfuse prefix.
As noted above, some versions of OpenMPI require the environment variables to be added to the mpirun command line inside the script, using the same format shown earlier.
Then run the io500.sh script which will tar the results for you at the end and place them in the result directory you specified in the ini file:
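For example, from the io500 directory, passing the same substituted ini file:
./io500.sh temp.ini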
Lastly, unmount dfuse on the launch node:
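For example, matching the mount point used above:
fusermount3 -u /tmp/io500_dfuse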
Results
The tarball with the results generated at the end (whether you ran the binary directly or used the script) can be submitted to the IO-500 committee for consideration.