Running the IO-500 with the POSIX API, while getting performance similar to running with the -a DFS API, requires a few extra or different steps:

  1. Build the IO-500 the same way as indicated in the parent page here.

    • Note that updating the prepare script and makefile is not strictly required, but it is preferred so that the same io500 binary can be run with either API (POSIX or DFS) to compare performance.

  2. Create the pool the same way, but change the default oclass on the container:

dmg pool create -z=100% --label io500_pool
daos container create --type POSIX --pool io500_pool --label=io500_cont --file-oclass=S1 --dir-oclass=SX
  • When creating the container with a redundancy factor other than RF:0, change the file-oclass to RPnG1 and the dir-oclass to RPnGX, where n depends on the redundancy factor you need (RF:1 → n=2; RF:2 → n=3).
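As a hedged sketch of the RF:1 case (n=2), the create commands might look like the following; the exact replicated object-class names (written here as RP_2G1/RP_2GX) and the --properties flag should be verified against the daos CLI help on your system:

```shell
# Hypothetical RF:1 variant of the commands above; class names are assumptions
dmg pool create -z=100% --label io500_pool
daos container create --type POSIX --pool io500_pool --label=io500_cont \
    --file-oclass=RP_2G1 --dir-oclass=RP_2GX --properties=rf:1
```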

  3. Mount the container with dfuse on all the client nodes; you can use clush, pdsh, etc. In this example we mount the container at /tmp/dfuse (it does not have to be that location):

clush --hostfile ~/path/to/cli_hosts "mkdir -p /tmp/dfuse; dfuse --pool=io500_pool --container=io500_cont -m /tmp/dfuse/ --disable-caching"
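Before starting the run, it can help to sanity-check that dfuse actually mounted on every client. A minimal sketch, assuming the same hypothetical hostfile and mount point as above (dfuse mounts appear with filesystem type fuse.daos):

```shell
# Nodes where the mount is missing will print nothing for that host
clush --hostfile ~/path/to/cli_hosts "mount -t fuse.daos | grep /tmp/dfuse"
```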
  4. Update the io500.sh script:

    1. Change the MPI run command to add LD_PRELOAD for the interception library, plus the DAOS environment variables needed by the pfind app. Make sure to update the pool, container, and dfuse mount point for DAOS_PREFIX, as well as the path to the pil4dfs interception library. (Note: the options to the mpirun command below are specific to mpich; other MPI implementations pass environment variables differently, e.g. -x for Open MPI.)

    2. io500_mpirun="mpirun -genv LD_PRELOAD=/scratchbox/daos/mschaara/install/daos/lib64/libpil4dfs.so -genv DAOS_POOL=io500_pool -genv DAOS_CONT=io500_cont -genv DAOS_PREFIX=/tmp/dfuse"
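For reference, an Open MPI equivalent of the same line would pass the variables with -x instead of -genv; this is a sketch based on the mpich line above, not a tested configuration:

```shell
io500_mpirun="mpirun -x LD_PRELOAD=/scratchbox/daos/mschaara/install/daos/lib64/libpil4dfs.so -x DAOS_POOL=io500_pool -x DAOS_CONT=io500_cont -x DAOS_PREFIX=/tmp/dfuse"
```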
    3. Pre-create the ior-easy and ior-hard directories in the setup() function so their oclass can be set to something different from the container default (they need to be widely striped for better bandwidth). Change the oclass below to an EC oclass with GX if you are using rf > 0:

    4. function setup() {
           local workdir="$1"
           local resultdir="$2"
           mkdir -p $workdir $resultdir

           mkdir $workdir/ior-easy $workdir/ior-hard
           mkdir $workdir/mdtest-easy $workdir/mdtest-hard
           daos fs set-attr --path=$workdir/ior-easy --oclass=SX
           daos fs set-attr --path=$workdir/ior-hard --oclass=SX
       }
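For rf > 0, the same set-attr calls would use a group-wide (GX) EC class instead of SX. A hypothetical RF:1 example is below; the specific cell/parity counts (16+1 here) are an assumption, so check which EC classes are available on your system:

```shell
# Hedged sketch for rf:1 — EC class name is an assumption, verify with your DAOS version
daos fs set-attr --path=$workdir/ior-easy --oclass=EC_16P1GX
daos fs set-attr --path=$workdir/ior-hard --oclass=EC_16P1GX
```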
  5. Use an io500 ini file with the POSIX API. An example is provided below.

    1. Update the nproc for find to the same number of procs you used in the io500.sh script.

    2. Update resultdir to where you want to store the result tarball.

    3. For ior-easy you can always change the file-per-proc and transfer size settings to values that better suit your configuration.

[global]
datadir = /tmp/dfuse/datafiles
timestamp-datadir = TRUE
resultdir = /path/to/results
timestamp-resultdir = TRUE
api = POSIX
drop-caches = FALSE
drop-caches-cmd = sudo -n bash -c "echo 3 > /proc/sys/vm/drop_caches"
io-buffers-on-gpu = FALSE
verbosity = 1
scc = FALSE

[debug]
stonewall-time = 300

[ior-easy]
API = POSIX
transferSize = 1m
blockSize = 99200000m
filePerProc = FALSE
uniqueDir = FALSE
run = TRUE
verbosity =

[mdtest-easy]
API = POSIX
n = 10000000
run = TRUE

[timestamp]

[ior-hard]
API = POSIX
segmentCount = 10000000
run = TRUE
verbosity =

[mdtest-hard]
API = POSIX
n = 10000000
files-per-dir =
run = TRUE

[find]
nproc = 64
run = TRUE
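With the ini file in place and the container mounted, the benchmark is launched through the io500.sh wrapper, which takes the ini file as its argument; the file name here is a hypothetical example:

```shell
# Launch the IO-500 run with the POSIX ini above (file name is an assumption)
./io500.sh config-posix.ini
```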