MACSio

Get MACSio:

git clone https://github.com/LLNL/MACSio.git

The I/O path exercised here is: MACSio → HDF5 → MPI-IO → DAOS

Build:

https://github.com/LLNL/MACSio/blob/master/INSTALLING.md

After configuring and building MPICH (see directions here):

  1. Build HDF5 (see here)
  2. Download and install json-cwx (json-c with extensions)
  3. cd MACSio; mkdir build; cd build
  4. cmake -DCMAKE_INSTALL_PREFIX=/home/mschaara/install/MACSio/ \
       -DWITH_JSON-CWX_PREFIX=/home/mschaara/install/json-cwx/ \
       -DENABLE_HDF5_PLUGIN=ON \
       -DWITH_HDF5_PREFIX=/home/mschaara/install/hdf5/ \
       -DENABLE_SILO_PLUGIN=OFF ..

  5. make; make install

Note that we disable the Silo plugin above. If you have Silo built against HDF5, you can enable it instead, but since MACSio has a native HDF5 driver there is no need to go through Silo.

Run:

Launch server(s)

Following the directions from this page for client-side execution:

  1. Create a pool with dmg and export the related environment variables.
  2. Run macsio in SIF mode. MIF mode uses the HDF5 POSIX driver, so it bypasses MPI-IO and therefore the MPI-IO DAOS driver.

  3. mpirun -np 4 /home/mschaara/install/MACSio/macsio --interface hdf5 --parallel_file_mode SIF 1 --filebase daos:

  4. The default per-proc request size is 80,000 bytes (10K doubles). To use a different request size, use --part_size:
    --part_size 10M
    'M' means either decimal megabytes (MB) or binary mebibytes (MiB), depending on the setting of --units_prefix_system. The default is binary.
  5. The default number of parts per proc is 2, which is common for applications that support 'domain overload' workflows. Change it with the --avg_num_parts argument:
    --avg_num_parts 2.5
    means that 50% of procs have 2 parts and 50% of procs have 3 parts.
  6. The default number of dumps is 10; change it with the --num_dumps argument:
    --num_dumps 2
  7. For more options: ./macsio --help
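The sizing arithmetic behind the options above can be sketched in shell. This is an illustration only, not MACSio code; the variable names are ours, and the 4-rank figure matches the mpirun example above:

```shell
#!/bin/sh
# Default part size: 10K doubles at 8 bytes each = 80,000 bytes.
part_size_bytes=$((10000 * 8))
echo "default part size: $part_size_bytes bytes"     # 80000

# --part_size 10M under the default binary prefix system (mebibytes)...
binary_10M=$((10 * 1024 * 1024))                     # 10485760 bytes
# ...versus decimal prefixes (megabytes) via --units_prefix_system:
decimal_10M=$((10 * 1000 * 1000))                    # 10000000 bytes
echo "10M binary=$binary_10M decimal=$decimal_10M"

# --avg_num_parts 2.5 across 4 ranks: half the ranks get 2 parts,
# half get 3, for 10 parts in total.
total_parts=$(( (4 / 2) * 2 + (4 / 2) * 3 ))
echo "total parts: $total_parts"                     # 10

# Nominal data per dump = ranks * avg parts * part size
# (4 ranks * 2.5 parts * 80,000 bytes; scaled by 10 to stay in integers).
per_dump=$(( 4 * 25 * part_size_bytes / 10 ))
echo "bytes per dump: $per_dump"                     # 800000
```

With the defaults (2 parts per proc, 80,000-byte parts, 10 dumps), 4 ranks would therefore write on the order of 6.4 MB over a full run.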

Logs and timings will be written to macsio-log.log and macsio-timings.log in the current working directory by default.

Status:

Passing / Should Pass all tests