The code of the MPI-IO ROMIO ADIO driver lives in an MPICH fork on GitHub:
https://github.com/daos-stack/mpich
After cloning, initialize and update the submodules:
git submodule init
git submodule update
To build on Boro:
- export MPI_LIB=""
- Clone the MPICH repo from above and switch to the daos_adio branch; you might need to update your autotools (autoconf, automake, libtool) to the versions required by MPICH.
- ./autogen.sh
- mkdir build; cd build
- ../configure --prefix=dir --enable-fortran=all --enable-romio --enable-cxx --enable-g=all --enable-debuginfo --with-file-system=ufs+daos --with-daos=dir --with-cart=dir
- make -j8; make install
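Putting the steps above together, a complete build might look like the following sketch. The install prefixes (/opt/mpich-daos, /opt/daos, /opt/cart) are placeholders; substitute your own paths:

```shell
# Sketch of the full build sequence; adjust paths for your system.
export MPI_LIB=""
git clone https://github.com/daos-stack/mpich.git
cd mpich
git checkout daos_adio
git submodule init
git submodule update
./autogen.sh
mkdir build && cd build
../configure --prefix=/opt/mpich-daos \
    --enable-fortran=all --enable-romio --enable-cxx \
    --enable-g=all --enable-debuginfo \
    --with-file-system=ufs+daos \
    --with-daos=/opt/daos --with-cart=/opt/cart
make -j8 && make install
```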
Switch PATH and LD_LIBRARY_PATH to the MPICH installed above wherever you build client apps or libraries that use MPI. Note that the DAOS server still needs to be launched with OMPI's orterun; this is a unique situation where the server uses OMPI while the clients are launched with MPICH.
To run with the PSM2 provider, MPICH must be forced to use sockets instead of PSM2, since DAOS/CART already use PSM2 and two PSM2 instances cannot coexist on the same node. Add the following configure option to the list above:
--with-device=ch3:sock
Also, to disable debugging and get better performance from MPICH, remove these options:
--enable-g=all --enable-debuginfo
and add:
--enable-fast=O3,ndebug --disable-error-checking --without-timing --without-mpit-pvars
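Combining the PSM2 workaround and the performance options above, an optimized configure invocation might look like the following sketch (the install prefixes are placeholders for your own paths):

```shell
# Optimized, sockets-based configure for running alongside a PSM2-backed DAOS.
../configure --prefix=/opt/mpich-daos \
    --with-device=ch3:sock \
    --enable-fortran=all --enable-romio --enable-cxx \
    --enable-fast=O3,ndebug --disable-error-checking \
    --without-timing --without-mpit-pvars \
    --with-file-system=ufs+daos \
    --with-daos=/opt/daos --with-cart=/opt/cart
```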
Build any client (HDF5, ior, MPI test suites) normally with the mpicc and MPICH library installed above (see child pages).
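Before compiling a client, it can be worth confirming that the intended MPICH is being picked up. Assuming the MPICH bin directory is already first in PATH, the wrapper can be inspected with mpicc -show (the myapp.c file name is a placeholder):

```shell
which mpicc              # should resolve to /path/to/mpich/install/bin/mpicc
mpicc -show              # prints the underlying compiler command and flags
mpicc -o myapp myapp.c   # build a client app against this MPICH
```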
To run an example:
- In one shell, launch the DAOS server(s) with orterun (PATH and LD_LIBRARY_PATH should point to the OMPI installation bundled with the DAOS installation):
orterun --enable-recovery -np 1 --hostfile ~/my_hosts --report-uri ~/uri.txt /path/to/daos/bin/daos_server -c 8 -a /home/mschaara/
(The -a path will create a connect file that the clients use to connect to the DAOS server in singleton mode, so the path must be accessible to both the client and the server.) On the client side, set the following environment variables:
export PATH=/path/to/mpich/install/bin:$PATH
export LD_LIBRARY_PATH=/path/to/mpich/install/lib:$LD_LIBRARY_PATH
export MPI_LIB=""
export CRT_ATTACH_INFO_PATH=/path/ (whatever was passed to daos_server -a)
export DAOS_SINGLETON_CLI=1
- Create a DAOS pool with dmg:
mpirun -np 1 /path/to/daos/bin/dmg create
This returns a DAOS pool UUID and the rank (or list of ranks) of the service leaders. Export both:
export DAOS_POOL=pool_uuid
export DAOS_SVCL=svc_leader(s)
- This is just a temporary mechanism until there is a better way to pass pool connect info to MPI-IO and other middleware over DAOS.
- run the client application or test (see child pages for examples).
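As a concrete sketch, assuming ior was built against this MPICH and the environment variables above are exported, a run over the DAOS ADIO driver might look like the following. The daos: file-name prefix selects the DAOS driver in ROMIO; the process count and file name are illustrative:

```shell
# Illustrative ior run through MPI-IO over DAOS; paths are placeholders.
mpirun -np 4 /path/to/ior/bin/ior -a MPIIO -o daos:/testFile
```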
Limitations to the current implementation include:
- Reading holes does not return 0; the buffer is left untouched. (It is not clear how to fix this; it may need to wait for the DAOS implementation of iov_map_t to distinguish holes from written bytes in the array extent.)
- No support for MPI file atomicity, preallocation, or shared file pointers; it was agreed that these features are acceptable to leave unsupported.