...
```
# Run mpirun ior
$ /usr/lib64/mpich/bin/mpirun -host <host1> -np 30 ior -a POSIX -b 26214400 -v -w -k -i 1 -o /tmp/daos_test1/testfile -t 25M

IOR-3.4.0+dev: MPI Coordinated Test of Parallel I/O
Began               : Fri Apr 16 18:07:56 2021
Command line        : ior -a POSIX -b 26214400 -v -w -k -i 1 -o /tmp/daos_test1/testfile -t 25M
Machine             : Linux boro-8.boro.hpdd.intel.com
Start time skew across all tasks: 0.00 sec
TestID              : 0
StartTime           : Fri Apr 16 18:07:56 2021
Path                : /tmp/daos_test1/testfile
FS                  : 3.8 GiB   Used FS: 1.1%   Inodes: 0.2 Mi   Used Inodes: 0.1%
Participating tasks : 30

Options:
api                 : POSIX
apiVersion          : test
filename            : /tmp/daos_test1/testfile
access              : single-shared-file
type                : independent
segments            : 1
ordering in a file  : sequential
ordering inter file : no tasks offsets
nodes               : 1
tasks               : 30
clients per node    : 30
repetitions         : 1
xfersize            : 25 MiB
blocksize           : 25 MiB
aggregate filesize  : 750 MiB
verbose             : 1

Results:

access    bw(MiB/s)  IOPS   Latency(s)  block(KiB)  xfer(KiB)  open(s)   wr/rd(s)  close(s)  total(s)  iter
------    ---------  ----   ----------  ----------  ---------  --------  --------  --------  --------  ----
Commencing write performance test: Fri Apr 16 18:07:56 2021
write     1499.68    59.99  0.480781    25600       25600      0.300237  0.500064  0.483573  0.500107  0

Max Write: 1499.68 MiB/sec (1572.53 MB/sec)

Summary of all tests:
Operation  Max(MiB)  Min(MiB)  Mean(MiB)  StdDev  Max(OPs)  Min(OPs)  Mean(OPs)  StdDev  Mean(s)  Stonewall(s)  Stonewall(MiB)  Test#  #Tasks  tPN  reps  fPP  reord  reordoff  reordrand  seed  segcnt  blksiz    xsize     aggs(MiB)  API    RefNum
write      1499.68   1499.68   1499.68    0.00    59.99     59.99     59.99      0.00    0.50011  NA            NA              0      30      30   1     0    0      1         0          0     1       26214400  26214400  750.0      POSIX  0
Finished            : Fri Apr 16 18:07:57 2021

# Run mpirun mdtest
$ /usr/lib64/mpich/bin/mpirun -host <host1> -np 30 mdtest -a DFS -z 0 -F -C -i 1 -n 1667 -e 4096 -d / -w 4096 --dfs.chunk_size 1048576 --dfs.cont <container.uuid> --dfs.destroy --dfs.dir_oclass RP_3G1 --dfs.group daos_server --dfs.oclass RP_3G1 --dfs.pool <pool_uuid>

-- started at 04/16/2021 22:01:55 --
mdtest-3.4.0+dev was launched with 30 total task(s) on 1 node(s)
Command line used: mdtest '-a' 'DFS' '-z' '0' '-F' '-C' '-i' '1' '-n' '1667' '-e' '4096' '-d' '/' '-w' '4096' '--dfs.chunk_size' '1048576' '--dfs.cont' '3e661024-2f1f-4d7a-9cd4-1b05601e0789' '--dfs.destroy' '--dfs.dir_oclass' 'SX' '--dfs.group' 'daos_server' '--dfs.oclass' 'SX' '--dfs.pool' 'd546a7f5-586c-4d8f-aecd-372878df7b97'
WARNING: unable to use realpath() on file system.
Path:
FS: 0.0 GiB   Used FS: -nan%   Inodes: 0.0 Mi   Used Inodes: -nan%
Nodemap: 111111111111111111111111111111
30 tasks, 50010 files

SUMMARY rate: (of 1 iterations)
   Operation            Max          Min         Mean     Std Dev
   ---------            ---          ---         ----     -------
   File creation  :  14206.584   14206.334   14206.511      0.072
   File stat      :      0.000       0.000       0.000      0.000
   File read      :      0.000       0.000       0.000      0.000
   File removal   :      0.000       0.000       0.000      0.000
   Tree creation  :   1869.791    1869.791    1869.791      0.000
   Tree removal   :      0.000       0.000       0.000      0.000
-- finished at 04/16/2021 22:01:58 --

$ /usr/lib64/mpich/bin/mpirun -host <host1> -np 50 mdtest -a DFS -z 0 -F -C -i 1 -n 1667 -e 4096 -d / -w 4096 --dfs.chunk_size 1048576 --dfs.cont 3e661024-2f1f-4d7a-9cd4-1b05601e0789 --dfs.destroy --dfs.dir_oclass SX --dfs.group daos_server --dfs.oclass SX --dfs.pool d546a7f5-586c-4d8f-aecd-372878df7b97

-- started at 04/16/2021 22:02:21 --
mdtest-3.4.0+dev was launched with 50 total task(s) on 1 node(s)
Command line used: mdtest '-a' 'DFS' '-z' '0' '-F' '-C' '-i' '1' '-n' '1667' '-e' '4096' '-d' '/' '-w' '4096' '--dfs.chunk_size' '1048576' '--dfs.cont' '3e661024-2f1f-4d7a-9cd4-1b05601e0789' '--dfs.destroy' '--dfs.dir_oclass' 'SX' '--dfs.group' 'daos_server' '--dfs.oclass' 'SX' '--dfs.pool' 'd546a7f5-586c-4d8f-aecd-372878df7b97'
WARNING: unable to use realpath() on file system.
Path:
FS: 0.0 GiB   Used FS: -nan%   Inodes: 0.0 Mi   Used Inodes: -nan%
Nodemap: 11111111111111111111111111111111111111111111111111
50 tasks, 83350 files

SUMMARY rate: (of 1 iterations)
   Operation            Max          Min         Mean     Std Dev
   ---------            ---          ---         ----     -------
   File creation  :  13342.303   13342.093   13342.228      0.059
   File stat      :      0.000       0.000       0.000      0.000
   File read      :      0.000       0.000       0.000      0.000
   File removal   :      0.000       0.000       0.000      0.000
   Tree creation  :   1782.938    1782.938    1782.938      0.000
   Tree removal   :      0.000       0.000       0.000      0.000
-- finished at 04/16/2021 22:02:27 --
```
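For repeated runs it can be convenient to wrap the two launches above in a small script. This is a minimal sketch only, assuming MPICH under /usr/lib64/mpich and the $DAOS_POOL / $DAOS_CONT variables from the environment setup below; the host name is a placeholder, not a value from this test.

```bash
#!/bin/bash
# Sketch only: wraps the ior and mdtest launches shown above.
HOST="<host1>"                      # placeholder client host; replace with your own
MPIRUN=/usr/lib64/mpich/bin/mpirun  # MPICH launcher used in this test
MNT=/tmp/daos_test1                 # dfuse mount point

# POSIX ior over the dfuse mount, same block/transfer sizes as above
$MPIRUN -host $HOST -np 30 ior -a POSIX -b 26214400 -t 25M -v -w -k -i 1 \
    -o $MNT/testfile

# mdtest through the DFS backend, sweeping the task count as in the two runs above
for NP in 30 50; do
    $MPIRUN -host $HOST -np $NP mdtest -a DFS -z 0 -F -C -i 1 -n 1667 -e 4096 \
        -d / -w 4096 --dfs.chunk_size 1048576 --dfs.pool "$DAOS_POOL" \
        --dfs.cont "$DAOS_CONT" --dfs.oclass SX --dfs.dir_oclass SX \
        --dfs.group daos_server --dfs.destroy
done
```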
Run with 4 DAOS server hosts, rebuild with dfuse_io and mpirun
Environment variable setup
...
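The exact exports used in this run are not shown above; the following is a minimal sketch of this step, assuming the pool and container UUIDs created in the dfuse section below (replace them with your own).

```bash
# Hypothetical exports backing the $DAOS_POOL / $DAOS_CONT references below
export DAOS_POOL=733bee7b-c2af-499e-99dd-313b1ef092a9   # from `dmg pool list`
export DAOS_CONT=2649aa0f-3ad7-4943-abf5-4343205a637b   # from `daos cont create`
```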
Run dfuse
```
# Bring up a 4-host DAOS server with an appropriate daos_server.yml and
# access point; see the DAOS Set-Up documentation.
# After the DAOS server, admin, and client RPMs are installed and the servers are started:
$ dmg storage format
Format Summary:
  Hosts             SCM Devices NVMe Devices
  -----             ----------- ------------
  boro-[8,35,52-53] 1           0

$ dmg pool list
Pool UUID                            Svc Replicas
---------                            ------------
733bee7b-c2af-499e-99dd-313b1ef092a9 [1-3]

$ daos cont create --pool=$DAOS_POOL --type=POSIX --oclass=RP_3G1 --properties=rf:2
Successfully created container 2649aa0f-3ad7-4943-abf5-4343205a637b

$ daos pool list-cont --pool=$DAOS_POOL
2649aa0f-3ad7-4943-abf5-4343205a637b

$ dmg pool query --pool=$DAOS_POOL
Pool 733bee7b-c2af-499e-99dd-313b1ef092a9, ntarget=32, disabled=0, leader=2, version=1
Pool space info:
- Target(VOS) count:32
- SCM:
  Total size: 5.0 GB
  Free: 5.0 GB, min:156 MB, max:156 MB, mean:156 MB
- NVMe:
  Total size: 0 B
  Free: 0 B, min:0 B, max:0 B, mean:0 B
Rebuild idle, 0 objs, 0 recs

$ df -h -t fuse.daos
df: no file systems processed

$ mkdir /tmp/daos_test1
$ dfuse --mountpoint=/tmp/daos_test1 --pool=$DAOS_POOL --cont=$DAOS_CONT
$ df -h -t fuse.daos
Filesystem      Size  Used Avail Use% Mounted on
dfuse            19G  1.1M   19G   1% /tmp/daos_test1

$ fio --name=random-write --ioengine=pvsync --rw=randwrite --bs=4k --size=128M --nrfiles=4 --directory=/tmp/daos_test1 --numjobs=8 --iodepth=16 --runtime=60 --time_based --direct=1 --buffered=0 --randrepeat=0 --norandommap --refill_buffers --group_reporting
random-write: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=pvsync, iodepth=16
...
fio-3.7
Starting 8 processes
random-write: Laying out IO files (4 files / total 128MiB)
random-write: Laying out IO files (4 files / total 128MiB)
random-write: Laying out IO files (4 files / total 128MiB)
random-write: Laying out IO files (4 files / total 128MiB)
random-write: Laying out IO files (4 files / total 128MiB)
random-write: Laying out IO files (4 files / total 128MiB)
random-write: Laying out IO files (4 files / total 128MiB)
random-write: Laying out IO files (4 files / total 128MiB)
Jobs: 8 (f=32): [w(8)][100.0%][r=0KiB/s,w=96.1MiB/s][r=0,w=24.6k IOPS][eta 00m:00s]
random-write: (groupid=0, jobs=8): err= 0: pid=27879: Sat Apr 17 01:12:57 2021
  write: IOPS=24.4k, BW=95.3MiB/s (99.9MB/s)(5716MiB/60001msec)
    clat (usec): min=220, max=6687, avg=326.19, stdev=55.29
     lat (usec): min=220, max=6687, avg=326.28, stdev=55.29
    clat percentiles (usec):
     |  1.00th=[  260],  5.00th=[  273], 10.00th=[  285], 20.00th=[  293],
     | 30.00th=[  306], 40.00th=[  314], 50.00th=[  322], 60.00th=[  330],
     | 70.00th=[  338], 80.00th=[  355], 90.00th=[  375], 95.00th=[  396],
     | 99.00th=[  445], 99.50th=[  465], 99.90th=[  523], 99.95th=[  562],
     | 99.99th=[ 1827]
   bw (  KiB/s): min=10976, max=12496, per=12.50%, avg=12191.82, stdev=157.87, samples=952
   iops        : min= 2744, max= 3124, avg=3047.92, stdev=39.47, samples=952
  lat (usec)   : 250=0.23%, 500=99.61%, 750=0.15%, 1000=0.01%
  lat (msec)   : 2=0.01%, 4=0.01%, 10=0.01%
  cpu          : usr=0.81%, sys=1.69%, ctx=1463535, majf=0, minf=308
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,1463226,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=16

Run status group 0 (all jobs):
  WRITE: bw=95.3MiB/s (99.9MB/s), 95.3MiB/s-95.3MiB/s (99.9MB/s-99.9MB/s), io=5716MiB (5993MB), run=60001-60001msec
```
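When the test is finished, the dfuse mount can be released with standard FUSE tooling. A short sketch, not output captured from this run:

```bash
# Unmount the dfuse mount point and confirm no fuse.daos file systems remain
fusermount3 -u /tmp/daos_test1
df -h -t fuse.daos
```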
...